� ���gU���R�dZddlZddlZddlZddlZddlZddlZddlZddlZddl Z ddl Z ddl Z ddl Z ddl Z ddlZddlZddlZddlZddlmZddlmZmZmZddlmZddlmZddlmZmZddlmZdd lm Z m!Z!dd l"m#Z#dd l$m%Z%dd l&m'Z'm(Z(m)Z)m*Z*m+Z+m,Z,m-Z-ddl.Z.ddl/Z0ddl1Z2ddl3Z4ddl5m6Z7dd l8m9Z9ddl:m;Z;m<Z<m=Z=m>Z>m?Z?m@Z@ddlAmBZBddlCmDZDddlEmFZFddlGmHZHddlImJZJddlKmLZLmMZMddlNmOZOddlPmQZQddlRmSZSmTZTmUZUmVZVmZmWZWmXZXddlYmZZZm[Z[m\Z\m]Z]m^Z^m_Z_ddl`maZaddlbmcZcmdZdmeZemfZfmgZgmhZhmiZimjZjmkZkmlZlddlmmnZnmoZompZpmqZqddlrmsZsmtZtddlumvZvmwZwddlxmyZydd lzm{Z{dd!l|m}Z}m~Z~mZm�Z�dd"l�m�Z�m�Z�m�Z�m�Z�m�Z�m�Z�m�Z�m�Z�m�Z�m�Z�m�Z�dd#l�m�Z�dd$l�m�Z�dd%l�m�Z�dd&l�m�Z�dd'l�m�Z�dd(l�m�Z�m�Z�m�Z�m�Z�m�Z�m�Z�dd)l�m�Z�dd*l�m�Z�m�Z�m�Z�dd+l�m�Z�m�Z�e'rddl�Z�ddl�Z�ddl�Z�ddl�Z�dd,l�m�Z�dd-l�m�Z�e�j�e���Z�d.Z�Gd/�d0��Z�Gd1�d2��Z�Gd3�d4e���Z�d5�Z�d6e�d7eUfd8�Z�d9e�fd:�Z�d;e�e�fd<�Z�d=�Z�Gd>�d?e���Z�Gd@�dAe�e{e���Z� dZdBe�e�dCe+evdDe+e}dEe�fdF�Z� d[dHe�dAdIe+e�e�dJe+e�dCe+evdDe+e}dKe�dLd9dAfdM�Z�dNe�dOe�dPe�d9e�fdQ�Z� d\dRe*dSe�dTe�dUe�dVe+e,e�e�e�fdWe+e�f dX�Z� d\dRe*dSe�dTe�dUe�dVe+e,e�e�e�fdWe+e�f dY�Z�dS)]z'Simple Dataset wrapping an Arrow Table.�N)�Counter)�Iterable�Iterator�Mapping)�Sequence)�deepcopy)�partial�wraps)�BytesIO)�ceil�floor��Path)�sample)� TYPE_CHECKING�Any�BinaryIO�Callable�Optional�Union�overload)� url_to_fs)� CommitInfo�CommitOperationAdd�CommitOperationDelete� DatasetCard�DatasetCardData�HfApi)�RepoFile)�Pool)� thread_map�)�config)� ArrowReader)� ArrowWriter�OptimizedTypedSequence)�sanitize_patterns)�xgetsize)�Audio� ClassLabel�Features�Imager�Value�Video)� FeatureType�_align_features�!_check_if_features_can_be_aligned�generate_from_arrow_type�pandas_types_mapper�require_decoding)�is_remote_filesystem) �fingerprint_transform�format_kwargs_for_fingerprint� format_transform_for_fingerprint�generate_fingerprint�generate_random_fingerprint�#get_temporary_cache_files_directory�is_caching_enabled�,maybe_register_dataset_for_temp_dir_deletion�update_fingerprint�validate_fingerprint)� format_table�get_format_type_from_alias� get_formatter� query_table)�LazyDict�_is_range_contiguous)� DatasetInfo�DatasetInfosDict)� _split_re)�IndexableMixin)� NamedSplit�Split� SplitDict� SplitInfo) � InMemoryTable�MemoryMappedTable�Table�,_memory_mapped_record_batch_reader_from_file�cast_array_to_feature� concat_tables�embed_table_storage�list_table_cache_files� table_cast� table_iter� table_visitor)�logging)�tqdm)�estimate_dataset_size)�is_small_dataset)�MetadataConfigs)�Literal�asdict�convert_file_size_to_int�glob_pattern_to_regex�iflatmap_unordered�string_to_dict)�)stratified_shuffle_split_generate_indices)� dataset_to_tf�minimal_tf_collate_fn�multiprocess_dataset_to_tf)�ListLike�PathLike�� DatasetDict)�IterableDatasetzLdata/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.parquetc��eZdZdZdedeefd�Zed���Z ed���Z ede fd���Z ede fd ���Z ede fd ���Zedeefd ���Zede fd ���Zedeefd ���Zedeefd���Zedeefd���Zedee fd���Zedee fd���Zedeefd���Zed���Zed���ZdS)�DatasetInfoMixinzqThis base class exposes some attributes of DatasetInfo at the base level of the Dataset for easy access. �info�splitc�"�||_||_dS�N)�_info�_split)�selfrorps �f/home/asafur/pinokio/api/open-webui.git/app/env/lib/python3.11/site-packages/datasets/arrow_dataset.py�__init__zDatasetInfoMixin.__init__�s���� ��� � � �c��|jS)zL[`~datasets.DatasetInfo`] object containing all the metadata in the dataset.)rs�rus rvrozDatasetInfoMixin.info�s ���z�rxc��|jS)zG[`~datasets.NamedSplit`] object corresponding to a named dataset split.)rtrzs rvrpzDatasetInfoMixin.split�s ���{�rx�returnc��|jjSrr)rs� builder_namerzs rvr~zDatasetInfoMixin.builder_name�� ���z�&�&rxc��|jjSrr)rs�citationrzs rvr�zDatasetInfoMixin.citation�� ���z�"�"rxc��|jjSrr)rs� config_namerzs rvr�zDatasetInfoMixin.config_name�� ���z�%�%rxc��|jjSrr)rs� dataset_sizerzs rvr�zDatasetInfoMixin.dataset_size�rrxc��|jjSrr)rs� descriptionrzs rvr�zDatasetInfoMixin.description�r�rxc��|jjSrr)rs�download_checksumsrzs rvr�z#DatasetInfoMixin.download_checksums�s ���z�,�,rxc��|jjSrr)rs� download_sizerzs rvr�zDatasetInfoMixin.download_size�� ���z�'�'rxc�Z�|jj�|jj���ndSrr)rs�features�copyrzs rvr�zDatasetInfoMixin.features�s)��-1�Z�-@�-L�t�z�"�'�'�)�)�)�RV�Vrxc��|jjSrr)rs�homepagerzs rvr�zDatasetInfoMixin.homepage�r�rxc��|jjSrr)rs�licenserzs rvr�zDatasetInfoMixin.license�� ���z�!�!rxc��|jjSrr)rs� size_in_bytesrzs rvr�zDatasetInfoMixin.size_in_bytes�r�rxc��|jjSrr)rs�supervised_keysrzs rvr�z DatasetInfoMixin.supervised_keys�s ���z�)�)rxc��|jjSrr)rs�versionrzs rvr�zDatasetInfoMixin.version�r�rxN)�__name__� __module__� __qualname__�__doc__rFrrJrw�propertyrorp�strr~r�r��intr�r��dictr�r�r+r�r�r�r�r�r��rxrvrnrn�s����������[���*�1E���������X������X���'�c�'�'�'��X�'��#�#�#�#�#��X�#��&�S�&�&�&��X�&��'�h�s�m�'�'�'��X�'��&�S�&�&�&��X�&��-�H�T�N�-�-�-��X�-��(�x��}�(�(�(��X�(��W�(�8�,�W�W�W��X�W��#�(�3�-�#�#�#��X�#��"��#��"�"�"��X�"��(�x��}�(�(�(��X�(��*�*��X�*��"�"��X�"�"�"rxrnc�H�eZdZe��Ze ddddededee e dee d e f d ���Z ddee dee e e e fdedeededeee efdee e e e fdede d e fd�ZdS)�TensorflowDatasetMixinN��dataset�Dataset� collate_fn�collate_fn_args�cols_to_retain� batch_size�num_test_batchesc�D�����tjrddl}ntd���t |��dkrt d���|�t t |��|��}d}��!tt�gd�z�����g}t|��D]�} ttt |����|��} || ��� �fd��� ��D����fd�t|��D���|�fi|���|� �����i} i} |d� ��D�]R��fd �|D��} g}| D]�}t|tj��r|� |���2t||j��r(|� |������o|� tj|������tj|djtj��s|djt.kr|j}tj}n�tj|djtj��r|j}tj}nH|djjd krtj}|j}nt=d |dj�d ����d �|D��}g}tt |d����D]|��fd�|D��}�dkr|� |���,t |��dkr(|� |������g|� d���}|� ||���| �<|| �<��T| | fS)aHPrivate method used by `to_tf_dataset()` to find the shapes and dtypes of samples from this dataset after being passed through the collate_fn. Tensorflow needs an exact signature for tf.numpy_function, so the only way to do this is to run test batches - the collator may add or rename columns, so we can't figure it out just by inspecting the dataset. Args: dataset (`Dataset`): Dataset to load samples from. collate_fn(`bool`): Shuffle the dataset order when loading. Recommended True for training, False for validation/evaluation. collate_fn(`Callable`): A function or callable object (such as a `DataCollator`) that will collate lists of samples into a batch. collate_fn_args (`Dict`): A `dict` of keyword arguments to be passed to the `collate_fn`. batch_size (`int`, optional): The size of batches loaded from the dataset. Used for shape inference. Can be None, which indicates that batch sizes can be variable. num_test_batches (`int`): The number of batches to load from the dataset for shape inference. Returns: `dict`: Dict mapping column names to tf.Tensorspec objects `dict`: Dict mapping column names to np.dtype objects rN�FCalled a Tensorflow-specific function but Tensorflow is not installed.z@Unable to get the output signature because the dataset is empty.r")� label_ids�label�labelsc�$��i|] \}}|�v� ||�� Sr�r�)�.0�key�valuer�s �rv� <dictcomp>z@TensorflowDatasetMixin._get_output_signature.<locals>.<dictcomp>s+���g�g�g�Z�S�%�QT�Xf�Qf�Qf�c�5�Qf�Qf�Qfrxc�R���g|]"��fd�����D����#S)c�(��i|]\}}||���Sr�r�)r�r�r��is �rvr�zKTensorflowDatasetMixin._get_output_signature.<locals>.<listcomp>.<dictcomp>s#���N�N�N�Z�S�%�3��a��N�N�Nrx��items)r�r�� test_batchs @�rv� <listcomp>z@TensorflowDatasetMixin._get_output_signature.<locals>.<listcomp>s<����o�o�o�ST�N�N�N�N�:�;K�;K�;M�;M�N�N�N�o�o�orxc� ��g|] }|��� Sr�r�)r��batch�columns �rvr�z@TensorflowDatasetMixin._get_output_signature.<locals>.<listcomp>s���B�B�B�E�%��-�B�B�Brx�UzUnrecognized array dtype z<. Nested types and image/audio types are not supported yet.c��g|] }|j�� Sr�)�shape)r��arrays rvr�z@TensorflowDatasetMixin._get_output_signature.<locals>.<listcomp>6s��9�9�9�e�e�k�9�9�9rxc� ��h|] }|��� Sr�r�)r�r��dims �rv� <setcomp>z?TensorflowDatasetMixin._get_output_signature.<locals>.<setcomp>9s���8�8�8���s��8�8�8rx)r��dtype)!r#� TF_AVAILABLE� tensorflow� ImportError�len� ValueError�min�list�set�rangerr��append�keys� isinstance�np�ndarray�Tensor�numpyr�� issubdtyper��integer�bool�int64�number�float32�kind�str_�string� RuntimeError�pop� TensorSpec)r�r�r�r�r�r��tf�test_batch_size� test_batches�_�indices�tf_columns_to_signatures�np_columns_to_dtypes� raw_arrays� np_arraysr��tf_dtype�np_dtype�shapes� static_shape�sizesr�r�r�s ` @@@rv�_get_output_signaturez,TensorflowDatasetMixin._get_output_signature�s�������< � � h� #� #� #� #� #��f�g�g� g� �w�<�<�1� � ��_�`�`� `� � !��S��\�\�:�6�6�J��� � %�!�#�n�7W�7W�7W�&W�"X�"X�Y�Y�N�� ��'�(�(� ,� ,�A��U�3�w�<�<�0�0�/�B�B�G� ��)�J��)�g�g�g�g�:�;K�;K�;M�;M�g�g�g� �o�o�o�o�X]�^m�Xn�Xn�o�o�o�J�#��J�B�B�/�B�B�J� � � � � +� +� +� +�#%� �!��"�1�o�*�*�,�,�& 4�& 4�F�B�B�B�B�\�B�B�B�J��I�#� 6� 6���e�R�Z�0�0�6��$�$�U�+�+�+�+���r�y�1�1�6��$�$�U�[�[�]�]�3�3�3�3��$�$�R�X�e�_�_�5�5�5�5��}�Y�q�\�/���<�<� � �!� �@R�VZ�@Z�@Z��8���8�����y��|�1�2�9�=�=� ��:���:����1��#�(�C�/�/��7���9���"�P� �!� �0B�P�P�P����:�9�y�9�9�9�F��L��S����^�^�,�,� .� .��8�8�8�8��8�8�8���!�8�8� �'�'� �3�3�3���u�:�:��?�?� �'�'�� � � � �4�4�4�4� �'�'��-�-�-�-�/1�}�}�<�W_�}�/`�/`� $�V� ,�+3� �� (� (�'�)=�=�=rxFTr�columns�shuffle�drop_remainder� label_cols�prefetch� num_workersc �������tjrddl} ntd���t �t ��rt ���dks(t �t ��r-t ���dkrtjdt��t | j � ��| j j ��rt�d��|�t}|�i}�r�st!d�����g�nt �t"��r�g�t t%�����t ���krt!d����rtt �t"��r�g�t t%�����t ���krt!d ���t t%��z����} nd} g��jd d vr��d ���n�����||| |r|nd| � ��\} }d| vr8d�vsd�vrd�vrd��D��dgz�d�vsd�vrd�vrd��D��dgz��D]}|| vrt!d|�d������D]}|| vrt!d|�d�����| dkrt-�| |||| |||�� � }n@| dkr+|�t/d���t1�| |||| |||| �� � }nt!d�����fd�}| �|�|��}|r$|�| jjj��}��fd�}�j�tAj!||����|S)a� Create a `tf.data.Dataset` from the underlying Dataset. This `tf.data.Dataset` will load and collate batches from the Dataset, and is suitable for passing to methods like `model.fit()` or `model.predict()`. The dataset will yield `dicts` for both inputs and labels unless the `dict` would contain only a single key, in which case a raw `tf.Tensor` is yielded instead. Args: batch_size (`int`, *optional*): Size of batches to load from the dataset. Defaults to `None`, which implies that the dataset won't be batched, but the returned dataset can be batched later with `tf_dataset.batch(batch_size)`. columns (`List[str]` or `str`, *optional*): Dataset column(s) to load in the `tf.data.Dataset`. Column names that are created by the `collate_fn` and that do not exist in the original dataset can be used. shuffle(`bool`, defaults to `False`): Shuffle the dataset order when loading. Recommended `True` for training, `False` for validation/evaluation. drop_remainder(`bool`, defaults to `False`): Drop the last incomplete batch when loading. Ensures that all batches yielded by the dataset will have the same length on the batch dimension. collate_fn(`Callable`, *optional*): A function or callable object (such as a `DataCollator`) that will collate lists of samples into a batch. collate_fn_args (`Dict`, *optional*): An optional `dict` of keyword arguments to be passed to the `collate_fn`. label_cols (`List[str]` or `str`, defaults to `None`): Dataset column(s) to load as labels. Note that many models compute loss internally rather than letting Keras do it, in which case passing the labels here is optional, as long as they're in the input `columns`. prefetch (`bool`, defaults to `True`): Whether to run the dataloader in a separate thread and maintain a small buffer of batches for training. Improves performance by allowing data to be loaded in the background while the model is training. num_workers (`int`, defaults to `0`): Number of workers to use for loading the dataset. num_test_batches (`int`, defaults to `20`): Number of batches to use to infer the output signature of the dataset. The higher this number, the more accurate the signature will be, but the longer it will take to create the dataset. Returns: `tf.data.Dataset` Example: ```py >>> ds_train = ds["train"].to_tf_dataset( ... columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'], ... shuffle=True, ... batch_size=16, ... collate_fn=data_collator, ... ) ``` rNr�r"aThe output of `to_tf_dataset` will change when a passing single element list for `labels` or `columns` in the next datasets version. To return a tuple structure rather than dict, pass a single string. Old behaviour: columns=['a'], labels=['labels'] -> (tf.Tensor, tf.Tensor) : columns='a', labels='labels' -> (tf.Tensor, tf.Tensor) New behaviour: columns=['a'],labels=['labels'] -> ({'a': tf.Tensor}, {'labels': tf.Tensor}) : columns='a', labels='labels' -> (tf.Tensor, tf.Tensor) a>Note that to_tf_dataset() loads the data with a generator rather than a full tf.data pipeline and is not compatible with remote TPU connections. If you encounter errors, please try using a TPU VM or, if your data can fit in memory, loading it into memory as a dict of Tensors instead of streaming with to_tf_dataset().z5Cannot specify label_cols without specifying columns!z'List of label_cols contains duplicates.z$List of columns contains duplicates.�type)�customr�r�)r�r�r�r�r�r�r�r�c��g|]}|dv�|�� S�)r�r�r��r��cols rvr�z8TensorflowDatasetMixin.to_tf_dataset.<locals>.<listcomp>�s#��W�W�W�3�S�@V�5V�5V�3�5V�5V�5Vrxc��g|]}|dv�|�� Srr�rs rvr�z8TensorflowDatasetMixin.to_tf_dataset.<locals>.<listcomp>�s#��]�]�]�c�3�F\�;\�;\�c�;\�;\�;\rxzColumn z not found in dataset!z Label column ) r�r�r�r��columns_to_np_types�output_signaturer�r�r�z�`batch_size` must be specified when using multiple workers, as unbatched multiprocessing is not supported yet. Please provide a `batch_size` if `num_workers` is greater than 0.) r�r�r�r�rrr�r�r�r�znum_workers must be >= 0c�����fd�|���D��}�fd�|���D��}t|��dkr't|�����d}t|��dkr't|�����d}t |t ��rt|��dkr|S||fS)Nc�$��i|] \}}|�v� ||�� Sr�r�)r�r��tensorr�s �rvr�z[TensorflowDatasetMixin.to_tf_dataset.<locals>.split_features_and_labels.<locals>.<dictcomp>�s%���]�]�]� ��V�c�U\�n�n��V�n�n�nrxc�$��i|] \}}|�v� ||�� Sr�r�)r�r�r r�s �rvr�z[TensorflowDatasetMixin.to_tf_dataset.<locals>.split_features_and_labels.<locals>.<dictcomp>�s*���^�^�^�k�c�6�C�S]�L]�L]�c�6�L]�L]�L]rxr"r)r�r�r��valuesr�r�)� input_batchr�r�r�r�s ��rv�split_features_and_labelszGTensorflowDatasetMixin.to_tf_dataset.<locals>.split_features_and_labels�s����]�]�]�]�{�7H�7H�7J�7J�]�]�]�H�^�^�^�^�[�5F�5F�5H�5H�^�^�^�F��8�}�}��!�!����� 1� 1�2�2�1�5���6�{�{�a����f�m�m�o�o�.�.�q�1���&�$�'�'� (�C��K�K�1�,<�,<�����'�'rxc�d�������j�|��dSrr)�__del__�_TF_DATASET_REFS�remove)�refr�rus ��rv�cleanup_callbackz>TensorflowDatasetMixin.to_tf_dataset.<locals>.cleanup_callback s0��� �O�O� � � � � !� (� (�� -� -� -� -� -rx)"r#r�r�r�r�r�r��warnings�warn� FutureWarning� distribute� get_strategy� TPUStrategy�logger�warningrfr�r�r��format� with_formatr�re�NotImplementedErrorrg�mapr��data� experimental�AUTOTUNEr�add�weakrefr)rur�r�r�r�r�r�r�r�r�r�r�r�rrr� tf_datasetr rr�s` ` ` @rv� to_tf_datasetz$TensorflowDatasetMixin.to_tf_datasetFs�������D � � h� #� #� #� #� #��f�g�g� g� �w�� %� %� �#�g�,�,�!�*;�*;� �z�4� (� (�+<�-0��_�_��-A�-A� �M�Y�� � � � �b�m�0�0�2�2�B�M�4M� N� N� � �N�N�E� � � � � �.�J� � "� �O� � V�g� V��T�U�U� U� � ��J�J� � �C� (� (� &�$��J� �s�:��� � �#�j�/�/� 1� 1��F�G�G� G� � ��'�3�'�'� $�"�)���3�w�<�<� � �3�w�<�<�/�/� �!G�H�H�H�!�#�g� �&:�";�";�<�<�N�N�!�N��G� �;�v� �&9� 9� 9��&�&�w�/�/�G�G��G�18�0M�0M� �!�+�)�%3�=�z�z��-� 1N�1 �1 �-��-� �'� '� '��w�&�&�'�W�*<�*<�(�RY�BY�BY�W�W�'�W�W�W�[c�Zd�d���z�)�)�W� �-B�-B��Xb�Hb�Hb�]�]�Z�]�]�]�ai�`j�j� �� H� H�C��*�*�*� �!F�3�!F�!F�!F�G�G�G�+�� N� N�C��*�*�*� �!L��!L�!L�!L�M�M�M�+� �!� � �&��-�%� /�$7�!1��%�-� � � �J�J��1�_�_��!�)�n����4��-�%� /�$7�!1��%�-�'� � � �J�J��7�8�8� 8� (� (� (� (� (� (� � %�#���(A�B�B�J� � L�#�,�,�R�W�-A�-J�K�K�J� .� .� .� .� .� .� ��!�!�'�+�j�:J�"K�"K�L�L�L��rx)NNr�) NNFNFNNTrr�)r�r�r�r�r� staticmethodrr�rr�r�r�r�rr�rr&r�rxrvr�r��s��������s�u�u��� /3�$(� "� _>�_>��_>��_>��_>�!��c��+� _>� �S�M� _>� � _>�_>�_>��\�_>�F%)�37��)-�$�48�6:��� "�M�M��S�M�M��%��T�#�Y��/�0�M�� M� �X�&� M� � M�"�$�s�C�x�.�1�M��U�3��S� �>�2�3�M��M��M��M�M�M�M�M�Mrxr�c��eZdZdS)�$DatasetTransformationNotAllowedErrorN)r�r�r�r�rxrvr)r)s�������Drxr)c�J��t����fd���}d|_|S)z|Wrapper for dataset transforms that recreate a new Dataset to transmit the format of the original dataset to the new datasetc���|r|d}|dd�}n|�d��}t|j��t|jpg��z }|j|j|j|jd�}� |g|�Ri|��}t|t��r!t|� ����n|g}|D]�}|� ��}|d�'tt|j��|z ��|d<|j|j|j�t|j��nd|jd�} | |kr|j } |jdi|��| |_ ��|S)Nrr"ru�r�� format_kwargsr��output_all_columnsr�r�)r�r�� column_names�_format_columns� _format_type�_format_kwargs�_output_all_columnsr�r�r�r r��sorted� _fingerprint� set_format) �args�kwargsru�unformatted_columns� self_format�out�datasetsr�� new_format� out_format� fingerprint�funcs �rv�wrapperz transmit_format.<locals>.wrappers���� � 1�"�1�g�D�����8�D�D�$�j�j��0�0�D�!�$�"3�4�4�s�4�;O�;U�SU�7V�7V�V���%�!�0��+�"&�":�  � � �04�t�D�/J�4�/J�/J�/J�6�/J�/J��:D�S�$�:O�:O�$Z�D������$6�$6�$6�VY�UZ��� 3� 3�G�$�)�)�+�+�J��)�$�0�(.�s�7�3G�/H�/H�K^�/^�(_�(_� �9�%��,�!(�!7�>E�>U�>a�6�'�"9�:�:�:�gk�&-�&A� ��J� �Z�'�'�%�2� �"��"�0�0�Z�0�0�0�'2��$��� rx�transmit_format)r �_decorator_name_)r@rAs` rvrBrBs>��� �4�[�[�!�!�!�!��[�!�F 1�G�� �Nrx�tabler�c�&��t�fd�|jD�����|jj�d|jjvr#t jt ������}n�tj|jjd� ����}d|vr!tt ������|d<n,tt ������d|dd<dtj |��i}|� |��}|S)z�To be used in dataset transforms that modify the features of the dataset, in order to update the features stored in the metadata of its schema.c�"��i|] }|�|�� Sr�r�)r��col_namer�s �rvr�z1update_metadata_with_features.<locals>.<dictcomp>Gs ���Y�Y�Y�(��8�H�#5�Y�Y�YrxN� huggingface�r�ror�� huggingface) r+r/�schema�metadatar%�_build_metadatarF�json�loads�decoder_�dumps�replace_schema_metadata)rDr�� pa_metadatarLs ` rv�update_metadata_with_featuresrTEs����Y�Y�Y�Y�e�FX�Y�Y�Y�Z�Z�H� �|��$��e�l�>S�(S�(S�!�1�+�x�2P�2P�2P�Q�Q� � ��:�e�l�3�N�C�J�J�L�L�M�M�� �� !� !�%�k�8�&D�&D�&D�E�E�H�V� � �+1�+�x�2P�2P�2P�+Q�+Q�R\�+]�H�V� �Z� (�$�d�j��&:�&:�;� � � )� )�+� 6� 6�E� �Lrxr|c��t|tj��rt|��St|t��r|St d|�d����)zVWe check the table type to make sure it's an instance of :class:`datasets.table.Table`zCExpected a pyarrow.Table or a datasets.table.Table object, but got �.)r��parPrN� TypeError)rDs rv� _check_tablerYUsZ���%���"�"�h��U�#�#�#� �E�5� !� !�h�� ��f�^c�f�f�f�g�g�grxr/c���t|���td�����D����s!�fd��D��}td|�d����dS)zBCheck the column names to make sure they don't contain duplicates.c3�"K�|] }|dkV�� dS)r"Nr�)r��counts rv� <genexpr>z&_check_column_names.<locals>.<genexpr>ds&����8�8�e�u��z�8�8�8�8�8�8rxc�,��g|]}�|dk�|��S�r"r�)r�r�counters �rvr�z'_check_column_names.<locals>.<listcomp>es'���I�I�I�c��� �q�8H�8H�c�8H�8H�8Hrxz4The table can't have duplicated columns but columns z are duplicated.N)r�allr r�)r/�duplicated_columnsr`s @rv�_check_column_namesrcas{����l�#�#�G� �8�8�w�~�~�'7�'7�8�8�8� 8� 8�v�I�I�I�I�W�I�I�I���t�Pb�t�t�t�u�u�u�v�vrxc�\�|dkr ||zdks||krtd|�d|�d����dS)NrzIndex z" out of range for dataset of size rV)� IndexError)�index�sizes rv�_check_valid_indices_valuerhisJ�� �� � �e�d�l�Q�&�&�E�T�M�M��R�%�R�R�4�R�R�R�S�S�S�-:�Mrxc��eZdZdZdS)�NonExistentDatasetErrorz.Used when we expect the existence of a datasetN)r�r�r�r�r�rxrvrjrjns������8�8��Drxrjc.����eZdZdZ d�dedeedeedeedeef d�Z e d e f�fd � ��Z e d�d edeedeed eeded df d���Ze d�dejdeedeedeejd df d���Ze d�dejdee deedeedeed df d���Ze d�dddee deedeed df d���Ze d�dedee deedeed df d���Ze d�deedee deedeed df d���Ze d�deeeefdeedee dededeef d ���Zeddd dde j!fd!e"dee deded"eedeedefd#���Z#e d�deeeefdeedee deded$eedeefd%���Z$e d�deeeefdeedee deded&eeedeefd'���Z%e d�deeeefdeedee dededeef d(���Z&e d�dd*deedee deded+ed,efd-���Z'e d�d.eed/fd0eed1d2d3fdee dedef d4���Z(d5�Z)d6�Z*d7�Z+d8�Z, d�d9ed:eeeefd;eedeed<eef d=�Z-ed>ed?dd@ed<eefdA���Z.edBed e/fdC���Z0e d�d9edeed<eed dfdD���Z1e d efdE���Z2e d eefdF���Z3e d efdG���Z4e d efdH���Z5e d eefdI���Z6e d e7eeffdJ���Z8dKed efdL�Z9d�dKedMed dfdN�Z:e;d �O��d�dQeed dfdR���Z< d�de dTeeded,eedUeedVeedeed dfdW�Z=e;d �O��d�dKedXe>dQeed dfdY���Z?e@e;d �O��d�dZeeeefdQeed dfd[�����ZAe;d �O�� d�d\ed]edQeed dfd^���ZBe;d �O��d�d_eeefdQeed dfd`���ZCe@e;d �O��d�dZeeeefdQeed dfda�����ZDdb�ZEdc�ZFd�dTeddefde�ZGdf�ZHe dg���ZIeJjK d�dheed&eediefdj���ZLe;d)�O�� d�dheed&eediefdk���ZMdl�ZN d�dmee"d&eediefdn�ZO d�dheed&eediefdo�ZP d�dmee"d&eediefdp�ZQdqeeeReeSefd eeeffdr�ZTeUdqeeeReVefd efds���ZWeUdqed efdt���ZWdu�ZWdved efdw�ZXd efdx�ZYdy�ZZe@ d�d{ee"d|ed}ed~eeeeefdedTeedded�eeeeefded,eedUeedVeedee d�ed�eedeed�edQeed�eed df(d����Z[e d�d?dd{ee"d|ed}ed~eeededTeedded�eeededUeedVeedee d�ed�eedQeed�eed�ed eVe7eeeedfff&d����Z\e@e;d �O�� d�dTeddedeedQeed df d������Z]e@e;d gd��d����� d�d{ee"d|ed}ed~eeeeefdedTeeded,eedUeedVeed�eedeed�edQeed�eed df d������Z^e@e;d dUg���� d�dedUeedVeedee d�edeedQeed dfd������Z_ d�d�eedeejdeed dfd��Z`e@e;d d�g���� d�d�eVded�eedVeedQeed df d������Zae@e;d �O�� d�d�ed�edQeed dfd������Zbe@e;d d�g���� d�d�eVded�eedVeedQeed df d������Zcd�ed dfd��Zdd�ed dfd��Zed�ed dfd��Zfe@e;d d,d�g���� �ddZeeegefd�eeegefd�eded,eed�eedVeedQeed dfd������Zhe@e;d d)d,d�g���� �dd�eed!eeijjjkded,eed�eedVeedQeed dfd������Zle@e;d d)d�d�ggd������ �dd�eemedfd�eemedfd�ed�eed�eed!eeijjjkded,eed�eed�eedVeed�eed�eed d�fd������Zn �dd;ed�ed�eded�eedVeed dfd��Zo d�d�eeepfdTeedeed<eed ef d��Zqd�dTeed eeereffd��Zsd efd��Zt d�d�eeepfdTeedeed<eed ef d��Zu d�dTeeded eejerejffd��Zv �ddTeeded�eed�ed ederdff d��Zw d�d�eeepfdTeed<eed efd��Zx d�d�ed0eed1d2d3fdTeed efd��Zyd efd��Zzed�eddTefd„��Z{ed efdÄ��Z|�dd;eed d�fdƄZ} �dd�ed�edeed�eed�eed�eed:eeeefd;eed�ed e7ee~eeffd΄Z �dd�ed�ed�eedeed�eed�eed�eed�eed�eed�eed�eed:eeeefd;eed�ed e�fdՄZ�e@e;d �O�� d�d�edKeeeij�fdQedXee>fdք����Z�ddddddSdd eij�f dKed�eed�eed�eed�eed�ed�dTed�eed�ef�fdބ Z�dddddSdd eij�fd�eij�d�ed�eed�eed�eed�ed�dTed�eed�ef�fd�� Z� �ddKed�eed�eed�eed�ed�d�eed�eef�fd� Z�e@e;d �O��d�edQefd�����Z�d�ed�ed dfd�Z��xZ�S( r�z#A Dataset backed by an Arrow table.N� arrow_tablerorp� indices_tabler?c ��|�|���n t��}t�|||���t j|��t |��|_|�t |��nd|_t|��d|_ i|_ d|_ d|_ ||_ |jjj�fd|jjjvrSt!j|jjjd�����}d|vr|j � |d|_ t'j|j��}|jj� ||j_nO |jj�|��|j_n$#t0$r}t1|�d����d}~wwxYw|jj|jjjkr.|j�|jjj��|_|j �t9|��|_ |jj�t1d���|j �t1d���|jjj|jkr9t1d|jj�d |jjj�d |�d |j�����|j�kt>j �!|j�"d ��j��s/t1d |j�"d ��j�����tG|jj$��tK|j|jj��|_dS) N�rorpFrHr?zn The 'source' features come from dataset_info.json, and the 'target' ones are those of the dataset arrow file.�*Features can't be None in a Dataset objectz-Fingerprint can't be None in a Dataset objectz4External features info don't match the dataset: Got z with type z but expected something like rzEindices must be an Arrow table of unsigned integers, current type is )&r�rFrnrwrIrY�_data�_indicesr=r1r2r0r3r5rKrLrNrOrPr+�from_arrow_schemaror��reorder_fields_asr�r � arrow_schema�castr9rsr�rW�types�is_unsigned_integerr�rcr/rT) rurlrorprmr?rL�inferred_features�es rvrwzDataset.__init__wsW��#�.�t�y�y�{�{�{�K�M�M���!�!�$�T��!�?�?�?����%�%�%�(��5�5�� �HU�Ha��m�)D�)D�)D�gk�� �4�T�:�:�:�+/���$&���/3���).�� �!,��� �:� � %� 1�n�� �HY�Hb�6b�6b��z�$�*�"3�"<�^�"L�"S�"S�"U�"U�V�V�H���)�)�d�.?�.G�$,�]�$;��!�%�6�{�7I�J�J�� �9� � %�!2�D�I� � � �%)�Y�%7�%I�%I�J[�%\�%\�� �"�"��� � � � ��J�J�J�������� ���� �9� �t�y�1�>� >� >������ �(:�(G�H�H�D�J� � � $� 4�T� :� :�D� � �:� � &��I�J�J� J� � � $��L�M�M� M� �9� � "�&7�&<� <� <��]���I[�]�]�jn�js�j|�kB�]�]�ev�]�]�EV�E[�]�]��� � �=� $��8�/�/�� �0D�0D�Q�0G�0G�0L�M�M� � �z�\`�\i�\p�\p�qr�\s�\s�\x�z�z���� �D�J�3�4�4�4�2�4�:�t�z�?R�S�S�� � � s�)F� F)�F$�$F)r|c�P��t��j}|�td���|S)Nrp)�superr�r�)rur�� __class__s �rvr�zDataset.features�s)����7�7�#�� � ��I�J�J� J��rxF�filename�indices_filename� in_memoryc��tj||���}|�tj||���}nd}|||||���S)aXInstantiate a Dataset backed by an Arrow table at filename. Args: filename (`str`): File name of the dataset. info (`DatasetInfo`, *optional*): Dataset information, like description, citation, etc. split (`NamedSplit`, *optional*): Name of the dataset split. indices_filename (`str`, *optional*): File names of the indices. in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. Returns: [`Dataset`] )r�N)rlrorprm)r$� read_table)�clsr~rorprr�rD�indices_pa_tables rv� from_filezDataset.from_file�se��4�&�x�9�E�E�E�� � '�*�5�6F�R[�\�\�\� � �#� ��s����*�  � � � rx�buffer�indices_bufferc�z�tj|��}|�tj|��}nd}|||||���S)a�Instantiate a Dataset backed by an Arrow buffer. Args: buffer (`pyarrow.Buffer`): Arrow buffer. info (`DatasetInfo`, *optional*): Dataset information, like description, citation, etc. split (`NamedSplit`, *optional*): Name of the dataset split. indices_buffer (`pyarrow.Buffer`, *optional*): Indices Arrow buffer. Returns: [`Dataset`] N)rorprm)rN� from_buffer)r�r�rorpr�rDrms rvr�zDataset.from_buffer�sJ��.�)�&�1�1�� � %�)�5�f�=�=�M�M� �M��s�5�t�5� �N�N�N�Nrx�dfr��preserve_indexc� �|�'|�%|j|krtd|�d|j�����|�|n |�|jnd}|�t��}||_tj||���}|�|�|j��}||||���S)a� Convert `pandas.DataFrame` to a `pyarrow.Table` to create a [`Dataset`]. The column types in the resulting Arrow Table are inferred from the dtypes of the `pandas.Series` in the DataFrame. In the case of non-object Series, the NumPy dtype is translated to its Arrow equivalent. In the case of `object`, we need to guess the datatype by looking at the Python objects in this Series. Be aware that Series of the `object` dtype don't carry enough information to always lead to a meaningful Arrow type. In the case that we cannot infer a type, e.g. because the DataFrame is of length 0 or the Series only contains `None/nan` objects, the type is set to `null`. This behavior can be avoided by constructing explicit features and passing it to this function. Important: a dataset created with from_pandas() lives in memory and therefore doesn't have an associated cache directory. This may change in the future, but in the meantime if you want to reduce memory usage you should write it back on disk and reload using e.g. save_to_disk / load_from_disk. Args: df (`pandas.DataFrame`): Dataframe that contains the dataset. features ([`Features`], *optional*): Dataset features. info (`DatasetInfo`, *optional*): Dataset information, like description, citation, etc. split (`NamedSplit`, *optional*): Name of the dataset split. preserve_index (`bool`, *optional*): Whether to store the index as an additional column in the resulting Dataset. The default of `None` will store the index as a column, except for `RangeIndex` which is stored as metadata only. Use `preserve_index=True` to force it to be stored as a column. Returns: [`Dataset`] Example: ```py >>> ds = Dataset.from_pandas(df) ``` N�IFeatures specified in `features` and `info.features` can't be different: � )r�r�ro)r�r�rFrN� from_pandasrvru)r�r�r�rorpr�rDs rvr�zDataset.from_pandass���d � �� 4���(�9R�9R��x�]e�x�x�im�iv�x�x��� � (�3�8�8�$�JZ����`d�� �<��=�=�D� �� ��)��)� � � �� � ��J�J�x�4�5�5�E��s�5�t�5�1�1�1�1rxz pl.DataFramec�6�|�'|�%|j|krtd|�d|j�����|�|n |�|jnd}|�t��}||_t|�����}|�|�|j��}||||���S)aK Collect the underlying arrow arrays in an Arrow Table. This operation is mostly zero copy. Data types that do copy: * CategoricalType Args: df (`polars.DataFrame`): DataFrame to convert to Arrow Table features (`Features`, optional): Dataset features. info (`DatasetInfo`, optional): Dataset information, like description, citation, etc. split (`NamedSplit`, optional): Name of the dataset split. Examples: ```py >>> ds = Dataset.from_polars(df) ``` Nr�r�ro)r�r�rFrN�to_arrowrvru)r�r�r�rorprDs rv� from_polarszDataset.from_polarsSs���6 � �� 4���(�9R�9R��x�]e�x�x�im�iv�x�x��� � (�3�8�8�$�JZ����`d�� �<��=�=�D� �� ��b�k�k�m�m�,�,�� � ��J�J�x�4�5�5�E��s�5�t�5�1�1�1�1rx�mappingc��|�'|�%|j|krtd|�d|j�����|�|n |�|jnd}i}|���D]\}}t|tjtjf��r|�t|||��n|}n4t|�|� ||��n||�||nd|���}|||<��|}tj |���}|�t��}||_|j�0td�|���D����|_||||���S)aS Convert `dict` to a `pyarrow.Table` to create a [`Dataset`]. Important: a dataset created with from_dict() lives in memory and therefore doesn't have an associated cache directory. This may change in the future, but in the meantime if you want to reduce memory usage you should write it back on disk and reload using e.g. save_to_disk / load_from_disk. Args: mapping (`Mapping`): Mapping of strings to Arrays or Python lists. features ([`Features`], *optional*): Dataset features. info (`DatasetInfo`, *optional*): Dataset information, like description, citation, etc. split (`NamedSplit`, *optional*): Name of the dataset split. Returns: [`Dataset`] Nr�r�)r�r)r�c��i|]T\}}|t|tjtjf��rt |j��n|�����USr�)r�rW�Array� ChunkedArrayr2r��get_inferred_type)r�rr s rvr�z%Dataset.from_dict.<locals>.<dictcomp>�sh�����"��T��!�$���2�?�(C�D�D�2�1�$�)�<�<�<��/�/�1�1���rxro)r�r�r�r�rWr�r�rRr&� encode_columnrN� from_pydictrFr+) r�r�r�rorp�arrow_typed_mappingrr �pa_tables rv� from_dictzDataset.from_dict}s���< � �� 4���(�9R�9R��x�]e�x�x�im�iv�x�x��� � (�3�8�8�$�JZ����`d�� �� ����� ,� ,�I�C���$���2�?� ;�<�<� �EM�EY�,�T�8�C�=�A�A�A�_c���-�9A�9M�H�*�*�4��5�5�5�SW�*2�*>��#���D������ (,� �� $� $�%�� �,�W�=�=�=�� �<��=�=�D� �� � �=� �$���&-�]�]�_�_� �����D�M��s�8�$�e�4�4�4�4rxc�d���r�fd��dD��ni�|��|||��S)a� Convert a list of dicts to a `pyarrow.Table` to create a [`Dataset`]`. Note that the keys of the first entry will be used to determine the dataset columns, regardless of what is passed to features. Important: a dataset created with from_list() lives in memory and therefore doesn't have an associated cache directory. This may change in the future, but in the meantime if you want to reduce memory usage you should write it back on disk and reload using e.g. save_to_disk / load_from_disk. Args: mapping (`List[dict]`): A list of mappings of strings to row values. features (`Features`, optional): Dataset features. info (`DatasetInfo`, optional): Dataset information, like description, citation, etc. split (`NamedSplit`, optional): Name of the dataset split. Returns: [`Dataset`] c�0���i|]���fd��D����S)c�:��g|]}|������Sr�)�get)r��r�ks �rvr�z0Dataset.from_list.<locals>.<dictcomp>.<listcomp>�s#���1�1�1�A�q�u�u�Q�x�x�1�1�1rxr�)r�r�r�s @�rvr�z%Dataset.from_list.<locals>.<dictcomp>�s1����F�F�F�a�1�1�1�1�1��1�1�1�F�F�Frxr)r�)r�r�r�rorps ` rv� from_listzDataset.from_list�sH���<KR�Y�F�F�F�F�7�1�:�F�F�F�F�WY���}�}�W�h��e�<�<�<rx� path_or_paths� cache_dir�keep_in_memory�num_procc �P�ddlm}||f|||||d�|�����S)a�Create Dataset from CSV file(s). Args: path_or_paths (`path-like` or list of `path-like`): Path(s) of the CSV file(s). split ([`NamedSplit`], *optional*): Split name to be assigned to the dataset. features ([`Features`], *optional*): Dataset features. cache_dir (`str`, *optional*, defaults to `"~/.cache/huggingface/datasets"`): Directory to cache data. keep_in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. num_proc (`int`, *optional*, defaults to `None`): Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. <Added version="2.8.0"/> **kwargs (additional keyword arguments): Keyword arguments to be passed to [`pandas.read_csv`]. Returns: [`Dataset`] Example: ```py >>> ds = Dataset.from_csv('path/to/dataset.csv') ``` r")�CsvDatasetReader�rpr�r�r�r�)�io.csvr��read)r�rpr�r�r�r�r8r�s rv�from_csvzDataset.from_csv�s]��R -�,�,�,�,�,��� � ����)��  � �� � � �$�&�&� rx� generator� gen_kwargsc �R�ddlm}|d|||||||d�|�����S)a�Create a Dataset from a generator. Args: generator (:`Callable`): A generator function that `yields` examples. features ([`Features`], *optional*): Dataset features. cache_dir (`str`, *optional*, defaults to `"~/.cache/huggingface/datasets"`): Directory to cache data. keep_in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. gen_kwargs(`dict`, *optional*): Keyword arguments to be passed to the `generator` callable. You can define a sharded dataset by passing the list of shards in `gen_kwargs` and setting `num_proc` greater than 1. num_proc (`int`, *optional*, defaults to `None`): Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. If `num_proc` is greater than one, then all list values in `gen_kwargs` must be the same length. These values will be split between calls to the generator. The number of shards will be the minimum of the shortest list in `gen_kwargs` and `num_proc`. <Added version="2.7.0"/> split ([`NamedSplit`], defaults to `Split.TRAIN`): Split name to be assigned to the dataset. <Added version="2.21.0"/> **kwargs (additional keyword arguments): Keyword arguments to be passed to :[`GeneratorConfig`]. Returns: [`Dataset`] Example: ```py >>> def gen(): ... yield {"text": "Good", "label": 0} ... yield {"text": "Bad", "label": 1} ... >>> ds = Dataset.from_generator(gen) ``` ```py >>> def gen(shards): ... for shard in shards: ... with open(shard) as f: ... for line in f: ... yield {"line": line} ... >>> shards = [f"data{i}.txt" for i in range(32)] >>> ds = Dataset.from_generator(gen, gen_kwargs={"shards": shards}) ``` r")�GeneratorDatasetInputStream)r�r�r�r�r�r�rpr�)� io.generatorr�r�) r�r�r�r�r�r�rpr8r�s rv�from_generatorzDataset.from_generators`��| >�=�=�=�=�=�*�*�  ����)�!���  �  ��  �  � �$�&�&� rx�fieldc �R�ddlm}||f||||||d�|�����S)a"Create Dataset from JSON or JSON Lines file(s). Args: path_or_paths (`path-like` or list of `path-like`): Path(s) of the JSON or JSON Lines file(s). split ([`NamedSplit`], *optional*): Split name to be assigned to the dataset. features ([`Features`], *optional*): Dataset features. cache_dir (`str`, *optional*, defaults to `"~/.cache/huggingface/datasets"`): Directory to cache data. keep_in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. field (`str`, *optional*): Field name of the JSON file where the dataset is contained in. num_proc (`int`, *optional* defaults to `None`): Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. <Added version="2.8.0"/> **kwargs (additional keyword arguments): Keyword arguments to be passed to [`JsonConfig`]. Returns: [`Dataset`] Example: ```py >>> ds = Dataset.from_json('path/to/dataset.json') ``` r")�JsonDatasetReader)rpr�r�r�r�r�)�io.jsonr�r�) r�rpr�r�r�r�r�r8r�s rv� from_jsonzDataset.from_json\s`��X /�.�.�.�.�.� � � �  ����)���  �  ��  �  � �$�&�&� rxr�c �R�ddlm}||f||||||d�|�����S)a�Create Dataset from Parquet file(s). Args: path_or_paths (`path-like` or list of `path-like`): Path(s) of the Parquet file(s). split (`NamedSplit`, *optional*): Split name to be assigned to the dataset. features (`Features`, *optional*): Dataset features. cache_dir (`str`, *optional*, defaults to `"~/.cache/huggingface/datasets"`): Directory to cache data. keep_in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. columns (`List[str]`, *optional*): If not `None`, only these columns will be read from the file. A column name may be a prefix of a nested field, e.g. 'a' will select 'a.b', 'a.c', and 'a.d.e'. num_proc (`int`, *optional*, defaults to `None`): Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. <Added version="2.8.0"/> **kwargs (additional keyword arguments): Keyword arguments to be passed to [`ParquetConfig`]. Returns: [`Dataset`] Example: ```py >>> ds = Dataset.from_parquet('path/to/dataset.parquet') ``` r")�ParquetDatasetReader)rpr�r�r�r�r�)� io.parquetr�r�) r�rpr�r�r�r�r�r8r�s rv� from_parquetzDataset.from_parquet�s`��\ 5�4�4�4�4�4�#�#� �  ����)���  �  ��  �  � �$�&�&� rxc �P�ddlm}||f|||||d�|�����S)a�Create Dataset from text file(s). Args: path_or_paths (`path-like` or list of `path-like`): Path(s) of the text file(s). split (`NamedSplit`, *optional*): Split name to be assigned to the dataset. features (`Features`, *optional*): Dataset features. cache_dir (`str`, *optional*, defaults to `"~/.cache/huggingface/datasets"`): Directory to cache data. keep_in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. num_proc (`int`, *optional*, defaults to `None`): Number of processes when downloading and generating the dataset locally. This is helpful if the dataset is made of multiple files. Multiprocessing is disabled by default. <Added version="2.8.0"/> **kwargs (additional keyword arguments): Keyword arguments to be passed to [`TextConfig`]. Returns: [`Dataset`] Example: ```py >>> ds = Dataset.from_text('path/to/dataset.txt') ``` r")�TextDatasetReaderr�)�io.textr�r�)r�rpr�r�r�r�r8r�s rv� from_textzDataset.from_text�s]��R /�.�.�.�.�.� � � � ����)��  � �� � � �$�&�&� rxTzpyspark.sql.DataFrame� working_dir�load_from_cache_filec ��ddlm}tjdkrt d���||f||d||||d�|�����S)aaCreate a Dataset from Spark DataFrame. Dataset downloading is distributed over Spark workers. Args: df (`pyspark.sql.DataFrame`): The DataFrame containing the desired data. split (`NamedSplit`, *optional*): Split name to be assigned to the dataset. features (`Features`, *optional*): Dataset features. cache_dir (`str`, *optional*, defaults to `"~/.cache/huggingface/datasets"`): Directory to cache data. When using a multi-node Spark cluster, the cache_dir must be accessible to both workers and the driver. keep_in_memory (`bool`): Whether to copy the data in-memory. working_dir (`str`, *optional*) Intermediate directory for each Spark worker to write data to before moving it to `cache_dir`. Setting a non-NFS intermediate directory may improve performance. load_from_cache_file (`bool`): Whether to load the dataset from the cache if possible. Returns: [`Dataset`] Example: ```py >>> df = spark.createDataFrame( >>> data=[[1, "Elia"], [2, "Teo"], [3, "Fang"]], >>> columns=["id", "name"], >>> ) >>> ds = Dataset.from_spark(df) ``` r")�SparkDatasetReader�win32z8Dataset.from_spark is not currently supported on WindowsF)rpr�� streamingr�r�r�r�)�io.sparkr��sys�platform�OSErrorr�) r�rpr�r�r�r�r�r8r�s rv� from_sparkzDataset.from_sparks��Z 1�0�0�0�0�0� �<�7� "� "��T�U�U� U�!�!� �  �����)�#�!5�  �  ��  �  � �$�&�&� rx�sqlzsqlalchemy.sql.Selectable�conzsqlalchemy.engine.Connectionzsqlalchemy.engine.Enginezsqlite3.Connectionc �N�ddlm}|||f|||d�|�����S)aXCreate Dataset from SQL query or database table. Args: sql (`str` or `sqlalchemy.sql.Selectable`): SQL query to be executed or a table name. con (`str` or `sqlite3.Connection` or `sqlalchemy.engine.Connection` or `sqlalchemy.engine.Connection`): A [URI string](https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls) used to instantiate a database connection or a SQLite3/SQLAlchemy connection object. features ([`Features`], *optional*): Dataset features. cache_dir (`str`, *optional*, defaults to `"~/.cache/huggingface/datasets"`): Directory to cache data. keep_in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. **kwargs (additional keyword arguments): Keyword arguments to be passed to [`SqlConfig`]. Returns: [`Dataset`] Example: ```py >>> # Fetch a database table >>> ds = Dataset.from_sql("test_data", "postgres:///db_name") >>> # Execute a SQL query on the table >>> ds = Dataset.from_sql("SELECT sentence FROM test_data", "postgres:///db_name") >>> # Use a Selectable object to specify the query >>> from sqlalchemy import select, text >>> stmt = select([text("sentence")]).select_from(text("test_data")) >>> ds = Dataset.from_sql(stmt, "postgres:///db_name") ``` <Tip> The returned dataset can only be cached if `con` is specified as URI string. </Tip> r")�SqlDatasetReader)r�r�r�)�io.sqlr�r�)r�r�r�r�r�r8r�s rv�from_sqlzDataset.from_sqlCs\��^ -�,�,�,�,�,��� � � ���)�  � � �  � � �$�&�&� rxc�X�|j�|��t|��|Srr)�__dict__�updater=)ru�states rv� __setstate__zDataset.__setstate__}s+�� � ���U�#�#�#�4�T�:�:�:�� rxc�R�t|d��r|`t|d��r|`dSdS)Nrqrr)�hasattrrqrrrzs rvrzDataset.__del__�s>�� �4�� !� !� �� � �4�� $� $� �� � � � � rxc��|Srrr�rzs rv� __enter__zDataset.__enter__�s��� rxc�.�|���dSrr)r)ru�exc_type�exc_val�exc_tbs rv�__exit__zDataset.__exit__�s�� � � �����rx� dataset_path�max_shard_size� num_shards�storage_optionsc �� ������|���td�������rtd�����V����}t|p tj��}t ||z ��dz�t�|pd���|�|nd}���n|�t�fi�pi��\}}t|��s�d��j D��} t���� ��� ��| vrDtdt���� ��� ���d����|��d� ���fd �d D��} �j�t#�j��n�j| d <�fd �t%���D��| d<| d���D]Y} t)j| d| ���$#t,$r)} t-t#| ��d| �d�z��d�d} ~ wwxYwt/�j���d} t3dt5���d| �d��d����}����fd�t%���D��}dg�z}dg�z}|dkr�t7|��5}|5t9|t:j|���D]k\}}}|rN| dz } |�d| �d��d���t@�!d|�d��d���|\||<||<�V|�"|���l ddd��n #1swxYwYddd��n #1swxYwYn�|5|D]}t;jd$i|��D]k\}}}|rN| dz } |�d| �d��d���t@�!d|�d��d���|\||<||<�V|�"|���l�� ddd��n #1swxYwY|�#tIj%�tj&��dd���5}t)j'| |d d�!��ddd��n #1swxYwY|�#tIj%�tj(��dd���5}�fd"�tS���D��}t)j'||d �#��ddd��dS#1swxYwYdS)%a� Saves a dataset to a dataset directory, or in a filesystem using any implementation of `fsspec.spec.AbstractFileSystem`. For [`Image`], [`Audio`] and [`Video`] data: All the Image(), Audio() and Video() data are stored in the arrow files. If you want to store paths or urls, please use the Value("string") type. Args: dataset_path (`path-like`): Path (e.g. `dataset/train`) or remote URI (e.g. `s3://my-bucket/dataset/train`) of the dataset directory where the dataset will be saved to. max_shard_size (`int` or `str`, *optional*, defaults to `"500MB"`): The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like `"50MB"`). num_shards (`int`, *optional*): Number of shards to write. By default the number of shards depends on `max_shard_size` and `num_proc`. <Added version="2.8.0"/> num_proc (`int`, *optional*): Number of processes when downloading and generating the dataset locally. Multiprocessing is disabled by default. <Added version="2.8.0"/> storage_options (`dict`, *optional*): Key/value pairs to be passed on to the file-system backend, if any. <Added version="2.8.0"/> Example: ```py >>> ds.save_to_disk("path/to/dataset/directory") >>> ds.save_to_disk("path/to/dataset/directory", max_shard_size="1GB") >>> ds.save_to_disk("path/to/dataset/directory", num_shards=1024) ``` N�XFailed to push_to_hub: please specify either max_shard_size or num_shards, but not both.zPplease remove all the indexes using `dataset.drop_index` before saving a datasetr"c�f�h|].}t|d�����j��/S�r~)r�resolve�parent�r��cache_filenames rvr�z'Dataset.save_to_disk.<locals>.<setcomp>�sA��(�(�(�FT��^�J�/�0�0�8�8�:�:�A�(�(�(rxzTried to overwrite z& but a dataset can't overwrite itself.T��exist_okc�,��i|]}|�j|��Sr�)r�)r�r�rus �rvr�z(Dataset.save_to_disk.<locals>.<dictcomp>�s2���  �  �  �� ���s�#�  �  �  rx)r5r0r2r1r3rtc�*��g|]}dd|d�d�d�d�i��S)r~�data-�05d�-of-�.arrowr�)r�� shard_idxr�s �rvr�z(Dataset.save_to_disk.<locals>.<listcomp>�sE��� � � �PY�Z�J��J�J�J�J�J�J�J�J� K� � � rx� _data_filesr2z7 The format kwargs must be JSON serializable, but key 'z' isn't.r� exampleszSaving the dataset (�/z shards)��unit�total�descc 3��K�|]=}|���|d���tj�d|d�d�d�d����d�V��>dS)T�r�rf� contiguousr�r�r�r�)�job_id�shard�fpathr�N)r� posixpath�join)r�r�r�r�rur�s ����rvr]z'Dataset.save_to_disk.<locals>.<genexpr>�s������ � �� $����z��W[��\�\�"�� �6g�i�6g�6g�6g�Q[�6g�6g�6g�6g�h�h�#2�  � � � � � � � rx��kwargs_iterablezFinished writing shard number � of rV�w�utf-8��encoding�)�indent� sort_keysc�"��i|] }|�|�� Sr�r�)r�r�� dataset_infos �rvr�z(Dataset.save_to_disk.<locals>.<dictcomp>%s ���'_�'_�'_�3��\�#�->�'_�'_�'_rx�r r�)*r�� list_indexes�_estimate_nbytesr`r#�MAX_SHARD_SIZEr��maxrr5� cache_filesr� expanduserr��PermissionError�makedirsrpr�r�r�rNrQrXr_rs�hf_tqdmr�r rbr��_save_to_disk_single�set_descriptionr�debugr��openrr�DATASET_STATE_JSON_FILENAME�dump�DATASET_INFO_FILENAMEr4)rur�r�r�r�r��dataset_nbytes�fsr��parent_cache_files_pathsr�r�rz� shards_done�pbar�kwargs_per_job� shard_lengths� shard_sizes�poolr�done�contentr8� state_file�dataset_info_file�sorted_keys_dataset_infors`` ` ` @rv� save_to_diskzDataset.save_to_disk�s��������Z � %�*�*@��j��� � � � � � � q��o�p�p� p� � �!�2�2�4�4�N�5�n�6]��H]�^�^�N��^�n�<�=�=��A�J��Z���Q�7�7�J�'�3�8�8���#-�#9�Z�Z�x� ��,�B�B�?�+@�b�B�B���A�#�B�'�'� �(�(�X\�Xh�(�(�(� $��L�!�!�,�,�.�.�6�6�8�8�<T�T�T�%�{�$�|�*<�*<�*G�*G�*I�*I�*Q�*Q�*S�*S�{�{�{���� � � �L�4� �0�0�0�  �  �  �  ��  �  �  ��.2�Z�-C�#�d�j�/�/�/����h�� � � � �]b�cm�]n�]n� � � ��m���'�(�-�-�/�/� � �A� �� �5�!1�2�1�5�6�6�6�6��� � � ����F�F�c�XY�c�c�c�c��������� ���� �d�j�)�)� �� ����d�)�)�J� �J�J�j�J�J�J� � � ��  � � � � � � �#�:�.�.� � � �����+� ��f�z�)� � �a�<�<��h��� 1�4�� 1� 1�1C��g�:�N�2�2�2� 1� 1�-���g� �1�'�1�,�K� �0�0�1j� �1j�1j�V`�1j�1j�1j�k�k�k�"�L�L�)c�&�)c�)c�V`�)c�)c�)c�d�d�d�IP�F�M�&�1�;�v�3F�3F� �K�K��0�0�0�0� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1���� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1���� 1� 1� 1� 1��� 1� 1�,�1�1�F�18�1M�1W�1W�PV�1W�1W�1�1�-���g��1�'�1�,�K� �0�0�1j� �1j�1j�V`�1j�1j�1j�k�k�k�"�L�L�)c�&�)c�)c�V`�)c�)c�)c�d�d�d�IP�F�M�&�1�;�v�3F�3F� �K�K��0�0�0�0�1�1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1���� 1� 1� 1� 1��W�W� �N�<��)K� L� L�c�\c�� � � C� � �I�e�Z��T� B� B� B� B� C� C� C� C� C� C� C� C� C� C� C���� C� C� C� C��W�W� �N�<��)E� F� F��V]�� � � M� �'_�'_�'_�'_�&�Q]�J^�J^�'_�'_�'_� $� �I�.�0A�!� L� L� L� L�  M� M� M� M� M� M� M� M� M� M� M� M���� M� M� M� M� M� Ms�� G1�1 H$�;$H�H$�"M�%B L;�/ M�;L? �?M�L? �M�M�M� BO0�0O4�7O4�0Q�Q�Q�3S�S�Srrrc#�K�tj}d}t|j||d���} t j��}|�d���|��D]i}|�|��|t|��z }t j��|tj zkrt j��}|d|fV�d}�j |d|fV�|� ��\} } |� ��n7#|d|fV�|� ��\} } |� ��wxYw|d| | ffV�dS)NrT)r��pathr��embed_local_files�arrowF) r#�DEFAULT_MAX_BATCH_SIZEr%r��timer�iter� write_tabler��PBAR_REFRESH_TIME_INTERVAL�finalize�close) rrrr�r��num_examples_progress_update�writer�_timer�� num_examples� num_bytess rvrzDataset._save_to_disk_single(sn�����2� �'(�$���^��+�"�  � � �� ��I�K�K�E�!�-�-�g�6�6�;�;�J�G�G� 5� 5���"�"�8�,�,�,�,��H� � �=�,��9�;�;���)J�!J�J�J� �I�K�K�E� �%�)E�E�E�E�E�34�0��  5��%�!=�=� =� =� =�&,�o�o�&7�&7� #�L�)� �L�L�N�N�N�N���%�!=�=� =� =� =�&,�o�o�&7�&7� #�L�)� �L�L�N�N�N�N�����d�\�9�5�5�5�5�5�5�5s �B%D�4D7� uri_or_pathc��t|��}t��}t||�|j����S)a� Builds and returns a Path concatenating a local temporary dir with the dir path (or absolute/relative path extracted from the uri) passed. Args: uri_or_path (`str`): Path (e.g. `"dataset/train"`) or remote URI (e.g. `"s3://my-bucket/dataset/train"`) to concatenate. Returns: :class:`Path`: the concatenated path (temp dir + path) )rr;� relative_to�anchor)rA�src_dataset_path�tmp_dirs rv�_build_local_temp_pathzDataset._build_local_temp_pathCs@�� � �,�,��5�7�7���G�-�9�9�:J�:Q�R�R�S�S�Srxc ���t|fi|pi��\}}|�tj�tj��}tj�tj��}tj�tj��}|�|��}|�|��}|�|��} |s0| s.|rtd|�d|�d����td|�d|�d����|s(|rtd|�d����td|�d����| s(|rtd|�d����td|�d����t|��r��} t� | ���|� | �� ��d���tj�tj��}tj�tj��}t|d � ��5} tj| ��} d d d ��n #1swxYwYt|d � ��5} t#jtj| ����}d d d ��n #1swxYwYt'�fd �| d D����}|�|nt)|��}|rt*nt,}t/t1|j�fd�| d D��t4dt7| d ��dkpd �����}| d}|�t9|��n|}t|||| d���}| d| d| d| dd�}|jdi|��}|S)aR Loads a dataset that was previously saved using [`save_to_disk`] from a dataset directory, or from a filesystem using any implementation of `fsspec.spec.AbstractFileSystem`. Args: dataset_path (`path-like`): Path (e.g. `"dataset/train"`) or remote URI (e.g. `"s3//my-bucket/dataset/train"`) of the dataset directory where the dataset will be loaded from. keep_in_memory (`bool`, defaults to `None`): Whether to copy the dataset in-memory. If `None`, the dataset will not be copied in-memory unless explicitly enabled by setting `datasets.config.IN_MEMORY_MAX_SIZE` to nonzero. See more details in the [improve performance](../cache#improve-performance) section. storage_options (`dict`, *optional*): Key/value pairs to be passed on to the file-system backend, if any. <Added version="2.8.0"/> Returns: [`Dataset`] or [`DatasetDict`]: - If `dataset_path` is a path of a dataset directory, the dataset requested. - If `dataset_path` is a path of a dataset dict directory, a `datasets.DatasetDict` with each split. Example: ```py >>> ds = load_from_disk("path/to/dataset/directory") ``` zNo such files: 'z', nor 'z�' found. Expected to load a `Dataset` object, but got a `DatasetDict`. Please use either `datasets.load_from_disk` or `DatasetDict.load_from_disk` instead.zR' found. Expected to load a `Dataset` object but provided path is not a `Dataset`.zNo such file: 'zL'. Expected to load a `Dataset` object but provided path is not a `Dataset`.T)� recursiver r Nc3�D�K�|]}t�|d��V��dS)r~Nr�r�� data_file�dest_dataset_paths �rvr]z)Dataset.load_from_disk.<locals>.<genexpr>�sD�����- �- �?H�D�"�I�j�$9� :� :�- �- �- �- �- �- rxr�c�F��g|]}tj�|d����Sr�)rrrKs �rvr�z*Dataset.load_from_disk.<locals>.<listcomp>�s+���p�p�p�i��� 1�9�Z�3H�I�I�p�p�prxzLoading dataset from disk�)� tqdm_classr��disablertr5)rlrorpr?r1r2r0r3r,r�)rrrr#�DATASETDICT_JSON_FILENAMErr!�isfile�FileNotFoundErrorr5r�rG�download�as_posixrrN�loadrFr�r[r\rNrOrSr!r�rr�rKr)r�r�r�r#�dataset_dict_json_path�dataset_state_json_path�dataset_info_path�dataset_dict_is_file�dataset_info_is_file�dataset_state_is_filerEr-r�r.rr�� table_clsrlrpr�rrMs @rv�load_from_diskzDataset.load_from_diskTs����H%�\�M�M�o�6K��M�M���L�(��!*��0A�6�Cc�!d�!d��"+�.�1B�F�Df�"g�"g��%�N�+<�f�>Z�[�[��!�y�y�)?�@�@��!�y�y�):�;�;�� "� � �*A� B� B��#� �,A� �#� �'�w�'8�w�w�BY�w�w�w����$�j�#4�j�j�>U�j�j�j��� �$� �#� �'�U�&7�U�U�U����$�B�"3�B�B�B��� �%� �#� �'�[�&=�[�[�[����$�H�"9�H�H�H��� � �� #� #� `�0� � '� >� >�?O� P� P� � �K�K�(�*;�*D�*D�*F�*F�RV�K� W� W� W�&/�n�5F��Hj�&k�&k� #� )��/@�&�B^� _� _� � �)�G� <� <� <� *� ��I�j�)�)�E� *� *� *� *� *� *� *� *� *� *� *���� *� *� *� *� �#�g� 6� 6� 6� O�:K�&�0���;L�1M�1M�N�N�L� O� O� O� O� O� O� O� O� O� O� O���� O� O� O� O�-�- �- �- �- �LQ�R_�L`�- �- �- � � � �,:�+E���K[�\h�Ki�Ki��%3�J�M�M�9J� �#� ��#�p�p�p�p�[`�an�[o�p�p�p�"�0��E�-�0�1�1�R�7�?�4�  � � �  �  � ��h��� %� 1��e� � � �u���#����n�-�  � � ���.�)�"�#3�4��.�/�"'�(=�">�  � �� &�'�%�/�/��/�/���s$�G=�=H�H�'I � I�Ic��|jS)a The Apache Arrow table backing the dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.data MemoryMappedTable text: string label: int64 ---- text: [["compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .","the soundtrack alone is worth the price of admission .","rodriguez does a splendid job of racial profiling hollywood style--casting excellent latin actors of all ages--a trend long overdue .","beneath the film's obvious determination to shock at any cost lies considerable skill and determination , backed by sheer nerve .","bielinsky is a filmmaker of impressive talent .","so beautifully acted and directed , it's clear that washington most certainly has a new career ahead of him if he so chooses .","a visual spectacle full of stunning images and effects .","a gentle and engrossing character study .","it's enough to watch huppert scheming , with her small , intelligent eyes as steady as any noir villain , and to enjoy the perfectly pitched web of tension that chabrol spins .","an engrossing portrait of uncompromising artists trying to create something original against the backdrop of a corporate music industry that only seems to care about the bottom line .",...,"ultimately , jane learns her place as a girl , softens up and loses some of the intensity that made her an interesting character to begin with .","ah-nuld's action hero days might be over .","it's clear why deuces wild , which was shot two years ago , has been gathering dust on mgm's shelf .","feels like nothing quite so much as a middle-aged moviemaker's attempt to surround himself with beautiful , half-naked women .","when the precise nature of matthew's predicament finally comes into sharp focus , the revelation fails to justify the build-up .","this picture is murder by numbers , and as easy to be bored by as your abc's , despite a few whopping shootouts .","hilarious musical comedy though stymied by accents thick as mud .","if you are into splatter movies , then you will probably have a reasonably good time with the salton sea .","a dull , simple-minded and stereotypical tale of drugs , death and mind-numbing indifference on the inner-city streets .","the feature-length stretch . . . strains the show's concept ."]] label: [[1,1,1,1,1,1,1,1,1,1,...,0,0,0,0,0,0,0,0,0,0]] ``` �rqrzs rvr z Dataset.data�s ��$�z�rxc�~�t|j��}|j�|t|j��z }d�|D��S)a�The cache files containing the Apache Arrow table backing the dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.cache_files [{'filename': '/root/.cache/huggingface/datasets/rotten_tomatoes_movie_review/default/1.0.0/40d411e45a6ce3484deed7cc15b82a53dad9a72aafd9f86f8f227134bec5ca46/rotten_tomatoes_movie_review-validation.arrow'}] ``` Nc��g|]}d|i��Sr�r�r�s rvr�z'Dataset.cache_files.<locals>.<listcomp>�s��O�O�O���^�,�O�O�Orx)rUrqrr)rurs rvrzDataset.cache_files�sD��-�T�Z�8�8� � �=� $� �1�$�-�@�@� @�K�O�O�;�O�O�O�Orxc��|jjS)a Number of columns in the dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.num_columns 2 ``` )rq� num_columnsrzs rvrezDataset.num_columns�s���z�%�%rxc�@�|j� |jjS|jjS)a$Number of rows in the dataset (same as [`Dataset.__len__`]). Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.num_rows 1066 ``` )rr�num_rowsrqrzs rvrgzDataset.num_rowss"�� �=� $��=�)� )��z�"�"rxc��|jjS)aNames of the columns in the dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.column_names ['text', 'label'] ``` �rqr/rzs rvr/zDataset.column_namess���z�&�&rxc�X�|j�|jj|jjfS|jjS)a#Shape of the dataset (number of columns, number of rows). Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.shape (1066, 2) ``` )rrrgrqrer�rzs rvr�z Dataset.shape"s,�� �=� $��M�*�D�J�,B�C� C��z��rxr�c�J�||jjvr td|�d|jj�d����|j�/|jj|jjkr|���}n|}|j�|��������S)aOReturn a list of the unique elements in a column. This is implemented in the low-level backend and as such, very fast. Args: column (`str`): Column name (list all the column names with [`~datasets.Dataset.column_names`]). Returns: `list`: List of unique elements in the given column. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.unique('label') [1, 0] ``` �Column (�) not in table columns (�).) rqr/r�rrrg�flatten_indicesr��unique� to_pylist)rur�r�s rvrpzDataset.unique3s���* ���0� 0� 0��c��c�c�� �H_�c�c�c�d�d� d� �=� $���)?�4�:�CV�)V�)V��*�*�,�,�G�G��G��}�#�#�F�+�+�2�2�4�4�>�>�@�@�@rx� include_nullsc ���� ��|jjvr td��d|jj�d����|jj�}t |t ��s5tdt j�d��dt|��j�d����|j dks�r6d |� ���vr��fd �}|� |d d � ��}n|}t�fd�|� ���D����}t|���� �� �fd�}|j���}� |�<|� |d |d���}|S)a4Casts the given column as [`~datasets.features.ClassLabel`] and updates the table. Args: column (`str`): The name of the column to cast (list all the column names with [`~datasets.Dataset.column_names`]) include_nulls (`bool`, defaults to `False`): Whether to include null values in the class labels. If `True`, the null values will be encoded as the `"None"` class label. <Added version="1.14.2"/> Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("boolq", split="validation") >>> ds.features {'answer': Value(dtype='bool', id=None), 'passage': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)} >>> ds = ds.class_encode_column('answer') >>> ds.features {'answer': ClassLabel(num_classes=2, names=['False', 'True'], id=None), 'passage': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None)} ``` rlrmrnz%Class encoding is only supported for � column, and column � is rVr�Nc�6���fd�|�D��|�<|S)Nc�:��g|]}�s|�t|��nd��Srr�r��r�rrrs �rvr�zIDataset.class_encode_column.<locals>.stringify_column.<locals>.<listcomp>ys:���!�!�!�U[�=�P�F�4F�C��K�K�K�D�!�!�!rxr�)r�r�rrs ��rv�stringify_columnz5Dataset.class_encode_column.<locals>.stringify_columnxs<���!�!�!�!�_d�ek�_l�!�!�!��f� �� rxTzStringifying the column)�batchedr�c3�>�K�|]}�s|�t|��V��dSrrrxrys �rvr]z.Dataset.class_encode_column.<locals>.<genexpr>�s4�����r�r�V�m�r�_e�_q�S��[�[�_q�_q�_q�_q�r�rrx��namesc�8����fd�|�D��|�<|S)Nc�`��g|]*}�s|�"��t|����nd��+Srr)�str2intr�)r�r�dst_featrrs ��rvr�zMDataset.class_encode_column.<locals>.cast_to_class_labels.<locals>.<listcomp>�sM�������2?�^�&�BT�� � ��V���-�-�-�Z^���rxr�)r�r�r�rrs ���rv�cast_to_class_labelsz9Dataset.class_encode_column.<locals>.cast_to_class_labels�s?��������#�F�m����E�&�M��LrxzCasting to class labels)r{r�r�)rqr/r�rsr�r�r-r�r�r�rprr4r*r�) rur�rr�src_featrz�dset� class_namesr�� new_featuresr�s `` @rv�class_encode_columnzDataset.class_encode_columnRs������8 ���0� 0� 0��c��c�c�� �H_�c�c�c�d�d� d��:�&�v�.���(�E�*�*� ��C���C�C�\b�C�C�hl�mu�hv�hv�h�C�C�C��� � �>�X� %� %�-� %�D�D�K�K�PV�DW�DW�<W�<W� � � � � � � �8�8� ��.����D�D� �D��r�r�r�r�t�{�{�6�7J�7J�r�r�r�r�r� ��K�0�0�0�� � � � � � � ��}�)�)�+�+� �'� �V���x�x� ��!�*� � � ��� rx)�inplacerO�new_fingerprintc�0��tj|���td|��D]D}td��jjD����r�j����_�D|jj�|����j _t�fd��j j D�����j _t�j�j���_t� d|�d|dz|krdnd�d���|�_�S) a�Flatten the table. Each column with a struct type is flattened into one column per struct field. Other columns are left unchanged. Args: new_fingerprint (`str`, *optional*): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Returns: [`Dataset`]: A copy of the dataset with flattened columns. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("rajpurkar/squad", split="train") >>> ds.features {'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None), 'context': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)} >>> ds.flatten() Dataset({ features: ['id', 'title', 'context', 'question', 'answers.text', 'answers.answer_start'], num_rows: 87599 }) ``` r"c3�TK�|]#}t|jtj��V��$dSrr)r�r�rW� StructType)r�r�s rvr]z"Dataset.flatten.<locals>.<genexpr>�s0����[�[�U�:�e�j�"�-�8�8�[�[�[�[�[�[rx)� max_depthc�6��i|]}|�jj|��Sr�)ror�)r�rr�s �rvr�z#Dataset.flatten.<locals>.<dictcomp>�s%���)o�)o�)o�c�#�w�|�/D�S�/I�)o�)o�)orxzFlattened dataset from depth z to depth �unknownrV)r�rr��anyrqrK�flattenrsr�ror+r r/rTrr5)rur�r��depthr�s @rvr�zDataset.flatten�s���@�-��%�%���1�i�(�(� � �E��[�[�g�m�FZ�[�[�[�[�[� � '� � 5� 5� 7� 7�� � �� $� � 3� ;� ;�i� ;� P� P�� �� (�)o�)o�)o�)o�U\�Ua�Un�)o�)o�)o� p� p�� ��5�g�m�W�EU�V�V�� �� � �q�E�q�q�%�RS�)�V_�J_�J_�Q�Q�en�q�q�q�r�r�r�.����rx��r��cache_file_name�writer_batch_sizec �x�t|��t|jj��kr,tdt |���d|jj�����|j}|j} |�d��} | �tt|���d|||||||d�� � } | jdi| ��} | S) an Cast the dataset to a new set of features. Args: features ([`Features`]): New features to cast the dataset to. The name of the fields in the features must match the current column names. The type of the data must also be convertible from one type to the other. For non-trivial conversion, e.g. `str` <-> `ClassLabel` you should use [`~datasets.Dataset.map`] to update the Dataset. batch_size (`int`, defaults to `1000`): Number of examples per batch provided to cast. If `batch_size <= 0` or `batch_size == None` then provide the full dataset as a single batch to cast. keep_in_memory (`bool`, defaults to `False`): Whether to copy the data in-memory. load_from_cache_file (`bool`, defaults to `True` if caching is enabled): If a cache file storing the current computation from `function` can be identified, use it instead of recomputing. cache_file_name (`str`, *optional*, defaults to `None`): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running [`~datasets.Dataset.map`]. num_proc (`int`, *optional*, defaults to `None`): Number of processes for multiprocessing. By default it doesn't use multiprocessing. Returns: [`Dataset`]: A copy of the dataset with casted features. Example: ```py >>> from datasets import load_dataset, ClassLabel, Value >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.features {'label': ClassLabel(names=['neg', 'pos'], id=None), 'text': Value(dtype='string', id=None)} >>> new_features = ds.features.copy() >>> new_features['label'] = ClassLabel(names=['bad', 'good']) >>> new_features['text'] = Value('large_string') >>> ds = ds.cast(new_features) >>> ds.features {'label': ClassLabel(names=['bad', 'good'], id=None), 'text': Value(dtype='large_string', id=None)} ``` zThe columns in features (z3) must be identical as the columns in the dataset: r4�rKTzCasting the dataset) r{r�r�r�r�r�r�r�r�r�) r4rqr/r�r�rurrrr rV) rur�r�r�r�r�r�r�rKrr�s rvrvz Dataset.cast�s���t �(� � �v�d�j�&=�>�>� >� >��L�D��N�N�L�L�26�*�2I�L�L��� � �&������"�"�7�+�+���+�+� �J�v� .� .� .��!�)�!5�+�/���&��  �  ��&�'�%�/�/��/�/���rx�featurec�L�t|d��rttj|��}||jj|<||_|j�|jj��|_t|j|j��|_|S|j}|||<|�|��S)a�Cast column to feature for decoding. Args: column (`str`): Column name. feature (`FeatureType`): Target feature. new_fingerprint (`str`, *optional*): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Returns: [`Dataset`] Example: ```py >>> from datasets import load_dataset, ClassLabel >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.features {'label': ClassLabel(names=['neg', 'pos'], id=None), 'text': Value(dtype='string', id=None)} >>> ds = ds.cast_column('label', ClassLabel(names=['bad', 'good'])) >>> ds.features {'label': ClassLabel(names=['bad', 'good'], id=None), 'text': Value(dtype='string', id=None)} ``` �decode_example) r�r�rrsr�r5rqrvrurT)rur�r�r�r�r�s rv� cast_columnzDataset.cast_columns���< �7�,� -� -� '��m�D�)�)�G�-4�G�M� "�6� *�#2�G� �#�M�.�.�w�/?�/L�M�M�G�M�9�'�-��IY�Z�Z�G�M��N��}�H�&�H�V� ��9�9�X�&�&� &rxr/c��tj|��}t|t��r|g}t |��t |jj��z }|r,tdt|���d|jj�����|D]}|j j |=�|j� |��|_t|j|j ��|_||_ |S)a Remove one or several column(s) in the dataset and the features associated to them. You can also remove a column using [`~datasets.Dataset.map`] with `remove_columns` but the present method doesn't copy the data of the remaining columns and is thus faster. Args: column_names (`Union[str, List[str]]`): Name of the column(s) to remove. new_fingerprint (`str`, *optional*): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Returns: [`Dataset`]: A copy of the dataset object without the columns to remove. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds = ds.remove_columns('label') Dataset({ features: ['text'], num_rows: 1066 }) >>> ds = ds.remove_columns(column_names=ds.column_names) # Removing all the columns returns an empty dataset with the `num_rows` property set to 0 Dataset({ features: [], num_rows: 0 }) ``` � Column name �5 not in the dataset. Current columns in the dataset: )r�rr�r�r�rqr/r�r�rsr��droprTr5)rur/r�r��missing_columns� column_names rv�remove_columnszDataset.remove_columnsGs���H�-��%�%�� �l�C� (� (� *�(�>�L��l�+�+�c�$�*�2I�.J�.J�J�� � ��P�t�O�4�4�P�P�3:�=�3M�P�P��� � (� 4� 4�K�� �&�{�3�3�� �*�*�<�8�8�� �5�g�m�W�EU�V�V�� �.����rx�original_column_name�new_column_namec����tj|��}�|jjvrt d��d|jj������|jjvrt d��d|jj������st d�����fd�}||jj��}|j�||j��|_t ��fd�|jj� ��D����|j_|j� |��|_t|j|j��|_||_ |S) a� Rename a column in the dataset, and move the features associated to the original column under the new column name. Args: original_column_name (`str`): Name of the column to rename. new_column_name (`str`): New name for the column. new_fingerprint (`str`, *optional*): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Returns: [`Dataset`]: A copy of the dataset with a renamed column. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds = ds.rename_column('label', 'label_new') Dataset({ features: ['text', 'label_new'], num_rows: 1066 }) ``` zOriginal column name r�zNew column name zz already in the dataset. Please choose a column name which is not already in the dataset. Current columns in the dataset: zNew column name is empty.c�"����fd�|D��S)Nc�$��g|] }|�kr�n|�� Sr�r�)r�rr�r�s ��rvr�z9Dataset.rename_column.<locals>.rename.<locals>.<listcomp>�s)���_�_�_�PS�s�.B�'B�'B�O�O��_�_�_rxr�)r�r�r�s ��rv�renamez%Dataset.rename_column.<locals>.rename�s!���_�_�_�_�_�W^�_�_�_� _rxNc�,��i|]\}}|�kr�n||��Sr�r�)r�rr�r�r�s ��rvr�z)Dataset.rename_column.<locals>.<dictcomp>�s>��� � � � �C��$'�*>�#>�#>���C�� � � rx) r�rrqr/r�r0r+rsr�r��rename_columnsrTr5)rur�r�r�r�r��new_column_namess `` rv� rename_columnzDataset.rename_column~s�����@�-��%�%�� �w�}�'A� A� A��P�(<�P�P�3:�=�3M�P�P��� � �g�m�8� 8� 8��P�?�P�P�3:�=�3M�P�P��� � � :��8�9�9� 9� `� `� `� `� `� `�"�6�$�*�"9�:�:�� � � +�&,�f�T�-A�&B�&B�G� #�!)� � � � � �$(�J�$7�$=�$=�$?�$?� � � �" �" �� �� � �4�4�5E�F�F�� �5�g�m�W�EU�V�V�� �.����rx�column_mappingc���tj|��}t������t|j��z }|rt d|�d|jj�����t������tt��������z }|dkrt d|�d����d�����D��}|rt d|�d�����fd �}||jj��}|j �||j ��|_ t�fd �|j j pi� ��D����|j _ |j�|��|_t|j|j ��|_||_|S) a� Rename several columns in the dataset, and move the features associated to the original columns under the new column names. Args: column_mapping (`Dict[str, str]`): A mapping of columns to rename to their new names new_fingerprint (`str`, *optional*): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Returns: [`Dataset`]: A copy of the dataset with renamed columns Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds = ds.rename_columns({'text': 'text_new', 'label': 'label_new'}) Dataset({ features: ['text_new', 'label_new'], num_rows: 1066 }) ``` zOriginal column names r�rzDNew column names must all be different, but this column mapping has z duplicatesc��g|]}|�|��Sr�r�)r��new_cols rvr�z*Dataset.rename_columns.<locals>.<listcomp>�s��[�[�[��SZ�[�W�[�[�[rxzNew column names z are empty.c� ���fd�|D��S)Nc�,��g|]}|�vr�|n|��Sr�r�)r�rr�s �rvr�z:Dataset.rename_columns.<locals>.rename.<locals>.<listcomp>�s-���]�]�]�c�3�.�+@�+@�N�3�'�'�c�]�]�]rxr�)r�r�s �rvr�z&Dataset.rename_columns.<locals>.rename�s���]�]�]�]�U\�]�]�]� ]rxNc�4��i|]\}}|�vr�|n||��Sr�r�)r�rr�r�s �rvr�z*Dataset.rename_columns.<locals>.<dictcomp>�sC��� � � � �C��(+�n�'<�'<��s�#�#�#�w� � � rx)r�rr�r�r/r�rqr�r r0r+rsr�r�r�rTr5) rur�r�r�� extra_columns�#number_of_duplicates_in_new_columns�empty_new_columnsr�r�s ` rvr�zDataset.rename_columns�s���8�-��%�%���N�/�/�1�1�2�2�S��9M�5N�5N�N� � � ��P��P�P�3:�=�3M�P�P��� � /2�.�2G�2G�2I�2I�.J�.J�S�QT�Uc�Uj�Uj�Ul�Ul�Qm�Qm�Mn�Mn�.n�+� .�!� 3� 3��H�:�H�H�H��� � \�[�N�4I�4I�4K�4K�[�[�[�� � Q��O�1B�O�O�O�P�P� P� ^� ^� ^� ^� ^�"�6�$�*�"9�:�:�� � � +�&,�f�T�-A�&B�&B�G� #�!)� � � � �%)�Z�%8�%>�B�$E�$E�$G�$G� � � �" �" �� �� � �4�4�5E�F�F�� �5�g�m�W�EU�V�V�� �.����rxc����t|t��r|g}t|��t�jj��z }|r-t dt |���d�jj�d����tj���}|j� |��|_t�fd�|jjD����|j _ t|j|j ��|_||_|S)a�Select one or several column(s) in the dataset and the features associated to them. Args: column_names (`Union[str, List[str]]`): Name of the column(s) to keep. new_fingerprint (`str`, *optional*): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Returns: [`Dataset`]: A copy of the dataset object which only consists of selected columns. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.select_columns(['text']) Dataset({ features: ['text'], num_rows: 1066 }) ``` r�r�rVc�6��i|]}|�jj|��Sr�)rsr�)r�rrus �rvr�z*Dataset.select_columns.<locals>.<dictcomp>. s%���*o�*o�*o�S�3�� �0C�C�0H�*o�*o�*orx)r�r�r�rqr/r�r�r�r�selectr+rsr�rTr5)rur/r�r�r�s` rv�select_columnszDataset.select_columns s���< �l�C� (� (� *�(�>�L��l�+�+�c�$�*�2I�.J�.J�J�� � ��.�t�O�4�4�.�.��:�*�.�.�.��� � �-��%�%��� �,�,�\�:�:�� �!)�*o�*o�*o�*o�T[�Ta�Tn�*o�*o�*o�!p�!p�� ��5�g�m�W�EU�V�V�� �.����rxc��|jS)a{Number of rows in the dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.__len__ <bound method Dataset.__len__ of Dataset({ features: ['text', 'label'], num_rows: 1066 })> ``` �rgrzs rv�__len__zDataset.__len__3 s ���}�rxc #�K�|j��|j�|jni}t|jfd|jji|��}t j}t|j |���D]Q}t|j ��D]:}|� |d��}t|d||j|j���}|V��;�RdSt|j ��D]}|�|��V��dS)z�Iterate through the examples. If a formatting is set with [`Dataset.set_format`] rows will be returned with the selected format. Nr�)r�r"r�� formatter�format_columnsr.)rrr2rBr1rsr�r#�'ARROW_READER_BATCH_SIZE_IN_DATASET_ITERrWr r�rg�slicer@r0r3�_getitem)rur-r�r�� pa_subtabler��pa_subtable_ex�formatted_outputs rv�__iter__zDataset.__iter__D s7���� �=� �48�3F�3R�D�/�/�XZ�M�%�d�&7�g�g�$�*�BU�g�Yf�g�g�I��G�J�)�$�)� �K�K�K� +� +� ��{�3�4�4� +� +�A�%0�%6�%6�q�!�%<�%<�N�'3�&��"+�'+�';�+/�+C� (�(�(�$�+�*�*�*�*� +� +� +��4�=�)�)� � ���m�m�������� � rx�drop_last_batchc#�K�|j�~|j�|jni}t|jfd|jji|��}t |j||���D]6}t|t|j ��||j |j ���}|V��7dS|s|j n |j |z|z}td||��D]*}|� t|||z����V��+dS)a�Iterate through the batches of size `batch_size`. If a formatting is set with [`~datasets.Dataset.set_format`] rows will be returned with the selected format. Args: batch_size (:obj:`int`): size of each batch to yield. drop_last_batch (:obj:`bool`, default `False`): Whether a last batch smaller than the batch_size should be dropped Nr�)r�r�r�r)rrr2rBr1rsr�rWr r@r�rgr0r3r�r�) rur�r�r-r�r��formatted_batchrgr�s rvr7z Dataset.itera s6���� �=� �48�3F�3R�D�/�/�XZ�M�%�d�&7�g�g�$�*�BU�g�Yf�g�g�I�)�$�)� �\k�l�l�l� &� &� �".���+�.�/�/�'�#'�#7�'+�'?� #�#�#��&�%�%�%�%� &� &�-<�i�t�}�}���R\�A\�_i�Ai�H��1�h� �3�3� � ���m�m��!�Q��^�,�,������� � rxc�p�dt|jj������d|j�d�S)NzDataset({ features: z, num_rows: z }))r�rsr�r�rgrzs rv�__repr__zDataset.__repr__� s8��s�D���1D�1I�1I�1K�1K�,L�,L�s�s�_c�_l�s�s�s�srxc�T�|j|j|j�|jn|j|jd�S)Nr,)r1r2r0r/r3rzs rvrzDataset.format� s;���%�!�0�,0�,@�,H�t�(�(�d�Nb�"&�":�  � � rxr�r.c+�K�|j}|j}|j}|j} |j|||fi|��dV�|j|||fi|��dS#|j|||fi|��wxYw)aZTo be used in a `with` statement. Set `__getitem__` return format (type and columns). Args: type (`str`, *optional*): Either output type selected in `[None, 'numpy', 'torch', 'tensorflow', 'jax', 'arrow', 'pandas', 'polars']`. `None` means `__getitem__`` returns python objects (default). columns (`List[str]`, *optional*): Columns to format in the output. `None` means `__getitem__` returns all columns (default). output_all_columns (`bool`, defaults to `False`): Keep un-formatted columns as well in the output (as python objects). **format_kwargs (additional keyword arguments): Keywords arguments passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`. N)r1r2r0r3r6) rur�r�r.r-�old_format_type�old_format_kwargs�old_format_columns�old_output_all_columnss rv� formatted_aszDataset.formatted_as� s�����,�+�� �/��!�1��!%�!9�� n� �D�O�D�'�+=� O� O�� O� O� O� �E�E�E� �D�O�O�-?�AW� m� m�[l� m� m� m� m� m��O�D�O�O�-?�AW� m� m�[l� m� m� m� m���s �A�Ac ��|�|�di����t|��}t|fd|jji|��t |t��r|g}t |t��rt|��}|�Wt|��t|j j ��z }|r,tdt|���d|j j �����|�|���}||_||_||_||_t&�d|�dn||�dnt|��|rd nd ��dS) a8Set `__getitem__` return format (type and columns). The data formatting is applied on-the-fly. The format `type` (for example "numpy") is used to format batches when using `__getitem__`. It's also possible to use custom transforms for formatting using [`~datasets.Dataset.set_transform`]. Args: type (`str`, *optional*): Either output type selected in `[None, 'numpy', 'torch', 'tensorflow', 'jax', 'arrow', 'pandas', 'polars']`. `None` means `__getitem__` returns python objects (default). columns (`List[str]`, *optional*): Columns to format in the output. `None` means `__getitem__` returns all columns (default). output_all_columns (`bool`, defaults to `False`): Keep un-formatted columns as well in the output (as python objects). **format_kwargs (additional keyword arguments): Keywords arguments passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`. It is possible to call [`~datasets.Dataset.map`] after calling `set_format`. Since `map` may add new columns, then the list of formatted columns gets updated. In this case, if you apply `map` on a dataset to add a new column, then this column will be formatted as: ``` new formatted columns = (all columns - previously unformatted columns) ``` Example: ```py >>> from datasets import load_dataset >>> from transformers import AutoTokenizer >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True) >>> ds.set_format(type='numpy', columns=['text', 'label']) >>> ds.format {'type': 'numpy', 'format_kwargs': {}, 'columns': ['text', 'label'], 'output_all_columns': False} ``` r-r�NzColumns r�z}Set __getitem__(key) output type to %s for %s columns (when key is int or slice) and %s output other (un-formatted) columns.zpython objects�no�dozdon't)r�r�rArBrsr�r�r��tupler�r�rqr/r�r�r1r2r0r3rr)rur�r�r.r-r�s rvr6zDataset.set_format� s���^ ���]�.�.���C�C�D�D�D�*�$�/�/���d�J�J�T�Z�%8�J�M�J�J�J� �g�s� #� #� ��i�G� �g�u� %� %� $��7�m�m�G� � �!�'�l�l�S���1H�-I�-I�I�O�� � �E�t�O�4�4�E�E�ko�ku�lC�E�E���� � ��l�l�n�n�G� ���+���&���#5�� �� � � V� $� � � �$��O�D�D��W���&� 3�D�D�G�  � � � � rxc�.�|���dS)a$Reset `__getitem__` return format to python objects and all columns. Same as `self.set_format()` Example: ```py >>> from datasets import load_dataset >>> from transformers import AutoTokenizer >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True) >>> ds.set_format(type='numpy', columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) >>> ds.format {'columns': ['input_ids', 'token_type_ids', 'attention_mask', 'label'], 'format_kwargs': {}, 'output_all_columns': False, 'type': 'numpy'} >>> ds.reset_format() >>> ds.format {'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'], 'format_kwargs': {}, 'output_all_columns': False, 'type': None} ``` N�r6rzs rv� reset_formatzDataset.reset_format� s��6 �������rx� transformc�8�|�d|||���dS)a$Set `__getitem__` return format using this transform. The transform is applied on-the-fly on batches when `__getitem__` is called. As [`~datasets.Dataset.set_format`], this can be reset using [`~datasets.Dataset.reset_format`]. Args: transform (`Callable`, *optional*): User-defined formatting transform, replaces the format defined by [`~datasets.Dataset.set_format`]. A formatting function is a callable that takes a batch (as a `dict`) as input and returns a batch. This function is applied right before returning the objects in `__getitem__`. columns (`List[str]`, *optional*): Columns to format in the output. If specified, then the input batch of the transform only contains those columns. output_all_columns (`bool`, defaults to `False`): Keep un-formatted columns as well in the output (as python objects). If set to True, then the other un-formatted columns are kept with the output of the transform. Example: ```py >>> from datasets import load_dataset >>> from transformers import AutoTokenizer >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') >>> def encode(batch): ... return tokenizer(batch['text'], padding=True, truncation=True, return_tensors='pt') >>> ds.set_transform(encode) >>> ds[0] {'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'input_ids': tensor([ 101, 29353, 2135, 15102, 1996, 9428, 20868, 2890, 8663, 6895, 20470, 2571, 3663, 2090, 4603, 3017, 3008, 1998, 2037, 24211, 5637, 1998, 11690, 2336, 1012, 102]), 'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])} ``` r�)r�r.r�Nr�)rur�r�r.s rv� set_transformzDataset.set_transform s'��R ����'�FX�dm��n�n�n�n�nrxc �P�tj|��}|jd|||d�|��|S)a[Set `__getitem__` return format (type and columns). The data formatting is applied on-the-fly. The format `type` (for example "numpy") is used to format batches when using `__getitem__`. It's also possible to use custom transforms for formatting using [`~datasets.Dataset.with_transform`]. Contrary to [`~datasets.Dataset.set_format`], `with_format` returns a new [`Dataset`] object. Args: type (`str`, *optional*): Either output type selected in `[None, 'numpy', 'torch', 'tensorflow', 'jax', 'arrow', 'pandas', 'polars']`. `None` means `__getitem__` returns python objects (default). columns (`List[str]`, *optional*): Columns to format in the output. `None` means `__getitem__` returns all columns (default). output_all_columns (`bool`, defaults to `False`): Keep un-formatted columns as well in the output (as python objects). **format_kwargs (additional keyword arguments): Keywords arguments passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`. Example: ```py >>> from datasets import load_dataset >>> from transformers import AutoTokenizer >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True) >>> ds.format {'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'], 'format_kwargs': {}, 'output_all_columns': False, 'type': None} >>> ds = ds.with_format("torch") >>> ds.format {'columns': ['text', 'label', 'input_ids', 'token_type_ids', 'attention_mask'], 'format_kwargs': {}, 'output_all_columns': False, 'type': 'torch'} >>> ds[0] {'text': 'compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .', 'label': tensor(1), 'input_ids': tensor([ 101, 18027, 16310, 16001, 1103, 9321, 178, 11604, 7235, 6617, 1742, 2165, 2820, 1206, 6588, 22572, 12937, 1811, 2153, 1105, 1147, 12890, 19587, 6463, 1105, 15026, 1482, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), 'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])} ``` )r�r�r.r�)r�rr6)rur�r�r.r-r�s rvrzDataset.with_formatD s=��F�-��%�%�����n��g�J\�n�n�`m�n�n�n��rxc�^�tj|��}|�|||���|S)a�Set `__getitem__` return format using this transform. The transform is applied on-the-fly on batches when `__getitem__` is called. As [`~datasets.Dataset.set_format`], this can be reset using [`~datasets.Dataset.reset_format`]. Contrary to [`~datasets.Dataset.set_transform`], `with_transform` returns a new [`Dataset`] object. Args: transform (`Callable`, `optional`): User-defined formatting transform, replaces the format defined by [`~datasets.Dataset.set_format`]. A formatting function is a callable that takes a batch (as a `dict`) as input and returns a batch. This function is applied right before returning the objects in `__getitem__`. columns (`List[str]`, `optional`): Columns to format in the output. If specified, then the input batch of the transform only contains those columns. output_all_columns (`bool`, defaults to `False`): Keep un-formatted columns as well in the output (as python objects). If set to `True`, then the other un-formatted columns are kept with the output of the transform. Example: ```py >>> from datasets import load_dataset >>> from transformers import AutoTokenizer >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") >>> def encode(example): ... return tokenizer(example["text"], padding=True, truncation=True, return_tensors='pt') >>> ds = ds.with_transform(encode) >>> ds[0] {'attention_mask': tensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]), 'input_ids': tensor([ 101, 18027, 16310, 16001, 1103, 9321, 178, 11604, 7235, 6617, 1742, 2165, 2820, 1206, 6588, 22572, 12937, 1811, 2153, 1105, 1147, 12890, 19587, 6463, 1105, 15026, 1482, 119, 102]), 'token_type_ids': tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])} ``` )r�r�r.)r�rr�)rur�r�r.r�s rv�with_transformzDataset.with_transform� s5��X�-��%�%����� �7�Wi��j�j�j��rxr�c ��t|t��rtd���d|vr|dn|j}d|vr|dn|j}d|vr|dn|j}d|vr|dn|j}|�|ni}t|fd|jj i|��}t|j ||j ���}t|||||� ��} | S) z} Can be used to index columns (by string names) or rows (by integer, slice, or list-like of integer indices) zDdataset index must be int, str, slice or collection of int, not bool� format_typer�r.r-Nr�)r�r�)r�r�rXr1r0r3r2rBrsr�rCrqrrr@) rur�r8r�r�r.r-r�r�r�s rvr�zDataset._getitem� s�� �c�4� � � d��b�c�c� c�/<��/F�/F�f�]�+�+�D�L]� �5E��5O�5O�� 0�1�1�UY�Ui��,@�F�,J�,J�F�'� (� (�PT�Ph� �4C�f�3L�3L���/�/�RV�Re� �)6�)B� � �� �!�+�]�]�� �8K�]�}�]�]� �!�$�*�c�4�=�I�I�I� �'� �� �.�ew� � � �� �rxc��dSrrr��rur�s rv� __getitem__zDataset.__getitem__� ��� �rxc��dSrrr�r�s rvr�zDataset.__getitem__� r�rxc�,�|�|��S)zjCan be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).)r�r�s rvr�zDataset.__getitem__� s���}�}�S�!�!�!rxr�c����|�|���t�tt�������}�fd�t |��D��S)z<Can be used to get a batch using a list of integers indices.c�R���g|]"��fd�����D����#S)c�(��i|]\}}||���Sr�r�)r�rr�r�s �rvr�z3Dataset.__getitems__.<locals>.<listcomp>.<dictcomp>� s#���?�?�?�:�3���e�A�h�?�?�?rxr�)r�r�r�s @�rvr�z(Dataset.__getitems__.<locals>.<listcomp>� s7����[�[�[�A�?�?�?�?������?�?�?�[�[�[rx)r�r��nextr7r�)rur�� n_examplesr�s @rv� __getitems__zDataset.__getitems__� sX���� � ��&�&����t�D��K�K�0�0�1�2�2� �[�[�[�[��z�IZ�IZ�[�[�[�[rxc��d�|jD��}|sdStj�|d��}t�d|����tj|��}g}|D]�}tj�tj�||����}|� d��rL|� d��r7||vrt�d|������|� |����|D]3}t�d|����tj |���4t|��S)aClean up all cache files in the dataset cache directory, excepted the currently used cache file if there is one. Be careful when running this command that no other process is currently using other cache files. Returns: `int`: Number of removed files. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.cleanup_cache_files() 10 ``` c�X�g|]'}tj�|d����(Sr�)�osr2�abspath)r�� cache_files rvr�z/Dataset.cleanup_cache_files.<locals>.<listcomp>� s+��j�j�j�:�r�w���z�*�/E�F�F�j�j�jrxrzListing files in �cache-r�z%Keeping currently used cache file at z Removing )rr�r2�dirnamerro�listdirr�r� startswith�endswithr�rr�)ru�current_cache_files�cache_directory�files�files_to_remove�f_name� full_name� file_paths rv�cleanup_cache_fileszDataset.cleanup_cache_files� sW��$k�j�Y]�Yi�j�j�j��"� ��1��'�/�/�*=�a�*@�A�A��� � �9��9�9�:�:�:��:�o�6�6����� 2� 2�F������� � �_�f�(M�(M�N�N�I�� � ��*�*� 2�v���x�/H�/H� 2�� 3�3�3��K�K� S� � S� S�T�T�T���&�&�y�1�1�1��(� !� !�I� �K�K�/�I�/�/� 0� 0� 0� �I�i� � � � ��?�#�#�#rxc�&�t��r@|jr9d|zdz}tj�|jdd��}n"dt ��zdz}t ��}tj�||��}|S)Nr�r�rr~)r<rr�r2r�r:r;r)rur?r�r��cache_file_paths rv�_get_cache_file_pathzDataset._get_cache_file_path s��� � � � D�D�$4� D�&��4�x�?�O� �g�o�o�d�.>�q�.A�*�.M�N�N�O�O�&�)D�)F�)F�F��Q�O�A�C�C�O��'�,�,���H�H���rx�_{rank:05d}_of_{num_proc:05d}�function� with_indices� with_rank� input_columnsr{r��disable_nullable� fn_kwargs�suffix_templater�c��� � � � ����&�'�(�)�� r� �td������dkrtd���t���dkrh�j�Ht�j�dd���j����j�����|r�� |��S�S|�d�}t|t��r|g}|�Wt|��t�j j��z }|r,tdt|���d�j j�����t|t��r|g}|�Wt|��t�j j��z }|r,td t|���d�j j������ �� n t!��� |�i}��`�t���krMt����t"�d t����d ��d t����d ����||||||||� | � ||d��&��Pt'tj��}t+tjd�&��}d|d<t-�j||���nt1�����&d<�jr� ������� � �&d<� � fd�}���nd}|r|rt���|z|z|z|z}nt���}d}���dkr�d} |�&��}t"�d�&d����n#t6$rYnwxYw|��t9d||pd���5}tjdi�&��D]F\}}}|r)|dz }t"�d|�d|�d ���|}�1|�|���G ddd��n #1swxYwY|� Jd���|j�jkr�|_|Sdt>tdt@tBtDdfdt>tf��fd� �'dtdtBdtf��fd � �(tGtHj%��} | �&d!d"���'��d#vrt"�d$��d"tHj%d!<� ��fd%�tQ���D���)� �&�'�(��)fd&�tQ|��D��}!dg|z}"tQ|��D],} ||!|��|"|<d|!|<�#t6$rY�)wxYwd'�|!D��}!|!�rjt|!��|kr.t"�d(t|!���d)|�d*���tSt|!����5}#| tH_%t"�d+��d,���t9d||pdd-��d.�z���5}tU|#tj|!�/��D]I\}}}|r,|dz }t"�d|�d|�d ���||"|<�4|�|���J ddd��n #1swxYwY|#�+��|#�,��ddd��n #1swxYwY|!D]}$|$d0=�n't"�d�'� d������d|"vrtd1|"�d2����t"�d3��d4���t[|"��}%t]d5�t_|"�)��D����r�|%_n �j|%_|%S)6a� Apply a function to all the examples in the table (individually or in batches) and update the table. If your function returns a column that already exists, then it overwrites it. You can specify whether the function should be batched or not with the `batched` parameter: - If batched is `False`, then the function takes 1 example in and should return 1 example. An example is a dictionary, e.g. `{"text": "Hello there !"}`. - If batched is `True` and `batch_size` is 1, then the function takes a batch of 1 example as input and can return a batch with 1 or more examples. A batch is a dictionary, e.g. a batch of 1 example is `{"text": ["Hello there !"]}`. - If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples. Note that the last batch may have less than `n` examples. A batch is a dictionary, e.g. a batch of `n` examples is `{"text": ["Hello there !"] * n}`. If the function is asynchronous, then `map` will run your function in parallel, with up to one thousand simulatenous calls. It is recommended to use a `asyncio.Semaphore` in your function if you want to set a maximum number of operations that can run at the same time. Args: function (`Callable`): Function with one of the following signatures: - `function(example: Dict[str, Any]) -> Dict[str, Any]` if `batched=False` and `with_indices=False` and `with_rank=False` - `function(example: Dict[str, Any], *extra_args) -> Dict[str, Any]` if `batched=False` and `with_indices=True` and/or `with_rank=True` (one extra arg for each) - `function(batch: Dict[str, List]) -> Dict[str, List]` if `batched=True` and `with_indices=False` and `with_rank=False` - `function(batch: Dict[str, List], *extra_args) -> Dict[str, List]` if `batched=True` and `with_indices=True` and/or `with_rank=True` (one extra arg for each) For advanced usage, the function can also return a `pyarrow.Table`. If the function is asynchronous, then `map` will run your function in parallel. Moreover if your function returns nothing (`None`), then `map` will run your function and return the dataset unchanged. If no function is provided, default to identity function: `lambda x: x`. with_indices (`bool`, defaults to `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx[, rank]): ...`. with_rank (`bool`, defaults to `False`): Provide process rank to `function`. Note that in this case the signature of `function` should be `def function(example[, idx], rank): ...`. input_columns (`Optional[Union[str, List[str]]]`, defaults to `None`): The columns to be passed into `function` as positional arguments. If `None`, a `dict` mapping to all formatted columns is passed as one argument. batched (`bool`, defaults to `False`): Provide batch of examples to `function`. batch_size (`int`, *optional*, defaults to `1000`): Number of examples per batch provided to `function` if `batched=True`. If `batch_size <= 0` or `batch_size == None`, provide the full dataset as a single batch to `function`. drop_last_batch (`bool`, defaults to `False`): Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function. remove_columns (`Optional[Union[str, List[str]]]`, defaults to `None`): Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in `remove_columns`, these columns will be kept. keep_in_memory (`bool`, defaults to `False`): Keep the dataset in memory instead of writing it to a cache file. load_from_cache_file (`Optional[bool]`, defaults to `True` if caching is enabled): If a cache file storing the current computation from `function` can be identified, use it instead of recomputing. cache_file_name (`str`, *optional*, defaults to `None`): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `map`. features (`Optional[datasets.Features]`, defaults to `None`): Use a specific Features to store the cache file instead of the automatically generated one. disable_nullable (`bool`, defaults to `False`): Disallow null values in the table. fn_kwargs (`Dict`, *optional*, defaults to `None`): Keyword arguments to be passed to `function`. num_proc (`int`, *optional*, defaults to `None`): Max number of processes when generating cache. Already cached shards are loaded sequentially. suffix_template (`str`): If `cache_file_name` is specified, then this suffix will be added at the end of the base name of each. Defaults to `"_{rank:05d}_of_{num_proc:05d}"`. For example, if `cache_file_name` is "processed.arrow", then for `rank=1` and `num_proc=4`, the resulting file would be `"processed_00001_of_00004.arrow"` for the default suffix. new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. desc (`str`, *optional*, defaults to `None`): Meaningful description to be displayed alongside with the progress bar while mapping examples. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> def add_prefix(example): ... example["text"] = "Review: " + example["text"] ... return example >>> ds = ds.map(add_prefix) >>> ds[0:3]["text"] ['Review: compassionately explores the seemingly irreconcilable situation between conservative christian parents and their estranged gay and lesbian children .', 'Review: the soundtrack alone is worth the price of admission .', 'Review: rodriguez does a splendid job of racial profiling hollywood style--casting excellent latin actors of all ages--a trend long overdue .'] # process a batch of examples >>> ds = ds.map(lambda example: tokenizer(example["text"]), batched=True) # set number of processors >>> ds = ds.map(add_prefix, num_proc=4) ``` NzEPlease use either `keep_in_memory` or `cache_file_name` but not both.rz num_proc must be an integer > 0.�rorpr?c��|Srrr���xs rv�<lambda>zDataset.map.<locals>.<lambda>� s���rxz Input column r�zColumn to remove znum_proc must be <= z. Reducing num_proc to z for dataset of size rV)rrrr r r{r�r�r�r�r�r�r r r�r��fingerprint_namer�c���|d}|d�otj�|d��rJ�rH|j���}�|_t �|d||j���St�)zILoad a processed shard from cache if it exists, otherwise throw an error.rr�Nro) r�r2�existsror�r�r�r�rprj)� shard_kwargsrror�r�s ��rv�load_processed_shard_from_cachez4Dataset.map.<locals>.load_processed_shard_from_cache� s���� ��)�E��-�.�:��7�>�>�,�/@�"A�B�B�l�G[�l� �:�?�?�,�,�D�$,�D�M�"�,�,�\�:K�-L�SW�_d�_j�,�k�k�k�)� )rxr"z$Loading cached processed dataset at r��Mapr�z!Finished processing shard number rz&Failed to retrieve the result from map�rank�*r|c�j��|s|S|�d��}|d|�||d�}}t|t��r>|��|����z|z}t�d|�d|����n1|��dd���|����z|z}|S)NrV�rr�z Process #z will write at z {rank:05d}z{rank})�rindexr�r�rrro�replace)r�r�sep� base_name� extensionr�r s ��rv�format_cache_file_namez+Dataset.map.<locals>.format_cache_file_name s����'�+�*�*�%�,�,�S�1�1��'6�t��t�'<�o�c�d�d�>S�9� ��d�C�(�(��&/�/�2H�2H�d�]e�2H�2f�2f�&f�ir�&r�O��K�K� R�D� R� R�� R� R�S�S�S�S�"�)�1�1�,��I�I�P�P�VZ�em�P�n�n�o�#�$�$� '�&rxc�Z��|��|����z}t|��|S)Nr)rr?)r�rr�r s ��rv�format_new_fingerprintz+Dataset.map.<locals>.format_new_fingerprint# s6���"1�O�4J�4J�PT�_g�4J�4h�4h�"h��$�_�5�5�5�&�&rx�TOKENIZERS_PARALLELISM�false)��offr'�fr��n�0z:Setting TOKENIZERS_PARALLELISM=false for forked processes.c�B��g|]}���|d������S)T)r�rfr�r��r)r�rr�r�rus ���rvr�zDataset.map.<locals>.<listcomp>6 s>�������� � �h�d�t�\j� �k�k���rxc ���g|]G}i���|��|��|td��d|�D������|��d����HS)c3�4K�|]}t|��V��dSrr�r�)r��ss rvr]z)Dataset.map.<locals>.<listcomp>.<genexpr>@ s(����!@�!@�Q�#�a�&�&�!@�!@�!@�!@�!@�!@rxN)rr�r�offsetr�)�sum)r�rr��dataset_kwargsr#r%r��shardss ������rvr�zDataset.map.<locals>.<listcomp>: s���� � � ���$��#�D�\�'=�'=�o�t�'T�'T� �!�!@�!@�&��$��-�!@�!@�!@�@�@�'=�'=�o�t�'T�'T� ��� � � rxc��g|]}|�|��Srrr�)r�r8s rvr�zDataset.map.<locals>.<listcomp>N s��X�X�X��V�EW�f�EW�EW�EWrxz Reprocessing r�z9 shards because some of them were missing from the cache.z Spawning z processesz (num_proc=�)rrz1Failed to retrieve results from map: result list zG still contains None - at least one worker failed to return its resultszConcatenating z shardsc3�<K�|]\}}|j|jkV��dSrr�r5)r��transformed_shardrs rvr]zDataset.map.<locals>.<genexpr>v sF������,�%�u�"�.�%�2D�D������rx)0r�r�rrr�r r�ror�rpr�r�r�r�rqr/r�r<rrr8� _map_singler7r>r5r?rrrjrrr�rrr�r^rr��environr��lowerr�r rbr;r�_concatenate_map_style_datasetsr��zip)*rurrr r r{r�r�r�r�r�r�r�r�r r r�r r�r�r�r��kwargs_for_fingerprintrr�� pbar_totalr%�transformed_datasetr&rr+r,�prev_envr'�transformed_shardsr*r8�resultr5r#r%r6s*` ``` ` ``` @@@@rvrz Dataset.map s� ��������������x � f�o�9��d�e�e� e� � �H��M�M��?�@�@� @� �t�9�9��>�>��}�(���I�O�O�A�q�)�)�����)�)��*� /� ���� � ��*�*�>�:�:�:�� � � �"�{�H� �m�S� )� )� ,�*�O�M� � $�!�-�0�0�3�t�z�7N�3O�3O�O�O�� � �J�D��$9�$9�J�J�pt�pz�qH�J�J���� �n�c� *� *� .�,�-�N� � %�!�.�1�1�C�� �8O�4P�4P�P�O�� � �N��_�(=�(=�N�N�tx�t~�uL�N�N����8L�7W�3�3�]o�]q�]q�� � ��I� � �H�s�4�y�y�$8�$8��4�y�y�H� �N�N�t�s�4�y�y�t�t��t�t�hk�lp�hq�hq�t�t�t� � � � � �(�"�*��$�.�,�,�!2� � 0�"� � ��" � "�9��9L�M�M�I�%B�7�CV�XZ�\j�%k�%k� "�9J� "�#5� 6�0��1B�I�Oe�f�f�O�O� �� 1� 1� 1�,;��(�)� � � M��&�"&�";�";�O�"L�"L��,;��(�)� *� *� *� *� *� *�"*�!5�X�X�1� � � #�� #��T���j�0�J�>��K�j�X�J�J��T���J�� � � �x�1�}�}�"&� � �&E�&E�n�&U�&U�#�� � �f�>�Rc�Cd�f�f�g�g�g�g��*� � � ��� ����"�*��$�$������� 1��/6�/B�/T�/T�^�/T�/T�1�1�+��d�G��1�'�1�,�K�"�L�L�)d�T�)d�)d�Wa�)d�)d�)d�e�e�e�29�/�/� �K�K��0�0�0�0� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1���� 1� 1� 1� 1�'�2�2�4\�2�2�2�"�/�4�3D�D�D�3B�#�0�&� &� '�!)�#�� '��C����-�.� '��#�� '� '� '� '� '� '� '�& '�� '�3� '�3� '� '� '� '� '� '� '�  �� �+�+�H��|�|�4�g�>�>�D�D�F�F�O������[�\�\�\�3:�B�J�/� 0�������!�(�O�O����F� � � � � � � � � �"�*�-�-� � � �N�#'��*�!4� ��j�)�)� � ���/N�/N�~�^b�Oc�/d�/d�&�t�,�+/�N�4�(�(��.�����D�����Y�X�>�X�X�X�N�� s��~�&�&��3�3��K�K�D��N�(;�(;�D�D�j�D�D�D�����#�n�-�-�.�.� �$�!)�B�J��K�K� @�H� @� @� @�A�A�A� �(�(�"�m�e�/H�X�/H�/H�/H�H���� 5��3E� �'�"5�~�4�4�4�5�5�/�D�$�� $�5� +�q� 0� � &� � �-h�QU�-h�-h�[e�-h�-h�-h� i� i� i�;B� 2�4� 8� 8� $� � �G� 4� 4� 4� 4�5� 5� 5� 5� 5� 5� 5� 5� 5� 5� 5� 5���� 5� 5� 5� 5��J�J�L�L�L��I�I�K�K�K�% � � � � � � � � � � ���� � � � �(-�(�(�F��w���(�� � �q�CY�CY�Zi�kn�Co�Co�q�q�r�r�r��)�)�)� �D�HZ�D�D�D���� �K�K�:��:�:�:� ;� ;� ;�4�5G�H�H�F����03�4F��0O�0O������ 8�'6��#�#�&*�&7��#��Mso�3.L"�" L/�.L/�AN.�.N2�5N2�T� T,�+T,�AZ�!A'Y� Z�Y �Z�Y �+Z�Z�Zrrr3c #����������� � � � � �����%�&�'�(�)�*�+�,�-�.�/K���i��r���dkr�j�d�.�j���}�s �j�d|d<t �jfd�ji|���(�o$t ������dk�'�fd��/d����(���fd� �+�'����.�/fd��,d��+�,fd � �%d��+�,fd � �&� � � � ��� fd �}g�-tj ���r8 tj ���)n%#t$rtj ���)YnwxYwd�)�%�&��)��-fd �}d}d \}}}tjrdt jvrddl}t'j��5} ��d��}�st-|��}ng|st ���nt ����z�z�*t/��*fd�t1d�*���D��|��|�����}��s�t5j��}||��D�]`\}}�.�r|dkr#|��\}}}|�|��t9|t:j��r|�|��n�t9|t@j!��r3|�t:j�"|����nltjrKdt jvr=t9||j!��r(|�|�#����n|�$|��|dz }t5j��|tj%zkrt5j��}�d|fV�d}��b�n�t5j��}||��D�]w\}} t |��}!�.�r|r/|ddkr#|��\}}}|�|��t9| t:j��r|�&| ��n�t9| t@j!��r3|�&t:j�"| ����nltjrKdt jvr=t9| |j!��r(|�&| �#����n|�'| ��||!z }t5j��|tj%zkrt5j��}�d|fV�d}��y�.r|�|�(���n*#tRtTf$�r�d|fV��.ri|�|�(��|�Q|�+��tXj-�.|j/��rtYj0|j/���)r�tb�2dt �-���d����-D]}"|"�3d���� �)�4tj5�-���n6#tj6tnf$rtb�2d��YnwxYw�wxYwddd��n #1swxYwY�d|fV��.rq|�o|�+��tqj9|j/� ��tYj:d��}#tYj:|#��tYj;� d|#z���.r��j<���}$|j=|$_|�)�dt|�?� |$�j@���fV�dS�dt|�A|�B��|$�j@���fV�dS�d�fV�dS)aApply a function to all the elements in the table (individually or in batches) and update the table (if function does update examples). Args: shard (`datasets.Dataset`): Dataset to map the transform on. function (`Callable`): with one of the following signature: - `function(example: Dict[str, Any]) -> Dict[str, Any]` if `batched=False` and `with_indices=False` and `with_rank=False` - `function(example: Dict[str, Any], *extra_args) -> Dict[str, Any]` if `batched=False` and `with_indices=True` and/or `with_rank=True` (one extra arg for each) - `function(batch: Dict[str, List]) -> Dict[str, List]` if `batched=True` and `with_indices=False` and `with_rank=False` - `function(batch: Dict[str, List], *extra_args) -> Dict[str, List]` if `batched=True` and `with_indices=True` and/or `with_rank=True` (one extra arg for each) For advanced usage, the function can also return a `pyarrow.Table`. Moreover if your function returns nothing (`None`), then `map` will run your function and return the dataset unchanged. If no function is provided, default to identity function: lambda x: x with_indices (`bool`, defaults to `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx[, rank]): ...`. with_rank (`bool`, default `False`): Provide process rank to `function`. Note that in this case the signature of `function` should be `def function(example[, idx], rank): ...`. input_columns (`Optional[List[str]]`, defaults to `None`): The columns to be passed into `function` as positional arguments. If `None`, a dict mapping to all formatted columns is passed as one argument. batched (`bool`, defaults to `False`): Provide batch of examples to `function` batch_size (`int`, optional, defaults to `1000`): Number of examples per batch provided to `function` if `batched=True` `batch_size <= 0` or `batch_size == None`: Provide the full dataset as a single batch to `function` drop_last_batch (`bool`, default: `False`): Whether a last batch smaller than the batch_size should be dropped instead of being processed by the function. remove_columns (`Optional[List[str]]`, defaults to `None`): Remove a selection of columns while doing the mapping. Columns will be removed before updating the examples with the output of `function`, i.e. if `function` is adding columns with names in `remove_columns`, these columns will be kept. keep_in_memory (`bool`, defaults to `False`): Keep the dataset in memory instead of writing it to a cache file. cache_file_name (`str`, optional, defaults to `None`): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. writer_batch_size (`int`, default `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `.map()`. features (`Optional[datasets.Features]`, defaults to `None`): Use a specific Features to store the cache file instead of the automatically generated one. disable_nullable (`bool`, defaults to `False`): Disallow null values in the table. fn_kwargs (`Dict`, optional, defaults to `None`): Keyword arguments to be passed to `function` new_fingerprint (`str`, optional, defaults to `None`): the new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments rank: (`int`, optional, defaults to `None`): If specified, this is the process rank when doing multiprocessing offset: (`int`, defaults to 0): If specified, this is an offset applied to the indices passed to `function` if `with_indices=True`. NrT�lazyr�c����ttjtjf}t jrdtjvrddl }||jfz }|�0t||��s tdt|���d������rCt|t���r/ttjtjf�t jr#dtjvrddl }�|j|jfz �t jrdtjvrddl}�|jfz �t jrdtjvrddl}�|jfz �t jrdtjvrddlm}�|jfz �t3�fd �|���D����}|d ur6td d �|���D���d ��d����dSdSdS)z$Validate output of the map function.�polarsrNzYProvided `function` which is applied to all elements of table returns a variable of type z�. Make sure provided `function` returns a variable of type `dict` (or a pyarrow table) to update the dataset or `None` if you are only interested in side effects.r��torch�jaxc3�8�K�|]}t|���V��dSrr)r�)r�r��allowed_batch_return_typess �rvr]zHDataset._map_single.<locals>.validate_function_output.<locals>.<genexpr>� s?�����0�0�FK�J�u�&@�A�A�0�0�0�0�0�0rxFzXProvided `function` which is applied to all elements of table returns a `dict` of types c�,�g|]}t|����Sr��r�)r�rs rvr�zIDataset._map_single.<locals>.validate_function_output.<locals>.<listcomp>� s?��t`�t`�t`�AB�tx�yz�t{�t{�t`�t`�t`rxz[. When using `batched=True`, make sure provided `function` returns a `dict` of types like `z`.)rrWrP�pd� DataFramer#�POLARS_AVAILABLEr��modulesrJr�rXr�r�r�r��Seriesr�r�r��TORCH_AVAILABLErK� JAX_AVAILABLE� jax.numpyr�rar ) �processed_inputs�allowed_processed_inputs_types�plr�rK�jnp�all_dict_values_are_listsrNr{s @�rv�validate_function_outputz5Dataset._map_single.<locals>.validate_function_output� s�����.5�r�x���-N� *��&� B�8�s�{�+B�+B�#�#�#�#�.�2�<�/�A�.��+�J�?O�Qo�4p�4p�+��k�pt�vF�qG�qG�k�k�k����� �:�&6��@�@� �.2�B�J�� �-J�*��*�L�x�3�;�/F�/F�'�'�'�'�.�2�9�b�l�2K�K�.��&�?�<�3�;�+F�+F�+�+�+�+�.�2�9�,�>�.��)�B�g���.D�.D� �L�L�L�.�5�<�/�A�.��'�A�E�S�[�,@�,@�+�+�+�+�+�+�.�3�;�.�@�.�,/�0�0�0�0�O_�Of�Of�Oh�Oh�0�0�0�-�-�)�-��5�5�#�[�t`�t`�FV�F]�F]�F_�F_�t`�t`�t`�[�[�}W�[�[�[����- � � � �*6�5rxc�����t|�sdnt|j��� � ����� ��gn �fd�� D��}�dkr|}n(t|t��r�fd�|D��n|�z}d}� r||fz }� r|� fz }�||�fS)�8Utility to apply the function on a selection of columns.r)r�r�Nc� ��g|] }�|�� Sr�r�)r�r�inputss �rvr�z?Dataset._map_single.<locals>.prepare_inputs.<locals>.<listcomp> s���=c�=c�=c�c�f�S�k�=c�=c�=crxc���g|]}|�z��Sr�r�)r�r�r3s �rvr�z?Dataset._map_single.<locals>.prepare_inputs.<locals>.<listcomp> s���$A�$A�$A�A�Q��Z�$A�$A�$Arxr�)r@r�rgr�r�)� pa_inputsr�r3�fn_args�effective_indices�additional_argsrbr{r r �input_formatterrrr s ` @�������rv�prepare_inputsz+Dataset._map_single.<locals>.prepare_inputs� s������!�� �?���e�I�,>�&?�&?�,�)� ���F� #0�"7�v�h�h�=c�=c�=c�=c�Ub�=c�=c�=c�G���{�{�$+�!�!�EO�PW�Y]�E^�E^�$t�$A�$A�$A�$A��$A�$A�$A�$A�dk�nt�dt�!� �O�� 8��$5�#7�7��� +��D�7�*���7�O�Y�>� >rxc �P������dux� sdSt�t��r(�fd��j���D���d}nd}� ���� js� r5t t �j��������}n>t�t��r'��fd��j���D��}n�}� �9� D]6}||vr|� |��|r|�vr�� |���7�rft���}t�tt�� ��������}||krtd��d�t�t��rt�t��ri|���S�S)Nc�.��i|]\}}|�jv�||��Sr���keys_to_format)r�r��vrYs �rvr�z@Dataset._map_single.<locals>.prepare_outputs.<locals>.<dictcomp> s4���$�$�$�!�Q��a�O_�On�Fn�Fn�A�q�Fn�Fn�FnrxTFc�>��i|]\}}||�jvr|n�|��Sr�rl)r�r�rnrbrds ��rvr�z@Dataset._map_single.<locals>.prepare_outputs.<locals>.<dictcomp> sC���#�#�#�SW�ST�VW�A�Q�f�&;�;�;����1��#�#�#rxz�Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it.)r�rDr r�r1r�r@r/� itercolumnsr�r�r�r7r�r)r)rdrbrY�returned_lazy_dict�inputs_to_merger��input_num_examples�processed_inputs_num_examples�check_same_num_examplesr r�r� update_datar^s``` ������rv�prepare_outputsz,Dataset._map_single.<locals>.prepare_outputs s#������$4�D�$@�A�K� ��t��*�H�5�5� +�$�$�$�$�%5�%:�%@�%@�%B�%B�$�$�$� �&*�"�"�%*�"� $� $�%5� 6� 6� 6��!� )�]� )�"&�s�9�+A�9�CX�CX�CZ�CZ�'[�'[�"\�"\����F�H�-�-� )�#�#�#�#�#�[a�[f�[l�[l�[n�[n�#�#�#���#)���)�,�5�5�F���0�0�'�+�+�F�3�3�3�)�5�f�8H�.H�.H�(�,�,�V�4�4�4��&� �%(��^�^�"�03�4D�T�$�O_�Od�Od�Of�Of�Jg�Jg�Eh�Eh�4i�0j�0j�-�%�)F�F�F�>�f���� ��&�'�*�*� (�z�:J�G�/T�/T� (�?�/�>�-=�>�>�'�'rxc�^��� |||���\}}}}�g|�|�Ri|��}� |||��S)r`�r3r�� rdr�r3rbrergr rYrrirws ���rv�apply_functionz+Dataset._map_single.<locals>.apply_function: s\���:H�.��T[�dj�:k�:k�:k� 7�F�G�_�i�'�x�P��P�?�P�P�P�i�P�P� �"�?�9�f�6F�G�G� Grxc��n�K�� |||���\}}}}�g|�|�Ri|���d{V��}� |||��S)zLUtility to apply the function on a selection of columns. Same code but asyncryNr�rzs ���rv�async_apply_functionz1Dataset._map_single.<locals>.async_apply_function@ sr�����:H�.��T[�dj�:k�:k�:k� 7�F�G�_�i�%-�X�%V�w�%V��%V�%V�%V�I�%V�%V�V�V�V�V�V�V� �"�?�9�f�6F�G�G� Grxc����}|� � j}d}nd}� s��+tj��}d}t||� |� ����}n�d}t�d�����t j����}t j |d���tj d|d���}t||j � |� ����}|||fS) NTF)r��streamr��update_featuresr?r zCaching processed dataset at r��wb��dir�delete)r�r2r�r�r?r ) r�rW�BufferOutputStreamr%rror�r2r�r�tempfile�NamedTemporaryFile�name) �writer_featuresr�� buf_writer�tmp_filer=r�r�r r�r�r�rr�s �������rv�init_buffer_and_writerz3Dataset._map_single.<locals>.init_buffer_and_writerF s���&�O��&�"'�.��"&���"'��� ��!8��2�4�4� ���$�,�%�&7�$3� /�%5� �����"� �� � �M�O�M�M�N�N�N��G�O�O�O�<�<� �� �I��5�5�5�5�#�6�t��SX�Y�Y�Y��$�,�!��&7�$3� /�%5� �����v�x�/� /rxc 3�<�K�tj����r�g}|D�]�\}}|�|��� �� ��||� �������t � ��t jkr�� �tj � tj �����\}}� rrt |��t jkrU� �tj � tj �����\}}� rt |��t jk�U� rt� d� ��rZ|� d��� � d��� ��fV�� r� d� ���Z���� rS|d� �� d��fV�|� d��� � d��f� �QdSdS|D]\}}|�||� ���fV��dS)Nry)� return_whenr)�inspect�iscoroutinefunctionr�� create_taskr�r#�/MAX_NUM_RUNNING_ASYNC_MAP_FUNCTIONS_IN_PARALLEL�run_until_complete�asyncio�wait�FIRST_COMPLETEDr+r�rF) �shard_iterabler�r��exampler+�pendingr{r}r�loopr3�taskss ������rv� iter_outputsz)Dataset._map_single.<locals>.iter_outputsr sJ������*�8�4�4� G�=?��"0�D�D�J�A�w��N�N�1�%�%�%��L�L��!1�!1�2F�2F�w�PQ�Z`�2a�2a�2a�!b�!b�c�c�c��5�z�z�V�%[�[�[�(,�(?�(?�#�L��G�<S�T�T�T�)�)� ��g�$���G� � ��8n�(n�(n�,0�,C�,C� '� �U��@W� X� X� X�-�-�M�D�'�$���G� � ��8n�(n�(n�  �D�E�!�H�M�M�O�O�D�%�k�k�!�n�n�e�i�i��l�l�.A�.A�.C�.C�C�C�C�C� �D�E�!�H�M�M�O�O�D���1�!�!�*�d�&=�&=�e�A�h�&G�&G�G�G�G�G��K�K��N�N�E�I�I�a�L�L�0�0��1�1�1�1�1�#1�G�G�J�A�w��^�^�G�Q�v�F�F�F�F�F�F�F�F�G�Grx�NNNrJr4c 3�t�K�|]2}tt|t|�z�������V��3dSrr)r�r�r�)r�r�r�rgs ��rvr]z&Dataset._map_single.<locals>.<genexpr>� sC�����o�o�1��e�A�s�1�z�>�8�'D�'D�E�E�F�F�o�o�o�o�o�orx)r�r"Fz Canceling z async tasks.�KeyboardInterrupt)�msgzTasks canceled.�ro�r)Crgr2r�r1rBr�r�rr�r�r��get_running_loopr��new_event_loopr#rSr�rTrJ� contextlib� ExitStackr� enumerater@r�r7r6� enter_contextr�rWrP� write_rowrQrRr�r��writer9r8� write_batchr:� Exceptionr�r;r�r2rr�rrr�cancelr��gather�CancelledErrorr��shutil�move�umask�chmodro� _featuresr�r�rpr��getvalue)0rrrr r r{r�r�r�r�r�r�r�r r r�rr3r-r�r�r<r�r=r�r[�stack�arrow_formatted_shardr�r>r�r�r��num_examples_in_batch�taskr�ror{r}rurhr�rgrirwr�rvr^s0``````` `````````` @@@@@@@@@@@rvr<zDataset._map_single s� ��������������������������������| � ��I� � (� �*�j�A�o�o���J� � ��,�1�1�3�3� �� )��!3�!;�$(�M�&� !�'� � � � ��^� �� � �� #*�"K�c�%�2D�2D�2F�2F�.G�.G�!�.K��# �# �# �# �# �J ?� ?� ?� ?� ?� ?� ?� ?� ?� ?� ?� ?�() (�) (�) (�) (�) (�) (�) (�) (�) (�) (�V H� H� H� H� H� H� H� H�  H� H� H� H� H� H� H� H� ! 0�! 0�! 0�! 0�! 0�! 0�! 0�! 0�! 0�! 0�! 0�F%'�� � &�x� 0� 0� � 0��/�1�1����� 0� 0� 0��-�/�/���� 0�����D� G� G� G� G� G� G� G� G� G� G�4()�$�'7�$� �F�H� � "� �x�3�;�'>�'>� � � � �� !� #� #�O �u�N �(-�(9�(9�'�(B�(B�%���%.�/D�%E�%E�N�N�1@�k�s�5�z�z�z�c�%�j�j�T^�F^�ak�Fk�H�%(�o�o�o�o�o�PU�VW�Ya�cm�Pn�Pn�o�o�o�-�2�2�:��2�_�_�&�&�N��.=� �I�K�K�E�&2�l�>�&B�&B�=�=� ��7�&�6� �A�v�v�?U�?U�?W�?W� <� �F�H� %� 3� 3�F� ;� ;� ;�)�'�2�8�<�<� 6� &� 0� 0�� 9� 9� 9� 9�!+�G�R�\�!B�!B� 6� &� 0� 0���1E�1E�g�1N�1N� O� O� O� O� &� 7�6�$,�� �$;�$;�$.�w�� �$E�$E�%<�!'� 0� 0��1A�1A�1C�1C� D� D� D� D� &� � �W� 5� 5� 5�4��9�4��9�;�;���1R�)R�R�R�$(�I�K�K�E�"&��/K�"K�K�K�K�;<�8��+=�.!�I�K�K�E�$0�L��$@�$@�=�=���5�03�A���-�&� :� �<�Q�q�T�Q�Y�Y�?U�?U�?W�?W� <� �F�H� %� 3� 3�F� ;� ;� ;�)�%���:�:� :� &� 2� 2�5� 9� 9� 9� 9�!+�E�2�<�!@�!@�:� &� 2� 2�2�8�3G�3G��3N�3N� O� O� O� O� &� 7�:�<D�� �<S�<S�Xb�ch�jl�jv�Xw�Xw�<S� &� 2� 2�5�>�>�3C�3C� D� D� D� D� &� 2� 2�5� 9� 9� 9�4�8M�M�4��9�;�;���1R�)R�R�R�$(�I�K�K�E�"&��/K�"K�K�K�K�;<�8���&�6�#5��O�O�%�%�%����0�1� � � ��E�#?�?�?�?�?��5��)����)�)�)��+� ���(�(�(��7�>�>�(�-�8�8�5��I�h�m�4�4�4��8��L�L�!G�c�%�j�j�!G�!G�!G�H�H�H� %�=�=��� � �(;� �<�<�<�<�8��/�/����0F�G�G�G�G��#�2�J�?�8�8�8�� � �%6�7�7�7�7�7�8�����# ����}O �O �O �O �O �O �O �O �O �O �O ����O �O �O �O �b�E�7�7�7�7�7� � 6�8�/� �N�N� � � � �K�� �� 7� 7� 7��H�U�O�O�E� �H�U�O�O�O� �H�_�e�u�f�n� 5� 5� 5� � $��:�?�?�$�$�D�"�,�D�M��!��D�'�"3�"3�O�$�V[�Va�"3�"b�"b�b�b�b�b�b�b��D�'�"5�"5�j�6I�6I�6K�6K�RV�^c�^i�"5�"j�"j�j�j�j�j�j�j���e�#� #� #� #� #� #sb�*C>�>D�D�&Y"�(OT,�*Y"�,C Y�9!X�Y�0Y� Y� Y�Y�Y"�"Y&�)Y&c �@�d�}|�|d||||d���S)a� Group samples from the dataset into batches. Args: batch_size (`int`): The number of samples in each batch. drop_last_batch (`bool`, defaults to `False`): Whether to drop the last incomplete batch. num_proc (`int`, *optional*, defaults to `None`): Max number of processes when generating cache. Already cached shards are loaded sequentially. new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Returns: [`Dataset`]: A new Dataset where each item is a batch of multiple samples from the original dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="train") >>> batched_ds = ds.batch(batch_size=4) >>> batched_ds[0] {'text': ['compassionately explores the seemingly irreconcilable situation...', ...], # 4 items 'label': [1, 1, 1, 1]} ``` c�>�d�|���D��S)Nc��i|] \}}||g�� Sr�r��r�r�rns rvr�z3Dataset.batch.<locals>.batch_fn.<locals>.<dictcomp> s ��7�7�7�t�q�!�A��s�7�7�7rxr�)r�s rv�batch_fnzDataset.batch.<locals>.batch_fns��7�7�w�}�}���7�7�7� 7rxTzBatching examples)r{r�r�r�r�r��r)rur�r�r�r�r�s rvr�z Dataset.batch� sD��L 8� 8� 8��x�x� ��!�+��+�$�� � � rx)r�r�r�z2.0.1)r�� ignore_kwargsr�c�"�t|�����dkrtd���|�d�}t|��dkr|Stj|��r|sd}|�t tj|��rtnt||||||j ��ddtdtd��i��d||j ||| | | | | |||pd � ��}tj|��}|j|_ ||_|S) azApply a filter function to all the elements in the table in batches and update the table so that the dataset only includes examples according to the filter function. If the function is asynchronous, then `filter` will run your function in parallel, with up to one thousand simulatenous calls (configurable). It is recommended to use a `asyncio.Semaphore` in your function if you want to set a maximum number of operations that can run at the same time. Args: function (`Callable`): Callable with one of the following signatures: - `function(example: Dict[str, Any]) -> bool` if `batched=False` and `with_indices=False` and `with_rank=False` - `function(example: Dict[str, Any], *extra_args) -> bool` if `batched=False` and `with_indices=True` and/or `with_rank=True` (one extra arg for each) - `function(batch: Dict[str, List]) -> List[bool]` if `batched=True` and `with_indices=False` and `with_rank=False` - `function(batch: Dict[str, List], *extra_args) -> List[bool]` if `batched=True` and `with_indices=True` and/or `with_rank=True` (one extra arg for each) If the function is asynchronous, then `filter` will run your function in parallel. If no function is provided, defaults to an always `True` function: `lambda x: True`. with_indices (`bool`, defaults to `False`): Provide example indices to `function`. Note that in this case the signature of `function` should be `def function(example, idx[, rank]): ...`. with_rank (`bool`, defaults to `False`): Provide process rank to `function`. Note that in this case the signature of `function` should be `def function(example[, idx], rank): ...`. input_columns (`str` or `List[str]`, *optional*): The columns to be passed into `function` as positional arguments. If `None`, a `dict` mapping to all formatted columns is passed as one argument. batched (`bool`, defaults to `False`): Provide batch of examples to `function`. batch_size (`int`, *optional*, defaults to `1000`): Number of examples per batch provided to `function` if `batched = True`. If `batched = False`, one example per batch is passed to `function`. If `batch_size <= 0` or `batch_size == None`, provide the full dataset as a single batch to `function`. keep_in_memory (`bool`, defaults to `False`): Keep the dataset in memory instead of writing it to a cache file. load_from_cache_file (`Optional[bool]`, defaults to `True` if caching is enabled): If a cache file storing the current computation from `function` can be identified, use it instead of recomputing. cache_file_name (`str`, *optional*): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `map`. fn_kwargs (`dict`, *optional*): Keyword arguments to be passed to `function`. num_proc (`int`, *optional*): Number of processes for multiprocessing. By default it doesn't use multiprocessing. suffix_template (`str`): If `cache_file_name` is specified, then this suffix will be added at the end of the base name of each. For example, if `cache_file_name` is `"processed.arrow"`, then for `rank = 1` and `num_proc = 4`, the resulting file would be `"processed_00001_of_00004.arrow"` for the default suffix (default `_{rank:05d}_of_{num_proc:05d}`). new_fingerprint (`str`, *optional*): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. desc (`str`, *optional*, defaults to `None`): Meaningful description to be displayed alongside with the progress bar while filtering examples. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.filter(lambda x: x["label"] == 1) Dataset({ features: ['text', 'label'], num_rows: 533 }) ``` rz�Using `.filter` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it.`Nc��dS)NTr�rs rvrz Dataset.filter.<locals>.<lambda>�s���rxr"Tr��uint64�Filter)rrr r�r{r�r�r�r�r�r�r r�r r�r r�)r�rr)r�r�rr �$async_get_indices_from_mask_function�get_indices_from_mask_functionrrr+r-r/r�rr r5)rurrr r r{r�r�r�r�r�r r�r r�r�r�� new_datasets rv�filterzDataset.filter,sK��z �t� � �"�"� #� #�a� '� '�6�`��� � � �%�~�H� �t�9�9��>�>��K� � &�x� 0� 0� �� ��J��(�(���.�x�8�8�4�4�4�3������� � � ����y�%��/�/�:�;�;��!��,�)�!5�+�/���+�+�'��!��7� � ��:�m�D�)�)� �&�|� ��#2� � ��rx)r�r�c �>�|�d||||||d|�� � S)aBCreate and cache a new Dataset by flattening the indices mapping. Args: keep_in_memory (`bool`, defaults to `False`): Keep the dataset in memory instead of writing it to a cache file. cache_file_name (`str`, *optional*, default `None`): Provide the name of a path for the cache file. It is used to store the results of the computation instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `map`. features (`Optional[datasets.Features]`, defaults to `None`): Use a specific [`Features`] to store the cache file instead of the automatically generated one. disable_nullable (`bool`, defaults to `False`): Allow null values in the table. num_proc (`int`, optional, default `None`): Max number of processes when generating cache. Already cached shards are loaded sequentially new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments TzFlattening the indices) r{r�r�r�r�r r�r�r�r�)rur�r�r�r�r r�r�s rvrozDataset.flatten_indices�s;��H�x�x��)�+�/��-�+�)���  �  � rx�indices_cache_file_namec� �|�|�td���|�td���|�tj|��}ntj|��}t |j|j���|j ||���S)z�Return a new Dataset obtained by adding indices (provided in indices_cache_file_name or in a buffer) to the current Dataset. NzKAt least one of indices_cache_file_name or indices_buffer must be provided.z9please specify a fingerprint for the dataset with indices�rorprmr?) r�rOr�rNr�r�rqror�rp)rur�r�r?rms rv�_new_dataset_with_indicesz!Dataset._new_dataset_with_indices�s��� #� *�~�/E��j�k�k� k� � ��X�Y�Y� Y� "� .�-�7�8O�P�P�M�M�)�5�n�E�E�M�� �J�����!�!��*�'�#�  � � � rxr�c��|r|�td���t|�����dkrtd���t|��dkr|St |t jt jf��r1|���� tj ��}t |t��rt|��}t |t��rIt|��r9|jdkr.|j|j|jz }}|�|||���Sn� t't)|����}n(#t*$r|�dd|���cYSwxYw|dkrft-j|���}t1d�t3||��D����r*t'|��|z }|�|||���S|�|||||���S) a�Create a new dataset with rows selected following the list/array of indices. Args: indices (`range`, `list`, `iterable`, `ndarray` or `Series`): Range, list or 1D-array of integer indices for indexing. If the indices correspond to a contiguous range, the Arrow table is simply sliced. However passing a list of indices that are not contiguous creates indices mapping, which is much less efficient, but still faster than recreating an Arrow table made of the requested rows. keep_in_memory (`bool`, defaults to `False`): Keep the indices mapping in memory instead of writing it to a cache file. indices_cache_file_name (`str`, *optional*, defaults to `None`): Provide the name of a path for the cache file. It is used to store the indices mapping instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `map`. new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds.select(range(4)) Dataset({ features: ['text', 'label'], num_rows: 4 }) ``` N�MPlease use either `keep_in_memory` or `indices_cache_file_name` but not both.r��Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it.�r�)�startc3�(K�|] \}}||kV��dSrrr�)r�r��js rvr]z!Dataset.select.<locals>.<genexpr>Vs*����K�K�$�!�Q�q�A�v�K�K�K�K�K�Krx)r�r�r�r�)r�r�rr)r�rWr�r��to_numpy�astyper�r�rr�r�rEr��stop�_select_contiguousr�r7� StopIteration� itertoolsr\rar@�_select_with_indices_mapping) rur�r�r�r�r�r��length�counter_from_starts rvr�zDataset.select s0��V � n�5�A��l�m�m� m� �t� � �"�"� #� #�a� '� '�6�_��� � �t�9�9��>�>��K� �g���"�/�:� ;� ;� :��&�&�(�(�/�/���9�9�G� �g�x� (� (� $��7�m�m�G� �g�u� %� %� c�#�G�,�,� _���!�1C�1C� '� �w�|�g�m�/K�v���.�.�u�f�o�.�^�^�^�� V��T�'�]�]�+�+���� � V� V� V��.�.�q�!�_�.�U�U�U�U�U� V������z�z�%.�_�5�%A�%A�%A�"��K�K�#�g�7I�*J�*J�K�K�K�K�K�c�!�"4�5�5��=�F��2�2�5�&�Ra�2�b�b�b��0�0� �)�$;�/�+� 1� � � s�7E�"E9�8E9r�r�c �Z�t|�����dkrtd���t|��dkr|St|t|����t||zdz t|����|j�|dkrHt |j�||��|j� ��|j |���St |j|j� ��|j |j�||��|���S)a/Create a new dataset with rows from a contiguous slice of data. The slice is defined by that start index and its length. Args: start (`int`): start index. length (`int`): length of the slice to select. new_fingerprint (`str`, optional, default `None`): the new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds._select_contiguous(0, 4) Dataset({ features: ['text', 'label'], num_rows: 4 }) ``` rr�r"Nrr�) r�rr)rhrrr�r r�ror�rp)rur�r�r�s rvr�zDataset._select_contiguouscs!��: �t� � �"�"� #� #�a� '� '�6�_��� � �t�9�9��>�>��K�"�5�#�d�)�)�4�4�4�"�5�6�>�A�#5�s�4�y�y�A�A�A� �=� �F�a�K�K��� ����v�.�.��Y�^�^�%�%��j�+� ��� ��� ��Y�^�^�%�%��j�"�m�1�1�%��@�@�+� ��� rxc�&�|r|�td���t|�����dkrtd���t|��dkr|S|s|�)t j��}d}t |||d���}n�d}t�d|����tj � |��} tj | d� ��tjd | d � ��}t |j||d� ��}t!|t"��r|nt#|��}t|��} |rWt%t't)|����| ���t%t't+|����| ���n|�dd|���St j|t j�����} |j�-|j�d���| ��} tj�| gdg���} |5 |�| ��|���ni#t@tBf$rU|�Q|�"��tj �#|j��rtj$|j���wxYw ddd��n #1swxYwY|�o|�"��tKj&|j|��tj'd��} tj'| ��tj(|d| z��|�|�)||���S|�)|�*��|���S)aPCreate a new dataset with rows selected following the list/array of indices. The new dataset is made by creating a new indices mapping on top of the main arrow table. Args: indices (sequence, iterable, range, ndarray or Series): List or 1D-array of integer indices for indexing. keep_in_memory (`bool`, default `False`): Keep the indices mapping in memory instead of writing it to a cache file. indices_cache_file_name (`str`, optional, default `None`): Provide the name of a path for the cache file. It is used to store the indices mapping instead of the automatically generated cache file name. writer_batch_size (`int`, default `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `.map()`. new_fingerprint (`str`, optional, default `None`): the new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds._select_with_indices_mapping(range(4)) Dataset({ features: ['text', 'label'], num_rows: 4 }) ``` Nr�rr�r�)rr�r?r�zCaching indices mapping at Tr�r�Fr�)r2r�r?r�)rgr�rPr}r�)r�r?)r�r?)+r�r�rr)rWr�r%rror�r2r�rr�r�r�r�r�rhr�rr�r�r�r�rrr��takerP� from_arraysr8r:r�r�r;rrr�r�r�r�r�r�)rur�r�r�r�r�r�r�r=r�rg� indices_arrayrmr�s rvr�z$Dataset._select_with_indices_mapping�s���H � n�5�A��l�m�m� m� �t� � �"�"� #� #�a� '� '�6�_��� � �t�9�9��>�>��K� � �4�<��.�0�0�J��H� �!�5F�Tc�js����F�F��J� �K�K�O�6M�O�O� P� P� P�����(?�@�@�I� �K� �D� 1� 1� 1� 1��2�4�Y�u�U�U�U�H� ��]�6G�Ud�kt����F�(���6�6�I�'�'�D��M�M���4�y�y�� � R� &�s�3�w�<�<�'8�'8�t� D� D� D� D� &�s�3�w�<�<�'8�'8�t� D� D� D� D� D��*�*�1�a��*�Q�Q� Q����r�y�{�{�;�;�;� � �=� $� �M�0�0��3�3�8�8��G�G�M���,�,�m�_�Y�K�,�P�P� � � � � ��"�"�=�1�1�1����!�!�!�!���0�1� � � ��'��N�N�$�$�$��w�~�~�h�m�4�4�1�� �(�-�0�0�0��  ����"� � � � � � � � � � � ���� � � � � � � �N�N� � � � �K�� �'>� ?� ?� ?��H�U�O�O�E� �H�U�O�O�O� �H�,�e�u�f�n� =� =� =� � ��1�1�(?�_�2��� ��1�1��AT�AT�AV�AV�ds�1�t�t� ts+�3K�5)I�K�A&K�K�K�Kr+c�b�|�t|t|������S)a� Create a new [`Dataset`] that skips the first `n` elements. Args: n (`int`): Number of elements to skip. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="train") >>> list(ds.take(3)) [{'label': 1, 'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'}, {'label': 1, 'text': 'effective but too-tepid biopic'}] >>> ds = ds.skip(1) >>> list(ds.take(3)) [{'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'}, {'label': 1, 'text': 'effective but too-tepid biopic'}, {'label': 1, 'text': 'if you sometimes like to go to the movies to have fun , wasabi is a good place to start .'}] ``` )r�r�r��rur+s rv�skipz Dataset.skips&��8�{�{�5��C��I�I�.�.�/�/�/rx� num_timesc��|�td���|dkrt|g|z��n|�g��S)a[ Create a new [`Dataset`] that repeats the underlying dataset `num_times` times. Like itertools.repeat, repeating once just returns the full dataset. Args: num_times (`int`): Number of times to repeat the dataset. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="train") >>> ds = ds.take(2).repeat(2) >>> list(ds) [{'label': 1, 'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'}, {'label': 1, 'text': 'effective but too-tepid biopic'}, {'label': 1, 'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'}, {'label': 1, 'text': 'effective but too-tepid biopic'}] ``` Nz8Map style datasets do not support indefinite repetition.r)r�r?r�)rur�s rv�repeatzDataset.repeat$sM��8 � ��W�X�X� X�FO�RS�m�m�.��v� �/A�B�B�B�Y]�Yd�Yd�eg�Yh�Yh�hrxc�F�|�t|����S)a{ Create a new [`Dataset`] with only the first `n` elements. Args: n (`int`): Number of elements to take. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="train") >>> small_ds = ds.take(2) >>> list(small_ds) [{'label': 1, 'text': 'the rock is destined to be the 21st century's new " conan " and that he's going to make a splash even greater than arnold schwarzenegger , jean-claud van damme or steven segal .'}, {'label': 1, 'text': 'the gorgeously elaborate continuation of " the lord of the rings " trilogy is so huge that a column of words cannot adequately describe co-writer/director peter jackson's expanded vision of j . r . r . tolkien's middle-earth .'}] ``` )r�r�r�s rvr�z Dataset.takeDs��*�{�{�5��8�8�$�$�$rx�at_end�reverse�null_placementc �b�t|�����dkrtd���t|��dkr|St|t��r|g}t|t ��s0t|��t|��krt d���n|gt|��z}|D]D} t| t��r| |jjvrt d| �d|jj������E|dvr%|dkrd}n|d krd }nt d |�d ����|�|n t��}|j rl|�|� |��}tj �|��r6|r4t�d|����|�||���St%|jt'dt|����|j���} d�t+||��D��} t-j| | |���} |�| ||||���S)at Create a new dataset sorted according to a single or multiple columns. Args: column_names (`Union[str, Sequence[str]]`): Column name(s) to sort by. reverse (`Union[bool, Sequence[bool]]`, defaults to `False`): If `True`, sort by descending order rather than ascending. If a single bool is provided, the value is applied to the sorting of all column names. Otherwise a list of bools with the same length and order as column_names must be provided. null_placement (`str`, defaults to `at_end`): Put `None` values at the beginning if `at_start` or `first` or at the end if `at_end` or `last` <Added version="1.14.2"/> keep_in_memory (`bool`, defaults to `False`): Keep the sorted indices in memory instead of writing it to a cache file. load_from_cache_file (`Optional[bool]`, defaults to `True` if caching is enabled): If a cache file storing the sorted indices can be identified, use it instead of recomputing. indices_cache_file_name (`str`, *optional*, defaults to `None`): Provide the name of a path for the cache file. It is used to store the sorted indices instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. Higher value gives smaller cache files, lower value consume less temporary memory. new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset('cornell-movie-review-data/rotten_tomatoes', split='validation') >>> ds['label'][:10] [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] >>> sorted_ds = ds.sort('label') >>> sorted_ds['label'][:10] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] >>> another_sorted_ds = ds.sort(['label', 'text'], reverse=[True, False]) >>> another_sorted_ds['label'][:10] [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` rz�Using `.sort` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it.zlParameter 'reverse' should be either a boolean or a list of booleans with the same length as 'column_names'.zColumn 'zA' not found in the dataset. Please provide a column selected in: )�at_startr��firstr��lastr�znull_placement 'zX' is an invalid parameter value. Must be either 'last', 'at_end', 'first' or 'at_start'.Nz-Loading cached sorted indices for dataset at �r?r��rDr�r�c�$�g|] \}}||sdndf��S)� ascending� descendingr�)r�r� col_reverses rvr�z Dataset.sort.<locals>.<listcomp>�s7�� � � �HX��[�S�[�B�+�+�l� C� � � rx)rr��r�r�r�r�r�)r�rr)r�r�r�r�rqr/r<rrr�r2rrror�rCr�rrr@�pc� sort_indicesr�) rur/r�r�r�r�r�r�r�r�� sort_tablerr�s rv�sortz Dataset.sort[s���p �t� � �"�"� #� #�a� '� '�6�]��� � �t�9�9��>�>��K� �l�C� (� (� *�(�>�L��'�4�(�(� 4��7�|�|�s�<�0�0�0�0� �C����1� �i�#�l�"3�"3�3�G�#� � �F��f�c�*�*� �f�D�J�<S�.S�.S� �B�v�B�B�hl�hr�h�B�B����/T� �!7� 7� 7���(�(�!+����6�)�)�!)��� �P�~�P�P�P����8L�7W�3�3�]o�]q�]q�� � � �&�.�*.�*C�*C�O�*T�*T�'��w�~�~�5�6�6� �;O� �� � �e�Lc�e�e�f�f�f��5�5� /�I`�6����!��*��a��T���#�#��M� � � � �  � �\_�`l�nu�\v�\v� � � � ��/�*� �R`�a�a�a���{�{��)�$;�/�+� � � � rx)r��randomized_functionr��seedc��t|�����dkrtd���t|��dkr|S|r|�td���|�|�td���|�.t |t jj��std���|�|n t��}|�w|�Vt j� ��^}}} }| dkr|| n|d}t j���}t j� |��}|j rl|�|� |��}tj�|��r6|r4t �d|����|�||� ��S|�t|����} |�| ||s|nd||� ��S) a*Create a new Dataset where the rows are shuffled. Currently shuffling uses numpy random generators. You can either supply a NumPy BitGenerator to use, or a seed to initiate NumPy's default random generator (PCG64). Shuffling takes the list of indices `[0:len(my_dataset)]` and shuffles it to create an indices mapping. However as soon as your [`Dataset`] has an indices mapping, the speed can become 10x slower. This is because there is an extra step to get the row index to read using the indices mapping, and most importantly, you aren't reading contiguous chunks of data anymore. To restore the speed, you'd need to rewrite the entire dataset on your disk again using [`Dataset.flatten_indices`], which removes the indices mapping. This may take a lot of time depending of the size of your dataset though: ```python my_dataset[0] # fast my_dataset = my_dataset.shuffle(seed=42) my_dataset[0] # up to 10x slower my_dataset = my_dataset.flatten_indices() # rewrite the shuffled dataset on disk as contiguous chunks of data my_dataset[0] # fast again ``` In this case, we recommend switching to an [`IterableDataset`] and leveraging its fast approximate shuffling method [`IterableDataset.shuffle`]. It only shuffles the shards order and adds a shuffle buffer to your dataset, which keeps the speed of your dataset optimal: ```python my_iterable_dataset = my_dataset.to_iterable_dataset(num_shards=128) for example in enumerate(my_iterable_dataset): # fast pass shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed=42, buffer_size=100) for example in enumerate(shuffled_iterable_dataset): # as fast as before pass ``` Args: seed (`int`, *optional*): A seed to initialize the default BitGenerator if `generator=None`. If `None`, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state. generator (`numpy.random.Generator`, *optional*): Numpy random Generator to use to compute the permutation of the dataset rows. If `generator=None` (default), uses `np.random.default_rng` (the default BitGenerator (PCG64) of NumPy). keep_in_memory (`bool`, default `False`): Keep the shuffled indices in memory instead of writing it to a cache file. load_from_cache_file (`Optional[bool]`, defaults to `True` if caching is enabled): If a cache file storing the shuffled indices can be identified, use it instead of recomputing. indices_cache_file_name (`str`, *optional*): Provide the name of a path for the cache file. It is used to store the shuffled indices instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `map`. new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the dataset after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds['label'][:10] [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] # set a seed >>> shuffled_ds = ds.shuffle(seed=42) >>> shuffled_ds['label'][:10] [1, 0, 1, 1, 0, 0, 0, 0, 0, 0] ``` rz�Using `.shuffle` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it.Nr�zKBoth `seed` and `generator` were provided. Please specify just one of them.zDThe provided generator must be an instance of numpy.random.Generator�pz/Loading cached shuffled indices for dataset at r�r�)r�rr)r�r�r��random� Generatorr<� get_state� default_rngrrr�r2rrror�� permutationr�) rur�r�r�r�r�r�r�r��posrs rvr�zDataset.shuffle�s ��j �t� � �"�"� #� #�a� '� '�6�`��� � �t�9�9��>�>��K� � n�5�A��l�m�m� m� � � � 5��j�k�k� k� � ��I�r�y�?R�)S�)S� ��c�d�d� d�7K�7W�3�3�]o�]q�]q�� � ��|�#%�9�#6�#6�#8�#8� ��4��q�$'�#�I�I�t�C�y�y�4��7���I�$�$�&�&��� �-�-�d�3�3�I� � � �&�.�*.�*C�*C�O�*T�*T�'��w�~�~�5�6�6� �;O� �� � �g�Ne�g�g�h�h�h��5�5� /�I`�6���� �+�+�C��I�I�6�6� ��{�{��)�CQ�$[�$;�$;�W[�/�+� � � � rx�train_new_fingerprint�test_new_fingerprint)r��train_indices_cache_file_name�test_indices_cache_file_name)r�r��fingerprint_namesr�� test_size� train_sizer��stratify_by_columnrr rkc � �ddlm}t|�����dkrt d���t|��dkr|||d���S|�|�d}t|��}t |t ��r ||ks'|dks!t |t��r"|dks|dkrtd|�d |�d ����t |t ��r ||ks'|dks!t |t��r"|dks|dkrtd |�d |�d ����|�>t |t tf��s"td |�d t|�������|�>t |t tf��s"td|�d t|�������t |t��r4t |t��r||zdkrtd||z�d����t |t��rt||z��}n$t |t ��rt|��}t |t��rt||z��}n$t |t ��rt|��}|�||z }n|�||z }||z|krtd||z�d|�d����t |��t |��}}|dkrtd|�d|�d|�d����|�|n t��}|�{|durw|�Vtj���^}}}}|dkr||n|d}tj���}tj�|��}|jr�| �| �.| �|�| ��} | �|�| ��} t&j�| ��ryt&j�| ��rZ|rXt,�d| �d| ����||�| | ���|�| | ���d���S|s?|�td���tj|��}tj|||z��}�no|��3||jj���vr1td|�d|jj��������t |jj|t:��sEtd t:j�d!|�d"t|jj|��j�d#���� t?tA|�!d$��||||�%����\}}ns#tD$r-}tG|��d&krtd'|�d(����|�d}~wwxYw|�$t|����}|d|�}||||z�}|�%||| | | �)��}|�%||| | | �)��}|||d���S)*a�Return a dictionary ([`datasets.DatasetDict`]) with two random train and test subsets (`train` and `test` `Dataset` splits). Splits are created from the dataset according to `test_size`, `train_size` and `shuffle`. This method is similar to scikit-learn `train_test_split`. Args: test_size (`numpy.random.Generator`, *optional*): Size of the test split If `float`, should be between `0.0` and `1.0` and represent the proportion of the dataset to include in the test split. If `int`, represents the absolute number of test samples. If `None`, the value is set to the complement of the train size. If `train_size` is also `None`, it will be set to `0.25`. train_size (`numpy.random.Generator`, *optional*): Size of the train split If `float`, should be between `0.0` and `1.0` and represent the proportion of the dataset to include in the train split. If `int`, represents the absolute number of train samples. If `None`, the value is automatically set to the complement of the test size. shuffle (`bool`, *optional*, defaults to `True`): Whether or not to shuffle the data before splitting. stratify_by_column (`str`, *optional*, defaults to `None`): The column name of labels to be used to perform stratified split of data. seed (`int`, *optional*): A seed to initialize the default BitGenerator if `generator=None`. If `None`, then fresh, unpredictable entropy will be pulled from the OS. If an `int` or `array_like[ints]` is passed, then it will be passed to SeedSequence to derive the initial BitGenerator state. generator (`numpy.random.Generator`, *optional*): Numpy random Generator to use to compute the permutation of the dataset rows. If `generator=None` (default), uses `np.random.default_rng` (the default BitGenerator (PCG64) of NumPy). keep_in_memory (`bool`, defaults to `False`): Keep the splits indices in memory instead of writing it to a cache file. load_from_cache_file (`Optional[bool]`, defaults to `True` if caching is enabled): If a cache file storing the splits indices can be identified, use it instead of recomputing. train_cache_file_name (`str`, *optional*): Provide the name of a path for the cache file. It is used to store the train split indices instead of the automatically generated cache file name. test_cache_file_name (`str`, *optional*): Provide the name of a path for the cache file. It is used to store the test split indices instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): Number of rows per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `map`. train_new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the train set after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments test_new_fingerprint (`str`, *optional*, defaults to `None`): The new fingerprint of the test set after transform. If `None`, the new fingerprint is computed using a hash of the previous fingerprint, and the transform arguments Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds = ds.train_test_split(test_size=0.2, shuffle=True) DatasetDict({ train: Dataset({ features: ['text', 'label'], num_rows: 852 }) test: Dataset({ features: ['text', 'label'], num_rows: 214 }) }) # set a seed >>> ds = ds.train_test_split(test_size=0.2, seed=42) # stratified split >>> ds = load_dataset("imdb",split="train") Dataset({ features: ['text', 'label'], num_rows: 25000 }) >>> ds = ds.train_test_split(test_size=0.2, stratify_by_column="label") DatasetDict({ train: Dataset({ features: ['text', 'label'], num_rows: 20000 }) test: Dataset({ features: ['text', 'label'], num_rows: 5000 }) }) ``` r"rjrz�Using `.train_test_split` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it.)�train�testNg�?z test_size=zB should be either positive and smaller than the number of samples z or a float in the (0, 1) rangez train_size=zInvalid value for train_size: z of type zInvalid value for test_size: z&The sum of test_size and train_size = zD, should be in the (0, 1) range. Reduce test_size and/or train_size.z&The sum of train_size and test_size = z/, should be smaller than the number of samples z%. Reduce test_size and/or train_size.zWith n_samples=z , test_size=z and train_size=zU, the resulting train set will be empty. Adjust any of the aforementioned parameters.Tr�z,Loading cached split indices for dataset at z and r�zBStratified train/test split is not implemented for `shuffle=False`zKey z not found in z,Stratifying by column is only supported for rtrurVr�)�rngzMinimum class count errorzThe least populated class in zn column has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.r�)&� dataset_dictrkr�rr)r�r��floatr�r�r r r<r�rrrrrr�r2rrror��arangersr�r�r*r�r�rdrr�r�rr�)rur r r�r r�r�r�r�rr r�rrrk� n_samples�n_test�n_trainr�r� train_indices� test_indices�errorr� train_split� test_splits rv�train_test_splitzDataset.train_test_split_s+��` .�-�-�-�-�-� �t� � �"�"� #� #�a� '� '�6�i��� � �t�9�9��>�>��;��t�<�<�=�=� =� � ��!3��I���I�I� � �y�#� &� &� ��i�'�'�9��>�>��)�U�+�+�,:��a���9��>�>��Y�Y�Y�Y�.7�Y�Y�Y��� � �z�3� '� '� ��y�(�(�J�!�O�O��*�e�,�,�-<��q���J�!�O�O��Y�j�Y�Y�.7�Y�Y�Y��� � � !�*�Z�#�u��*N�*N� !��e�j�e�e�SW�Xb�Sc�Sc�e�e�f�f� f� � ��I��U�|�)L�)L� ��b�Y�b�b�QU�V_�Q`�Q`�b�b�c�c� c� �j�%� (� (� �Z� �5�-I�-I� �j�[d�Nd�gh�Nh�Nh��>��i�9O�>�>�>��� � �i�� '� '� &��)�i�/�0�0�F�F� � �3� '� '� &��9�%�%�F� �j�%� (� (� (��J��2�3�3�G�G� � �C� (� (� (��J�'�'�G� � ��&�(�G�G� � ���(�F� �V� �i� '� '����6�9I���$������ ��g�,�,��F� � ��� �a�<�<��-�)�-�-��-�-�T^�-�-�-��� � 8L�7W�3�3�]o�]q�]q�� � ��D����|�#%�9�#6�#6�#8�#8� ��4��q�$'�#�I�I�t�C�y�y�4��7���I�$�$�&�&��� �-�-�d�3�3�I� � � �,�4�8T�8\�1�8�48�4M�4M�Nc�4d�4d�1�/�7�37�3L�3L�Ma�3b�3b�0�����<�=�=� ��G�N�N�#?�@�@� �)� � � � �F�C`�F�F�hD�F�F����#�{�!%�!?�!?�(=�Wt�"@�"�"�!%� >� >�(<�Vr�!?�!�!� �� � � ��# I�!�-� �!e�f�f�f��I�g�.�.�M��9�W�g��.>�?�?�L�L�"�-�%�T�Z�-@�-E�-E�-G�-G�G�G�$�%j�,>�%j�%j�d�j�Na�Nf�Nf�Nh�Nh�%j�%j�k�k�k�!�$�*�"5�6H�"I�:�V�V��$�B�z�GZ�B�B�qC�B�B�IM�NR�NX�Na�bt�Nu�Iv�Iv�I�B�B�B����$�26�A� �,�,�W�5�5�6H�I�7�TZ�`i����3�3�/�M�<�<�� !� $� $� $��5�z�z�%@�@�@�(�/�<N�/�/�/����$� ����� $����(�3�3�C��I�I�>�>� �*�7�F�7�3� � +�F�f�w�6F�,G� H� ��k�k�!�)�$A�/�1� "� � � ��[�[� �)�$@�/�0� !� � � ��{�[�*�E�E�F�F�Fs�<V � W�(V?�?Wrfr�c�p�d|cxkr|ksntd���|r[t|��|z}t|��|z}||zt||��z} | |z||krdndz} t| | ��} n#t j|t|��|��} |�| |||���S)a� Return the `index`-nth shard from dataset split into `num_shards` pieces. This shards deterministically. `dataset.shard(n, i)` splits the dataset into contiguous chunks, so it can be easily concatenated back together after processing. If `len(dataset) % n == l`, then the first `l` dataset each have length `(len(dataset) // n) + 1`, and the remaining dataset have length `(len(dataset) // n)`. `datasets.concatenate_datasets([dset.shard(n, i) for i in range(n)])` returns a dataset with the same order as the original. Note: n should be less or equal to the number of elements in the dataset `len(dataset)`. On the other hand, `dataset.shard(n, i, contiguous=False)` contains all elements of the dataset whose index mod `n = i`. Be sure to shard before using any randomizing operator (such as `shuffle`). It is best if the shard operator is used early in the dataset pipeline. Args: num_shards (`int`): How many shards to split the dataset into. index (`int`): Which shard to select and return. contiguous: (`bool`, defaults to `True`): Whether to select contiguous blocks of indices for shards. keep_in_memory (`bool`, defaults to `False`): Keep the dataset in memory instead of writing it to a cache file. indices_cache_file_name (`str`, *optional*): Provide the name of a path for the cache file. It is used to store the indices of each shard instead of the automatically generated cache file name. writer_batch_size (`int`, defaults to `1000`): This only concerns the indices mapping. Number of indices per write operation for the cache file writer. This value is a good trade-off between memory usage during the processing, and processing speed. Higher value makes the processing do fewer lookups, lower value consume less temporary memory while running `map`. Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> ds Dataset({ features: ['text', 'label'], num_rows: 1066 }) >>> ds.shard(num_shards=2, index=0) Dataset({ features: ['text', 'label'], num_rows: 533 }) ``` rz$index should be in [0, num_shards-1]r")r�r�r�r�)r�r�r�r�r�rr�) rur�rfr�r�r�r��div�modr��endr�s rvrz Dataset.shardzs���t�E�&�&�&�&�J�&�&�&�&��C�D�D� D� � >��d�)�)�z�)�C��d�)�)�j�(�C��%�K�#�e�S�/�/�1�E��#�+�e�c�k�k���q�9�C��E�3�'�'�G�G��i��s�4�y�y�*�=�=�G��{�{��)�$;�/� � � � rx� path_or_bufc �N�ddlm}|||f|||d�|�����S)a�Exports the dataset to csv Args: path_or_buf (`PathLike` or `FileOrBuffer`): Either a path to a file (e.g. `file.csv`), a remote URI (e.g. `hf://datasets/username/my_dataset_name/data.csv`), or a BinaryIO, where the dataset will be saved to in the specified format. batch_size (`int`, *optional*): Size of the batch to load in memory and write at once. Defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE`. num_proc (`int`, *optional*): Number of processes for multiprocessing. By default it doesn't use multiprocessing. `batch_size` in this case defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE` but feel free to make it 5x or 10x of the default value if you have sufficient compute power. storage_options (`dict`, *optional*): Key/value pairs to be passed on to the file-system backend, if any. <Added version="2.19.0"/> **to_csv_kwargs (additional keyword arguments): Parameters to pass to pandas's [`pandas.DataFrame.to_csv`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html). <Changed version="2.10.0"> Now, `index` defaults to `False` if not specified. If you would like to write the index, pass `index=True` and also set a name for the index column by passing `index_label`. </Changed> Returns: `int`: The number of characters or bytes written. Example: ```py >>> ds.to_csv("path/to/dataset/directory") ``` r")�CsvDatasetWriter�r�r�r�)r�r$r�)rur"r�r�r�� to_csv_kwargsr$s rv�to_csvzDataset.to_csv�s\��` -�,�,�,�,�,��� � � �"��+�  � � �  � � �%�'�'� rxc ��t|jtdt|����|j������S)a�Returns the dataset as a Python dict. Can also return a generator for large datasets. Args: batch_size (`int`, *optional*): The size (number of rows) of the batches if `batched` is `True`. Defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE`. Returns: `dict` or `Iterator[dict]` Example: ```py >>> ds.to_dict() ``` rr�)rCrqr�r�rr� to_pydict)rur�s rv�to_dictzDataset.to_dictsC�� ��*��a��T���#�#��M� � � � �)�+�+�  rxc ��t|jtdt|����|j������S)z�Returns the dataset as a Python list. Returns: `list` Example: ```py >>> ds.to_list() ``` rr�)rCrqr�r�rrrqrzs rv�to_listzDataset.to_listsC����*��a��T���#�#��M� � � � �)�+�+�  rxc �N�ddlm}|||f|||d�|�����S)a�Export the dataset to JSON Lines or JSON. The default output format is [JSON Lines](https://jsonlines.org/). To export to [JSON](https://www.json.org), pass `lines=False` argument and the desired `orient`. Args: path_or_buf (`PathLike` or `FileOrBuffer`): Either a path to a file (e.g. `file.json`), a remote URI (e.g. `hf://datasets/username/my_dataset_name/data.json`), or a BinaryIO, where the dataset will be saved to in the specified format. batch_size (`int`, *optional*): Size of the batch to load in memory and write at once. Defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE`. num_proc (`int`, *optional*): Number of processes for multiprocessing. By default, it doesn't use multiprocessing. `batch_size` in this case defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE` but feel free to make it 5x or 10x of the default value if you have sufficient compute power. storage_options (`dict`, *optional*): Key/value pairs to be passed on to the file-system backend, if any. <Added version="2.19.0"/> **to_json_kwargs (additional keyword arguments): Parameters to pass to pandas's [`pandas.DataFrame.to_json`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html). Default arguments are `lines=True` and `orient="records". <Changed version="2.11.0"> The parameter `index` defaults to `False` if `orient` is `"split"` or `"table"`. If you would like to write the index, pass `index=True`. </Changed> Returns: `int`: The number of characters or bytes written. Example: ```py >>> ds.to_json("path/to/dataset/directory/filename.jsonl") ``` r")�JsonDatasetWriterr%)r�r.r�)rur"r�r�r��to_json_kwargsr.s rv�to_jsonzDataset.to_json)s\��f /�.�.�.�.�.� � � � � �"��+�  � � �  � � �%�'�'� rxc � ���|sPt�jtdt������j����t ���S�r�n tj���fd�tdt������D��S)a�Returns the dataset as a `pandas.DataFrame`. Can also return a generator for large datasets. Args: batched (`bool`): Set to `True` to return a generator that yields the dataset as batches of `batch_size` rows. Defaults to `False` (returns the whole datasets once). batch_size (`int`, *optional*): The size (number of rows) of the batches if `batched` is `True`. Defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE`. Returns: `pandas.DataFrame` or `Iterator[pandas.DataFrame]` Example: ```py >>> ds.to_pandas() ``` rr��� types_mapperc 3��K�|]J}t�jt||�z���j����t ���V��KdS)r�r2N)rCrqr�rr� to_pandasr3)r�r3r�rus ��rvr]z$Dataset.to_pandas.<locals>.<genexpr>�sv������� � ��*��f�f�z�&9�:�:� �M�����)�)<�)�=�=� �����rx) rCrqr�r�rrr5r3r#r5r�)rur�r{s`` rvr5zDataset.to_pandasgs�����,� ���j��!�S��Y�Y�'�'�� �����i�%8�i�9�9�  :� (2�T���v�7T�J������ $�A�s�4�y�y�*�=�=� ��� rx�schema_overrides�rechunkc �l������tjr�ddl�|sQ�jt �jt dt������j��jnd��������S�r�n tj ������fd�tdt������D��Std���)a�Returns the dataset as a `polars.DataFrame`. Can also return a generator for large datasets. Args: batched (`bool`): Set to `True` to return a generator that yields the dataset as batches of `batch_size` rows. Defaults to `False` (returns the whole datasets once). batch_size (`int`, *optional*): The size (number of rows) of the batches if `batched` is `True`. Defaults to `genomicsml.datasets.config.DEFAULT_MAX_BATCH_SIZE`. schema_overrides (`dict`, *optional*): Support type specification or override of one or more columns; note that any dtypes inferred from the schema param will be overridden. rechunk (`bool`): Make sure that all data is in contiguous memory. Defaults to `True`. Returns: `polars.DataFrame` or `Iterator[polars.DataFrame]` Example: ```py >>> ds.to_polars() ``` rNr��r6r7c 3��K�|]K}�jt�jt||�z���j��jnd��������V��LdS)Nr�r9)� from_arrowrCrqr�rr)r�r3r�r[r7r6rus �����rvr]z$Dataset.to_polars.<locals>.<genexpr>�s������ � ��"�B�M�#�"&�*� %�f�f�z�.A� B� B�59�]�5N�D�M�M�TX���� *:� '���� � � � � � rxzDPolars needs to be installed to be able to return Polars dataframes.) r#rSrJr;rCrqr�r�rrr5r�r�)rur�r{r6r7r[s`` ``@rv� to_polarszDataset.to_polars�s��������< � "� e� � � � �� �$�r�}��"�j�!�!�S��Y�Y�/�/�15��1J�� � �PT���� &6�#�����,6�X�Z�Z�6�;X� � � � � � � � � �#(��3�t�9�9�j�"A�"A� � � � ��c�d�d� drxc �L�ddlm}|||f||d�|�����S)aExports the dataset to parquet Args: path_or_buf (`PathLike` or `FileOrBuffer`): Either a path to a file (e.g. `file.parquet`), a remote URI (e.g. `hf://datasets/username/my_dataset_name/data.parquet`), or a BinaryIO, where the dataset will be saved to in the specified format. batch_size (`int`, *optional*): Size of the batch to load in memory and write at once. Defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE`. storage_options (`dict`, *optional*): Key/value pairs to be passed on to the file-system backend, if any. <Added version="2.19.0"/> **parquet_writer_kwargs (additional keyword arguments): Parameters to pass to PyArrow's `pyarrow.parquet.ParquetWriter`. Returns: `int`: The number of characters or bytes written. Example: ```py >>> ds.to_parquet("path/to/dataset/directory") ``` r")�ParquetDatasetWriter)r�r�)r�r>r�)rur"r�r��parquet_writer_kwargsr>s rv� to_parquetzDataset.to_parquet�sQ��B 5�4�4�4�4�4�#�#� �+� �*4�o� � �Yn� � � �%�'�'� rxr�c �L�ddlm}||||fd|i|�����S)a�Exports the dataset to a SQL database. Args: name (`str`): Name of SQL table. con (`str` or `sqlite3.Connection` or `sqlalchemy.engine.Connection` or `sqlalchemy.engine.Connection`): A [URI string](https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls) or a SQLite3/SQLAlchemy connection object used to write to a database. batch_size (`int`, *optional*): Size of the batch to load in memory and write at once. Defaults to `datasets.config.DEFAULT_MAX_BATCH_SIZE`. **sql_writer_kwargs (additional keyword arguments): Parameters to pass to pandas's [`pandas.DataFrame.to_sql`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html). <Changed version="2.11.0"> Now, `index` defaults to `False` if not specified. If you would like to write the index, pass `index=True` and also set a name for the index column by passing `index_label`. </Changed> Returns: `int`: The number of records written. Example: ```py >>> # con provided as a connection URI string >>> ds.to_sql("data", "sqlite:///my_own_db.sql") >>> # con provided as a sqlite3 connection object >>> import sqlite3 >>> con = sqlite3.connect("my_own_db.sql") >>> with con: ... ds.to_sql("data", con) ``` r")�SqlDatasetWriterr�)r�rBr�)rur�r�r��sql_writer_kwargsrBs rv�to_sqlzDataset.to_sql�sF��Z -�,�,�,�,�,����d�C�\�\�J�\�J[�\�\�b�b�d�d�drxc���|jj}d�|jj���D��}|r`d��fd�}|�d��dd�}t ||���t|j��zt|��z �|�z}|j�,|t|j��zt|j��z }|S)Nc�:�g|]\}}t|d����|��S�T)�ignore_decode_attribute�r4r�s rvr�z,Dataset._estimate_nbytes.<locals>.<listcomp>'s@�� � � ��!�Q�9I�!�ei�9j�9j�9j� � � � � rxrc���t|tttf��rb|���D].}|�*|d�"|d�t |d��}�|z ��/�|�d��jz�dSdS)N�bytesr2)r�r)r,r.rqr(r��nbytes)r�r�rrg� extra_nbytess �rv�extra_nbytes_visitorz6Dataset._estimate_nbytes.<locals>.extra_nbytes_visitor/s�����g��u�e�'<�=�=�?�"�_�_�.�.�1�1���=�Q�w�Z�-?�A�f�I�DY�#+�A�f�I�#6�#6�D�(�D�0�L�� �E�K�K��$7�$7�$>�>�L�L�L� ?�?rxr4r�) r rLrsr�r�rrXr�rr)rur"�decodable_columnsrNrDrMs @rvrzDataset._estimate_nbytes"s������)�� � ��*�-�3�3�5�5� � � �� � ;��L� ?� ?� ?� ?� ?��$�$�W�-�-�e�t�e�4�E� �%�!5� 6� 6� 6�'�#�d�i�.�.�8�3�u�:�:�E�L�+�l�:�N� �=� $�+�c�$�-�.@�.@�@�3�t�y�>�>�Q�N��rxr6c#�K�t|��D]6\}}|�d���|��D]}||fV�� �7dS)Nr4)r�rr7)r6r�r�rr�s rv�_generate_tables_from_shardsz$Dataset._generate_tables_from_shardsBsp���� )�&� 1� 1� *� *� �I�u�!�-�-�g�6�6�;�;�J�G�G� *� *����)�)�)�)�)� *� *� *rxc#�K�tt|����D])\}}|tj�|g��fV��*dSrr)r�rQrWrP� from_batches)r~� batch_idxr�s rv� _generate_tables_from_cache_filez(Dataset._generate_tables_from_cache_fileHs\���� )�*V�W_�*`�*`� a� a� <� <� �I�u��R�X�2�2�E�7�;�;�;� ;� ;� ;� ;� <� <rxr"rlc����ddlm}m}�j�G�js1�j�9t �j��t �j��krtd����t���kr#tdt����d��d�����j �t� d���dkrtj���gn��fd �t!���D��}|t"j|t&jd �� ��}||t+�j� ��� ��}�jr|��j��}|S)a�Get an [`datasets.IterableDataset`] from a map-style [`datasets.Dataset`]. This is equivalent to loading a dataset in streaming mode with [`datasets.load_dataset`], but much faster since the data is streamed from local files. Contrary to map-style datasets, iterable datasets are lazy and can only be iterated over (e.g. using a for loop). Since they are read sequentially in training loops, iterable datasets are much faster than map-style datasets. All the transformations applied to iterable datasets like filtering or processing are done on-the-fly when you start iterating over the dataset. Still, it is possible to shuffle an iterable dataset using [`datasets.IterableDataset.shuffle`]. This is a fast approximate shuffling that works best if you have multiple shards and if you specify a buffer size that is big enough. To get the best speed performance, make sure your dataset doesn't have an indices mapping. If this is the case, the data are not read contiguously, which can be slow sometimes. You can use `ds = ds.flatten_indices()` to write your dataset in contiguous chunks of data and have optimal speed before switching to an iterable dataset. Args: num_shards (`int`, default to `1`): Number of shards to define when instantiating the iterable dataset. This is especially useful for big datasets to be able to shuffle properly, and also to enable fast parallel loading using a PyTorch DataLoader or in distributed setups for example. Shards are defined using [`datasets.Dataset.shard`]: it simply slices the data without writing anything on disk. Returns: [`datasets.IterableDataset`] Example: Basic usage: ```python >>> ids = ds.to_iterable_dataset() >>> for example in ids: ... pass ``` With lazy filtering and processing: ```python >>> ids = ds.to_iterable_dataset() >>> ids = ids.filter(filter_fn).map(process_fn) # will filter and process on-the-fly when you start iterating over the iterable dataset >>> for example in ids: ... pass ``` With sharding to enable efficient shuffling: ```python >>> ids = ds.to_iterable_dataset(num_shards=64) # the dataset is split into 64 shards to be iterated over >>> ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer for fast approximate shuffling when you start iterating >>> for example in ids: ... pass ``` With a PyTorch DataLoader: ```python >>> import torch >>> ids = ds.to_iterable_dataset(num_shards=64) >>> ids = ids.filter(filter_fn).map(process_fn) >>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards to each worker to load, filter and process when you start iterating >>> for example in ids: ... pass ``` With a PyTorch DataLoader and shuffling: ```python >>> import torch >>> ids = ds.to_iterable_dataset(num_shards=64) >>> ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating >>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from the shuffled list of shards to each worker when you start iterating >>> for example in ids: ... pass ``` In a distributed setup like PyTorch DDP with a PyTorch DataLoader and shuffling ```python >>> from datasets.distributed import split_dataset_by_node >>> ids = ds.to_iterable_dataset(num_shards=512) >>> ids = ids.shuffle(buffer_size=10_000, seed=42) # will shuffle the shards order and use a shuffle buffer when you start iterating >>> ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating >>> dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating >>> for example in ids: ... pass ``` With shuffling and multiple epochs: ```python >>> ids = ds.to_iterable_dataset(num_shards=64) >>> ids = ids.shuffle(buffer_size=10_000, seed=42) # will shuffle the shards order and use a shuffle buffer when you start iterating >>> for epoch in range(n_epochs): ... ids.set_epoch(epoch) # will use effective_seed = seed + epoch to shuffle the shards and for the shuffle buffer when you start iterating ... for example in ids: ... pass ``` Feel free to also use [`IterableDataset.set_epoch`] when using a PyTorch DataLoader or in distributed setups. r")�ArrowExamplesIterablerlNz�Converting a formatted dataset with kwargs or selected columns to a formatted iterable dataset is not implemented yet. Please run `my_dataset = my_dataset.with_format(None)` before calling to_iterable_datasetz"Unable to shard a dataset of size z into z= shards (the number of shards exceeds the number of samples).z�Converting an Arrow dataset to iterable but it has an indices mapping that can make it slower. You can use `ds = ds.flatten_indices()` to write your dataset in contiguous chunks of data and have optimal speed.c�@��g|]}���|d�����S)Tr�r.)r�r�r�rus ��rvr�z/Dataset.to_iterable_dataset.<locals>.<listcomp>�s8������Xa�� � �j� �d� �S�S���rx)r6r�)r8rI�ro)�iterable_datasetrWrlr1r2r0r�r/rr�r�rrrror�rr�r�rQr#r5rFr�r)rur�rWrlr6� ex_iterable�dss`` rv�to_iterable_datasetzDataset.to_iterable_datasetMs�����v M�L�L�L�L�L�L�L� � � (��"� ��$�0�S��9M�5N�5N�RU�VZ�Vg�Rh�Rh�5h�5h�)�g���� ��D� � � !� !��P�S��Y�Y�P�P�j�P�P�P��� � �=� $� �K�K�E� � � � �Q����]�4� � � !� !������ej�ku�ev�ev���� �,�+� � 0�$�F�4Q�R�R� � � � ��_�[�{�D�M�/R�/R�/R� S� S� S�� � � 3����� 1�2�2�B�� rxr �repo_id�data_dir�token�revision� create_pr�embed_external_filesc �f����| r(d��jj���D��ng} ����} ��@t |p t j��}t| |z ��dz�t�d�����fd�t���D��} | r;ddl m �dttdttf�fd� } | | ��} tt j|� ��}d }g}t!t#| ��d �� ��D]�\}}|�d |�d|d�d�d�d�}t%��}|�|��||���z }t+||���}|�||gd||���|�|����||| fS)aqPushes the dataset shards as Parquet files to the hub. Returns: additions (`List[CommitOperation]`): list of the `CommitOperationAdd` of the uploaded shards uploaded_size (`int`): number of uploaded bytes to the repository dataset_nbytes (`int`): approximate size in bytes of the uploaded dataset afer uncompression c�:�g|]\}}t|d����|��SrGrIr�s rvr�z7Dataset._push_parquet_shards_to_hub.<locals>.<listcomp>�s0�� l� l� l�4�1�a�:J�1�fj�:k�:k�:k� l�Q� l� l� lrxNr"c3�H�K�|]}���|d���V��dS)Tr�Nr.)r�r�r�rus ��rvr]z6Dataset._push_parquet_shards_to_hub.<locals>.<genexpr>�s6�����i�i�RS�$�*�*� �!��*�M�M�i�i�i�i�i�irx)�get_writer_batch_sizer6r|c3���K�|D][}|j}|�d��}|�td�|j��d���}|jdi|��}|V��\dS)Nr4T)r{r�r�r�)rrrrTr�)r6rrrgs �rv�#shards_with_embedded_external_fileszPDataset._push_parquet_shards_to_hub.<locals>.shards_with_embedded_external_files�s������#�  �  �E�"�\�F�!�-�-�g�6�6�E�!�I�I�+� $�#8�#8���#H�#H�'+� &���E� .�E�-�7�7��7�7�E��K�K�K�K�  �  rx��endpointr`rzUploading the dataset shards)r�r�r��-r�r�z.parquet�� path_in_repo�path_or_fileobjr�)r^� additions� repo_typerarb)rsr�r�rr`r#rr�rr�r�rgrr�r� HF_ENDPOINTrr�r r@�tellr�preupload_lfs_filesr�)rur^r_rpr`rarbr�r�rcrOr"r6ri�api� uploaded_sizerprfr�shard_path_in_repor��shard_additionrgs` ` @rv�_push_parquet_shards_to_hubz#Dataset._push_parquet_shards_to_hub�s4�����.$� � l� l�4�:�.�4�4�6�6� l� l� l� l�� � �.�.�0�0�� � �5�n�6]��H]�^�^�N��^�n�<�=�=��A�J��Z��+�+�J�i�i�i�i�i�W\�]g�Wh�Wh�i�i�i�� � A� 9� 9� 9� 9� 9� 9� �H�W�<M� �RZ�[b�Rc� � � � � � �9�8��@�@�F��V�/�u�=�=�=��� �.0� �#� �f� � �/�� � � � -� -�L�E�5� %-�!^�!^�u�!^�!^�u�!^�!^�!^�j�!^�!^�!^�!^� ��Y�Y�F� � � �V� $� $� $� �V�[�[�]�]� *�M�/�=O�ag�h�h�h�N� � #� #��)�*�#�!�#� $� � � � � � �^� ,� ,� ,� ,��-��7�7rx�defaultr�� set_default�commit_message�commit_description�privatec �6�dt|j��vrtd���|dkrtd���| �| �td���|�|j�t|j��nd}t jt|��stdt�d |�d ����ttj | � ��}|� || d |d ���}|j }| �/| � d��s|�|| | d d ���|s |dkr|nd}|�|||| | | | | |�� � \}}}d\}}g}d}g}d�|D��}|�|| d | d ���D�]&}t#|t$��s�|jtjkrd }�1|jtjkrd }�I|j� |�d|�d���r<|j|vr3|�t/|j�����||jz }��t3j|jt4�dd����rNt9t4��}t;|j|��}|�J�|d}||vr|�|����(d|vr|�d��nd|f\}}|j���} d| _ || _!|| _"||z| _#|| _$tK|tM||tO|��|���i��| _(|r�|�)|tjd | ���}!tUj+tY|!����}"|"j-}#t]j/|#��}$taj/|#��}%|%r ||%vr |%|}&n�d}&n�|r�d}"tc��}#t]��}$|�)|tjd | ���}'te|'d� ��5}(tgj+|(��}%|%r|%�4|d��nd})|)rtkj6|)��nd}&ddd��n #1swxYwYn d}"tc��}#t]��}$d}&|&��Ltn�d!��|&j(�r*tq|&j(��|gk�r|j9j|&jkr$td"|j9j�d#|&j�����||&j(vrL|&xj!|zc_!|&xj"|&j(�4|tM����j:pdzc_"d|&_ |&j!pd|z|&_!|&j"pd|z|&_"|&j!|&j"z|&_#|&j(�;|d��tM||tO|��|���|&j(|<|&} |$s4|r2d$d%�|D��i}*t]d|*i���<|#��||$vrQ|$|}+d$|+vrt{|+d$��},ni},|�d|�d&�g|,|<d$d'�|,�>��D��i}-nd$||�d|�d&�d(�gi}-|rQ|dkrK|$rD|$�?��}.|.dkrtd)���|$|.�;d��}/d |-d<|r�|�)|tjd | ���}'te|'d� ��5}(tgj+|(��}%ddd��n #1swxYwYt�| ��|%|<t���}0|0�BtgjC|%d*�+���Dd����|�t�tj|0�,����ta|| i���<|#��t]||-i���<|#��|"�tUd-|#�d.���n|"}"|�t�tjt|"���D���,����|�|nd/}tO|��tjFkr!|�G|||z||| d | | �0��}1n�tn�d1tjF�d2���t�jItO|��tjFz ��}2t�d|2��D]�}3||3tjFz|3d3ztjFz�|3dkr|ngz}4|�G||4|d4|3d5�d6|2d5�d7�z|| d | | �0��}1tn�d8|3d3z�d9�|2|3z d3z r d:|2|3z d3z �d;�nd<zd=z����|1S)>a_Pushes the dataset to the hub as a Parquet dataset. The dataset is pushed using HTTP requests and does not need to have neither git or git-lfs installed. The resulting Parquet files are self-contained by default. If your dataset contains [`Image`], [`Audio`] or [`Video`] data, the Parquet files will store the bytes of your images or audio files. You can disable this by setting `embed_external_files` to `False`. Args: repo_id (`str`): The ID of the repository to push to in the following format: `<user>/<dataset_name>` or `<org>/<dataset_name>`. Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user. config_name (`str`, defaults to "default"): The configuration name (or subset) of a dataset. Defaults to "default". set_default (`bool`, *optional*): Whether to set this configuration as the default one. Otherwise, the default configuration is the one named "default". split (`str`, *optional*): The name of the split that will be given to that dataset. Defaults to `self.split`. data_dir (`str`, *optional*): Directory name that will contain the uploaded data files. Defaults to the `config_name` if different from "default", else "data". <Added version="2.17.0"/> commit_message (`str`, *optional*): Message to commit while pushing. Will default to `"Upload dataset"`. commit_description (`str`, *optional*): Description of the commit that will be created. Additionally, description of the PR if a PR is created (`create_pr` is True). <Added version="2.16.0"/> private (`bool`, *optional*): Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists. token (`str`, *optional*): An optional authentication token for the Hugging Face Hub. If no token is passed, will default to the token saved locally when logging in with `huggingface-cli login`. Will raise an error if no token is passed and the user is not logged-in. revision (`str`, *optional*): Branch to push the uploaded files to. Defaults to the `"main"` branch. <Added version="2.15.0"/> create_pr (`bool`, *optional*, defaults to `False`): Whether to create a PR with the uploaded files or directly commit. <Added version="2.15.0"/> max_shard_size (`int` or `str`, *optional*, defaults to `"500MB"`): The maximum size of the dataset shards to be uploaded to the hub. If expressed as a string, needs to be digits followed by a unit (like `"5MB"`). num_shards (`int`, *optional*): Number of shards to write. By default, the number of shards depends on `max_shard_size`. <Added version="2.8.0"/> embed_external_files (`bool`, defaults to `True`): Whether to embed file bytes in the shards. In particular, this will do the following before the push for the fields of type: - [`Audio`] and [`Image`]: remove local path information and embed file content in the Parquet files. Return: huggingface_hub.CommitInfo Example: ```python >>> dataset.push_to_hub("<organization>/<dataset_id>") >>> dataset_dict.push_to_hub("<organization>/<dataset_id>", private=True) >>> dataset.push_to_hub("<organization>/<dataset_id>", max_shard_size="1GB") >>> dataset.push_to_hub("<organization>/<dataset_id>", num_shards=1024) ``` If your dataset has multiple splits (e.g. train/validation/test): ```python >>> train_dataset.push_to_hub("<organization>/<dataset_id>", split="train") >>> val_dataset.push_to_hub("<organization>/<dataset_id>", split="validation") >>> # later >>> dataset = load_dataset("<organization>/<dataset_id>") >>> train_dataset = dataset["train"] >>> val_dataset = dataset["validation"] ``` If you want to add a new configuration (or subset) to a dataset (e.g. if the dataset has multiple tasks/versions/languages): ```python >>> english_dataset.push_to_hub("<organization>/<dataset_id>", "en") >>> french_dataset.push_to_hub("<organization>/<dataset_id>", "fr") >>> # later >>> english_dataset = load_dataset("<organization>/<dataset_id>", "en") >>> french_dataset = load_dataset("<organization>/<dataset_id>", "fr") ``` zVideo(a]push_to_hub is not implemented for video datasets, instead you should upload the video files using e.g. the huggingface_hub library and optionally upload a metadata.csv or metadata.jsonl file containing other information like video captions, features or labels. More information at https://huggingface.co/docs/datasets/main/en/video_load#videofolderr zN`config_name` cannot be 'data'. Please, choose another name for configuration.Nr�rzSplit name should match 'z ' but got 'z'.rjr�T)r`rqr~r�zrefs/pr/)�branchr`rqr�rz) r^r_rpr`rar�r�rbrc)FFrc��g|] }|j�� Sr��rn)r��additions rvr�z'Dataset.push_to_hub.<locals>.<listcomp>�s��M�M�M�x�X�2�M�M�Mrx)r^rarqr`rIr�rlr�z{split}rrp)r@r?� dataset_name)rqrar r z0Updating downloaded metadata with the new split.zVFeatures of the new split don't match the features of the existing splits on the hub: z != � data_filesc� �g|] }|d|�d�d��� S)zdata/�-*�rpr2r�)r�rps rvr�z'Dataset.push_to_hub.<locals>.<listcomp>s,��d�d�d�u��8I��8I�8I�8I�J�J�d�d�drxr�c�T�g|]%\}}|t|��dkr|dn|d���&S)r"rr�r1)r�rt�_patterns rvr�z'Dataset.push_to_hub.<locals>.<listcomp>&sS����� )���"(�/2�8�}�}��/A�/A��� � �x�����rxr�zzThere exists a configuration named 'default'. To set a different configuration as default, rename the 'default' one first.�rrmz--- z --- zUpload dataset)� operationsr|r}r`rqrarbz)Number of files to upload is larger than z+. Splitting the push into multiple commits.r"z (part r�r�r8zCommit #z completedz (still z to go)r(rV)Kr�r�rr�rp�re�matchrHrr#rr� create_repor^r�� create_branchry�list_repo_treer�r� rfilename�REPOCARD_FILENAME�DATASETDICT_INFOS_FILENAMEr�rrg�fnmatch�:PUSH_TO_HUB_WITHOUT_METADATA_CONFIGS_SPLIT_PATTERN_SHARDEDrrarcror�r�r�r�r�r�rLrMr��splits�hf_hub_downloadrrWrr r]�from_dataset_card_datarGrrrNr�rFr�rr�rsr@r��to_dataset_card_datar'r��get_default_config_namer_r r�rQ�encoder�UPLOADS_MAX_NUMBER_PER_COMMIT� create_commit�mathr r�)5rur^r�r{rpr_r|r}r~r`rarbr�r�rcru�repo_urlrprvr"�repo_with_dataset_card�repo_with_dataset_infos� deletions� deleted_size� repo_splits�repo_files_to_add� repo_file�pattern�split_pattern_fields� repo_split� organizationr�� info_to_dump�dataset_card_path� dataset_card�dataset_card_data�metadata_configs� dataset_infos� repo_info�dataset_infos_pathr*r� default_metadata_configs_to_dump�metadata_config�data_files_to_dump�metadata_config_to_dump�default_config_namer�r�� commit_info� num_commitsr�r�s5 rv� push_to_hubzDataset.push_to_hubsA ��Z �s�4�=�)�)� )� )�%�Y��� � �&� � ��m�n�n� n� � %�*�*@��j��� � �=�'+�z�'=�C�� �O�O�O�7�E��x� �5�)�)� Z��X��X�X�u�X�X�X�Y�Y� Y��V�/�u�=�=�=���?�?� ����� #� � ���"�� � ��(;�(;�J�(G�(G� � � � �g�h�e�y�cg� � h� h� h�� K�&1�Y�&>�&>�{�{�F�H�37�3S�3S������)�!��!5�4T� 4 � 4 �0� �=�.�;G�7�� 7�13� �� �!#� �M�M�9�M�M�M���+�+��h�)�5�\`�,� � � 3� 3�I��i��2�2� ���"�f�&>�>�>�)-�&�&��$��(I�I�I�*.�'�'��#�.�.�(�/E�/E�U�/E�/E�/E�F�F� 3�KT�K^�fw�Kw�Kw�� � �!6�I�DW�!X�!X�!X�Y�Y�Y�� ��.� � ����#�%_�%g�%g�hq�sv�%w�%w��� 3�0�0j�k�k��'5�i�6I�7�'S�'S�$�+�7�7�7�1�'�:� ��[�0�0��&�&�z�2�2�2��;>�'�>�>�W�]�]�3�%7�%7�%7�PT�V]��"� �l��y�~�~�'�'� �*.� �'�%2� �"�$2� �!�%2�^�%C� �"�#.� � �'� �I�e�~�C�PT�I�I�dp�q�q�q� r� � � �� "� � #� 3� 3���1�Y�QY�!4�!�!� �'�+�D�1B�,C�,C�D�D�L� ,� 1� �.�E�FW�X�X� �.>�.U�Vg�.h�.h�M�� !�� �!=�!=�)�+�6� � � � � � $� ��L� /� 1� 1� �.�0�0� �!$�!4�!4���:�i�Zb�"5�"�"� ��(�7�;�;�;� Z�q�&*�i��l�l� �GT�^�}�0�0��d�C�C�C�Z^� �CO�Y�K�1�,�?�?�?�UY� � Z� Z� Z� Z� Z� Z� Z� Z� Z� Z� Z���� Z� Z� Z� Z��  �L� /� 1� 1� �.�0�0� ��I� � � �K�K�J� K� K� K��� )�D��)9�$:�$:�u�g�$E�$E��:�&�)�*<�<�<�$�_�qu�q{�rE�_�_�KT�K]�_�_�����I�,�,�,��+�+�|�;�+�+��*�*�i�.>�.B�.B�5�)�+�+�.V�.V�.`�.e�de�e�*�*�/3� �,�+4�+B�+G�a�=�*X� �'�*3�*@�*E�A��)W� �&�*3�*A�I�DZ�*Z� �'�� �$�$�U�D�1�1�1�*3��^�#�d�)�)�Zf�+�+�+� � ��'� )� �� s�K� s��d�d�Xc�d�d�d�0� ,� �Y�(H�I� J� J� _� _�`q� r� r� r� �*� *� *�.�{�;�O���.�.�%6��|�7T�%U�%U�"�"�%'�"�,4�)@�)@�u�)@�)@�)@�(A� �u� %���� -?�,D�,D�,F�,F� ���'� #� #�(4��RZ�Of�Of�]b�Of�Of�Of�6g�6g�5h�&i� #� � 6�;�)�3�3�� M�&6�&N�&N�&P�&P�#�&�)�3�3�$�:���� )�)<�=�A�A�)�L�L�A�15� #�I� .� "� �!$�!4�!4���:�i�Zb�"5�"�"� ��(�7�;�;�;� 3�q�&*�i��l�l� � 3� 3� 3� 3� 3� 3� 3� 3� 3� 3� 3���� 3� 3� 3� 3�)/� �)=�)=�M�+� &��Y�Y�F� �L�L���M�!�<�<�<�C�C�G�L�L� M� M� M� � � �"��0Q�ci�j�j�j� � � � �+�|�4�5�5�J�J�K\�]�]�]���&=�>�?�?�T�T�Uf�g�g�g�JV�J^�{�#E�+<�#E�#E�#E�F�F�F�dp� ���� �F�,D�VY�Zf�Vg�Vg�Vn�Vn�Vp�Vp� q� q� q� � � �,:�+E���K[�� �y�>�>�V�A� A� A��+�+��$�y�0�-�#5��#�!�#�,� � �K�K� �K�K�N�F�<`�N�N�N� � � ��)�C� �N�N�V�5Y�$Y�Z�Z�K��1�k�*�*� � ��&���<�<��A���Im�?m�m��"#�q�&�&�Y�Y�b�2� �"�/�/��)�#1�4[�a�4[�4[�4[��4[�4[�4[�4[�#[�'9��'�%�'�0� � � �� � �0�q�1�u�0�0�0�BM�PQ�/�TU�BU�]�>�+��/�A�"5�>�>�>�>�[]�_������� �s%�AQ�Q�Q�4\�\�\c��|rt||i��j}nd}tj||i|���}t |jj|jz��|j�|���n|}t|j|gd���}|j � ��} | j � tj|j����t!|| j ��}t#|| |jd|���S)a�Add column to Dataset. <Added version="1.7"/> Args: name (`str`): Column name. column (`list` or `np.array`): Column data to be added. feature (`FeatureType` or `None`, defaults to `None`): Column datatype. Returns: [`Dataset`] Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> more_text = ds["text"] >>> ds.add_column(name="text_2", column=more_text) Dataset({ features: ['text', 'label', 'text_2'], num_rows: 1066 }) ``` Nr�r"��axisr�)r+rurNr�rcrqr/rrrorSror�r�r�rsrKrTr�rp) rur�r�r�r��pyarrow_schema� column_tabler�rDros rv� add_columnzDataset.add_columnvs���D � "�%�t�W�o�6�6�C�N�N�!�N�$�0�$����W�W�W� ��D�J�3�l�6O�O�P�P�P�,0�M�,E�$�&�&�(�(�(�4���w�}�l�;�!�D�D�D���|� � �"�"�� � ���X�7� �8K�L�L�M�M�M�-�e�T�]�C�C���u�4�t�z��[j�k�k�k�krx� index_name�device�string_factory� metric_type� custom_indexz faiss.Index� faiss_verbosec ���|�d|g| ���5t���||||||||| �� � ddd��n #1swxYwY|S)a� Add a dense index using Faiss for fast retrieval. By default the index is done over the vectors of the specified column. You can specify `device` if you want to run it on GPU (`device` must be the GPU index). You can find more information about Faiss here: - For [string factory](https://github.com/facebookresearch/faiss/wiki/The-index-factory) Args: column (`str`): The column of the vectors to add to the index. index_name (`str`, *optional*): The `index_name`/identifier of the index. This is the `index_name` that is used to call [`~datasets.Dataset.get_nearest_examples`] or [`~datasets.Dataset.search`]. By default it corresponds to `column`. device (`Union[int, List[int]]`, *optional*): If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. string_factory (`str`, *optional*): This is passed to the index factory of Faiss to create the index. Default index class is `IndexFlat`. metric_type (`int`, *optional*): Type of metric. Ex: `faiss.METRIC_INNER_PRODUCT` or `faiss.METRIC_L2`. custom_index (`faiss.Index`, *optional*): Custom Faiss index that you already have instantiated and configured for your needs. batch_size (`int`): Size of the batch to use while adding vectors to the `FaissIndex`. Default value is `1000`. <Added version="2.4.0"/> train_size (`int`, *optional*): If the index needs a training step, specifies how many vectors will be used to train the index. faiss_verbose (`bool`, defaults to `False`): Enable the verbosity of the Faiss index. dtype (`data-type`): The dtype of the numpy arrays that are indexed. Default is `np.float32`. Example: ```python >>> ds = datasets.load_dataset('crime_and_punish', split='train') >>> ds_with_embeddings = ds.map(lambda example: {'embeddings': embed(example['line']})) >>> ds_with_embeddings.add_faiss_index(column='embeddings') >>> # query >>> scores, retrieved_examples = ds_with_embeddings.get_nearest_examples('embeddings', embed('my new query'), k=10) >>> # save index >>> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss') >>> ds = datasets.load_dataset('crime_and_punish', split='train') >>> # load index >>> ds.load_faiss_index('embeddings', 'my_index.faiss') >>> # query >>> scores, retrieved_examples = ds.get_nearest_examples('embeddings', embed('my new query'), k=10) ``` r�)r�r�r�) r�r�r�r�r�r�r�r r�N)r�r|�add_faiss_index) rur�r�r�r�r�r�r�r r�r�r}s �rvr�zDataset.add_faiss_index�s����D� � �G�f�X�U� � K� K� � � �G�G� #� #��%��-�'�)�%�%�+� $� � � � � � � � � � � � � � ���� � � � �� s�+A�A�A�external_arraysc ���t���|�| ��|||||||| �� � dS)a�Add a dense index using Faiss for fast retrieval. The index is created using the vectors of `external_arrays`. You can specify `device` if you want to run it on GPU (`device` must be the GPU index). You can find more information about Faiss here: - For [string factory](https://github.com/facebookresearch/faiss/wiki/The-index-factory) Args: external_arrays (`np.array`): If you want to use arrays from outside the lib for the index, you can set `external_arrays`. It will use `external_arrays` to create the Faiss index instead of the arrays in the given `column`. index_name (`str`): The `index_name`/identifier of the index. This is the `index_name` that is used to call [`~datasets.Dataset.get_nearest_examples`] or [`~datasets.Dataset.search`]. device (Optional `Union[int, List[int]]`, *optional*): If positive integer, this is the index of the GPU to use. If negative integer, use all GPUs. If a list of positive integers is passed in, run only on those GPUs. By default it uses the CPU. string_factory (`str`, *optional*): This is passed to the index factory of Faiss to create the index. Default index class is `IndexFlat`. metric_type (`int`, *optional*): Type of metric. Ex: `faiss.faiss.METRIC_INNER_PRODUCT` or `faiss.METRIC_L2`. custom_index (`faiss.Index`, *optional*): Custom Faiss index that you already have instantiated and configured for your needs. batch_size (`int`, *optional*): Size of the batch to use while adding vectors to the FaissIndex. Default value is 1000. <Added version="2.4.0"/> train_size (`int`, *optional*): If the index needs a training step, specifies how many vectors will be used to train the index. faiss_verbose (`bool`, defaults to False): Enable the verbosity of the Faiss index. dtype (`numpy.dtype`): The dtype of the numpy arrays that are indexed. Default is np.float32. ) r�r�r�r�r�r�r�r r�N)r|�$add_faiss_index_from_external_arraysr�) rur�r�r�r�r�r�r�r r�r�r}s �rvr�z,Dataset.add_faiss_index_from_external_arrays�sZ���^ ���4�4�+�2�2�5�9�9�!��)�#�%�!�!�'� 5� � � � � rx�host�port� es_clientzelasticsearch.Elasticsearch� es_index_name�es_index_configc ���|�d|g���5t���|||||||���ddd��n #1swxYwY|S)a�Add a text index using ElasticSearch for fast retrieval. This is done in-place. Args: column (`str`): The column of the documents to add to the index. index_name (`str`, *optional*): The `index_name`/identifier of the index. This is the index name that is used to call [`~Dataset.get_nearest_examples`] or [`~Dataset.search`]. By default it corresponds to `column`. host (`str`, *optional*, defaults to `localhost`): Host of where ElasticSearch is running. port (`str`, *optional*, defaults to `9200`): Port of where ElasticSearch is running. es_client (`elasticsearch.Elasticsearch`, *optional*): The elasticsearch client used to create the index if host and port are `None`. es_index_name (`str`, *optional*): The elasticsearch index name used to create the index. es_index_config (`dict`, *optional*): The configuration of the elasticsearch index. Default config is: ``` { "settings": { "number_of_shards": 1, "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}}, }, "mappings": { "properties": { "text": { "type": "text", "analyzer": "standard", "similarity": "BM25" }, } }, } ``` Example: ```python >>> es_client = elasticsearch.Elasticsearch() >>> ds = datasets.load_dataset('crime_and_punish', split='train') >>> ds.add_elasticsearch_index(column='line', es_client=es_client, es_index_name="my_es_index") >>> scores, retrieved_examples = ds.get_nearest_examples('line', 'my new query', k=10) ``` N)r�r�)r�r�r�r�r�r�r�)r�r|�add_elasticsearch_index) rur�r�r�r�r�r�r�r}s �rvr�zDataset.add_elasticsearch_index3s����p� � �D�6�(� � ;� ;� � � �G�G� +� +��%���#�+� /� ,� � � � � � � � � � � � � � ���� � � � �� s�)A�A�A�itemc�,�tjd�|���D����}t|jjt j|j��g��\}}t|jj|kr|j � |j ��n|j |� |j ��g��}|j �d}nhtjt!|j ��gtj�����}tj|gdg���} t|j | g��}|j���} | j�|��t-|| j��}t/|| |j||���S)aoAdd item to Dataset. <Added version="1.7"/> Args: item (`dict`): Item data to be added. Returns: [`Dataset`] Example: ```py >>> from datasets import load_dataset >>> ds = load_dataset("cornell-movie-review-data/rotten_tomatoes", split="validation") >>> new_review = {'label': 0, 'text': 'this movie is the absolute worst thing I have ever seen'} >>> ds = ds.add_item(new_review) >>> ds[-1] {'label': 0, 'text': 'this movie is the absolute worst thing I have ever seen'} ``` c��i|] \}}||g�� Sr�r�r�s rvr�z$Dataset.add_item.<locals>.<dictcomp>�s ��/P�/P�/P�4�1�a��A�3�/P�/P�/PrxNrPr�r}r�)rNr�r�r0rsr�r+rsrKrSrqrvrurrrWr�r�r�r�ror�r�rTr�rp) rur�r�� item_table� dset_features� item_featuresrDrm�item_indices_array�item_indices_tableros rv�add_itemzDataset.add_itemwsy��2#�.�/P�/P�4�:�:�<�<�/P�/P�/P�Q�Q� �'6� �Z� �(�"<�Z�=N�"O�"O� P�( �( �$� �}��?C�z�?R�Vc�?c�?c�� ��� � :�;�;�;�im�is���� � :�;�;� � � �� �=� � �M�M�!#��3�t�z�?�?�*;�"�)�+�+�!N�!N�!N� �!.�!:�<N�;O�Xa�Wb�!c�!c�!c� �)�4�=�:L�*M�N�N�M��y�~�~���� � ���]�+�+�+�-�e�T�]�C�C��� ���*�'�'�  � � � rx�label2id� label_columnc�(�����|jjvr td��d|jj�d����|jj�}t |t ��s~t |t��rt |jt ��sOtdt j �dtj �dt j �d|�dt|��j �d � ���tt�� ��d �� �����t������}d ��� ��D���t |t ��r|jn |jj�t |t ��r���fd �}n���fd�}|j}t |t ��rt t#|��|���n*tt t#|��|�����|�<|�||dd���S)acAlign the dataset's label ID and label name mapping to match an input `label2id` mapping. This is useful when you want to ensure that a model's predicted labels are aligned with the dataset. The alignment in done using the lowercase label names. Args: label2id (`dict`): The label name to ID mapping to align the dataset with. label_column (`str`): The column name of labels to align on. Example: ```python >>> # dataset with mapping {'entailment': 0, 'neutral': 1, 'contradiction': 2} >>> ds = load_dataset("nyu-mll/glue", "mnli", split="train") >>> # mapping to align with >>> label2id = {'CONTRADICTION': 0, 'NEUTRAL': 1, 'ENTAILMENT': 2} >>> ds_aligned = ds.align_labels_with_mapping(label2id, "label") ``` rlrmrnz5Aligning labels with a mapping is only supported for z column or z column with the inner type z , and column z is of type rVc��|dS)Nr"r�)r�s rvrz3Dataset.align_labels_with_mapping.<locals>.<lambda>�s ��$�q�'�rx)r�c�>�i|]\}}|���|��Sr��r>r�s rvr�z5Dataset.align_labels_with_mapping.<locals>.<dictcomp>�s&��>�>�>�T�Q��A�G�G�I�I�q�>�>�>rxc�R���fd�|�D��}�fd�|D��|�<|S)Nc�R��g|]#}|��|�����nd��$Srrr��r��label_id�int2str_functions �rvr�zPDataset.align_labels_with_mapping.<locals>.process_label_ids.<locals>.<listcomp>�sM���$�$�$� �;C�:N�$�$�X�.�.�4�4�6�6�6�TX�$�$�$rxc�(��g|]}|��|nd��Srrr��r�� label_namer�s �rvr�zPDataset.align_labels_with_mapping.<locals>.process_label_ids.<locals>.<listcomp>�s5���'�'�'�Q[�J�,B�H�Z�(�(��'�'�'rxr��r��dset_label_namesr�r�r�s ���rv�process_label_idsz<Dataset.align_labels_with_mapping.<locals>.process_label_ids�se���$�$�$�$�$)�,�$7�$�$�$� �'�'�'�'�_o�'�'�'��l�#�� rxc�R���fd�|�D��}�fd�|D��|�<|S)Nc�,��g|]}�fd�|D����S)c�R��g|]#}|��|�����nd��$Srrr�r�s �rvr�z[Dataset.align_labels_with_mapping.<locals>.process_label_ids.<locals>.<listcomp>.<listcomp>�s=���n�n�n�^f�8�;O�%�%�h�/�/�5�5�7�7�7�UY�n�n�nrxr�)r��seqr�s �rvr�zPDataset.align_labels_with_mapping.<locals>.process_label_ids.<locals>.<listcomp>�s?���$�$�$��o�n�n�n�jm�n�n�n�$�$�$rxc�,��g|]}�fd�|D����S)c�(��g|]}|��|nd��Srrr�r�s �rvr�z[Dataset.align_labels_with_mapping.<locals>.process_label_ids.<locals>.<listcomp>.<listcomp>�s)���d�d�d�R\�Z�-C�X�j�)�)��d�d�drxr�)r�r�r�s �rvr�zPDataset.align_labels_with_mapping.<locals>.process_label_ids.<locals>.<listcomp>�s?���'�'�'��e�d�d�d�`c�d�d�d�'�'�'rxr�r�s ���rvr�z<Dataset.align_labels_with_mapping.<locals>.process_label_ids�se���$�$�$�$�$�\�2�$�$�$� �'�'�'�'�/�'�'�'��l�#�� rx)� num_classesr~TzAligning the labels)r�r{r�)rqr/r�rsr�r�r*rr�r�r�r�r4r�r�r��int2strr�r)rur�r�� label_feature� label_namesr�r�r�s `` @rv�align_labels_with_mappingz!Dataset.align_labels_with_mapping�s������. �t�z�6� 6� 6��i� �i�i�d�j�Ne�i�i�i�j�j� j�� �+�L�9� � �}�j� 1� 1� ��=�(�3�3� �8B�=�CX�Zd�8e�8e� ��t� �H[�t�t�hp�hy�t�t�Xb�Xk�t�t�zG�t�t�UY�Zg�Uh�Uh�Uq�t�t�t��� � ��x�~�~�/�/�5I�5I�J�J�J�K�K���8�=�=�?�?�+�+� �>�>�X�^�^�-=�-=�>�>�>��%/� �z�%J�%J� m�M� !� !�P]�Pe�Pm� � �m�Z� 0� 0� � � � � � � � � � � � � � � � ��=���-��4�4� W�J�3�{�#3�#3�;� G� G� G� G��*��[�1A�1A��U�U�U�V�V� ��� �x�x�)�H�d�Qf�x�g�g�grx)NNNN)NNNFr�)NNNFN)NNNFNN)NNFNNT)NNF)NN)F)NrO)r�FNNr�Nrr)NF)NFFNFr�FNFNNr�NFNNrNN)NFFNFr�FNFNr�NFNNNr)FNN)NFFNFr�FNNr�NNrNN)FNr�NFNN)FNr�N)Fr�FNNr�N)NNFNNr�N) NNTNNNFNNNr�NN)TFNr�)NFNTr_)r NNNFNNT) rzNNNNNNNNFNNT)NNNNNN)�r�r�r�r�rPrrFrJr�rwr�r+r�� classmethodr�r�rW�Bufferr�rQrRr�r�r�r�r�r�r'rrir�r�rK�TRAINrr�r�r�r�r�r�r�rr�r�r0rrrGr_r rrergr/r�r�rpr�r6r�rvr/r�rBr�r�r�r�r�r�r7r�rr��contextmanagerr�r6r�r�rr�r�rhr�rrr�r�rrrr<r�r�ror�r�r�r�r�r�r�� Sequence_r�r�rrr�rrrrr'rr*r,r0r5r<r@rDrrQrUr]rryrr�r�r�r�r�r�r�r�r�� __classcell__)r}s@rvr�r�ts"%�������-�-� '+�&*�)-�%)� GT�GT��GT��{�#�GT�� �#� GT�  ��� GT� �c�]� GT�GT�GT�GT�R��(�������X�� �'+�&*�*.�� % �% ��% ��{�#�% �� �#� % � #�3�-� % � � % � �% �% �% ��[�% �N�'+�&*�.2� O�O�� �O��{�#�O�� �#� O� !���+� O� � O�O�O��[�O�>�(,�&*�&*�)-� A2�A2� �L�A2��8�$�A2��{�#� A2� � �#� A2� !��� A2� �A2�A2�A2��[�A2�F�(,�&*�&*� '2�'2� �'2��8�$�'2��{�#� '2� � �#� '2� � '2�'2�'2��[�'2�R�(,�&*�&*� ;5�;5��;5��8�$�;5��{�#� ;5� � �#� ;5� � ;5�;5�;5��[�;5�z�(,�&*�&*� =�=��d��=��8�$�=��{�#� =� � �#� =� � =�=�=��[�=�@�'+�'+��$�"&� 2�2��X�t�H�~�5�6�2�� �#�2��8�$�2�� 2� � 2� �3�-� 2�2�2��\�2�h�(,��$�%)�"&�!�K�H�H��H��8�$�H��H�� H� �T�N� H� �3�-� H��H�H�H��\�H�T�'+�'+��$�#�"&�6�6��X�t�H�~�5�6�6�� �#�6��8�$�6�� 6� � 6� ��}� 6��3�-�6�6�6��\�6�p�'+�'+��$�'+�"&�8�8��X�t�H�~�5�6�8�� �#�8��8�$�8�� 8� � 8� �$�s�)�$� 8��3�-�8�8�8��\�8�t�'+�'+��$�"&� 2�2��X�t�H�~�5�6�2�� �#�2��8�$�2�� 2� � 2� �3�-� 2�2�2��\�2�h�'+�'+�$���%)�;�;� #�;�� �#�;��8�$�;�� ;� � ;� � ;�#�;�;�;��\�;�z�(,��$� 7�7� �3�3�3� 4�7� �3�6�8R�Th�h� i�7��8�$�7�� 7� � 7�7�7��\�7�r��� ��� ������59�$(�"&�*.� WM�WM��WM�!��s�C�x��1�WM��S�M� WM� �3�-� WM� "�$�� WM�WM�WM�WM�r�6�S�6��6�3�6�Ya�bf�Yg�6�6�6��\�6�4�T�C�T�D�T�T�T��\�T� �*.�*.�v�v��v� ���v�"�$��v� � v�v�v��\�v�p��e�����X��&�P�T�$�Z�P�P�P��X�P�"� &�S� &� &� &��X� &��#�#�#�#�#��X�#� � '�d�3�i� '� '� '��X� '�� �u�S�#�X�� � � ��X� � A�S�A�T�A�A�A�A�>I�I�#�I�d�I�y�I�I�I�I�V��5�)�)�)�*�*�x��}�*�i�*�*�*�*�)�*�^%)�$�/3�)-�+/�"&�Q�Q��Q��S�M�Q�� Q� '�t�n� Q� "�#�� Q�$�C�=�Q��3�-�Q� �Q�Q�Q�Q�f��5�)�)�)�''�''�#�''� �''�h�WZ�m�''�gp�''�''�''�*�)�''�R���5�)�)�)�3�3�5��d�3�i��+@�3�S[�\_�S`�3�lu�3�3�3�*�)��_�3�j��5�)�)�)�`d�?�?�$'�?�:=�?�PX�Y\�P]�?� �?�?�?�*�)�?�B��5�)�)�)�@�@�T�#�s�(�^�@�h�WZ�m�@�gp�@�@�@�*�)�@�D���5�)�)�)�,�,�5��d�3�i��+@�,�S[�\_�S`�,�lu�,�,�,�*�)��_�,�\���"���:��s��T�����@t�t�t�� � ��X� ���#�"&�#(� n�n��s�m�n��$��n�!� n�n�n���n�>��4�(�(�(�#�"&�#(� L �L ��s�m�L ��$��L �!� L �L �L �)�(�L �\���@#'�#(� )o�)o��H�%�)o��$��)o�!� )o�)o�)o�)o�Z#�"&�#(� E�E��s�m�E��$��E�!� E�E�E�E�T#'�#(� .�.��H�%�.��$��.�!� .�.�.�.�` �E�#�u�c�8�C�=�"@�A� �PU�VZ�\`�V`�Pa� � � � �(� �u�S�%��#��%>�?� �D� � � ��X� �� �s� �t� � � ��X� �"�"�"�\��\�$�\�\�\�\� #$�S�#$�#$�#$�#$�J����(,�"��9=��$(� %�:>�$�/3�)-�+/�'+�!&�$(�"&�>�)-�"�)l�l��8�$�l��l�� l�  ��c�4��9�n� 5�6� l� � l��S�M�l��l�!��s�D��I�~�!6�7�l��l�'�t�n�l�"�#��l�$�C�=�l��8�$�l��l� �D�>�!l�"�3�-�#l�$�%l�&"�#��'l�(�s�m�)l�* �+l�l�l��_�l�\ �(,�"��-1��$(� %�.2�$�)-�+/�'+�!&�$(�)-�"��%w$�w$��w$��8�$�w$��w$�� w$�  ��S� �*� w$� � w$��S�M�w$��w$�!��c��+�w$��w$�"�#��w$�$�C�=�w$��8�$�w$��w$��D�>�w$� "�#��!w$�"�s�m�#w$�$�%w$�& �%��T�5��i��#8�8�9� :�'w$�w$�w$��\�w$�r ���5�)�)�)�!&�"&�)-� / �/ ��/ ��/ ��3�-� / � "�#�� / � � / �/ �/ �*�)��_�/ �b����%X�%X�%X�bi���� (,�"��9=��$(�$�/3�)-�+/�$(�"&�>�)-�"�!I�I��8�$�I��I�� I�  ��c�4��9�n� 5�6� I� � I��S�M�I��I�'�t�n�I�"�#��I�$�C�=�I��D�>�I��3�-�I��I�"�#��I� �s�m�!I�" �#I�I�I����_�I�V���5�9J�8K�L�L�L� %�)-�+/�'+�!&�"&�)-�, �, ��, �"�#��, �$�C�=� , � �8�$� , � � , ��3�-�, �"�#��, � �, �, �, �M�L��_�, �`26�.2�%)�  � �!)�#�� �!���+� ��c�]�  � �  � � � �>���5�9R�8S�T�T�T� %�15�+/�)-� U �U ��U ��U �"*�#�� U � $�C�=� U � "�#�� U � �U �U �U �U�T��_�U �n���5�)�)�)� *.� 4�4��4��4�"�#�� 4� � 4�4�4�*�)��_�4�l���5�9R�8S�T�T�T� %�15�+/�)-� gu�gu��gu��gu�"*�#�� gu� $�C�=� gu� "�#�� gu� �gu�gu�gu�U�T��_�gu�R0�c�0�i�0�0�0�0�<i��i� �i�i�i�i�@%�c�%�i�%�%�%�%�.���5�9O�Qj�8k�l�l�l�16�&�$�/3�15�+/�)-�| �| ��C��3��/�0�| ��t�Y�t�_�,�-�| �� | � � | � '�t�n� | �"*�#��| �$�C�=�| �"�#��| � �| �| �| �m�l��_�| �|����4�@V�Xq�?r���� #�37�$�/3�15�+/�)-�~ �~ ��s�m�~ ��B�I�/�0�~ �� ~ � '�t�n� ~ � "*�#�� ~ �$�C�=�~ �"�#��~ � �~ �~ �~ ����_�~ �@���� �2�4J�K�o�o�o� ���.2�.2��,0�"�37�$�/3�7;�6:�+/�/3�.2�RG�RG����T�)�*�RG��%��d�*�+�RG�� RG� %�S�M� RG� �s�m� RG��B�I�/�0�RG��RG�'�t�n�RG�(0��}�RG�'/�s�m�RG�$�C�=�RG� (��}�RG�'�s�m�RG� �RG�RG�RG� ���_�RG�p �$�15�+/�J �J ��J ��J �� J � � J � "*�#�� J �$�C�=�J � �J �J �J �J �^%)�"&�*.� 9�9��8�X�-�.�9��S�M�9��3�-� 9� "�$�� 9� �9�9�9�9�v��(�3�-��5��x�PT�~�AU�;V�����,������*%)�"&�*.� <�<��8�X�-�.�<��S�M�<��3�-� <� "�$�� <� �<�<�<�<�~AF�%�%�"�3�-�%�9=�%� �r�|�X�b�l�3�3� 4�%�%�%�%�R%)��+/�� :e�:e��S�M�:e��:e�#�4�.� :e� � :e� �~�x��7�7� 8� :e�:e�:e�:e�~%)�*.� %�%��8�X�-�.�%��S�M�%�"�$�� %� � %�%�%�%�V%)� /e�/e��/e��3�6�8R�Th�h� i�/e��S�M� /e� � /e�/e�/e�/e�b�#�����@�*�T�)�_�*�#�*�*�*��\�*� �<�3�<�<�<��\�<�{�{�h�s�m�{�DU�{�{�{�{�@�#�#�"&�$)�48�$(�%)�M8�M8��M8��M8���}� M8� ��}� M8� �3�-� M8��D�>�M8�!��s�C�x��1�M8��S�M�M8�#�M8� �t�&�'��c�1� 2�M8�M8�M8�M8�d%�&*�#�"&�(,�,0�"&�#�"&�$)�48�$(�%)�[�[��[��[��d�^� [� ��}� [� �3�-� [�!�� �[�%�S�M�[��$��[���}�[��3�-�[��D�>�[�!��s�C�x��1�[��S�M�[�#�[� �![�[�[�[�z ���5�)�)�)�os�.l�.l��.l�!&�t�R�X�~�!6�.l�IL�.l�W_�`k�Wl�.l�.l�.l�*�)��_�.l�f%)� $�(,�%)�04��$(�#��j�N�N��N��S�M�N��� � N� !�� � N� �c�]� N��}�-�N��N��S�M�N��N�N�N�N�N�N�h!%�(,�%)�04��$(�#��j�9 �9 ���9 ��9 ��� � 9 � !�� � 9 � �c�]� 9 ��}�-�9 ��9 ��S�M�9 ��9 �9 �9 �9 �9 �9 �|%)�"�"�=A�'+�*.�B�B��B��S�M�B��s�m� B� �s�m� B� �9�:� B� ��}�B�"�$��B�B�B�B�B�B�H���5�)�)�)�2 �T�2 �C�2 �2 �2 �*�)��_�2 �hKh�$�Kh�c�Kh�i�Kh�Kh�Kh�Kh�Kh�Kh�Kh�Khrxr��dsetsrorpr�c ���� �td��D����r d��D���n�dS|dkrtd��D����nCt�fd��D����std���t d��D�����dj� t� fd��D����ri� t �d ��d �}td ��D�����r�|dk�r9g}tt�����D]m}�|j �>�|� tt�|�������|<|� �|j ���nd}tt�����D]4}||||��||<|t�|j ��z }�5d �|D��}|rt|��}n�tjgt#jdt#j��i�����}nbt���dkr�dj }nAtt�����D]}�|����|<� d }nd }td��D��|���} |dkrt+d��D����} n d��D��} t-| d�| D����} |�t/jd��D����}t3d�d��D����t6||d���} t9| |||| ���} | jdi� ��| S)a� Converts a list of :class:`Dataset` with the same schema into a single :class:`Dataset`. When you concatenate on axis 0, missing data are filled with None values. Args: dsets (`List[datasets.Dataset]`): List of Datasets to concatenate. info (:class:`DatasetInfo`, optional): Dataset information, like description, citation, etc. split (:class:`NamedSplit`, optional): Name of the dataset split. axis (``{0, 1}``, default ``0``, meaning over rows): Axis to concatenate over, where ``0`` means over rows (vertically) and ``1`` means over columns (horizontally). *New in version 1.6.0* Example: ```py >>> ds3 = _concatenate_map_style_datasets([ds1, ds2]) ``` c3�,K�|]}|jdkV��dS�rNr��r�r�s rvr]z2_concatenate_map_style_datasets.<locals>.<genexpr>s)���� /� /��4�=�1� � /� /� /� /� /� /rxc�(�g|]}|jdk� |��Sr�r�rs rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>s$��=�=�=�$�4�=�1�+<�+<��+<�+<�+<rxrc��g|] }|j�� Sr�rIrs rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>s��*K�*K�*K�T�4�=�*K�*K�*Krxc3�D�K�|]}|j�djkV��dSrr�)r�r�r�s �rvr]z2_concatenate_map_style_datasets.<locals>.<genexpr> s1�����H�H�$�4�=�E�!�H�$5�5�H�H�H�H�H�Hrxz*Number of rows must match for all datasetsc�0�g|]}|jjD]}|���Sr�ri)r�r�rGs rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>"s*��`�`�`�$�� �H_�`�`�H�X�`�`�`�`rxc3�.�K�|]}|j�kV��dSrr)r)r�r�rs �rvr]z2_concatenate_map_style_datasets.<locals>.<genexpr>&s*����� 3� 3�T�4�;�&� � 3� 3� 3� 3� 3� 3rxz]Some of the datasets have disparate format. Resetting the format of the concatenated dataset.c���|dkr|S|d}tj|tj|tj�������}t j|gdg���S)Nrr�rPr})r�r#rW�scalarr�rNr�)rDr3r�� new_arrays rv�apply_offset_to_indices_tablezF_concatenate_map_style_datasets.<locals>.apply_offset_to_indices_table*sZ�� �Q�;�;��L��)�$�E���u�b�i��R�Y�[�[�&I�&I�&I�J�J�I� �,�i�[�� �L�L�L� Lrxc3�(K�|] }|jduV��dSrr)rrrs rvr]z2_concatenate_map_style_datasets.<locals>.<genexpr>3s)���� 7� 7��4�=�� $� 7� 7� 7� 7� 7� 7rxNc�8�g|]}t|��dk�|��Sr�r1)r��ts rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>Ds#��F�F�F�A�3�q�6�6�A�:�:�a�:�:�:rxr�r�r"c��g|] }|j�� Sr�rars rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>Ss��8�8�8�$�4�:�8�8�8rxr�c��g|] }|j�� Sr�rIrs rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>Us��(I�(I�(I�4���(I�(I�(Irxc��g|] }|j�� Sr�rIrs rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>Ws��9�9�9�4���9�9�9rxc�H�i|]}|���D]\}}||�� � Sr�r�)r�r�r�rns rvr�z3_concatenate_map_style_datasets.<locals>.<dictcomp>Xs;��1r�1r�1r�8�ai�ao�ao�aq�aq�1r�1r�Y]�YZ�\]�!�Q�1r�1r�1r�1rrxc��g|] }|j�� Sr�rYrs rvr�z3_concatenate_map_style_datasets.<locals>.<listcomp>\s��&C�&C�&C�T�t�y�&C�&C�&Crxr(c3�$K�|] }|jV�� dSrrr:rs rvr]z2_concatenate_map_style_datasets.<locals>.<genexpr>^s%����4�4�d��!�4�4�4�4�4�4rxror�r�)r�r1rar�rcrrror�r�rrr�r�rqrSrNrSrWrKr�ror0rTrF� from_merger>rr?r�r6)r�rorpr�r �indices_tablesr�r3rmrD� features_listr?�concatenated_datasetrs` @rvr?r?�s����6 � /� /�� /� /� /�/�/��=�=�%�=�=�=����Q�x�� �q�y�y�)�*K�*K�U�*K�*K�*K�L�L�L�L��H�H�H�H�%�H�H�H�H�H� K��I�J�J� J��`�`�%�`�`�`�a�a�a��1�X�_�F� � 3� 3� 3� 3�U� 3� 3� 3�3�3�u���� � �s�t�t�t�M�M�M� � 7� 7�� 7� 7� 7�7�7�� �1�9�9� �N��3�u�:�:�&�&� 9� 9����8�$�,�$�Q�x�D�D�U�3�u�UV�x�=�=�EY�EY�Z�Z�E�!�H��%�%�e�A�h�&7�8�8�8�8��F��3�u�:�:�&�&� .� .��$A�$A�.�QR�BS�U[�$\�$\��q�!��#�e�A�h�n�-�-�-���G�F��F�F�F�N�� j� -�n� =� =� � � -� :�2�b�i�QZ�\^�\d�\f�\f�Pg�Fh�Fh� i� i� i� � ��5�z�z�Q��� %�a�� 1� � ��s�5�z�z�*�*�:�:�A�$�Q�x�7�7�9�9�E�!�H�H� $� � �� � �8�8�%�8�8�8�t� D� D� D�E� �q�y�y�'�(I�(I�5�(I�(I�(I�J�J� � �9�9�5�9�9�9� � )�%�1r�1r�}�1r�1r�1r� s� s�E� �|��%�&C�&C�U�&C�&C�&C�D�D��$� ���4�4�e�4�4�4�4�4�6U�`d�ot�Wu�Wu���K� #� � ��#�� ����$��#�-�-�f�-�-�-� �rx�first_exhaustedr<� probabilitiesr��stopping_strategy�r� all_exhaustedc �����|dvr't|�dt�d�������t�||���}d��D��}tjdg|dd�z��} |dk} ��t| sr| �d d��tjt|�����dd ��z���� ��} �n\���tj tjt|�����dd ��tj |���d d����} | | z���� ��} n�tj t|��d ��} | r tjn tj} ���fd �}dgt���z}g} |��D]_}| | ��rnQ| �||| |z��||xxd z cc<||||kr d | |<d||<�`|j| fi|��S) a Interleave several map-style datasets (sources) into a single map-style dataset. The new dataset is constructed by alternating between the sources to get the examples. If `probabilities = None` (default) the new dataset is constructed by cycling between each source to get the examples. If `probabilities` is not `None, the new dataset is constructed by getting examples from a random source at a time according to the provided probabilities. Args: datasets (`List[Dataset]`): list of datasets to interleave probabilities (`List[float]`, optional, default None): If specified, the new dataset is constructed by sampling examples from one source at a time according to these probabilities. seed (`int`, optional, default None): The random seed used to choose a source for each example. info (:class:`DatasetInfo`, optional): Dataset information, like description, citation, etc. split (:class:`NamedSplit`, optional): Name of the dataset split. stopping_strategy (`str`, defaults to `first_exhausted`): Two strategies are proposed right now. By default, `first_exhausted` is an undersampling strategy, i.e the dataset construction is stopped as soon as one dataset has ran out of samples. If the strategy is `all_exhausted`, we use an oversampling strategy, i.e the dataset construction is stopped as soon as every samples of every dataset has been added at least once. Note that if the strategy is `all_exhausted`, the interleaved dataset size can get enormous: - with no probabilities, the resulting dataset will have max_length_datasets*nb_dataset samples. - with given probabilities, the resulting dataset will have more samples if some datasets have really low probability of visiting. **kwargs (additional keyword arguments): Keyword arguments to be passed to :meth:`datasets.Datasets.select` when selecting the indices used to interleave the datasets. Output: :class:`datasets.Dataset` rzR stopping strategy in `interleave_datasets` is not implemented yet with a list of rroc�,�g|]}t|����Sr�r1rs rvr�z2_interleave_map_style_datasets.<locals>.<listcomp>�s��.�.�.�T�s�4�y�y�.�.�.rxN�����rr"Fc3��K�tj����} d�|�t ���d����D��Ed{V���6)z]Get an infinite iterator that randomly samples the index of the source to pick examples from.Tc3�4K�|]}t|��V��dSrr)r�)r�r�s rvr]zN_interleave_map_style_datasets.<locals>.iter_random_indices.<locals>.<genexpr>�s(����b�b�q�C��F�F�b�b�b�b�b�brxr�)rg�pN)r�rr�choicer�)rr<rr�s ���rv�iter_random_indicesz;_interleave_map_style_datasets.<locals>.iter_random_indices�sg������)�'�'��-�-�C� c�b�b�C�J�J�s�8�}�}�4�S`�J�,a�,a�b�b�b�b�b�b�b�b�b�b� crxT)r�r�r?r��cumsum�reshaperr�r��tolistr rr��fullr�rar�r�r�)r<rr�rorprr8�concatenated_datasets�lengths�offsets� oversamplingr�� is_exhausted�bool_strategy_funcr&� current_index� source_idxs``` rv�_interleave_map_style_datasetsr3ms������D� D�D�D�� � H� H�tx�zB�CD�zE�uF�uF� H� H� � � � <�H�4�W\�]�]�]��/�.�X�.�.�.�G��i���g�c�r�c�l�*�+�+�G�%��7�L���\���?�?�1�b�)�)�B�I�c�'�l�l�,C�,C�,K�,K�B�PQ�,R�,R�R�[�[�]�]�d�d�f�f��� � ��&���3�w�<�<�0�0�8�8��Q�?�?���'�AR�AR�AZ�AZ�[\�^`�Aa�Aa�b�b���W�$�-�-�/�/�6�6�8�8����w�s�7�|�|�U�3�3� �(4�?�R�V�V���� c� c� c� c� c� c� c� ��c�(�m�m�+� ���-�-�/�/� .� .�J�"�!�,�/�/� ��� �N�N�=��4�w�z�7J�J� K� K� K� �*� %� %� %�� *� %� %� %��Z�(�G�J�,?�?�?�+/� �Z�(�,-� �j�)�� '� � '�� :� :�6� :� :�:rxr�r� world_sizec�2�|�||d���S)aD Split a dataset for the node at rank `rank` in a pool of nodes of size `world_size`. Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset. To maximize data loading throughput, chunks are made of contiguous data on disk if possible. Args: dataset ([`Dataset`]): The dataset to split by node. rank (`int`): Rank of the current node. world_size (`int`): Total number of nodes. Returns: [`Dataset`]: The dataset to be used on the node at rank `rank`. Tr�r.)r�rr4s rv� _split_by_node_map_style_datasetr6�s��" �=�=�J�d�t�=� L� L�Lrxrr{rr r �indices_mappingc ����|ra|�^}} } d} |r| | fz } |r| | fz } |g|�| �Ri|��} t| tjtjf��r| ���} �n|�^}} } g} |��|d�t �t t����������} t| ��D]H���fd��D��}d} |r | | �fz } |r| | fz } | � ||g| �Ri|�����Ino|}t |d��} t| ��D]H��fd�|D��}d} |r | | �fz } |r| | fz } | � |g|�| �Ri|�����Id�t| | ��D��}|�ctj |tj �����}|�d���|��}|���}d|iS)Nr�rc�.��i|]}|�|���Sr�r��r�r�r�r�s ��rvr�z2get_indices_from_mask_function.<locals>.<dictcomp> �#���?�?�?�#�3��c� �1� �?�?�?rxc� ��g|] }|��� Sr�r��r�r�r�s �rvr�z2get_indices_from_mask_function.<locals>.<listcomp>����9�9�9�v����9�9�9rxc��g|] \}}|�|�� Sr�r��r�r��to_keeps rvr�z2get_indices_from_mask_function.<locals>.<listcomp> �!��G�G�G�:�1�g�w�G�Q�G�G�GrxrPr��r�rWr�r�rqr�r�r7r�r�r�r@r�r�r�r��rr{rr r r7r7r rbr�rrg�maskr?r�r��inputr�r�r�s @@rvr�r��s������'M�!%����$��� � *� ��z� )�O� � '� ��w� &�O��x�?��?�/�?�?�?�Y�?�?�� �d�R�X�r��7� 8� 8� $��>�>�#�#�D��"&����$��� � � ��)�E��u�T�$�u�z�z�|�|�*<�*<�%=�%=�>�?�?�L��<�(�(� N� N��?�?�?�?�?��?�?�?��"$���5�#��� �}�4�O��/�#��w�.�O�� � �H�H�W�L��L�L�L�)�L�L�M�M�M�M� N�#)�G��w�q�z�?�?�L��<�(�(� M� M��9�9�9�9��9�9�9��"$���5�#��� �}�4�O��/�#��w�.�O�� � �H�H�K�e�K�o�K�K�K��K�K�L�L�L�L�G�G��W�d�);�);�G�G�G�M��"����R�Y�[�[�A�A�A� �'�.�.�q�1�1�6�6�}�E�E� �%�/�/�1�1� � �}� %�%rxc ��0��K�|rg|�^}} } d} |r| | fz } |r| | fz } |g|�| �Ri|���d{V��} t| tjtjf��r| ���} �n'|�^}} } g} |��|d�t �t t����������} t| ��D]N���fd��D��}d} |r | | �fz } |r| | fz } | � ||g| �Ri|���d{V�����Onu|}t |d��} t| ��D]N��fd�|D��}d} |r | | �fz } |r| | fz } | � |g|�| �Ri|���d{V�����Od�t| | ��D��}|�ctj |tj �����}|�d���|��}|���}d|iS) zsame function but asyncr�Nrc�.��i|]}|�|���Sr�r�r:s ��rvr�z8async_get_indices_from_mask_function.<locals>.<dictcomp>Hr;rxc� ��g|] }|��� Sr�r�r=s �rvr�z8async_get_indices_from_mask_function.<locals>.<listcomp>Tr>rxc��g|] \}}|�|�� Sr�r�r@s rvr�z8async_get_indices_from_mask_function.<locals>.<listcomp>[rBrxrPr�rCrDs @@rvr�r�(s��������'S�!%����$��� � *� ��z� )�O� � '� ��w� &�O��X�E�v�E��E�E�E�9�E�E�E�E�E�E�E�E�� �d�R�X�r��7� 8� 8� $��>�>�#�#�D��"&����$��� � � ��)�E��u�T�$�u�z�z�|�|�*<�*<�%=�%=�>�?�?�L��<�(�(� T� T��?�?�?�?�?��?�?�?��"$���5�#��� �}�4�O��/�#��w�.�O�� � �(�(�7�"R�_�"R�"R�"R� �"R�"R�R�R�R�R�R�R�S�S�S�S� T�#)�G��w�q�z�?�?�L��<�(�(� S� S��9�9�9�9��9�9�9��"$���5�#��� �}�4�O��/�#��w�.�O�� � �(�(�"Q�E�"Q�O�"Q�"Q�"Q�y�"Q�"Q�Q�Q�Q�Q�Q�Q�R�R�R�R�G�G��W�d�);�);�G�G�G�M��"����R�Y�[�[�A�A�A� �'�.�.�q�1�1�6�6�}�E�E� �%�/�/�1�1� � �}� %�%rx)NNr)NNNNrrr)�r�r�r�r�r�r�r�rNr�r�rr�r�r�r�r6rr$� collectionsr�collections.abcrrrrr�r� functoolsr r �ior r r �pathlibrrr�typingrrrrrrr�fsspecr�r��pandasrQ�pyarrowrW�pyarrow.compute�computer�� fsspec.corer�huggingface_hubrrrrrr�huggingface_hub.hf_apir� multiprocessr �tqdm.contrib.concurrentr!r(r#� arrow_readerr$� arrow_writerr%r&r�r'�#download.streaming_download_managerr(r�r)r*r+r,r-r.�features.featuresr/r0r1r2r3r4� filesystemsr5r?r6r7r8r9r:r;r<r=r>r?� formattingr@rArBrC�formatting.formattingrDrErorFrG�namingrH�searchrIr�rJrKrLrMrDrNrOrPrQrRrSrTrUrVrWrX�utilsrYrZr�utils.file_utilsr[�utils.info_utilsr\�utils.metadatar]�utils.py_utilsr^r_r`rarbrc�utils.stratifyrd�utils.tf_utilsrerfrg� utils.typingrhri�sqlite3rJr[�pyspark� sqlalchemyrrkrZrl� get_loggerr�rr�rnr�r�r)rBrTrYr�r�rcrhrjr�r�r?rr3r6r�r�r�r�rxrv�<module>rps; �� .�-��������� � � � ������������� � � � � � � � � � � � ����� � � � � � � � � � � � ����� � � � ���������������7�7�7�7�7�7�7�7�7�7�1�1�1�1�1�1�������$�$�$�$�$�$�$�$���������������������������������������������� � � �������������������!�!�!�!�!�!�����������������,�+�+�+�+�+�������.�.�.�.�.�.�������%�%�%�%�%�%�=�=�=�=�=�=�=�=�)�)�)�)�)�)�9�9�9�9�9�9�P�P�P�P�P�P�P�P�P�P�P�P�P�P�P�P�P�P�����������������.�-�-�-�-�-� � � � � � � � � � � � � � � � � � � � � � � � �]�\�\�\�\�\�\�\�\�\�\�\�A�A�A�A�A�A�A�A�/�/�/�/�/�/�/�/�������"�"�"�"�"�"�;�;�;�;�;�;�;�;�;�;�;�;� � � � � � � � � � � � � � � � � � � � � � � � � � �������"�"�"�"�"�"�3�3�3�3�3�3�.�.�.�.�.�.�+�+�+�+�+�+�����������������F�E�E�E�E�E�\�\�\�\�\�\�\�\�\�\�,�,�,�,�,�,�,�,��2��N�N�N������N�N�N�����)�)�)�)�)�)�1�1�1�1�1�1� �� �H� %� %��S�;� E"�E"�E"�E"�E"�E"�E"�E"�Pr�r�r�r�r�r�r�r�j  � � � � �9� � � �(�(�(�V �� �(� � � � � h�5� h� h� h� h�v�d�3�i�v�v�v�v�T�T�T�  � � � � �i� � � � DVh�DVh�DVh�DVh�DVh���0F�DVh�DVh�DVh�Rl#'�"&�� o �o � ��=�o � �;� �o � �J� �o � � o �o �o �o �h,0��"&�"&�EV� g;�g;��9�o�g;��D��K�(�g;� �3�-�g;� �;� � g;� �J� � g;� �A�B� g;��g;�g;�g;�g;�TM�g�M�S�M�c�M�V]�M�M�M�M�:(,� 7&�7&��7&� �7&��7&�� 7&� �E�#�t�C�y�.�1�2� 7&� �e�_� 7&�7&�7&�7&�@(,� 8&�8&��8&� �8&��8&�� 8&� �E�#�t�C�y�.�1�2� 8&� �e�_� 8&�8&�8&�8&�8&�8&rx
Memory