� I�ggJ� ��dZddlmZddlZddlZddlmZddlZddlZddl Z ddl Z ddl Z ddl m Z ddlmZddlZddlZddlZddlmZddlmZd d lmZd d lmZmZd d lmZd d lmZm Z m!Z!m"Z"m#Z#d dl$m%Z%m&Z&d dlm'Z'd dlm(Z(e)ed��Z*e!e"d�Z+da,ddhZ-e�e e+d<d dl.m/Z/e#e+d<da,dZ0e j1��Z2d�Z3de3iZ4e&d���e&d���e&d���e&d���e&d���e&d���e&d���e&d���d�Z5dZ6dZ7d�Z8e5d e5d!e5d"fd#�Z9e5d e5d!e5d"fd$�Z:Gd%�d&��Z;Gd'�d(e;��Z<dZ=e)ed)��rBej>�?d*d+���@��pdZAeA�ejBeA�,��Z=Gd-�d.eC��ZDd/ZEd0ZFd1ZGd=d3�ZHd4�ZId5�ZJGd6�d7eC��ZKd=d8�ZLd>d:�ZMGd;�d<e��ZNdS)?z+ Helpers for embarrassingly parallel code. �)�divisionN)�sqrt)�uuid4)�Integral)� nullcontext)� TimeoutError�)�mp)�Logger�short_format_time)�memstr_to_bytes)�FallbackToBackend�MultiprocessingBackend�ThreadingBackend�SequentialBackend� LokyBackend)� eval_expr� _Sentinel)�AutoBatchingMixin)�ParallelBackendBase�pypy_version_info)� threading� sequentialr�multiprocessing�loky)rc�|� ddlm}td|��dS#t$r}d}t|��|�d}~wwxYw)zDRegister Dask Backend if called with parallel_config(backend="dask")r )�DaskDistributedBackend�daskz�To use the dask.distributed backend you must install both the `dask` and distributed modules. See https://dask.pydata.org/en/latest/install.html for more information.N)�_daskr�register_parallel_backend� ImportError)r�e�msgs �_/home/asafur/pinokio/api/open-webui.git/app/env/lib/python3.11/site-packages/joblib/parallel.py�_register_daskr%Jsh��&�1�1�1�1�1�1�!�&�*@�A�A�A�A�A�� �&�&�&����#���A�%����� &���s�� ;�6�;r)� default_value�1M�r)�backend�n_jobs�verbose� temp_folder� max_nbytes� mmap_mode�prefer�require)� processes�threadsN)� sharedmemNc�l�|t|ur|S||t|ur||S|jS)z�Return the value of a parallel config parameter Explicitly setting it in Parallel has priority over setting in a parallel_(config/backend) context manager. )�default_parallel_configr&)�param�context_config�keys r$�_get_config_paramr9nsF��  �+�C�0�0�0�� ��c��"9�#�">�>�>��c�"�"� � ��r/r0r+c�j�t|||��\}}ttd|d��}||fS)�!Return the active default backendr*)�_get_active_backendr9r5)r/r0r+r)�configr*s r$�get_active_backendr?�s@�� *�&�'�7�C�C�O�G�V� ���)�6�8���F� �F�?�r:c�<�ttdt��}ttd|d��}t||d��}t||d��}t||d��}|tvrt d|�dt�����|t vrt d|�d t �����|d kr|d krt d ���d }|�ttd���}d}|j }t|dd��}t|dd��}|d ko| } | | o|dko| z} | rjtt|���} |dkr,|r*td| j j �d|j j �d���|���} d| d<| | fS||fS)r<r>r)r/r0r+zprefer=z. is not a valid backend hint, expected one of zrequire=z4 is not a valid backend constraint, expected one of r1r3zJprefer == 'processes' and require == 'sharedmem' are inconsistent settingsTNr�� nesting_levelF� uses_threads�supports_sharedmemr2� zUsing z as joblib backend instead of z8 as the latter does not provide shared memory semantics.r r*)�getattr�_backendr5r9�VALID_BACKEND_HINTS� ValueError�VALID_BACKEND_CONSTRAINTS�BACKENDS�DEFAULT_BACKENDrB�DEFAULT_THREAD_BACKEND�print� __class__�__name__�copy) r/r0r+�backend_configr)�explicit_backendrBrCrD� force_threads�sharedmem_backend� thread_configs r$r=r=�sR���X�x�1H�I�I�N��� �*�N�I���G��v�~�x� @� @�F�����C�C�G�����C�C�G� �(�(�(�� 5�f� 5� 5�2� 5� 5� � � ��/�/�/�� ;�w� ;� ;�8� ;� ;� � � ������K�!7�!7�� )� � � � ���� �?�+�!�<�<�<�� ���)�M��7�N�E�:�:�L� ��*>��F�F��� �+�F�4F�0F�M�� ��I��9�!4�I�\�9I��M��0�%�%;�<�'� � � �� �b�=�=�-�=� �J�*�4�=�J�J�-4�->�-G�J�J�J� � � � '�+�+�-�-� �"#� �h�� �-�/�/� �N� "�"r:c ��eZdZdZedfededededededed d d �d �Zd �Zd�Zd�Zd�Z d S)�parallel_configa�Set the default backend or configuration for :class:`~joblib.Parallel`. This is an alternative to directly passing keyword arguments to the :class:`~joblib.Parallel` class constructor. It is particularly useful when calling into library code that uses joblib internally but does not expose the various parallel configuration arguments in its own API. Parameters ---------- backend: str or ParallelBackendBase instance, default=None If ``backend`` is a string it must match a previously registered implementation using the :func:`~register_parallel_backend` function. By default the following backends are available: - 'loky': single-host, process-based parallelism (used by default), - 'threading': single-host, thread-based parallelism, - 'multiprocessing': legacy single-host, process-based parallelism. 'loky' is recommended to run functions that manipulate Python objects. 'threading' is a low-overhead alternative that is most efficient for functions that release the Global Interpreter Lock: e.g. I/O-bound code or CPU-bound code in a few calls to native code that explicitly releases the GIL. Note that on some rare systems (such as pyodide), multiprocessing and loky may not be available, in which case joblib defaults to threading. In addition, if the ``dask`` and ``distributed`` Python packages are installed, it is possible to use the 'dask' backend for better scheduling of nested parallel calls without over-subscription and potentially distribute parallel calls over a networked cluster of several hosts. It is also possible to use the distributed 'ray' backend for distributing the workload to a cluster of nodes. See more details in the Examples section below. Alternatively the backend can be passed directly as an instance. n_jobs: int, default=None The maximum number of concurrently running jobs, such as the number of Python worker processes when ``backend="loky"`` or the size of the thread-pool when ``backend="threading"``. This argument is converted to an integer, rounded below for float. If -1 is given, `joblib` tries to use all CPUs. The number of CPUs ``n_cpus`` is obtained with :func:`~cpu_count`. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used. For instance, using ``n_jobs=-2`` will result in all CPUs but one being used. This argument can also go above ``n_cpus``, which will cause oversubscription. In some cases, slight oversubscription can be beneficial, e.g., for tasks with large I/O operations. If 1 is given, no parallel computing code is used at all, and the behavior amounts to a simple python `for` loop. This mode is not compatible with `timeout`. None is a marker for 'unset' that will be interpreted as n_jobs=1 unless the call is performed under a :func:`~parallel_config` context manager that sets another value for ``n_jobs``. If n_jobs = 0 then a ValueError is raised. verbose: int, default=0 The verbosity level: if non zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it more than 10, all iterations are reported. temp_folder: str or None, default=None Folder to be used by the pool for memmapping large arrays for sharing memory with worker processes. If None, this will try in order: - a folder pointed by the ``JOBLIB_TEMP_FOLDER`` environment variable, - ``/dev/shm`` if the folder exists and is writable: this is a RAM disk filesystem available by default on modern Linux distributions, - the default system temporary folder that can be overridden with ``TMP``, ``TMPDIR`` or ``TEMP`` environment variables, typically ``/tmp`` under Unix operating systems. max_nbytes int, str, or None, optional, default='1M' Threshold on the size of arrays passed to the workers that triggers automated memory mapping in temp_folder. Can be an int in Bytes, or a human-readable string, e.g., '1M' for 1 megabyte. Use None to disable memmapping of large arrays. mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, default='r' Memmapping mode for numpy arrays passed to workers. None will disable memmapping, other modes defined in the numpy.memmap doc: https://numpy.org/doc/stable/reference/generated/numpy.memmap.html Also, see 'max_nbytes' parameter documentation for more details. prefer: str in {'processes', 'threads'} or None, default=None Soft hint to choose the default backend. The default process-based backend is 'loky' and the default thread-based backend is 'threading'. Ignored if the ``backend`` parameter is specified. require: 'sharedmem' or None, default=None Hard constraint to select the backend. If set to 'sharedmem', the selected backend will be single-host and thread-based. inner_max_num_threads: int, default=None If not None, overwrites the limit set on the number of threads usable in some third-party library threadpools like OpenBLAS, MKL or OpenMP. This is only used with the ``loky`` backend. backend_params: dict Additional parameters to pass to the backend constructor when backend is a string. Notes ----- Joblib tries to limit the oversubscription by limiting the number of threads usable in some third-party library threadpools like OpenBLAS, MKL or OpenMP. The default limit in each worker is set to ``max(cpu_count() // effective_n_jobs, 1)`` but this limit can be overwritten with the ``inner_max_num_threads`` argument which will be used to set this limit in the child processes. .. versionadded:: 1.3 Examples -------- >>> from operator import neg >>> with parallel_config(backend='threading'): ... print(Parallel()(delayed(neg)(i + 1) for i in range(5))) ... [-1, -2, -3, -4, -5] To use the 'ray' joblib backend add the following lines: >>> from ray.util.joblib import register_ray # doctest: +SKIP >>> register_ray() # doctest: +SKIP >>> with parallel_config(backend="ray"): # doctest: +SKIP ... print(Parallel()(delayed(neg)(i + 1) for i in range(5))) [-1, -2, -3, -4, -5] r)r*r+r,r-r.r/r0N)r*r+r,r-r.r/r0�inner_max_num_threadsc �X�ttdt��|_|j|| fi| ��}||||||||d�} |j���|_|j�d�| ���D����ttd|j��dS)Nr>)r*r+r,r-r.r/r0r)c�D�i|]\}}t|t���||��S�)� isinstancer)�.0�k�vs r$� <dictcomp>z,parallel_config.__init__.<locals>.<dictcomp>�s>��% �% �% ��Q���a��+�+�% � �q�% �% �% r:) rFrGr5�old_parallel_config�_check_backendrQrX�update�items�setattr) �selfr)r*r+r,r-r.r/r0rY�backend_params� new_configs r$�__init__zparallel_config.__init__ds���$+� �h� 7�$ �$ �� �&�$�%� �*� � �.<� � �� ��&�$�"����  �  � � $�7�<�<�>�>��� ��#�#�% �% �'�-�-�/�/�% �% �% � � � � ��(�D�$8�9�9�9�9�9r:c ���|tdur&|�t|��dkrtd���|St|t��r�|t vr�|t vrt |}|��n�|tvrCtj d|�dt�d�td���t tt |<n9td |�d tt � ���������t |d i|��}|�'|jj�d �}|js J|���||_|j�-|jd}|tdurd}n|j}||_|S) Nr)rzrinner_max_num_threads and other constructor parameters backend_params are only supported when backend is not None.�joblib backend '�3' is not available on your system, falling back to �.��� stacklevel�Invalid backend: �, expected one of z< does not accept setting the inner_max_num_threads argument.r\)r5�lenrIr]�strrK�EXTERNAL_BACKENDS�MAYBE_AVAILABLE_BACKENDS�warnings�warnrL� UserWarning�sorted�keysrOrP�supports_inner_max_num_threadsrYrBrb)rgr)rYrh�registerr#�parent_backendrBs r$rczparallel_config._check_backend�s��� �-�i�8� 8� 8�$�0�C��4G�4G�!�4K�4K� �0���� �N� �g�s� #� #� :��h�&�&��/�/�/�0��9�H��H�J�J�J�J�� 8�8�8��M�K�7�K�K�8G�K�K�K�#�#$� ���� )1��(A�H�W�%�%�$�5�G�5�5�!�(�-�-�/�/�2�2�5�5���� �w�'�9�9�.�9�9�G� � ,��$�-�2�2�2� ��9� >� >�3� >� >�9�,A�G� )� � � (�!�5�i�@�N��!8��!C�C�C� !� � � .� <� �$1�G� !��r:c��|jS�N)rX�rgs r$� __enter__zparallel_config.__enter__�s ���#�#r:c�.�|���dSr�)� unregister)rg�type�value� tracebacks r$�__exit__zparallel_config.__exit__�s�� �������r:c�<�ttd|j��dS)Nr>)rfrGrbr�s r$r�zparallel_config.unregister�s����(�D�$<�=�=�=�=�=r:) rP� __module__� __qualname__�__doc__r5rjrcr�r�r�r\r:r$rXrX�s�������I�I�X(� �2�':�'�x�0�'� �2�+�M�:�*�<�8�)�+�6�&�x�0�'� �2�"�':�':�':�':�':�R2�2�2�h$�$�$����>�>�>�>�>r:rXc�*��eZdZdZd�fd� Zd�Z�xZS)�parallel_backenda� Change the default backend used by Parallel inside a with block. .. warning:: It is advised to use the :class:`~joblib.parallel_config` context manager instead, which allows more fine-grained control over the backend configuration. If ``backend`` is a string it must match a previously registered implementation using the :func:`~register_parallel_backend` function. By default the following backends are available: - 'loky': single-host, process-based parallelism (used by default), - 'threading': single-host, thread-based parallelism, - 'multiprocessing': legacy single-host, process-based parallelism. 'loky' is recommended to run functions that manipulate Python objects. 'threading' is a low-overhead alternative that is most efficient for functions that release the Global Interpreter Lock: e.g. I/O-bound code or CPU-bound code in a few calls to native code that explicitly releases the GIL. Note that on some rare systems (such as Pyodide), multiprocessing and loky may not be available, in which case joblib defaults to threading. You can also use the `Dask <https://docs.dask.org/en/stable/>`_ joblib backend to distribute work across machines. This works well with scikit-learn estimators with the ``n_jobs`` parameter, for example:: >>> import joblib # doctest: +SKIP >>> from sklearn.model_selection import GridSearchCV # doctest: +SKIP >>> from dask.distributed import Client, LocalCluster # doctest: +SKIP >>> # create a local Dask cluster >>> cluster = LocalCluster() # doctest: +SKIP >>> client = Client(cluster) # doctest: +SKIP >>> grid_search = GridSearchCV(estimator, param_grid, n_jobs=-1) ... # doctest: +SKIP >>> with joblib.parallel_backend("dask", scatter=[X, y]): # doctest: +SKIP ... grid_search.fit(X, y) It is also possible to use the distributed 'ray' backend for distributing the workload to a cluster of nodes. To use the 'ray' joblib backend add the following lines:: >>> from ray.util.joblib import register_ray # doctest: +SKIP >>> register_ray() # doctest: +SKIP >>> with parallel_backend("ray"): # doctest: +SKIP ... print(Parallel()(delayed(neg)(i + 1) for i in range(5))) [-1, -2, -3, -4, -5] Alternatively the backend can be passed directly as an instance. By default all available workers will be used (``n_jobs=-1``) unless the caller passes an explicit value for the ``n_jobs`` parameter. This is an alternative to passing a ``backend='backend_name'`` argument to the :class:`~Parallel` class constructor. It is particularly useful when calling into library code that uses joblib internally but does not expose the backend argument in its own API. >>> from operator import neg >>> with parallel_backend('threading'): ... print(Parallel()(delayed(neg)(i + 1) for i in range(5))) ... [-1, -2, -3, -4, -5] Joblib also tries to limit the oversubscription by limiting the number of threads usable in some third-party library threadpools like OpenBLAS, MKL or OpenMP. The default limit in each worker is set to ``max(cpu_count() // effective_n_jobs, 1)`` but this limit can be overwritten with the ``inner_max_num_threads`` argument which will be used to set this limit in the child processes. .. versionadded:: 0.10 See Also -------- joblib.parallel_config: context manager to change the backend configuration. �����Nc ����t��jd|||d�|��|j�d|_n|jd|jdf|_|jd|jdf|_dS)N)r)r*rYr)r*r\)�superrjrb�old_backend_and_jobsrX�new_backend_and_jobs)rgr)r*rYrhrOs �r$rjzparallel_backend.__init__s���� ����� ���"7� � ��  � � � � #� +�(,�D� %� %��(��3��(��2�)�D� %� � �� +� � �� *�% ��!�!�!r:c��|jSr�)r�r�s r$r�zparallel_backend.__enter__2s ���(�(r:)r�N)rPr�r�r�rjr�� __classcell__�rOs@r$r�r��sY�������O�O�` � � � � � �,)�)�)�)�)�)�)r:r�� get_context�JOBLIB_START_METHOD�)�methodc�0�eZdZdZ dd�Zd�Zd�Zd�ZdS)� BatchedCallszCWrap a sequence of (func, args, kwargs) tuples as a single callableNc���t|��|_t|j��|_||_t |t ��r|\|_|_n|dc|_|_|�|ni|_ dSr�) �listrert�_size�_reducer_callbackr]�tuplerG�_n_jobs� _pickle_cache)rg�iterator_slice�backend_and_jobs�reducer_callback� pickle_caches r$rjzBatchedCalls.__init__Esu���.�)�)�� ����_�_�� �!1��� �&�� .� .� A�*:� '�D�M�4�<�<�+;�D� '�D�M�4�<�-9�-E�\�\�2����r:c��t|j|j���5d�|jD��cddd��S#1swxYwYdS)N)r)r*c�&�g|]\}}}||i|����Sr\r\)r^�func�args�kwargss r$� <listcomp>z)BatchedCalls.__call__.<locals>.<listcomp>Vs=��:�:�:�*��d�F��D�$�)�&�)�)�:�:�:r:)rXrGr�rer�s r$�__call__zBatchedCalls.__call__Rs����T�]�4�<� H� H� H� :� :�:�:�.2�j�:�:�:� :� :� :� :� :� :� :� :� :� :� :� :���� :� :� :� :� :� :s �:�>�>c�~�|j�|���t|j|j|jfd|jffSr�)r�r�rerGr�r�r�s r$� __reduce__zBatchedCalls.__reduce__YsI�� � !� -� � "� "� $� $� $� � �Z�$�-���6�� � � !� � r:c��|jSr�)r�r�s r$�__len__zBatchedCalls.__len__cs ���z�r:)NN)rPr�r�r�rjr�r�r�r\r:r$r�r�Bse������M�M�JN�"� N� N� N� N�:�:�:� � � �����r:r��Done�Error�PendingFc�>�t�dStj|���S)a�Return the number of CPUs. This delegates to loky.cpu_count that takes into account additional constraints such as Linux CFS scheduler quotas (typically set by container runtimes such as docker) and CPU affinity (for instance using the taskset command on Linux). If only_physical_cores is True, do not take hyperthreading / SMT logical cores into account. Nr ��only_physical_cores)r r� cpu_countr�s r$r�r�ps"�� �z��q� �>�.A� B� B� B�Br:c���|sdS|dkrdS|dkrdSdd|z dzz}t||z ��}t|dz|z ��}t|��t|��kS) z� Returns False for indices increasingly apart, the distance depending on the value of verbose. We use a lag increasing as the square of index TrEFrg�?� ror )r�int)�indexr+�scale� next_scales r$�_verbosity_filterr��s~�� ���t� �2����u� ��z�z��u��B��L�Q�&�&�G� ���� !� !�E��u�q�y�G�+�,�,�J� � �O�O�s�5�z�z� )�*r:c�p���fd�} tj���|��}n#t$rYnwxYw|S)z6Decorator used to capture the arguments of a function.c����||fSr�r\)r�r��functions �r$�delayed_functionz!delayed.<locals>.delayed_function�s�����v�%�%r:)� functools�wraps�AttributeError)r�r�s` r$�delayedr��sf���&�&�&�&�&�;�4�9�?�8�4�4�5E�F�F���� �;�;�;�:�:�;���� �s �&� 3�3c�H�eZdZdZd�Zd�Zd�Zd�Zd�Zd�Z d�Z d �Z d �Z d S) �BatchCompletionCallBackaCallback to keep track of completed results and schedule the next tasks. This callable is executed by the parent process whenever a worker process has completed a batch of tasks. It is used for progress reporting, to update estimate of the batch processing duration and to schedule the next batch of tasks to be processed. It is assumed that this callback will always be triggered by the backend right after the end of a task, in case of success as well as in case of failure. c��||_||_||_|j|_d|_|jjs d|_dSt|_dSr�) �dispatch_timestamp� batch_size�parallel�_call_id�parallel_call_id�jobrG�supports_retrieve_callback�status� TASK_PENDING)rgr�r�r�s r$rjz BatchCompletionCallBack.__init__�sT��"4���$��� �� � (� 1������� �;� '��D�K�K�K�'�D�K�K�Kr:c��||_dS)z.Register the object returned by `apply_async`.N)r�)rgr�s r$� register_jobz$BatchCompletionCallBack.register_job�s ������r:c��|jj}|jr|���S |jr|j�|���}n|j���}t|t���}n-#t$r }t|t���}Yd}~nd}~wwxYw|� |��|���S)aHReturns the raw result of the task that was submitted. If the task raised an exception rather than returning, this same exception will be raised instead. If the backend supports the retrieval callback, it is assumed that this method is only called after the result has been registered. It is ensured by checking that `self.status(timeout)` does not return TASK_PENDING. In this case, `get_result` directly returns the registered result (or raise the registered exception). For other backends, there are no such assumptions, but `get_result` still needs to synchronously retrieve the result before it can return it or raise. It will block at most `self.timeout` seconds waiting for retrieval to complete, after that it raises a TimeoutError. ��timeout��resultr�N) r�rGr��_return_or_raise�supports_timeoutr��get�dict� TASK_DONE� BaseException� TASK_ERROR�_register_outcome)rgr�r)r��outcomer"s r$� get_resultz"BatchCompletionCallBack.get_result�s���$�-�(�� � -� +��(�(�*�*� *� 8��'� (�����g��6�6�����������&��;�;�;�G�G��� 8� 8� 8��!�J�7�7�7�G�G�G�G�G�G����� 8���� ���w�'�'�'��$�$�&�&�&s�AA<�< B&�B!�!B&c�R� |jtkr|j�|j|`S#|`wxYwr�)r�r��_resultr�s r$r�z(BatchCompletionCallBack._return_or_raise�s9�� ��{�j�(�(��l�"��<�� � ��� � � � � s�"�&c� �|�|jtkr|jStj��}t|d��s||_||jz |kr7t t ��t���}|�|��|jS)z�Get the status of the task. This function also checks if the timeout has been reached and register the TimeoutError outcome when it is the case. N�_completion_timeout_counterr�) r�r��time�hasattrr�r�rr�r�)rgr��nowr�s r$� get_statusz"BatchCompletionCallBack.get_statuss��� �?�d�k�\�9�9��;� ��i�k�k���t�:�;�;� 3�/2�D� ,� �$�2� 2�g� =� =��,�.�.��D�D�D�G� � "� "�7� +� +� +��{�r:c���|jjjs|���dS|jj5|jj|jkr ddd��dS|jjr ddd��dS|�|��}|jj s|jj � |��ddd��n #1swxYwY|r|���dSdS)z@Function called by the callback thread after a job is completed.N) r�rGr�� _dispatch_new�_lockr�r�� _aborting�_retrieve_result�return_ordered�_jobs�append)rg�out� job_succeededs r$r�z BatchCompletionCallBack.__call__sp�� �}�%�@� � � � � � � � �F� �]� � 1� 1� �}�%��)>�>�>��  1� 1� 1� 1� 1� 1� 1� 1��}�&� �� 1� 1� 1� 1� 1� 1� 1� 1�!�1�1�#�6�6�M��=�/� 1�� �#�*�*�4�0�0�0�) 1� 1� 1� 1� 1� 1� 1� 1� 1� 1� 1���� 1� 1� 1� 1�, � !� � � � � � � � � !� !s�B>� B>�2AB>�>C�Cc��tj��|jz }|jj�|j|��|jj5|jxj|jz c_|j���|jj �|j� ��ddd��dS#1swxYwYdS)z1Schedule the next batch of tasks to be processed.N) r�r�r�rG�batch_completedr�r��n_completed_tasks�print_progress�_original_iterator� dispatch_next)rg�this_batch_durations r$r�z%BatchCompletionCallBack._dispatch_new>s���#�i�k�k�D�,C�C�� � ��.�.�t��/B� D� D� D��]� � .� .� �M� +� +�t�� >� +� +� �M� (� (� *� *� *��}�/�;�� �+�+�-�-�-�  .� .� .� .� .� .� .� .� .� .� .� .���� .� .� .� .� .� .s� AB3�3B7�:B7c�$� |jj�|��}tt|���}n4#t $r'}d|_t|t���}Yd}~nd}~wwxYw|�|��|dtkS)z�Fetch and register the outcome of a task. Return True if the task succeeded, False otherwise. This function is only called by backends that support retrieving the task result in the callback thread. )r�r�Nr�r�) r�rG�retrieve_result_callbackr�r�r�� __traceback__r�r�)rgr�r�r�r"s r$r�z(BatchCompletionCallBack._retrieve_resultMs��� 8��]�+�D�D�S�I�I�F��)�F�;�;�;�G�G��� 8� 8� 8�"�A�O��!�J�7�7�7�G�G�G�G�G�G����� 8���� ���w�'�'�'��x� �J�.�.s�58� A)�A$�$A)c� �|jj5|jtdfvr ddd��dS|d|_ddd��n #1swxYwY|d|_d|_|jt krd|j_d|j_dSdS)ztRegister the outcome of a task. This method can be called only once, future calls will be ignored. Nr�r�T) r�r�r�r�r�r�r�� _exceptionr�)rgr�s r$r�z)BatchCompletionCallBack._register_outcomecs����]� � ,� ,��{�<��"6�6�6�� ,� ,� ,� ,� ,� ,� ,� ,�"�(�+�D�K� ,� ,� ,� ,� ,� ,� ,� ,� ,� ,� ,���� ,� ,� ,� ,� �x�(�� ���� �;�*� $� $�'+�D�M� $�&*�D�M� #� #� #� %� $s�A� A�A � A N) rPr�r�r�rjr�r�r�r�r�r�r�r�r\r:r$r�r��s������� � �"'�'�'�.���%'�%'�%'�N������0$!�$!�$!�L .� .� .�/�/�/�,+�+�+�+�+r:r�c�&�|t|<|r|adSdS)a�Register a new Parallel backend factory. The new backend can then be selected by passing its name as the backend argument to the :class:`~Parallel` class. Moreover, the default backend can be overwritten globally by setting make_default=True. The factory can be any callable that takes no argument and return an instance of ``ParallelBackendBase``. Warning: this function is experimental and subject to change in a future version of joblib. .. versionadded:: 0.10 N)rKrL)�name�factory� make_defaults r$r r ~s(���H�T�N��������r:r�c�h�|dkrdSt��\}}|�|}|�|���S)aeDetermine the number of jobs that can actually run in parallel n_jobs is the number of workers requested by the callers. Passing n_jobs=-1 means requesting all available workers for instance matching the number of CPU cores on the worker host(s). This method should return a guesstimate of the number of workers that can actually perform work concurrently with the currently enabled default backend. The primary use case is to make it possible for the caller to know in how many chunks to slice the work. In general working on larger data chunks is more efficient (less scheduling overhead and better use of CPU cache prefetching heuristics) as long as all the workers have enough work to do. Warning: this function is experimental and subject to change in a future version of joblib. .. versionadded:: 0.10 r N)r*)r?�effective_n_jobs)r*r)�backend_n_jobss r$r r �sC��*��{�{��q�0�2�2��G�^� �~��� � #� #�6� #� 2� 2�2r:c �&��eZdZdZededdedddded ed ed ed ed f �fd� Zd�Zd�Zd�Zd�Z d�Z d�Z d�Z d�Z d�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd�Zd �Zd!�Zd"�Zd#�Zd$�Zd%�Z�xZS)&�Parallela�2 Helper class for readable parallel mapping. Read more in the :ref:`User Guide <parallel>`. Parameters ---------- n_jobs: int, default=None The maximum number of concurrently running jobs, such as the number of Python worker processes when ``backend="loky"`` or the size of the thread-pool when ``backend="threading"``. This argument is converted to an integer, rounded below for float. If -1 is given, `joblib` tries to use all CPUs. The number of CPUs ``n_cpus`` is obtained with :func:`~cpu_count`. For n_jobs below -1, (n_cpus + 1 + n_jobs) are used. For instance, using ``n_jobs=-2`` will result in all CPUs but one being used. This argument can also go above ``n_cpus``, which will cause oversubscription. In some cases, slight oversubscription can be beneficial, e.g., for tasks with large I/O operations. If 1 is given, no parallel computing code is used at all, and the behavior amounts to a simple python `for` loop. This mode is not compatible with ``timeout``. None is a marker for 'unset' that will be interpreted as n_jobs=1 unless the call is performed under a :func:`~parallel_config` context manager that sets another value for ``n_jobs``. If n_jobs = 0 then a ValueError is raised. backend: str, ParallelBackendBase instance or None, default='loky' Specify the parallelization backend implementation. Supported backends are: - "loky" used by default, can induce some communication and memory overhead when exchanging input and output data with the worker Python processes. On some rare systems (such as Pyiodide), the loky backend may not be available. - "multiprocessing" previous process-based backend based on `multiprocessing.Pool`. Less robust than `loky`. - "threading" is a very low-overhead backend but it suffers from the Python Global Interpreter Lock if the called function relies a lot on Python objects. "threading" is mostly useful when the execution bottleneck is a compiled extension that explicitly releases the GIL (for instance a Cython loop wrapped in a "with nogil" block or an expensive call to a library such as NumPy). - finally, you can register backends by calling :func:`~register_parallel_backend`. This will allow you to implement a backend of your liking. It is not recommended to hard-code the backend name in a call to :class:`~Parallel` in a library. Instead it is recommended to set soft hints (prefer) or hard constraints (require) so as to make it possible for library users to change the backend from the outside using the :func:`~parallel_config` context manager. return_as: str in {'list', 'generator', 'generator_unordered'}, default='list' If 'list', calls to this instance will return a list, only when all results have been processed and retrieved. If 'generator', it will return a generator that yields the results as soon as they are available, in the order the tasks have been submitted with. If 'generator_unordered', the generator will immediately yield available results independently of the submission order. The output order is not deterministic in this case because it depends on the concurrency of the workers. prefer: str in {'processes', 'threads'} or None, default=None Soft hint to choose the default backend if no specific backend was selected with the :func:`~parallel_config` context manager. The default process-based backend is 'loky' and the default thread-based backend is 'threading'. Ignored if the ``backend`` parameter is specified. require: 'sharedmem' or None, default=None Hard constraint to select the backend. If set to 'sharedmem', the selected backend will be single-host and thread-based even if the user asked for a non-thread based backend with :func:`~joblib.parallel_config`. verbose: int, default=0 The verbosity level: if non zero, progress messages are printed. Above 50, the output is sent to stdout. The frequency of the messages increases with the verbosity level. If it more than 10, all iterations are reported. timeout: float or None, default=None Timeout limit for each task to complete. If any task takes longer a TimeOutError will be raised. Only applied when n_jobs != 1 pre_dispatch: {'all', integer, or expression, as in '3*n_jobs'}, default='2*n_jobs' The number of batches (of tasks) to be pre-dispatched. Default is '2*n_jobs'. When batch_size="auto" this is reasonable default and the workers should never starve. Note that only basic arithmetics are allowed here and no modules can be used in this expression. batch_size: int or 'auto', default='auto' The number of atomic tasks to dispatch at once to each worker. When individual evaluations are very fast, dispatching calls to workers can be slower than sequential computation because of the overhead. Batching fast computations together can mitigate this. The ``'auto'`` strategy keeps track of the time it takes for a batch to complete, and dynamically adjusts the batch size to keep the time on the order of half a second, using a heuristic. The initial batch size is 1. ``batch_size="auto"`` with ``backend="threading"`` will dispatch batches of a single task at a time as the threading backend has very little overhead and using larger batch size has not proved to bring any gain in that case. temp_folder: str or None, default=None Folder to be used by the pool for memmapping large arrays for sharing memory with worker processes. If None, this will try in order: - a folder pointed by the JOBLIB_TEMP_FOLDER environment variable, - /dev/shm if the folder exists and is writable: this is a RAM disk filesystem available by default on modern Linux distributions, - the default system temporary folder that can be overridden with TMP, TMPDIR or TEMP environment variables, typically /tmp under Unix operating systems. Only active when ``backend="loky"`` or ``"multiprocessing"``. max_nbytes int, str, or None, optional, default='1M' Threshold on the size of arrays passed to the workers that triggers automated memory mapping in temp_folder. Can be an int in Bytes, or a human-readable string, e.g., '1M' for 1 megabyte. Use None to disable memmapping of large arrays. Only active when ``backend="loky"`` or ``"multiprocessing"``. mmap_mode: {None, 'r+', 'r', 'w+', 'c'}, default='r' Memmapping mode for numpy arrays passed to workers. None will disable memmapping, other modes defined in the numpy.memmap doc: https://numpy.org/doc/stable/reference/generated/numpy.memmap.html Also, see 'max_nbytes' parameter documentation for more details. Notes ----- This object uses workers to compute in parallel the application of a function to many different arguments. The main functionality it brings in addition to using the raw multiprocessing or concurrent.futures API are (see examples for details): * More readable code, in particular since it avoids constructing list of arguments. * Easier debugging: - informative tracebacks even when the error happens on the client side - using 'n_jobs=1' enables to turn off parallel computing for debugging without changing the codepath - early capture of pickling errors * An optional progress meter. * Interruption of multiprocesses jobs with 'Ctrl-C' * Flexible pickling control for the communication to and from the worker processes. * Ability to use shared memory efficiently with worker processes for large numpy-based datastructures. Note that the intended usage is to run one call at a time. Multiple calls to the same Parallel object will result in a ``RuntimeError`` Examples -------- A simple example: >>> from math import sqrt >>> from joblib import Parallel, delayed >>> Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10)) [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0] Reshaping the output when the function has several return values: >>> from math import modf >>> from joblib import Parallel, delayed >>> r = Parallel(n_jobs=1)(delayed(modf)(i/2.) for i in range(10)) >>> res, i = zip(*r) >>> res (0.0, 0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.5, 0.0, 0.5) >>> i (0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 3.0, 3.0, 4.0, 4.0) The progress meter: the higher the value of `verbose`, the more messages: >>> from time import sleep >>> from joblib import Parallel, delayed >>> r = Parallel(n_jobs=2, verbose=10)( ... delayed(sleep)(.2) for _ in range(10)) #doctest: +SKIP [Parallel(n_jobs=2)]: Done 1 tasks | elapsed: 0.6s [Parallel(n_jobs=2)]: Done 4 tasks | elapsed: 0.8s [Parallel(n_jobs=2)]: Done 10 out of 10 | elapsed: 1.4s finished Traceback example, note how the line of the error is indicated as well as the values of the parameter passed to the function that triggered the exception, even though the traceback happens in the child process: >>> from heapq import nlargest >>> from joblib import Parallel, delayed >>> Parallel(n_jobs=2)( ... delayed(nlargest)(2, n) for n in (range(4), 'abcde', 3)) ... # doctest: +SKIP ----------------------------------------------------------------------- Sub-process traceback: ----------------------------------------------------------------------- TypeError Mon Nov 12 11:37:46 2012 PID: 12934 Python 2.7.3: /usr/bin/python ........................................................................ /usr/lib/python2.7/heapq.pyc in nlargest(n=2, iterable=3, key=None) 419 if n >= size: 420 return sorted(iterable, key=key, reverse=True)[:n] 421 422 # When key is none, use simpler decoration 423 if key is None: --> 424 it = izip(iterable, count(0,-1)) # decorate 425 result = _nlargest(n, it) 426 return map(itemgetter(0), result) # undecorate 427 428 # General case, slowest method TypeError: izip argument #1 must support iteration _______________________________________________________________________ Using pre_dispatch in a producer/consumer situation, where the data is generated on the fly. Note how the producer is first called 3 times before the parallel loop is initiated, and then called to generate new data on the fly: >>> from math import sqrt >>> from joblib import Parallel, delayed >>> def producer(): ... for i in range(6): ... print('Produced %s' % i) ... yield i >>> out = Parallel(n_jobs=2, verbose=100, pre_dispatch='1.5*n_jobs')( ... delayed(sqrt)(i) for i in producer()) #doctest: +SKIP Produced 0 Produced 1 Produced 2 [Parallel(n_jobs=2)]: Done 1 jobs | elapsed: 0.0s Produced 3 [Parallel(n_jobs=2)]: Done 2 jobs | elapsed: 0.0s Produced 4 [Parallel(n_jobs=2)]: Done 3 jobs | elapsed: 0.0s Produced 5 [Parallel(n_jobs=2)]: Done 4 jobs | elapsed: 0.0s [Parallel(n_jobs=2)]: Done 6 out of 6 | elapsed: 0.0s remaining: 0.0s [Parallel(n_jobs=2)]: Done 6 out of 6 | elapsed: 0.0s finished r*r)r�r+Nz 2 * n_jobs�autor,r-r.r/r0c ����t�����|� td}t| | |���\} �| j}t |�d��|_||_||_|dvrtd|�d����||_ |dk|_ |dk|_ �fd �| d f|d f| d f| d f| df|dffD��|_ t|j d t��r"t!|j d ��|j d <t#d|j ddz ��|j d<t$�t$|j d<n0t't(d��rt)j��|j d<|tdus|�| }�n7t|t,��r|j�||_�nt'|d��r+t'|d��r||j d<t/|���}n�|t0vrg|t2vr^t5jd|�dt8�d�t:d���t0t8t0|<t1t8|���}ng t0|}nL#t<$r?}td|�dt?t0� ��������|�d}~wwxYw||���}t |�d��}|�|j!} tE|��}n#t$rtd���wxYw||_#| dkr#tI|d d!��std"|z���|d#kst|tJ��r|dkr||_&ntd$|z���t|tN��s�|j r*|j(s#td%�)||�����tUj+��|_,t[j.��|_/ta��|_1tej3��|_4d|_5||_6d!|_7d!|_8ts��j:|_;d|_<dS)&Nr*)r/r0r+r+>r�� generator�generator_unorderedzlExpected `return_as` parameter to be a string equal to "list","generator" or "generator_unordered", but got z instead.r�rc�:��i|]\}}|t|�|����Sr\)r9)r^r6r_r7s �r$raz%Parallel.__init__.<locals>.<dictcomp>�s9���  �  �  �?G�u�a�A� ����:�:�  �  �  r:r-r,r.r/r0r�2�contextr�r)�Pool�LockrArlrmrnrorprrrsz$n_jobs could not be converted to intr3rDFz)Backend %s does not support shared memoryrz8batch_size must be 'auto' or a positive integer, got: %rz(Backend {} does not support return_as={})=r�rjr5r=rBr9r+r�� pre_dispatchrI� return_as�return_generatorr�� _backend_argsr]rur �max�DEFAULT_MP_CONTEXTr�r r�rrrKrwrxryrLrz�KeyErrorr{r|�default_n_jobsr�r*rFrr�r�supports_return_generator�formatr�RLockr�� collections�dequer�r��_pending_outputs�queue�Queue�_ready_batchesr�rG�_running�_managed_backendr�hex�_id� _call_ref)rgr*r)rr+r�rr�r,r-r.r/r0�active_backendrB�backend_factoryr"r7rOs @�r$rjzParallel.__init__�s ���� �������� �>�,�X�6�F�)<��7�G�* �* �* �&���'�4� �(��.�)�L�L�� ��� �(��� �H� H� H���BK������ � #��� )�V� 3���'�+@�@���  �  �  �  ��\�*��m�,��K�(���"��)�$��)�$� L�  �  �  ��� �d�(��6�� <� <� �/>��"�<�0�0�0�D� �|� ,�),� �t�!�)�,�r�1�) �) ���9�%� � )�,>�D� �y� )� )� �R�� '� '� =�,.�N�,<�,<�D� �y� )� �-�i�8� 8� 8�G�O�$�G�G� ��!4� 5� 5� C��$�,�(5��%�� �W�f� %� %� C�'�'�6�*B�*B� C�-4�D� �y� )�,�=�I�I�I�G�G� �H� $� $��4L�)L�)L� �M�C�7�C�C�0?�C�C�C���  � � � � !)�� 9�H�W� ���/�m�L�L�L�G�G� N�"*�7�"3����� N� N� N� �j�$+�G�G�V�H�M�M�O�O�-D�-D�-D�"F�G�G�LM�N����� N����&�o�M�B�B�B�G�"�6�>�8�D�D�� �>��+�F� E���[�[�F�F��� E� E� E��C�D�D� D� E������ � �{� "� "��G�%9�5�A�A� #��H�&�'�(�(� (� �&� � �J�z�8�$D�$D� ��Q���(�D�O�O��J����� ��'�#4�5�5� *��$� �W�-N� � �#�#)�6�'�9�#=�#=���� #��*�*�D�J�$�*�,�,�D�J�$(�F�F�D� !�"'�+�-�-�D� �%)�D� "� �� ��� � %����7�7�;�������s$� I.�. J7�8:J2�2J7�!K1�1L c�J�d|_d|_|���|S)NTF)r)�_calling�_initialize_backendr�s r$r�zParallel.__enter__@s(�� $����� � � � �"�"�"�� r:c��d|_|jr|jr|���|���dS)NF)r)rr0�_abort�_terminate_and_reset)rg�exc_type� exc_valuer�s r$r�zParallel.__exit__FsC�� %��� � � �T�]� � �K�K�M�M�M� �!�!�#�#�#�#�#r:c�T� |jjd|j|d�|j��}|j�H|jjs<t jd�|jj j |j����n7#t$r*}|j |_|� ��}Yd}~nd}~wwxYw|S)z?Build a process or thread pool and return the number of workers)r*r�Nz�The backend class {!r} does not support timeout. You have set 'timeout={}' in Parallel but the 'timeout' parameter will not be used.r\)rG� configurer*rr�r�rxryr rOrPrr)r1)rgr*r"s r$r1zParallel._initialize_backendLs��� 0�,�T�]�,�C�D�K�$�C�C�/3�/A�C�C�F��|�'�� �0N�'�� �@�@F��� �/�8�� �A&�A&�'�'�'���!� 0� 0� 0��I�D�M��-�-�/�/�F�F�F�F�F�F����� 0���� � s�A.A1�1 B%�; B � B%c�R�|jr|j�|j��SdS)Nr )rGr r*r�s r$�_effective_n_jobszParallel._effective_n_jobs`s)�� �=� ?��=�1�1�$�+�>�>� >��qr:c���t|jd��r |jr|j���d|_|js|j���dSdS)N� stop_callF)r�rGr0r<r)� terminater�s r$r4zParallel._terminate_and_resetesi�� �4�=�+� .� .� &�4�=� &� �M� #� #� %� %� %��� ��$� &� �M� #� #� %� %� %� %� %� &� &r:c�b�|jrdSt|��}|xj|z c_|xjdz c_t j��}t |||��}|jr|j�|��|j � ||���}|� |��dS)z�Queue the batch for computing, with or without multiprocessing WARNING: this method is not thread-safe: it should be only called indirectly via dispatch_one_batch. Nr )�callback) r�rt�n_dispatched_tasks�n_dispatched_batchesr�r�r�r�r�rG� apply_asyncr�)rg�batchr�r�� batch_trackerr�s r$� _dispatchzParallel._dispatchls��� �>� � �F���Z�Z� � ���:�-��� �!�!�Q�&�!�!�!�Y�[�[��/� � �D� � � � � � -� �J� � �m� ,� ,� ,��m�'�'�� �'�F�F���"�"�3�'�'�'�'�'r:c�Z�|�|j��sd|_d|_dSdS)aDispatch more data for parallel processing This method is meant to be called concurrently by the multiprocessing callback. We rely on the thread-safety of dispatch_one_batch to protect against concurrent consumption of the unprotected iterator. FN)�dispatch_one_batchr�� _iteratingr�s r$r�zParallel.dispatch_next�s<���&�&�t�'>�?�?� +�#�D�O�&*�D� #� #� #� +� +r:c ��|jrdS|���}|j5 |j�d���}�n�#t j$�r�|j}||z} ttj ||����}n�#t$r�}t|j t j��rd|_td||��}|j�|��|�t'|t(�����Yd}~Yddd��dSd}~wwxYwt+|��dkrYddd��dS||jur7t+|��|kr$t/dt+|��d|zz��} n t/dt+|��|z��} t1dt+|��| ��D]Z} t3|| | | z�|j���|j|j��}|j�|���[|j�d���}YnwxYwt+|��dkr ddd��dS|�|�� ddd��dS#1swxYwYdS) aTPrefetch the tasks for the next batch and dispatch them. The effective size of the batch is computed here. If there are no more jobs to dispatch, return False, else return True. The iterator consumption and dispatching is protected by the same lock so calling this function should be thread safe. F)�blockNrr�Tr rE) r��_get_batch_sizer�r'r�r%�Empty�_cached_effective_n_jobsr�� itertools�islice� Exceptionr]� __context__� __cause__r�r�r�r�r�r�rtr�r�ranger�rG�get_nested_backendr�r��putrE) rg�iteratorr��tasksr*�big_batch_sizerOr"rD�final_batch_size�is r$rGzParallel.dispatch_one_batch�s��� �>� ��5��)�)�+�+� � �Z�E �E �8 =��+�/�/�e�/�<�<�����;�6 =�6 =�6 =��6��!+�f�!4�� �!�)�"2�8�^�"L�"L�M�M�F�F�� � � � � "�!�-���=�=�+� '+�� �$;��:�t�%�%�M��J�%�%�m�4�4�4�!�3�3�D� ��5�5�5���� �4�4�4�4�OE �E �E �E �E �E �E �E �����* ����(�v�;�;�!�#�#� �UE �E �E �E �E �E �E �E �V�$�"9�9�9��&�k�k�N�2�2� (+�1�c�&�k�k�b�6�k�.J�'K�'K�$�$�'*�1�c�&�k�k�V�.C�'D�'D�$��q�#�f�+�+�/?�@�@�3�3�A�(���!�6F�2F�0F�)G�)-��)I�)I�)K�)K�)-�)?�)-�);�=�=�E��'�+�+�E�2�2�2�2��+�/�/�e�/�<�<����m6 =����n�5�z�z�Q����EE �E �E �E �E �E �E �E �H���u�%�%�%��KE �E �E �E �E �E �E �E �E �E �E �E ����E �E �E �E �E �E sx�J�A�J�H:�!"B�H:� D�A:D�H:� J�D�H:�5J�C4H:�7J�9H:�:J�J�J�Jc�X�|jdkr|j���S|jS)z-Returns the effective batch size for dispatchr)r�rG�compute_batch_sizer�s r$rKzParallel._get_batch_size�s-�� �?�f� $� $��=�3�3�5�5� 5��?� "r:c��|jsdS|jdkrtjj}ntjj}|d|�d|�d���dS)z=Display the message on stout or stderr depending on verbosityNr�[z]: � )r+�sys�stderr�write�stdout)rgr#�writers r$�_printzParallel._print�s`���|� � �F� �<�"� � ��Z�%�F�F��Z�%�F���#�4�#�#�C�#�#�#�$�$�$�$�$r:c�@�|j|jko|jp|j S)z&Check if all tasks have been completed)r�r@rHr�r�s r$� _is_completedzParallel._is_completeds,���%��)@�@� � �O� -�t�~�F � r:c ��|jsdStj��|jz }|���r:|�d|jd�d|jd�dt |���d���dS|j�Lt|j |j��rdS|�d|jd�dt |������dS|j}|j }|dks/||z d z|j z }||jzd z}|d z|k}|s||zrdS||z |j |d zz z}|�d|d�d|d�dt |���d t |������dS) zvDisplay the process of the parallel execution only a fraction of time, controlled by self.verbose. NzDone �3dz out of z | elapsed: z finishedz tasks | elapsed: rr g�?z remaining: ) r+r�� _start_timergrer�r r�r�rAr@�_pre_dispatch_amount)rg� elapsed_timer�� total_tasks�cursor� frequency� is_last_item�remaining_times r$r�zParallel.print_progress s �� �|� � �F��y�{�{�T�%5�5� � � � � � �* � �K�K�>��.�;�>�>��)�:�>�>�$�\�2�2�>�>�>� � � � �F�� $� 0� ��!:�D�L�I�I� ��� �K�K�5��.�J�5�5�$�\�2�2�5�5� � � � � � �*�E��1�K��A�:�:�&��-��1��3�4��(�D�L�8�A�=� � %�� �[� 8� � ��F�Y�$6���F�*�U�2�"�5��� �C�E�N� �K�K�7��F�7�7�+�F�7�7�$�\�2�2�7�7�$�^�4�4�7�7� � � � � r:c��d|_|j}|js-t|d��r|j}|�|���d|_dS)NT�abort_everything)� ensure_ready)r�rG�_abortedr�r)rs)rgr)rts r$r3zParallel._abort@s\����� �-��� � @�'�'�3E�"F�"F� @� �0�L� � $� $�,� $� ?� ?� ?��� � � r:c���d|_|�|��r|jdu|_|�|��r |�|���|dkr d|_dSdS)NF�all)rHrGr�)rgrVrs r$�_startzParallel._startQs��� ��� � "� "�8� ,� ,� B�"�5�T�A�D�O��%�%�h�/�/� � ��%�%�h�/�/� � �5� � �$�D�O�O�O� !� r:c#�*� K�tj��}d} |�||��dV�|j���5|���Ed{V��ddd��n #1swxYwY�n"#t $r�d|_|tj��kr�tstj d��d}|� G� fd�dtj ��}|d���� ��Y|jrgn|j }tj��|_ d|_|s|���dSdS|���|jr|����t*$rd|_|����wxYw|jrgn|j }tj��|_ d|_|s|���nK#|jrgn|j }tj��|_ d|_|s|���wwxYwt-|��d krL|���}|�|j��}|D]}|V��t-|��d k�JdSdS) z?Iterator returning the tasks' output as soon as they are ready.FNTaA generator produced by joblib.Parallel has been gc'ed in an unexpected thread. This behavior should not cause major -issues but to make sure, please report this warning and your use case at https://github.com/joblib/joblib/issues so it can be investigated.c���eZdZ�fd�ZdS)�3Parallel._get_outputs.<locals>._GeneratorExitThreadc��������jr��������dSr�)r3r�_warn_exit_earlyr4)rg� _parallels �r$�runz7Parallel._get_outputs.<locals>._GeneratorExitThread.run�sJ���!�(�(�*�*�*�$�5�9�%�6�6�8�8�8�!�6�6�8�8�8�8�8r:N)rPr�r�r)r~s�r$�_GeneratorExitThreadr{�s.�������9�9�9�9�9�9�9r:r��GeneratorExitThread)rr)r� get_identrxrG�retrieval_context� _retrieve� GeneratorExitr�IS_PYPYrxry�Thread�startr�r"r#r(r4r3rr}r�rt�popleftr�r�) rgrVr�dispatch_thread_id�detach_generator_exitr��_remaining_outputs�batched_resultsr�r~s @r$� _get_outputszParallel._get_outputsfsW�����&�0�2�2�� %��E ,� �K�K��,� /� /� /� �E�E�E���0�0�2�2� ,� ,��>�>�+�+�+�+�+�+�+�+�+� ,� ,� ,� ,� ,� ,� ,� ,� ,� ,� ,���� ,� ,� ,� ,����- �- �- �#�D�O�"�Y�%8�%:�%:�:�:����M�+����)-�%� � �9�9�9�9�9�9�9�9�+;�9�9�9�%�$�.�����%�'�'�'��$)-��"G�"�"�T�Z� �$�*�,�,�D�J�!�D�M�(� ,��)�)�+�+�+�+�+� ,� ,�# �K�K�M�M�M��$� (��%�%�'�'�'� �� � � �"�D�O� �K�K�M�M�M� � ���� )-��"G�"�"�T�Z� �$�*�,�,�D�J�!�D�M�(� ,��)�)�+�+�+��� )-��"G�"�"�T�Z� �$�*�,�,�D�J�!�D�M�(� ,��)�)�+�+�+�+� ,�����$�%�%��)�)�0�8�8�:�:�O�-�8�8���F�F�O�)� � ��� � � � � �$�%�%��)�)�)�)�)�)sP�3B� A4�( B�4A8�8B�;A8�<B�?G)�A?F �G)� AF � G)�)AH1c��|jrdS|j|jkrdS|jjst |j��dkrdSdS)z9Return True if we need to continue retrieving some tasks.TrF)rHr�r@rGr�rtr�r�s r$�_wait_retrievalzParallel._wait_retrieval�sW�� �?� ��4� � !�D�$;� ;� ;��4��}�7� ��4�:����"�"��t��ur:c#�&K�|���r�|jr|���dSt|j��dks/|jd�|j���tkrtj d����|j 5|j� ��}ddd��n #1swxYwY|� |j��}|D]}|xj dz c_ |V��|�����dSdS)Nrr�g{�G�z�?r )r�r��_raise_error_fastrtr�r�r�r�r��sleepr�r�r�� _nb_consumed)rgr�r�s r$r�zParallel._retrieve�sv�����"�"�$�$� � �~� ��&�&�(�(�(����T�Z���A�%�%���A��)�)� �L�*�*�*�-9�:�:�� �4� � � �� �� 7� 7�"&�*�"4�"4�"6�"6�� 7� 7� 7� 7� 7� 7� 7� 7� 7� 7� 7���� 7� 7� 7� 7�.�8�8���F�F�O�)� � ���!�!�Q�&�!�!�� � � � �9�"�"�$�$� � � � � s�B=�=C�Cc��|j5td�|jD��d��}ddd��n #1swxYwY|�|�|j��dSdS)z3If we are aborting, raise if a job caused an error.c3�:K�|]}|jtk�|V��dSr�)r�r�)r^r�s r$� <genexpr>z-Parallel._raise_error_fast.<locals>.<genexpr>�s:����;�;�c�!$��z�!9�!9�"�!9�!9�!9�!9�;�;r:N)r��nextr�r�r�)rg� error_jobs r$r�zParallel._raise_error_fast�s����Z� B� B��;�;�T�Z�;�;�;�<@�B�B�I� B� B� B� B� B� B� B� B� B� B� B���� B� B� B� B� � � � � ��� .� .� .� .� .� !� s � 4�8�8c���|j|jz }|���}d}|r||�d�z }|s|dz }|s||j|jz �d�z }|r|dz }t j|��dSdS)z?Warn the user if the generator is gc'ed before being consumned.r�z5 tasks have been successfully executed but not used.z Additionally, zK tasks which were still being processed by the workers have been cancelled.z` You could benefit from adjusting the input task iterator to limit unnecessary computation time.N)r�r�rgr@rxry)rg� ready_outputs� is_completedr#s r$r}zParallel._warn_exit_early�s����.��1B�B� ��)�)�+�+� ��� � )� � �!�!�!� �C� � )��(�(��� � ��*�T�-C�C���� �C� � � �B� �C� �M�#� � � � � �  � r:c#���K� d|_||_|�����dkr/t|���t��fd�d��}d�|D��}dV�|D]f\}}}|xjdz c_|xjdz c_||i|��}|xjdz c_|���|V�|xjdz c_�gn$#t$rd|_ d|_ d|_ �wxYw |���d|_ d|_d|_dS#|���d|_ d|_d|_wxYw)z�Separate loop for sequential output. This simplifies the traceback in case of errors and reduces the overhead of calling sequential tasks with `joblib`. Tr c�H��ttj������Sr�)r�rNrO)r��its��r$�<lambda>z1Parallel._get_sequential_output.<locals>.<lambda>)s���E�)�"2�2�z�"B�"B�C�C�r:r\c3�$K�|] }|D]}|V��� dSr�r\)r^rC�tasks r$r�z2Parallel._get_sequential_output.<locals>.<genexpr>+sG������"�5���;?�D�������r:NF)rHr�rK�iterrAr@r�r�r�r�rr�rur() rg�iterable�iterable_batchedr�r�r��resr�r�s @@r$�_get_sequential_outputzParallel._get_sequential_outputs������� $ +�"�D�O�&.�D� #��-�-�/�/�J��Q����(�^�^��#'�C�C�C�C�C�R�$�$� ���&6����� �J�J�J�'/� '� '�"��d�F��)�)�Q�.�)�)��'�'�1�,�'�'��d�D�+�F�+�+���&�&�!�+�&�&��#�#�%�%�%�� � � ��!�!�Q�&�!�!�!� '��� � � �"�D�O�!�D�N� �D�M� �  ���� '� � � � !� !� !�!�D�M�#�D�O�&*�D� #� #� #�� � � � !� !� !�!�D�M�#�D�O�&*�D� #� *� *� *� *s�CC � D� !C,�,D�+Ec�,�t|dt����5|jrd}|jdur|dz }t |���d|_ddd��n #1swxYwYd|_d|_d|_d|_d|_ d|_ d|_ dS)z9Reset the counters and flags used to track the execution.r�z+This Parallel instance is already running !Tz� Before submitting new tasks, you must wait for the completion of all the previous tasks, or clean all references to the output generator.NrF) rFrr(r� RuntimeErrorrAr@r�r�rr�ru)rgr#s r$�_reset_run_trackingzParallel._reset_run_trackingGs��� �T�7�K�M�M� 2� 2� !� !��}� (�C���(�D�0�0��>��C� #�3�'�'�'� �D�M� !� !� !� !� !� !� !� !� !� !� !���� !� !� !� !�%&��!�"#���!"��� ��� �������� � � s�.A�A�Ac� ������tj���_�js����}n����}|dkr<��|��}t|���jr|nt|��S�j 5t��j �_ ddd��n #1swxYwY|�_t�jt"��r �fd�}|�_|�_�jjj}|dkrt+d|z�����d|�d|�d���t/�jd ��r�j���d �_t5|��}�j}|d krd�_d�_nw|�_t/|d ��r0t=|�d tA|������}tC|��x�_}tEj#|�j��}tI���_%��&||��}tOj(|���_)t|���jr|nt|��S)z)Main function to dispatch parallel tasks.r Nc�Z���jjj��j��dSr�)rG�_workers�_temp_folder_manager�set_current_contextr+r�s�r$�_batched_calls_reducer_callbackz:Parallel.__call__.<locals>._batched_calls_reducer_callback�s4���� �&�;�O�O��H�����r:rz%s has no active worker.zUsing backend z with z concurrent workers.� start_callTrw�endswithr*)*r�r�rjr)r1r:r�r�rr�r�rr*r�rMr]rGrr�rOrPr�rer�r�r0r�rr�rkr�replacerur�rNrOr�r�r��weakref�refr,)rgr�r*�outputr�� backend_namerVrs` r$r�zParallel.__call__ns���� � � �"�"�"��9�;�;����$� .��-�-�/�/�F�F��+�+�-�-�F� �Q�;�;��0�0��:�:�F� ��L�L�L�!�2�D�6�6��V� � � D��Z� (� (�!�G�G�K�D�M� (� (� (� (� (� (� (� (� (� (� (���� (� (� (� (� )/��%� �d�m�[� 1� 1� E� � � � � �&E�D� "�)/��%��}�.�7� � �Q�;�;��9�L�H�I�I� I� � � � M�\� M� M�� M� M� M� � � � �4�=�,� /� /� '� �M� $� $� &� &� &��� ���>�>���(� � �5� � �&*�D� #�()�D� %� %�&.�D� #��|�Z�0�0� �(� �(�(��3�v�;�;�?�?� � � �8;�<�7H�7H� H�D� %� �!�'��$�2K�L�L�H� "�V�V����"�"�8�\�:�:�� ��V�,�,��� �V� � � ��.�@�v�v�D��L�L�@s�'C � C�Cc�0�|jj�d|j�d�S)Nz(n_jobs=�))rOrPr*r�s r$�__repr__zParallel.__repr__�s��"&�.�"9�"9�"9�4�;�;�;�G�Gr:)rPr�r�r�r5rjr�r�r1r:r4rEr�rGrKrergr�r3rxr�r�r�r�r}r�r�r�r�r�r�s@r$r r �s�������y�y�x'�x�0�'� �2��'� �2��!��+�M�:�*�<�8�)�+�6�&�x�0�'� �2�Q�Q�Q�Q�Q�Q�f��� $�$�$� ���(��� &�&�&�(�(�(�B +� +� +�U�U�U�n#�#�#� %� %� %� � � � 4�4�4�l���"$�$�$�*O�O�O�b���6���> /� /� /����8*+�*+�*+�X%�%�%�NiA�iA�iA�VH�H�H�H�H�H�Hr:r )F)r�)Or�� __future__r�osr`�mathrr�r"r�rrN�uuidr�numbersrrxr%r�� contextlibrrr�_multiprocessing_helpersr �loggerr r �diskr �_parallel_backendsrrrrr�_utilsrrrrr�r�rKrLrw� externalsrrM�localrGr%rvr5rHrJr9r?r=rXr�r�environr��stripr�r��objectr�r�r�r�r�r�r�r�r r r r\r:r$�<module>r�s-���� ������ � � � � � � � ��������������� � � � ������������������������� � � � �����"�"�"�"�"�"�(�(�(�(�(�(�(�(�(�(�(�(�-�-�-�-�-�-�-�-�!�!�!�!�!�!�.�.�.�.�.�.�.�.�.�.�.�.�.�.�)�(�(�(�(�(�(�(�2�1�1�1�1�1�3�3�3�3�3�3� �'�#�*� +� +��"�#� � ����-�v�6���>�"8�H� ��������"�H�V���O�%�� �9�?� � �� &� &� &� �N����y�t�,�,�,��i�d�+�+�+��y�q�)�)�)��9�4�0�0�0��)�$�/�/�/����-�-�-��i�d�+�+�+��y�t�,�,�,� � ��5��/�����( #�8� ,� #�I� .� #�I� .� � � � � #�8� ,� #�I� .� #�I� .�H#�H#�H#�H#�Vo>�o>�o>�o>�o>�o>�o>�o>�dh)�h)�h)�h)�h)��h)�h)�h)�`�� �7�2�}���;� �Z�^�^�1�2� 6� 6� <� <� >� >� F�$�F� ��+�R�^�6�:�:�:��"�"�"�"�"�6�"�"�"�L � � � �� � C�C�C�C�(+�+�+�& � � �V+�V+�V+�V+�V+�f�V+�V+�V+�t����*3�3�3�3�>hH�hH�hH�hH�hH�v�hH�hH�hH�hH�hHr:
Memory