import contextlib
import warnings
from typing import Generator

import torch
from torch._C import default_generator


def set_rng_state(new_state: torch.Tensor) -> None:
    r"""Sets the random number generator state.

    .. note:: This function only works for CPU. For CUDA, please use
        :func:`torch.manual_seed`, which works for both CPU and CUDA.

    Args:
        new_state (torch.ByteTensor): The desired state
    """
    default_generator.set_state(new_state)


def get_rng_state() -> torch.Tensor:
    r"""Returns the random number generator state as a `torch.ByteTensor`.

    .. note:: The returned state is for the default generator on CPU only.

    See also: :func:`torch.random.fork_rng`.
    """
    return default_generator.get_state()
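
# Usage sketch (illustrative only, not part of the module itself): saving and
# restoring the default CPU generator state makes a sequence of random draws
# repeatable without choosing a new seed. Assumes only that `torch` is importable.
#
#     state = torch.get_rng_state()
#     a = torch.rand(3)
#     torch.set_rng_state(state)
#     b = torch.rand(3)
#     assert torch.equal(a, b)  # same draws, since the CPU state was restored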


def manual_seed(seed) -> torch._C.Generator:
    r"""Sets the seed for generating random numbers on all devices. Returns a
    `torch.Generator` object.

    Args:
        seed (int): The desired seed. Value must be within the inclusive range
            `[-0x8000_0000_0000_0000, 0xffff_ffff_ffff_ffff]`. Otherwise, a RuntimeError
            is raised. Negative inputs are remapped to positive values with the formula
            `0xffff_ffff_ffff_ffff + seed`.
    """
    seed = int(seed)
    import torch.cuda

    if not torch.cuda._is_in_bad_fork():
        torch.cuda.manual_seed_all(seed)

    import torch.mps

    if not torch.mps._is_in_bad_fork():
        torch.mps.manual_seed(seed)

    import torch.xpu

    if not torch.xpu._is_in_bad_fork():
        torch.xpu.manual_seed_all(seed)

    _seed_custom_device(seed)

    return default_generator.manual_seed(seed)
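
# Usage sketch (illustrative only): seeding every available device from one call.
# After `torch.manual_seed(n)` with a non-negative `n`, `torch.initial_seed()`
# reports `n` back for the default CPU generator, which is handy for logging.
#
#     torch.manual_seed(42)
#     x = torch.rand(2)
#     torch.manual_seed(42)
#     y = torch.rand(2)
#     assert torch.equal(x, y)          # same seed, same draws
#     assert torch.initial_seed() == 42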


def seed() -> int:
    r"""Sets the seed for generating random numbers to a non-deterministic
    random number on all devices. Returns a 64 bit number used to seed the RNG.
    """
    seed = default_generator.seed()
    import torch.cuda

    if not torch.cuda._is_in_bad_fork():
        torch.cuda.manual_seed_all(seed)

    import torch.mps

    if not torch.mps._is_in_bad_fork():
        torch.mps.manual_seed(seed)

    import torch.xpu

    if not torch.xpu._is_in_bad_fork():
        torch.xpu.manual_seed_all(seed)

    _seed_custom_device(seed)

    return seed
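
# Usage sketch (illustrative only): `seed()` draws a fresh non-deterministic
# seed, applies it everywhere, and returns it so the run can be replayed later
# by handing the same value to `manual_seed`.
#
#     s = torch.seed()        # e.g. log this value with the experiment
#     x = torch.rand(2)
#     torch.manual_seed(s)    # replay with the recorded seed
#     y = torch.rand(2)
#     assert torch.equal(x, y)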


def _seed_custom_device(seed) -> None:
    r"""Sets the seed to generate random numbers for custom device.

    Args:
        seed (int): The desired seed.

    See [Note: support the custom device with privateuse1]
    """
    seed = int(seed)
    custom_backend_name = torch._C._get_privateuse1_backend_name()
    if hasattr(torch, custom_backend_name):
        custom_device_mod = getattr(torch, custom_backend_name)
        _bad_fork_name = "_is_in_bad_fork"
        _seed_all_name = "manual_seed_all"
        if hasattr(custom_device_mod, _bad_fork_name) and hasattr(
            custom_device_mod, _seed_all_name
        ):
            if not getattr(custom_device_mod, _bad_fork_name)():
                getattr(custom_device_mod, _seed_all_name)(seed)
        else:
            message = f"Set seed for `{custom_backend_name}` device does not take effect, please add API's "
            message += f"`{_bad_fork_name}` and `{_seed_all_name}` to `{custom_backend_name}` device module."
            warnings.warn(message, UserWarning, stacklevel=3)
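
# Sketch (hypothetical backend, illustrative only): a privateuse1 backend module
# registered via `torch._register_device_module` is expected to expose the two
# hooks below so global seeding reaches it; otherwise the helper above emits a
# UserWarning and skips that backend.
#
#     class _MyBackendModule:                      # hypothetical module object
#         @staticmethod
#         def _is_in_bad_fork() -> bool:
#             return False
#
#         @staticmethod
#         def manual_seed_all(seed: int) -> None:
#             ...  # forward the seed to the device runtime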


def initial_seed() -> int:
    r"""Returns the initial seed for generating random numbers as a
    Python `long`.

    .. note:: The returned seed is for the default generator on CPU only.
    """
    return default_generator.initial_seed()


_fork_rng_warned_already = False


@contextlib.contextmanager
def fork_rng(
    devices=None,
    enabled=True,
    _caller="fork_rng",
    _devices_kw="devices",
    device_type="cuda",
) -> Generator:
    """
    Forks the RNG, so that when you return, the RNG is reset
    to the state that it was previously in.

    Args:
        devices (iterable of Device IDs): devices for which to fork
            the RNG. CPU RNG state is always forked. By default, :meth:`fork_rng` operates
            on all devices, but will emit a warning if your machine has a lot
            of devices, since this function will run very slowly in that case.
            If you explicitly specify devices, this warning will be suppressed
        enabled (bool): if ``False``, the RNG is not forked. This is a convenience
            argument for easily disabling the context manager without having
            to delete it and unindent your Python code under it.
        device_type (str): device type str, default is `cuda`. As for custom device,
            see details in [Note: support the custom device with privateuse1]
    """
    device_type = torch.device(device_type).type
    device_mod = getattr(torch, device_type, None)
    if device_mod is None:
        raise RuntimeError(
            f"torch has no module of `{device_type}`, you should register "
            + "a module by `torch._register_device_module`."
        )
    global _fork_rng_warned_already

    # Internal arguments:
    #   _caller: the function which called fork_rng, which the user used
    #   _devices_kw: the devices keyword of _caller

    if not enabled:
        yield
        return

    if devices is None:
        num_devices = device_mod.device_count()
        if num_devices > 1 and not _fork_rng_warned_already:
            message = (
                f"{device_type.upper()} reports that you have {num_devices} available devices, and "
                f"you have used {_caller} without explicitly specifying which devices are being used. "
                f"For safety, we initialize *every* {device_type.upper()} device by default, which can "
                f"be quite slow if you have a lot of {device_type.upper()}s. If you know that you are only "
                f"making use of a few {device_type.upper()} devices, set the environment variable "
                f"{device_type.upper()}_VISIBLE_DEVICES or the '{_devices_kw}' keyword argument of {_caller} "
                "with the set of devices you are actually using. For example, if you are using CPU only, "
                "set device.upper()_VISIBLE_DEVICES= or devices=[]; if you are using device 0 only, "
                f"set {device_type.upper()}_VISIBLE_DEVICES=0 or devices=[0]. To initialize all devices "
                f"and suppress this warning, set the '{_devices_kw}' keyword argument to "
                f"`range(torch.{device_type}.device_count())`."
            )
            warnings.warn(message)
            _fork_rng_warned_already = True
        devices = list(range(num_devices))
    else:
        # Protect against the user passing us a generator; we need to traverse this
        # multiple times, but a generator would be exhausted on the first traversal.
        devices = list(devices)

    cpu_rng_state = torch.get_rng_state()
    device_rng_states = [device_mod.get_rng_state(device) for device in devices]

    try:
        yield
    finally:
        torch.set_rng_state(cpu_rng_state)
        for device, device_rng_state in zip(devices, device_rng_states):
            device_mod.set_rng_state(device_rng_state, device)
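
# Usage sketch (illustrative only): random numbers drawn inside the `fork_rng`
# block leave the surrounding RNG state untouched, so `b` below is the same
# value it would have been had the block never run. Passing `devices=[]` limits
# the fork to the CPU generator, so no accelerator state is initialized.
#
#     a = torch.rand(1)
#     with torch.random.fork_rng(devices=[]):
#         _ = torch.rand(100)   # scratch draws that should not disturb the run
#     b = torch.rand(1)         # continues the original CPU RNG sequence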