import warnings

from .state import AcceleratorState, GradientState

warnings.filterwarnings("ignore", category=UserWarning, module="torch.optim.lr_scheduler")


class AcceleratedScheduler:
    """
    A wrapper around a learning rate scheduler that will only step when the optimizer(s) have a training step. Useful
    to avoid making a scheduler step too fast when gradients overflowed and there was no training step (in mixed
    precision training).

    When performing gradient accumulation, scheduler lengths should not be changed accordingly; Accelerate will always
    step the scheduler to account for it.

    Args:
        scheduler (`torch.optim.lr_scheduler._LRScheduler`):
            The scheduler to wrap.
        optimizers (one or a list of `torch.optim.Optimizer`):
            The optimizers used.
        step_with_optimizer (`bool`, *optional*, defaults to `True`):
            Whether or not the scheduler should be stepped at each optimizer step.
        split_batches (`bool`, *optional*, defaults to `False`):
            Whether or not the dataloaders split one batch across the different processes (so batch size is the same
            regardless of the number of processes) or create batches on each process (so batch size is the original
            batch size multiplied by the number of processes).
    """

    def __init__(self, scheduler, optimizers, step_with_optimizer: bool = True, split_batches: bool = False):
        self.scheduler = scheduler
        self.optimizers = optimizers if isinstance(optimizers, (list, tuple)) else [optimizers]
        self.split_batches = split_batches
        self.step_with_optimizer = step_with_optimizer
        self.gradient_state = GradientState()

    def step(self, *args, **kwargs):
        if not self.step_with_optimizer:
            # No link between scheduler and optimizer -> just step
            self.scheduler.step(*args, **kwargs)
            return

        # Otherwise, first make sure the optimizer was stepped.
        if not self.gradient_state.sync_gradients:
            if self.gradient_state.adjust_scheduler:
                self.scheduler._step_count += 1
            return

        for opt in self.optimizers:
            if opt.step_was_skipped:
                return
        if self.split_batches:
            # Split batches -> the training dataloader batch size is not changed so one step per training step
            self.scheduler.step(*args, **kwargs)
        else:
            # Otherwise the training dataloader batch size was multiplied by `num_processes`, so we need to do
            # `num_processes` steps per training step
            num_processes = AcceleratorState().num_processes
            for _ in range(num_processes):
                # Special case when using OneCycle and `drop_last` was not used
                if hasattr(self.scheduler, "total_steps"):
                    if self.scheduler._step_count <= self.scheduler.total_steps:
                        self.scheduler.step(*args, **kwargs)
                else:
                    self.scheduler.step(*args, **kwargs)

    def get_last_lr(self):
        return self.scheduler.get_last_lr()

    def state_dict(self):
        return self.scheduler.state_dict()

    def load_state_dict(self, state_dict):
        self.scheduler.load_state_dict(state_dict)

    def get_lr(self):
        return self.scheduler.get_lr()