"""PyTorch optimization for BERT model."""

import math
import warnings
from functools import partial
from typing import Optional, Union

import torch
from torch.optim import Optimizer
from torch.optim.lr_scheduler import LambdaLR, ReduceLROnPlateau

from .trainer_pt_utils import LayerWiseDummyOptimizer, LayerWiseDummyScheduler
from .trainer_utils import SchedulerType
from .utils import logging
from .utils.versions import require_version


logger = logging.get_logger(__name__)


def _get_constant_lambda(_=None):
    return 1


def get_constant_schedule(optimizer: Optimizer, last_epoch: int = -1):
    """
    Create a schedule with a constant learning rate, using the learning rate set in optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    return LambdaLR(optimizer, _get_constant_lambda, last_epoch=last_epoch)


def get_reduce_on_plateau_schedule(optimizer: Optimizer, **kwargs):
    """
    Create a schedule with a constant learning rate that decreases when a metric has stopped improving.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        kwargs (`dict`, *optional*): Extra parameters to be passed to the scheduler. See
            `torch.optim.lr_scheduler.ReduceLROnPlateau` for possible parameters.

    Return:
        `torch.optim.lr_scheduler.ReduceLROnPlateau` with the appropriate schedule.
    """
    return ReduceLROnPlateau(optimizer, **kwargs)


def _get_constant_schedule_with_warmup_lr_lambda(current_step: int, *, num_warmup_steps: int):
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1.0, num_warmup_steps))
    return 1.0


def get_constant_schedule_with_warmup(optimizer: Optimizer, num_warmup_steps: int, last_epoch: int = -1):
    """
    Create a schedule with a constant learning rate preceded by a warmup period during which the learning rate
    increases linearly between 0 and the initial lr set in the optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    lr_lambda = partial(_get_constant_schedule_with_warmup_lr_lambda, num_warmup_steps=num_warmup_steps)
    return LambdaLR(optimizer, lr_lambda, last_epoch=last_epoch)


def _get_linear_schedule_with_warmup_lr_lambda(current_step: int, *, num_warmup_steps: int, num_training_steps: int):
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    return max(0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)))


def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1):
    """
    Create a schedule with a learning rate that decreases linearly from the initial lr set in the optimizer to 0,
    after a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        num_training_steps (`int`): The total number of training steps.
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    lr_lambda = partial(
        _get_linear_schedule_with_warmup_lr_lambda,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
    )
    return LambdaLR(optimizer, lr_lambda, last_epoch)
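# --- Illustrative usage (not part of the library API) -----------------------
# A minimal sketch of wiring the linear warmup/decay schedule into a training
# loop. The model, learning rate, and step counts below are assumptions chosen
# purely for illustration.
def _example_linear_warmup_usage():
    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=100, num_training_steps=1000)
    for _ in range(1000):
        loss = model(torch.randn(8, 10)).pow(2).mean()
        loss.backward()
        optimizer.step()
        scheduler.step()  # update the learning rate after every optimizer step
        optimizer.zero_grad()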
def _get_cosine_schedule_with_warmup_lr_lambda(
    current_step: int, *, num_warmup_steps: int, num_training_steps: int, num_cycles: float
):
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress)))


def get_cosine_schedule_with_warmup(
    optimizer: Optimizer, num_warmup_steps: int, num_training_steps: int, num_cycles: float = 0.5, last_epoch: int = -1
):
    """
    Create a schedule with a learning rate that decreases following the values of the cosine function between the
    initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the
    initial lr set in the optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        num_training_steps (`int`): The total number of training steps.
        num_cycles (`float`, *optional*, defaults to 0.5): The number of waves in the cosine schedule (the default is
            to just decrease from the max value to 0 following a half-cosine).
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    lr_lambda = partial(
        _get_cosine_schedule_with_warmup_lr_lambda,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
        num_cycles=num_cycles,
    )
    return LambdaLR(optimizer, lr_lambda, last_epoch)


def _get_cosine_with_hard_restarts_schedule_with_warmup_lr_lambda(
    current_step: int, *, num_warmup_steps: int, num_training_steps: int, num_cycles: int
):
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))
    if progress >= 1.0:
        return 0.0
    return max(0.0, 0.5 * (1.0 + math.cos(math.pi * ((float(num_cycles) * progress) % 1.0))))


def get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer: Optimizer, num_warmup_steps: int, num_training_steps: int, num_cycles: int = 1, last_epoch: int = -1
):
    """
    Create a schedule with a learning rate that decreases following the values of the cosine function between the
    initial lr set in the optimizer to 0, with several hard restarts, after a warmup period during which it increases
    linearly between 0 and the initial lr set in the optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        num_training_steps (`int`): The total number of training steps.
        num_cycles (`int`, *optional*, defaults to 1): The number of hard restarts to use.
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    lr_lambda = partial(
        _get_cosine_with_hard_restarts_schedule_with_warmup_lr_lambda,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
        num_cycles=num_cycles,
    )
    return LambdaLR(optimizer, lr_lambda, last_epoch)
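# --- Illustrative sketch (not part of the library API) ----------------------
# With the default num_cycles=0.5 the post-warmup multiplier follows half a
# cosine wave from 1 down to 0. The step counts below are assumptions chosen
# only to show the shape of the schedule.
def _example_cosine_multipliers():
    fn = partial(
        _get_cosine_schedule_with_warmup_lr_lambda,
        num_warmup_steps=10,
        num_training_steps=110,
        num_cycles=0.5,
    )
    # warmup: 0.0 -> 1.0 linearly; decay: 1.0 -> 0.0 along a half cosine
    return [round(fn(step), 3) for step in (0, 5, 10, 60, 110)]  # -> [0.0, 0.5, 1.0, 0.5, 0.0]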
def _get_polynomial_decay_schedule_with_warmup_lr_lambda(
    current_step: int,
    *,
    num_warmup_steps: int,
    num_training_steps: int,
    lr_end: float,
    power: float,
    lr_init: float,
):
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    elif current_step > num_training_steps:
        return lr_end / lr_init  # as LambdaLR multiplies by lr_init
    else:
        lr_range = lr_init - lr_end
        decay_steps = num_training_steps - num_warmup_steps
        pct_remaining = 1 - (current_step - num_warmup_steps) / decay_steps
        decay = lr_range * pct_remaining**power + lr_end
        return decay / lr_init  # as LambdaLR multiplies by lr_init


def get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps, num_training_steps, lr_end=1e-7, power=1.0, last_epoch=-1
):
    """
    Create a schedule with a learning rate that decreases as a polynomial decay from the initial lr set in the
    optimizer to end lr defined by *lr_end*, after a warmup period during which it increases linearly from 0 to the
    initial lr set in the optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        num_training_steps (`int`): The total number of training steps.
        lr_end (`float`, *optional*, defaults to 1e-7): The end LR.
        power (`float`, *optional*, defaults to 1.0): Power factor.
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Note: *power* defaults to 1.0 as in the fairseq implementation, which in turn is based on the original BERT
    implementation at
    https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    lr_init = optimizer.defaults["lr"]
    if not (lr_init > lr_end):
        raise ValueError(f"lr_end ({lr_end}) must be smaller than initial lr ({lr_init})")

    lr_lambda = partial(
        _get_polynomial_decay_schedule_with_warmup_lr_lambda,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
        lr_end=lr_end,
        power=power,
        lr_init=lr_init,
    )
    return LambdaLR(optimizer, lr_lambda, last_epoch)


def _get_inverse_sqrt_schedule_lr_lambda(current_step: int, *, num_warmup_steps: int, timescale: int = None):
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    shift = timescale - num_warmup_steps
    decay = 1.0 / math.sqrt((current_step + shift) / timescale)
    return decay


def get_inverse_sqrt_schedule(optimizer: Optimizer, num_warmup_steps: int, timescale: int = None, last_epoch: int = -1):
    """
    Create a schedule with an inverse square-root learning rate, from the initial lr set in the optimizer, after a
    warmup period which increases lr linearly from 0 to the initial lr set in the optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        timescale (`int`, *optional*, defaults to `num_warmup_steps`): Time scale.
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    if timescale is None:
        timescale = num_warmup_steps or 10_000

    lr_lambda = partial(_get_inverse_sqrt_schedule_lr_lambda, num_warmup_steps=num_warmup_steps, timescale=timescale)
    return LambdaLR(optimizer, lr_lambda, last_epoch=last_epoch)
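# --- Illustrative sketch (not part of the library API) ----------------------
# After warmup the inverse-sqrt multiplier is 1 / sqrt((step + shift) / timescale)
# with shift = timescale - num_warmup_steps, so it equals 1.0 at the end of warmup
# and then decays slowly. The step counts below are illustrative assumptions.
def _example_inverse_sqrt_multipliers():
    fn = partial(_get_inverse_sqrt_schedule_lr_lambda, num_warmup_steps=1000, timescale=1000)
    return [round(fn(step), 3) for step in (500, 1000, 4000, 16000)]  # -> [0.5, 1.0, 0.5, 0.25]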
def _get_cosine_with_min_lr_schedule_with_warmup_lr_lambda(
    current_step: int,
    *,
    num_warmup_steps: int,
    num_training_steps: int,
    num_cycles: float,
    min_lr_rate: float = 0.0,
):
    if current_step < num_warmup_steps:
        return float(current_step) / float(max(1, num_warmup_steps))
    progress = float(current_step - num_warmup_steps) / float(max(1, num_training_steps - num_warmup_steps))
    factor = 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))
    factor = factor * (1 - min_lr_rate) + min_lr_rate
    return max(0, factor)


def get_cosine_with_min_lr_schedule_with_warmup(
    optimizer: Optimizer,
    num_warmup_steps: int,
    num_training_steps: int,
    num_cycles: float = 0.5,
    last_epoch: int = -1,
    min_lr: Optional[float] = None,
    min_lr_rate: Optional[float] = None,
):
    """
    Create a schedule with a learning rate that decreases following the values of the cosine function between the
    initial lr set in the optimizer to min_lr, after a warmup period during which it increases linearly between 0 and
    the initial lr set in the optimizer.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        num_training_steps (`int`): The total number of training steps.
        num_cycles (`float`, *optional*, defaults to 0.5): The number of waves in the cosine schedule (the default is
            to just decrease from the max value to 0 following a half-cosine).
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.
        min_lr (`float`, *optional*): The minimum learning rate to reach after the cosine schedule.
        min_lr_rate (`float`, *optional*): The minimum learning rate as a ratio of the initial learning rate. If set,
            `min_lr` should not be set.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    if min_lr is not None and min_lr_rate is not None:
        raise ValueError("Only one of min_lr or min_lr_rate should be set")
    elif min_lr is not None:
        min_lr_rate = min_lr / optimizer.defaults["lr"]
    elif min_lr_rate is None:
        raise ValueError("One of min_lr or min_lr_rate should be set through the `lr_scheduler_kwargs`")

    lr_lambda = partial(
        _get_cosine_with_min_lr_schedule_with_warmup_lr_lambda,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
        num_cycles=num_cycles,
        min_lr_rate=min_lr_rate,
    )
    return LambdaLR(optimizer, lr_lambda, last_epoch)
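# --- Illustrative sketch (not part of the library API) ----------------------
# Passing `min_lr` is equivalent to passing `min_lr_rate = min_lr / initial_lr`;
# the cosine multiplier is then rescaled so it ends at `min_lr_rate` instead of 0.
# The optimizer and values below are assumptions chosen for illustration.
def _example_cosine_with_min_lr():
    model = torch.nn.Linear(4, 4)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    scheduler = get_cosine_with_min_lr_schedule_with_warmup(
        optimizer, num_warmup_steps=10, num_training_steps=100, min_lr=1e-4
    )  # internally min_lr_rate = 1e-4 / 1e-3 = 0.1, so the lr never decays below 1e-4
    return scheduler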
def _get_wsd_scheduler_lambda(
    current_step: int,
    *,
    num_warmup_steps: int,
    num_stable_steps: int,
    num_decay_steps: int,
    warmup_type: str,
    decay_type: str,
    min_lr_ratio: float,
    num_cycles: float,
):
    if current_step < num_warmup_steps:
        progress = float(current_step) / float(max(1, num_warmup_steps))
        if warmup_type == "linear":
            factor = progress
        elif warmup_type == "cosine":
            factor = 0.5 * (1.0 - math.cos(math.pi * progress))
        elif warmup_type == "1-sqrt":
            factor = 1.0 - math.sqrt(1.0 - progress)
        factor = factor * (1.0 - min_lr_ratio) + min_lr_ratio
        return max(0.0, factor)

    if current_step < num_warmup_steps + num_stable_steps:
        return 1.0

    if current_step < num_warmup_steps + num_stable_steps + num_decay_steps:
        progress = float(current_step - num_warmup_steps - num_stable_steps) / float(max(1, num_decay_steps))
        if decay_type == "linear":
            factor = 1.0 - progress
        elif decay_type == "cosine":
            factor = 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))
        elif decay_type == "1-sqrt":
            factor = 1.0 - math.sqrt(progress)
        factor = factor * (1.0 - min_lr_ratio) + min_lr_ratio
        return max(0.0, factor)

    return min_lr_ratio


def get_wsd_schedule(
    optimizer: Optimizer,
    num_warmup_steps: int,
    num_decay_steps: int,
    num_training_steps: Optional[int] = None,
    num_stable_steps: Optional[int] = None,
    warmup_type: str = "linear",
    decay_type: str = "cosine",
    min_lr_ratio: float = 0,
    num_cycles: float = 0.5,
    last_epoch: int = -1,
):
    """
    Create a schedule with a learning rate that has three stages:
    1. warmup: increase from min_lr_ratio times the initial learning rate to the initial learning rate following a
       warmup_type.
    2. stable: constant learning rate.
    3. decay: decrease from the initial learning rate to min_lr_ratio times the initial learning rate following a
       decay_type.

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        num_warmup_steps (`int`): The number of steps for the warmup phase.
        num_decay_steps (`int`): The number of steps for the decay phase.
        num_training_steps (`int`, *optional*): The total number of training steps. This is the sum of the warmup,
            stable and decay steps. If `num_stable_steps` is not provided, the stable phase will be
            `num_training_steps - num_warmup_steps - num_decay_steps`.
        num_stable_steps (`int`, *optional*): The number of steps for the stable phase. Please ensure that
            `num_warmup_steps + num_stable_steps + num_decay_steps` equals `num_training_steps`, otherwise the other
            steps will default to the minimum learning rate.
        warmup_type (`str`, *optional*, defaults to "linear"): The type of warmup to use. Can be 'linear', 'cosine'
            or '1-sqrt'.
        decay_type (`str`, *optional*, defaults to "cosine"): The type of decay to use. Can be 'linear', 'cosine' or
            '1-sqrt'.
        min_lr_ratio (`float`, *optional*, defaults to 0): The minimum learning rate as a ratio of the initial
            learning rate.
        num_cycles (`float`, *optional*, defaults to 0.5): The number of waves in the cosine schedule (the default is
            to just decrease from the max value to 0 following a half-cosine).
        last_epoch (`int`, *optional*, defaults to -1): The index of the last epoch when resuming training.

    Return:
        `torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.
    """
    if num_training_steps is None and num_stable_steps is None:
        raise ValueError("Either num_training_steps or num_stable_steps must be specified.")

    if num_training_steps is not None and num_stable_steps is not None:
        warnings.warn("Both num_training_steps and num_stable_steps are specified. num_stable_steps will be used.")

    if warmup_type not in ["linear", "cosine", "1-sqrt"]:
        raise ValueError(f"Unknown warmup type: {warmup_type}, expected 'linear', 'cosine' or '1-sqrt'")

    if decay_type not in ["linear", "cosine", "1-sqrt"]:
        raise ValueError(f"Unknown decay type: {decay_type}, expected 'linear', 'cosine' or '1-sqrt'")

    if num_stable_steps is None:
        num_stable_steps = num_training_steps - num_warmup_steps - num_decay_steps

    lr_lambda = partial(
        _get_wsd_scheduler_lambda,
        num_warmup_steps=num_warmup_steps,
        num_stable_steps=num_stable_steps,
        num_decay_steps=num_decay_steps,
        warmup_type=warmup_type,
        decay_type=decay_type,
        min_lr_ratio=min_lr_ratio,
        num_cycles=num_cycles,
    )
    return LambdaLR(optimizer, lr_lambda, last_epoch)


TYPE_TO_SCHEDULER_FUNCTION = {
    SchedulerType.LINEAR: get_linear_schedule_with_warmup,
    SchedulerType.COSINE: get_cosine_schedule_with_warmup,
    SchedulerType.COSINE_WITH_RESTARTS: get_cosine_with_hard_restarts_schedule_with_warmup,
    SchedulerType.POLYNOMIAL: get_polynomial_decay_schedule_with_warmup,
    SchedulerType.CONSTANT: get_constant_schedule,
    SchedulerType.CONSTANT_WITH_WARMUP: get_constant_schedule_with_warmup,
    SchedulerType.INVERSE_SQRT: get_inverse_sqrt_schedule,
    SchedulerType.REDUCE_ON_PLATEAU: get_reduce_on_plateau_schedule,
    SchedulerType.COSINE_WITH_MIN_LR: get_cosine_with_min_lr_schedule_with_warmup,
    SchedulerType.WARMUP_STABLE_DECAY: get_wsd_schedule,
}


def get_scheduler(
    name: Union[str, SchedulerType],
    optimizer: Optimizer,
    num_warmup_steps: Optional[int] = None,
    num_training_steps: Optional[int] = None,
    scheduler_specific_kwargs: Optional[dict] = None,
):
    """
    Unified API to get any scheduler from its name.

    Args:
        name (`str` or `SchedulerType`): The name of the scheduler to use.
        optimizer (`torch.optim.Optimizer`): The optimizer that will be used during training.
        num_warmup_steps (`int`, *optional*): The number of warmup steps to do. This is not required by all
            schedulers (hence the argument being optional); the function will raise an error if it's unset and the
            scheduler type requires it.
        num_training_steps (`int`, *optional*): The number of training steps to do. This is not required by all
            schedulers (hence the argument being optional); the function will raise an error if it's unset and the
            scheduler type requires it.
        scheduler_specific_kwargs (`dict`, *optional*): Extra parameters for schedulers such as cosine with restarts.
            Mismatched scheduler types and scheduler parameters will cause the scheduler function to raise a
            TypeError.
    """
    name = SchedulerType(name)
    schedule_func = TYPE_TO_SCHEDULER_FUNCTION[name]

    # If a `LayerWiseDummyOptimizer` is passed, extract the optimizer dict and
    # recursively call `get_scheduler` to get the proper scheduler on each parameter.
    if optimizer is not None and isinstance(optimizer, LayerWiseDummyOptimizer):
        optimizer_dict = optimizer.optimizer_dict
        scheduler_dict = {}

        for param in optimizer_dict.keys():
            scheduler_dict[param] = get_scheduler(
                name,
                optimizer=optimizer_dict[param],
                num_warmup_steps=num_warmup_steps,
                num_training_steps=num_training_steps,
            )

        def scheduler_hook(param):
            # The optimizer hook is already attached, so only the scheduler hook is needed here.
            scheduler_dict[param].step()

        for param in optimizer_dict.keys():
            if param.requires_grad:
                param.register_post_accumulate_grad_hook(scheduler_hook)

        return LayerWiseDummyScheduler(optimizer_dict=optimizer_dict, lr=optimizer.defaults["lr"])

    if name == SchedulerType.CONSTANT:
        return schedule_func(optimizer)

    if scheduler_specific_kwargs is None:
        scheduler_specific_kwargs = {}

    if name == SchedulerType.REDUCE_ON_PLATEAU:
        return schedule_func(optimizer, **scheduler_specific_kwargs)

    # All other schedulers require `num_warmup_steps`
    if num_warmup_steps is None:
        raise ValueError(f"{name} requires `num_warmup_steps`, please provide that argument.")

    if name == SchedulerType.CONSTANT_WITH_WARMUP:
        return schedule_func(optimizer, num_warmup_steps=num_warmup_steps)

    if name == SchedulerType.INVERSE_SQRT:
        return schedule_func(optimizer, num_warmup_steps=num_warmup_steps)

    if name == SchedulerType.WARMUP_STABLE_DECAY:
        return schedule_func(
            optimizer,
            num_warmup_steps=num_warmup_steps,
            num_training_steps=num_training_steps,
            **scheduler_specific_kwargs,
        )

    # All other schedulers require `num_training_steps`
    if num_training_steps is None:
        raise ValueError(f"{name} requires `num_training_steps`, please provide that argument.")

    return schedule_func(
        optimizer,
        num_warmup_steps=num_warmup_steps,
        num_training_steps=num_training_steps,
        **scheduler_specific_kwargs,
    )
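# --- Illustrative usage (not part of the library API) -----------------------
# The unified `get_scheduler` entry point dispatches on the scheduler name; the
# optimizer and step counts below are assumptions chosen for illustration.
def _example_get_scheduler():
    model = torch.nn.Linear(10, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    scheduler = get_scheduler(
        "cosine",  # equivalently SchedulerType.COSINE
        optimizer=optimizer,
        num_warmup_steps=100,
        num_training_steps=1000,
    )
    return scheduler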
class Adafactor(Optimizer):
    """
    AdaFactor pytorch implementation can be used as a drop in replacement for Adam original fairseq code:
    https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py

    Paper: *Adafactor: Adaptive Learning Rates with Sublinear Memory Cost* https://arxiv.org/abs/1804.04235 Note that
    this optimizer internally adjusts the learning rate depending on the `scale_parameter`, `relative_step` and
    `warmup_init` options. To use a manual (external) learning rate schedule you should set `scale_parameter=False`
    and `relative_step=False`.

    Arguments:
        params (`Iterable[nn.parameter.Parameter]`): Iterable of parameters to optimize or dictionaries defining
            parameter groups.
        lr (`float`, *optional*): The external learning rate.
        eps (`Tuple[float, float]`, *optional*, defaults to `(1e-30, 0.001)`): Regularization constants for square
            gradient and parameter scale respectively.
        clip_threshold (`float`, *optional*, defaults to 1.0): Threshold of root mean square of final gradient update.
        decay_rate (`float`, *optional*, defaults to -0.8): Coefficient used to compute running averages of square.
        beta1 (`float`, *optional*): Coefficient used for computing running averages of gradient.
        weight_decay (`float`, *optional*, defaults to 0.0): Weight decay (L2 penalty).
        scale_parameter (`bool`, *optional*, defaults to `True`): If True, learning rate is scaled by root mean square.
        relative_step (`bool`, *optional*, defaults to `True`): If True, a time-dependent learning rate is computed
            instead of using an external learning rate.
        warmup_init (`bool`, *optional*, defaults to `False`): Time-dependent learning rate computation depends on
            whether warm-up initialization is being used.

    This implementation handles low-precision (FP16, bfloat) values, but we have not thoroughly tested it.

    Recommended T5 finetuning settings (https://discuss.huggingface.co/t/t5-finetuning-tips/684/3):

        - Training without LR warmup or clip_threshold is not recommended.

            - use scheduled LR warm-up to fixed LR
            - use clip_threshold=1.0 (https://arxiv.org/abs/1804.04235)
        - Disable relative updates
        - Use scale_parameter=False
        - Additional optimizer operations like gradient clipping should not be used alongside Adafactor

    Example:

    ```python
    Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3)
    ```

    Others reported the following combination to work well:

    ```python
    Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
    ```

    When using `lr=None` with [`Trainer`] you will most likely need to use [`~optimization.AdafactorSchedule`]
    scheduler as following:

    ```python
    from transformers.optimization import Adafactor, AdafactorSchedule

    optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
    lr_scheduler = AdafactorSchedule(optimizer)
    trainer = Trainer(..., optimizers=(optimizer, lr_scheduler))
    ```

    Usage:

    ```python
    # replace AdamW with Adafactor
    optimizer = Adafactor(
        model.parameters(),
        lr=1e-3,
        eps=(1e-30, 1e-3),
        clip_threshold=1.0,
        decay_rate=-0.8,
        beta1=None,
        weight_decay=0.0,
        relative_step=False,
        scale_parameter=False,
        warmup_init=False,
    )
    ```
    """

    def __init__(
        self,
        params,
        lr=None,
        eps=(1e-30, 1e-3),
        clip_threshold=1.0,
        decay_rate=-0.8,
        beta1=None,
        weight_decay=0.0,
        scale_parameter=True,
        relative_step=True,
        warmup_init=False,
    ):
        require_version("torch>=1.5.0")
        if lr is not None and relative_step:
            raise ValueError("Cannot combine manual `lr` and `relative_step=True` options")
        if warmup_init and not relative_step:
            raise ValueError("`warmup_init=True` requires `relative_step=True`")

        defaults = {
            "lr": lr,
            "eps": eps,
            "clip_threshold": clip_threshold,
            "decay_rate": decay_rate,
            "beta1": beta1,
            "weight_decay": weight_decay,
            "scale_parameter": scale_parameter,
            "relative_step": relative_step,
            "warmup_init": warmup_init,
        }
        super().__init__(params, defaults)

    @staticmethod
    def _get_lr(param_group, param_state):
        rel_step_sz = param_group["lr"]
        if param_group["relative_step"]:
            min_step = 1e-6 * param_state["step"] if param_group["warmup_init"] else 1e-2
            rel_step_sz = min(min_step, 1.0 / math.sqrt(param_state["step"]))
        param_scale = 1.0
        if param_group["scale_parameter"]:
            param_scale = max(param_group["eps"][1], param_state["RMS"])
        return param_scale * rel_step_sz

    @staticmethod
    def _get_options(param_group, param_shape):
        factored = len(param_shape) >= 2
        use_first_moment = param_group["beta1"] is not None
        return factored, use_first_moment

    @staticmethod
    def _rms(tensor):
        return tensor.norm(2) / (tensor.numel() ** 0.5)

    @staticmethod
    def _approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col):
        r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1)
        c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()
        return torch.mul(r_factor, c_factor)
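    # --- Illustrative sketch (not part of the library API) ------------------
    # For a 2-D parameter, Adafactor keeps only the per-row and per-column
    # running means of the squared gradients and rebuilds the inverse square
    # root of the full second-moment estimate from their normalized outer
    # product, e.g.:
    #
    #     grad_sq = grad ** 2                       # (rows, cols)
    #     exp_avg_sq_row ~ grad_sq.mean(dim=-1)     # (rows,)
    #     exp_avg_sq_col ~ grad_sq.mean(dim=-2)     # (cols,)
    #     scaling = Adafactor._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
    #
    # which stores O(rows + cols) values instead of O(rows * cols).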
    @torch.no_grad()
    def step(self, closure=None):
        """
        Performs a single optimization step

        Arguments:
            closure (callable, optional): A closure that reevaluates the model and returns the loss.
        """
        loss = None
        if closure is not None:
            loss = closure()

        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                grad = p.grad
                if grad.dtype in {torch.float16, torch.bfloat16}:
                    grad = grad.float()
                if grad.is_sparse:
                    raise RuntimeError("Adafactor does not support sparse gradients.")

                state = self.state[p]
                grad_shape = grad.shape

                factored, use_first_moment = self._get_options(group, grad_shape)
                # State initialization
                if len(state) == 0:
                    state["step"] = 0
                    if use_first_moment:
                        # Exponential moving average of gradient values
                        state["exp_avg"] = torch.zeros_like(grad)
                    if factored:
                        state["exp_avg_sq_row"] = torch.zeros(grad_shape[:-1]).to(grad)
                        state["exp_avg_sq_col"] = torch.zeros(grad_shape[:-2] + grad_shape[-1:]).to(grad)
                    else:
                        state["exp_avg_sq"] = torch.zeros_like(grad)
                    state["RMS"] = 0
                else:
                    if use_first_moment:
                        state["exp_avg"] = state["exp_avg"].to(grad)
                    if factored:
                        state["exp_avg_sq_row"] = state["exp_avg_sq_row"].to(grad)
                        state["exp_avg_sq_col"] = state["exp_avg_sq_col"].to(grad)
                    else:
                        state["exp_avg_sq"] = state["exp_avg_sq"].to(grad)

                p_data_fp32 = p
                if p.dtype in {torch.float16, torch.bfloat16}:
                    p_data_fp32 = p_data_fp32.float()

                state["step"] += 1
                state["RMS"] = self._rms(p_data_fp32)
                lr = self._get_lr(group, state)

                beta2t = 1.0 - math.pow(state["step"], group["decay_rate"])
                update = (grad**2) + group["eps"][0]
                if factored:
                    exp_avg_sq_row = state["exp_avg_sq_row"]
                    exp_avg_sq_col = state["exp_avg_sq_col"]

                    exp_avg_sq_row.mul_(beta2t).add_(update.mean(dim=-1), alpha=(1.0 - beta2t))
                    exp_avg_sq_col.mul_(beta2t).add_(update.mean(dim=-2), alpha=(1.0 - beta2t))

                    # Approximation of exponential moving average of square of gradient
                    update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
                    update.mul_(grad)
                else:
                    exp_avg_sq = state["exp_avg_sq"]

                    exp_avg_sq.mul_(beta2t).add_(update, alpha=(1.0 - beta2t))
                    update = exp_avg_sq.rsqrt().mul_(grad)

                update.div_((self._rms(update) / group["clip_threshold"]).clamp_(min=1.0))
                update.mul_(lr)

                if use_first_moment:
                    exp_avg = state["exp_avg"]
                    exp_avg.mul_(group["beta1"]).add_(update, alpha=(1 - group["beta1"]))
                    update = exp_avg

                if group["weight_decay"] != 0:
                    p_data_fp32.add_(p_data_fp32, alpha=(-group["weight_decay"] * lr))

                p_data_fp32.add_(-update)

                if p.dtype in {torch.float16, torch.bfloat16}:
                    p.copy_(p_data_fp32)

        return loss
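# --- Illustrative sketch (not part of the library API) ----------------------
# With `relative_step=True` and `warmup_init=True`, Adafactor derives its own
# step size: roughly min(1e-6 * step, 1 / sqrt(step)), optionally scaled by the
# parameter's root mean square when `scale_parameter=True`. The group/state
# dictionaries below mimic the internal structures for illustration only.
def _example_adafactor_internal_lr(step: int = 10_000):
    group = {"lr": None, "relative_step": True, "warmup_init": True, "scale_parameter": False, "eps": (1e-30, 1e-3)}
    state = {"step": step, "RMS": 1.0}
    return Adafactor._get_lr(group, state)  # ~1e-2 early on, then decays as 1/sqrt(step)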
class AdafactorSchedule(LambdaLR):
    """
    Since [`~optimization.Adafactor`] performs its own scheduling, if the training loop relies on a scheduler (e.g.,
    for logging), this class creates a proxy object that retrieves the current lr values from the optimizer.

    It returns `initial_lr` during startup and the actual `lr` during stepping.
    """

    def __init__(self, optimizer, initial_lr=0.0):
        def lr_lambda(_):
            return initial_lr

        for group in optimizer.param_groups:
            group["initial_lr"] = initial_lr
        super().__init__(optimizer, lr_lambda)
        for group in optimizer.param_groups:
            del group["initial_lr"]

    def get_lr(self):
        opt = self.optimizer
        lrs = [
            opt._get_lr(group, opt.state[group["params"][0]])
            for group in opt.param_groups
            if group["params"][0].grad is not None
        ]
        if len(lrs) == 0:
            lrs = self.base_lrs  # if called before stepping
        return lrs


def get_adafactor_schedule(optimizer, initial_lr=0.0):
    """
    Get a proxy schedule for [`~optimization.Adafactor`]

    Args:
        optimizer ([`~torch.optim.Optimizer`]): The optimizer for which to schedule the learning rate.
        initial_lr (`float`, *optional*, defaults to 0.0): Initial lr

    Return:
        [`~optimization.Adafactor`] proxy schedule object.
    """
    return AdafactorSchedule(optimizer, initial_lr)
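# --- Illustrative usage (not part of the library API) -----------------------
# The proxy schedule simply reports the lr values Adafactor computed internally;
# the model below is an assumption chosen for illustration.
def _example_adafactor_schedule_usage():
    model = torch.nn.Linear(10, 2)
    optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
    lr_scheduler = get_adafactor_schedule(optimizer, initial_lr=0.0)
    model(torch.randn(4, 10)).pow(2).mean().backward()
    optimizer.step()
    return lr_scheduler.get_lr()  # current internal Adafactor lr(s), one value per param group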