from __future__ import print_function

from copy import copy

from ..libmp.backend import xrange

class OptimizationMethods(object):
    def __init__(ctx):
        pass


class Newton:
    """
    1d-solver generating pairs of approximative root and error.

    Needs starting points x0 close to the root.

    Pro:

    * converges fast
    * sometimes more robust than secant with bad second starting point

    Contra:

    * converges slowly for multiple roots
    * needs first derivative
    * 2 function evaluations per iteration
    """
    maxsteps = 20

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if len(x0) == 1:
            self.x0 = x0[0]
        else:
            raise ValueError('expected 1 starting point, got %i' % len(x0))
        self.f = f
        if not 'df' in kwargs:
            def df(x):
                return self.ctx.diff(f, x)
        else:
            df = kwargs['df']
        self.df = df

    def __iter__(self):
        f = self.f
        df = self.df
        x0 = self.x0
        while True:
            x1 = x0 - f(x0) / df(x0)
            error = abs(x1 - x0)
            x0 = x1
            yield (x1, error)


class Secant:
    """
    1d-solver generating pairs of approximative root and error.

    Needs starting points x0 and x1 close to the root.
    x1 defaults to x0 + 0.25.

    Pro:

    * converges fast

    Contra:

    * converges slowly for multiple roots
    """
    maxsteps = 30

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if len(x0) == 1:
            self.x0 = x0[0]
            self.x1 = self.x0 + 0.25
        elif len(x0) == 2:
            self.x0 = x0[0]
            self.x1 = x0[1]
        else:
            raise ValueError('expected 1 or 2 starting points, got %i' % len(x0))
        self.f = f

    def __iter__(self):
        f = self.f
        x0 = self.x0
        x1 = self.x1
        f0 = f(x0)
        while True:
            f1 = f(x1)
            l = x1 - x0
            if not l:
                break
            s = (f1 - f0) / l
            if not s:
                break
            x0, x1 = x1, x1 - f1/s
            f0 = f1
            yield x1, abs(l)


class MNewton:
    """
    1d-solver generating pairs of approximative root and error.

    Needs starting point x0 close to the root.
    Uses modified Newton's method that converges fast regardless of the
    multiplicity of the root.

    Pro:

    * converges fast for multiple roots

    Contra:

    * needs first and second derivative of f
    * 3 function evaluations per iteration
    """
    maxsteps = 20

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if not len(x0) == 1:
            raise ValueError('expected 1 starting point, got %i' % len(x0))
        self.x0 = x0[0]
        self.f = f
        if not 'df' in kwargs:
            def df(x):
                return self.ctx.diff(f, x)
        else:
            df = kwargs['df']
        self.df = df
        if not 'd2f' in kwargs:
            def d2f(x):
                return self.ctx.diff(df, x)
        else:
            d2f = kwargs['d2f']
        self.d2f = d2f

    def __iter__(self):
        x = self.x0
        f = self.f
        df = self.df
        d2f = self.d2f
        while True:
            prevx = x
            fx = f(x)
            if fx == 0:
                break
            dfx = df(x)
            d2fx = d2f(x)
            # x = x - F(x)/F'(x) with F(x) = f(x)/f'(x)
            x -= fx / (dfx - fx * d2fx / dfx)
            error = abs(x - prevx)
            yield x, error


class Halley:
    """
    1d-solver generating pairs of approximative root and error.

    Needs a starting point x0 close to the root.
    Uses Halley's method with cubic convergence rate.

    Pro:

    * converges even faster than Newton's method
    * useful when computing with *many* digits

    Contra:

    * needs first and second derivative of f
    * 3 function evaluations per iteration
    * converges slowly for multiple roots
    """
    maxsteps = 20

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if not len(x0) == 1:
            raise ValueError('expected 1 starting point, got %i' % len(x0))
        self.x0 = x0[0]
        self.f = f
        if not 'df' in kwargs:
            def df(x):
                return self.ctx.diff(f, x)
        else:
            df = kwargs['df']
        self.df = df
        if not 'd2f' in kwargs:
            def d2f(x):
                return self.ctx.diff(df, x)
        else:
            d2f = kwargs['d2f']
        self.d2f = d2f

    def __iter__(self):
        x = self.x0
        f = self.f
        df = self.df
        d2f = self.d2f
        while True:
            prevx = x
            fx = f(x)
            dfx = df(x)
            d2fx = d2f(x)
            x -= 2*fx*dfx / (2*dfx**2 - fx*d2fx)
            error = abs(x - prevx)
            yield x, error


class Muller:
    """
    1d-solver generating pairs of approximative root and error.

    Needs starting points x0, x1 and x2 close to the root.
    x1 defaults to x0 + 0.25; x2 to x1 + 0.25.
    Uses Muller's method that converges towards complex roots.

    Pro:

    * converges fast (somewhat faster than secant)
    * can find complex roots

    Contra:

    * converges slowly for multiple roots
    * may have complex values for real starting points and real roots

    http://en.wikipedia.org/wiki/Muller's_method
    """
    maxsteps = 30

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if len(x0) == 1:
            self.x0 = x0[0]
            self.x1 = self.x0 + 0.25
            self.x2 = self.x1 + 0.25
        elif len(x0) == 2:
            self.x0 = x0[0]
            self.x1 = x0[1]
            self.x2 = self.x1 + 0.25
        elif len(x0) == 3:
            self.x0 = x0[0]
            self.x1 = x0[1]
            self.x2 = x0[2]
        else:
            raise ValueError('expected 1, 2 or 3 starting points, got %i'
                             % len(x0))
        self.f = f
        self.verbose = kwargs['verbose']

    def __iter__(self):
        f = self.f
        x0 = self.x0
        x1 = self.x1
        x2 = self.x2
        fx0 = f(x0)
        fx1 = f(x1)
        fx2 = f(x2)
        while True:
            # calculate divided differences
            fx2x1 = (fx1 - fx2) / (x1 - x2)
            fx2x0 = (fx0 - fx2) / (x0 - x2)
            fx1x0 = (fx0 - fx1) / (x0 - x1)
            w = fx2x1 + fx2x0 - fx1x0
            fx2x1x0 = (fx1x0 - fx2x1) / (x0 - x2)
            if w == 0 and fx2x1x0 == 0:
                if self.verbose:
                    print('canceled with')
                    print('x0 =', x0, ', x1 =', x1, 'and x2 =', x2)
                break
            x0 = x1
            fx0 = fx1
            x1 = x2
            fx1 = fx2
            # denominator should be as large as possible => choose sign
            r = self.ctx.sqrt(w**2 - 4*fx2*fx2x1x0)
            if abs(w - r) > abs(w + r):
                r = -r
            x2 -= 2*fx2 / (w + r)
            fx2 = f(x2)
            error = abs(x2 - x1)
            yield x2, error


class Bisection:
    """
    1d-solver generating pairs of approximative root and error.

    Uses bisection method to find a root of f in [a, b].
    Might fail for multiple roots (needs sign change).

    Pro:

    * robust and reliable

    Contra:

    * converges slowly
    * needs sign change
    """
    maxsteps = 100

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if len(x0) != 2:
            raise ValueError('expected interval of 2 points, got %i' % len(x0))
        self.f = f
        self.a = x0[0]
        self.b = x0[1]

    def __iter__(self):
        f = self.f
        a = self.a
        b = self.b
        l = b - a
        fb = f(b)
        while True:
            m = self.ctx.ldexp(a + b, -1)
            fm = f(m)
            sign = fm * fb
            if sign < 0:
                a = m
            elif sign > 0:
                b = m
                fb = fm
            else:
                yield m, self.ctx.zero
            l /= 2
            yield (a + b)/2, abs(l)


def _getm(method):
    """
    Return a function to calculate m for Illinois-like methods.
    """
    if method == 'illinois':
        def getm(fz, fb):
            return 0.5
    elif method == 'pegasus':
        def getm(fz, fb):
            return fb/(fb + fz)
    elif method == 'anderson':
        def getm(fz, fb):
            m = 1 - fz/fb
            if m > 0:
                return m
            else:
                return 0.5
    else:
        raise ValueError("method '%s' not recognized" % method)
    return getm


class Illinois:
    """
    1d-solver generating pairs of approximative root and error.

    Uses Illinois method or similar to find a root of f in [a, b].
    Might fail for multiple roots (needs sign change).
    Combines bisect with secant (improved regula falsi).

    The only difference between the methods is the scaling factor m, which is
    used to ensure convergence (you can choose one using the 'method' keyword):

    Illinois method ('illinois'):
        m = 0.5

    Pegasus method ('pegasus'):
        m = fb/(fb + fz)

    Anderson-Bjoerk method ('anderson'):
        m = 1 - fz/fb if positive else 0.5

    Pro:

    * converges very fast

    Contra:

    * has problems with multiple roots
    * needs sign change
    """
    maxsteps = 30

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if len(x0) != 2:
            raise ValueError('expected interval of 2 points, got %i' % len(x0))
        self.a = x0[0]
        self.b = x0[1]
        self.f = f
        self.tol = kwargs['tol']
        self.verbose = kwargs['verbose']
        self.method = kwargs.get('method', 'illinois')
        self.getm = _getm(self.method)
        if self.verbose:
            print('using %s method' % self.method)

    def __iter__(self):
        getm = self.getm
        f = self.f
        a = self.a
        b = self.b
        fa = f(a)
        fb = f(b)
        m = None
        while True:
            l = b - a
            if l == 0:
                break
            s = (fb - fa) / l
            z = a - fa/s
            fz = f(z)
            if abs(fz) < self.tol:
                if self.verbose:
                    print('canceled with z =', z)
                yield z, l
                break
            if fz * fb < 0:  # root in [z, b]
                a = b
                fa = fb
                b = z
                fb = fz
            else:  # root in [a, z]
                m = getm(fz, fb)
                b = z
                fb = fz
                fa = m*fa  # scale down to ensure convergence
            if self.verbose and m and self.method != 'illinois':
                print('m:', m)
            yield (a + b)/2, abs(l)


def Pegasus(*args, **kwargs):
    """
    1d-solver generating pairs of approximative root and error.

    Uses Pegasus method to find a root of f in [a, b].
    Wrapper for illinois to use method='pegasus'.
    """
    kwargs['method'] = 'pegasus'
    return Illinois(*args, **kwargs)


def Anderson(*args, **kwargs):
    """
    1d-solver generating pairs of approximative root and error.

    Uses Anderson-Bjoerk method to find a root of f in [a, b].
    Wrapper for illinois to use method='anderson'.
    """
    kwargs['method'] = 'anderson'
    return Illinois(*args, **kwargs)


class Ridder:
    """
    1d-solver generating pairs of approximative root and error.

    Ridders' method to find a root of f in [a, b].
    Is told to perform as well as Brent's method while being simpler.

    Pro:

    * very fast
    * simpler than Brent's method

    Contra:

    * two function evaluations per step
    * has problems with multiple roots
    * needs sign change

    http://en.wikipedia.org/wiki/Ridders'_method
    """
    maxsteps = 30

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        self.f = f
        if len(x0) != 2:
            raise ValueError('expected interval of 2 points, got %i' % len(x0))
        self.x1 = x0[0]
        self.x2 = x0[1]
        self.verbose = kwargs['verbose']
        self.tol = kwargs['tol']

    def __iter__(self):
        ctx = self.ctx
        f = self.f
        x1 = self.x1
        fx1 = f(x1)
        x2 = self.x2
        fx2 = f(x2)
        while True:
            x3 = 0.5*(x1 + x2)
            fx3 = f(x3)
            x4 = x3 + (x3 - x1) * ctx.sign(fx1 - fx2) * fx3 \
                 / ctx.sqrt(fx3**2 - fx1*fx2)
            fx4 = f(x4)
            if abs(fx4) < self.tol:
                if self.verbose:
                    print('canceled with f(x4) =', fx4)
                yield x4, abs(x1 - x2)
                break
            if fx4 * fx2 < 0:  # root in [x4, x2]
                x1 = x4
                fx1 = fx4
            else:  # root in [x1, x4]
                x2 = x4
                fx2 = fx4
            error = abs(x1 - x2)
            yield (x1 + x2)/2, error


class ANewton:
    """
    EXPERIMENTAL 1d-solver generating pairs of approximative root and error.

    Uses Newton's method modified to use Steffensen's method when convergence
    is slow. (I.e. for multiple roots.)
    """
    maxsteps = 20

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        if not len(x0) == 1:
            raise ValueError('expected 1 starting point, got %i' % len(x0))
        self.x0 = x0[0]
        self.f = f
        if not 'df' in kwargs:
            def df(x):
                return self.ctx.diff(f, x)
        else:
            df = kwargs['df']
        self.df = df
        def phi(x):
            return x - f(x) / df(x)
        self.phi = phi
        self.verbose = kwargs['verbose']

    def __iter__(self):
        x0 = self.x0
        f = self.f
        phi = self.phi
        error = 0
        counter = 0
        while True:
            prevx = x0
            try:
                x0 = phi(x0)
            except ZeroDivisionError:
                if self.verbose:
                    print('ZeroDivisionError: canceled with x =', x0)
                break
            preverror = error
            error = abs(prevx - x0)
            if error and abs(error - preverror) / error < 1:
                if self.verbose:
                    print('converging slowly')
                counter += 1
            if counter > 3:
                # accelerate convergence
                phi = steffensen(phi)
                counter = 0
                if self.verbose:
                    print('accelerating convergence')
            yield x0, error


def jacobian(ctx, f, x):
    """
    Calculate the Jacobian matrix of a function at the point x0.

    This is the first derivative of a vectorial function:

        f : R^m -> R^n with m >= n
    """
    x = ctx.matrix(x)
    h = ctx.sqrt(ctx.eps)
    fx = ctx.matrix(f(*x))
    m = len(fx)
    n = len(x)
    J = ctx.matrix(m, n)
    for j in xrange(n):
        xj = x.copy()
        xj[j] += h
        Jj = (ctx.matrix(f(*xj)) - fx) / h
        for i in xrange(m):
            J[i,j] = Jj[i]
    return J


class MDNewton:
    """
    Find the root of a vector function numerically using Newton's method.

    f is a vector function representing a nonlinear equation system.
    x0 is the starting point close to the root.
    J is a function returning the Jacobian matrix for a point.

    Supports overdetermined systems.

    Use the 'norm' keyword to specify which norm to use. Defaults to max-norm.
    The function to calculate the Jacobian matrix can be given using the
    keyword 'J'. Otherwise it will be calculated numerically.

    Please note that this method converges only locally. Especially for high-
    dimensional systems it is not trivial to find a good starting point being
    close enough to the root.

    It is recommended to use a faster, low-precision solver from SciPy [1] or
    OpenOpt [2] to get an initial guess. Afterwards you can use this method for
    root-polishing to any precision.

    [1] http://scipy.org

    [2] http://openopt.org/Welcome
    """
    maxsteps = 10

    def __init__(self, ctx, f, x0, **kwargs):
        self.ctx = ctx
        self.f = f
        if isinstance(x0, (tuple, list)):
            x0 = ctx.matrix(x0)
        assert x0.cols == 1, 'need a vector'
        self.x0 = x0
        if 'J' in kwargs:
            self.J = kwargs['J']
        else:
            def J(*x):
                return ctx.jacobian(f, x)
            self.J = J
        self.norm = kwargs['norm']
        self.verbose = kwargs['verbose']

    def __iter__(self):
        f = self.f
        x0 = self.x0
        norm = self.norm
        J = self.J
        fx = self.ctx.matrix(f(*x0))
        fxnorm = norm(fx)
        cancel = False
        while not cancel:
            # get direction of descent
            fxn = -fx
            Jx = J(*x0)
            s = self.ctx.lu_solve(Jx, fxn)
            if self.verbose:
                print('Jx:')
                print(Jx)
                print('s:', s)
            # damping step size
            l = self.ctx.one
            x1 = x0 + s
            while True:
                if x1 == x0:
                    if self.verbose:
                        print("canceled, won't get more exact")
                    cancel = True
                    break
                fx = self.ctx.matrix(f(*x1))
                newnorm = norm(fx)
                if newnorm < fxnorm:
                    # new x accepted
                    fxnorm = newnorm
                    x0 = x1
                    break
                l /= 2
                x1 = x0 + l*s
            yield x0, fxnorm


str2solver = dict(newton=Newton, secant=Secant, mnewton=MNewton,
                  halley=Halley, muller=Muller, bisect=Bisection,
                  illinois=Illinois, pegasus=Pegasus, anderson=Anderson,
                  ridder=Ridder, anewton=ANewton, mdnewton=MDNewton)


def findroot(ctx, f, x0, solver='secant', tol=None, verbose=False,
             verify=True, **kwargs):
    r"""
    Find an approximate solution to `f(x) = 0`, using *x0* as starting point or
    interval for *x*.

    Multidimensional overdetermined systems are supported.
    You can specify them using a function or a list of functions.

    Mathematically speaking, this function returns `x` such that
    `|f(x)|^2 \leq \mathrm{tol}` is true within the current working precision.
    If the computed value does not meet this criterion, an exception is raised.
    This exception can be disabled with *verify=False*.

    For interval arithmetic (``iv.findroot()``), please note that
    the returned interval ``x`` is not guaranteed to contain `f(x)=0`!
    It is only some `x` for which `|f(x)|^2 \leq \mathrm{tol}` certainly holds
    regardless of numerical error. This may be improved in the future.

    **Arguments**

    *f*
        one dimensional function
    *x0*
        starting point, several starting points or interval (depends on solver)
    *tol*
        the returned solution has an error smaller than this
    *verbose*
        print additional information for each iteration if true
    *verify*
        verify the solution and raise a ValueError if `|f(x)|^2 > \mathrm{tol}`
    *solver*
        a generator for *f* and *x0* returning approximative solution and error
    *maxsteps*
        after how many steps the solver will cancel
    *df*
        first derivative of *f* (used by some solvers)
    *d2f*
        second derivative of *f* (used by some solvers)
    *multidimensional*
        force multidimensional solving
    *J*
        Jacobian matrix of *f* (used by multidimensional solvers)
    *norm*
        used vector norm (used by multidimensional solvers)

    solver has to be callable with ``(f, x0, **kwargs)`` and return a
    generator yielding pairs of approximative solution and estimated error
    (which is expected to be positive).
    You can use the following string aliases:
    'secant', 'mnewton', 'halley', 'muller', 'illinois', 'pegasus', 'anderson',
    'ridder', 'anewton', 'bisect'

    See mpmath.calculus.optimization for their documentation.

    **Examples**

    The function :func:`~mpmath.findroot` locates a root of a given function
    using the secant method by default. A simple example use of the secant
    method is to compute `\pi` as the root of `\sin x` closest to `x_0 = 3`::

        >>> from mpmath import *
        >>> mp.dps = 30; mp.pretty = True
        >>> findroot(sin, 3)
        3.14159265358979323846264338328

    The secant method can be used to find complex roots of analytic functions,
    although it must in that case generally be given a nonreal starting value
    (or else it will never leave the real line)::

        >>> mp.dps = 15
        >>> findroot(lambda x: x**3 + 2*x + 1, j)
        (0.226698825758202 + 1.46771150871022j)

    A nice application is to compute nontrivial roots of the Riemann zeta
    function with many digits (good initial values are needed for
    convergence)::

        >>> mp.dps = 30
        >>> findroot(zeta, 0.5+14j)
        (0.5 + 14.1347251417346937904572519836j)

    The secant method can also be used as an optimization algorithm, by passing
    it a derivative of a function. The following example locates the positive
    minimum of the gamma function::

        >>> mp.dps = 20
        >>> findroot(lambda x: diff(gamma, x), 1)
        1.4616321449683623413

    Finally, a useful application is to compute inverse functions, such as the
    Lambert W function which is the inverse of `w e^w`, given the first
    term of the solution's asymptotic expansion as the initial value. In basic
    cases, this gives identical results to mpmath's built-in ``lambertw``
    function::

        >>> def lambert(x):
        ...     return findroot(lambda w: w*exp(w) - x, log(1+x))
        ...
        >>> mp.dps = 15
        >>> lambert(1); lambertw(1)
        0.567143290409784
        0.567143290409784
        >>> lambert(1000); lambertw(1000)
        5.2496028524016
        5.2496028524016

    Multidimensional functions are also supported::

        >>> f = [lambda x1, x2: x1**2 + x2,
        ...      lambda x1, x2: 5*x1**2 - 3*x1 + 2*x2 - 3]
        >>> findroot(f, (0, 0))
        [-0.618033988749895]
        [-0.381966011250105]
        >>> findroot(f, (10, 10))
        [ 1.61803398874989]
        [-2.61803398874989]

    You can verify this by solving the system manually.

    Please note that the following (more general) syntax also works::

        >>> def f(x1, x2):
        ...     return x1**2 + x2, 5*x1**2 - 3*x1 + 2*x2 - 3
        ...
        >>> findroot(f, (0, 0))
        [-0.618033988749895]
        [-0.381966011250105]

    **Multiple roots**

    For multiple roots all methods of the Newtonian family (including secant)
    converge slowly. Consider this example::

        >>> f = lambda x: (x - 1)**99
        >>> findroot(f, 0.9, verify=False)
        0.918073542444929

    Even for a very close starting point the secant method converges very
    slowly. Use ``verbose=True`` to illustrate this.

    It is possible to modify Newton's method to make it converge regardless of
    the root's multiplicity::

        >>> findroot(f, -10, solver='mnewton')
        1.0

    This variant uses the first and second derivative of the function, which is
    not very efficient.

    Alternatively you can use an experimental Newtonian solver that keeps track
    of the speed of convergence and accelerates it using Steffensen's method if
    necessary::

        >>> findroot(f, -10, solver='anewton', verbose=True)
        x:     -9.88888888888888888889
        error: 0.111111111111111111111
        converging slowly
        x:     -9.77890011223344556678
        error: 0.10998877665544332211
        converging slowly
        x:     -9.67002233332199662166
        error: 0.108877778911448945119
        converging slowly
        accelerating convergence
        x:     -9.5622443299551077669
        error: 0.107778003366888854764
        converging slowly
        x:     0.99999999999999999214
        error: 10.562244329955107759
        x:     1.0
        error: 7.8598304758094664213e-18
        ZeroDivisionError: canceled with x = 1.0
        1.0

    **Complex roots**

    For complex roots it's recommended to use Muller's method as it converges
    even for real starting points very fast::

        >>> findroot(lambda x: x**4 + x + 1, (0, 1, 2), solver='muller')
        (0.727136084491197 + 0.934099289460529j)

    **Intersection methods**

    When you need to find a root in a known interval, it's highly recommended
    to use an intersection-based solver like ``'anderson'`` or ``'ridder'``.
    Usually they converge faster and more reliably. They have however problems
    with multiple roots and usually need a sign change to find a root::

        >>> findroot(lambda x: x**3, (-1, 1), solver='anderson')
        0.0

    Be careful with symmetric functions::

        >>> findroot(lambda x: x**2, (-1, 1), solver='anderson') #doctest:+ELLIPSIS
        Traceback (most recent call last):
          ...
        ZeroDivisionError

    It fails even for better starting points, because there is no sign change::

        >>> findroot(lambda x: x**2, (-1, .5), solver='anderson')
        Traceback (most recent call last):
          ...
        ValueError: Could not find root within given tolerance. (1.0 > 2.16840434497100886801e-19)
        Try another starting point or tweak arguments.

    """
    prec = ctx.prec
    try:
        ctx.prec += 10

        # initialize arguments
        if tol is None:
            tol = ctx.eps * 2**10

        kwargs['verbose'] = kwargs.get('verbose', verbose)

        if 'd1f' in kwargs:
            kwargs['df'] = kwargs['d1f']

        kwargs['tol'] = tol
        if isinstance(x0, (list, tuple)):
            x0 = [ctx.convert(x) for x in x0]
        else:
            x0 = [ctx.convert(x0)]

        if isinstance(solver, str):
            try:
                solver = str2solver[solver]
            except KeyError:
                raise ValueError('could not recognize solver')

        # accept list of functions
        if isinstance(f, (list, tuple)):
            f2 = copy(f)
            def tmp(*args):
                return [fn(*args) for fn in f2]
            f = tmp

        # detect multidimensional functions
        try:
            fx = f(*x0)
            multidimensional = isinstance(fx, (list, tuple, ctx.matrix))
        except TypeError:
            fx = f(x0[0])
            multidimensional = False
        if 'multidimensional' in kwargs:
            multidimensional = kwargs['multidimensional']
        if multidimensional:
            # only one multidimensional solver available at the moment
            solver = MDNewton
            if not 'norm' in kwargs:
                norm = lambda x: ctx.norm(x, 'inf')
                kwargs['norm'] = norm
            else:
                norm = kwargs['norm']
        else:
            norm = abs

        # happily return starting point if it's a root
        if norm(fx) == 0:
            if multidimensional:
                return ctx.matrix(x0)
            else:
                return x0[0]

        # use solver
        iterations = solver(ctx, f, x0, **kwargs)
        if 'maxsteps' in kwargs:
            maxsteps = kwargs['maxsteps']
        else:
            maxsteps = iterations.maxsteps
        i = 0
        for x, error in iterations:
            if verbose:
                print('x:    ', x)
                print('error:', error)
            i += 1
            if error < tol * max(1, norm(x)) or i >= maxsteps:
                break
        if i == 0:
            raise ValueError('Could not find root using the given solver.\n'
                             'Try another starting point or tweak arguments.')
        if not isinstance(x, (list, tuple, ctx.matrix)):
            xl = [x]
        else:
            xl = x
        if verify and norm(f(*xl))**2 > tol:
            raise ValueError('Could not find root within given tolerance. '
                             '(%s > %s)\n'
                             'Try another starting point or tweak arguments.'
                             % (norm(f(*xl))**2, tol))
        return x
    finally:
        ctx.prec = prec


def multiplicity(ctx, f, root, tol=None, maxsteps=10, **kwargs):
    """
    Return the multiplicity of a given root of f.

    Internally, numerical derivatives are used. This might be inefficient for
    higher order derivatives. Due to this, ``multiplicity`` cancels after
    evaluating 10 derivatives by default. You can specify the n-th derivative
    using the dnf keyword.

    >>> from mpmath import *
    >>> multiplicity(lambda x: sin(x) - 1, pi/2)
    2

    """
    if tol is None:
        tol = ctx.eps ** 0.8
    kwargs['d0f'] = f
    for i in xrange(maxsteps):
        dfstr = 'd' + str(i) + 'f'
        if dfstr in kwargs:
            df = kwargs[dfstr]
        else:
            df = lambda x: ctx.diff(f, x, i)
        if not abs(df(root)) < tol:
            break
    return i


def steffensen(f):
    """
    linear convergent function -> quadratic convergent function

    Steffensen's method for quadratic convergence of a linear converging
    sequence.
    Do not use it for higher rates of convergence.
    It may even work for divergent sequences.

    Definition:
    F(x) = (x*f(f(x)) - f(x)**2) / (f(f(x)) - 2*f(x) + x)

    Example
    .......

    You can use Steffensen's method to accelerate a fixpoint iteration of
    linear (or less) convergence.

    x* is a fixpoint of the iteration x_{k+1} = phi(x_k) if x* = phi(x*). For
    phi(x) = x**2 there are two fixpoints: 0 and 1.

    Let's try Steffensen's method:

    >>> f = lambda x: x**2
    >>> from mpmath.calculus.optimization import steffensen
    >>> F = steffensen(f)
    >>> for x in [0.5, 0.9, 2.0]:
    ...     fx = Fx = x
    ...     for i in xrange(9):
    ...         try:
    ...             fx = f(fx)
    ...         except OverflowError:
    ...             pass
    ...         try:
    ...             Fx = F(Fx)
    ...         except ZeroDivisionError:
    ...             pass
    ...         print('%20g  %20g' % (fx, Fx))
                    0.25                  -0.5
                  0.0625                   0.1
              0.00390625            -0.0011236
             1.52588e-05           1.41691e-09
             2.32831e-10          -2.84465e-27
             5.42101e-20           2.30189e-80
             2.93874e-39          -1.2197e-239
             8.63617e-78                     0
            7.45834e-155                     0
                    0.81               1.02676
                  0.6561               1.00134
                0.430467                     1
                0.185302                     1
               0.0343368                     1
              0.00117902                     1
             1.39008e-06                     1
             1.93233e-12                     1
             3.73392e-24                     1
                       4                   1.6
                      16                1.2962
                     256               1.10194
                   65536               1.01659
             4.29497e+09               1.00053
             1.84467e+19                     1
             3.40282e+38                     1
             1.15792e+77                     1
            1.34078e+154                     1

    Unmodified, the iteration converges only towards 0. Modified it converges
    not only much faster, it converges even to the repelling fixpoint 1.
    """
    def F(x):
        fx = f(x)
        ffx = f(fx)
        return (x*ffx - fx**2) / (ffx - 2*fx + x)
    return F

OptimizationMethods.jacobian = jacobian
OptimizationMethods.findroot = findroot
OptimizationMethods.multiplicity = multiplicity

if __name__ == '__main__':
    import doctest
    doctest.testmod()
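To illustrate the acceleration that ``steffensen`` provides, here is a minimal standalone sketch of the same transform, written with plain floats instead of mpmath numbers. The iteration ``phi`` below is a hypothetical example (not from the source): a linearly convergent fixpoint iteration for `sqrt(2)`, which the transform turns into a quadratically convergent one.

```python
# Standalone sketch of the Steffensen transform: F converges quadratically
# where the underlying fixpoint iteration phi converges only linearly.
def steffensen_sketch(phi):
    def F(x):
        px = phi(x)
        ppx = phi(px)
        denom = ppx - 2*px + x
        if denom == 0:           # fixpoint reached (to machine precision)
            return px
        return (x*ppx - px**2) / denom
    return F

# Hypothetical linearly convergent iteration for sqrt(2):
# x_{k+1} = x_k - (x_k**2 - 2)/4, contraction rate ~0.29 near the root.
phi = lambda x: x - (x*x - 2.0)/4.0
F = steffensen_sketch(phi)

x = 1.0
for _ in range(4):               # a few accelerated steps suffice
    x = F(x)
```

Note that iterating far past convergence is numerically delicate: the denominator becomes a difference of nearly equal quantities, so in floating point one should stop once successive iterates agree.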
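For comparison with the context-based solver classes above, the core secant update can also be sketched as a small standalone routine on plain floats. The function name, default offset of 0.25, and the break conditions mirror the ``Secant`` class; everything else (the tolerance and step count) is an assumption for this sketch.

```python
# Minimal secant-method sketch mirroring the update in the Secant solver:
# s is the finite-difference slope, x1 - f1/s the secant step.
def secant_sketch(f, x0, x1=None, tol=1e-12, maxsteps=30):
    if x1 is None:
        x1 = x0 + 0.25           # same default second point as Secant
    f0 = f(x0)
    for _ in range(maxsteps):
        f1 = f(x1)
        l = x1 - x0              # step length
        if not l:
            break
        s = (f1 - f0) / l        # finite-difference slope
        if not s:
            break
        x0, x1 = x1, x1 - f1/s
        f0 = f1
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant_sketch(lambda x: x*x - 2.0, 1.0)   # approximates sqrt(2)
```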