
Numerical methods for solving nonlinear equations. The chord method as an example of solving an equation


Iteration method

The method of simple iterations for the equation f(x) = 0 is as follows:

1) the initial equation is transformed into a form convenient for iteration:

x = φ(x). (2.2)

2) an initial approximation x_0 is chosen, and the subsequent approximations are calculated by the iterative formula

x_k = φ(x_{k-1}), k = 1, 2, ... (2.3)

If the iterative sequence has a limit ξ, then it is the root of the equation f(x) = 0, i.e. f(ξ) = 0.

Fig. 2. The process of iterations

Fig. 2 shows the process of obtaining the next approximation by the iteration method. The sequence of approximations converges to the root ξ.

The theoretical foundation for applying the iteration method is given by the following theorem.

Theorem 2.3. Let the following conditions be fulfilled:

1) the root of the equation x = φ(x) belongs to the segment [a, b];

2) all values of the function φ(x) belong to the segment [a, b], i.e. a ≤ φ(x) ≤ b;

3) there is a positive number q < 1 such that the derivative φ'(x) at all points of the segment [a, b] satisfies the inequality |φ'(x)| ≤ q.

Then:

1) the iterative sequence x_n = φ(x_{n-1}) (n = 1, 2, 3, ...) converges for any x_0 ∈ [a, b];

2) the limit of the iterative sequence is the root of the equation x = φ(x), i.e. if x_k → ξ, then ξ = φ(ξ);

3) the following inequality, characterizing the rate of convergence of the iterative sequence, holds:

|ξ - x_k| ≤ (b - a)·q^k. (2.4)

Obviously, this theorem imposes rather strict conditions, which must be checked before the iteration method is applied. If the modulus of the derivative of the function φ(x) is greater than one, the iteration process diverges (Fig. 3).

Fig. 3. Divergent process of iterations

In practice, the following inequality is commonly used as the convergence criterion of iterative methods:

|x_k - x_{k-1}| < ε. (2.5)
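As an illustration, the iteration scheme (2.3) with the termination criterion (2.5) can be sketched in Python; the test equation x = cos x and all names here are our own choices, not taken from the text above:

```python
import math

def simple_iteration(phi, x0, eps=1e-10, max_iter=1000):
    """Compute x_k = phi(x_{k-1}) until |x_k - x_{k-1}| < eps (criterion (2.5))."""
    x_prev = x0
    for _ in range(max_iter):
        x = phi(x_prev)
        if abs(x - x_prev) < eps:
            return x
        x_prev = x
    raise RuntimeError("iterations did not converge")

# Sample equation x = cos(x): here |phi'(x)| = |sin x| < 1 near the root,
# so the sequence converges by Theorem 2.3.
root = simple_iteration(math.cos, 0.5)
```

After convergence, root satisfies root ≈ cos(root) to within the tolerance.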

The chord method consists in replacing the curve y = f(x) by the segment of the straight line passing through the points (a, f(a)) and (b, f(b)) (Fig. 4). The abscissa of the intersection point of this line with the Ox axis is taken as the next approximation.

To obtain the computational formula of the chord method, we write the equation of the straight line passing through the points (a, f(a)) and (b, f(b)):

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a),

and, equating y to zero, find x:

x = a - f(a)·(b - a) / (f(b) - f(a)).

Algorithm of the chord method:

1) let k = 0;

2) calculate the next iteration number: k = k + 1.

Find the next, k-th, approximation by the formula:

x_k = a - f(a)·(b - a) / (f(b) - f(a)).

Calculate f(x_k);

3) if f(x_k) = 0 (the root has been found), go to step 5.

If f(x_k)·f(b) > 0, then set b = x_k, otherwise set a = x_k;

4) if |x_k - x_{k-1}| > ε, go to step 2;

5) display the root value x_k.

Comment. The actions of step 3 are similar to those of the half-division (bisection) method. However, in the chord method the same end of the segment (right or left) may move at every step if the graph of the function in the vicinity of the root is convex upward (Fig. 4, a) or convex downward (Fig. 4, b). Therefore, the difference of adjacent approximations is used in the convergence criterion.

Fig. 4. The chord method
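Steps 1-5 above can be sketched in Python; the sample function and interval are our own illustration, not taken from the text:

```python
def chord_method(f, a, b, eps=1e-10, max_iter=1000):
    """Chord method following steps 1-5 above; assumes f(a)*f(b) < 0."""
    x_prev = a
    for _ in range(max_iter):
        # step 2: next approximation by the chord formula
        x_k = a - f(a) * (b - a) / (f(b) - f(a))
        if f(x_k) == 0.0:             # step 3: exact root found
            return x_k
        if f(x_k) * f(b) > 0:         # step 3: move the end with the same sign
            b = x_k
        else:
            a = x_k
        if abs(x_k - x_prev) <= eps:  # step 4: convergence criterion
            return x_k
        x_prev = x_k
    return x_k

# Sample equation (ours): x^3 - x - 2 = 0 on [1, 2], root near 1.5213797
root = chord_method(lambda x: x**3 - x - 2, 1.0, 2.0)
```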

4. Newton method (tangents)

Suppose that an approximate value of the root of the equation f(x) = 0 has been found; denote it x_n. The computational formula of the Newton method for determining the next approximation x_{n+1} can be obtained in two ways.

The first way expresses the geometric meaning of the Newton method: instead of the intersection point of the graph of y = f(x) with the Ox axis, we look for the intersection point with the Ox axis of the tangent drawn to the graph of the function at the point (x_n, f(x_n)), as shown in Fig. 5. The equation of the tangent has the form y - f(x_n) = f'(x_n)(x - x_n).

Fig. 5. Newton method (tangents)

At the intersection point of the tangent with the Ox axis, y = 0. Equating y to zero and expressing x, we obtain the formula of the tangent method:

x_{n+1} = x_n - f(x_n) / f'(x_n). (2.6)

The second way: expand the function f(x) in a Taylor series in the neighborhood of the point x = x_n:

f(x) = f(x_n) + f'(x_n)(x - x_n) + (f''(x_n)/2)·(x - x_n)² + ...

Restricting ourselves to the terms linear in (x - x_n), equating f(x) to zero and expressing the unknown x from the resulting equation, denoting it x_{n+1}, we obtain formula (2.6).

We now give sufficient conditions for the convergence of the Newton method.

Theorem 2.4. Let the following conditions be fulfilled on the segment [a, b]:

1) the function f(x) and its derivatives f'(x) and f''(x) are continuous;

2) the derivatives f'(x) and f''(x) are different from zero and retain constant signs;

3) f(a)·f(b) < 0 (the function f(x) changes sign on the segment).

Then there is a segment [α, β] containing the desired root of the equation f(x) = 0, on which the iterative sequence (2.6) converges. If the boundary point of [α, β] at which the sign of the function coincides with the sign of the second derivative is chosen as the zero approximation x_0,

i.e. f(x_0)·f''(x_0) > 0, then the iterative sequence converges monotonically.

Comment. Note that the chord method approaches the root from the opposite side, so the two methods can complement each other. A combined chord-tangent method is also possible.
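Formula (2.6) can be sketched in Python; the test equation x² - 2 = 0 is our own illustration, not from the text:

```python
def newton(f, df, x0, eps=1e-12, max_iter=100):
    """Newton (tangent) method, formula (2.6): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    raise RuntimeError("iterations did not converge")

# Root of x^2 - 2 = 0 starting from x0 = 2, where f(x0)*f''(x0) > 0
# as Theorem 2.4 recommends; the iterations converge to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0)
```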

5. Secant method

The secant method can be obtained from the Newton method by replacing the derivative with an approximate expression, the difference formula:

f'(x_n) ≈ (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}),

which gives

x_{n+1} = x_n - f(x_n)·(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})). (2.7)

In formula (2.7), the two previous approximations x_n and x_{n-1} are used. Therefore, for a given initial approximation x_0, the next approximation x_1 must first be calculated, for example, by the Newton method with an approximate replacement of the derivative according to the formula

f'(x_0) ≈ (f(x_0 + δ) - f(x_0)) / δ, where δ is a small step.

Algorithm of the secant method:

1) the initial value x_0 and the error ε are given. Calculate

x_1 = x_0 - f(x_0)·δ / (f(x_0 + δ) - f(x_0));

2) for n = 1, 2, ..., while the condition |x_n - x_{n-1}| > ε is satisfied, calculate x_{n+1} by formula (2.7).
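The algorithm above can be sketched in Python; the step δ and the test equation are our own assumptions:

```python
def secant(f, x0, delta=1e-6, eps=1e-12, max_iter=100):
    """Secant method, formula (2.7). The second starting value x1 is obtained
    from x0 by a Newton step with a finite-difference derivative."""
    x1 = x0 - f(x0) * delta / (f(x0 + delta) - f(x0))
    for _ in range(max_iter):
        denom = f(x1) - f(x0)
        if denom == 0.0:        # chord is horizontal; cannot proceed further
            return x1
        x2 = x1 - f(x1) * (x1 - x0) / denom   # formula (2.7)
        if abs(x2 - x1) < eps:
            return x2
        x0, x1 = x1, x2
    return x1

# Root of x^2 - 2 = 0 (our sample) starting from x0 = 2
root = secant(lambda x: x * x - 2.0, 2.0)
```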

3. Chord method

Let an equation f(x) = 0 be given, where f(x) is a continuous function having derivatives of the first and second orders in the interval (a, b). The root is considered separated and lies on the segment [a, b].

The idea of the chord method is that on a sufficiently small interval the arc of the curve y = f(x) can be replaced by a chord, and the intersection point of the chord with the abscissa axis can be taken as the approximate value of the root. Consider the case (Fig. 1) when the first and second derivatives have the same signs, i.e. f'(x)·f''(x) > 0. Then the equation of the chord passing through the points A0 and B has the form

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a).

The approximation to the root x = x1, for which y = 0, is defined as

x1 = a - f(a)·(b - a) / (f(b) - f(a)).

Similarly, for the chord passing through the points A1 and B, the next approximation to the root is calculated:

x2 = x1 - f(x1)·(b - x1) / (f(b) - f(x1)).

In the general case, the formula of the chord method has the form:

x_{n+1} = x_n - f(x_n)·(b - x_n) / (f(b) - f(x_n)), x_0 = a. (2)

If the first and second derivatives have different signs, i.e.

f'(x)·f''(x) < 0,

then all approximations to the root x* are made from the right border of the segment, as shown in Fig. 2, and are calculated by the formula:

x_{n+1} = x_n - f(x_n)·(x_n - a) / (f(x_n) - f(a)), x_0 = b. (3)

The choice of the formula in each particular case depends on the form of the function f(x) and follows the rule: the fixed end of the root isolation segment is the one for which the sign of the function coincides with the sign of the second derivative. Formula (2) is used when f(b)·f''(b) > 0. If the inequality f(a)·f''(a) > 0 holds, then it is advisable to use formula (3).


Fig. 1. Fig. 2.

Fig. 3. Fig. 4.

The iterative process of the chord method continues until an approximate root with the given degree of accuracy is obtained. When estimating the error, one can use the relation

|x* - x_n| ≈ |x_n - x_{n-1}|.

Then the termination condition is written in the form

|x_n - x_{n-1}| < ε, (4)

where ε is the given calculation error. It should be noted that when finding the root, the chord method often provides faster convergence than the half-division method.
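The rule for choosing between formulas (2) and (3) can be sketched in Python; the caller supplies the sign of f'' (an assumption of this sketch), and the test equation is our own:

```python
def chord_fixed_end(f, a, b, f2_sign, eps=1e-10, max_iter=1000):
    """Chord method with the fixed end chosen by the rule above:
    formula (2) when f(b) and f'' have the same sign, otherwise formula (3)."""
    if f(b) * f2_sign > 0:
        x = a                                   # formula (2): b is fixed
        step = lambda t: t - f(t) * (b - t) / (f(b) - f(t))
    else:
        x = b                                   # formula (3): a is fixed
        step = lambda t: t - f(t) * (t - a) / (f(t) - f(a))
    for _ in range(max_iter):
        x_new = step(x)
        if abs(x_new - x) < eps:                # condition (4)
            return x_new
        x = x_new
    return x

# Our sample: x^2 - 2 = 0 on [1, 2]; f'' = 2 > 0 and f(2) > 0, so formula (2).
root = chord_fixed_end(lambda x: x * x - 2.0, 1.0, 2.0, f2_sign=1)
```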

4. Newton method (tangent)

Let equation (1) have a root on the segment [a, b], and let f'(x) and f''(x) be continuous and retain constant signs throughout the interval.

The geometric meaning of the Newton method is that the arc of the curve y = f(x) is replaced by a tangent. Some initial approximation of the root x0 is selected on the interval, and the tangent to the curve y = f(x) at the point C0(x0, f(x0)) is drawn to its intersection with the abscissa axis (Fig. 3). The equation of the tangent at the point C0 is

y = f(x0) + f'(x0)·(x - x0).

The tangent is then drawn through the new point C1(x1, f(x1)), and the point x2 of its intersection with the 0x axis is determined, and so on. In the general case, the formula of the tangent method has the form:

x_{i+1} = x_i - f(x_i) / f'(x_i). (5)

As a result of the calculations, a sequence of approximate values x1, x2, ..., xi, ... is obtained, each subsequent member of which is closer to the root x* than the previous one. The iterative process usually stops when condition (4) is fulfilled.

The initial approximation x0 must satisfy the condition:

f(x0)·f''(x0) > 0. (6)

Otherwise, the convergence of the Newton method is not guaranteed, since the tangent may cross the abscissa axis at a point that does not belong to the segment. In practice, one of the boundaries of the interval is usually selected as the initial approximation x0, i.e. x0 = a or x0 = b, whichever has the sign of the function coinciding with the sign of the second derivative.

The Newton method provides a high speed of convergence when solving equations for which the modulus of the derivative |f'(x)| near the root is sufficiently large, i.e. the graph of the function y = f(x) in the neighborhood of the root has a large steepness. If the curve y = f(x) is almost horizontal in the interval, it is not recommended to apply the tangent method.

An essential disadvantage of the considered method is the need to calculate derivatives to organize the iterative process. If the value of f'(x) changes little on the interval, then to simplify the calculations one can use the formula

x_{i+1} = x_i - f(x_i) / f'(x0), (7)

i.e. it is enough to calculate the value of the derivative only once, at the starting point. Geometrically, this means that the tangents at the points Ci(xi, f(xi)), where i = 1, 2, ..., are replaced by straight lines parallel to the tangent drawn to the curve y = f(x) at the starting point C0(x0, f(x0)), as shown in Fig. 4.

In conclusion, note that all of the above holds when the initial approximation x0 is chosen close enough to the true root x* of the equation. However, this is not always feasible. Therefore, the Newton method is often used at the final stage of solving an equation, after running a reliably converging algorithm, for example the half-division method.
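Formula (7) can be sketched in Python; the test equation x² - 2 = 0 is our own illustration:

```python
def simplified_newton(f, df, x0, eps=1e-10, max_iter=10000):
    """Modified Newton method, formula (7): the derivative is computed
    only once, at the starting point x0."""
    d = df(x0)
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / d
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Root of x^2 - 2 = 0 (our sample); convergence is only linear here,
# but each step is cheaper than a full Newton step.
root = simplified_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 2.0)
```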

5. Method of simple iteration

To apply this method to solve equation (1), it must be transformed to the form x = φ(x). Then the initial approximation x0 is selected, and x1 is calculated, then x2, etc.:

x1 = φ(x0); x2 = φ(x1); ...; xk = φ(xk-1); ...


The resulting sequence converges to the root when the following conditions are fulfilled:

1) the function φ(x) is differentiable on the interval;

2) at all points of this interval, φ'(x) satisfies the inequality

|φ'(x)| ≤ q, where 0 ≤ q < 1. (8)

Under such conditions, the rate of convergence is linear, and the iterations should be performed until the following condition becomes true:

|xk - xk-1| ≤ (1 - q)·ε / q.

A criterion of the form

|xk - xk-1| ≤ ε

can be used only for 0 ≤ q ≤ ½; otherwise the iterations end prematurely, without ensuring the given accuracy. If the calculation of q is difficult, one can use, for example, the residual termination criterion

|f(xk)| ≤ ε.

Equation (1) can be transformed to the form x = φ(x) in various ways. One should choose a representation that satisfies condition (8) and thus generates a convergent iterative process, such as, for example, the one shown in Fig. 5, 6. Otherwise, in particular when |φ'(x)| > 1, the iterative process diverges and does not allow a solution to be obtained (Fig. 7).

Fig. 5

Fig. 6

Fig. 7
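The corrected stopping rule with the factor (1 - q)/q can be sketched in Python; the equation x = cos x and the estimate of q are our own illustration:

```python
import math

def iterate_until(phi, x0, q, eps=1e-8, max_iter=10000):
    """Simple iteration stopped by |x_k - x_{k-1}| <= eps*(1 - q)/q,
    which guarantees an error of at most eps for any q < 1."""
    threshold = eps * (1 - q) / q
    x_prev = x0
    for _ in range(max_iter):
        x = phi(x_prev)
        if abs(x - x_prev) <= threshold:
            return x
        x_prev = x
    return x_prev

# phi(x) = cos x on [0.6, 0.9]: q = max|sin x| = sin(0.9) ~ 0.78 > 1/2,
# so the plain criterion |x_k - x_{k-1}| <= eps would stop too early.
root = iterate_until(math.cos, 0.75, q=math.sin(0.9))
```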

Conclusion

The problem of improving the quality of calculations for nonlinear equations by various methods, as a mismatch between the desired and the actual, exists and will continue to exist. Its solution will be aided by the development of information technologies, which consists in improving the methods of organizing information processes and implementing them using specific tools, such as environments and programming languages.






Let the function f(x) be continuous on the segment [a, b], take values of different signs at the ends of the segment, and let the derivative f'(x) retain its sign. Depending on the sign of the second derivative, the following cases of the arrangement of the curve are possible (Fig. 1).


Fig. 1.

Algorithm for the approximate calculation of a root by the chord method.

Initial data: f(x) is the function; ε is the required accuracy; x0 is the initial approximation.

Result: xpr is the approximate root of the equation f(x) = 0.

Solution method:


Fig. 2. The case f'(x)·f''(x) > 0.

Consider the case when f'(x) and f''(x) have the same signs (Fig. 2).

The graph of the function passes through the points A0(a, f(a)) and B0(b, f(b)). The desired root of the equation (the point x*) is unknown to us; instead, we take the point x1 where the chord A0B0 crosses the abscissa axis. This will be the approximate value of the root.

In analytic geometry, a formula is derived that defines the equation of a straight line passing through two points with coordinates (x1; y1) and (x2; y2):

(y - y1) / (y2 - y1) = (x - x1) / (x2 - x1).

Then the equation of the chord A0B0 is written in the form

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a).

Find the value x = x1 for which y = 0:

x1 = a - f(a)·(b - a) / (f(b) - f(a)).

Now the root is on the segment [x1, b]. Apply the chord method to this segment. Draw the chord connecting the points A1(x1, f(x1)) and B0(b, f(b)), and find x2, the point of intersection of the chord A1B0 with the Ox axis:

x2 = x1 - f(x1)·(b - x1) / (f(b) - f(x1)).

Continuing this process, we find

x3 = x2 - f(x2)·(b - x2) / (f(b) - f(x2)).

We obtain a recurrence formula for calculating the approximations to the root:

x_{n+1} = x_n - f(x_n)·(b - x_n) / (f(b) - f(x_n)).

In this case, the end b of the segment remains fixed and the end a moves.

Thus, we obtain the computational formulas of the chord method:

x_{n+1} = x_n - f(x_n)·(b - x_n) / (f(b) - f(x_n)); x_0 = a. (4)

The calculation of successive approximations to the exact root of the equation continues until the given accuracy is reached, i.e. the condition |x_{n+1} - x_n| < ε must be fulfilled, where ε is the specified accuracy.

Now consider the case when the first and second derivatives have different signs, i.e. f'(x)·f''(x) < 0 (Fig. 3).

Fig. 3. Geometric interpretation of the method for the case f'(x)·f''(x) < 0.

Connect the points A0(a, f(a)) and B0(b, f(b)) by the chord A0B0. We will consider the point of intersection of the chord with the Ox axis the first approximation to the root. In this case, the fixed end of the segment will be the end a.

The equation of the chord A0B0:

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a).

From here we find x1, setting y = 0:

x1 = b - f(b)·(b - a) / (f(b) - f(a)).

Now the root is on the segment [a, x1]. Applying the chord method to this segment, we get

x2 = x1 - f(x1)·(x1 - a) / (f(x1) - f(a)).

Continuing in the same way, we get

x_{n+1} = x_n - f(x_n)·(x_n - a) / (f(x_n) - f(a)).

The computational formulas of the method:

x_{n+1} = x_n - f(x_n)·(x_n - a) / (f(x_n) - f(a)); x_0 = b. (5)

The condition for the end of the calculations: |x_{n+1} - x_n| < ε. Then xpr = x_{n+1} with accuracy ε. So, if f'(x)·f''(x) > 0, the approximate value of the root is found by formula (4); if f'(x)·f''(x) < 0, then by formula (5).

The practical choice of one or the other formula is made using the following rule: the fixed end of the segment is the one for which the sign of the function coincides with the sign of the second derivative.

Example. Let us illustrate the action of this rule on the equation

(x - 1)·ln(x) - 1 = 0

for a given root isolation segment.

Solution. Here f(x) = (x - 1)·ln(x) - 1;

f'(x) = ln(x) + (x - 1)/x;

f''(x) = 1/x + 1/x².

The second derivative in this example is positive on the root isolation segment: f''(x) > 0, and f(3) > 0, i.e. f(b)·f''(b) > 0. Thus, when solving this equation, formula (4) of the chord method is chosen to refine the root.

var
  e, a, b, x, ya, yb, f1, f2, x1, x2: real;

function f(x: real): real;
begin
  f := (x - 1) * ln(x) - 1
end;

begin
  e := 0.0001;
  writeln('Vvedi nachalo otrezka'); readln(a);
  writeln('Vvedi konec otrezka'); readln(b);
  ya := f(a); yb := f(b);
  x := (a + b) / 2;
  f1 := ln(x) + (x - 1) / x;  { f'(x) at the midpoint }
  f2 := 1 / x + 1 / (x * x);  { f''(x) at the midpoint }
  if (ya * yb < 0) and (f1 * f2 > 0) then
  begin { formula (4): the end b is fixed }
    x1 := a;
    x2 := x1 - f(x1) * (b - x1) / (yb - f(x1));
    while abs(x2 - x1) > e do
    begin
      x1 := x2;
      x2 := x1 - f(x1) * (b - x1) / (yb - f(x1))
    end
  end
  else
  begin { formula (5): the end a is fixed }
    x1 := b;
    x2 := x1 - f(x1) * (x1 - a) / (f(x1) - ya);
    while abs(x2 - x1) > e do
    begin
      x1 := x2;
      x2 := x1 - f(x1) * (x1 - a) / (f(x1) - ya)
    end
  end;
  writeln('Koren uravneniya xn = ', x2)
end.

Method of simple iterations

Consider the equation f(x) = 0 (1) with a separated root x ∈ [a, b]. To solve equation (1) by the method of simple iteration, we reduce it to the equivalent form: x = φ(x). (2)

This can always be done, and in many ways. For example:

x = g(x)·f(x) + x ≡ φ(x),

where g(x) is an arbitrary continuous function that has no roots on the segment [a, b].

Let x(0) be an approximation to the root x obtained in any way (in the simplest case, x(0) = (a + b)/2). The method of simple iteration consists in the successive calculation of the members of the iterative sequence

x(k+1) = φ(x(k)), k = 0, 1, 2, ..., (3)

starting from the approximation x(0).

Statement 1. If the sequence {x(k)} of the simple iteration method converges and the function φ is continuous, then the limit of the sequence is the root of the equation x = φ(x).

Proof. Let

lim(k→∞) x(k) = x*. (4)

Passing to the limit in the equality x(k+1) = φ(x(k)), we obtain, on the one hand, by (4), that the left side tends to x*, and on the other hand, due to the continuity of the function φ and (4), that the right side tends to φ(x*).

As a result, we get x* = φ(x*). Hence, x* is the root of equation (2), i.e. x = x*.

To use this statement, the convergence of the sequence {x(k)} is needed. A sufficient condition of convergence is given by the following theorem.

Theorem 1 (on convergence). Let the equation x = φ(x) have a unique root on the segment [a, b], and let the conditions be fulfilled:

1) φ(x) is continuously differentiable on [a, b];

2) φ(x) ∈ [a, b] for all x ∈ [a, b];

3) there is a constant q, 0 < q < 1, such that |φ'(x)| ≤ q for all x ∈ [a, b].

Then the iterative sequence {x(k)}, given by the formula x(k+1) = φ(x(k)), k = 0, 1, ..., converges for any initial approximation x(0) ∈ [a, b].

Proof. Consider two neighboring members of the sequence {x(k)}: x(k) = φ(x(k-1)) and x(k+1) = φ(x(k)). Since, by condition 2), x(k) and x(k+1) lie inside the segment [a, b], using the Lagrange mean value theorem we get:

x(k+1) - x(k) = φ(x(k)) - φ(x(k-1)) = φ'(c(k))·(x(k) - x(k-1)), where c(k) ∈ (x(k-1), x(k)).

From here we get:

|x(k+1) - x(k)| = |φ'(c(k))|·|x(k) - x(k-1)| ≤ q·|x(k) - x(k-1)| ≤ q·(q·|x(k-1) - x(k-2)|) = q²·|x(k-1) - x(k-2)| ≤ ... ≤ q^k·|x(1) - x(0)|. (5)

Consider the series

S = x(0) + (x(1) - x(0)) + ... + (x(k+1) - x(k)) + ... . (6)

If we prove that this series converges, then the sequence of its partial sums

S(k) = x(0) + (x(1) - x(0)) + ... + (x(k) - x(k-1))

converges. But it is easy to see that

S(k) = x(k). (7)

Therefore, we thereby also prove the convergence of the iterative sequence {x(k)}.

To prove the convergence of the series (6), compare it termwise (without the first term x(0)) with the series

q⁰·|x(1) - x(0)| + q¹·|x(1) - x(0)| + ... + q^k·|x(1) - x(0)| + ..., (8)

which converges as an infinitely decreasing geometric progression (since, by condition, q < 1). By virtue of inequality (5), the absolute values of the terms of the series (6) do not exceed the corresponding terms of the convergent series (8) (i.e., the series (8) majorizes the series (6)). Consequently, the series (6) also converges, and with it the sequence {x(k)}.

We now obtain a formula that gives a way of estimating the error |x - x(k+1)| of the method of simple iteration. We have

x - x(k+1) = S - S(k+1) = (x(k+2) - x(k+1)) + (x(k+3) - x(k+2)) + ... .

Hence

|x - x(k+1)| ≤ |x(k+2) - x(k+1)| + |x(k+3) - x(k+2)| + ... ≤ q^(k+1)·|x(1) - x(0)| + q^(k+2)·|x(1) - x(0)| + ... = q^(k+1)·|x(1) - x(0)| / (1 - q).

As a result, we get the formula

|x - x(k+1)| ≤ q^(k+1)·|x(1) - x(0)| / (1 - q). (9)

Taking the value x(k) instead of x(0), and the value x(k+1) instead of x(1) (since, when the conditions of the theorem are fulfilled, this choice is possible), and considering that the inequality q^(k+1) ≤ q holds, we deduce:

|x - x(k+1)| ≤ q^(k+1)·|x(k+1) - x(k)| / (1 - q) ≤ q·|x(k+1) - x(k)| / (1 - q).

So, we finally get:

|x - x(k+1)| ≤ q·|x(k+1) - x(k)| / (1 - q). (10)

Let us use this formula to derive a termination criterion for the iterative sequence. Let the equation x = φ(x) be solved by the method of simple iteration, and let the answer have to be found with accuracy ε, i.e.

|x - x(k+1)| ≤ ε.

Taking (10) into account, we get that the accuracy ε will be achieved if the inequality

|x(k+1) - x(k)| ≤ ε·(1 - q) / q (11)

is fulfilled. Thus, to find the root of the equation x = φ(x) by the method of simple iteration with accuracy ε, the iterations must be continued as long as the modulus of the difference between the latest neighboring approximations remains greater than the number ε·(1 - q)/q.
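The a posteriori estimate (10) can also be checked numerically; the mapping φ(x) = cos x and the interval are our own example:

```python
import math

# Check estimate (10): |x - x(k+1)| <= q*|x(k+1) - x(k)|/(1 - q)
# for phi(x) = cos x, whose iterates from 0.75 stay inside [0.6, 0.9].
phi = math.cos
q = math.sin(0.9)              # upper bound for |phi'(x)| on [0.6, 0.9]
x_star = 0.7390851332151607    # fixed point of cos x (the Dottie number)
x_prev, x = 0.75, math.cos(0.75)
holds = True
for _ in range(20):
    x_prev, x = x, phi(x)
    lhs = abs(x_star - x)
    rhs = q * abs(x - x_prev) / (1 - q)
    holds = holds and lhs <= rhs + 1e-15
```

At every step the left-hand side stays below the right-hand side, as estimate (10) predicts.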

Note 1. As the constant q, one usually takes an upper estimate for the magnitude max|φ'(x)| on the segment under consideration.

Geometric interpretation

Consider the graph of the function y = φ(x). The solution of the equation x = φ(x) is the abscissa of the intersection point of this graph with the straight line y = x:


Figure 1.

The next iteration x(k+1) is the abscissa of the intersection point of the horizontal line through the point (x(k), φ(x(k))) with the straight line y = x:


Figure 2.

From the figure, the convergence requirement |φ'(x)| < 1 is clearly visible: the closer the derivative is to 0, the faster the algorithm converges. Depending on the sign of the derivative near the solution, the approximations can be built in different ways. If the derivative is negative near the root, each next approximation is built on the other side of the root:


Figure 3.

Conclusion

The problem of improving the quality of calculations, as a mismatch between the desired and the actual, exists and will continue to exist. Its solution will be promoted by the development of information technologies, which consists in improving the methods of organizing information processes and implementing them using specific tools, such as environments and programming languages.

The result of this work can be considered the created functional model of finding the roots of an equation by the methods of simple iteration, Newton, chords, and half-division. This model is applicable to deterministic problems, i.e. those for which the error of the experimental calculation can be neglected. The created functional model and its software implementation can serve as an organic part of solving more complex problems.

In carrying out research on the topic of the term paper "Numerical methods. Solution of nonlinear equations", I achieved the goals set in the introduction: the methods of root refinement were considered in detail, several examples were given for each definition and theorem, and all theorems were proved.

The use of various sources made it possible to fully disclose the topic.

The method under consideration, like the half-division method, is designed to refine the root on an interval [a, b] at whose ends the function f(x) takes values of different signs. The next approximation, in contrast to the half-division method, is taken not in the middle of the segment, but at the point where the straight line (chord) drawn through the points A and B intersects the abscissa axis (Fig. 2.6).

We write the equation of the straight line passing through the points A and B:

(y - f(a)) / (f(b) - f(a)) = (x - a) / (b - a).

For the intersection point of the line with the abscissa axis (y = 0) we get the equation

x = a - f(a)·(b - a) / (f(b) - f(a)). (2.13)

As the new interval to continue the iterative process, we choose, of the two segments into which the approximation point divides [a, b], the one at whose ends the function f(x) takes values of different signs (Fig. 2.6). The next iteration consists in determining a new approximation as the intersection point of the chord drawn on this new segment with the abscissa axis, and so on.

We finish the process of refining the root when the distance between successive approximations becomes less than the specified accuracy ε, i.e.

|x_{k+1} - x_k| < ε, (2.14)

or when condition (2.12) is fulfilled.

Comment. The half-division method and the chord method are very similar, in particular in the procedure for checking the signs of the function at the ends of the segment. The second of them in some cases gives faster convergence of the iterative process; however, in other cases the chord method can converge considerably more slowly than the half-division method. This situation is shown in Fig. 2.7. Both methods considered do not require additional information about the function f(x): for example, it is not required that the function be differentiable, and even for discontinuous functions they have guaranteed convergence. More complex root-finding methods use additional information about the function, first of all the property of differentiability. As a result, they usually converge faster, but they are applicable to a narrower class of functions, and their convergence is not always guaranteed. An example of such a method is the Newton method.

  1. Newton method (tangent method)

Let an initial approximation x_0 to the root be known (the selection of the initial approximation will be discussed in detail below). Draw at this point the tangent to the curve y = f(x) (Fig. 2.8). This tangent crosses the abscissa axis at a point which will be considered the next approximation x_1. The value of x_1 is easy to find from the figure:

f'(x_0) = f(x_0) / (x_0 - x_1);

expressing x_1 from here, we get

x_1 = x_0 - f(x_0) / f'(x_0).

Similarly, the following approximations can be found. The formula for the (k+1)-th approximation is

x_{k+1} = x_k - f(x_k) / f'(x_k). (2.15)

Formula (2.15) implies the applicability conditions of the method: the function f(x) must be differentiable, and f'(x) must not change sign in the neighborhood of the root.

To end the iterative process, conditions (2.12) or (2.14) can be used.

Remark 1. In the Newton method, in contrast to the previous methods, it is not necessary to specify a segment [a, b] containing the root of the equation; it is enough to find some initial approximation x_0 of the root.

Remark 2. The formula of the Newton method can be obtained from other considerations. Take some initial approximation x_0 of the root. Replace the function f(x) in the neighborhood of the point x_0 by a segment of its Taylor series:

f(x) ≈ f(x_0) + f'(x_0)·(x - x_0),

and instead of the nonlinear equation f(x) = 0 solve the linearized equation

f(x_0) + f'(x_0)·(x - x_0) = 0,

considering its solution as the next (first) approximation to the desired root value. The solution of this equation is obvious:

x_1 = x_0 - f(x_0) / f'(x_0).

Repeating this process leads to the Newton formula (2.15).

Convergence of the Newton method. Let us find out the main conditions for the convergence of the sequence of values x_k, calculated by formula (2.15), to the root of equation (2.1). Assuming that f(x) is twice continuously differentiable, expand f(x*) in a Taylor series in the neighborhood of the k-th approximation x_k (here x* denotes the root, so f(x*) = 0):

0 = f(x*) = f(x_k) + f'(x_k)·(x* - x_k) + (1/2)·f''(c_k)·(x* - x_k)²,

where c_k lies between x_k and x*. Dividing the last relation by f'(x_k) and moving part of the terms from the left side to the right, we get:

x* - [x_k - f(x_k) / f'(x_k)] = -(f''(c_k) / (2f'(x_k)))·(x* - x_k)².

Given that the expression in square brackets, according to (2.15), is equal to x_{k+1}, we rewrite this relation in the form

x* - x_{k+1} = -(f''(c_k) / (2f'(x_k)))·(x* - x_k)². (2.16)

From (2.16) the estimate follows:

|x* - x_{k+1}| ≤ C·|x* - x_k|², (2.17)

where C = M / (2m), M = max|f''(x)|, m = min|f'(x)| on the segment under consideration.

Obviously, the error decreases if

C·|x* - x_k| < 1. (2.18)

The resulting condition means that the convergence depends on the selection of the initial approximation.

Estimate (2.17) characterizes the rate of decrease of the error for the Newton method: at each step the error is proportional to the square of the error at the previous step. Consequently, the Newton method has quadratic convergence.
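The quadratic rate (2.17) can be observed numerically; the equation x² - 2 = 0 is our own example:

```python
# Observe quadratic convergence (2.17): the error of each Newton step
# is about C*(previous error)^2 with C = |f''(x*)| / (2|f'(x*)|).
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
x_star = 2.0 ** 0.5
x = 2.0
errors = []
for _ in range(4):
    x = x - f(x) / df(x)
    errors.append(abs(x - x_star))
# For this f, C = 2/(2*2*sqrt(2)) ~ 0.354; the ratios e_{k+1}/e_k^2
# should stay near that constant.
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(3)]
```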

Choice of the initial approximation in the Newton method. As follows from condition (2.18), the convergence of the iterative sequence obtained in the Newton method depends on the selection of the initial approximation x_0. This can also be seen from the geometric interpretation of the method: for an unsuccessful choice of the starting point (Fig. 2.9), the convergence of the iterative process cannot be counted on, while for another choice of the initial approximation a convergent sequence is obtained.

In general, if a segment [a, b] containing the root is given, and it is known that the function f(x) is monotone on this segment, then as the initial approximation x_0 one can choose the boundary of the segment [a, b] at which the signs of the function f(x) and of the second derivative f''(x) coincide. Such a selection of the initial approximation guarantees the convergence of the Newton method, provided the function is monotone on the segment of root localization.