Bisection Method in Numerical Methods (Complete Introduction)
What is the Bisection Method?
The Bisection Method is a numerical technique for finding a root of an equation f(x) = 0 within a given interval [a, b]. It is based on the Intermediate Value Theorem (IVT), which states that:
If a continuous function f(x) changes sign between two points a and b (i.e., f(a) * f(b) < 0), then there exists at least one root in the interval [a, b].
Steps to Apply the Bisection Method
Choose an interval [a, b] such that f(a) * f(b) < 0 (i.e., f(x) changes sign).
Find the midpoint (c):
c = \frac{a + b}{2}
Check the function value at c:
- If f(c) = 0, then c is the root.
- If f(a) * f(c) < 0, then the root lies in [a, c]. Update b = c.
- Else, the root lies in [c, b]. Update a = c.
Repeat steps 2 & 3 until the root is found with the required accuracy (ε tolerance level).
Example Problem
Find the root of f(x) = x³ – 4x – 9 using the bisection method in the interval [2, 3].
Step 1: Check the function values at endpoints
f(2) = (2)^3 - 4(2) - 9 = 8 - 8 - 9 = -9
f(3) = (3)^3 - 4(3) - 9 = 27 - 12 - 9 = 6
Since f(2) * f(3) < 0, a root exists in [2, 3].
Step 2: Compute Midpoint
c = \frac{2 + 3}{2} = 2.5
f(2.5) = (2.5)^3 - 4(2.5) - 9 = 15.625 - 10 - 9 = -3.375
Since f(2.5) * f(3) < 0, the root lies in [2.5, 3]; update a = 2.5.
Step 3: Repeat until required accuracy is achieved
Continuing this process and halving the interval at each step, the approximation converges to the root x ≈ 2.7065 after several iterations; the next two iterations are worked out below.
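Iteration 2: c = \frac{2.5 + 3}{2} = 2.75, and f(2.75) = (2.75)^3 - 4(2.75) - 9 = 20.796875 - 11 - 9 = 0.796875 > 0, so the root lies in [2.5, 2.75]; update b = 2.75.
Iteration 3: c = \frac{2.5 + 2.75}{2} = 2.625, and f(2.625) = (2.625)^3 - 4(2.625) - 9 = 18.087890625 - 10.5 - 9 = -1.412109375 < 0, so the root lies in [2.625, 2.75]; update a = 2.625.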
Advantages of the Bisection Method
Always convergent, provided f(x) is continuous on [a, b] and f(a) * f(b) < 0.
Simple and easy to implement.
Works for any continuous function.
Limitations of the Bisection Method
Slow Convergence (linear order).
Cannot find multiple roots in the same interval.
Requires an initial interval [a, b] over which f(x) changes sign.
Python Code Implementation
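Below is a minimal sketch of the method in Python, following the steps listed above. The function name bisection, its parameters, and the default tolerance are illustrative choices rather than a fixed API; the sketch is demonstrated on the example f(x) = x³ - 4x - 9 over [2, 3].

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Approximate a root of f on [a, b] by repeated interval halving.

    Assumes f is continuous on [a, b] and f(a) * f(b) < 0.
    """
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")

    for _ in range(max_iter):
        c = (a + b) / 2              # midpoint of the current interval
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c                 # exact root, or interval small enough
        if f(a) * fc < 0:
            b = c                    # root lies in [a, c]
        else:
            a = c                    # root lies in [c, b]
    return (a + b) / 2               # best estimate after max_iter halvings


# Example from above: f(x) = x^3 - 4x - 9 on [2, 3]
root = bisection(lambda x: x**3 - 4*x - 9, 2, 3)
print(f"Approximate root: {root:.6f}")   # about 2.706528
```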
Conclusion
The Bisection Method is a fundamental root-finding technique in numerical methods. While it guarantees convergence, it is relatively slow compared to other methods such as the Newton-Raphson or Secant methods. However, its simplicity and reliability make it an excellent choice for solving equations numerically.