MemoLearning Numerical Analysis

Computational methods for solving mathematical problems


Curriculum Overview

10 Total Units · ~130 Algorithms to Master · 6 Core Units · 4 Advanced Units

Unit 1: Error Analysis and Computer Arithmetic

Understand sources of error and floating-point arithmetic in numerical computations.

  • Sources of numerical error
  • Absolute and relative error
  • Machine epsilon and floating-point representation
  • Rounding and truncation errors
  • Error propagation
  • Condition numbers and stability
  • Significant digits and precision
  • Backward and forward error analysis

Unit 2: Root Finding Methods

Learn algorithms for finding zeros of nonlinear equations.

  • Bisection method
  • Newton's method (Newton-Raphson)
  • Secant method
  • Method of false position (regula falsi)
  • Fixed-point iteration
  • Convergence analysis
  • Multiple roots and modified Newton's method
  • Systems of nonlinear equations

Unit 3: Linear Systems

Master direct and iterative methods for solving systems of linear equations.

  • Gaussian elimination with pivoting
  • LU decomposition
  • Cholesky decomposition
  • QR decomposition
  • Jacobi iterative method
  • Gauss-Seidel method
  • SOR (Successive Over-Relaxation)
  • Convergence criteria for iterative methods

Unit 4: Interpolation

Study methods for constructing functions that pass through given data points.

  • Lagrange interpolation
  • Newton's divided differences
  • Hermite interpolation
  • Spline interpolation
  • Cubic splines
  • Error bounds for interpolation
  • Chebyshev nodes
  • Runge phenomenon

Unit 5: Numerical Differentiation and Integration

Learn computational methods for derivatives and integrals.

  • Finite difference formulas
  • Forward, backward, and central differences
  • Richardson extrapolation
  • Trapezoidal rule
  • Simpson's rule
  • Gaussian quadrature
  • Adaptive quadrature
  • Monte Carlo integration

Unit 6: Ordinary Differential Equations

Solve initial value problems for ODEs using numerical methods.

  • Euler's method
  • Improved Euler method (Heun's method)
  • Runge-Kutta methods
  • Fourth-order Runge-Kutta
  • Multistep methods
  • Adams-Bashforth methods
  • Adams-Moulton methods
  • Stability and stiff equations

Unit 7: Eigenvalue Problems

Find eigenvalues and eigenvectors using computational methods.

  • Power method
  • Inverse power method
  • Shifted power method
  • QR algorithm
  • Jacobi method for symmetric matrices
  • Householder transformations
  • Singular value decomposition (SVD)
  • Applications to principal component analysis

Unit 8: Approximation Theory

Study optimal approximation of functions and data fitting.

  • Least squares approximation
  • Linear regression
  • Polynomial curve fitting
  • Orthogonal polynomials
  • Chebyshev polynomials
  • Fourier approximation
  • Rational function approximation
  • Minimax approximation

Unit 9: Partial Differential Equations

Solve PDEs using finite difference and finite element methods.

  • Classification of PDEs
  • Finite difference methods
  • Heat equation (parabolic PDEs)
  • Wave equation (hyperbolic PDEs)
  • Laplace equation (elliptic PDEs)
  • Boundary conditions
  • Stability analysis
  • Introduction to finite element methods

Unit 10: Optimization Methods

Learn computational approaches to optimization problems.

  • Golden section search
  • Newton's method for optimization
  • Gradient descent
  • Conjugate gradient method
  • Quasi-Newton methods (BFGS)
  • Linear programming (simplex method)
  • Constrained optimization
  • Lagrange multipliers and KKT conditions

Unit 1: Error Analysis and Computer Arithmetic

Understand the fundamental sources of error in numerical computation and how computers represent numbers.

Sources of Numerical Error

Learn about modeling errors, data errors, truncation errors, and round-off errors. Understand how these different error sources affect numerical computations.

Absolute and Relative Error

Define absolute error |x - x̃| and relative error |x - x̃|/|x|. Learn when each measure is more appropriate and how they relate to significant digits.

Floating-Point Representation

Understand IEEE 754 standard for floating-point numbers. Learn about mantissa, exponent, and machine epsilon. Study the limitations of computer arithmetic.
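
As a quick illustration (a minimal Python sketch using a standard textbook idiom, not something prescribed by this unit), the loop below finds double-precision machine epsilon by halving until adding to 1.0 no longer changes it:

    import sys

    # Halve eps until 1.0 + eps/2 rounds back to exactly 1.0; what remains
    # is the gap between 1.0 and the next representable double.
    eps = 1.0
    while 1.0 + eps / 2 > 1.0:
        eps /= 2

    print(eps)                     # 2.220446049250313e-16 = 2**-52
    print(sys.float_info.epsilon)  # Python reports the same value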

Rounding and Truncation

Distinguish between rounding (nearest representable number) and truncation (chopping). Understand how these operations introduce errors in calculations.

Error Propagation

Study how errors in input data propagate through arithmetic operations. Learn error bounds for addition, subtraction, multiplication, and division.

Condition Numbers

Define condition numbers as measures of sensitivity to input perturbations. Learn to compute condition numbers for various problems and interpret their meaning.

Numerical Stability

Understand the difference between the conditioning of a problem and the numerical stability of an algorithm applied to it. Learn to identify and avoid numerically unstable algorithms.

Backward Error Analysis

Learn backward error analysis: instead of asking "how close is the computed answer to the exact answer?", ask "what problem does the computed answer solve exactly?"

Unit 2: Root Finding Methods

Master algorithms for finding zeros of nonlinear equations and systems.

Bisection Method

Learn the most robust root-finding method based on the Intermediate Value Theorem. Understand guaranteed convergence and linear convergence rate.
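
A minimal Python sketch of the method (the tolerance and test function are illustrative choices, not part of the curriculum):

    def bisect(f, a, b, tol=1e-10):
        """Root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
        if f(a) * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while b - a > tol:
            m = (a + b) / 2
            if f(a) * f(m) <= 0:   # the sign change, hence the root, is in [a, m]
                b = m
            else:                  # otherwise it is in [m, b]
                a = m
        return (a + b) / 2

    # The positive root of x^2 - 2 is sqrt(2).
    print(bisect(lambda x: x * x - 2, 0, 2))  # 1.4142135623...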

Newton's Method

Master Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n). Learn about quadratic convergence and potential pitfalls like poor initial guesses.
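
A sketch in Python, reusing the same illustrative test equation x² - 2 = 0:

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton-Raphson: x <- x - f(x)/f'(x)."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:   # near the root, correct digits roughly double per step
                return x
        raise RuntimeError("no convergence; try a different initial guess")

    print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.4142135623...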

Secant Method

Study the secant method as a finite-difference approximation to Newton's method. Understand superlinear convergence and when to use it over Newton's method.

Method of False Position

Learn regula falsi as a combination of bisection's reliability with the secant method's efficiency. Understand how it maintains a bracket around the root while improving convergence.

Fixed-Point Iteration

Study iteration schemes x_{n+1} = g(x_n) where roots correspond to fixed points. Learn convergence conditions and how to construct suitable g(x).
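
For example, x² = 2 can be rewritten as x = g(x) with g(x) = (x + 2/x)/2, which satisfies |g'(x)| < 1 near the root and so converges. A minimal sketch:

    def fixed_point(g, x0, tol=1e-12, max_iter=100):
        """Iterate x <- g(x) until successive iterates agree to tol."""
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("no convergence; |g'(x)| may exceed 1 near the fixed point")

    # sqrt(2) as the fixed point of g(x) = (x + 2/x)/2
    print(fixed_point(lambda x: (x + 2 / x) / 2, 1.0))  # 1.4142135623...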

Convergence Analysis

Understand linear, superlinear, and quadratic convergence rates. Learn how to analyze and compare the efficiency of different root-finding methods.

Multiple Roots

Study problems with repeated roots where standard Newton's method has linear convergence. Learn modified Newton's method for multiple roots.

Systems of Equations

Extend root-finding to nonlinear systems F(x) = 0. Learn multidimensional Newton's method and Jacobian matrix computations.

Unit 3: Linear Systems

Learn direct and iterative methods for solving systems of linear equations efficiently.

Gaussian Elimination

Master the fundamental direct method for solving Ax = b. Learn partial pivoting strategies to improve numerical stability and avoid division by small numbers.
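
A compact NumPy sketch of elimination with partial pivoting (for real work one would call numpy.linalg.solve, shown at the end for comparison; the 2×2 system is made up for illustration):

    import numpy as np

    def solve_gauss(A, b):
        """Solve Ax = b by Gaussian elimination with partial pivoting."""
        A = np.array(A, dtype=float)
        b = np.array(b, dtype=float)
        n = len(b)
        for k in range(n - 1):
            # Partial pivoting: swap up the row with the largest |entry| in column k.
            p = k + np.argmax(np.abs(A[k:, k]))
            if p != k:
                A[[k, p]] = A[[p, k]]
                b[[k, p]] = b[[p, k]]
            for i in range(k + 1, n):
                m = A[i, k] / A[k, k]     # multiplier stays small thanks to pivoting
                A[i, k:] -= m * A[k, k:]
                b[i] -= m * b[k]
        # Back substitution on the resulting upper triangular system.
        x = np.zeros(n)
        for i in range(n - 1, -1, -1):
            x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        return x

    A = [[2.0, 1.0], [1.0, 3.0]]
    b = [3.0, 5.0]
    print(solve_gauss(A, b))       # [0.8 1.4]
    print(np.linalg.solve(A, b))   # identical, for comparison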

LU Decomposition

Factor matrices as A = LU where L is lower triangular and U is upper triangular. Learn how this enables efficient solution of multiple systems with the same matrix.

Cholesky Decomposition

Study the special factorization A = LL^T for symmetric positive definite matrices. Understand computational advantages and applications to least squares problems.

QR Decomposition

Learn to factor A = QR where Q is orthogonal and R is upper triangular. Understand Gram-Schmidt process and Householder reflections for computing QR.

Jacobi Method

Study the basic iterative method in which each variable is updated simultaneously using only values from the previous iteration. Learn convergence conditions and implementation.

Gauss-Seidel Method

Improve upon Jacobi by using updated values immediately. Understand how this typically improves convergence rate and reduces memory requirements.
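
A minimal sketch of a Gauss-Seidel sweep (the Jacobi variant would read every neighbor value from the previous sweep instead; the diagonally dominant test system is illustrative):

    import numpy as np

    def gauss_seidel(A, b, tol=1e-10, max_iter=1000):
        """Gauss-Seidel iteration: each updated x[i] is used immediately."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                return x
        raise RuntimeError("no convergence; A may not be diagonally dominant")

    # Strictly diagonally dominant, so convergence is guaranteed.
    A = np.array([[4.0, 1.0], [2.0, 5.0]])
    b = np.array([6.0, 9.0])
    print(gauss_seidel(A, b))  # ~[1.1667 1.3333]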

SOR Method

Learn Successive Over-Relaxation as an acceleration technique for Gauss-Seidel. Understand how the relaxation parameter ω affects convergence rate.

Iterative Convergence

Study convergence criteria for iterative methods. Learn about spectral radius, diagonal dominance, and conditions that guarantee convergence.

Unit 4: Interpolation

Construct functions that pass through given data points and understand interpolation error.

Lagrange Interpolation

Learn to construct polynomial interpolants using Lagrange basis functions. Understand the explicit formula and uniqueness of polynomial interpolation.
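
A direct sketch of evaluating the Lagrange form (O(n²) per evaluation point; the barycentric formulation is the more efficient route in practice):

    def lagrange_eval(xs, ys, x):
        """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x."""
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            # Basis L_i(x) = product over j != i of (x - x_j) / (x_i - x_j);
            # L_i is 1 at x_i and 0 at every other node.
            basis = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    basis *= (x - xj) / (xi - xj)
            total += yi * basis
        return total

    # The unique quadratic through (0,1), (1,2), (2,5) is x^2 + 1.
    print(lagrange_eval([0, 1, 2], [1, 2, 5], 1.5))  # 3.25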

Newton's Divided Differences

Master the recursive construction of interpolating polynomials using divided differences. Learn efficient algorithms for adding new data points.

Hermite Interpolation

Extend interpolation to include derivative information. Learn to construct polynomials that match both function values and derivatives at given points.

Spline Interpolation

Study piecewise polynomial interpolation. Learn how splines provide smooth curves while avoiding oscillations of high-degree polynomial interpolation.

Cubic Splines

Master the most common spline type with continuous second derivatives. Learn natural, clamped, and periodic boundary conditions for cubic splines.

Interpolation Error

Understand error bounds for polynomial interpolation. Learn how error depends on the spacing of interpolation points and smoothness of the function.

Chebyshev Nodes

Learn optimal point placement for polynomial interpolation. Understand how Chebyshev points minimize the maximum interpolation error.

Runge Phenomenon

Study the oscillatory behavior of high-degree polynomial interpolation with equally spaced points. Learn why splines or Chebyshev points are preferred.

Unit 5: Numerical Differentiation and Integration

Learn computational methods for computing derivatives and integrals numerically.

Forward, Backward, Central Differences

Learn f'(x) ≈ (f(x+h) - f(x))/h, f'(x) ≈ (f(x) - f(x-h))/h, and f'(x) ≈ (f(x+h) - f(x-h))/(2h). Understand accuracy and stability trade-offs.
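
The trade-off is easy to see numerically: shrinking h reduces truncation error (O(h²) for the central formula) until round-off, roughly ε/h, takes over. A small experiment with sin as an illustrative test function:

    import math

    f, x = math.sin, 1.0
    for h in (1e-1, 1e-4, 1e-8, 1e-12):
        approx = (f(x + h) - f(x - h)) / (2 * h)   # central difference
        print(h, abs(approx - math.cos(x)))        # error falls, then rises again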

Richardson Extrapolation

Improve accuracy by combining estimates with different step sizes. Learn how to eliminate leading error terms systematically using Richardson's method.

Trapezoidal Rule

Approximate integrals using ∫f(x)dx ≈ (h/2)[f(a) + 2f(x₁) + 2f(x₂) + ... + f(b)]. Learn error analysis and composite trapezoidal rule.
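
A minimal composite-rule sketch in Python (n and the test integral are illustrative):

    import math

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n subintervals; global error O(h^2)."""
        h = (b - a) / n
        s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
        return h * s

    # Integral of sin from 0 to pi is exactly 2.
    print(trapezoid(math.sin, 0, math.pi, 100))  # ~1.99984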

Simpson's Rule

Use parabolic approximation for higher accuracy: ∫f(x)dx ≈ (h/3)[f(x₀) + 4f(x₁) + 2f(x₂) + 4f(x₃) + ... + f(xₙ)]. Understand O(h⁴) error.
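
A matching sketch; compare with the trapezoidal result above on the same integral to see the error drop from O(h²) to O(h⁴):

    import math

    def simpson(f, a, b, n):
        """Composite Simpson's rule; n must be even; global error O(h^4)."""
        if n % 2:
            raise ValueError("n must be even")
        h = (b - a) / n
        s = f(a) + f(b)
        s += 4 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes, weight 4
        s += 2 * sum(f(a + i * h) for i in range(2, n, 2))  # even interior nodes, weight 2
        return h * s / 3

    print(simpson(math.sin, 0, math.pi, 100))  # agrees with 2 to about 8 digits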

Gaussian Quadrature

Learn optimal point and weight selection for polynomial integration. Understand Legendre polynomials and how Gaussian quadrature achieves maximum degree exactness.

Adaptive Quadrature

Automatically adjust step size based on error estimates. Learn recursive algorithms that concentrate computational effort where the integrand varies most.

Monte Carlo Integration

Use random sampling for high-dimensional integrals. Understand the probabilistic approach and its advantages for complex domains and high dimensions.
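
In one dimension the method is not competitive, but it is easy to state; a sketch whose error shrinks like O(1/√n) regardless of dimension (sample count and integrand are illustrative):

    import random

    def mc_integrate(f, a, b, n=100_000):
        """Monte Carlo estimate of the integral of f over [a, b]."""
        total = sum(f(random.uniform(a, b)) for _ in range(n))
        return (b - a) * total / n

    # Integral of x^2 on [0, 1] is 1/3; expect roughly 3 correct digits at n = 1e5.
    print(mc_integrate(lambda x: x * x, 0.0, 1.0))  # ~0.333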

Unit 6: Ordinary Differential Equations

Solve initial value problems for ODEs using numerical methods with controlled accuracy.

Euler's Method

Learn the simplest method: y_{n+1} = y_n + hf(t_n, y_n). Understand geometric interpretation and first-order accuracy of Euler's method.
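
A minimal sketch, using y' = y, y(0) = 1 (exact solution eᵗ) as an illustrative test problem:

    def euler(f, t0, y0, h, n):
        """Euler's method for y' = f(t, y): n steps of size h from (t0, y0)."""
        t, y = t0, y0
        for _ in range(n):
            y += h * f(t, y)   # follow the tangent line for one step
            t += h
        return y

    # First-order accuracy: even 1000 tiny steps leave a visible error at t = 1.
    print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))  # ~2.7169 vs e = 2.71828...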

Improved Euler Method

Study Heun's method as a predictor-corrector scheme. Learn how averaging slopes improves accuracy from O(h) to O(h²).

Runge-Kutta Methods

Master the family of methods that use multiple slope evaluations per step. Understand the general framework for constructing higher-order methods.

Fourth-Order Runge-Kutta

Learn the classic RK4 method with O(h⁴) accuracy. Understand the optimal balance between computational cost and accuracy for most practical problems.
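
A sketch of one classical RK4 step; on the same y' = y test problem used above, ten steps of size 0.1 already agree with e to about six digits:

    def rk4_step(f, t, y, h):
        """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

    t, y, h = 0.0, 1.0, 0.1
    for _ in range(10):
        y = rk4_step(lambda t, y: y, t, y, h)
        t += h
    print(y)  # ~2.7182797 vs e = 2.7182818...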

Multistep Methods

Study methods that use information from multiple previous points. Learn how multistep methods can achieve high accuracy with fewer function evaluations.

Adams-Bashforth Methods

Master explicit multistep methods that extrapolate the derivative with a polynomial through previous points. Learn the trade-off between their low cost per step and the step-size restrictions that stability imposes.

Adams-Moulton Methods

Study implicit multistep methods with better stability properties. Learn predictor-corrector combinations that balance accuracy and computational efficiency.

Stability and Stiff Equations

Understand numerical stability vs. mathematical stability. Learn about stiff differential equations and implicit methods designed to handle them.

Unit 7: Eigenvalue Problems

Find eigenvalues and eigenvectors of matrices using iterative and direct methods.

Power Method

Learn the basic iterative method for finding the dominant eigenvalue. Understand convergence rate and how it depends on eigenvalue separation.
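
A minimal NumPy sketch using the Rayleigh quotient as the eigenvalue estimate (the seeded random starting vector and the 2×2 test matrix are illustrative assumptions):

    import numpy as np

    def power_method(A, tol=1e-10, max_iter=1000):
        """Estimate the dominant eigenpair of A by repeated multiplication."""
        x = np.random.default_rng(0).standard_normal(A.shape[0])
        lam = 0.0
        for _ in range(max_iter):
            x = A @ x
            x /= np.linalg.norm(x)   # normalize to prevent overflow
            lam_new = x @ A @ x      # Rayleigh quotient (x is unit length)
            if abs(lam_new - lam) < tol:
                return lam_new, x
            lam = lam_new
        return lam, x

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    lam, v = power_method(A)
    print(lam)                    # ~3.618, i.e. (5 + sqrt(5)) / 2
    print(np.linalg.eigvalsh(A))  # [1.382, 3.618] for comparison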

Inverse Power Method

Find the eigenvalue of smallest magnitude by applying the power method to A⁻¹. Learn how matrix factorization makes this method practical for large matrices.

Shifted Power Method

Target specific eigenvalues using (A - σI)⁻¹. Understand how shifting enables finding eigenvalues near a given value σ.

QR Algorithm

Master the most important method for finding all eigenvalues. Learn how repeated QR factorization leads to upper triangular form revealing eigenvalues.

Jacobi Method

Study the classical method for symmetric matrices using plane rotations. Learn how Jacobi iterations systematically reduce off-diagonal elements.

Householder Transformations

Learn orthogonal transformations that introduce zeros systematically. Understand their role in reducing matrices to tridiagonal or Hessenberg form.

Singular Value Decomposition

Study the factorization A = UΣV^T for any matrix. Learn applications to least squares, data compression, and principal component analysis.

Applications to PCA

Apply eigenvalue methods to principal component analysis. Understand dimensionality reduction and how eigenvalues indicate variance explained.

Unit 8: Approximation Theory

Study optimal methods for approximating functions and fitting data.

Least Squares Approximation

Learn to minimize Σ(f(x_i) - p(x_i))² over all polynomials p of given degree. Understand the normal equations and their solution.
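
A small NumPy sketch fitting y ≈ ax + b via the normal equations (the data points are made up for illustration; in practice numpy.linalg.lstsq is preferred for conditioning reasons):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 1.9, 3.2, 3.9])

    # Design matrix with columns [x, 1]; solve (X^T X) c = X^T y.
    X = np.column_stack([x, np.ones_like(x)])
    coef = np.linalg.solve(X.T @ X, X.T @ y)
    print(coef)  # slope ~0.97, intercept ~1.07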

Linear Regression

Apply least squares to fit linear models y = ax + b to data. Learn about correlation coefficients and goodness of fit measures.

Polynomial Curve Fitting

Extend least squares to higher-degree polynomials. Understand the trade-off between model complexity and overfitting.

Orthogonal Polynomials

Study polynomial families that are orthogonal with respect to specific weight functions. Learn how orthogonality simplifies least squares computations.

Chebyshev Polynomials

Master the polynomials that, when scaled to be monic, deviate least from zero on [-1,1] among all monic polynomials of the same degree. Understand their minimax property and applications to function approximation.

Fourier Approximation

Learn to approximate periodic functions using trigonometric series. Understand discrete Fourier transform and fast Fourier transform algorithms.

Rational Approximation

Study approximation by ratios of polynomials P(x)/Q(x). Learn Padé approximations and their advantages for functions with poles or asymptotes.

Minimax Approximation

Find the polynomial that minimizes the maximum absolute error. Learn the Remez exchange algorithm and equioscillation theorem.

Unit 9: Partial Differential Equations

Solve PDEs numerically using finite difference and finite element methods.

PDE Classification

Learn to classify second-order PDEs as elliptic, parabolic, or hyperbolic. Understand how classification determines appropriate numerical methods.

Finite Difference Methods

Replace derivatives with finite difference approximations. Learn how to construct difference schemes and analyze their properties.

Heat Equation

Solve parabolic PDEs like ∂u/∂t = α∂²u/∂x². Learn explicit and implicit time-stepping schemes and their stability requirements.
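
A minimal sketch of explicit (forward-time, centered-space) stepping, with grid parameters chosen to satisfy the stability bound r = αΔt/Δx² ≤ 1/2 (all values illustrative):

    import numpy as np

    alpha, dx, dt = 1.0, 0.1, 0.004
    r = alpha * dt / dx**2                       # 0.4, inside the stability limit
    u = np.sin(np.pi * np.linspace(0, 1, 11))    # initial profile, u = 0 at both ends

    for _ in range(50):
        # FTCS update of the interior points; the boundary values stay fixed at 0.
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])

    print(u.max())  # peak decays smoothly to ~0.13 -- no oscillation, since r <= 1/2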

Wave Equation

Handle hyperbolic PDEs like ∂²u/∂t² = c²∂²u/∂x². Understand the CFL condition and characteristic-based methods.

Laplace Equation

Solve elliptic PDEs like ∇²u = 0. Learn iterative methods for the resulting linear systems and multigrid techniques.

Boundary Conditions

Handle Dirichlet, Neumann, and Robin boundary conditions. Learn how boundary conditions affect the numerical scheme construction.

Stability Analysis

Study von Neumann stability analysis for finite difference schemes. Learn about CFL conditions and numerical dissipation/dispersion.

Finite Element Methods

Introduction to variational formulations and piecewise polynomial approximations. Understand the advantages for complex geometries and boundary conditions.

Unit 10: Optimization Methods

Learn computational approaches to finding minima and maxima of functions.

Golden Section Search

Learn the classic derivative-free method for minimizing a unimodal function of one variable. Understand the golden ratio's role in keeping the number of function evaluations near the theoretical minimum.
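
A minimal sketch (a production version would reuse one function evaluation per iteration; this one recomputes both for clarity):

    import math

    def golden_section(f, a, b, tol=1e-8):
        """Minimize a unimodal f on [a, b] by golden-section interval reduction."""
        invphi = (math.sqrt(5) - 1) / 2                # 1/phi ~ 0.618
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        while b - a > tol:
            if f(c) < f(d):         # minimum lies in [a, d]
                b, d = d, c
                c = b - invphi * (b - a)
            else:                   # minimum lies in [c, b]
                a, c = c, d
                d = a + invphi * (b - a)
        return (a + b) / 2

    # Minimum of (x - 2)^2 + 1 on [0, 5] is at x = 2.
    print(golden_section(lambda x: (x - 2) ** 2 + 1, 0, 5))  # ~2.0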

Newton's Method for Optimization

Apply Newton's method to find critical points: x_{k+1} = x_k - [∇²f(x_k)]⁻¹∇f(x_k). Learn about convergence and Hessian modification techniques.

Gradient Descent

Study the fundamental first-order method: x_{k+1} = x_k - α_k∇f(x_k). Learn step size selection and convergence analysis for convex functions.
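
A fixed-step sketch on an illustrative quadratic; for this scheme the step size must satisfy α < 2/L, where L is the largest eigenvalue of the Hessian (here L = 20, so α = 0.05 is safe):

    import numpy as np

    def gradient_descent(grad, x0, alpha=0.05, tol=1e-8, max_iter=10_000):
        """Fixed-step gradient descent: x <- x - alpha * grad(x)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:   # stop when the gradient is nearly zero
                break
            x = x - alpha * g
        return x

    # Minimize f(x, y) = (x - 1)^2 + 10 y^2; gradient is (2(x - 1), 20 y).
    grad = lambda v: np.array([2 * (v[0] - 1), 20 * v[1]])
    print(gradient_descent(grad, [0.0, 1.0]))  # ~[1. 0.]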

Conjugate Gradient Method

Learn the method that, in exact arithmetic, minimizes an n-dimensional quadratic in at most n steps. Understand how conjugate directions accelerate convergence, and how the method extends to general nonlinear optimization.

Quasi-Newton Methods

Study BFGS and other methods that approximate the Hessian. Learn how these methods achieve superlinear convergence without computing second derivatives.

Linear Programming

Master the simplex method for problems with linear objective and constraints. Understand basic feasible solutions and optimality conditions.

Constrained Optimization

Learn methods for optimization subject to equality and inequality constraints. Study penalty methods, barrier methods, and sequential quadratic programming.

KKT Conditions

Understand the Karush-Kuhn-Tucker conditions as necessary conditions for constrained optima. Learn their role in algorithm development and optimality verification.