Computational methods for solving mathematical problems
Understand sources of error and floating-point arithmetic in numerical computations.
Learn algorithms for finding zeros of nonlinear equations.
Master direct and iterative methods for solving systems of linear equations.
Study methods for constructing functions that pass through given data points.
Learn computational methods for derivatives and integrals.
Solve initial value problems for ODEs using numerical methods.
Find eigenvalues and eigenvectors using computational methods.
Study optimal approximation of functions and data fitting.
Solve PDEs using finite difference and finite element methods.
Learn computational approaches to optimization problems.
Understand the fundamental sources of error in numerical computation and how computers represent numbers.
Learn about modeling errors, data errors, truncation errors, and round-off errors. Understand how these different error sources affect numerical computations.
Define absolute error |x - x̃| and relative error |x - x̃|/|x|. Learn when each measure is more appropriate and how they relate to significant digits.
Understand IEEE 754 standard for floating-point numbers. Learn about mantissa, exponent, and machine epsilon. Study the limitations of computer arithmetic.
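For a concrete feel, here is a tiny Python/NumPy sketch (the printed values are the standard double-precision figures):

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable double.
eps = np.finfo(np.float64).eps
print(eps)                   # 2.220446049250313e-16

# Adding anything much smaller than eps to 1.0 is lost to rounding.
print(1.0 + eps / 4 == 1.0)  # True

# Classic consequence: decimal fractions are not exact in binary.
print(0.1 + 0.2 == 0.3)      # False
print(0.1 + 0.2)             # 0.30000000000000004
```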
Distinguish between rounding (nearest representable number) and truncation (chopping). Understand how these operations introduce errors in calculations.
Study how errors in input data propagate through arithmetic operations. Learn error bounds for addition, subtraction, multiplication, and division.
Define condition numbers as measures of sensitivity to input perturbations. Learn to compute condition numbers for various problems and interpret their meaning.
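A small illustration using the Hilbert matrix, the usual ill-conditioned example (the helper function below is just for demonstration):

```python
import numpy as np

def hilbert(n):
    """The n x n Hilbert matrix, a classic ill-conditioned example."""
    i = np.arange(n)
    return 1.0 / (i[:, None] + i[None, :] + 1)

for n in (4, 8, 12):
    print(n, np.linalg.cond(hilbert(n)))
# The condition number grows explosively with n; once it approaches 1/eps (~1e16),
# round-off in the data alone can wipe out every correct digit of a solution.
```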
Understand the difference between mathematical stability and numerical stability. Learn to identify and avoid numerically unstable algorithms.
Learn backward error analysis: instead of asking "how close is the computed answer to the exact answer?", ask "what problem does the computed answer solve exactly?"
Master algorithms for finding zeros of nonlinear equations and systems.
Learn the most robust root-finding method based on the Intermediate Value Theorem. Understand guaranteed convergence and linear convergence rate.
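A minimal Python sketch of bisection; the function name and the sample equation are purely illustrative:

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:          # root lies in [a, m]
            b, fb = m, fm
        else:                    # root lies in [m, b]
            a, fa = m, fm
    return (a + b) / 2

# Example: root of x^3 - x - 2, which lies near 1.5214
print(bisect(lambda x: x**3 - x - 2, 1, 2))
```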
Master Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n). Learn about quadratic convergence and potential pitfalls like poor initial guesses.
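A short illustrative implementation, assuming the derivative is supplied explicitly:

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration; quadratic convergence near a simple root."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Example: sqrt(2) as the positive root of x^2 - 2
print(newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.0))   # 1.41421356...
```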
Study the secant method as a finite-difference approximation to Newton's method. Understand superlinear convergence and when to use it over Newton's method.
Learn regula falsi as a combination of bisection's reliability with the secant method's efficiency. Understand how it maintains a bracket around the root while improving convergence.
Study iteration schemes x_{n+1} = g(x_n) where roots correspond to fixed points. Learn convergence conditions and how to construct suitable g(x).
Understand linear, superlinear, and quadratic convergence rates. Learn how to analyze and compare the efficiency of different root-finding methods.
Study problems with repeated roots where standard Newton's method has linear convergence. Learn modified Newton's method for multiple roots.
Extend root-finding to nonlinear systems F(x) = 0. Learn multidimensional Newton's method and Jacobian matrix computations.
Learn direct and iterative methods for solving systems of linear equations efficiently.
Master the fundamental direct method for solving Ax = b. Learn partial pivoting strategies to improve numerical stability and avoid division by small pivot elements.
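One possible sketch of elimination with partial pivoting, written with NumPy for clarity rather than performance:

```python
import numpy as np

def solve_gauss(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: bring the largest remaining pivot candidate into row k.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(solve_gauss(A, b))   # ~[2., 3., -1.]
```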
Factor matrices as A = LU where L is lower triangular and U is upper triangular. Learn how this enables efficient solution of multiple systems with the same matrix.
Study the special factorization A = LL^T for symmetric positive definite matrices. Understand computational advantages and applications to least squares problems.
Learn to factor A = QR where Q is orthogonal and R is upper triangular. Understand Gram-Schmidt process and Householder reflections for computing QR.
Study the basic iterative method where each variable is updated using the most recent values of other variables. Learn convergence conditions and implementation.
Improve upon Jacobi by using updated values immediately. Understand how this typically improves convergence rate and reduces memory requirements.
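A compact Gauss-Seidel sketch; using x_old throughout the inner sum would turn it back into Jacobi:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: each component uses the freshest available values."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x
    return x

# Diagonally dominant system, so convergence is guaranteed.
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [15.0, 10.0, 10.0]
print(gauss_seidel(A, b))
```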
Learn Successive Over-Relaxation as an acceleration technique for Gauss-Seidel. Understand how the relaxation parameter ω affects convergence rate.
Study convergence criteria for iterative methods. Learn about spectral radius, diagonal dominance, and conditions that guarantee convergence.
Construct functions that pass through given data points and understand interpolation error.
Learn to construct polynomial interpolants using Lagrange basis functions. Understand the explicit formula and uniqueness of polynomial interpolation.
Master the recursive construction of interpolating polynomials using divided differences. Learn efficient algorithms for adding new data points.
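A possible NumPy sketch of the divided-difference table and Newton-form evaluation (helper names are illustrative):

```python
import numpy as np

def divided_differences(x, y):
    """Return Newton divided-difference coefficients for nodes x and values y."""
    x = np.asarray(x, dtype=float)
    coef = np.array(y, dtype=float)
    for j in range(1, len(x)):
        # Overwrite in place: coef[i] becomes f[x_{i-j}, ..., x_i].
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form interpolant at t via Horner-like nesting."""
    result = coef[-1]
    for c, xk in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xk) + c
    return result

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 2.0, 5.0, 10.0]          # samples of x^2 + 1
coef = divided_differences(x, y)
print(newton_eval(coef, x, 1.5))   # 3.25, matching 1.5^2 + 1
```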
Extend interpolation to include derivative information. Learn to construct polynomials that match both function values and derivatives at given points.
Study piecewise polynomial interpolation. Learn how splines provide smooth curves while avoiding oscillations of high-degree polynomial interpolation.
Master the most common spline type with continuous second derivatives. Learn natural, clamped, and periodic boundary conditions for cubic splines.
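If SciPy is available, its CubicSpline class exposes these boundary conditions directly; a brief usage sketch:

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.linspace(0, 2 * np.pi, 9)
y = np.sin(x)

# 'natural' sets the second derivative to zero at both ends;
# 'clamped' would instead fix the first derivative there.
spline = CubicSpline(x, y, bc_type='natural')

t = 1.3
print(spline(t), np.sin(t))   # the spline tracks sin closely between the knots
```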
Understand error bounds for polynomial interpolation. Learn how error depends on the spacing of interpolation points and smoothness of the function.
Learn optimal point placement for polynomial interpolation. Understand how Chebyshev points minimize the maximum interpolation error.
Study the oscillatory behavior of high-degree polynomial interpolation with equally spaced points. Learn why splines or Chebyshev points are preferred.
Learn computational methods for computing derivatives and integrals numerically.
Learn f'(x) ≈ (f(x+h) - f(x))/h, f'(x) ≈ (f(x) - f(x-h))/h, and f'(x) ≈ (f(x+h) - f(x-h))/(2h). Understand accuracy and stability trade-offs.
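A quick experiment showing the accuracy/round-off trade-off as h shrinks:

```python
import numpy as np

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h            # O(h) truncation error

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2) truncation error

f, x, exact = np.sin, 1.0, np.cos(1.0)
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    print(h, abs(forward_diff(f, x, h) - exact), abs(central_diff(f, x, h) - exact))
# The error first shrinks with h, then grows again as cancellation/round-off dominates.
```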
Improve accuracy by combining estimates computed with different step sizes. Learn how Richardson extrapolation eliminates leading error terms systematically.
Approximate integrals using ∫f(x)dx ≈ (h/2)[f(a) + 2f(x₁) + 2f(x₂) + ... + f(b)]. Learn error analysis and composite trapezoidal rule.
Use parabolic approximation for higher accuracy: ∫f(x)dx ≈ (h/3)[f(x₀) + 4f(x₁) + 2f(x₂) + 4f(x₃) + ... + f(xₙ)]. Understand O(h⁴) error.
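Illustrative composite trapezoid and Simpson implementations (vectorized with NumPy; the names are just for the example):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals, O(h^2) error."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even, O(h^4) error."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return (h / 3) * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

# Integral of sin on [0, pi] is exactly 2.
print(trapezoid(np.sin, 0, np.pi, 16))   # ~1.9936
print(simpson(np.sin, 0, np.pi, 16))     # ~2.000017
```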
Learn optimal node and weight selection for numerical integration. Understand Legendre polynomials and how Gaussian quadrature achieves the maximum possible degree of polynomial exactness.
Automatically adjust step size based on error estimates. Learn recursive algorithms that concentrate computational effort where the integrand varies most.
Use random sampling for high-dimensional integrals. Understand the probabilistic approach and its advantages for complex domains and high dimensions.
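A bare-bones Monte Carlo sketch over the unit hypercube (the test integrand is chosen only because its exact value is easy to check):

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo(f, dim, n_samples):
    """Estimate the integral of f over the unit hypercube [0, 1]^dim."""
    x = rng.random((n_samples, dim))
    # The statistical error decreases like 1/sqrt(n_samples), independent of dim.
    return f(x).mean()

# Integral of sum(x_i^2) over [0, 1]^6 is 6 * 1/3 = 2.
f = lambda x: (x ** 2).sum(axis=1)
print(monte_carlo(f, dim=6, n_samples=200_000))   # ~2.00, off by a few 1e-3
```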
Solve initial value problems for ODEs using numerical methods with controlled accuracy.
Learn the simplest method: y_{n+1} = y_n + hf(t_n, y_n). Understand geometric interpretation and first-order accuracy of Euler's method.
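A minimal explicit Euler sketch on a test problem with a known solution:

```python
import numpy as np

def euler(f, t0, y0, h, n_steps):
    """Explicit Euler: advance y' = f(t, y) with first-order accuracy."""
    t, y = t0, y0
    ts, ys = [t], [y]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

# y' = -2y, y(0) = 1; the exact solution is exp(-2t).
ts, ys = euler(lambda t, y: -2 * y, 0.0, 1.0, h=0.05, n_steps=20)
print(ys[-1], np.exp(-2 * ts[-1]))   # 0.1216 vs 0.1353: the O(h) error is visible
```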
Study Heun's method as a predictor-corrector scheme. Learn how averaging slopes improves accuracy from O(h) to O(h²).
Master the family of methods that use multiple slope evaluations per step. Understand the general framework for constructing higher-order methods.
Learn the classic RK4 method with O(h⁴) accuracy. Understand the optimal balance between computational cost and accuracy for most practical problems.
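A sketch of one classical RK4 step, reusing the same test problem as above:

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step; local error O(h^5), global error O(h^4)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Same test problem: y' = -2y, y(0) = 1, integrated to t = 1.
t, y, h = 0.0, 1.0, 0.1
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: -2 * y, t, y, h)
    t += h
print(y, math.exp(-2))   # ~0.1353396 vs 0.1353353: error ~4e-6 despite the large step
```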
Study methods that use information from multiple previous points. Learn how multistep methods can achieve high accuracy with fewer function evaluations.
Master explicit multistep methods built by extrapolating a polynomial through previous derivative values. Learn how their limited stability regions restrict the usable step size.
Study implicit multistep methods with better stability properties. Learn predictor-corrector combinations that balance accuracy and computational efficiency.
Understand numerical stability vs. mathematical stability. Learn about stiff differential equations and implicit methods designed to handle them.
Find eigenvalues and eigenvectors of matrices using iterative and direct methods.
Learn the basic iterative method for finding the dominant eigenvalue. Understand convergence rate and how it depends on eigenvalue separation.
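An illustrative power-iteration sketch using the Rayleigh quotient as the eigenvalue estimate:

```python
import numpy as np

def power_method(A, n_iter=200, tol=1e-12):
    """Estimate the dominant eigenvalue/eigenvector of A by repeated multiplication."""
    rng = np.random.default_rng(0)
    v = rng.random(A.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        w = A @ v
        v_new = w / np.linalg.norm(w)
        lam_new = v_new @ A @ v_new        # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            return lam_new, v_new
        v, lam = v_new, lam_new
    return lam, v

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(lam)                      # ~3.618, the larger eigenvalue of A
print(np.linalg.eigvalsh(A))    # [1.382, 3.618] for comparison
```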
Find the eigenvalue of smallest magnitude by applying the power method to A⁻¹. Learn how matrix factorization makes this method practical for large matrices.
Target specific eigenvalues using (A - σI)⁻¹. Understand how shifting enables finding eigenvalues near a given value σ.
Master the most important method for finding all eigenvalues. Learn how repeated QR factorization leads to upper triangular form revealing eigenvalues.
Study the classical method for symmetric matrices using plane rotations. Learn how Jacobi iterations systematically reduce off-diagonal elements.
Learn orthogonal transformations that introduce zeros systematically. Understand their role in reducing matrices to tridiagonal or Hessenberg form.
Study the factorization A = UΣV^T for any matrix. Learn applications to least squares, data compression, and principal component analysis.
Apply eigenvalue methods to principal component analysis. Understand dimensionality reduction and how eigenvalues indicate variance explained.
Study optimal methods for approximating functions and fitting data.
Learn to minimize Σ(f(x_i) - p(x_i))² over all polynomials p of given degree. Understand the normal equations and their solution.
Apply least squares to fit linear models y = ax + b to data. Learn about correlation coefficients and goodness of fit measures.
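A small sketch contrasting the normal equations with NumPy's built-in least-squares solver (the synthetic data are invented for the example):

```python
import numpy as np

# Noisy samples of y = 2x + 1 (coefficients chosen just for illustration).
rng = np.random.default_rng(1)
x = np.linspace(0, 5, 30)
y = 2 * x + 1 + rng.normal(scale=0.3, size=x.size)

# Normal equations: solve (A^T A) c = A^T y for the design matrix A = [x, 1].
A = np.column_stack([x, np.ones_like(x)])
a, b = np.linalg.solve(A.T @ A, A.T @ y)
print(a, b)                                  # close to 2 and 1

# In practice np.linalg.lstsq (QR/SVD based) is better conditioned:
print(np.linalg.lstsq(A, y, rcond=None)[0])
```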
Extend least squares to higher-degree polynomials. Understand the trade-off between model complexity and overfitting.
Study polynomial families that are orthogonal with respect to specific weight functions. Learn how orthogonality simplifies least squares computations.
Master the polynomials that deviate least from zero on [-1,1]. Understand their minimax property and applications to function approximation.
Learn to approximate periodic functions using trigonometric series. Understand discrete Fourier transform and fast Fourier transform algorithms.
Study approximation by ratios of polynomials P(x)/Q(x). Learn Padé approximations and their advantages for functions with poles or asymptotes.
Find the polynomial that minimizes the maximum absolute error. Learn the Remez exchange algorithm and equioscillation theorem.
Solve PDEs numerically using finite difference and finite element methods.
Learn to classify second-order PDEs as elliptic, parabolic, or hyperbolic. Understand how classification determines appropriate numerical methods.
Replace derivatives with finite difference approximations. Learn how to construct difference schemes and analyze their properties.
Solve parabolic PDEs like ∂u/∂t = α∂²u/∂x². Learn explicit and implicit time-stepping schemes and their stability requirements.
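A minimal explicit (FTCS) sketch for the 1-D heat equation, with the time step chosen to respect the stability bound α·Δt/Δx² ≤ 1/2:

```python
import numpy as np

# Heat equation u_t = alpha * u_xx on [0, 1] with u(0, t) = u(1, t) = 0.
alpha, nx, nt = 1.0, 51, 1000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / alpha          # explicit scheme is stable only if alpha*dt/dx^2 <= 1/2
r = alpha * dt / dx**2

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)             # initial condition

for _ in range(nt):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])

# Exact solution is exp(-pi^2 * t) * sin(pi * x).
t_final = nt * dt
print(abs(u - np.exp(-np.pi**2 * t_final) * np.sin(np.pi * x)).max())   # small, O(dt + dx^2)
```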
Handle hyperbolic PDEs like ∂²u/∂t² = c²∂²u/∂x². Understand the CFL condition and characteristic-based methods.
Solve elliptic PDEs like ∇²u = 0. Learn iterative methods for the resulting linear systems and multigrid techniques.
Handle Dirichlet, Neumann, and Robin boundary conditions. Learn how boundary conditions affect the numerical scheme construction.
Study von Neumann stability analysis for finite difference schemes. Learn about CFL conditions and numerical dissipation/dispersion.
Get an introduction to variational formulations and piecewise polynomial approximations. Understand the advantages of finite elements for complex geometries and boundary conditions.
Learn computational approaches to finding minima and maxima of functions.
Learn the optimal method for one-dimensional optimization without derivatives. Understand the golden ratio's role in minimizing function evaluations.
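An illustrative golden-section implementation; each iteration reuses one of the two interior evaluations:

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b]; the golden ratio lets each step reuse one evaluation."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                          # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                                # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Minimum of (x - 2)^2 + 1 on [0, 5] is at x = 2.
print(golden_section(lambda x: (x - 2)**2 + 1, 0, 5))
```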
Apply Newton's method to find critical points: x_{k+1} = x_k - [∇²f(x_k)]⁻¹∇f(x_k). Learn about convergence and Hessian modification techniques.
Study the fundamental first-order method: x_{k+1} = x_k - α_k∇f(x_k). Learn step size selection and convergence analysis for convex functions.
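A fixed-step gradient-descent sketch on a simple quadratic; for convergence the step size must stay below 2/L, where L is the largest curvature (here 20):

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.05, tol=1e-8, max_iter=10_000):
    """Fixed-step gradient descent: x_{k+1} = x_k - alpha * grad(x_k)."""
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - alpha * g
    return x

# Minimize f(x, y) = (x - 1)^2 + 10 * (y + 2)^2; its gradient is written out by hand.
grad = lambda v: np.array([2 * (v[0] - 1), 20 * (v[1] + 2)])
print(gradient_descent(grad, x0=[0.0, 0.0]))   # ~[1, -2]
```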
Learn the optimal method for quadratic functions. Understand how conjugate directions accelerate convergence and applications to general optimization.
Study BFGS and other methods that approximate the Hessian. Learn how these methods achieve superlinear convergence without computing second derivatives.
Master the simplex method for problems with linear objective and constraints. Understand basic feasible solutions and optimality conditions.
Learn methods for optimization subject to equality and inequality constraints. Study penalty methods, barrier methods, and sequential quadratic programming.
Understand the Karush-Kuhn-Tucker conditions as necessary conditions for constrained optima. Learn their role in algorithm development and optimality verification.