⚡ MemoLearning Scientific Computing

Computational methods, algorithms, and numerical analysis for scientific problems


Curriculum Overview

  • 12 Total Units
  • ~150 Key Algorithms
  • 8 Core Units
  • 4 Advanced Units

Unit 1: Python for Scientific Computing

Master Python fundamentals and essential libraries for scientific computation.

  • Python basics and data structures
  • NumPy arrays and vectorization
  • SciPy scientific functions
  • Matplotlib visualization
  • Pandas data manipulation
  • Jupyter notebooks
  • Performance optimization
  • Memory management

Unit 2: Numerical Linear Algebra

Study efficient algorithms for matrix computations and linear systems.

  • LU decomposition and pivoting
  • QR factorization
  • Singular Value Decomposition (SVD)
  • Eigenvalue algorithms
  • Iterative methods (Jacobi, Gauss-Seidel)
  • Krylov subspace methods
  • Sparse matrix techniques
  • Condition numbers and stability

Unit 3: Root Finding and Optimization

Learn algorithms for solving nonlinear equations and optimization problems.

  • Bisection and Newton's method
  • Secant and Brent's method
  • Fixed-point iteration
  • Gradient descent methods
  • Newton's method for optimization
  • Quasi-Newton methods (BFGS)
  • Constrained optimization
  • Global optimization techniques

Unit 4: Interpolation and Approximation

Study methods for data fitting and function approximation.

  • Polynomial interpolation
  • Lagrange and Newton forms
  • Spline interpolation
  • Least squares fitting
  • Orthogonal polynomials
  • Chebyshev approximation
  • Radial basis functions
  • Error analysis and convergence

Unit 5: Numerical Integration

Learn quadrature rules and integration techniques for functions and data.

  • Newton-Cotes formulas
  • Gaussian quadrature
  • Adaptive integration
  • Monte Carlo integration
  • Multidimensional integration
  • Improper integrals
  • Oscillatory integrals
  • Error estimation and convergence

Unit 6: Ordinary Differential Equations

Study numerical methods for solving initial and boundary value problems.

  • Euler's method and error analysis
  • Runge-Kutta methods
  • Multistep methods
  • Adaptive step size control
  • Stiff equations and implicit methods
  • Boundary value problems
  • Shooting methods
  • Systems of ODEs

Unit 7: Partial Differential Equations

Learn finite difference and finite element methods for PDEs.

  • Classification of PDEs
  • Finite difference schemes
  • Stability and consistency
  • Heat equation (parabolic)
  • Wave equation (hyperbolic)
  • Laplace equation (elliptic)
  • Finite element method basics
  • Boundary conditions and domains

Unit 8: Monte Carlo Methods

Study stochastic simulation and sampling techniques.

  • Random number generation
  • Monte Carlo integration
  • Markov Chain Monte Carlo (MCMC)
  • Metropolis-Hastings algorithm
  • Gibbs sampling
  • Importance sampling
  • Variance reduction techniques
  • Applications in physics and finance

Unit 9: Fast Fourier Transform

Master frequency domain analysis and signal processing.

  • Discrete Fourier Transform (DFT)
  • FFT algorithms (Cooley-Tukey)
  • Spectral methods for PDEs
  • Convolution and correlation
  • Windowing and spectral leakage
  • Multidimensional FFT
  • Applications in image processing
  • Inverse problems

Unit 10: Machine Learning Algorithms

Learn computational aspects of machine learning and data science.

  • Linear regression and regularization
  • Classification algorithms
  • Neural networks and backpropagation
  • Clustering algorithms
  • Dimensionality reduction (PCA, t-SNE)
  • Cross-validation and model selection
  • Ensemble methods
  • Computational complexity

Unit 11: High-Performance Computing

Study parallel computing and optimization for large-scale problems.

  • Parallel algorithms and architectures
  • OpenMP and shared memory
  • MPI and distributed computing
  • GPU computing with CUDA
  • Vectorization and SIMD
  • Load balancing
  • Scalability analysis
  • Performance profiling and optimization

Unit 12: Scientific Software Development

Learn best practices for developing robust scientific software.

  • Software engineering principles
  • Version control with Git
  • Testing and debugging strategies
  • Documentation and reproducibility
  • Code optimization techniques
  • Continuous integration
  • Package management and deployment
  • Open source collaboration

Unit 1: Python for Scientific Computing

Master Python fundamentals and essential libraries for scientific computation and data analysis.

Python Basics and Data Structures

Master Python syntax, control flow, functions, and built-in data structures (lists, tuples, dictionaries, sets). Learn list comprehensions and generators for efficient code.

NumPy Arrays and Vectorization

Learn ndarray creation, indexing, slicing, and broadcasting. Master vectorized operations, array manipulation, and linear algebra functions for efficient numerical computing.
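
A minimal sketch of the vectorization and broadcasting ideas covered here (the array names and values are illustrative):

```python
import numpy as np

# Vectorized evaluation replaces an explicit Python loop.
x = np.linspace(0.0, 2.0 * np.pi, 1_000)   # 1-D grid
y = np.sin(x) ** 2 + 0.5 * np.cos(x)       # elementwise, no loop

# Broadcasting: a (3, 1) column combines with a (4,) row to give a (3, 4) result.
col = np.arange(3).reshape(3, 1)
row = np.arange(4)
table = col * 10 + row
print(table.shape)  # (3, 4)
```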

SciPy Scientific Functions

Explore SciPy modules: optimize, integrate, interpolate, linalg, stats, and special. Learn to solve scientific problems using high-level mathematical functions.
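
For example, a short sketch using two of the modules named above, scipy.integrate and scipy.optimize (the integrand and objective are arbitrary illustrations):

```python
import numpy as np
from scipy import integrate, optimize

# Numerical integration of exp(-x^2) over [0, 1].
value, abs_err = integrate.quad(lambda x: np.exp(-x**2), 0.0, 1.0)

# Scalar minimization of a simple smooth function.
res = optimize.minimize_scalar(lambda x: (x - 1.0)**2 + np.sin(3.0 * x))

print(value, abs_err)
print(res.x, res.fun)
```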

Matplotlib Visualization

Create publication-quality plots with matplotlib. Master 2D/3D plotting, subplots, styling, animations, and interactive visualizations for data exploration.

Pandas Data Manipulation

Learn DataFrame and Series operations, data cleaning, grouping, merging, and time series analysis. Master data I/O and exploratory data analysis techniques.
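
A small illustrative sketch of the groupby/aggregate workflow; the column names and values are made up for the example:

```python
import pandas as pd

# Tiny illustrative dataset.
df = pd.DataFrame({
    "experiment": ["A", "A", "B", "B", "B"],
    "temperature": [293.0, 298.0, 293.0, 298.0, 303.0],
    "yield_pct": [71.2, 74.5, 68.9, 70.3, 72.8],
})

# Group by experiment and summarize each group.
summary = df.groupby("experiment")["yield_pct"].agg(["mean", "std", "count"])
print(summary)
```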

Jupyter Notebooks

Master interactive computing with Jupyter. Learn markdown, magic commands, widgets, and best practices for reproducible research and data science workflows.

Performance Optimization

Learn profiling tools, Cython, Numba JIT compilation, and algorithm optimization. Understand bottlenecks and techniques for speeding up Python code.
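
As one illustration, a minimal Numba sketch (assumes the third-party numba package is installed; the function and data are invented for the example):

```python
import numpy as np
from numba import njit   # requires the numba package

@njit
def pairwise_dist_sum(x):
    """Sum of all pairwise distances; the double loop is compiled to machine code."""
    n = x.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += abs(x[i] - x[j])
    return total

x = np.random.default_rng(0).random(2_000)
print(pairwise_dist_sum(x))   # the first call includes compilation time
```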

Memory Management

Understand the Python memory model, garbage collection, and memory-efficient programming. Learn techniques for handling large datasets and for memory profiling.

Unit 2: Numerical Linear Algebra

Study efficient algorithms for matrix computations, linear systems, and eigenvalue problems.

LU Decomposition and Pivoting

Learn LU factorization for solving linear systems Ax = b. Understand partial and complete pivoting for numerical stability, and the O(n³) cost of the factorization.
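
A minimal sketch using SciPy's LU routines, with a small random system as the illustrative example:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = rng.random((4, 4))
b = rng.random(4)

# Factor once (with partial pivoting), then reuse for any number of right-hand sides.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

print(np.allclose(A @ x, b))   # True
```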

QR Factorization

Master QR decomposition using Householder reflections and Givens rotations. Apply to least squares problems and orthogonal transformations.
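
An illustrative sketch of solving a least squares problem via the thin QR factorization (the random data is only for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 3))          # overdetermined system
b = rng.random(20)

# Thin QR: A = Q R with Q having orthonormal columns and R upper triangular.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)  # least squares solution from R x = Q^T b

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```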

Singular Value Decomposition (SVD)

Understand SVD A = UΣV^T for matrix analysis, pseudoinverse computation, low-rank approximation, and principal component analysis applications.
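
A short sketch of SVD-based low-rank approximation with NumPy; the matrix and rank are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 6))

# SVD: A = U @ diag(s) @ Vt, singular values sorted in decreasing order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-2 approximation keeps only the two largest singular values.
k = 2
A2 = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.linalg.norm(A - A2, 2), s[k])  # spectral-norm error equals sigma_{k+1}
```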

Eigenvalue Algorithms

Learn power method, QR algorithm, and Jacobi method for eigenvalue computation. Understand convergence properties and applications to matrix diagonalization.
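
A minimal power-method sketch (the fixed iteration count and the 2x2 test matrix are simplifications for illustration):

```python
import numpy as np

def power_method(A, num_iters=200):
    """Dominant eigenpair of A by repeated multiplication and normalization."""
    x = np.ones(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x /= np.linalg.norm(x)
    lam = x @ A @ x          # Rayleigh quotient estimate of the eigenvalue
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = power_method(A)
print(lam, np.linalg.eigvalsh(A).max())   # both close to the dominant eigenvalue
```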

Iterative Methods

Study Jacobi, Gauss-Seidel, and SOR methods for large sparse systems. Analyze convergence conditions and computational efficiency compared to direct methods.
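
A compact Jacobi-iteration sketch for a small diagonally dominant system (the matrix is chosen so the method converges):

```python
import numpy as np

def jacobi(A, b, num_iters=100):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
    d = np.diag(A)                    # diagonal entries
    R = A - np.diag(d)                # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for _ in range(num_iters):
        x = (b - R @ x) / d
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])       # strictly diagonally dominant
b = np.array([1.0, 2.0, 3.0])
print(np.allclose(jacobi(A, b), np.linalg.solve(A, b)))   # True
```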

Krylov Subspace Methods

Master Conjugate Gradient (CG) for symmetric positive definite systems and GMRES for general systems. Understand preconditioning for faster convergence.
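
For instance, a short sketch applying SciPy's conjugate gradient solver to a 1-D Laplacian, a standard symmetric positive definite test matrix:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# 1-D Laplacian: sparse, symmetric positive definite.
n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)          # info == 0 means the iteration converged
print(info, np.linalg.norm(A @ x - b))
```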

Sparse Matrix Techniques

Learn sparse matrix storage formats (CSR, CSC, COO) and specialized algorithms. Study graph-based orderings and fill-in reduction for sparse factorizations.
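
A minimal sketch of CSR storage with scipy.sparse; the small matrix is only for illustration:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Dense matrix that is mostly zeros.
dense = np.array([[1.0, 0.0, 0.0, 2.0],
                  [0.0, 0.0, 3.0, 0.0],
                  [0.0, 4.0, 0.0, 0.0]])

A = csr_matrix(dense)                 # compressed sparse row storage
print(A.nnz, A.shape)                 # 4 stored nonzeros instead of 12 entries
print(A.data, A.indices, A.indptr)    # the three CSR arrays
print(A @ np.ones(4))                 # sparse matrix-vector product
```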

Condition Numbers and Stability

Understand condition numbers κ(A) = ||A|| ||A⁻¹||, error propagation, and numerical stability. Learn backward/forward error analysis for robust algorithms.
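
A short sketch illustrating how a large condition number amplifies data perturbations (the nearly singular matrix is contrived for the demonstration):

```python
import numpy as np

# A nearly singular 2x2 matrix: tiny changes in b cause large changes in x.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
kappa = np.linalg.cond(A)             # kappa(A) = ||A|| ||A^{-1}|| in the 2-norm

b = np.array([2.0, 2.0])
x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(A, b + np.array([0.0, 1e-4]))  # perturbed right-hand side

print(kappa)                          # about 4e4
print(np.linalg.norm(x2 - x1))        # the solution shifts by order 1
```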

Unit 3: Root Finding and Optimization

Learn algorithms for solving nonlinear equations and finding optimal solutions to mathematical problems.

Bisection and Newton's Method

Master bisection method for guaranteed convergence and Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n) for quadratic convergence. Analyze convergence rates and requirements.
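
A minimal Newton's method sketch; the stopping rule and the sqrt(2) example are illustrative choices:

```python
import numpy as np

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve x^2 - 2 = 0 starting from x0 = 1; converges to sqrt(2) in a few steps.
root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
print(root, np.sqrt(2.0))
```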

Secant and Brent's Method

Learn the secant method for derivative-free root finding, and Brent's method, which combines bisection with faster interpolation-based steps for robustness and efficiency.
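
For example, a one-call sketch using SciPy's implementation of Brent's method (the cubic and its bracket are arbitrary):

```python
from scipy.optimize import brentq

# Brent's method needs a bracket [a, b] with f(a) and f(b) of opposite sign.
f = lambda x: x**3 - 2.0 * x - 5.0
root = brentq(f, 2.0, 3.0)
print(root, f(root))    # root near 2.0946, residual near zero
```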

Fixed-Point Iteration

Study fixed-point problems x = g(x) and iteration schemes. Understand contraction mapping theorem and conditions for convergence of iterative methods.
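
A minimal fixed-point iteration sketch; g(x) = cos(x) is a standard contraction example:

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# g(x) = cos(x) is a contraction near its fixed point, so the iteration converges
# to the solution of x = cos(x) (about 0.739085).
print(fixed_point(math.cos, 1.0))
```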

Gradient Descent Methods

Learn steepest descent x_{k+1} = x_k - α∇f(x_k) and variants. Study step size selection, momentum methods, and adaptive learning rates for optimization.
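
A fixed-step gradient descent sketch on a simple quadratic (the step size and objective are illustrative; real problems need more careful step-size selection):

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.05, num_iters=500):
    """Fixed-step steepest descent: x_{k+1} = x_k - alpha * grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = x - alpha * grad(x)
    return x

# Minimize f(x, y) = (x - 3)^2 + 10 (y + 1)^2; its gradient is (2(x - 3), 20(y + 1)).
grad = lambda p: np.array([2.0 * (p[0] - 3.0), 20.0 * (p[1] + 1.0)])
print(gradient_descent(grad, [0.0, 0.0]))   # approaches the minimizer (3, -1)
```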

Newton's Method for Optimization

Apply Newton's method x_{k+1} = x_k - H⁻¹∇f(x_k) using Hessian matrix for second-order optimization. Understand quadratic convergence and computational costs.
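
A minimal sketch of the Newton step, using an invented strictly convex test function so the plain (unglobalized) iteration converges:

```python
import numpy as np

def newton_minimize(grad, hess, x0, num_iters=20):
    """Newton's method for optimization: x_{k+1} = x_k - H(x_k)^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        x = x - np.linalg.solve(hess(x), grad(x))   # solve H p = grad rather than inverting H
    return x

# Strictly convex test function f(x, y) = exp(x + y) + x^2 + y^2 (chosen for illustration).
grad = lambda p: np.exp(p[0] + p[1]) + 2.0 * p
hess = lambda p: np.exp(p[0] + p[1]) * np.ones((2, 2)) + 2.0 * np.eye(2)

x_star = newton_minimize(grad, hess, [0.0, 0.0])
print(x_star, grad(x_star))    # gradient is essentially zero at the minimizer
```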

Quasi-Newton Methods (BFGS)

Learn the BFGS and L-BFGS algorithms, which approximate the Hessian using gradient information. Study secant conditions and memory-efficient implementations.
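
For instance, a one-call BFGS sketch via scipy.optimize.minimize on the Rosenbrock function:

```python
from scipy.optimize import minimize

# Rosenbrock function; SciPy also ships it as scipy.optimize.rosen.
def f(p):
    x, y = p
    return (1.0 - x)**2 + 100.0 * (y - x**2)**2

# BFGS builds up a Hessian approximation from gradients (estimated numerically here).
res = minimize(f, x0=[-1.2, 1.0], method="BFGS")
print(res.x, res.nit)    # converges to (1, 1)
```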

Constrained Optimization

Study Lagrange multipliers, KKT conditions, and penalty/barrier methods. Learn sequential quadratic programming (SQP) for nonlinear constrained problems.
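
An illustrative sketch of a small equality-constrained problem solved with SciPy's SLSQP method (one of several available approaches):

```python
from scipy.optimize import minimize

# Minimize f(x, y) = (x - 1)^2 + (y - 2)^2 subject to x + y = 1 and x >= 0, y >= 0.
objective = lambda p: (p[0] - 1.0)**2 + (p[1] - 2.0)**2
constraints = [{"type": "eq", "fun": lambda p: p[0] + p[1] - 1.0}]
bounds = [(0.0, None), (0.0, None)]

res = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.x)   # constrained minimizer (0, 1)
```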

Global Optimization Techniques

Explore simulated annealing, genetic algorithms, and particle swarm optimization for finding global minima in non-convex optimization landscapes.
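
As one concrete option, a sketch using SciPy's differential evolution (an evolutionary global optimizer related to the genetic-algorithm family) on the multimodal Rastrigin function:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin function: many local minima, global minimum 0 at the origin.
def rastrigin(p):
    p = np.asarray(p)
    return 10.0 * p.size + np.sum(p**2 - 10.0 * np.cos(2.0 * np.pi * p))

bounds = [(-5.12, 5.12)] * 2
res = differential_evolution(rastrigin, bounds, seed=0)
print(res.x, res.fun)    # close to (0, 0) with value near 0
```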

Unit 4: Interpolation and Approximation

Study methods for data fitting, function approximation, and constructing smooth representations of discrete data.

Polynomial Interpolation

Learn polynomial interpolation theory, including existence and uniqueness theorems. Understand Runge's phenomenon and the oscillatory behavior of high-degree polynomials on equally spaced nodes.
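
A short sketch that exhibits Runge's phenomenon numerically by interpolating Runge's function 1/(1 + 25x²) at equally spaced nodes:

```python
import numpy as np

# Runge's function on [-1, 1]; equally spaced interpolation nodes behave badly.
runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)

for degree in (5, 10, 15):
    nodes = np.linspace(-1.0, 1.0, degree + 1)
    coeffs = np.polyfit(nodes, runge(nodes), degree)     # interpolating polynomial
    x_fine = np.linspace(-1.0, 1.0, 1001)
    max_err = np.max(np.abs(np.polyval(coeffs, x_fine) - runge(x_fine)))
    print(degree, max_err)   # the error grows with the degree near the endpoints
```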

Lagrange and Newton Forms

Master Lagrange interpolation formula and Newton's divided differences. Compare computational efficiency and numerical stability of different representations.
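
A compact divided-differences sketch (the helper functions and test data are invented for illustration):

```python
import numpy as np

def divided_differences(x, y):
    """Newton-form coefficients c[k] = f[x_0, ..., x_k] from the divided-difference table."""
    c = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:n - j])
    return c

def newton_eval(c, x_nodes, t):
    """Evaluate the Newton-form polynomial at t with Horner-like nesting."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - x_nodes[k]) + c[k]
    return result

x = np.array([0.0, 1.0, 2.0, 4.0])
y = x**3                                  # data from a cubic, so interpolation is exact
c = divided_differences(x, y)
print(newton_eval(c, x, 3.0))             # 27.0
```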

Spline Interpolation

Study piecewise polynomial interpolation with splines. Learn cubic splines, B-splines, and smoothing splines for flexible and stable curve fitting.
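
A minimal cubic-spline sketch with scipy.interpolate.CubicSpline; the sampled sine function is just a convenient test case:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Sample a smooth function at a handful of points, then interpolate between them.
x = np.linspace(0.0, 2.0 * np.pi, 9)
y = np.sin(x)

spline = CubicSpline(x, y)            # piecewise cubic with continuous second derivatives
x_fine = np.linspace(0.0, 2.0 * np.pi, 200)
max_err = np.max(np.abs(spline(x_fine) - np.sin(x_fine)))
print(max_err)                        # small interpolation error between the nodes
```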

Least Squares Fitting