Vectors and Matrices — Study Notes and Flashcards
📐 Overview
These notes summarize core concepts for vectors and matrices used in numerical computing and linear algebra. The focus is on how to create, index, and operate on vectors/matrices, plus key matrix properties and special matrix constructors. Mathematical symbols are shown using standard linear-algebra notation.
🧭 Creating Row and Column Vectors
A row vector is a 1×n array and a column vector is an n×1 array. For example, a row vector might be written as [1, 2, 3] and a column vector as its transpose [1, 2, 3]^T (conceptually). In code, the difference is orientation: a row has one row and many columns, while a column has one column and many rows.
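As a concrete sketch, here is how the two orientations look in NumPy (chosen for illustration; the notes themselves are tool-agnostic):

```python
import numpy as np

# A row vector: 1 row, 3 columns.
row = np.array([[1, 2, 3]])

# A column vector: 3 rows, 1 column.
col = np.array([[1], [2], [3]])

# Transposing converts one orientation into the other.
flipped = row.T
```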
🔢 Indexing and Slicing
Indexing accesses individual elements using integer indices (often 1-based in math, 0-based in many programming languages). Slicing selects subranges of elements: e.g., selecting a subvector or a submatrix. Use colon notation conceptually: selecting rows i through j and columns k through l gives the submatrix A(i:j, k:l). Negative or logical indices may be supported in specific languages for reverse selection or masking.
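A short NumPy example of 0-based indexing, slicing, negative indices, and logical masking (NumPy is one concrete choice; other languages differ in syntax and base index):

```python
import numpy as np

A = np.arange(1, 13).reshape(3, 4)  # 3x4 matrix with entries 1..12

elem = A[0, 1]       # single element: row 0, column 1 (0-based)
sub  = A[0:2, 1:3]   # submatrix: rows 0..1, columns 1..2
last = A[-1, :]      # negative index selects the last row
big  = A[A > 6]      # logical (boolean) mask returns a flat array
```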
🧱 Matrix Creation Methods
Common constructors:
- Bracket notation: direct listing, such as [1 2 3] for a row vector or [1 2; 3 4] for a 2×2 matrix (conceptually).
- linspace: creates linearly spaced vectors between two endpoints. Example: linspace(a, b, n) produces n values from a to b.
- logspace: creates logarithmically spaced values. Example: logspace(a, b, n) produces n points between 10^a and 10^b.
These methods are used to generate vectors for plotting, sampling, or building structured matrices.
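The constructors above have same-named NumPy counterparts, sketched here for illustration:

```python
import numpy as np

v   = np.array([1, 2, 3])       # bracket-style direct listing
lin = np.linspace(0.0, 1.0, 5)  # 5 evenly spaced values from 0 to 1
log = np.logspace(0, 3, 4)      # 4 points from 10^0 to 10^3
```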
➕➖✖ Matrix Operations
- Addition/Subtraction: Two matrices can be added or subtracted only if they have the same shape. Elementwise: (A+B)_{ij} = A_{ij} + B_{ij}.
- Multiplication: Matrix multiplication is defined when A is m×n and B is n×p, producing an m×p matrix C with C_{ij} = Σ_k A_{ik} B_{kj}. This is not elementwise by default.
- Elementwise multiplication (Hadamard) is a separate operation where elements are multiplied pairwise; it requires the same shape.
- Transpose: The transpose of A is written A^T and swaps rows and columns: (A^T)_{ij} = A_{ji}. Useful for converting row vectors to column vectors and vice versa.
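A minimal NumPy sketch distinguishing the operations above (in NumPy, `@` is true matrix multiplication while `*` is elementwise):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B    # elementwise sum: shapes must match
P = A @ B    # true matrix product (row-by-column sums)
H = A * B    # Hadamard (elementwise) product: NOT matrix multiplication
T = A.T      # transpose: T[i, j] == A[j, i]
```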
🔁 Inverse, Determinant, Rank, and Trace
- Inverse (A^{-1}): For a square matrix A, the inverse satisfies A A^{-1} = A^{-1} A = I when A is nonsingular. Not all matrices have inverses. Numerically, use stable methods (LU, QR) rather than naive inversion.
- Determinant (det(A)): A scalar value giving scaled volume and a singularity test. If det(A) = 0, A is singular (noninvertible).
- Rank (rank(A)): The dimension of the column (or row) space; the number of linearly independent columns. Rank reveals degrees of freedom and whether linear systems have unique solutions.
- Trace (tr(A)): Sum of diagonal elements, invariant under similarity transforms. For square A, tr(A) = Σ_i A_{ii}.
Practical note: For numerical work, compute rank and inverse with tolerances; small determinants close to zero can indicate ill-conditioning.
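These four properties can all be computed numerically; a small sketch using NumPy (assumed here for illustration) on a 2×2 example:

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])

Ainv = np.linalg.inv(A)          # exists because A is nonsingular
d = np.linalg.det(A)             # 4*6 - 7*2 = 10, so A is invertible
r = np.linalg.matrix_rank(A)     # uses an SVD-based tolerance internally
t = np.trace(A)                  # 4 + 6 = 10
```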
🧩 Eigenvalues and Eigenvectors
For a square matrix A, an eigenvalue λ and corresponding eigenvector v ≠ 0 satisfy A v = λ v. The operation that computes them is often called eig. Eigenvalues provide insight into matrix behavior (stability, modes, diagonalization). For defective matrices, you may not get a full set of linearly independent eigenvectors; consider generalized eigenproblems or Schur decomposition for robustness.
When working numerically, eigenvalues can be complex even for real matrices; sorting and normalization of eigenvectors are common post-processing steps.
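A small sketch of eig in NumPy (a symmetric example is chosen so the eigenvalues come out real):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric => real eigenvalues
vals, vecs = np.linalg.eig(A)            # eigenvectors are the COLUMNS of vecs

# The residual of A v - lambda v should be ~0 for each eigenpair.
residuals = [np.linalg.norm(A @ vecs[:, i] - vals[i] * vecs[:, i])
             for i in range(len(vals))]
```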
⚙️ Special Matrices
Common constructors useful for initialization and testing:
- zeros(n,m): n×m matrix of zeros.
- ones(n,m): n×m matrix of ones.
- eye(n): n×n identity matrix with ones on the diagonal.
- rand(n,m): n×m random matrix with entries uniformly distributed in [0, 1).
- randn(n,m): n×m random matrix with entries drawn from a standard normal distribution (mean 0, variance 1).
These are essential for building test cases, initializing algorithms, and setting up identity/scaling matrices.
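The same constructor names exist in NumPy (one detail differs: zeros and ones take the shape as a tuple):

```python
import numpy as np

Z = np.zeros((2, 3))         # 2x3 matrix of zeros
O = np.ones((2, 3))          # 2x3 matrix of ones
I = np.eye(3)                # 3x3 identity
U = np.random.rand(2, 4)     # entries uniform on [0, 1)
G = np.random.randn(2, 4)    # entries standard normal (mean 0, variance 1)
```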
🧾 Practical Tips & Numerical Considerations
- Prefer solving linear systems via decomposition (e.g., LU, QR) rather than computing A^{-1} explicitly. This improves accuracy and efficiency.
- Be aware of conditioning: a matrix with a large condition number amplifies numerical errors. Use cond(A) to assess.
- Use appropriate tolerances when testing singularity or rank; floating-point arithmetic can make exact comparisons unreliable.
- For large or sparse matrices, use specialized sparse data structures and algorithms to reduce memory and computation costs.
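A sketch of the "solve, don't invert" advice in NumPy, whose `solve` uses an LU factorization rather than forming the inverse:

```python
import numpy as np

# Solve A x = b: here 3x + y = 9 and x + 2y = 8, so x = 2, y = 3.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)    # preferred over np.linalg.inv(A) @ b
kappa = np.linalg.cond(A)    # condition number: large values signal trouble
```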
✅ Summary
Understand how to construct vectors/matrices, index and slice, perform core operations, and compute key properties like inverse, determinant, rank, trace, and eigen-decomposition. Use special constructors like zeros, ones, eye, rand, and randn for practical tasks and testing.