
Vectors and Matrices — Study Notes



📐 Overview

These notes summarize core concepts for vectors and matrices used in numerical computing and linear algebra. The focus is on how to create, index, and operate on vectors/matrices, plus key matrix properties and special matrix constructors. Mathematical symbols are written in plain notation (e.g., A^T for transpose).

🧭 Creating Row and Column Vectors

A row vector is a 1×n array and a column vector is an n×1 array. For example, a row vector might be written as [1 2 3] and the corresponding column vector as [1; 2; 3] (one entry per row). In code, the difference is orientation: a row vector has one row and many columns, while a column vector has one column and many rows.
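A minimal sketch of this distinction using NumPy (an assumption; the notes don't name a specific language):

```python
import numpy as np

# A row vector is 1xN; a column vector is Nx1.
row = np.array([[1, 2, 3]])        # one row, three columns -> shape (1, 3)
col = np.array([[1], [2], [3]])    # three rows, one column -> shape (3, 1)

# Transposing converts one orientation into the other.
assert np.array_equal(row.T, col)
```

Note that a plain `np.array([1, 2, 3])` is a 1-D array with no row/column orientation at all; the explicit double brackets force a 2-D shape.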

🔢 Indexing and Slicing

Indexing accesses individual elements using integer indices (often 1-based in math, 0-based in many programming languages). Slicing selects subranges of elements, e.g., a subvector or a submatrix. Using colon notation, selecting rows i to j and columns k to l gives the submatrix A(i:j, k:l). Negative or logical indices may be supported in specific languages for reverse selection or masking.
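For illustration, a NumPy sketch (0-based, end-exclusive slicing, which differs from the 1-based inclusive math convention above):

```python
import numpy as np

A = np.arange(1, 13).reshape(3, 4)   # 3x4 matrix with entries 1..12

sub = A[0:2, 1:3]     # rows 0-1, columns 1-2: a 2x2 submatrix
last_col = A[:, -1]   # negative index selects the last column
big = A[A > 6]        # logical (boolean) mask picks elements > 6
```

The mask expression returns a flat array of the selected elements, not a matrix; that is a common surprise when coming from 1-based environments.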

🧱 Matrix Creation Methods

Common constructors:

  • Bracket notation: direct listing such as [1 2; 3 4], which builds the 2×2 matrix with rows (1, 2) and (3, 4).
  • linspace: creates linearly spaced vectors between two endpoints. Example: linspace(a, b, n) produces n values from a to b.
  • logspace: creates logarithmically spaced values. Example: logspace(a, b, n) produces n points between 10^a and 10^b.

These methods are used to generate vectors for plotting, sampling, or building structured matrices.
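The two spacing constructors can be sketched in NumPy, which uses the same names and argument order described above:

```python
import numpy as np

x = np.linspace(0, 1, 5)   # 5 evenly spaced values from 0 to 1
y = np.logspace(0, 3, 4)   # 4 values from 10^0 to 10^3
```

Here `x` is [0, 0.25, 0.5, 0.75, 1] and `y` is [1, 10, 100, 1000], the kind of grid typically fed to a plotting or sampling routine.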

➕➖✖ Matrix Operations

  • Addition/Subtraction: Two matrices can be added or subtracted only if they have the same shape. Elementwise: (A+B)_{ij} = A_{ij} + B_{ij}.
  • Multiplication: The matrix product C = AB is defined when A is m×p and B is p×n, producing an m×n matrix with C_{ij} = Σ_k A_{ik} B_{kj}. This is not elementwise by default.
  • Elementwise multiplication (Hadamard product) is a separate operation in which corresponding elements are multiplied pairwise; it also requires the same shape.
  • Transpose: The transpose of A is written A^T and swaps rows and columns: (A^T)_{ij} = A_{ji}. Useful for converting row vectors to column vectors and vice versa.
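The four operations above in one NumPy sketch (a hypothetical example; note that `@` is the matrix product while `*` is elementwise):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B   # elementwise sum, requires matching shapes
P = A @ B   # matrix product: P[i, j] = sum over k of A[i, k] * B[k, j]
H = A * B   # Hadamard (elementwise) product, also requires matching shapes
T = A.T     # transpose: T[i, j] = A[j, i]
```

For these inputs, P is [[19, 22], [43, 50]] while H is [[5, 12], [21, 32]], which makes the "not elementwise by default" point concrete.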

🔁 Inverse, Determinant, Rank, and Trace

  • Inverse (A^{-1}): For a square matrix A, the inverse satisfies A A^{-1} = I when A is nonsingular. Not all matrices have inverses. Numerically, use stable methods (LU, QR) rather than naive inversion.
  • Determinant (det(A)): A scalar giving the scaled volume of the transformation and a singularity test. If det(A) = 0, A is singular (noninvertible).
  • Rank (rank(A)): The dimension of the column (or row) space; the number of linearly independent columns. Rank reveals degrees of freedom and whether linear systems have unique solutions.
  • Trace (trace(A)): The sum of the diagonal elements, invariant under similarity transforms. For square A, trace(A) = Σ_i A_{ii}.

Practical note: For numerical work, compute rank and inverse with tolerances; small determinants close to zero can indicate ill-conditioning.
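A sketch of all four properties on a small nonsingular matrix, using NumPy's `linalg` module (note that `matrix_rank` already applies an SVD-based tolerance internally, in line with the practical note above):

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])

inv_A = np.linalg.inv(A)        # exists because det(A) != 0
d = np.linalg.det(A)            # 4*6 - 7*2 = 10
r = np.linalg.matrix_rank(A)    # 2: both columns are independent
t = np.trace(A)                 # 4 + 6 = 10

# A times its inverse recovers the identity, up to floating-point error.
assert np.allclose(A @ inv_A, np.eye(2))
```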

🧩 Eigenvalues and Eigenvectors

For a square matrix A, an eigenvalue λ and corresponding eigenvector v satisfy Av = λv. The operation that computes them is often called eig. Eigenvalues provide insight into matrix behavior (stability, modes, diagonalization). For defective matrices, you may not get a full set of linearly independent eigenvectors; consider generalized eigenproblems or Schur decomposition for robustness.

When working numerically, eigenvalues can be complex even for real matrices; sorting and normalization of eigenvectors are common post-processing steps.
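A small NumPy sketch of `eig`, checking the defining relation Av = λv for each pair (a symmetric example is chosen so the eigenvalues stay real; a rotation matrix such as [[0, -1], [1, 0]] would instead yield the complex pair ±i mentioned above):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
w, V = np.linalg.eig(A)   # eigenvalues in w, eigenvectors as columns of V

# Verify A v = lambda v for every eigenpair.
for i in range(len(w)):
    assert np.allclose(A @ V[:, i], w[i] * V[:, i])
```

NumPy returns eigenvectors normalized to unit length but in no guaranteed order, which is why the sorting step mentioned above is common post-processing.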

⚙️ Special Matrices

Common constructors useful for initialization and testing:

  • zeros(n, m): n×m matrix of zeros.
  • ones(n, m): n×m matrix of ones.
  • eye(n): n×n identity matrix I with ones on the diagonal.
  • rand(n, m): random matrix with entries uniformly distributed on (0, 1).
  • randn(n, m): random matrix with entries drawn from a standard normal distribution (mean 0, variance 1).

These are essential for building test cases, initializing algorithms, and setting up identity/scaling matrices.
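The same constructors exist in NumPy under nearly identical names (one difference worth noting: NumPy's shape argument is a tuple, and its uniform sampler draws from the half-open interval [0, 1)):

```python
import numpy as np

Z = np.zeros((2, 3))        # 2x3 matrix of zeros
O = np.ones((2, 3))         # 2x3 matrix of ones
I = np.eye(3)               # 3x3 identity matrix
U = np.random.rand(2, 2)    # entries uniform on [0, 1)
N = np.random.randn(2, 2)   # entries standard normal (mean 0, variance 1)
```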

🧾 Practical Tips & Numerical Considerations

  • Prefer solving linear systems Ax = b via decomposition (e.g., LU, QR) rather than computing A^{-1} b explicitly. This improves accuracy and efficiency.
  • Be aware of conditioning: a matrix with a large condition number amplifies numerical errors. Use cond(A) to assess it.
  • Use appropriate tolerances when testing singularity or rank; floating-point arithmetic can make exact comparisons unreliable.
  • For large or sparse matrices, use specialized sparse data structures and algorithms to reduce memory and computation costs.
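The first two tips can be sketched with NumPy (`np.linalg.solve` uses an LU factorization under the hood, which is exactly the decomposition-over-inversion advice above):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)    # preferred over np.linalg.inv(A) @ b
kappa = np.linalg.cond(A)    # 2-norm condition number; large values warn of error amplification

assert np.allclose(A @ x, b)
```

For this well-conditioned system the solution is x = [2, 3]; when `kappa` is large, expect to lose roughly log10(kappa) digits of accuracy in the solve.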

✅ Summary

Understand how to construct vectors/matrices, index and slice, perform core operations, and compute key properties like inverse, determinant, rank, trace, and eigen-decomposition. Use special constructors like zeros, ones, eye, rand, and randn for practical tasks and testing.
