You will have noticed that throughout the module there are two distinct points of view that relate to the same mathematical ideas.

(1) Matrices

Writing a system of m simultaneous linear equations in n variables x1,..,xn results in a matrix equation of the form

    A x = 0   (homogeneous form)
    A x = b   (inhomogeneous form)

for an m x n matrix A with scalar entries, an n-vector x = (x1,..,xn)^T of variables and an m-vector b = (b1,..,bm)^T of scalars.

With that in the back of our minds, we consider all sorts of ways of understanding matrices: kernel, image, rank (and row- and column-rank), row reduction, normal forms, inverses, similarity, orthogonality (with respect to a Euclidean form), orthogonal similarity, determinant, trace, eigenvalues, eigenvectors, diagonalisability, adjoint and so on (including SVD at the end). The row reduced echelon form is perfect for solving linear equations, and as a spin-off handles rank and inverses nicely too, though in any given calculation you may find short cuts.

(2) Vector spaces and linear maps between them

A vector space V (over a field K, usually for us the real numbers) is (loosely speaking) a set that is closed under addition and scalar multiplication. (That's not an exam-grade definition; you will need to know the formal definition. Note that there is nothing in the definition that says the elements of V have to look like vectors - they can be anything, and carefully specified sets of functions or polynomials are popular examples.)

A linear map is a map between vector spaces that respects the addition and scalar multiplication. (Ditto. For example, differentiation of functions is a linear map [if you specify domain and codomain carefully].)

The key idea of a basis of a vector space ties everything together. It is used to define dimension. It is used to represent elements of a [finite-dimensional] vector space as (column) vectors of scalars. Given a linear map f: V -> W, whenever we choose a basis in both V and W (so that we may represent their elements as vectors of scalars), we may represent the map f by a matrix A, so that f(v) = Av (that is, the map is simply multiplication by the matrix A). Then (we prove that) the definitions of kernel, image, rank and so on for the map f correspond to the same notions for the matrix A.

The key difficulty (or strength) is that if we change basis in V or W, then the matrix A that represents f changes. We understand this in formulas like B = QAP^{-1}, where P and Q are the change of basis matrices in V and W respectively (and the matrices A and B are called 'equivalent'). We understand how Q corresponds to row operations on A and P^{-1} corresponds to column operations on A. (This is why we may do row operations when solving linear equations: they don't change the basis in V, so the kernel, that is, the set of solutions, is unchanged.)

The more subtle notion of similarity arises when we have a linear map f: V -> V from a vector space to itself, where we use the same basis in domain and codomain when writing f as a matrix A. We see formulas like B = PAP^{-1}, where P is the change of basis matrix in V (and A and B are called 'similar'). [If V has a Euclidean form, and if P is orthogonal - that is, if we only permit orthogonal changes of basis - then A and B are called 'orthogonally similar'.] Allowing only one change of basis severely reduces our freedom to change the matrix A.
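If you like to check such statements numerically, here is a minimal Python/numpy sketch (the particular matrices are made up purely for illustration, not taken from the notes; numpy is just being used as a calculator): equivalence B = QAP^{-1} preserves only the rank in general, while similarity B = PAP^{-1} also preserves the eigenvalues, trace and determinant.

import numpy as np

# Illustration only: A, P, Q chosen arbitrarily, with P and Q invertible.
A = np.array([[2., 1.],
              [0., 3.]])
P = np.array([[1., 1.],      # change of basis in V
              [0., 1.]])
Q = np.array([[0., 1.],      # change of basis in W
              [1., 1.]])

# Equivalence: B = Q A P^{-1}.  Only the rank is preserved in general.
B_equiv = Q @ A @ np.linalg.inv(P)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B_equiv))          # 2 2

# Similarity: B = P A P^{-1}.  Eigenvalues, trace and determinant survive too.
B_sim = P @ A @ np.linalg.inv(P)
print(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B_sim)))  # [2. 3.] both times
print(np.trace(A), np.trace(B_sim))                                      # 5.0 5.0
print(np.linalg.det(A), np.linalg.det(B_sim))                            # 6.0 6.0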
So while we always have the very simple Smith Normal Form when we work up to equivalence of matrices, when working up to similarity of (square) matrices it is a much more subtle question whether they can even be written in diagonal form: we understand an answer using eigenvalues and eigenvectors.

To help navigate the module, I attempt an audit of the main definitions, theorems and calculations of the module below. You may or may not find this useful: it's a thing I found useful to do when revising myself, but you may have better methods. The fact that I've done it rather than you makes it pretty pointless, but perhaps it's interesting to compare with what you do if you try something similar. I can't promise to have included everything: this is just for ideas of how to see the module, and not an exhaustive list of what we did (and certainly not a reflection of what's on the exam, which I cannot remember).

======================================================================
Gavin's attempt at listing the definitions
==========================================

You should certainly know all of these. It's not a matter of memorising them, but of using them so frequently that they are self-evident to you. Even then, it is worth occasionally trying to write them out formally, and then comparing what you've written with the lecture notes.

1.3.1 vector space
1.3.3 subspace
1.3.5 W1+W2 (the span of two subspaces)
1.4.1 linear independence
1.4.4 spanning
1.4.5 basis
1.4.8 dimension
2.1.1 linear map / linear operator
2.2.1 image, kernel
2.2.4 complementary subspaces
2.2.6 rank, nullity
2.2.9 non-singular map, singular map
3.1.1 and 4.1.1 change of basis matrix
3.2.1 euclidean form
3.2.2 euclidean space
3.2.4 orthonormal (vector sequence / basis)
4.2.1 orthogonal linear operator
4.2.4 orthogonal matrix
5.1.2 Smith normal form (= row-column echelon form)
5.1.4 row space, column space (of a matrix), row rank, column rank
5.2.1 row echelon form
5.2.2 row reduced echelon form
6.1.1 sign of a permutation, even and odd permutations
6.1.2 determinant of a (square) matrix
6.1.3 characteristic polynomial of a (square) matrix
8.1.1 eigenvalue, eigenvector (of a linear operator) and eigenspace
8.1.2 eigenvalue, eigenvector (of a square matrix) and eigenspace
9.1.2 adjoint linear map, self-adjoint linear map

======================================================================
Gavin's attempt at listing the major theorems
=============================================

You should certainly know all of these too, and ideally have a clear idea of how you prove them, or why the proof works in general. Again, it's not a matter of memorising proofs, but of understanding why they work, so that you could build your own proof if you had to.

LINEAR INDEPENDENCE, SPANNING, BASES AND DIMENSION
1.4.7 Bases have the same number of elements (so dimension is well defined).
1.4.11 Any spanning set has a subset that is a basis.
1.4.14 Any linearly independent set can be extended to a basis.
2.2.3 Dimension formula (for two subspaces of a vector space).
2.2.7 Rank-Nullity formula (for a linear map).

EUCLIDEAN SPACES AND ORTHONORMAL BASES
3.2.6 Gram-Schmidt: turns arbitrary bases into orthonormal ones (or equivalently turns nonsingular square matrices into orthogonal matrices).

LINEAR MAPS AND MATRICES
3.4.1 Linear maps U -> V are in bijection with m x n matrices (n = dim U, m = dim V). We further check that addition, scalar multiplication, composition and inverses of linear maps correspond to the same notions for matrices.
4.1.2 Two matrices A, B represent the same linear map with respect to different bases iff they are equivalent: that is, B = QAP^{-1} for invertible P, Q.

NORMAL FORMS
5.1.1 Smith normal form. (A discussion of the rank of a matrix follows.)
5.1.5 Row and column operations do not change the row/column rank of a matrix.
5.1.7 Equivalent matrices / ..
5.2.3 Row reduced echelon form (via elementary transformations).

DETERMINANTS
6.2.1 Elementary row and column operations and determinants.
6.3.1 Expansion formulas for determinants.
6.3.2 Adjoint matrix and determinant (and formula for inverse).
7.1.1 det(A^T) = det(A).
7.1.3 det(A) = 0 iff A is singular.
7.1.4 det(AB) = det(A)det(B).

EIGENVALUES, EIGENVECTORS AND DIAGONALISATION
8.1.3 Eigenvalues are the roots of the characteristic polynomial.
8.1.4 Diagonalisable iff there is a basis of eigenvectors.
8.1.6 Eigenvectors of distinct eigenvalues are linearly independent (and it follows that having n distinct eigenvalues implies diagonalisable).
8.2.1 (Mini Cayley-Hamilton) A 2x2 matrix A satisfies its own characteristic polynomial (which allows us to sidestep some thinking in the 2x2 case).
9.1.6 A self-adjoint operator has an orthonormal basis of eigenvectors (or equivalently a symmetric matrix is orthogonally similar to a diagonal matrix).
9.1.7 Eigenvectors for distinct eigenvalues of a symmetric matrix are orthogonal.
9.2.1 (Singular Value Decomposition - non examinable) ...

======================================================================
Gavin's attempt at listing the main calculations
================================================

Here are various computations we've done, in no particular order:

Solving simultaneous linear equations naively.
Presenting simultaneous linear equations as a matrix equation and then using row reduction (of the augmented matrix) to simplify it and expose the solution trivially.
Sifting to find a linearly independent set of vectors.
Extending a linearly independent set to a basis (also uses sifting).
Arithmetic of matrices and vectors (multiplying, adding, etc.).
Change of basis matrix.
Expressing a vector with respect to a given basis (as a column vector).
Computing the inverse of a matrix (or discovering that it's not invertible) using row reduction (or the naive formula with adjoints and determinants).
Smith normal form - and its expression both by row and column operations and by multiplication by elementary matrices on the left and right.
Row reduced echelon form - and its expression both by row operations and by multiplication on the left by elementary matrices.
Calculating the kernel of a matrix (= solving linear equations).
Finding a basis for the image of a matrix (= sifting the columns).
Calculating the rank of a matrix (= dimension of the image = number of nontrivial rows in the row reduction = etc.).
Writing down simple rotation and reflection matrices (2x2 case).
Calculating the length of vectors and angles between them.
Working with the bilinearity relations of Euclidean forms.
Recognising orthogonal matrices.
Gram-Schmidt and orthonormal bases (see the first sketch after this list).
Using the characteristic polynomial to compute eigenvalues and then computing eigenvectors (or bases of the eigenspaces = ker(A - lambda I)).
Computing determinants, either naively, or by eye, or using reductions and block forms.
Diagonalising a matrix (or finding that it is not similar to a diagonal matrix); see the second sketch after this list.
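Sketch 1: Gram-Schmidt on a made-up basis of R^3. This is an illustration only - the vectors and function names are my own, not from the notes - but it shows the calculation: subtract off the components along the earlier orthonormal vectors, then normalise, and check at the end that the resulting matrix is orthogonal.

import numpy as np

def gram_schmidt(vectors):
    # turn a list of linearly independent vectors into an orthonormal list
    orthonormal = []
    for v in vectors:
        w = v.astype(float)
        for e in orthonormal:
            w -= np.dot(v, e) * e      # remove the component of v along e
        orthonormal.append(w / np.linalg.norm(w))
    return orthonormal

basis = [np.array([1., 1., 0.]),
         np.array([1., 0., 1.]),
         np.array([0., 1., 1.])]

E = np.column_stack(gram_schmidt(basis))
print(np.allclose(E.T @ E, np.eye(3)))   # True: the columns are orthonormal,
                                         # i.e. E is an orthogonal matrix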
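Sketch 2: diagonalising a made-up 2x2 matrix. In the module you would find the eigenvalues by hand as the roots of the characteristic polynomial and the eigenvectors by solving (A - lambda I)x = 0; numpy is used here only to confirm such a by-hand answer.

import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

# characteristic polynomial: lambda^2 - 7 lambda + 10 = (lambda - 5)(lambda - 2)
eigenvalues, P = np.linalg.eig(A)    # columns of P are eigenvectors
print(eigenvalues)                   # 5. and 2. (the roots above), in some order

D = np.diag(eigenvalues)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}

# A matrix that is NOT diagonalisable: its only eigenvalue is 2, but
# ker(J - 2I) is one-dimensional, so there is no basis of eigenvectors.
J = np.array([[2., 1.],
              [0., 2.]])
print(np.linalg.matrix_rank(J - 2 * np.eye(2)))   # 1, so the nullity is 1 < 2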