All posts by Krystal Guo

Fun with reflections and determinants

in collaboration with guest contributor Gabriel Coutinho

My frequent collaborator Gabriel Coutinho and I have very different problem solving styles.  In his own words, “we could write a book together; your part will have fancy group theory and nice drawings and mine will have sloppy proofs of eigenvalue bounds.” We challenged each other to this problem, which is B3 from the 2019 Putnam Math Competition. There was struggle and despair, but the next morning, we came up with two very different solutions.

The Problem:

Let \(Q\) be an \(n\times n\) real orthogonal matrix and let \(u\) be an \(n\)-dimensional real (column) unit vector. Let \(P = I - 2u u^T\), where \(I\) is the \(n\times n\) identity matrix. Show that if \(1\) is not an eigenvalue of \(Q\), then \(1\) is an eigenvalue of \(PQ\).

Note that \(P\) is a reflection; it is an involution and is also called a Householder transformation.

Krystal’s solution: what exactly does \(P\) do? 

We can easily see that \(P\) sends \(u\) to \(-u\) and fixes everything orthogonal to \(u\). But we can actually prescribe the behaviour of a reflection.

Lemma. If \(x\) and \(y\) are distinct real unit vectors and $$P = I - 2\frac{(x-y)(x-y)^T}{\langle x-y, x-y \rangle},$$ then \(Px = y\).

Proof. We observe the following \[ \begin{split} Px &= x - 2\frac{(x-y)(x-y)^Tx}{\langle x-y, x-y \rangle} \\ &= x - 2\frac{(x-y)(x^Tx-y^Tx)}{x^Tx- x^Ty -y^Tx+ y^Ty} \\ &= x - 2\frac{(x-y)(1-y^Tx)}{2(1 - y^Tx)} \\ &= x - (x-y) \\ &= y, \end{split} \] as claimed. ∎

Corollary. Suppose \(u\) is a unit vector such that \( u = x - y \) where \( |x| = |y| =\alpha \). If \(P = I - 2u u^T\), then \(P \left( \frac{x}{\alpha} \right) = \frac{y}{\alpha} \).

Proof. We observe that \(\frac{x}{\alpha}\) and \(\frac{y}{\alpha}\) are unit vectors. The corollary now follows from the Lemma. ∎

To solve the problem, we observe that \(Q\) and \(Q^T\) have the same eigenvalues and so, if \(1\) is not an eigenvalue of \(Q\), then \(I-Q^T\) is invertible and we may set \(M = (I-Q^T)^{-1}\). Let \(x = Mu\); note that \(x \neq 0\) since \(u\) is a unit vector. We see that \[u = (I-Q^T)x = x - Q^Tx.\] Since \(Q^T\) is an orthogonal matrix and thus preserves lengths, we have that \(|x| = |Q^Tx|\). Let \(x' = \frac{x}{|x|}\). We may now apply the corollary (with \(y = Q^Tx\)) and conclude that \(P\) takes \(x'\) to \(Q^Tx'\).

Now we see that \[PQ (Q^Tx') = Px' = Q^Tx'\] and so \(Q^Tx'\) is an eigenvector for \(PQ\) with eigenvalue \(1\).
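For the skeptical (or the curious), here is a quick numerical sanity check of this construction in Python with numpy; the matrix size, the rotation angles used to build a \(Q\) with no eigenvalue \(1\), and the random seed are arbitrary choices of mine, not part of the problem.

```python
# Numerical sanity check (not a proof) of the construction above.
import numpy as np

rng = np.random.default_rng(0)

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Build a 4x4 orthogonal Q with no eigenvalue 1: two plane rotations with nonzero
# angles, conjugated by a random orthogonal change of basis.
B = np.zeros((4, 4))
B[:2, :2] = rotation(0.7)
B[2:, 2:] = rotation(2.1)
R, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Q = R @ B @ R.T

u = rng.standard_normal(4)
u /= np.linalg.norm(u)                     # a unit vector
P = np.eye(4) - 2.0 * np.outer(u, u)       # the reflection I - 2uu^T

x = np.linalg.solve(np.eye(4) - Q.T, u)    # x = (I - Q^T)^{-1} u, so u = x - Q^T x
xp = x / np.linalg.norm(x)                 # x' in the notation above

# The corollary says P sends x' to Q^T x' ...
assert np.allclose(P @ xp, Q.T @ xp)
# ... and therefore Q^T x' is fixed by PQ, i.e. it is a 1-eigenvector of PQ.
assert np.allclose(P @ Q @ (Q.T @ xp), Q.T @ xp)
print("PQ fixes Q^T x', as claimed")
```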

Gabriel’s solution: let’s go with some determinants  

Let \(R\) be any real orthogonal matrix. Its determinant is \(\pm 1\) and all of its eigenvalues have norm 1. The matrix \(P\) is a reflection about an \((n-1)\)-dimensional subspace; it has one eigenvalue equal to \(-1\), all of its other eigenvalues are \(1\), and so \(\det P = -1\). The idea is to show that any \(n\times n\) orthogonal matrix with determinant \((-1)^{n-1}\) has \(1\) as an eigenvalue.

\(n\) is odd. Suppose \(R\) is an \(n\times n\) orthogonal matrix and \(\det(R) = 1\). Then \[ \det(I-R) = \det(R^T) \det(I-R) = \det(R^T - I) = (-1)^n \det(I-R). \] Since \(n\) is odd, we get that \(\det(I-R) = 0\), and \(1\) is an eigenvalue of \(R\). Thus, when \(n\) is odd and \(1\) is not an eigenvalue of \(Q\), it must be that \(\det Q = -1\). As \(\det P = -1\), we have \(\det (PQ) = 1\), and since \(PQ\) is also orthogonal, the claim above gives that \(1\) is an eigenvalue of \(PQ\).

\(n\) is even. Suppose \(R\) is an \(n\times n\) orthogonal matrix and \(\det(R) = -1\). Each non-real eigenvalue comes paired with its complex conjugate, so there are an even number of non-real eigenvalues and their product is \(1\) (each conjugate pair multiplies to \(1\), since all eigenvalues have norm \(1\)). Then there must be an odd number of eigenvalues equal to \(-1\). But then there is also an eigenvalue equal to \(1\), as there are an even number of eigenvalues in total. Thus, when \(n\) is even and \(1\) is not an eigenvalue of \(Q\), it must be that \(\det Q = 1\). As \(\det P = -1\), we have \(\det (PQ) = -1\) and the result follows.

Addendum

Gabriel later realized that both parts of his proof could be extended to the other parity and leaves it as an exercise to the avid reader.

Eigen-magicks

A few weeks ago, a few physicists (Peter Denton, Stephen Parke, and Xining Zhang) and one notable mathematician (Terence Tao) posted a short 3-page note to the arXiv, containing a “new” identity relating eigenvectors to eigenvalues. Many people rushed to explain that it was not new; it has existed at least since 1966 and has been used many times by people who took no credit for it. The authors have since amended their paper and converted it into a survey containing all seven proofs of this identity, but not before it was picked up by Quanta. Now, in response to collaborators asking if I knew it and co-authors inquiring if I’ve thought about using it, I’m writing about how we usually arrive at this identity in algebraic graph theory.

the adjacency matrix

Say we have a graph \(G\) on \(n\) vertices. The adjacency matrix \(A\) of \(G\) is an \(n\times n\) matrix with rows and columns indexed by the vertices of \(G\); \(A_{u,v} = 1\) if \(u\) is adjacent to \(v\) and \(A_{u,v} = 0\) otherwise. It may not be immediately apparent, but this matrix has the following property: \[ A^k_{u,v} = \text{the number of walks from } u \text{ to } v \text{ of length }k. \] We will write \(\phi(G,t)\) for the characteristic polynomial of \(A\).
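Here is a small numpy check of this walk-counting property; the choice of the 4-cycle, the pair of vertices, and the brute-force helper count_walks are just for illustration.

```python
# Check entries of A^k against a brute-force enumeration of walks in C_4.
import numpy as np
from itertools import product

# Adjacency matrix of the cycle C_4 with vertices 0, 1, 2, 3.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])

def count_walks(A, u, v, k):
    """Brute force: count sequences u = w_0, w_1, ..., w_k = v where each step is an edge."""
    n = A.shape[0]
    total = 0
    for middle in product(range(n), repeat=k - 1):
        walk = (u, *middle, v)
        if all(A[walk[i], walk[i + 1]] for i in range(k)):
            total += 1
    return total

for k in range(1, 6):
    Ak = np.linalg.matrix_power(A, k)
    assert Ak[0, 2] == count_walks(A, 0, 2, k)
    print(f"walks of length {k} from 0 to 2: {Ak[0, 2]}")
```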

the walk-generating function

Now we may consider the walk-generating function of \(u\) and \(v\) in \(G\), denoted \(W_{u,v}(G,t)\). This is a formal power series in variable \(t\), where the coefficient of \(t^k\) is the number of walks from \(u\) to \(v\) of length \(k\). But, oh wait, we already know how to count those! We have that \[ W_{u,v}(G,t) = \sum_{k=0}^{\infty} A^k_{u,v} t^k. \] If we look carefully at each term being summed, we are just extracting the \(u,v\) entry of some matrix and then multiplying by \(t^k\), so we can just do the extraction later and get: \[ W_{u,v}(G,t) = \sum_{k=0}^{\infty} (tA)^k_{u,v}. \] In fact, we could just take the whole sum first and then extract the \(u,v\) entry and so \[ W_{u,v}(G,t) = \left(\sum_{k=0}^{\infty} (tA)^k\right)_{u,v}. \]

And now, we need to recall from calculus the following equality (keeping in mind that, since we work with these series as formal power series, we are not troubled by things like convergence): \[ 1 +x + x^2 + x^3 + \cdots = \frac{1}{1-x}. \] We can apply it to our expression for \(W_{u,v}(G,t)\) where \(x = tA\) and get \[ W_{u,v}(G,t) = (I-tA)^{-1}_{u,v}. \] While this looks a bit better, I’d much rather work with \(tI-A\), whose determinant is the characteristic polynomial of \(A\). So, we will do a change of variables as follows: \[ t^{-1}W_{u,v}(G,t^{-1}) = (tI-A)^{-1}_{u,v}. \] From this point forward, we will work with the expression \((tI-A)^{-1}_{u,v}\), with the knowledge that it gives the walk-generating function.
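If you want to see this in action, here is a sympy spot-check of \(W_{u,v}(G,t) = (I-tA)^{-1}_{u,v}\) on the path on three vertices; the graph, the vertices \(u,v\), and the number of coefficients compared are arbitrary choices.

```python
# Compare series coefficients of the resolvent entry against walk counts A^k_{u,v}.
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]])          # path on vertices 0 - 1 - 2
I = sp.eye(3)

u, v = 0, 2
resolvent_entry = ((I - t * A).inv())[u, v]

# The first few coefficients of the power series should be the walk counts.
series = sp.series(resolvent_entry, t, 0, 8).removeO()
for k in range(8):
    coeff = series.coeff(t, k)
    assert coeff == (A**k)[u, v]
    print(k, coeff)
```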

Cramer’s rule

We will use Cramer’s rule to simplify the diagonal entries of \((tI-A)^{-1}\); the off-diagonal entries can also be done (see Godsil’s Algebraic Combinatorics), but they involve a small amount of mess.

Cramer’s rule says that the \(u,u\) entry of the inverse of a matrix \(M\) is the determinant of the matrix obtained from \(M\) by deleting its \(u\)th row and \(u\)th column, divided by \(\det M\). Applying this to \(tI - A\): if you remove the \(u\)th row and the \(u\)th column from \(A\), you get an \((n-1)\times (n-1)\) matrix which happens to be the adjacency matrix of \(G\) with the vertex \(u\) deleted, which we denote \(G\setminus u\). Thus \[ (tI-A)^{-1}_{u,u} = \frac{\phi(G\setminus u, t)}{\phi(G,t)}. \quad \quad \quad \quad (*) \] We will call this equation \((*)\), mostly because I don’t know how to enable equation numbering in MathJax.
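Here is a sympy check of equation \((*)\) on a small example; the choice of the 5-cycle and of the vertex \(u\) is arbitrary, and the helper cycle_adjacency is mine.

```python
# Check (tI - A)^{-1}_{u,u} = phi(G \ u, t) / phi(G, t) for G = C_5 and u = 0.
import sympy as sp

t = sp.symbols('t')

def cycle_adjacency(n):
    A = sp.zeros(n, n)
    for i in range(n):
        A[i, (i + 1) % n] = 1
        A[(i + 1) % n, i] = 1
    return A

A = cycle_adjacency(5)
u = 0

phi_G = (t * sp.eye(5) - A).det()                 # characteristic polynomial of G
B = A.copy()
B.row_del(u)
B.col_del(u)
phi_G_minus_u = (t * sp.eye(4) - B).det()         # characteristic polynomial of G \ u

lhs = ((t * sp.eye(5) - A).inv())[u, u]
rhs = phi_G_minus_u / phi_G

assert sp.cancel(lhs - rhs) == 0
print("(tI - A)^{-1}_{u,u} = phi(G\\u, t)/phi(G, t) holds for C_5, u = 0")
```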

spectral decomposition

Now we will use spectral decomposition to obtain another expression for \((tI-A)^{-1}_{u,u}\). Since \(A\) is a real symmetric matrix, it is diagonalizable. Why is that so? It’s explained in a previous post. We may write \(A\) as follows: \[ A = \sum_{r=0}^d \theta_r E_r \] where the \(\theta\)s are the distinct eigenvalues of \(A\) and \(E_r\) is the idempotent projector for the \(\theta_r\)-eigenspace of \(A\). I always find this cool to use because we defined our matrix \(A\) to record adjacencies in \(G\), but now we are considering \(A\) as a matrix operator acting on the \(n\)-dimensional vector space; its action is entirely determined by how it acts on its eigenspaces, the subspaces on which it acts as scalar multiplication.

Recalling that eigenvectors from different eigenspaces are orthogonal, it’s not difficult to convince yourself that \[ A^2 = \sum_{r=0}^d \theta_r^2 E_r, \] where we notice that we only had to square the eigenvalues to square \(A\). In fact, \[ f(A) = \sum_{r=0}^d f(\theta_r) E_r, \] for all “reasonable” functions \(f\). We will not discuss the technical details of what it means to be reasonable, but it suffices to say that polynomials, rational functions, and power series are all reasonable. In particular, \(\frac{1}{t-x}\), as a function of \(x\), is reasonable and so \[ (tI-A)^{-1}_{u,u} = \sum_{r=0}^d \frac{1}{t- \theta_r} (E_r)_{u,u}. \quad \quad \quad (\dagger) \]
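Here is a numerical illustration of \((\dagger)\): we can build the idempotents \(E_r\) from an orthonormal eigenbasis (via numpy’s eigh) and compare both sides at some value of \(t\); the path on four vertices and the evaluation point \(t = 3.5\) are my own choices.

```python
# Build the spectral idempotents E_r of A and check (†) numerically.
import numpy as np

# Adjacency matrix of the path P_4: 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

vals, vecs = np.linalg.eigh(A)          # A is symmetric, so eigh applies

# Group (numerically) equal eigenvalues and form E_r = sum of vv^T over an
# orthonormal basis v of the theta_r-eigenspace.
thetas, projectors = [], []
for lam, v in zip(vals, vecs.T):
    if thetas and np.isclose(lam, thetas[-1]):
        projectors[-1] += np.outer(v, v)
    else:
        thetas.append(lam)
        projectors.append(np.outer(v, v))

# f(A) = sum_r f(theta_r) E_r, illustrated with f(x) = x^2.
assert np.allclose(A @ A, sum(th**2 * E for th, E in zip(thetas, projectors)))

u, t = 0, 3.5
lhs = np.linalg.inv(t * np.eye(4) - A)[u, u]
rhs = sum(E[u, u] / (t - th) for th, E in zip(thetas, projectors))
assert np.isclose(lhs, rhs)
print(lhs, rhs)
```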

putting it all together

Putting together equations \((\dagger)\) and \((*)\), we get the following: \[ \frac{\phi(G\setminus u, t)}{\phi(G,t)} = \sum_{r=0}^d \frac{1}{t- \theta_r} (E_r)_{u,u}. \quad \quad (\dagger\dagger) \] You may complain that it’s not exactly what they have in their paper, but it’s actually just one short trick away.

Fix \(s \in \{0,\ldots,d\}\); we will find an expression for \((E_s)_{u,u}\). We multiply everything by \(t-\theta_s\) to get \[\frac{(t-\theta_s)\phi(G\setminus u, t)}{\phi(G,t)} = \sum_{r\neq s} \frac{t-\theta_s}{t- \theta_r} (E_r)_{u,u} + (E_s)_{u,u}.\] Now we evaluate at \(t = \theta_s\) and get the desired eigen-magicks: \[ \frac{(t-\theta_s)\phi(G\setminus u, t)}{\phi(G,t)} |_{t = \theta_s} = (E_s)_{u,u}. \] You may think we need to take a limit; Gabriel Coutinho’s notes have a limit here, but it is, in fact, unnecessary. Suppose \(G\) has \(\theta_s\) as an eigenvalue with multiplicity \(m\). By the Interlacing Theorem, removing one vertex can decrease the multiplicity by at most one; thus at least \(m\) factors of \(t-\theta_s\) appear in the numerator of the left-hand side, and everything cancels.
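To see the cancellation happen concretely, here is a sympy computation of \((E_s)_{u,u}\) for the complete graph \(K_4\), whose eigenvalues \(3\) and \(-1\) are integers; the expected answers \(1/4\) and \(3/4\) come from the fact that the spectral idempotents of \(K_4\) are \(J/4\) and \(I - J/4\).

```python
# Compute (E_s)_{u,u} via the eigen-magick formula, exactly, for K_4.
import sympy as sp

t = sp.symbols('t')
n = 4
A = sp.ones(n, n) - sp.eye(n)            # adjacency matrix of K_4
u = 0

phi_G = sp.factor((t * sp.eye(n) - A).det())          # (t - 3)(t + 1)^3
B = A.copy()
B.row_del(u)
B.col_del(u)
phi_G_minus_u = sp.factor((t * sp.eye(n - 1) - B).det())   # (t - 2)(t + 1)^2

for theta in [3, -1]:
    # Cancel the common factors of (t - theta) first; then, as explained above,
    # plain substitution works and no limit is needed.
    expr = sp.cancel((t - theta) * phi_G_minus_u / phi_G)
    print(theta, expr.subs(t, theta))    # prints 1/4 for theta = 3 and 3/4 for theta = -1
```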

some uses of the eigen-magicks

I use equation \((\dagger\dagger)\) and the corresponding equation for the off-diagonal entries often; in the study of state transfer in quantum walks, we need to investigate something called strong cospectrality of pairs of vertices. In particular, in a paper with Chris Godsil, Mark Kempton and Gabor Lippner (which is cited by the new survey version of the paper by Denton et al.), we use these equations to perturb an edge of a strongly regular graph, so that the endpoints become strongly cospectral. For those interested, the arXiv version from 2017 is here and the JCTA version is here.

If you stare at equation \((\dagger)\) long enough, you may be tempted, as I was, to substitute \(t=0\) and obtain the entries of the inverse of \(A\). This is somewhat legitimate since, when \(A\) is invertible, \(t=0\) is not a pole of the right hand side. This is essentially what I do for the class of invertible trees in a recent paper. I reprove a result of Godsil’s from the 80s on the inverses of trees, but I do it in such a way that I am able to define a combinatorial operation on a tree which strictly increases (or decreases, if you apply the inverse of the operation) the median eigenvalue of the tree. It’s surprising that you can do this; eigenvalues relate to the combinatorial structure of the graph, but the relationship is, in general, not quid pro quo, as it is in this case. This gives a partial order on the class of trees with a perfect matching and I find the maximal and minimal elements. I leave its height and Moebius function as open problems for readers.

Non-existence of strongly regular graphs with parameters (49,16,3,6)

Some people make a habit of occasionally visiting Wikipedia’s front page and following hyperlinked phrases down a rabbit hole of articles. I do a similar thing with Andries Brouwer’s tables of strongly regular graphs, which leads to a cascade of papers, checking BCN and computing stuff in Sage. How does that happen exactly? As usual in math, we now need to start with some definitions.

Definitions

A strongly regular graph is a \(k\)-regular graph on \(n\) vertices where every pair of adjacent vertices has \(a\) common neighbours and every pair of non-adjacent vertices has \(c\) common neighbours. The tuple \((n,k,a,c)\) is called the parameter set of the strongly regular graph (or SRG). For example, the Petersen graph is an SRG with parameter set \((10,3,0,1)\). We immediately see that these numbers \(n,k,a\), and \(c\) have to satisfy some conditions, some trivial (\(n>k\) and \(a,c \leq k\)) and others less so.

Brouwer’s tables consist of all possible parameter sets for \(n\) up to \(1300\) which have passed some elementary checks. The parameter sets in green are those for which it has been confirmed that there does in fact exist at least one SRG with those parameters. The ones in red have been shown by other means (failing to satisfy the absolute or relative bound, for example) to not be parameter sets of any SRG. The entries in yellow are parameter sets for which the existence of an SRG is still open. This leads us to …

Non-existence of SRGs

Determining whether or not a putative SRG exists is a pretty interesting problem; the existence of the \(57\)-regular Moore graph, which would be an SRG with parameters \((3250, 57,0,1)\), is a long-standing open problem in algebraic graph theory. Today, we’ll look at the parameter set \((49,16,3,6)\), which holds the distinction of being the first red entry in Brouwer’s tables which is not a conference graph and also not ruled out by the absolute bound. This means that it was ruled out by a somewhat ad-hoc method; in this case, it was ruled out by Bussemaker, Haemers, Mathon and Wilbrink in this 1989 paper. However, computers have come a long way since 1989, so instead of following their proof, I will give another proof, which uses computation (which was done in Sage).

That’s way too many 2’s, this SRG can’t exist

Suppose \(X\) is an SRG with parameter set \((49,16,3,6)\). This graph has eigenvalues $$16, 2^{(32)}, \text{ and } -5^{(16)}$$ where the multiplicities are given as superscripts. We’ll proceed by trying to build \(X\). First, we “hang the graph by a vertex” \(x\): we partition the vertices into \(x\), its neighbourhood \(\Gamma_1(x)\), and its second neighbourhood \(\Gamma_2(x)\).
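As a quick aside, the eigenvalues and multiplicities quoted above can be computed directly from the parameters; here is a small Python sketch using the standard formulas for SRG spectra (the function name srg_spectrum is just a label of mine, and the sketch assumes the discriminant is positive and the multiplicities come out as integers, as they do for the parameter sets below).

```python
# Eigenvalues and multiplicities of an SRG from its parameter set (n, k, a, c).
from math import sqrt

def srg_spectrum(n, k, a, c):
    """Return [(eigenvalue, multiplicity), ...] for an SRG(n, k, a, c)."""
    disc = sqrt((a - c) ** 2 + 4 * (k - c))
    r = ((a - c) + disc) / 2
    s = ((a - c) - disc) / 2
    f = ((n - 1) - (2 * k + (n - 1) * (a - c)) / disc) / 2   # multiplicity of r
    g = ((n - 1) + (2 * k + (n - 1) * (a - c)) / disc) / 2   # multiplicity of s
    return [(k, 1), (r, f), (s, g)]

print(srg_spectrum(49, 16, 3, 6))   # [(16, 1), (2.0, 32.0), (-5.0, 16.0)]
print(srg_spectrum(10, 3, 0, 1))    # Petersen: [(3, 1), (1.0, 5.0), (-2.0, 4.0)]
```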

The main thing to note right away is that the neighbourhood \(\Gamma_1(x)\) of a vertex \(x\) in \(X\) is a \(3\)-regular graph on \(16\) vertices: it has \(k = 16\) vertices and, since adjacent vertices have \(a = 3\) common neighbours, each vertex of \(\Gamma_1(x)\) has exactly \(3\) neighbours inside \(\Gamma_1(x)\). This is excellent news because there are only 4060 such graphs and one can generate them using geng, or download them from Gordon Royle’s website.

Next, we apply interlacing. We denote by \(\lambda_i(G)\) the \(i\)th largest eigenvalue of \(G\). Since \(\Gamma_1(x)\) is an induced subgraph of \(X\), the interlacing theorem tells us $$ \lambda_i(X) \geq \lambda_i(\Gamma_1(x)) \geq \lambda_{33 + i}(X)$$ for \(i=1,\ldots,16\). In particular, we see that the second largest eigenvalue of \(\Gamma_1(x)\) is at most \(2\). In general, this gives us all kinds of useful information (for example, that \(\Gamma_1(x)\) has to be connected), but since we only care about cubic graphs on \(16\) vertices, we can just go ahead and check which of them have this property. It turns out that only 35 cubic graphs on 16 vertices have this property.
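Here is a sketch, in Python with numpy, of what this filtering step might look like; in the real computation one would feed in all 4060 cubic graphs from geng, but here I only test the filter on one hand-built cubic graph on 16 vertices, the Möbius ladder, which happens to fail the test.

```python
# Keep a cubic graph on 16 vertices only if its second largest eigenvalue is at most 2.
import numpy as np

def second_largest_eigenvalue(A):
    eigenvalues = np.linalg.eigvalsh(A)      # sorted in increasing order
    return eigenvalues[-2]

def passes_interlacing_filter(A, bound=2.0, tol=1e-9):
    return second_largest_eigenvalue(A) <= bound + tol

# Moebius ladder on 16 vertices: a 16-cycle plus the 8 "diameters" i ~ i+8.
n = 16
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    A[i, (i + 8) % n] = A[(i + 8) % n, i] = 1

print(second_largest_eigenvalue(A))          # about 2.414, so ...
print(passes_interlacing_filter(A))          # ... this candidate is ruled out
```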

If we had an induced subgraph \(Y\) of \(X\) on 18 vertices, the interlacing theorem would give the following: $$ \lambda_2(X) \geq \lambda_2(Y) \geq \lambda_{33}(X).$$ Recalling that the second and 33rd largest eigenvalues of \(X\) are both \(2\), we see that any induced subgraph of \(X\) on 18 vertices must have its second largest eigenvalue equal to \(2\). Since \(\Gamma_1(x)\) already has 16 vertices, we’re not far off. Let \(y\) be a vertex in the second neighbourhood of \(x\) and consider the subgraph \(Y\) induced by \(x\), the neighbours of \(x\), and \(y\).

In terms of the computation, we try to add \(x\) and \(y\) to each of the remaining 35 cubic graphs. Since \(x\) and \(y\) are non-adjacent, they have \(c = 6\) common neighbours, so \(y\) has exactly \(6\) neighbours in \(\Gamma_1(x)\). We can always choose \(y\) to be adjacent to a fixed vertex in \(\Gamma_1(x)\), and so there are \({15 \choose 5}\) ways of adding the other \(5\) edges incident to \(y\). Thus we have \(35 \cdot 3003\) candidates for \(Y\). A quick computation later, we see that none of those graphs have \(2\) as their second largest eigenvalue, which contradicts the existence of \(X\).

So what? And also, now what?

It’s fun proving the non-existence of an SRG that is already known not to exist, but clearly what we actually want to do is convert some yellow entries of Brouwer’s table to either red or green. The method here (as well as that of the original paper of Bussemaker, Haemers, Mathon and Wilbrink) works because the multiplicity of \(2\) as an eigenvalue is really large. The degree of the graph is equal to the multiplicity of its smallest eigenvalue, which smells of some kind of duality. This happens, for example, in the complement of the Clebsch graph (which, by the way, is formally self-dual). The next open case where this occurs is \((100,33,8,12)\). In this case, the first neighbourhood of a vertex would be an \(8\)-regular graph on 33 vertices, with a fairly large spectral gap. We probably don’t want to be generating all such graphs, so some new idea is needed.

Why are Hermitian matrices diagonalizable?

“Since we are working with a Hermitian matrix, we may take an eigenbasis of the space …”

“Wait, sorry, why are Hermitian matrices diagonalizable, again?”

“Umm … it’s not quick to explain.”

This exchange happens often when I give talks about spectra of graphs and digraphs in Bojan’s graph theory meeting. I know several ways to prove that Hermitian matrices are diagonalizable, but I couldn’t think of a simple, succinct statement which gives the right intuitive thing. So, in this post, I’d like to arrive at such a one-liner.

Definitions

We consider $n\times n$ matrices with entries in $\mathbb{C}$. A matrix $H$ is Hermitian if $H^* = H$, where $H^* = \bar{H}^T$ is the conjugate transpose of $H$. A matrix $H$ is diagonalizable if $H$ is similar to a diagonal matrix: i.e. there exists an invertible matrix $P$ such that $P^{-1} H P = D$ where $D$ is a diagonal matrix.

An eigenvalue of $H$ is a number $\lambda$ such that there exists a non-zero $v \in \mathbb{C}^n$ with $$Hv = \lambda v,$$ and such a $v$ is said to be an eigenvector of $H$. The characteristic polynomial of $H$ is $$\phi(H,t) = \det(tI - H).$$ If $\lambda$ is a number such that $\phi(H,\lambda) = 0$, then $\lambda I - H$ has a non-trivial kernel and so there exists a non-zero vector $v$ such that $(\lambda I - H)v = 0$. Thus, every root of $\phi(H,t)$ is an eigenvalue of $H$. However, the roots of the characteristic polynomial are not obviously the same as the eigenvalues when counted as a multiset, because there is a question of multiplicity.

Geometric vs algebraic multiplicities

The geometric multiplicity, $m_g(\lambda)$, of an eigenvalue $\lambda$ of $H$ is the dimension of the subspace of $\mathbb{C}^n$ generated by all eigenvectors of $H$ with eigenvalue $\lambda$ (this space is called the eigenspace of $\lambda$). The algebraic multiplicity, $m_a(\lambda)$, is the multiplicity of $\lambda$ as a root of $\phi(H,t)$.

Proposition. A Hermitian matrix $H$ is diagonalizable if and only if $m_a(\lambda) = m_g(\lambda)$ for each eigenvalue $\lambda$ of $H$.

Proof. Suppose $H$ is an $n\times n$ Hermitian matrix.

“$\Leftarrow$” It is easy to see that the characteristic polynomial has degree $n$ and hence $n$ roots. Since the algebraic and geometric multiplicities agree, we see that $H$ has $n$ orthonormal eigenvectors (details left as an exercise) which can be used to form the columns of a matrix $P$. Then $P^*P = I $ and $P^*HP$ is a diagonal matrix with the eigenvalues of $H$ on the diagonal.

“$\Rightarrow$” Let $P$ be a matrix such that $P^{-1}HP = D$, where $D$ is a diagonal matrix. Then $$HP =PD$$ and we may consider the $j$th column of both sides. On the left side, we get $Hv$ where $v$ is the $j$th column of $P$. On the right side, we get $D_{jj}v$, and so $v$ is an eigenvector of $H$ with eigenvalue $D_{jj}$. Since $H$ and $D$ are similar, $$\phi(H,t) = \phi(D,t) = \prod_{j = 1}^n (t-D_{jj}). $$ Thus the multiset $\{D_{jj}\}_{j=1}^n$ is the multiset of eigenvalues of $H$, with respect to both geometric and algebraic multiplicities. $\Box$
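Numerically, the unitary $P$ built from orthonormal eigenvectors in the “$\Leftarrow$” direction is exactly what numpy’s eigh hands you for a Hermitian matrix; here is a small illustration (the random matrix, its size, and the seed are arbitrary choices of mine).

```python
# Diagonalize a random Hermitian matrix with an orthonormal eigenbasis.
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = M + M.conj().T                        # Hermitian by construction

eigenvalues, P = np.linalg.eigh(H)        # columns of P are orthonormal eigenvectors
D = P.conj().T @ H @ P

assert np.allclose(P.conj().T @ P, np.eye(4))    # P is unitary
assert np.allclose(D, np.diag(eigenvalues))      # P* H P is diagonal
print(np.round(eigenvalues, 3))                  # the (real) eigenvalues of H
```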

Diagonalizability

We will now show that Hermitian matrices are diagonalizable by showing that every eigenvalue has the same algebraic and geometric multiplicities.

Theorem. If $H$ is a Hermitian matrix with eigenvalue $\lambda$, then $m_g(\lambda) = m_a(\lambda)$.

Proof. We take $H$ to be an $n\times n$ Hermitian matrix and $\lambda$ an eigenvalue of $H$. We proceed by induction on $n$.

If $m_a(\lambda) = 1$, we are done, since there must be, up to scalar multiples, exactly one eigenvector for $\lambda$. We may assume $a = m_a(\lambda) >1$. Let $x$ be an eigenvector of $H$ such that $Hx = \lambda x$. We may assume that $x$ is normalized, i.e. $x^*x = 1$. We may extend $\{x\}$ to an orthonormal basis of $\mathbb{C}^n$, say $\{x, v_2, \ldots, v_n\}$.

Let $V= \langle v_2, \ldots, v_n \rangle = \langle x \rangle ^{\perp}$. We may consider $\mathbb{C}^n$ as the direct sum $\langle x \rangle \oplus V$. Let $v \in V$. Observe that $$x^*(Hv) = x^*H^*v=  (Hx)^*v = (\lambda x)^*v = \bar{\lambda} x^*v = 0.$$ Thus $Hv \in V$ and $V$ is a $H$-invariant subspace of $\mathbb{C}^n$.

Let $P$ be the unitary matrix with $\{x, v_2, \ldots, v_n\}$ as its columns. The above gives that $$ P^* HP = \left( \begin{array}{cccc} \lambda & 0 & \cdots & 0 \\ 0 & & & \\ \vdots & & B &  \\0 &&& \end{array} \right)$$ and $B$ is Hermitian since the left side is. We see that $$\phi(H,t) = \phi(P^* HP, t) = (t-\lambda)\phi(B,t).$$ Thus, $\lambda$ is an eigenvalue of $B$ with algebraic multiplicity $a-1$ and, by induction, with geometric multiplicity $a-1$; that is, $B$ has pairwise orthogonal eigenvectors $x'_2, \ldots, x'_a$ for $\lambda$. For $j = 2, \ldots, a$, let $x_j = P\left( \begin{array}{c} 0 \\ x'_j \end{array} \right)$. It is easy to see that $x, x_2, \ldots, x_a$ are pairwise orthogonal eigenvectors of $H$ with eigenvalue $\lambda$, which proves the theorem. $\Box$

The Take-away

There are many (mostly equivalent) ways to show this; we could have used induction to prove $H$ is diagonalizable without considering geometric vs algebraic multiplicities, we could have proved the decomposition into Jordan blocks, or we could have proven the spectral decomposition theorem.

The crux of the proof is that, when $H$ is Hermitian, the vector space $W^{\perp}$ is $H$-invariant when $W$ is. In our proof, this allowed us to, colloquially speaking, keep pulling eigenvectors out of $\mathbb{C}^n$. In general, given an $H$-invariant subspace $W$ of $\mathbb{C}^n$, we can consider the action of $H$ (by multiplication on the left) on $W$. If $\psi_1$ and $\psi_2$ are the characteristic polynomials of $H$ restricted to $W$ and $W^{\perp}$, respectively, then $\phi(H,t) = \psi_1 \psi_2$.


Intuitively, a Hermitian matrix $H$ is diagonalizable because we can break $\mathbb{C}^n$ into $H$-invariant, pairwise orthogonal, subspaces and diagonalize $H$ over each subspace.


To see that this a property that is not true of all matrices, consider the following matrix: $$N = \left(\begin{array}{cccc} 0 & 1 & 1 & 1 \\ 0& 0& 1 & 1 \\ 0& 0 & 0 & 1 \\ 0&0&0&0 \end{array} \right).$$ For graph theorists, this is the adjacency matrix of a transitive tournament on $4$ vertices. It is also a nilpotent matrix; that is, $N^4 = 0$. The characteristic polynomial of $N$ is $t^4$ and $N$ has $0$ as an eigenvalue with algebraic multiplicity $4$. However, $N$ has only one linearly independent eigenvector $e_1$ (the elementary basis vector, $(1\,\, 0\,\, 0\,\, 0)^T$).

Here, $\langle e_1 \rangle$ is an eigenspace of $N$ and hence $N$-invariant. Observe that $e_2 \in \langle e_1 \rangle^{\perp}$ but $Ne_2 = e_1$ and so $ \langle e_1 \rangle^{\perp}$ is certainly not $N$-invariant.
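Here is a small numpy check of these claims about $N$; it is only a floating-point illustration, but the numbers are clean enough to see what is going on.

```python
# N is nilpotent, its characteristic polynomial is t^4, and dim ker(N) = 1.
import numpy as np

N = np.array([[0, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)

assert np.allclose(np.linalg.matrix_power(N, 4), 0)   # N^4 = 0
print(np.poly(N))                                     # char. poly. coefficients, ~[1, 0, 0, 0, 0]

# Geometric multiplicity of the eigenvalue 0 is dim ker(N) = 4 - rank(N).
print(4 - np.linalg.matrix_rank(N))                   # 1, so m_g(0) = 1 < 4 = m_a(0)
```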

References

Roman, Steven. Advanced Linear Algebra. (Springer Graduate Texts in Mathematics, Vol. 135)

Prasolov, V. V. Problems and Theorems in Linear Algebra. (AMS Translations of Mathematical Monographs, Vol. 134)

Graph theory: helping tourists and map-makers since 1735

On pi day this year, I gave a talk with the same title as this post at A Taste of Pi, an event organized by SFU to introduce high school students to math. I’ve decided to tell a story about the history of graph theory as a gentle introduction to the area.

Slides to my pi day talk: here

To make graph theory accessible to high school students, I made a lot of pdf images. I’ve decided to convert some sequences of my pdf animations into gifs. Here are my two favourites. It might be necessary to click on them, to get them to animate.

Cardioids:

One way to draw an eulerian planar graph is to simply draw closed curves on a piece of paper and add vertices where the curves intersect. The result is a plane drawing of an eulerian planar graph … whose faces can be coloured with 2 colours.

But we need not draw any random doodle (although that is also fun); we can draw an approximation to a cardioid as follows:

  1. We start with a circle C with a marked point P on it.
  2. Draw a circle whose centre is a point Q on C and which passes through P.
  3. Repeat step 2 until satisfied. (Repeating over all choices of Q on C, the envelope of the circles is a cardioid; see the sketch after this list.)
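Here is a short matplotlib sketch of this construction; the number of circles, the figure size, and the output file name are arbitrary choices of mine.

```python
# Circles centred at points Q on a base circle C, each passing through the fixed
# point P on C; their envelope traces out a cardioid.
import numpy as np
import matplotlib.pyplot as plt

s = np.linspace(0, 2 * np.pi, 400)      # parameter for drawing each circle
P = np.array([1.0, 0.0])                # the fixed point P on the unit circle C

fig, ax = plt.subplots(figsize=(6, 6))
for angle in np.linspace(0, 2 * np.pi, 60, endpoint=False):
    Q = np.array([np.cos(angle), np.sin(angle)])    # centre Q on C
    r = np.linalg.norm(Q - P)                       # radius so the circle passes through P
    ax.plot(Q[0] + r * np.cos(s), Q[1] + r * np.sin(s), linewidth=0.5, color='tab:blue')

ax.set_aspect('equal')
ax.axis('off')
plt.savefig('cardioid.png', dpi=150)
```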

Drawing a cardioid to obtain an eulerian planar graph, embedded in the plane, and its 2-face-colouring.

2-face-colourability of eulerian planar graphs:

It is well-known that the faces of a planar eulerian graph are 2-colourable. If we take the planar dual Y of an eulerian plane graph X, we see that each face of Y has even degree, since each vertex of X has even degree. Since the facial walks of a plane graph generate its cycle space, every cycle of Y is even and so Y is bipartite; a proper 2-colouring of the vertices of Y is exactly a 2-colouring of the faces of X.

Here is an illustration of the simpler, inductive proof:

A pictorial proof that the faces of a planar eulerian graph can be 2-coloured.

Graphs on napkins

Some people believe in the statement: “if it’s not recreate-able, it’s not true.” … I don’t.
But instead of having dozens of photos all labelled “incomprehensible (but surely important) scribblings with math symbols”, I’m converting them into casual expositions. The posts will be generally aimed at an upper-year undergraduate level and will be about various topics in algebraic graph theory, like cores of cubelike graphs, generalized quadrangles and association schemes.

Doing maths in coffee shops

There’s an activity referred to by young graduate students and eager post-docs as “mathing it up”, which usually takes place in coffee shops. I like coffee.
Good coffee of the espresso persuasion. Flat whites, lattes, cappuccinos, macchiatos … sigh.  I also want to explore Vancouver and, of course, the world of mathematics. It occurs to me that this can all be achieved simultaneously.

So, I’ve gathered a list, guided by the wisdom of the internet, of the best coffee shops in Vancouver. At each place, I will order a latte, or whatever most resembles a latte (some places have a cafe au lait or a flat white instead) and do some maths, while soaking in the coffee shop atmosphere. Is there good lighting? Is there free water? Is there free wifi? Is that a strange concrete catwalk leading to the bathroom (yes, Milano in Gastown … I’m talking about you)?

Without further ado, here is the list:

1. Bel Cafe
2. Musette Caffe (Downtown)
3. Caffe Artigiano (Hornby St)
4. Finches
5. Thierry
6. JJ Bean (Alberni St)
7. Coffeebar (Gastown)
8. Milano (Gastown)
9. East Van Roasters
10. Nelson the Seagull
11. Musette Caffe (Chinatown)
12. Revolver Coffee
13. Caffe Brixton
14. Timbertrain Coffee Roasters
15. Matchstick (Chinatown)
16. 49th Parallel Roasters Cafe (Main St)
17. Kafka’s
18. JJ Bean (Main St)
19. Coco et Olive
20. Caffe Cittadella
21. East Cafe
22. Ethical Bean (1315 Kootenay St)
23. Matchstick (Mount Pleasant)
24. Satori Factory (closed M,T)
25. 49th Parallel Roasters Cafe (Kitsilano, almond milk)
26. Momento (2766 W 4th Ave)
27. Elysian (West 5th)
28. Wicked (1299 W 7th Ave)
29. Prado
30. Bump N Grind
31. Turks
32. Milano (West End)
33. Greenhorn Espresso Bar
34. Cafe for Contemporary Art
35. Crema
36. Caffe Artigiano (Burnaby)
37. Caffe Divano
38. Elysian Coffee (Broadway location)
39. Continental Coffee
40. Arbutus Coffee (2200 Arbutus St)

I’ve included different locations of the same shop (like Matchstick and Elysian), since I’m looking for a coffee experience, and not just the coffee itself. The original Hornby location of Caffe Artigiano is on the list and so is the location on Hastings in Burnaby, since Burnaby was so under-represented on the list. I’ve mostly chosen places which are coffee shops, where people order a latte and lounge around, soaking up the coffee shop feel for an hour (which is what I’ll be doing). Not on the list are bakeries, breakfast places, and Belgian breakfast shops; they might serve great coffee, but they do not meet the criterion of being a place to think and study that is not my office or my home.

Things I used to compile my list: