Markov Chains and the Perron-Frobenius theorem
Disclaimer
The content of this post is based on material from 21-242 at CMU as taught by Clinton Conley. Consequently, I claim no credit for any of the ideas presented below.
About
A well-known result in linear algebra is the Perron-Frobenius theorem, which implies (among other things) that every stochastic matrix admits a stochastic eigenvector with eigenvalue one (definitions of those terms to follow later). The latter result finds a variety of uses in the study of probability, but most proofs (at least, those easily Googleable) appeal to results outside the realm of linear algebra, such as compactness arguments or fixed point theorems. Since we’re ultimately dealing with linear operators, however, such analysis seems unnecessary. In this post, we’ll go over the background needed for this result, ultimately proving it using nothing more advanced than the triangle inequality and basic linear algebra.
Motivation
Suppose we’re modeling some system whose state can be fully described by an element of some finite set of states $\{1, 2, \dots, n\}$. For the purposes of imagination, we can think of such states as a collection of lily pads that a frog is jumping between, a set of stations that a train runs between, or, in the case of Google’s PageRank, a set of websites that someone is browsing through. We’ll view all possible evolutions of our system as changes between these states, and we’ll also make the assumption that these evolutions occur in discrete time; that is, that the evolution of our system is described by the timesteps $t = 0, 1, 2, \dots$ and an assignment of a state to each timestep. We also make the assumption that these transitions happen randomly at every timestep, with transition probabilities depending only on the current state (in particular, independently of the current time), meaning that at any state $j$ and time $t$, one will be at state $i$ at time $t + 1$ with a probability depending only on $i$ and $j$. This assumption means that, for example, if you’re on the front page of reddit, you’re just as likely to visit a particular post whether you’ve seen it once or 100 times before. With these assumptions in mind, let’s move on to the problem of how to mathematically model such a system.
We note that to fully encapsulate the properties of such a system, it suffices to capture the transition probabilities between any pair of states. To do this, we’ll use an $n \times n$ matrix $P$, where $P_{ij}$ represents the probability of being in state $i$ one time step after being in state $j$. Since probabilities are always non-negative, and we insist that our system is always in some state, we naturally impose the constraints that $P_{ij} \geq 0$ and that $\sum_{i=1}^{n} P_{ij} = 1$ for all $j$. We call any matrix satisfying these properties stochastic.
Given our definition of stochastic matrices, the idea of a stochastic vector naturally follows. We’ll define such vectors to be any valid column vectors of a stochastic matrix. More formally, this means any vector $v$ with $v_i \geq 0$ for all $i$ and $\sum_{i=1}^{n} v_i = 1$. Whereas stochastic matrices serve to model the transition behaviors of our system, stochastic vectors can be thought of as encoding probability distributions for a single point in time, with $v_i$ corresponding to the probability that one is in state $i$. Supposing that $v$ specifies such probabilities at time $t$, it follows fairly quickly from the definition of matrix multiplication that $Pv$ specifies the same probabilities at time $t + 1$. Given these definitions, we’re now ready to begin looking at various properties of these objects.
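To make these definitions concrete, here is a minimal numerical sketch (my own illustration, not from the original material): a made-up 3-state transition matrix $P$, a stochastic vector $v$, and one step of evolution, using Python with numpy.

```python
import numpy as np

# A hypothetical 3-state transition matrix: P[i, j] is the probability of
# being in state i one timestep after being in state j, so each column sums to 1.
P = np.array([
    [0.5, 0.2, 0.0],
    [0.3, 0.7, 0.4],
    [0.2, 0.1, 0.6],
])
assert np.all(P >= 0) and np.allclose(P.sum(axis=0), 1.0)  # P is stochastic

# A stochastic vector: the probability distribution over states at time t.
v = np.array([1.0, 0.0, 0.0])  # start in state 0 with certainty

# One timestep forward: the distribution at time t + 1.
v_next = P @ v
print(v_next, v_next.sum())    # entries are non-negative and sum to 1
```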
Properties of Transition Matrices
Firstly, we’ll check some basic intuitions about how these two objects interact with each other.
Stability under vector multiplication
As noted above, multiplication by $P$ effectively corresponds to taking a single timestep forward.
Given that an original vector $v$ encodes a probability distribution, then, $Pv$ should as well, which we now check explicitly.
Suppose that $P$ is a stochastic $n \times n$ matrix and that $v \in \mathbb{R}^n$ is stochastic. We’ll now show that $Pv$ is stochastic.
Let $w = Pv$.
By the definition of a stochastic vector, we know that $v_i \geq 0$ for all $i$ and $\sum_{i=1}^{n} v_i = 1$.
Consider the representation of $P$ as a series of column vectors; that is, $P = \begin{pmatrix} p_1 & p_2 & \cdots & p_n \end{pmatrix}$.
By definition, we know that all such $p_i$ are stochastic, which means that $\sum_{j=1}^{n} (p_i)_j = 1$ for all $i$.
By the definition of matrix multiplication, $w = Pv = \sum_{i=1}^{n} v_i p_i$.
Since every coordinate of $w$ is a sum of products of non-negative reals, each coordinate is a non-negative real.
Furthermore, the sum of all the coordinates of $w$ is
$$\sum_{j=1}^{n} w_j = \sum_{j=1}^{n} \sum_{i=1}^{n} v_i (p_i)_j = \sum_{i=1}^{n} v_i \sum_{j=1}^{n} (p_i)_j = \sum_{i=1}^{n} v_i = 1,$$
so $w = Pv$ is stochastic.
Stability under matrix multiplication
Since multiplication by $P$ corresponds to a single step forward of unit length, it naturally follows that powers of $P$ correspond to taking multiple timesteps forward simultaneously. To check this basic intuition, we prove the following, more general property: Suppose that $P$ and $Q$ are two stochastic $n \times n$ matrices. Then $PQ$ is also stochastic. Consider the representation of $Q$ as a series of column vectors; that is, $Q = \begin{pmatrix} q_1 & q_2 & \cdots & q_n \end{pmatrix}$. By definition, we know that all such $q_i$ are stochastic. Then $PQ = \begin{pmatrix} Pq_1 & Pq_2 & \cdots & Pq_n \end{pmatrix}$. By the previous result, we know that all such $Pq_i$ are stochastic. Thus, by definition, $PQ$ is stochastic.
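As a quick numerical sanity check of this closure property (again just an illustration, with a second made-up stochastic matrix $Q$):

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.0],
              [0.3, 0.7, 0.4],
              [0.2, 0.1, 0.6]])
Q = np.array([[0.9, 0.1, 0.3],
              [0.0, 0.8, 0.3],
              [0.1, 0.1, 0.4]])

# Both the product PQ and the power P^5 should again be stochastic:
for M in (P @ Q, np.linalg.matrix_power(P, 5)):
    assert np.all(M >= 0) and np.allclose(M.sum(axis=0), 1.0)
print("products and powers are stochastic")
```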
Main Result
With these properties out of the way, we move onto our main result, which we state as follows:
Theorem
For any stochastic matrix $P$, there exists a stochastic eigenvector $v$ of $P$ with eigenvalue $1$; that is, a stochastic vector $v$ with $Pv = v$.
A more intuitive way of stating this result is saying that any such system has a “steady state” or fixed point, with the probability distribution represented by this eigenvector remaining fixed under the evolution of the system. Using results that we’ll be proving in the second part of this post, it can actually be shown that any initial probability distribution will tend to such a steady state as $t \to \infty$, meaning that for sufficiently large $t$, the probability distribution will be effectively independent of $t$.
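To see this behavior numerically for the made-up matrix from earlier (just an illustration; the general convergence statement relies on the hypotheses deferred to part II):

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.0],
              [0.3, 0.7, 0.4],
              [0.2, 0.1, 0.6]])

# Repeatedly apply P to an initial distribution; the iterates settle down.
v = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    v = P @ v

print(v)                      # an (approximate) steady state
print(np.allclose(P @ v, v))  # True: applying P no longer changes it
```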
At this point, a relatively overkill but functional proof of this theorem would be as follows. Note that the set of stochastic vectors in $\mathbb{R}^n$ is simply the convex hull of the standard basis vectors and is hence compact. Furthermore, since this set is stable under multiplication by $P$, the map $P$ (viewed as a continuous linear operator) sends this set to itself. Then we can apply the Brouwer fixed point theorem to conclude that $P$ has a fixed point in this set. However, as already mentioned, this is a bit overkill, so let’s look at a different approach.
Proof
We first show that there exists some eigenvector with eigenvalue one.
Recall that, to do so, it suffices to demonstrate that $P - I$ is not invertible.
We now consider the representation of $P$ as a series of column vectors; that is, $P = \begin{pmatrix} p_1 & p_2 & \cdots & p_n \end{pmatrix}$. By definition, we know that all such $p_j$ are stochastic, which means that $\sum_{i=1}^{n} P_{ij} = 1$ for all $j$.
Now let $Q = P - I$, where $Q_{ij} = P_{ij} - \delta_{ij}$ (here $\delta_{ij}$ equals $1$ when $i = j$ and $0$ otherwise). Note that every column of this new matrix has the property that $\sum_{i=1}^{n} Q_{ij} = 1 - 1 = 0$. To demonstrate that this new matrix is not invertible, it suffices to show that there is a non-trivial linear combination of its rows that equals the zero vector. Given the above equation, we may simply take the linear combination of the rows of $Q$ with all coefficients equal to 1.
This linear combination will be the zero vector by the result above, since its $j$-th entry is exactly $\sum_{i=1}^{n} Q_{ij} = 0$. Thus $Q = P - I$ is non-invertible, and we know that any stochastic matrix has an eigenvalue of 1.
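Here is a quick numerical confirmation of this argument for the hypothetical matrix used earlier (illustration only):

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.0],
              [0.3, 0.7, 0.4],
              [0.2, 0.1, 0.6]])
Q = P - np.eye(3)

print(Q.sum(axis=0))                                # every column of P - I sums to (numerically) 0
print(np.isclose(np.linalg.det(Q), 0.0))            # True: P - I is not invertible
print(np.isclose(np.linalg.eigvals(P), 1.0).any())  # True: 1 is an eigenvalue of P
```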
Now all we need to show is that there is some stochastic eigenvector associated to this eigenvalue.
As it turns out, doing this is a bit more complicated than it may first appear, as the eigenvector produced above may have both positive and negative entries. Nevertheless, we’ll be able to prove that some eigenvector exists with only non-negative entries, which finishes the proof, as we can then normalize the entries so that they sum to 1.
To do so, it suffices to prove that, given any eigenvector with entries of potentially mixed signs, we can construct an eigenvector (still with eigenvalue 1) whose entries are either all non-negative or all non-positive (for convenience, we’ll call such vectors single-signed).
The basic approach will be to define a decomposition of any vector into two single-signed vectors, then to show that the vectors in this decomposition remain eigenvectors (provided that the original vector is also an eigenvector). In some sense, the decomposition we use is the “obvious one” (taking one vector to be all the positive entries, and the other to be all the negative entries), but uniqueness will play a crucial role in our argument.
We’ll begin by demonstrating existence and uniqueness in one dimension, which turns out to be pretty trivial.
Lemma 1
Fix some real $c$, and consider $a, b \in \mathbb{R}$ such that $a \geq 0$, $b \leq 0$, and $a + b = c$. Then there is a unique choice of $(a, b)$ that minimizes $|a| + |b|$. Furthermore, this minimum is exactly $|c|$, and is achieved when one of $a$ or $b$ is equal to $c$, and the other is zero.
Proof (Lemma 1)
From the triangle inequality, we know that $|a| + |b| \geq |a + b| = |c|$, so $|a| + |b|$ is bounded below by $|c|$. Furthermore, we know that equality holds if and only if $a$ and $b$ have the same sign (treating zero as having either sign). However, since $a$ is non-negative and $b$ is non-positive, this can only happen when one of them is zero. Based on the sign of $c$, this uniquely determines the solution mentioned above.
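For a concrete instance: with $c = -3$, the unique minimizer is $(a, b) = (0, -3)$, giving $|a| + |b| = 3 = |c|$, whereas any other admissible pair such as $(1, -4)$ gives the strictly larger value $5$.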
Given this result for real numbers, we can naturally extend it to a decomposition of vectors into single-signed parts.
Lemma 2
Let $\|v\|_1$ denote the $\ell_1$-norm of $v$ (that is, $\|v\|_1 = \sum_{i=1}^{n} |v_i|$). Any vector $v \in \mathbb{R}^n$ has a unique decomposition of the form $v = v^+ + v^-$ that minimizes $\|v^+\|_1 + \|v^-\|_1$ subject to the constraint that $v^+$ and $-v^-$ have non-negative entries. Furthermore, this minimum is exactly $\|v\|_1$.
Proof (Lemma 2)
Note that since all the components of these vectors are independent, this is equivalent to minimizing $|v^+_i| + |v^-_i|$ separately for each component $i$, subject to $v^+_i \geq 0$, $v^-_i \leq 0$, and $v^+_i + v^-_i = v_i$; that is, to applying the previous minimization $n$ times. By the results of the previous lemma, each component has a unique solution under these constraints, and the resulting total is the sum of the absolute values of the components of $v$, which is exactly its $\ell_1$-norm.
Again, we stress that this decomposition is the “obvious one”; it is the decomposition most people would likely pick, taking $v^+_i = \max(v_i, 0)$ and $v^-_i = \min(v_i, 0)$, and formalizing this construction is not particularly difficult. The utility of this result comes from its guarantee of uniqueness.
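In code, the decomposition and the norm claim of Lemma 2 look like this (a sketch of mine for an arbitrary example vector):

```python
import numpy as np

v = np.array([2.0, -3.0, 0.5, -1.5])

v_plus = np.maximum(v, 0.0)   # the non-negative part
v_minus = np.minimum(v, 0.0)  # the non-positive part

assert np.allclose(v_plus + v_minus, v)
# The combined l1-norm of the two parts equals the l1-norm of v itself:
assert np.isclose(np.abs(v_plus).sum() + np.abs(v_minus).sum(), np.abs(v).sum())
```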
Proof (main result, continued)
Consider any eigenvector $v$ satisfying $Pv = v$. By Lemma 2, we know that $v$ has a unique decomposition as $v = v^+ + v^-$, where $v^+$ and $-v^-$ are non-negative. Since $P$ is linear, we know $v = Pv = Pv^+ + Pv^-$. Additionally, we know that $P$ preserves non-negativity of vectors as well as the $\ell_1$-norm of non-negative vectors (by a natural extension of stability under vector multiplication). Thus, $Pv^+$ and $-Pv^-$ are both non-negative vectors, and they have the same norms as $v^+$ and $v^-$, respectively. Therefore, $Pv^+$ and $Pv^-$ also decompose $v$ into the sum of two single-signed vectors with the same minimal “total” norm. By uniqueness of our decomposition, $Pv^+ = v^+$ and $Pv^- = v^-$. Since $v^+$ and $v^-$ are not both zero, at least one of $v^+$ or $-v^-$ is a nonzero eigenvector of $P$ with non-negative entries and eigenvalue 1, and we’re done!
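To tie the pieces together numerically for the made-up matrix from earlier (an illustration of the statement, not a substitute for the argument above):

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.0],
              [0.3, 0.7, 0.4],
              [0.2, 0.1, 0.6]])

# Grab an eigenvector whose eigenvalue is (numerically) 1.
eigvals, eigvecs = np.linalg.eig(P)
v = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])

# Take the single-signed parts; at least one of them is a non-negative eigenvector.
v_plus, v_minus = np.maximum(v, 0.0), np.minimum(v, 0.0)
w = v_plus if np.abs(v_plus).sum() > 0 else -v_minus

# Normalize to obtain a stochastic steady state.
steady = w / w.sum()
print(steady)
print(np.allclose(P @ steady, steady))  # True
```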
Conclusion
As I’ve already mentioned, I think this proof is pretty neat, and the decomposition we introduced to justify it turns out to be useful in a couple of other results related to stochastic matrices. Now that we’ve completed the proof of the desired result, in part II of this post we’ll use these tools to prove some other closely related results.
With that, I hope you’ve enjoyed reading this post, and thanks for your time!