Markov chain

A Markov chain is a stochastic process describing a (possibly infinite) sequence of steps where the probability of the next state (event) depends only on the current state, not on any earlier history. Markov chains are usually represented as a transition probability matrix $$\mathbf{P}$$, where $$\mathbf{P}_{i,j}$$ denotes the probability that the next step will be in state $$j$$ given that the current step is in state $$i$$.
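As a small illustration, the transition matrix and the memoryless stepping rule can be sketched as follows. The two-state "sunny/rainy" chain and its probabilities are hypothetical, chosen only to make the example concrete:

```python
import random

# Hypothetical two-state chain: state 0 = "sunny", state 1 = "rainy".
# P[i][j] is the probability of moving to state j given current state i;
# each row therefore sums to 1.
P = [
    [0.9, 0.1],
    [0.5, 0.5],
]

def step(state, P):
    """Sample the next state given only the current one.

    That the choice depends on nothing but the current state is
    exactly the Markov property.
    """
    r = random.random()
    cumulative = 0.0
    for j, p in enumerate(P[state]):
        cumulative += p
        if r < cumulative:
            return j
    return len(P[state]) - 1  # guard against floating-point rounding

# Simulate a short trajectory starting from state 0.
state = 0
trajectory = [state]
for _ in range(10):
    state = step(state, P)
    trajectory.append(state)
```

Each row of $$\mathbf{P}$$ is a probability distribution over next states, which is why the sampling loop walks the cumulative row sums.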

Each Markov chain is associated with a stationary (or steady state) distribution: a distribution over the states such that if the current state is chosen at random according to that distribution, then the next state will also be distributed according to it.

The Keener eigenvector and Sinkhorn voting methods are based on Markov chains, and by calculating the steady-state distribution for a particular family of Markov matrices, it is possible to construct a number of Smith-efficient voting methods. By construction, these methods give each candidate a score, not just an order of finish.

Some nondeterministic methods can be derandomized by using Markov chains; one example is a semiproportional multi-winner generalization of random ballot.