Markov chain

From electowiki

A Markov chain is a statistical process describing a (possibly infinite) sequence of steps where the probability of a state (event) occurring depends only on the event that occurred at the previous step. Markov chains are usually represented by a probability (transition) matrix, where the entry in row i, column j denotes the probability that the next step will be in state j given that the current step is in state i.
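
To make the matrix representation concrete, here is a minimal Python sketch; the states and probabilities are made up for illustration. It stores the transition matrix as nested lists and samples a short trajectory.

    import random

    # Illustrative three-state chain; the states and probabilities are made up.
    states = ["A", "B", "C"]
    # transition[i][j] is the probability of moving to state j from state i;
    # each row sums to 1.
    transition = [
        [0.5, 0.3, 0.2],
        [0.1, 0.6, 0.3],
        [0.2, 0.2, 0.6],
    ]

    def next_state(current):
        # Sample the index of the next state using the current state's row as weights.
        return random.choices(range(len(states)), weights=transition[current])[0]

    # Simulate a short trajectory starting from state A.
    state = 0
    for _ in range(10):
        state = next_state(state)
        print(states[state], end=" ")
    print()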

Each Markov chain is associated with a stationary (or steady state) distribution: a distribution over events such that if the current event is chosen at random according to that distribution, then the next event is also distributed according to it.
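
One common way to approximate the stationary distribution is power iteration: start from any distribution and repeatedly apply the transition matrix until it stops changing. The sketch below assumes an irreducible, aperiodic chain and reuses the illustrative matrix from above.

    # Power iteration: repeatedly multiply a distribution by the transition matrix;
    # for irreducible, aperiodic chains this converges to the stationary distribution.
    def stationary_distribution(transition, iterations=1000):
        n = len(transition)
        dist = [1.0 / n] * n  # start from the uniform distribution
        for _ in range(iterations):
            dist = [sum(dist[i] * transition[i][j] for i in range(n))
                    for j in range(n)]
        return dist

    # The same illustrative matrix as in the sketch above.
    transition = [
        [0.5, 0.3, 0.2],
        [0.1, 0.6, 0.3],
        [0.2, 0.2, 0.6],
    ]
    pi = stationary_distribution(transition)
    print(pi)  # pi satisfies pi = pi * P up to numerical error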

The Keener eigenvector and Sinkhorn voting methods are based on Markov chains, and by calculating the steady state for a particular family of Markov matrices, it's possible to construct a number of Smith-efficient voting methods.[1] Due to their construction, these methods give each candidate a score, not just an order of finish.
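
As an illustration of the general construction only, the sketch below builds a Markov chain over candidates from a pairwise matrix and uses the steady-state probabilities as scores. The transition rule here is made up for demonstration; it is not the Keener eigenvector or Sinkhorn method, and it is not claimed to be Smith-efficient as written.

    # Illustrative only: a generic "steady state as score" construction.
    # The transition rule (from each candidate, move with equal probability to a
    # candidate that defeats it pairwise, otherwise stay) is a made-up example.
    def markov_scores(pairwise):
        # pairwise[i][j] = number of voters preferring candidate i to candidate j.
        n = len(pairwise)
        transition = [[0.0] * n for _ in range(n)]
        for i in range(n):
            transition[i][i] += 0.5  # "lazy" self-loop keeps the chain aperiodic
            beats_i = [j for j in range(n)
                       if pairwise[j][i] > pairwise[i][j]]
            if beats_i:
                for j in beats_i:
                    transition[i][j] += 0.5 / len(beats_i)
            else:
                transition[i][i] += 0.5  # undefeated candidates absorb the walk
        # Power iteration for the steady state, as in the sketch above.
        dist = [1.0 / n] * n
        for _ in range(1000):
            dist = [sum(dist[i] * transition[i][j] for i in range(n))
                    for j in range(n)]
        return dist

    # Three candidates where candidate 0 beats everyone pairwise: the steady-state
    # mass (and hence the score) concentrates on that candidate.
    pairwise = [
        [0, 6, 7],
        [4, 0, 6],
        [3, 4, 0],
    ]
    print(markov_scores(pairwise))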

This page is a stub - please add to it.

References

  1. Simmons, F. (2002-12-03). "The Copeland/Borda wv hybrid (fwd)". Election-methods mailing list archives.