Markov chain

A '''Markov chain''' is a random process in which the probability of the next event depends only on the current state, not on the sequence of events that led to it. Each Markov chain is associated with a stationary (or steady state) distribution: a distribution over states such that, if the current state is chosen at random according to that distribution, the next state is also distributed according to it.
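
In the usual matrix notation, if <math>P</math> is the chain's transition matrix, with <math>P_{ij}</math> the probability of moving from state <math>i</math> to state <math>j</math>, then a stationary distribution is a row vector <math>\pi</math> satisfying

:<math>\pi P = \pi, \qquad \sum_i \pi_i = 1, \qquad \pi_i \ge 0.</math>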
 
The [[Keener eigenvector]] and [[Sinkhorn]] voting methods are based on Markov chains, and by calculating the stationary distribution of a particular family of Markov matrices, it is possible to construct a number of [[Smith-efficient]] voting methods.<ref>{{cite web|url=http://lists.electorama.com/pipermail/election-methods-electorama.com/2002-December/008923.html|title=The Copeland/Borda wv hybrid (fwd)|website=Election-methods mailing list archives|date=2002-12-03|last=Simmons|first=F.}}</ref><ref>{{cite web|url=http://lists.electorama.com/pipermail/election-methods-electorama.com/2004-May/078240.html|title=Markov chain approaches|website=Election-methods mailing list archives|date=2004-05-15|last=Heitzig|first=J.}}</ref> Due to their construction, these methods give each candidate a score, not just an order of finish.
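
The specific constructions are described in the cited posts; the following is only a minimal sketch of the general idea, with an assumed (and fairly arbitrary) rule for turning pairwise results into a transition matrix and made-up pairwise counts. It builds a Markov chain that tends to move from a candidate toward the candidates who beat them, and uses the stationary distribution as a score:

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical pairwise counts: wins[i][j] = voters preferring candidate i to j.
names = ["A", "B", "C"]
wins = np.array([
    [0, 6, 4],
    [3, 0, 7],
    [5, 2, 0],
], dtype=float)

# One possible transition rule (an assumption, not the rule from the cited
# posts): from candidate i, move to candidate j with probability proportional
# to the number of voters preferring j to i, staying put with the leftover
# probability.  The chain therefore drifts toward candidates who beat the
# current one.
losses = wins.T
P = losses / losses.sum(axis=1).max()
np.fill_diagonal(P, 1.0 - P.sum(axis=1))

# The stationary distribution pi satisfies pi @ P = pi; take the left
# eigenvector of P for eigenvalue 1 and normalize it to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

# Each candidate's stationary probability serves as their score.
for name, score in zip(names, pi):
    print(name, round(score, 3))
</syntaxhighlight>

With these made-up counts, which form a cycle (A beats B, B beats C, C beats A), the scores come out to roughly 0.39, 0.36 and 0.25, so A scores highest; the choice of transition rule is what distinguishes one such method from another.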
 
Markov chains can also be used to derandomize some nondeterministic methods, for instance a semiproportional multi-winner generalization of [[random ballot]].<ref>{{cite web|url=http://lists.electorama.com/pipermail/election-methods-electorama.com/2019-October/002323.html|title=Semiproportional methods 2: derandomizing the obvious|website=Election-methods mailing list archives|date=2019-10-04|last=Munsterhjelm|first=K.}}</ref>
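
The construction in the cited post is more involved, but the basic idea of derandomization can be sketched with an assumed (and deliberately naive) multi-winner generalization of random ballot: repeatedly draw a ballot at random and seat that ballot's favorite among the not-yet-elected candidates. The states of the Markov chain are the sets of candidates elected so far, and instead of sampling the chain, the exact probability of reaching each final committee is propagated forward:

<syntaxhighlight lang="python">
from collections import defaultdict
from fractions import Fraction

# Hypothetical ranked ballots (most preferred candidate first) and seat count.
ballots = [("A", "B", "C"), ("A", "C", "B"), ("B", "A", "C"),
           ("C", "B", "A"), ("C", "A", "B")]
seats = 2

# State of the chain: the set of candidates elected so far.  Each step draws
# one ballot uniformly at random and seats its favorite remaining candidate.
# Instead of simulating the draws, propagate the exact probability of
# reaching each state.
dist = {frozenset(): Fraction(1)}
for _ in range(seats):
    nxt = defaultdict(Fraction)
    for elected, p in dist.items():
        for ballot in ballots:
            favorite = next(c for c in ballot if c not in elected)
            nxt[elected | {favorite}] += p * Fraction(1, len(ballots))
    dist = dict(nxt)

# dist is the exact probability of each committee being elected.
for committee, p in sorted(dist.items(), key=lambda kv: kv[1], reverse=True):
    print(sorted(committee), p)
</syntaxhighlight>

The output is a probability distribution over committees rather than a single random draw (with these made-up ballots, {A, C} comes out most probable); a deterministic method can then, for example, elect the most probable committee.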
 
{{stub}}
==References==
<references />
 
[[Category:Mathematics]]