Talk:Summability criterion

This is the discussion page (the "Talk:" page) for the page named "Summability criterion". Please use this page to discuss the topic described in the corresponding page in the main namespace (i.e. the "Summability criterion" page here on electowiki), or visit Help:Talk to learn more about talk pages.

DSC

Is DSC really second-order summable, or is this an error? If it's not an error, then how do you construct a cn² summation array?

I couldn't make it work. DSC needs 2^n numbers. That's still much more feasible than IRV, though. -Kevin Venzke
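Kevin's 2^n figure can be made concrete with a short sketch (the ballot format and function name below are my own, not from the article). DSC needs, for each set of candidates, the number of voters solidly committed to that set; with strict rankings those sets are exactly the prefixes of each ballot, so in the worst case the summary needs one counter per candidate subset — on the order of 2^n numbers rather than a polynomial-size array:

```python
def solid_coalition_counts(ballots):
    """One counter per candidate set that some voter is solidly committed to,
    i.e. ranks every member of the set above every non-member.  With strict
    complete rankings those sets are exactly the prefixes of each ballot, so
    the summary can need an entry for (almost) any subset: on the order of
    2^n numbers for n candidates."""
    counts = {}
    for ranking in ballots:                      # ballot = list of candidates, best first
        for size in range(1, len(ranking)):
            prefix = frozenset(ranking[:size])
            counts[prefix] = counts.get(prefix, 0) + 1
    return counts

# Three strict ballots over A, B and C (display order of the sets may vary):
print(solid_coalition_counts([["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"]]))
# {frozenset({'A'}): 2, frozenset({'A', 'B'}): 2, frozenset({'A', 'C'}): 1, frozenset({'B'}): 1}
```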


The article says "The votes cannot be compressed by summing as in other election methods because votes may need to be transferred according to which candidates are eliminated in each round." Is this really entirely accurate? In each round, the candidate to be eliminated is determined by how many votes rank each uneliminated candidate first among the uneliminated candidates. This means (assuming tie-break rules that don't add complications) that it suffices to record, for every set of 2 or more candidates, the number of votes giving top place within the set to each candidate in the set. This is a summable array of n(2^(n-1)-1) numbers, still failing the criterion but enough to make the statement that "the votes cannot be compressed by summing" incorrect. Am I missing something? - DPJ
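A rough Python sketch of DPJ's array may make the construction concrete (the ballot format, names and the naive tie-break are mine, and strict rankings are assumed). It builds, for every subset of two or more candidates, the first-preference counts within that subset, and then runs the IRV rounds from that summary alone:

```python
from itertools import combinations

def irv_summary(ballots, candidates):
    """DPJ's array: for every subset S of two or more candidates and every
    candidate c in S, the number of ballots whose top choice *within S* is c.
    That's n * (2^(n-1) - 1) counters for n candidates."""
    summary = {}
    for size in range(2, len(candidates) + 1):
        for subset in combinations(candidates, size):
            members = set(subset)
            for ranking in ballots:              # ballot = strict ranking, best first
                top = next((c for c in ranking if c in members), None)
                if top is not None:
                    key = (frozenset(subset), top)
                    summary[key] = summary.get(key, 0) + 1
    return summary

def irv_winner_from_summary(summary, candidates):
    """Run the IRV rounds from the summary alone, never touching raw ballots."""
    remaining = set(candidates)
    while len(remaining) > 1:
        tallies = {c: summary.get((frozenset(remaining), c), 0) for c in remaining}
        remaining.remove(min(tallies, key=tallies.get))      # naive tie-break
    return remaining.pop()

ballots = [["A", "B", "C"], ["B", "C", "A"], ["C", "B", "A"], ["B", "A", "C"]]
print(irv_winner_from_summary(irv_summary(ballots, ["A", "B", "C"]), ["A", "B", "C"]))  # B
```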

Interesting argument. KVenzke

For practical purposes, it's correct only for small numbers of candidates. Otherwise, the number of summation array entries will be so large that it's simply more efficient to record the actual ballots. For example, consider an IRV election with 12 candidates and 10 000 voters. With a summation array, this takes 12×(2^11-1) = 24 564 entries of 2 bytes each, for a total of 49 128 bytes. By storing the actual ballots, encoded as 4-byte numbers (the minimum needed with 479 million possible fully-ranked ballots), you would need only 40 000 bytes. Contrast this to a Condorcet matrix (n(n-1) entries), in which the summation array requires fewer bytes than the ballots even with 7000 candidates. DanBishop 17:17, 3 Jul 2005 (PDT)
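For what it's worth, DanBishop's figures are easy to reproduce (the variable names below are mine):

```python
import math

candidates, voters = 12, 10_000
array_entries = candidates * (2 ** (candidates - 1) - 1)     # 12 * 2047 = 24,564
array_bytes = array_entries * 2                               # 2-byte counters -> 49,128 bytes
full_rankings = math.factorial(candidates)                    # 479,001,600 possible ballots
bits_per_ballot = math.ceil(math.log2(full_rankings))         # 29 bits, stored as 4 bytes
ballot_bytes = voters * 4                                      # 40,000 bytes
print(array_bytes, ballot_bytes)
```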

Disproofs

I'm curious how one would prove that e.g. IRV can't possibly be summable. Most of the arguments run along the lines that IRV requires the full ballot count and that the full count is obviously not summable, but a proper proof would have to show that there exists no polynomial-size structure at all that can sum up the ballots in a way that lets an algorithm return the result IRV would. I'm not aware of any proof of that sort, so it might be useful to add some sources to the "not summable" column.

In contrast, proving that something is summable is easy enough: just show an algorithm. Kristomun (talk) 14:12, 20 February 2020 (UTC)

Lower vs higher to describe level of summability

The article says "Election methods with lower summability levels are substantially easier to implement with integrity than methods with higher summability levels". While I understand what this says, it seems potentially confusing to use "lower" to mean "more easily summable" and vice versa. I won't change it, but I'm curious if anyone has solutions. BetterVotingAdvocacy (talk) 08:41, 24 April 2020 (UTC)

Bucklin and MJ

MJ is now in summability order 2, along with Bucklin. That's where it should be: it used to be in the "non-summable" column. To use a (candidates × grades) array to calculate the MJ winner, let the entry for candidate X and grade k be the number of people who gave X the kth best grade. If X's median grade is higher than Y's, then (by definition) a majority of the voters graded X at the same grade or a higher grade than they graded Y. That's how Bucklin works. The tiebreaker can similarly be handled by subtracting one from the relevant entry (the median-grade count) for both candidates until the majority is attained at different grades for the two candidates. Kristomun (talk) 09:10, 6 May 2020 (UTC)
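A sketch of how the candidates-by-grades count array suffices to find the MJ winner (the code and example data are mine; the tie-break follows the strike-one-median-grade rule described above, and a perfect tie across all grades would still need an external tie-break):

```python
def mj_winner(grade_counts):
    """grade_counts[c] = vote counts per grade for candidate c, best grade first.
    This candidates-by-grades array is the whole summary MJ needs."""
    counts = {c: list(col) for c, col in grade_counts.items()}   # working copy
    def median_grade(col):
        majority = sum(col) // 2 + 1          # smallest strict majority of voters
        running = 0
        for grade, n in enumerate(col):
            running += n
            if running >= majority:
                return grade                  # lowest grade reached by a majority
        return len(col) - 1
    remaining = set(counts)
    while len(remaining) > 1:
        medians = {c: median_grade(counts[c]) for c in remaining}
        best = min(medians.values())                            # 0 = best grade
        remaining = {c for c in remaining if medians[c] == best}
        if len(remaining) > 1:
            if sum(counts[next(iter(remaining))]) == 0:
                break                                           # perfect tie; give up
            for c in remaining:                                 # MJ tie-break: strike one
                counts[c][medians[c]] -= 1                      # median grade per candidate
    return remaining.pop()

# Grades: index 0 = best ... 3 = worst; values are voter counts.
print(mj_winner({"X": [4, 3, 2, 1], "Y": [3, 4, 2, 1]}))        # X
```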

Condorcet-Borda hybrids

Looking at w:Nanson's method, there is a source (http://rsnz.natlib.govt.nz/volume/rsnz_46/rsnz_46_00_005780.html) showing how Nanson's method can be computed off of a pairwise matrix. If this is the case, then we can move it to 2nd-order summability; can someone verify this? BetterVotingAdvocacy (talk) 06:05, 15 May 2020 (UTC)

After having looked at the page, I agree that both Nanson and Baldwin are 2nd-order summable. This has to do with a relation between the pairwise matrix and Borda scores (in the absence of equal-rank); I just didn't think of it before now. I'll make the change. Kristomun (talk) 18:47, 22 May 2020 (UTC)
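Here's a minimal sketch of that relation (the matrix layout and names are mine; it assumes strict complete rankings, since equal-ranking breaks the Borda-from-pairwise identity mentioned above). With strict rankings, a candidate's Borda score equals the row sum of its pairwise victories, so Nanson's repeated "drop everyone at or below the average Borda score" can be run from the n×n matrix alone:

```python
def nanson_from_pairwise(pairwise, candidates):
    """pairwise[a][b] = number of voters ranking a above b.  Restricting the
    sum to the remaining candidates gives the Borda scores of the reduced
    election, so each Nanson round only needs the pairwise matrix."""
    remaining = set(candidates)
    while len(remaining) > 1:
        borda = {a: sum(pairwise[a][b] for b in remaining if b != a) for a in remaining}
        average = sum(borda.values()) / len(remaining)
        survivors = {a for a in remaining if borda[a] > average}
        if not survivors:                 # everyone exactly at the average: tie
            break
        remaining = survivors
    return remaining

# Three ballots: A>B>C, B>C>A, A>C>B
pairwise = {"A": {"B": 2, "C": 2}, "B": {"A": 1, "C": 2}, "C": {"A": 1, "B": 1}}
print(nanson_from_pairwise(pairwise, ["A", "B", "C"]))           # {'A'}
```

Baldwin works the same way, except that each round drops only the single lowest Borda scorer.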

Why isn't IRV considered summable for electing the dog catcher?

Example using spreadsheets: Create a merge of precinct votes for an election with 10 candidates where each voter ranks up to four choices, using unique identifiers (A-J) for the candidates. Make precincts (ballot boxes) of a manageable number of votes, maybe 1000.

Record the votes. Get from the ballot box to the spreadsheet somehow. Convert candidate names to identifiers.

No write-ins. Save the ballots for audit.

Sheet1: Enter votes in column A as 1 to 7 characters (A>B, A>B=D>C, etc) for up to 1000 rows

Sheet2: enter the formula =UNIQUE(Sheet1!A1:A1000) in cell A1, and enter the formula =COUNTIF(Sheet1!$A$1:$A$1000,A1) in cells B1:B1000

Send a copy of sheet2 to Central Counting. Can’t be more than all 1000 rows, right? Unlikely that every odd combination of preferences will occur in a meaningful election. But with 10 candidates x 4 choices x 2 operators (= or >) between each, how many is that? Take a reasonably limited US presidential election (say 40 candidates, pick your top 10). How many different voting combinations will that actually come out to in each ballot box? In total? Five million is my spreadsheet's limit.

Central Counting imports/copies/pastes all the precinct Sheet2s into Sheet1 of its own spreadsheet. Make a Sheet2 there too, putting a =UNIQUE() formula in column A and =SUMIF() in column B to total up all the rows arriving from the precincts in Sheet1. For rollups to state- and national-level counts, Central Counting sends its Sheet2s on for a similar merge at those higher levels. At the level of decision, copy columns A and B into a vote counter with enough rows to handle the actual number of voting combinations.
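In code terms, the sheets are tallying ballots by their ballot string and then adding matching tallies together. A minimal Python equivalent of the precinct step and the Central Counting step (my own naming, just restating the procedure above):

```python
from collections import Counter

def precinct_summary(ballot_strings):
    """The Sheet2 step: one row per distinct ballot string such as 'A>B=D>C',
    with the number of times it was cast (UNIQUE + COUNTIF)."""
    return Counter(ballot_strings)

def merge(summaries):
    """The Central Counting step: add up the counts of identical ballot
    strings arriving from each precinct (UNIQUE + SUMIF)."""
    total = Counter()
    for summary in summaries:
        total.update(summary)
    return total

precinct_1 = precinct_summary(["A>B", "A>B", "C>A>B=D"])
precinct_2 = precinct_summary(["A>B", "B"])
print(merge([precinct_1, precinct_2]))   # Counter({'A>B': 3, 'C>A>B=D': 1, 'B': 1})
```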

Isn’t that summable?

RalphInOttawa (talk) 23:22, 20 December 2023 (UTC)

The summability criterion has a very particular definition: it means that the storage space required grows only logarithmically in the number of voters and polynomially in the number of candidates. The format you propose is linear (i.e. not logarithmic) in the number of voters and thus isn't counted as summable of any order.
Practically speaking, there's no problem using computers to store the raw ballots, even for very large elections. Suppose that we have a ranked worldwide election with 15 candidates, and turnout is 100%. That's 8.1 billion ballots. We can store a rank order of 15 candidates with equal-rank allowed in 48 bits. Total storage required is thus 48.6 GB, which fits comfortably on modern memory cards.
But suppose you did the same count with FPTP. Now you can write the results down on paper (15 candidates, a max of ten digits each) and it's pretty easy to see who's in the lead. With a Condorcet matrix, it's not quite as easy to interpret the matrix, but it should fit on paper and with some calculation you should be able to see who the CW is.
So in a way, summability is more about being "human-summable" than about being manageable, period. A summable method keeps things at a human scale, and it makes recounts easier to verify through their summaries. Usually the results can be interpreted more easily as well: IRV results can be drawn as Sankey diagrams, for example, but those don't tell you who would have won if things had changed slightly. One caveat about this: higher-order summable methods lose these benefits quickly (e.g. a method with an n^4 summary array would be hard to interpret and deal with). Kristomun (talk) 18:52, 21 December 2023 (UTC)
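For anyone checking the storage estimate above: the 48-bit figure comes from counting weak orderings (rankings with ties allowed) of 15 candidates. A few lines of Python reproduce both numbers; this is just the arithmetic above restated:

```python
import math

def weak_orderings(n):
    """Ordered Bell number: ways to rank n candidates when ties are allowed."""
    a = [1]
    for m in range(1, n + 1):
        a.append(sum(math.comb(m, k) * a[m - k] for k in range(1, m + 1)))
    return a[n]

ballots = 8_100_000_000                                        # 100% worldwide turnout
bits_per_ballot = math.ceil(math.log2(weak_orderings(15)))     # 48
total_gb = ballots * bits_per_ballot / 8 / 1e9
print(bits_per_ballot, round(total_gb, 1))                     # 48 48.6
```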