To Beat Gerrymandering, Do the Math

(Bloomberg Opinion) -- Gerrymandering is one of those topics that makes U.S. Supreme Court justices squirm. The problem is that the Constitution gives elected lawmakers the power to draw the lines around legislative districts, and it’s hard for judges to determine when politicians have abused their rightful authority. Just last year, the justices sidestepped cases that asked them to decide whether district maps engineered for partisan advantage clashed with constitutionally protected rights.

Now, however, the court is considering challenges to district maps in North Carolina and Maryland, and the justices are looking for a practical standard that courts can use to decide how much partisanship is too much.

Thankfully, mathematicians have stepped up to help. At the heart of their approach is the idea of testing whether the map in question is abnormal compared with random maps that account only for legal and geographical constraints and are generated in a way that’s oblivious to partisan machinations.

In 2018, North Carolina’s statewide congressional vote was almost evenly split between the two major U.S. political parties, but it ended up sending 10 Republicans and 3 Democrats to the U.S. House of Representatives. Of 24,000 random maps generated by a team led by Jonathan Mattingly, a mathematician at Duke University, only 162 led to such an unbalanced result — that’s less than 0.7 percent. The conclusion is that the current map is an outlier in the distribution of possible maps.
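
The final step of that test is plain arithmetic. Here is a minimal Python sketch using the figures reported above; generating the 24,000 maps is the hard part (the Duke team used Markov chain Monte Carlo sampling), and the sketch simply takes the resulting counts as given.

    # Final step of the outlier test: what fraction of nonpartisan random
    # maps produce a result at least as lopsided as the enacted map?
    random_maps = 24_000              # ensemble size from the Duke study
    as_extreme = 162                  # random maps giving a 10-3 split or worse
    outlier_fraction = as_extreme / random_maps
    print(f"{outlier_fraction:.3%}")  # 0.675%, the "less than 0.7 percent" above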

As simple as it is, though, the math wasn’t persuasive to some of the justices. At the court’s oral arguments in March, Justices Brett Kavanaugh, Samuel Alito and Neil Gorsuch suggested that disqualifying district maps that are shown to be statistical outliers would be the same as requiring the share of seats won by each party to match its share of the vote. Kavanaugh, for example, asserted that those challenging the North Carolina map wish to “mandate proportional representation,” an impractical standard that the courts have long rejected.

But the outlier-detection approach actually isn’t a stalking horse for proportional representation.

As a case in point, consider the 2000 presidential election, in which George W. Bush claimed 35% of the statewide vote in Massachusetts. If we used that year’s voting data to redistrict the state today, a proportional representation system would require that Republicans claim three of the state’s nine seats in the U.S. House of Representatives. Yet it would have been mathematically impossible to assemble even a single district with a majority of Bush voters, because those voters were almost uniformly distributed among the townships that serve as building blocks for districts. Therefore, a map that gave Democrats all nine seats wouldn't just be statistically typical — it would be the only possible option.
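
The arithmetic behind that impossibility claim is worth spelling out: a district’s vote share is a population-weighted average of the shares in the townships it contains, so it can never exceed the highest township-level share. A short Python illustration, with hypothetical township numbers standing in for the real data:

    # A district's Bush share is a weighted average of its townships' shares,
    # so if every township is below 50%, every possible district is too.
    # The (voters, Bush share) pairs below are hypothetical.
    townships = [(12000, 0.34), (8000, 0.38), (15000, 0.33), (9000, 0.41)]
    voters = sum(v for v, _ in townships)
    district_share = sum(v * s for v, s in townships) / voters
    assert district_share <= max(s for _, s in townships)
    print(f"district-wide Bush share: {district_share:.1%}")  # 35.8%, no majority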

Other criticisms of the outlier-detection approach are also refutable. In an interview with Nature, Michael McDonald, a political scientist at the University of Florida, argued that since “there are more ways to draw voting districts in the U.S. than there are quarks in the universe,” it’s impossible to quantify how random a sample of maps is. But drawing conclusions about much larger populations from samples is precisely the point of the discipline of statistics. In fact, a beautiful paper published in the Proceedings of the National Academy of Sciences in 2017 proves that one can evaluate gerrymandering in a statistically rigorous way even without generating perfectly random samples of maps.

It’s encouraging that the Supreme Court does appear more receptive to the outlier-detection approach than it has been to another measure of gerrymandering, the efficiency gap, which has serious shortcomings. That approach, which seeks to compute the number of votes that are “wasted” by each party, was considered by the court in 2017 and scorned by Chief Justice John Roberts as “sociological gobbledygook.”
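
For context, the efficiency-gap computation is short enough to state precisely: a vote is “wasted” if it is cast for a losing candidate or pushes a winner past the bare-majority threshold, and the gap is the difference between the two parties’ wasted votes divided by all votes cast. A Python sketch, with invented district totals:

    # Efficiency gap (Stephanopoulos-McGhee): wasted votes are all votes for
    # the loser plus the winner's votes beyond a bare majority.
    def wasted(winner, loser):
        majority = (winner + loser) // 2 + 1
        return winner - majority, loser  # (winner's waste, loser's waste)

    districts = [(70, 30), (45, 55), (52, 48)]  # invented (A, B) vote totals
    waste_a = waste_b = total = 0
    for a, b in districts:
        total += a + b
        if a > b:
            wa, wb = wasted(a, b)
        else:
            wb, wa = wasted(b, a)
        waste_a += wa
        waste_b += wb
    print(f"{(waste_a - waste_b) / total:+.1%}")  # negative favors party A here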

The court is expected to rule on the North Carolina and Maryland gerrymandering cases next month. The outlier-detection approach deserves to become the new standard for identifying unconstitutional gerrymanders. It would give justices a rigorous way to identify and invalidate blatantly unfair maps.

But even if these maps were thrown out, there would still be the question of how to design truly fair maps to replace them.

Some proposals to do this draw on fair division, an area of mathematical economics that studies formal notions of fairness and ways of achieving them. In particular, the classic “I cut, you choose” principle was the inspiration for a fair districting protocol that I created with my Carnegie Mellon colleague Wesley Pegden and Dingli Yu of Princeton University, as well as for another method suggested by Steven Brams, a political scientist at New York University. By contrast, a procedure dubbed the shortest splitline algorithm was designed to produce districts with simple shapes while ignoring political considerations; the procedure itself is so straightforward that a state legislature could literally write it into law.
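
To give a flavor of that simplicity, here is a toy Python version of the splitline recursion. It takes two big shortcuts (people become unit-population points, and only axis-aligned cuts are considered), whereas the real algorithm searches over lines at every angle on actual geography; the recursive skeleton, though, is the same: split the population in the required ratio along the shortest available line, then recurse on each side.

    # Toy shortest-splitline recursion over unit-population points.
    def splitline(points, n):
        if n == 1:
            return [points]
        k = n // 2                        # districts on one side of the cut
        cut = round(len(points) * k / n)  # population that side must hold
        best = None
        for axis in (0, 1):
            ordered = sorted(points, key=lambda p: p[axis])
            side_a, side_b = ordered[:cut], ordered[cut:]
            other = 1 - axis
            # Crude proxy for the cut's length: the points' spread along
            # the perpendicular axis.
            length = max(p[other] for p in points) - min(p[other] for p in points)
            if best is None or length < best[0]:
                best = (length, side_a, side_b)
        _, side_a, side_b = best
        return splitline(side_a, k) + splitline(side_b, n - k)

    # 90 "people" on a 9-by-10 grid, carved into three districts of 30.
    grid = [(x, y) for x in range(9) for y in range(10)]
    print([len(d) for d in splitline(grid, 3)])  # -> [30, 30, 30]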

Whichever approach wins out, it’s clear that math has a prominent role to play in the legal debate about gerrymandering. At least a fast-growing coalition of scientists spanning multiple disciplines thinks so. And we can prove that anyone who disagrees is an outlier.

To contact the editor responsible for this story: Jonathan Landman at jlandman4@bloomberg.net

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Ariel Procaccia is an associate professor in the computer science department at Carnegie Mellon University. His areas of expertise include artificial intelligence, theoretical computer science and algorithmic game theory.

©2019 Bloomberg L.P.