To Protect Consumers, Watch the Finance Algorithms

The Biden administration is planning to install Rohit Chopra, currently a member of the Federal Trade Commission, as head of the Consumer Financial Protection Bureau. I think he’s a great choice, and I have a piece of advice: Develop new and better ways to combat predatory finance, before it does too much damage.

Chopra has ample progressive cred. He helped Elizabeth Warren set up the CFPB in 2011, before the Trump administration started to dismantle it. At the FTC, he was at the vanguard of efforts to combat the misuse of people’s personal data. In one recent case that I followed, he supported requiring a facial-recognition company to delete an algorithm that it had trained on improperly obtained photos and personal information — and wanted to impose a fine that would deter similar transgressions. So I believe him when he says he is serious about protecting consumers.

That said, there’s a ton of work to be done — particularly in addressing the kinds of financial predation that inspired the creation of the CFPB. Back in Obama’s second term, the bureau was on the cutting edge of understanding things like discriminatory subprime auto lending, even developing a methodology to infer racial characteristics that lenders don’t collect or report directly. Amid the doldrums of the Trump administration, though, the classic human lending transgressions — confusing term sheets, fraudulent advertising aimed at veterans and seniors, exorbitant and manipulative overdraft fees — have increasingly given way to algorithms that can be just as unfair but that regulators understand far less well.
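For readers curious about that inference methodology: the approach the bureau published is known as Bayesian Improved Surname Geocoding, or BISG, which combines surname and neighborhood demographics via Bayes’ rule. Here is a minimal sketch in Python; the probability tables are invented stand-ins for the Census surname lists and tract-level data the real method draws on.

```python
# Sketch of a BISG-style race proxy, in the spirit of the CFPB's published
# methodology. All probability tables below are illustrative placeholders;
# the real method uses Census surname lists and tract-level demographics.

RACES = ["black", "white", "hispanic", "asian", "other"]

# P(race | surname): hypothetical values for one surname.
p_race_given_surname = {"black": 0.20, "white": 0.60, "hispanic": 0.10,
                        "asian": 0.05, "other": 0.05}

# P(race | census tract): hypothetical values for one tract.
p_race_given_tract = {"black": 0.50, "white": 0.30, "hispanic": 0.10,
                      "asian": 0.05, "other": 0.05}

# P(race): hypothetical national marginals.
p_race = {"black": 0.13, "white": 0.60, "hispanic": 0.18,
          "asian": 0.06, "other": 0.03}

def bisg_posterior(p_surname, p_tract, p_marginal):
    """Combine surname and geography evidence via Bayes' rule:
    P(r | surname, tract) is proportional to P(r|surname) * P(r|tract) / P(r)."""
    unnormalized = {r: p_surname[r] * p_tract[r] / p_marginal[r] for r in RACES}
    total = sum(unnormalized.values())
    return {r: v / total for r, v in unnormalized.items()}

print(bisg_posterior(p_race_given_surname, p_race_given_tract, p_race))
```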

Chopra’s background positions him well to get ahead of this trend. To that end, the bureau will need its own algorithms for assessing what is fair, and the data to run them on.

I happen to have some experience in the area: I’ve worked with attorneys general on specific cases of unfair auto and payday lending. To convince a judge that certain activities were illegal, we had to come up with quantitative measures — such as, say, the difference in interest rates charged to otherwise similar Black and White borrowers — and demonstrate that they were out of bounds. We developed similar rules to determine how badly individual borrowers were treated, and how much compensation they deserved. These rules weren’t perfect, but they certainly helped get a grip on the problem.
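To make that concrete, here is a toy version of one such measure: compare average APRs charged to Black and White borrowers within matched credit-score buckets, then average the gaps weighted by loan count. The records, the bucketing scheme and the function are all invented for illustration, not taken from any actual case.

```python
from collections import defaultdict

# Illustrative loan records: (race, credit-score bucket, APR).
loans = [
    ("black", "620-659", 0.129), ("white", "620-659", 0.112),
    ("black", "620-659", 0.131), ("white", "620-659", 0.108),
    ("black", "660-699", 0.104), ("white", "660-699", 0.095),
]

def bucket_rate_gap(loans):
    """Within each credit-score bucket, compare the mean APR charged to
    Black vs. White borrowers; return the loan-weighted average gap."""
    sums = defaultdict(lambda: [0.0, 0])  # (race, bucket) -> [apr_sum, count]
    for race, bucket, apr in loans:
        s = sums[(race, bucket)]
        s[0] += apr
        s[1] += 1

    gap_total, n_total = 0.0, 0
    for b in {bucket for _, bucket, _ in loans}:
        b_sum, b_n = sums[("black", b)]
        w_sum, w_n = sums[("white", b)]
        if b_n and w_n:  # only compare buckets with borrowers of both groups
            gap = b_sum / b_n - w_sum / w_n
            weight = b_n + w_n
            gap_total += gap * weight
            n_total += weight
    return gap_total / n_total if n_total else 0.0

print(f"Weighted APR gap: {bucket_rate_gap(loans):.4f}")
```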

So why not use such rules more proactively? Instead of waiting months or years for a lender’s predatory practices to generate a steady stream of customer complaints, monitor its activity in something closer to real time. For example, require companies to report certain data for a fairness assessment at the end of each quarter. The relevant information could include interest-rate differentials by race and gender, one-year default rates, and total interest and fees as a percentage of principal. Each measure would have a threshold of acceptability, which if exceeded could trigger a closer look at the business. Given that companies should be collecting such information in any case, compliance shouldn’t be too difficult.
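A rough sketch of what that quarterly screen might look like in code. The metric names and threshold values here are placeholders I’ve made up for illustration, not anything the bureau has proposed; in practice the thresholds would be set by regulators and tightened over time.

```python
from dataclasses import dataclass

@dataclass
class QuarterlyReport:
    lender: str
    apr_gap_by_race: float        # interest-rate differential, percentage points
    apr_gap_by_gender: float
    one_year_default_rate: float
    fees_pct_of_principal: float  # (total interest + fees) / principal

# Hypothetical acceptability thresholds, one per metric.
THRESHOLDS = {
    "apr_gap_by_race": 0.25,
    "apr_gap_by_gender": 0.25,
    "one_year_default_rate": 0.15,
    "fees_pct_of_principal": 0.35,
}

def flag_for_review(report: QuarterlyReport) -> list[str]:
    """Return the metrics that exceed their thresholds; any hit would
    trigger a closer look at the lender's business."""
    return [name for name, limit in THRESHOLDS.items()
            if getattr(report, name) > limit]

report = QuarterlyReport("Acme Auto Finance", 0.4, 0.1, 0.18, 0.2)
print(flag_for_review(report))  # -> ['apr_gap_by_race', 'one_year_default_rate']
```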

This is not foolproof. Companies could game the measures, or even outright lie — as Volkswagen famously did in emissions tests. Every now and then, regulators would have to perform a “road test” to make sure the data they were receiving conformed to reality. That said, setting some clear thresholds — which could be tightened over time — would help the CFPB prevent bad behavior, rather than merely punish the perpetrators after the damage has been done.

This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.

Cathy O’Neil is a Bloomberg Opinion columnist. She is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of “Weapons of Math Destruction.”
