Pointing out other people’s flaws is easy to do.
Pointing out the flaws of our leaders is probably a tradition as old as our species. Our governments, our institutions, and our laws—they’re imperfect. Wherever you live, this is invariably true.
It is not so easy, however, to make government better. You have to coordinate a coalition of people who think like you do. That coalition has to be large enough to outvote all other coalitions that do not agree with you. To build a coalition that big, you have to compromise with people who sort of think like you do, but sort of don’t. By the time you build a coalition that big, the ideological alignment of your coalition is weak, and it’s hard to agree on much.
Against this backdrop, our government has to try to solve real-life problems. Problems like how to care for the sick and the poor without disincentivizing growth; how to keep our country safe without overly restricting people’s rights; how to safeguard personal health decisions while respecting many citizens’ deepest religious convictions.
Among these many problems is the one that interests me the most: how to regulate technology. This is a problem that, to paraphrase Cass Sunstein, is singularly ill suited for government regulation.
When it comes to problems caused by technology, most governments are inclined to try to do something, even when they don’t understand the problems they’re trying to solve. Stated another way, governments suffer from action bias. And journalists, pundits, and general market commentators make their living spewing action bias.
Unfortunately, just doing something isn’t likely to solve our problems unless we carefully consider the costs and benefits of what we’re trying to do before we do it.
What’s more, the philosophy of “just do something” places a lot of faith in those who are responsible for the doing of that something. It is as if we think our leaders, whom we know to be deeply flawed, will somehow act as philosopher-kings and queens when asked to regulate one of the most opaque areas of law, if not the most opaque.
Perhaps some skepticism is warranted there.
The trap of urging our governments to just do something is tempting, and even high-quality journalists occasionally fall into it.
Last month, the normally excellent Tom Standage of The Economist (my favorite news source) wrote:
The General Data Protection Regulation, a set of rules on data protection and privacy introduced by the European Union in May 2018, was a step in the right direction . . . . Critics will argue that such rules hamper innovation and strengthen the internet giants, which can afford the costs of regulatory compliance in a way that startups cannot. They have a point. But Europe’s approach seems preferable to America’s more hands-off stance.
There you have it. The EU, having just done something, however flawed it might be, has made progress.
This is not true.
A regulation is better than no regulation if, and only if, its benefits outweigh its harms. Simply throwing regulations against the wall, particularly if those regulations are likely to stifle the most dynamic sector of the economy, the tech sector, is a terrible idea.
Governments should regulate harms, not technology. The former is challenging; the latter is reckless and impossible to do effectively.
And the way to regulate harms is to identify and quantify them, and then attempt to reduce them in a way that does not create greater harms elsewhere in the economy.
Perhaps that is the something our regulators should be doing.