Tech Law Policy Blog: Tracking the Most Important Research and Developments in Tech Law & Policy

Friday Links 4/26/19: Mental Masturbation and Trolley Problems


There appears to be some academic buzz about the new paper by Samantha Godwin on the Ethics and Public Health of Driverless Vehicle Collision Programming.

I believe this is the most overdone and impractical area of tech policy today. As MIT professor Rodney Brooks said in a blog post a couple of years ago:

Here’s a question to ask yourself. How many times when you have been driving have you had to make a forced decision on which group of people to drive into and kill? You know, the five nuns or the single child? Or the ten robbers or the single little old lady? For every time that you have faced such a decision, do you feel you made the right decision in the heat of the moment? Oh, you have never had to make that decision yourself? What about all your friends and relatives? Surely they have faced this issue?

And that is my point. This is a made-up question that will have no practical impact on any automobile or person for the foreseeable future. Just as these questions never come up for human drivers, they won’t come up for self-driving cars. It is pure mental masturbation dressed up as moral philosophy. You can set up web sites and argue about it all you want. None of that will have any practical impact, nor lead to any practical regulations about what can or cannot go into automobiles. The problem is both nonexistent and irrelevant.

Eric Goldman has a new paper coming out on why Section 230 is better than the First Amendment.

Goldman is one of the foremost Section 230 scholars and one of its strongest advocates. He acknowledges that Section 230 is under scrutiny and may not survive in its current form much longer. But I don’t think the article does enough to meet head-on the very real policy concerns of aggrieved parties: people harmed when services that profit from user-generated content fail to address, or even encourage, defamatory or illegal conduct on their platforms.

As detailed by Danielle Keats Citron and Benjamin Wittes in The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity, here are some providers and users whose activities have been immunized under § 230:

- a revenge porn operator whose business was devoted to posting people’s nude images without consent

- a gossip site that urged users to send in “dirt” and fanned the flames with snarky comments

- a message board that knew about users’ illegal activity yet refused to collect information that would allow them to be held accountable

- a purveyor of sex-trade advertisements whose policies and architecture were designed to prevent the detection of sex trafficking

- an auction site facilitating the sale of goods that risked serious harm

- an individual who forwarded a defamatory email with a comment that “[e]verything will come out to the daylight”

- a hook-up site that ignored more than fifty reports that one of its subscribers was impersonating a man and falsely suggesting his interest in rape, resulting in hundreds of strangers confronting the man for sex at work and home

These facts are powerful and compelling. And the policy trend seems to be shifting, in the US, and even more so abroad, toward holding companies that make their living off third-party content accountable for what’s posted there.

I think there’s an argument, in spite of all of the above, that the speech-protecting power of § 230 still outweighs the public policy interest in allowing lawsuits against companies that profit from, and sometimes even encourage, potentially harmful third-party content. But Goldman needs to beef up that part of the article, because right now that argument is losing the debate.

Last but not least, Ben Evans of Andreessen Horowitz waxes eloquent about bias in AI.

A few highlights:

Such issues are not new or unique to machine learning – all complex organizations make bad assumptions and it’s always hard to work out how a decision was taken. The answer is to build tools and processes to check, and to educate the users – make sure people don’t just ‘do what the AI says’. Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning…

Hence, the scenario for AI bias causing harm that is easiest to imagine is probably not one that comes from leading researchers at a major institution. Rather, it is a third tier technology contractor or software vendor that bolts together something out of open source components, libraries and tools that it doesn’t really understand and then sells it to an unsophisticated buyer that sees ‘AI’ on the sticker and doesn’t ask the right questions, gives it to minimum-wage employees and tells them to do whatever the ‘AI’ says. This is what happened with databases. This is not, particularly, an AI problem, or even a ‘software’ problem. It’s a ‘human’ problem. 

Worth reading in its entirety.
