Tech Law Policy Blog: Tracking the Most Important Research and Developments in Tech Law & Policy

Friday Links 3/29/19

The big news in tech policy this week: the European Union signed off on its proposed “link tax” and revised copyright regime. In essence, if the law goes into effect, any company with more than 50 employees or €10 million in annual revenue would have to negotiate a license with a content provider before linking to copyrighted content. Given how time-sensitive internet linking is, that’s practically impossible to do.

That’s a sea change from current policy around the world.

Consistent with its prior comments, the Electronic Frontier Foundation thinks this is a horrific idea. Many other smart commentators agree that the EU is worse than tone-deaf on this one. It’ll be interesting to see whether this comes to fruition and actually becomes law. Given the strong majority vote and how long this proposal has been under discussion, it looks like it’s going to happen. If it does, I’ll be curious whether it changes the way the internet works in Europe or whether it also impacts the rest of the world.

More bad news for the tech giants: Australia threatens jail time for executives of companies that allow offensive or violent content to be published on their platforms.

Also, Facebook plans a fight with Belgium over its ability to use cookies to track users’ and non-users’ web-surfing habits.

The Economist dedicated its most recent issue to Europe’s increasingly aggressive regulatory stance toward the big tech firms.

Lots of great law review articles have come out lately.

Lina Khan and David Pozen offer a skeptical view of information fiduciaries. The authors suggest, correctly, that information fiduciary obligations would run directly counter to these corporations’ obligations to their stockholders, and would therefore be unworkable:

Like other corporations with comparable business models, Facebook therefore has a strong economic incentive to maximize the amount of time users spend on the site and to collect and commodify as much user data as possible. By and large, addictive user behavior is good for business. Divisive and inflammatory posts are good for business. Deterioration of privacy and confidentiality norms is good for business. Reforms to make the site less addictive, to deemphasize sensationalistic content, and to enhance personal privacy would arguably be in the best interests of users. Yet each of these reforms would also pose a threat to Facebook’s bottom line and therefore to the interests of shareholders.

Khan and Pozen (internal citations omitted)

Worth noting that the authors do not seem to favor market-based solutions, but rather “a growing body of neo-Progressive scholarship that urges greater emphasis on ‘structural’ (or ‘infrastructural’) solutions to problems of discrimination and domination online.”

John Linarelli of Durham University writes about Advanced Artificial Intelligence and Contract, and Daniel Gervais writes about Artificial Intelligences as Authors.

Both articles posit interesting theoretical questions about artificial intelligences as agents. My reaction to both is the same: given the modern trajectory of AI development, the most advanced forms of AI are always built by the leading tech companies. As such, any advanced AI is likely to be the intellectual property of a major tech company. AlphaGo, for example, is the property of Alphabet, Inc.

If Alphabet, Inc. developed an AlphaGo equivalent that wrote poetry or entered into contracts, it would be doing so on behalf of Alphabet. Any copyrights or contractual obligations would accrue to the company that created it.

All that’s to say, for now, and for the foreseeable future, the best forms of AI are going to be the intellectual property of the biggest tech companies, and that’s the end of the story. All questions about the theoretical autonomy of these agents seem to miss that fact.

Another good law review article, by Ellen P. Goodman, concerns Algorithmic Change and Political Failure. It’s a good warning for people who expect human agents to gently acquiesce to changes proposed by algorithms. The story goes like this: MIT programmers designed a busing system that reduced systemic racial biases in district busing practices and allocated students to schools more efficiently, with start times better suited to their age groups. But the public revolted. Or, at least, a vocal minority of the public revolted. The plan was opposed by wealthy parents and the NAACP and had to be scrapped.

The best of the bunch is Daron Acemoglu and Pascual Restrepo’s “The Wrong Kind of AI? Artificial Intelligence and the Future of Labor Demand.” The authors argue that policymakers need to provide incentives for Silicon Valley to develop labor-reinstating AI rather than labor-displacing AI. The current path of automation has had a devastating impact on labor and labor’s role in society; without careful navigation, it could lead to grave social upheaval.

The biggest weakness of the article is that the authors don’t specify which policy changes would effectuate this shift. Perhaps a tax credit for startups that create, directly or indirectly, a certain number of high-quality jobs over a certain number of years? That would be very hard to define effectively, but it’s certainly an idea worth exploring further.
