States are tripping over themselves to pass ever-stricter privacy laws. Texas just enacted a new data breach notification law. Nevada and Oregon have also expanded their privacy laws; Nevada's includes a right to opt out of the sale of personal data, the first such law in the United States. Maine passed a law that may be stricter still, one that requires ISPs to obtain affirmative consent before sharing customer data.
Frank Pasquale proposes four rules for robots. The first is "Robots Should Complement Rather than Substitute for Professionals." I suspect that rule conflicts with Aziz Huq's proposed "right to a well-calibrated machine decision."
Ben Thompson dives even deeper into tech and antitrust. His main conclusion: while the FTC and DOJ may have Google, Amazon, Apple, and Facebook in their crosshairs, Google aside, there is likely no basis for a successful claim against any of them under current antitrust law.
Mike Masnick reminds us again that content moderation at scale is impossible. I'm usually inclined to agree with Masnick, and I think he's one of the smartest people out there on tech policy issues. But I think he's engaging in hyperbole here. Content moderation at scale will forever be messy and imperfect. It will produce false positives and false negatives. But as Twitter found out the hard way, forgoing content moderation altogether is untenable for a major platform. All major platforms have no choice but to moderate, and their content-moderation decisions will justifiably be scrutinized. That's the reality of being big and being a medium for sharing information on the internet, now and for the foreseeable future.
Law review articles:
Clark D. Asay writes about Artificial Stupidity. Rather than analyzing current developments in modest, limited, "narrow" artificial intelligence, Asay analyzes the policy choices that might help lead to general artificial intelligence.