
The Cyberlaw Podcast: Triangulating Apple

Stewart Baker
Tuesday, January 9, 2024, 4:44 PM

Published by The Lawfare Institute
in Cooperation With
Brookings

Returning from winter break, this episode of the Cyberlaw Podcast covers a lot of ground. The story I think we’ll hear the most about in 2024 is the remarkable exploit used to compromise several generations of Apple iPhones. The question I think we’ll be asking for the next year is simple: How could an attack like this be introduced without Apple’s knowledge or assistance? We don’t get to this question until near the end of the episode, and I don’t claim great expertise in exploit design, but it’s very hard to see how such an elaborate compromise could be slipped past Apple’s security team. The second question is which government created the exploit. It might be a scandal if it were done by the U.S. But it would be far more of a scandal if done by any other nation.

Jeffery Atik and I lead off the episode by covering recent AI legal developments that simply underscore the obvious: AI engines can’t get patents as “inventors.” But it’s quite possible that they’ll make a whole lot of technology “obvious” and thus unpatentable.

Paul Stephan joins us to note that the National Institute of Standards and Technology (NIST) has come up with some good questions about standards for AI safety. Jeffery notes that U.S. lawmakers have finally woken up to the EU’s misuse of tech regulation to protect the continent’s failing tech sector. Even the continent’s tech sector seems unhappy with the EU’s AI Act, which was rushed to market in order to beat the competition and is therefore flawed and likely to yield unintended and disastrous consequences, a problem that inspires this week’s Cybertoonz.

Paul covers a lawsuit blaming AI for the wrongful denial of medical insurance claims. As he points out, insurers have been able to wrongfully deny claims for decades without needing AI. Justin Sherman and I dig deep into a New York Times article claiming to have found a privacy problem in AI. We conclude that AI may have a privacy problem, but extracting a few email addresses from ChatGPT doesn’t prove the case.

Finally, Jeffery notes an SEC “sweep” examining the industry’s AI use.

Paul explains the competition law issues raised by app stores – and the peculiar outcome of litigation against Apple and Google. Apple skated in a case tried before a judge, but Google lost before a jury and entered into an expensive settlement with other app makers. Yet it’s hard to say that Google’s handling of its app store monopoly is more egregiously anticompetitive than Apple’s.

We do our own research in real time in addressing an FTC complaint against Rite Aid for using facial recognition to identify repeat shoplifters.  The FTC has clearly learned Paul’s dictum, “The best time to kick someone is when they’re down.” And its complaint shows a lack of care consistent with that posture.  I criticize the FTC for claiming without citation that Rite Aid ignored racial bias in its facial recognition software.  Justin and I dig into the bias data; in my view, if FTC documents could be reviewed for unfair and deceptive marketing, this one would lead to sanctions.

The FTC fares a little better in our review of its effort to toughen the internet rules on child privacy, though Paul isn’t on board with the whole package.

We move from government regulation of Silicon Valley to Silicon Valley regulation of government. Apple has decided that it will now require a judicial order to give governments access to customers’ “push notifications.” And, giving the back of its hand to crime victims, Google decides to make geofence warrants impossible by blinding itself to the necessary location data. Finally, Apple decides to regulate India’s hacking of opposition politicians and runs into a Bharatiya Janata Party (BJP) buzzsaw.

Paul and Jeffery decode the EU’s decision to open a DSA content moderation investigation into X. We also dig into the welcome failure of an X effort to block California’s content moderation law.

Justin takes us through the latest developments in Cold War 2.0. China is hacking our ports and utilities with intent to disrupt (as opposed to spy on) them. The U.S. is discovering that derisking our semiconductor supply chain is going to take hard, grinding work.

Justin looks at a recent report presenting actual evidence on the question of TikTok’s standards for boosting content of interest to the Chinese government. 

And in quick takes, 

Download 486th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.

Stewart A. Baker is a partner in the Washington office of Steptoe & Johnson LLP. He returned to the firm following 3½ years at the Department of Homeland Security as its first Assistant Secretary for Policy. He earlier served as general counsel of the National Security Agency.
