
A Manifesto: Using Empirical Research in Journalism and Scholarship to Understand Big Tech

Kate Klonick
Thursday, May 9, 2019, 9:55 AM

Mark Zuckerberg on stage at Facebook's F8 Conference, 2014. (Source: Flickr/Maurizio Pesce)

On April 22, Julia Angwin, an award-winning investigative journalist specializing in technology, was somewhat bizarrely fired as editor-in-chief from the fledgling media company she’d founded. The company, The Markup, was created to focus on data-driven journalism, and in solidarity five members of the seven-person editorial team resigned as well. Shortly afterward, Kashmir Hill, one of the leading and longest-working reporters covering privacy and technology, was among those unceremoniously let go in a round of layoffs at the tech publication Gizmodo as the site changed ownership.

Beyond the immediate cost in new reporting and thoughtful projects, the loss of Angwin’s and Hill’s reporting—even if it’s only temporary—should be a reminder of the power of their new brand of journalism to shape both the internal policies of technology companies and traditional government regulation. Theirs is a model that should not be confined to journalism but should be expanded into legal scholarship as well.

There are lots of ways to cover tech, but the past few decades have shown that the techniques most effective at moving law and policy forward take time. One of Angwin’s most consequential pieces came together after a full three years of collecting data and asking subjects to test and report the advertising they were being shown by Facebook’s algorithm. Angwin’s reporting eventually revealed that Facebook was in fact letting advertisers exclude audiences by race. It was a huge finding, and it resulted in a change to Facebook’s policy going forward. “Facebook’s changes to its ad targeting system are a huge win for civil rights,” wrote Angwin in a March 2019 Twitter thread explaining the long road it had taken to produce that work.

Similarly, Hill has spent days, weeks and months experimenting with surveillance technology. In 2018, Hill voluntarily wired her entire home with “smart” and surveillance technology for one month—subjecting herself to everything from smart toothbrushes to a smart bed. Assisted by co-reporter Surya Mattu, Hill was able to demonstrate just how much information these devices conveyed to third parties without the consent of friends or family entering the home. A year later, Hill ran a different type of experiment, this time blocking Amazon, Facebook, Google, Microsoft and Apple from her life—each for a week at a time, and then all five at once. The project is currently up for a Webby, an award honoring “the best of the Internet.” As calls for antitrust measures continue to percolate, Hill’s 2019 series has become one of the most widely heralded examples of the monopoly-esque power of big tech.

This reporting takes time. But even more importantly, it’s characterized by scientific rigor. That is the approach Angwin set out to build her entire investigative team around when she started The Markup. The adage goes that journalism is the first draft of history—and as Angwin told me, “[I]t’s our job to have the best possible draft of history.” Even after the collapse of The Markup, she aims to continue this effort, relying on the scientific method to guide her reporting. This is easier with more quantitative stories than with others, but she still believes in the importance of raising the bar. “One of the easiest ways we better journalism is just by increasing the sample size of journalism. The old adage used to be three quotes,” she laughed. “I think we can do a little bit better than that.”

This kind of work is particularly necessary because technology companies are both opaque in their operation and incredibly powerful. Problems that arise from big tech—be they in the wrong kind of content staying up, the wrong kind of content coming down, harassment, terrorism, algorithmic bias or fake news—are not issues the public can start to address until they know what’s going on.

The lessons from this style of tech reporting go beyond journalism. The legal academy is used to starting with an argument about where the law should go. From this viewpoint, the scientific method—which moves toward an argument only if one is suggested by evidence—can seem a strange beast. But it need not be. For years, legal scholars have been developing work that surveys the harms of technology and uses that evidence to argue for change. Danielle Citron has spent her career documenting the dark side of the internet, and most recently she and Benjamin Wittes have examined the harm done by bad Samaritans on the internet to argue for reforming certain types of regulation. Mary Anne Franks has done similar work documenting nonconsensual pornography and working to put an end to it through new state and federal legislation. Ari Ezra Waldman has used interviews and primary-source research within the privacy industry to demonstrate how current privacy laws fail to deliver on their promise to protect people.

But there is much, much more room for more descriptive work on what these large technology companies are actually doing to govern us, so that we can best know how to respond. “You can’t solve a problem unless you precisely diagnose it,” says Angwin. “I see my job as trying to diagnose it.” This is where the pioneering journalism of Angwin and Hill can meet with that of existing legal scholars: The task for scholars is to learn new quantitative and qualitative empirical ways to see what tech is so that they can address the ways in which tech fails us as a society.

Right now, this approach is somewhat rare in legal scholarship, but it is not at all new. Among the categories of legal scholarship Martha Minow identifies in her famous essay, “Archetypal Legal Scholarship: A Field Guide,” are two that seem particularly apt for law and tech academia: work that “[t]est[s] a proposition about society … that is used by lawyers or assumed in legal sources” through empirical investigation, and work that will “study, explain, and assess legal institutions, systems, or institutional actors” using “historical, anthropological, sociological or economic analysis” to expose complexity and “gaps between theories and practice.” It’s just this type of work that can address the technology companies that govern the public.

Legal scholarship on big tech does not have to fit into one of Minow’s categories, but often it naturally does. And when it does, it can produce valuable results. In my 2018 article “The New Governors: The People, Rules, and Processes Governing Online Speech,” I used dozens of interviews and primary source documents to argue that large online speech platforms such as Facebook, Twitter, and Google were essentially systems of governance over our freedom of expression. For upcoming papers, I’m embedding for weeks at a tech company to write a similar history and systems explanation, in the hopes that it will lead to better policy and regulation, and using large data sets to explore the phenomenon of virality.

This work is tiring, time-consuming and expensive. (I had to fund my projects with grants, a rare thing for a legal academic.) But, like Hill’s and Angwin’s work, it also takes these companies out of the world of theory and into reality. We don’t have to imagine what these companies are doing and which harms come with which benefits—we can actually know. And knowing is where addressing the problems caused by big tech begins.


Kate Klonick is an Assistant Professor of Law at St. John's University Law School, an Affiliate Fellow at the Information Society Project at Yale Law School, and a Future Tense Fellow at New America. Her research and writing look at networked technologies' effects on social norm enforcement, freedom of expression, and private online governance. Her work on these topics has appeared in the Harvard Law Review, Maryland Law Review, New York Times, The New Yorker, The Atlantic, Slate, The Guardian and numerous other publications.
