Lawfare Daily: Chris Hoofnagle on the Theory, History, and Future of Cybersecurity

Published by The Lawfare Institute
in Cooperation With
Chris Hoofnagle, Visiting Senior Research Fellow at King’s College and Professor of Law in Residence at the UC Berkeley School of Law, joins Kevin Frazier, Assistant Professor at St. Thomas University College of Law and a Tarbell Fellow at Lawfare, and Eugenia Lostri, Lawfare's Fellow in Technology Policy and Law, to discuss ALL things cybersecurity—its theory, history, and future. Much of their conversation turns on themes expressed in Hoofnagle’s textbook, “Cybersecurity in Context,” which he co-authored with Golden G. Richard III. The trio also explores related concepts such as the need for an interdisciplinary approach to teaching and studying cybersecurity.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://givebutter.com/c/trumptrials.
Click the button below to view a transcript of this podcast. Please note that the transcript was auto-generated and may contain errors.
Transcript
[Introduction]
Chris Hoofnagle: Law
enforcement, the military, the intelligence community, private sector actors
all look at security differently. They have different incentives, different
goals, different interpretations of what security is. And part of what's made
regulating security so difficult is that these goals are sometimes in deep
conflict.
Kevin Frazier: It's
the Lawfare Podcast. I'm Kevin Frazier, assistant professor at St.
Thomas University College of Law and a Tarbell fellow at Lawfare, joined
by my colleague, Eugenia Lostri, Lawfare’s Fellow in Technology Policy
and Law, and our guest, Chris Hoofnagle, visiting senior research fellow at
King's College and Professor of Law in Residence at the UC Berkeley School of
Law.
Chris Hoofnagle: This
is a rich field that involves so many different issues, from standards to
forensics to international relations. So part of what we're doing is getting
the student to slow down and to appreciate just how big the picture is.
Kevin Frazier: Today
we're talking about “Cybersecurity in Context,” a new textbook coauthored by
Chris and Golden Richard. The book explores the theory, history, and future of
cybersecurity.
[Main Podcast]
Eugenia Lostri:
Chris, I mean honestly, your textbook is pretty impressive. I think the first
thing I said to Kevin is that, oh my god, this is really a history of
everything cyber, right? So that’s, that's really incredible. But I imagine
that a big challenge of writing a history of everything is ensuring that it remains
relevant, that you're not having to constantly update any time there is a
big change in technology, or a small change in technology, or in policy, since we're
finally seeing a lot of action on that front. So I was curious as I was reading
it, how did you go about future-proofing your textbook?
Chris Hoofnagle:
That's absolutely right. That's a central concern in writing a textbook. And so
what my coauthor, Golden Richard, and I did was take a framework approach. The
book is filled with theory, with high-level framing questions one can ask. We
try not to answer questions, but rather to equip students with the right
questions to ask, many of which are based around cost-benefit analysis and
really thinking critically about what we're trying to do with security and
whether our interventions work, how they might fail, how technology might
change. So you've identified a central concern for us, one that we've struggled
throughout the textbook to address.
Eugenia Lostri: Yeah,
no, I can imagine. And I have to say the idea of the tradeoffs is so central,
right? And it's sometimes not sufficiently discussed when we're talking
about cybersecurity. We all want to accomplish 100 percent of what's in our
respective silos. So having that as a theme I would imagine is going to be a
great contribution. Something that struck me, though, is that you chose to have
these images from the Iliad and the Odyssey, which is not necessarily what I
think about when I'm thinking about technology. So, I'm curious if you can tell
us a little bit more about why you chose that, what it represents, and how it's
supposed to help students as they're going through the textbook.
Chris Hoofnagle:
Golden and I are both classicists, and we think that there are lessons from the
Iliad and Odyssey that are relevant today, and we found that many students
aren't familiar with them. And the primary lesson is that you have to use your head
to defeat your adversaries and not your brawn. You know, all of our popular
media today presents heroes and soldiers as these kind of mega warriors rather
than as people who use their head and use techniques that are as old as
history. You can go back and read Julius Caesar and his use of trickery
and disinformation to win battles. The Odyssey is filled with disinformation
and clever tricks that allow societies to win without using violence. And I
think this is a lesson we really need to understand: the smart people in the
world are going to try to engage in adversarial conduct using their head
rather than their brawn.
Kevin Frazier: And when
you mention use of a framework approach, using your brain, for example, to dive
into, let's say, a cost-benefit analysis, one thing that came to mind for me
was that we're seeing some challenges to the use of CBAs, as the shorthand is,
in other contexts. Where, for example, in an antitrust framework, we've been
seeing more and more people challenge this kind of overreliance, perhaps, on
quantification of the good or bad of a certain policy.
Was there a sort of back and forth with you and Golden on
whether that was the best normative framework to use, or what was it about a
CBA that you thought leaning into it in a cybersecurity context made the most
sense for students, given where we are, given other policy conversations going
on?
Chris Hoofnagle: I've
been a big critic of cost-benefit analysis throughout my career. I've
identified it as a kind of one-eyed analysis, but let me tell you, my sometimes
colleague, Peter Schuck, convinced me that cost-benefit analysis is the way to
think about regulation. And the way to think about cost-benefit analysis is to
broaden one's lens to think about some other values.
So one of the questions we challenge students to think about is
whether a security trade off involves a privilege, a right, or an interest. And
if we are trading interests or trading privileges, that's a very different
issue than whether we're trading a right. Another question we ask in our
framework is whether the security measure creates opportunities for guile. So,
if you implement this security measure, does it make it possible for
decision makers to take your money, or to deny wrongdoing? So, security has to
be seen in this larger framework where assaults on our civil liberties are part
of the costs considered and whether or not accountability is actually in the
system.
And if there's not accountability, if there's no way for the
data subject, the individual, to hold a decisionmaker to account, that is a
cost that has to be in the framework.
Kevin Frazier: And I
think this is particularly exciting because of my own experience being a
student of yours in an interdisciplinary cybersecurity course, having a
framework that allows for the incorporation of diverse disciplines and
encourages students to reach out to that public policy student or reach out to maybe
even that philosophy student and get a new understanding of what should be
included in that CBA is really fascinating. So would you say that this textbook
really is a push, even a subtle push, or maybe an explicit push to make cybersecurity
more interdisciplinary?
Chris Hoofnagle:
Absolutely. As Kevin knows from being in my classes, my courses at the University of
California are deeply multidisciplinary. It's one of the reasons why Golden and
I wrote this textbook. We struggle to teach our respective students important
cyber literature from other disciplines. The classic example is trying to teach
law students Thomas Rid's “Cyber War Will Not Take Place.” And that's an
amazing article, and it has some subtleties and disciplinary assumptions that
are just outside law students’ toolboxes. Now you got it because of your
background, but most of--
Kevin Frazier: Who
knows if I really got it. You're being too kind. For all the listeners out
there, I was probably back there raising my hand all the time. What the heck is
going on here? But I appreciate the kind words, Professor.
Chris Hoofnagle: Well,
they're well deserved.
And so what we're trying to do, I think what the textbook does,
is it synthesizes the economic literature, the psychological literature, the
political science, IR, and security studies literature so that students who don't
have a background in these different disciplines can make sense of it. And a lot of
that is about the disciplinary assumptions that are unstated in political
science and that actually conflict with lawyers’ ideas about what's just in the
world. Lawyers are very focused on individual rights. And so the disciplinary
assumptions that a political scientist or security studies scholar might have
operate at just a different level. And they're just not obvious to people
studying in the field.
Eugenia Lostri: I
just, I want to jump in here as someone who comes from a background that had
nothing to do with cybersecurity or technology, here's another lawyer with an
interest in international law. The way that I read this section of your
textbook was, there's so many different ways in which you can contribute to the
discussion, even when you don't have this technical background, even though you
should probably learn a little bit, like your textbook is doing, just learning
a little bit of everything to understand the context.
But I'm wondering, because you do work with students every day,
you see this firsthand, and we know that there is a cyber workforce deficit. We
need a push towards bringing more students interested in the field in any one
way, whether it's policy, whether it's law, whether it's economics, psychology,
or the actual technical stuff. So, I'm wondering, in these cross-disciplinary
classes that you have, what do you see as the challenges or hurdles in the
way for these students? Or do we actually just need to wait a bit longer until
all of them graduate, and then the cyber workforce problem will be solved?
Chris Hoofnagle:
That's a great question. The problem that students have is that first step and
believing in themselves and getting that first job. And what's so interesting
about this is that America and other nations need millions and millions of
people to work in cyber. And the upside for students is that even the entry-level
job, so information security analyst, is a $100,000-a-year job. So, one
in cybersecurity can do well. You can have a great middle-class job. The problem
is getting that first job. And so what we do in the textbook is we integrate
labs to increase students’ facility with computers. So, it's written so that even
students who have no programming experience can do the labs and learn more
about their computers, but also to demystify things a bit.
What's going on in a lot of, let's say, SOCs (security operations centers) is not terribly
complicated. And a college student who has good critical thinking skills, good
communication skills, can absolutely do that work. But currently cybersecurity
is cloaked in all this mystery and intrigue. So what we want to do is demystify
it and say, Hey, there's a place in cybersecurity for you. And it doesn't
matter if you're a music major or some other major in the social
sciences, what really matters is whether you can think and whether you can
communicate.
Kevin Frazier:
Speaking of that deficit of cybersecurity workers, you spend a lot of time in
the textbook pointing out that the military was the original cyber actor. And I
think for a lot of those jobs you mentioned, the $100,000 job or some sort of
private sector job, I think there's a particular deficit in the public sector,
finding folks who will be in those day to day roles, helping the government
itself respond to these crises. Thinking about some of these broader efforts
we're seeing now in the AI context, for example, the Department of Homeland
Security created an AI Corps to try to bring more tech talent into the
government, would you call on DHS and similar entities to say, hey, we need a
similar Cyber Corps? Or can we expand efforts like that to say this is our sort
of Peace Corps for cybersecurity? Or what can we do to get more public sector
expertise on this front?
Chris Hoofnagle: The
public sector has a particular problem. They're training people from zero,
taking them from zero to 60 and then they lose them. So the military and many
other agencies are training new people and those very people go out and get
jobs at consulting firms where they sell their services back to the public
sector. So, it's great for those individuals, but it's very costly for the
taxpayer. So, we absolutely need ways to integrate new learners, so that that first
job is in the public sector, and, no one wants to hear this, but we have to make
pay higher. We have to pay more, and we have to make these jobs more flexible. There's
a lot of people who don't want, for whatever reason, they don't want to be in
the Washington D.C. area. They want to live elsewhere in the world. And for
whatever reason, they don't want to live under the burden of a security
clearance. And these are some of the factors that make it hard for the public
sector. So why should I live with the burden of a security clearance when I
could get a job at a Bay Area company that's not going to bother me about
whether I'm friends with people from certain nations or what my weekend
recreational activities are. These are some of the challenges we need to
overcome.
Eugenia Lostri: So
you started your question, Kevin, exactly the same way that I was going to
start my question, which is about the military as the original cyber actor. And
I was very curious if you could tell us a little bit more about how you see the
fact that it started in the military, how it shaped the history of cybersecurity
so far, the way that we understand it, and whether you're seeing a shift now
when we consider this more in the commercial space, the private space. Has it
changed things? Or are we definitely still burdened by the original sins of how
everything developed?
Chris Hoofnagle: It
might be teaching at Berkeley that shades my lens on this, but my experience in
academia is that many in the professoriate are dismissive of the military. They
don't understand how complex it is. They don't understand how big and how
awesome and how thoughtful people are in the military, and this really came
fully into focus for me in studying Willis Ware's archive at the University of
Minnesota.
Willis Ware did all sorts of consulting and study for the
military and some of his reports are now declassified. And what you learn from
these reports is that in the late 1960s, the military had already figured out a
lot of things that we are still struggling to figure out in the corporate
sector. A prime example is security by design: in a report that Willis Ware wrote in
1970, one of the first recommendations is that computer systems have to have
security as a design factor. That is security by design. In the 1970s, the
military was doing red teaming. They called it tiger teaming. They also figured
out that multi-tenant environments are inherently insecure.
They figured out all these things that we seemed to forget and
relearn in the 2000s and 2010s. And so I think we should be humbled by
this. And I also think that we ought to be looking at the military and the
intelligence community more carefully for leadership in understanding security
problems and then understanding what to do about them.
Kevin Frazier: And I
know Eugenia is going to have a lot to say. I do want to make one plug just
generally while we're talking about military opportunities as someone who was
two weeks away from boot camp, joining the Air Force JAG, but then was medically
disqualified. That's a whole other podcast, but for all those law students
listening right now: look at JAG programs. If you want to get involved on any
of these issues and get real meaningful experience right out of law school,
call a recruiter. It'll be worth it, and you'll see some really interesting
opportunities. And then send me an email, and we'll talk further. But with
that, Eugenia, please.
Eugenia Lostri:
That's great. Thank you, Kevin. I just, there's so many threads in what you
just said that I want to pull. Okay, let's start with the first one. You
mentioned security by design and you may know, or listeners may know, that we
at Lawfare have this ongoing project looking into security by design,
the incentives and disincentives for it. You have an entire section addressing
this. And I do appreciate the plug that, yes, we've known what some of
the technical solutions to this problem are for a very long time. We compiled this attempted
literature review, looking at all the different ways in which different actors
have been thinking about security by design. And it just becomes very
surprising why these things are just not basic, why they're not in every single
product. And I do appreciate the Biden-Harris administration's push for
security by design. I think that's great.
But basically, the way that you talk about it, it's a very
common way to talk about it, is about the incentive to be the first to market
and how that detracts from security. It makes sense: security
creates friction, it means revisiting, means having to do things again, and if
you're not the first to market, you're probably going to lose the race. So if
security is not required for minimum viability of the product, do you think
this is exclusively an economic problem? Or do these other categories that we've
been talking about, the technical side, the people side, the psychological
side, influence it? Maybe they don't influence it just as much as the
economics of the market. Now that you've done all this work, how do you see
that affecting security by design?
Chris Hoofnagle: Let
me just start with some humility. I'm a startup lawyer. I'm basically a
corporate lawyer now. And my understanding of cybersecurity comes from my lens.
And that lens is not universally true. It's just the lens I have and what I
see. Most of my work is in the startup space and the venture space. Small
companies tend not to have a chief security officer or security team. They tend
to have someone who's very good at security, but not a comprehensive way of
looking at things. And they understand the game is about being acquired or going
public. And these economic factors are overshadowing the legal factors.
So speaking as a lawyer, I have to say they have to do all the
things and they have to comply with GDPR and so on. But speaking as an expert
in the field, I know that the economics of becoming very affluent, of selling
one's company outweighs security. And that might be okay. One way to think
about it is, we're going to have this innovation, and oftentimes the acquiring
entity cleans up the security problems. But it leaves a lot of users in the
lurch. And you can think about some of the social media players out there where
companies got big very quickly and then had breaches that were catastrophic for
their users’ privacy. That's going to be the price we pay for that innovation.
Eugenia Lostri: Since
you brought up social media, let me tie that back to the military aspect of
this. Personally, I've always been a little bit skeptical of pairing
psychological operations with cybersecurity. It just seems like they're
sufficiently distinct and they have sufficiently different histories to be
considered separately, but I know that's not the case for everyone. A lot of
discussion has bucketed psyops as part of a cybersecurity problem, which has
been super interesting to just read about.
But you discuss these psyops, and I'm just quoting, “the
prospects that cyberattacks might cause loss of crisis control is a powerful
psychological factor” for the military. So are you understanding psychological
operations as in you may be convinced that your cybersecurity defenses are not
good enough, or is it just that psychological operations by themselves are going to
have this demoralizing effect on the military?
Chris Hoofnagle: So
the latter, and I think you're absolutely right. I think your framing is
absolutely right. I would draw a line. One is about electronic warfare, which
is separate, but then overlaps with cyber. Another is psychological operations
that do not need to be on the internet at all. They could be entirely in person
and so on, which again, overlaps with cyber.
So I think what's powerful about cyber operations, however, is
this psychological effect that an adversary’s systems might just not be
functioning, and it might be because of the incompetence of the adversary. A
great example is Olympic Games, Stuxnet, with the centrifuges; that attack seemed
to create a powerful effect on the adversary. And I think all adversaries to
powerful nations have to think about whether their systems are malfunctioning
because a powerful cyber actor like China or the United States is inside them.
And so I think there's going to be this fundamental concern about control and
systems going forward.
Kevin Frazier: So
bringing up some adversaries now, I think that moves us nicely into a kind of realpolitik
conversation. I think there are very few cybersecurity professionals and
scholars who would say they are satisfied with cybersecurity regulation as it
stands right now. We have more or less a 50-state framework for privacy and a
lot of cybersecurity measures. And yet you and your coauthor point out that
internet policy disputes, to use the words of David Clark, are often better
defined as tussles. And I love this phrasing of tussles. And you all say that
you refer to them as tussles because they're perhaps harder fought than other
policy debates.
And so I wonder if you think that we need something else to
help students not only take these frameworks, but then learn how to apply them
in a kind of realpolitik setting. Because we've seen on the Hill, in these
state legislative sessions, it's really difficult to pass meaningful cybersecurity
protections and new bills. So what's your response to how we take these great
ideas and get them implemented in practice on the Hill or in Sacramento or pick
your capital?
Chris Hoofnagle: I
think the most powerful argument we can possibly make is sourced in the
security as a science literature. If one can show that their approaches
actually make people's lives better, systems more secure, more resilient, I
think that's the way to go. And so I'll just throw a curve ball. I'm not sure
at all that security breach notification helps anymore. And I think it's
something that we ought to rethink. I also think that lawyers have made giving
notification the terminal goal of security breach rather than security. The
whole point of security breach notification is to create incentives for
security. So I think there ought to be a rethink around those measures.
The realpolitik issue, the way we address the realpolitik, is
to talk about cybersecurity through different actors’ lenses and to explain
that law enforcement, the military, the intelligence community, private sector
actors all look at security differently. They have different incentives,
different goals, different interpretations of what security is. And part of
what's made regulating security so difficult is that these goals are sometimes
in deep conflict.
Kevin Frazier: And I
think what's particularly important about the textbook as well is that you all
push students to think creatively about policy solutions. As you just pointed
out, lawyers tend to be, sorry to the rest of the legal profession, fairly path
dependent, right? If we can squeeze something into an individual due process
mindset, or say we can check a box and just persist with this outdated
notification law, even if we know no consumers respond to it, we're like, you
know what, that's all in a good day's work for a lawyer. Let's just keep on
with the status quo.
Eugenia Lostri: It's
good to be self-aware.
Kevin Frazier: Yes. So,
would you say, Chris, that inspiring that sort of policy creativity is one of
the goals of the textbook?
Chris Hoofnagle:
Golden Richard and I want to broaden what people consider to be cybersecurity.
So many people out there think it's incident response, it's security breach.
No, this is a rich field that involves so many different issues, from standards
to forensics to international relations. So, part of what we're doing is
getting the student to slow down and to appreciate just how big the picture is.
But then once one sees how big the picture is, you realize that cybersecurity
might become a form of universal regulation and kind of an excuse to regulate
everything under the sun. So, we have to both see the big picture, but also be
able to decompose its elements into pieces that make sense and that are
regulatable.
Kevin Frazier: And I
think, too, you all stress accountability, which is an underappreciated part of
any regulatory regime: if bad actors can just continue to engage in
bad behavior, or let's not even say bad actors, just accidental actors who
engage in a practice they didn't intend to, without accountability, nothing
really changes. Would you identify that as one of the current biggest flaws
with our approach to cybersecurity?
Chris Hoofnagle: So,
accountability is basically not present in some areas of cybersecurity. And a
great example is cybercrime, which is an area where people can make lots of money
and, more or less, they're unlikely to ever receive any type of investigation or
accountability. And then when we pan out a bit and say, okay, we can point our
fingers at cyber criminals all we want, I think we also need to think broadly
about entities like critical infrastructure providers, where the accountability
really has to be around resilience. For instance, as a user of my electrical
utility, I don't really care about their security.
What I care about is whether or not electricity is on. And so marketplace
incentives are really important and where you have monopoly and where you have
no choice, you might not have resilience and accountability. And so let me give
a recent example: car dealerships all across America
recently suffered a major ransomware attack. I had both my cars serviced during
that ransomware attack. Both of my car companies were operating, two different
companies, they were operating, they were able to take appointments, they were
able to work on my car. And I think that's because there's competition in the
private sector. And it doesn't surprise me that we see total shutdowns when
there is no competition, when let's say a pipeline receives a ransomware
attack. And the answer is to shut off service. So I see accountability in a lot
of different ways. Everything from ensuring legal accountability through the
criminal law, but also through the market and how the market can shape
incentives for resilient operation.
Eugenia Lostri: So I
have a question about accountability that is a little bit tangential to the
question of cybersecurity. And actually, Kevin, this is also a question for you
because it has to do with AI, which again we cannot go through a podcast
without talking about. But when we think about the accountability of machine
learning, of artificial intelligence solutions that are being deployed and the
problem of this black box decision making, how are you thinking about
accountability in that context?
Where should that lie, should it be in the process? And maybe
this is my lawyer brain thinking we should have a process in order to respond
to this and make sure that there's accountability. Is it in the technical side?
Is it on whoever is deploying it? Is it on the human in the loop? How should we
be thinking about it? Because there's so many important decisions that affect
many individuals that are just being decided through who knows what process.
Kevin Frazier: I can
jump in there a little bit and just say I think that this is one of the biggest
issues with the use of AI, especially in a defensive context or an offensive
context, whether it's a military actor or just a private actor, deploying these
solutions where you're not quite sure who's responsible for what. And this was
just amplified by the podcast I did with Ashley Deeks and Mark Klamberg a few
weeks back, where we're going to see AI systems interacting with one another.
And when we get to that point, I don't know what you call black box squared.
Maybe you just call it a black hole. I'm not sure. If I did just coin a new
phrase, TM. But I think that's a really scary possibility that we have to reckon
with if we're going to ensure this notion of accountability, so my answer is a non-answer,
which is to say I don't know how we hold folks accountable.
Eugenia Lostri: Such
a lawyer.
Kevin Frazier: Such a
lawyer. And I'm so skeptical, and I’d actually love your take on this, Chris,
given your attention to psychology as well. I'm very skeptical of human-in-the-loop
requirements satisfying a lot of these accountability concerns because
we've seen things like automation bias, which is just, if you are engaging with
an AI, it tells you to do X, what are you going to do? X. It tells you to do Y,
what are you going to do? Y. It's not rocket science, it's just human nature,
and that's fine, but I don't think we should ignore how our brains work, how
our society works, how our organizations work, when we think about those
accountability decisions. So that's my one cent of Kevin wisdom.
Chris Hoofnagle: I
think it makes sense to look at historical examples and the best historic
example is the Fair Credit Reporting Act, which absolutely regulates AI. Credit
grantors have been using machine learning in the form of multiple logistic
regression for decades now. And your credit score is based on a somewhat
secret, it's a half secret, set of factors that are based on regression
analysis. And the way we deal with that is we give individuals the ability to
challenge the ultimate decision. They're not allowed to look in the black box.
We know what's in the black box. It's things like payment history and so on.
Now, automation bias is definitely present in credit granting.
If you go to your local phone store and try to buy a new iPhone and the
credit score comes back, it's not going to give, it's not even going to give a
number. It's going to give a thumbs up or a thumbs down. And that salesperson
has no choice. Yes, you get the phone or no, you don't. And that's it. And so
that's an example where we've totally locked into the automation of the system.
We've totally taken away the authority of people to challenge it. Now the
customer can go and say, I want to be reassessed or the information that my
decision was based upon was inaccurate and get it redone. But I think there's a
tremendous amount of academic study that can be done on credit reporting and
the effects of this automation and automated analysis.
I think it's also important, and what I see lacking in the
field is paying attention to the “compared to what.” So you might not like
credit reporting, but let me tell you, there was something before credit
reporting that was worse, and it involved sitting down in an office and
convincing a person that you could pay your bills without data. So as bad as
the machine learning approach is, it might be better than the alternative. And
that kind of “compared to what” analysis is missing in a lot of the critique of
machine learning out there.
Kevin Frazier: You're
going to get me just talking on for way too long, but the “compared to what”
field of legal scholarship is just too blank, especially with respect to issues like
AVs. I'm driving in Miami right now. I would much rather have a whole set of
AVs on the road than humans who are, let me tell you, just the most
unpredictable actors ever. So please, we need more “compared to what” analysis
out there. Here's a call for papers for all those scholars out there.
Chris, I do want to also just dive into maybe other critiques
you may have of the way cybersecurity is traditionally taught. I think that
we've seen a huge spike in the number of schools that are offering cybersecurity
programs, which is great. It's in law course offerings across the board now. But
you point out there are often issues with this monolithic view of cybersecurity,
for example; we've talked pretty extensively about your emphasis on thinking
through all of the actors involved in cybersecurity. Are there any other common
issues you would just want to highlight or maybe best practices you really
think other scholars should be paying more attention to?
Chris Hoofnagle: I do
think that cybersecurity has to be taught with a technical emphasis. And it's
difficult to do this. This is one reason why Golden Richard and I wrote this
textbook. Golden is a computer scientist at Louisiana State University and has
long been affiliated with the NSA's Centers of Academic Excellence program. So
much of what we do is give a companion set of labs to the normal doctrinal
instruction to show students that computers are not mysterious objects, that
you can become more sophisticated with them, you can learn what they're doing.
And in fact, a lot of cybersecurity skills are shrouded in mystery; once they are
unshrouded, you realize that these are things I can do. And I don't need to be
a computer scientist to do it. I just need to be someone who can make sense of
information. So having some study in statistics, having some basic programming
skills, those are some of the areas that I think are important to teach. And so what
we're trying to do is bridge the gap between the doctrine and these technical
skills. And it's rare to find people who have both.
Kevin Frazier: Well, I
think we will go ahead and leave it there. Thank you again for joining us,
Chris.
Chris Hoofnagle: It's
my pleasure. Thank you so much for having me.
Kevin Frazier: The Lawfare
Podcast is produced in cooperation with the Brookings Institution. You can
get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare
material supporter through our website, lawfaremedia.org/support. You'll also
get access to special events and other content available only to our
supporters. Please rate and review us wherever you get your podcasts.
Look out for our other podcasts, including Rational Security,
Chatter, Allies, and The Aftermath, our latest Lawfare
Presents podcast series on the government's response to January 6th. Check
out our written work at lawfaremedia.org. The podcast is edited by Jen Patja
and your audio engineer this episode was Noam Osband of Goat Rodeo. Our theme
song is from Alibi Music. As always, thank you for listening.