Lawfare Daily: Katie Moussouris on Bug Bounties

Published by The Lawfare Institute in Cooperation With the Brookings Institution
Lawfare Editor-in-Chief Benjamin Wittes sits down with Katie Moussouris of Luta Security to talk bug bounties. Where do they come from? What is their proper role in cybersecurity? What are they good for, and most importantly, what are they not good for? Moussouris was among the hackers who first did bug bounties at scale—for Microsoft, and then for the Pentagon. Now she helps companies set up bug bounty programs and is dismayed by how they are being used.
To receive ad-free podcasts, become a Lawfare Material Supporter at www.patreon.com/lawfare. You can also support Lawfare by making a one-time donation at https://
Please note that the transcript was auto-generated and may contain errors.
Transcript
[Intro]
Katie
Moussouris: What
we're actually seeing is extremely small companies trying to do bug bounties
and trying to use it to replace their own internal security processes and
efforts. So you're seeing, you know, basically a misalignment of investment in
cybersecurity being crowdsourced as opposed to being in-housed.
Benjamin
Wittes: It's the Lawfare
Podcast. I'm Benjamin Wittes, editor-in-chief of Lawfare, with Katie
Moussouris of Luta Security.
Katie
Moussouris: Where
should you be in terms of your security maturity? How many incidents or bugs of
a certain type should you have, you know, as an organization if you're actually
handling yourself properly? Nobody expects zero bugs, but it's how you react to
them and, you know, your own internal resilience that's not really being
measured right now.
Benjamin
Wittes: Today we're
talking about bug bounties, their history, what they're good for, and what
they're not good for.
[Main
Podcast]
So, I want to
start with a question that I think you probably get a lot, which is, how does
somebody get into the field of bug bounties? What was the trajectory by which
you went from not being a person who thought about bug bounties to being a
person who thought about bug bounties?
Katie
Moussouris: Well, I
think for me, it was, you know, it was fairly early days in the broader
adoption of bug bounties. So, think back to 2010, when Google started their bug
bounty. And before that, the only bug bounty of note that existed, you know,
really, was the Netscape bug bounty, which became the Mozilla bug bounty. And
that had just sat at you know, 500 bucks since the mid-90s. So up until 2010,
there really wasn't much going on in bug bounty land.
When Google
launched their bug bounty program, they had also, two years before, just
launched a brand-new web browser. And it was quickly gaining market share. And
that started, you know, propelling Microsoft into thinking more seriously about
paying for bugs, because previously they had sworn they would never pay for
bugs.
Benjamin
Wittes: Who were you
at that point? You were a Microsoft programmer, you had a bit of a hacking
background. How did you get into this? What's your trajectory?
Katie
Moussouris: Well, I
actually never wrote any code for Microsoft, so you can't blame me for any of
its bugs or its patches. I was a security program manager, meaning I was in charge of creating new programs to work with the hacker community. You know, it takes one to know one kind of thing. You know, my previous background was as a professional hacker. My unprofessional hacking days, you know, were back before the internet really had much depending on it, you know, or many of us depending on it.
So, you know, my
career trajectory as a whole, you know, as an arc was really from the early
stages of the Internet when it was just hackers just doing things for fun to
more and more of the commercial and government dependency on the Internet. And
suddenly, those of us with hacking skills, you know, could make a living doing
ethical hacking and pointing out flaws. And by the time I got to Microsoft, I
had actually hung up my hacking hat, and wasn't hacking professionally anymore,
but actually just, you know, working with hackers and building better bridges
between, you know, the biggest software company in the world and the hackers
who knew how to break it.
Benjamin
Wittes: You created this program at Microsoft, for which, we should be clear, you don't work anymore and have not worked for quite some time.
Katie
Moussouris: A decade.
Yes.
Benjamin
Wittes: You created
this program, this bug bounty program for Microsoft. Walk us through how a bug
bounty program works and why Microsoft shifted gears and went from kind of
never paying for a discovery of a flaw to, please try to hack us, just disclose it and we'll pay you. What was the thinking there and what was the nature of the program?
Katie
Moussouris: Well, you
know, it was kind of like boiling a frog. We had to start really slowly, because Microsoft was used to receiving bug reports from the public. You know,
hackers would find things and they would report them out of their own good
nature and with the hopes of having their name appear in a Microsoft security
bulletin with credit for finding the bug and reporting it.
So what happened
was I started a program a few years earlier than the bug bounty program, and
that was Microsoft Vulnerability Research, and that was us looking for bugs in
third party code. So much like the big CrowdStrike disaster that happened
recently, you know, not saying that my program would have caught that, but
that was the kind of thing we were looking for. We were looking for ways to
avert disaster initiated by third party bugs.
So, I was
working on that program and we needed a way to publish security advisories for bugs that we found. So I ended up writing the very first formal
vulnerability disclosure policy for Microsoft. Previously, you know, it had
only ever had a policy that just said, we'll thank you if you report things
quietly to us and give us a chance to fix it. You know, that was, that was
pretty much the only policy in place before that. But we needed a policy covering, if we were the finders of the bug, under what circumstances we might go out and publish vulnerability details and guidance, et cetera. So I wrote that
policy to be, you know, multi-directional if we were the finder of the bug, if
we were the receiver of the bug, and if we were the coordinator of the bug in,
you know, some sort of supply chain scenario.
Fast forward, I
got Microsoft to agree to pay for remediation ideas. So, for fixes and for new
architecture changes that might prevent exploitability. And slowly the frog was
starting to heat up at this point. Microsoft knew that Google, one of its, you
know, rising competitors in the browser space, was paying for bugs. But
Microsoft, at the time, was getting over a quarter million potential bug reports for free. That was somewhere between a quarter million and 300,000 non-spam email messages a year being reported in as potential issues to investigate. So you can
imagine that Microsoft was very, very cautious and didn't want to dangle a cash
reward, you know, in front of what was already a fire hose.
Benjamin
Wittes: Presumably
though, the hackers who have discovered the highest value bugs are the least likely to report them without some incentive, right? Because they have the strongest financial incentive to do something else with them, and, you know, maybe getting thanked in a security bulletin is a tougher sell than, for example, Microsoft paying some significant amount of money, given the high value of the exploit on whatever dark web market is available, right? I mean, those 250,000 reports are not the highest value possible bugs, right?
Katie
Moussouris: Well,
it's interesting that you bring up price because there's a broad
misunderstanding about, you know, needing to compete with the offense market.
You know, and not all offense market buyers are illegitimate. Some are
governments and law enforcement looking for exploits to use against criminals,
terrorists, child traffickers, you know, nation states that wish to do us harm. So not all bug sellers are unethical and not all bug buyers are either. But presumably, you're saying, you know, that if you could sell a bug
and it's worth, you know, a lot of money, you would not necessarily report it
to the vendor for a lesser amount, like say a bug bounty amount, or for free.
And that's actually not the case. A lot of people want to see the bug get
fixed. And if you are selling in the offense market, you are specifically selling
to a market that you know does not want to see that bug fixed before they can
use it for its intended purpose. So, there are a lot of other reasons besides
money to disclose a bug to a vendor and not the least of which is that maybe
you use that technology or you realize that society depends upon that
technology and you would like to catch an airplane sometime and not have that
bug hanging out there unfixed. So you know, there are more reasons than just
money.
But that kind of brings me to the point of how we started the Microsoft bug bounties. It wasn't simply, you know, we'll pay for any bug that turns out to be a security vulnerability. It was specifically asking for bugs
in the Internet Explorer beta version because previous to the bug bounty,
hackers were finding bugs, reporting them for free. But they were waiting until
the beta period was over and they were doing that because the only way they
could get credit in a Microsoft security bulletin was if there was an actual
bulletin and you wouldn't ship a bulletin for a bug that only affected the beta
product.
So it was kind
of a lose-lose situation for, you know, both Microsoft and the users because as
soon as the beta period was over, suddenly Microsoft had to patch all of these
bugs that had been sat on by well-meaning researchers who just wanted credit.
So we put a bounty at the beginning of the beta period and we shifted the
traffic of, you know, that group of hackers that were already going to report those
bugs to Microsoft and we shifted them to much earlier in, you know, the
release cycle.
And we also
looked for, we also paid for exploitation techniques. And that's not something
that we had to necessarily compete with the offense market to get. Because if
you think about it, if an existing exploitation technique works, all you have
to find is a zero-day vulnerability and write an exploit that exploits that
particular vulnerability. And you can use the same technique over and over
again as long as it still works. You don't need to invest in finding new
techniques, but Microsoft definitely wanted to invest in understanding how to
defeat new exploitation techniques.
Benjamin
Wittes: So, this idea
of having a robust, systematic bug bounty program really catches on in the mid-teens,
right?
Katie
Moussouris: I'd say,
yeah, it really took off with Hack the Pentagon. So, Microsoft, we launched our
bug bounties in 2013. And Hack the Pentagon came about three years later. And
that was also a direct result of, you know, the Pentagon, you know, noticing
that the biggest software company in the world, with all of its complexity and
its, you know, multiple layers of supported software and versions, was able to
make this work for them. And so the Pentagon asked me to come brief them after
you know, Dr. Michael Sulmeyer, who was working at the Pentagon at the time, saw me give a guest lecture in a joint symposium between the Harvard Kennedy School and MIT Sloan School, where I was talking about the game theory
and the economic theory and all of these elements that went into the creation
of Microsoft's first bug bounties.
And he was
intrigued. He asked me to come and brief the Pentagon, and that began, you
know, a long multiyear conversation that resulted in the very first bug bounty
program of the U.S. government. In fact, it was called Hack the Pentagon against the wishes of some of the members of the intelligence community. But, you know, that I think was the major tipping point. So Microsoft was the first domino, or actually Google was the first domino, Microsoft was a big heavy domino that came about three years later, and then the Pentagon, I think, was the biggest one that spawned broad adoption around the world.
Benjamin
Wittes: And since
then, you have left Microsoft and started a company, Luta Security, that sort
of focuses on bug bounties and consulting about them. So, tell us a little bit
about what the state of the market is now. If you're a company, how big are you
before you likely have a bug bounty program? And if you're a hacker, how do you
go about approaching companies? What does the ecosystem of bug bounties look like today?
Katie
Moussouris: Well, you
know, honestly, I had hoped it would help mature security at scale, you know,
around the world, but just like, you know, the dawn of professional penetration
testing or professional hacking at the beginning of the millennium, you know, a
wave of professionalizing hacking and hiring hackers, that didn't solve
security either. So it turns out, you know, that bug bounties are more of an ad
hoc version of, you know, hacking for hire that has been going on for 25
years or more.
Benjamin
Wittes: So wait, slow
down and unpack that.
Katie
Moussouris: Yeah. So
back in 1999, a company called @stake that was formed from the hacker group, the
L0pht, started professionalizing hacking. They weren't the only company. A handful of companies around that same time blossomed in the early internet. And
they found a market, and that market was, you know, starting to be companies
and banks and to a smaller extent, governments that wanted to understand their
risk and so hired professional hackers.
And I was part
of @stake, so I was part of that first wave of professional hackers. And we
thought at the time that this is great, you know, hacking is legitimized. We're
professionals now, people are hiring us, they want to know about their bugs.
And they're going to fix them. It was the 'and they're going to fix them' part that we got wrong.
And
unfortunately, fast forward to now, it's, you know, penetration testing or professional hacking, same as it ever was for the last 25 years, plus this sort of ad hoc bug bounty, you know, crowdsourced version where you don't have to sign a contract ahead of time and you don't have to strictly define so many parameters. You do set a scope and margins, you know, that you don't want them to cross, in general, if they're playing nice with you. But it's, you know, it's a much more crowdsourced version of the same thing. And what I was hoping would have happened in the last decade of bug bounties is that once organizations understood how vulnerable they were to strangers that weren't even being paid up front to go looking for bugs, if strangers could find bugs and point them out, we'd see an uptick in overall maturity. And we just
haven't seen it. And I think I know why.
Benjamin
Wittes: Well, that
leads me to the obvious next question, which is why?
Katie
Moussouris: Right. So
I think that a lot of organizations are, you know, using bug bounties the same
way they use penetration tests. They are fixing one bug at a time and they're
not looking for systemic issues and they're not improving their processes. And
the way that we look at it is we call that bug bounty Botox. You know, you're
only pretty on the outside. And you had asked earlier, how big does an
organization need to get before they do a bug bounty? Well, I think they need
to be fairly far along. But what we're actually seeing is extremely small
companies trying to do bug bounties and trying to use it to replace their own
internal security processes and efforts.
So you're
seeing, you know, basically a misalignment of investment in cybersecurity being
crowdsourced as opposed to being in-housed. And I think that's a big problem.
So what we recommend to people is that, you know, your organization should be
preventing as many bugs as possible, finding as many bugs as possible. And bug
bounties are really for the issues that you miss. And every bug that is
reported through a bug bounty program isn't just a singular bug to fix, it's,
you know, it's a breadcrumb to the path to improving your processes, so you
never make that same kind of mistake again.
Benjamin
Wittes: So, you serve
on, or have served on, the I forget what it's called, the sort of—
Katie
Moussouris: The Cyber
Safety Review Board?
Benjamin
Wittes: The Cyber
Safety Review Board. I was going to analogize it to the National Transportation Safety Board. The CSRB did this quite devastating report on Microsoft, I guess, either early this year or late last year sometime. And of course, half of our listeners have been on flights that were delayed or canceled because of the CrowdStrike slash Microsoft update problem. I'm very interested in your sense of, you know, how much of this is a Microsoft problem? How much of it is a CrowdStrike problem? And what are the lessons that we should take from it, or are there ones that we should have anticipated here?
Katie
Moussouris: No,
absolutely. So, as you know, I served on the CSRB. My two-year appointment was
over and I and a bunch of other board members rolled off to make room for new
board members. And I was actually recused from the Microsoft report along with, you know, other competitors and partners of Microsoft. So that report was done with a subset of the board at the time.
But to answer
your question about the CrowdStrike incident: this was a CrowdStrike problem.
This was not a Microsoft problem. And in fact, you know, CrowdStrike made
architectural choices of executing code in the kernel that some of its
competitors do not do. You can do similar levels of protection against threats
without having to execute anything in kernel space. So it would have been a lot
safer if they had made a different architectural decision. And then they made a
testing mistake. They did not test this, you know, new content update in you
know, a dynamic way. They simply looked at the file, they had a file validator
and they looked at the file and said, it looks good. And then they just pushed
it out into millions of Windows machines.
So they had
coding mistakes, testing mistakes, and then they had process mistakes. And
those are ones where they definitely should have known better as well. You
know, earlier this year, CrowdStrike had affected a number of Linux machines,
you know, so, you know, taking Microsoft completely out of the equation,
CrowdStrike, you know, has struck before. And in fact, the CEO of CrowdStrike
was the CTO at McAfee in 2010 when a similar Windows boot loop situation came
into play.
So, all of this,
you know, I would say is, you know, underlines the fragility of the Internet as
a whole and the fact that, you know, this vendor had kernel level access and
was not testing properly and wasn't rolling things out in a graduated fashion.
I mean, those are all lessons to be learned throughout the industry. But I do
think that CrowdStrike is an outlier among its peers in what it was doing and
you know, the fact that it didn't even give its users the ability to control
those updates that would just happen in the background. They give their users
the ability to control code updates, but not these content updates, these channel file updates. And it turns out that the code that reads the channel file, you know, that was already on the machines.
Benjamin
Wittes: And I mean,
obviously it'll produce changes in the way CrowdStrike does, or one hopes that
it will produce changes in the way CrowdStrike does business going forward. Do
you, I mean, one thing that strikes me is that it raises the issue, we often
talk about, you know, Windows and Microsoft as a sort of dangerous monoculture.
But here you have a vendor that will huge numbers of people are using, with the
ability to take down stuff en, en masse as a result of accidents. How should we
think about the pervasive use of entities like Cloudflare and CrowdStrike that
are actually security vendors, but they're so pervasive that errors in there
have security implications for, you know, millions of users such that they have
systemic implications?
Katie
Moussouris: I mean,
what's the question exactly?
Benjamin
Wittes: Well, the
question is, we worry about things, particularly operating systems, that are so pervasive that, you know, a small flaw or a vulnerability in them can have systemic-level implications. We don't tend to think about whether the security vendors that we're hiring to protect generally individual systems become similarly so pervasive that they raise the same concerns, particularly in interaction with the dominant carriers, the Windowses or the Linuxes. And so I guess the incident makes me worried about how many different actors we have to worry about that we use this pervasively.
Katie
Moussouris: Well, you
know, what you uncovered is one of the reasons why Microsoft releases patches
on the second Tuesday of the month. One, it's to minimize disruption, and two,
it's to conduct extensive testing, and it's not just about testing, you know,
its own software in a vacuum. But it actually does a series of tests for
interoperability with third party software. And it learned to do this because
of exactly what you're talking about, reliability issues that, even if they
affected a fraction of machines, had an outsized effect.
And so that was
part of the reason why, back in 2007, I was able to get Microsoft to agree to
spend some of its resources looking for security holes in third party products
you know, through the Microsoft Vulnerability Research Program. And that was exactly the justification I was using: look, you know, harm to the Windows ecosystem is harm to the Internet as a whole. And we're not the
only ones who are running code on Windows machines. So we have to look to these
third parties.
But I think that
to your point about security vendors being very pervasive and introducing risk, you know, we've been blowing that horn among cybersecurity practitioners for decades. We have been saying that look, software is software
and software has bugs. So your security software almost certainly has bugs. And
we've seen incidents like this, you know, in the past, they just, you know, didn't
happen to ground all the flights at once for a few days.
I think, to your
point, what this is doing is it's raising the level of awareness of
interconnectedness and complexity. And I almost think that, you know, a big
focus for us when we're looking at resilience and how to improve that for
economic reasons, for national security reasons, when we're looking at
resilience, we don't necessarily need to outrun threat actors as much as we
need to outrun complexity. And that's the big problem for us to solve in
cybersecurity.
Benjamin
Wittes: I want to
come back to the complexity point, but what's the solution to the vendors
introducing risk problem? I mean, you can imagine a sort of market of vendors
to protect against the security flaws of cyber security vendors, and then it's
kind of vendors all the way down. Is the solution for the CrowdStrikes of the
world to be more careful? Or is there some more systemic way we should be thinking about it when, you know, companies hire a cybersecurity vendor, to think through the risk that that vendor is going to introduce?
Katie
Moussouris: Well, you
know, in the case of CrowdStrike, they definitely needed to follow secure coding practices. Secure coding practices include dynamically testing anything that changes the operation of your software, including a configuration or channel file like they had, and they did not perform that test. So, you know, having any vendor who is writing code try to write it more securely from the beginning and follow best practices, including dynamic testing, is something they should all be making sure their processes cover.
I think
Microsoft had already done its part where, you know, when it allows third
parties to have drivers that are kernel drivers like CrowdStrike did, the
kernel driver itself goes through a bunch of different tests before it's
certified by Microsoft. So Microsoft was already aware that, you know, if we're
going to allow third party kernel drivers, we do need to put them through a
series of tests. The problem with this model was that, you know, again,
CrowdStrike was making some choices about where it was executing these volatile
files that change every day. And you know, all of these things combined
basically created this perfect storm that we all lived through.
Benjamin
Wittes: So, I want to
come back to complexity. You know, I've been hearing this argument, complexity
is the enemy of security, for well more than a decade. And everybody seems to
accept it. And it doesn't seem to alter anybody's behavior. Like, I don't know
any software vendor who's like, you know, we're not going to introduce this new
feature, because it would just add to the attack surface. And so, you know,
users can have a phone that doesn't send pictures. You know, I just don't know
who the actor is who's refrained from adding complexity to the
system. You know, even Signal, right, keeps adding new features, right? And
they're the ones who were supposed to put security first and for all I know, do
so.
And so, the
observation that complexity is the enemy of security does not seem, at any level of the system, to operate as a resistance to adding complexity. And so, I want to ask you to try to square that circle for me. We all know this, we all believe it, and it doesn't change our behavior. What good is the observation if it doesn't cause us to have a simpler internet, simpler attacks, or lessened attack surfaces, all of which amount to fewer features?
Katie
Moussouris: Well, I
think you took a walk around the actual cause of complexity, which is consumer
demand and market, you know, basically the market forces at work here. People
aren't going to buy things if they don't keep getting new features. And I think
that the counter to that and the way that we try and address that and balance
it out is there need to be some constructive ways to hold organizations
accountable and liable for negligence in building their software.
And I'm not
talking just blanket software liability, which is not feasible for a lot of reasons, not the least of which is its potential to stifle innovation and, you know, favor the incumbents in the software industry. But I do think that, you know, complexity's cause is market forces, and I do think there might be some market forces that could be applied in accountability for software negligence.
And one example
of that, you know, could be a liability, you know, software liability
structure. But I would say that, you know, the investors would have to be
liable as well. The folks who are pushing for, you know, getting as many users
as possible before hiring a security team. That is the status quo right now in
building new software companies. And you know, I hacked Clubhouse, the audio
social networking app. They had five employees at the time that I
hacked them, but they had no security team. They did have a bug bounty and they
had a hundred million dollars of VC money in the bank, and 10 million users
they were responsible for. So all of the incentives are aligned towards user
acquisition, feature build-out, complexity build-out, and insecurity. And that's the part where the incentives have to work as pressure to stop that from going out of control.
Benjamin
Wittes: So you
mentioned earlier that your hacking days were over and that you weren't a
professional hacker anymore. I would be remiss as an interviewer if I didn't ask you: what were you doing hacking Clubhouse?
Katie
Moussouris: You know,
Ben, hackcidents happen. I can't, I cannot lie. This sometimes happens, right? Look,
when you know how to spot bugs, sometimes you just kick the tires a little bit
and a bug falls out. So, that was pretty much what happened with Clubhouse.
It was literally
in the middle of an update. I happened to have two phones. I installed the new
version on a new phone while I was still dialed into an audio room on the old
phone, using the old software. And I used the new phone to leave the room, but
I was still dialed in, and so I was a ghost. I was able to speak and disrupt
the room, I was able to listen without being observed, and there was no way for
an administrator to kick me out. So that was truly a hackcident. I mean, you
cannot blame me for just, I mean, wiggling it a little bit, and a bug fell out.
Benjamin
Wittes: But let's
play with it. So, when you noticed this bug, walk us through what happened. So, you hackcidentally noticed that this darling of Silicon Valley, in the middle of this update, has a problem. What did you do?
Katie
Moussouris: Well, as
the coauthor and coeditor of the International Standards on Vulnerability
Disclosure, I followed the standards and I attempted to identify a security
contact. That took me a couple of weeks. They really did not make it easy.
Finally, I found that they did have an email address. And I emailed them,
essentially asking if this was the right place to report a bug to them. I got
an autoresponder. And by the time I got a person, this all had taken several weeks. And meanwhile, this bug was still active while I was simply just trying to hunt down the right contact.
Finally, I got a
person responding saying, yes, we have a bug bounty program. Please go over to
this bug bounty platform and register. Now, I don't register and report bugs
through bug bounty platforms because they typically have a non-disclosure
agreement. If you sign up and report through here, you're agreeing not to
disclose the bug until it's fixed. And for me and for a lot of researchers, I'm
not interested in the bounty. I'm interested in getting the bug fixed. And if that means that, you know, after some reasonable period of time, I have to
publicly disclose it myself so that users understand their risk, then I'm going
to go ahead and do that. So I refused. I kindly turned down their bug bounty
program offer.
And I said,
look, if it qualifies for a bounty, I'm going to donate it anyway, which I did.
And you know, I just want to see this get fixed. So back and forth happened.
Finally, I get on a Zoom with, it turns out, the CTO and co-founder of Clubhouse.
And I was quite surprised. It was in that call that I asked, you know, well, is
there someone from your security team you want me to talk to? You know, you
really didn't have to take this call yourself. And he said, well, I'm handling
security right now. And I realized the company was a lot smaller than its 10
million user footprint would lead you to believe. At that point, I asked him
how many folks he had. And he said, well, we're hiring right now, but
there's only five of us. And I was shocked. I was floored.
Anyway, I
coordinated the disclosure of the issue, you know, they released a patch. I
took a look at it, and my same attack vector of using two phones didn't work. However, I knew, as a hacker, that they probably didn't fix it all the way, you know, and that they just kind of addressed that one vector. So when I wrote up a blog about it, I said, well, it appears to be mitigated, but it might not be fixed. And they didn't really like that in the
blog, and I said, well, I'm not going to spend time writing a custom tool to
bypass your user interface and talk directly to the server and see if I can
still carry out the attack. But, you know, I will say that it appears to be
mitigated.
You know, long
story short once I released the blog and a Wired article came out about it,
people came out of the woodwork telling me that they had tried telling Clubhouse about this bug, some of them, you know, close to a year before. They had discovered it, you know, maybe just switching phones and things like that. So non-security practitioners could find this flaw, and they were ignored. I think I wasn't ignored because I did let them know that I would publicly disclose it. And I did have to pull a little bit of 'do you know who I am' and link to a video of me describing the international standards that I was attempting to follow, since they were not, you know, making it easy for me. So
that's kind of the story of the hackcident that was Clubhouse. But yeah.
Benjamin
Wittes: And how
typical is that? When you described early on companies that were essentially
trying to outsource their security through bug bounty programs, this is a
picturesque example of that, and I suspect you had it on your mind. Is this a
pervasive practice now in the industry?
Katie
Moussouris: It really
is. And honestly, venture capital backed startups, as soon as they start
gaining a little traction, they'll think to themselves, well, I don't really
want to make my next hire a security person. I want to make my next hire, you
know, a developer, a marketing person, a salesperson. So those are the jobs
that the VCs are telling them to hire for. Some of them do realize they need to
do something about security. And they figure a bug bounty is a quick and dirty
way to point out the most obvious flaws.
They're not
necessarily wrong about that, but then who's going to fix them? And who in that
skeleton crew is going to take time out of feature development to go back and
potentially even re-architect the whole solution from the ground up? They don't
have time for that. So they get into it sort of backwards.
And when
startups come to us and tell us that they want to start a bug bounty and they'd
like to know, you know, how to do it, we start them with a maturity assessment.
And they usually don't like what they get back from that because it says you're
not ready. You can't handle the truth. You're not ready for this. And, you know, you need to do some homework first and, you know, build up
your core strength before you, you know, go in and try and be a professional
bodybuilder, you know?
Benjamin
Wittes: So, as the, not the mother of bug bounty programs, but the person who really created them, or one of the people who really created them at scale for, you know, major corporations and entities: when you look at this development, do you feel like you've created a monster? That you created this tool that's really useful for what it's useful for, but it was never meant to be a replacement for having serious internal security practices, and it's being used for this effect that it's really not going to serve?
Katie
Moussouris: Well, you
know, I will definitely take credit for popularizing bounties at scale. But the
creating a monster part, I think, you know, the collective circumstances under which bug bounties became really popular were, you know, these VC-backed platforms. And if you look at who the VCs were—
Benjamin
Wittes: Pause a
minute and explain these, because when you first explained these platforms to me, they kind of blew my mind. What is a bug bounty platform?
Katie
Moussouris: A bug bounty platform is a ticketing system with ease of payment built in and
some triage services. Meaning, you know, the bugs will come in and some workers
at the platform will assess whether or not those bugs are real or whether
they're duplicates of other bugs that have already been received. And then they
will throw that bug over the fence to the vendor. And the idea is that only
valid bugs are getting paid for and that the vendor receiving the bugs, you
know, doesn't have to separate the signal from noise. So that's what those
platforms do.
Benjamin
Wittes: So it's
basically eBay for exploits and bugs.
Katie
Moussouris: Yeah, sort
of, but it's more like, you know, a bug reporting system. So just, a ticketing
system of any kind, right? You know, you have customer support ticketing
systems. You have, you know, various ticketing systems for, for getting issues
resolved. It's just a ticketing system front end. And the services on top of
it, usually offered by the bug bounty companies, are very narrow. They are just
initial triage. Is it or isn't it a bug? And is it or isn't it a duplicate? And
does it get rewarded or not? And then the ease of payment is built in.
So, but getting
back to what has happened in this ecosystem, I think a large part of it is that
the venture capitalists who backed these bug bounty platforms are the same ones
that back other gig economy platforms and marketplaces like Uber and Lyft and
Instacart. These are the same exact VC partners that said, why don't we do, you
know, a marketplace for gig economy and cyber security? Unlike Uber or Lyft or
Instacart or any of those, bug bounties are actually the worst deal for the gig
workers. Because if you think about it, if you are an Uber driver and you
accept a ride and you go ahead and do the work, spend the gas money, you know,
all of that stuff and deliver your passenger to the airport, you will get paid.
If you are a bug
bounty hunter, it doesn't work like that. You can do all the work, find the
bug, write the report. And if you are not the first person to find that bug and
report it, you don't get anything. So it's like getting to the airport and
finding out you weren't the fastest to get to the airport, so therefore
you're not going to get paid. So it's the worst gig economy job there is, and
that's part of, you know, how it's grown in a poor direction.
And then the
other part is that it hasn't improved security outcomes and security maturity.
And that part, again, is, I think, the focus on growth and getting people to do
bug bounties and vulnerability disclosure programs, which are just bug bounties without the cash, right? It's just with a thank you or a t-shirt or whatnot. It's getting people to do these programs before they are ready and then wondering, you know, why the industry hasn't really improved in terms of its cybersecurity maturity. I don't know, maybe
it's something where we're going to hit a turning point with not just gig
workers, you know, and labor rights, but also with expecting more out of
security practices in terms of outcomes and measurable maturity.
That's certainly
something that, that has been lacking in the practice of security. It's kind of
a scattershot practice. There are all these best practices, but nobody is
measuring best outcomes. Where should you be in terms of your security
maturity? How many incidents or bugs of a certain type should you have, you
know, as an organization, if you're actually handling yourself properly? Nobody
expects zero bugs, but it's how you react to them and, you know, your own
internal resilience that's not really being measured right now.
Benjamin
Wittes: So, you're
arguing in effect for a bug bounty program as an integrated part of a larger
security apparatus that is designed to catch what that larger security
apparatus misses rather than to substitute for a kind of end-to-end thinking
through how we're going to handle the vulnerabilities in the software that
we're creating.
Katie
Moussouris: That's
precisely it. And, you know, proponents of 'bounty first, ask questions later' will say, well, no, no, it's a very cheap and easy way for organizations that are new to security to get an idea of where they need to remediate first, and then they can build out from there. But that last part, 'and then they can build out from there,' typically doesn't happen.
You know, I'll
give you an example: Zoom at the beginning of the pandemic. We were already
working with them before the pandemic started to improve their security maturity, because they were using both of the major bug bounty platforms that are out there, and they found they were not able to keep up with the volume of legitimate bugs that were being reported and needed to be fixed. So they had a broken process on the inside, and they had reached out to us saying, we're told you're the only ones that help with this part. You know, we're basically the ones that guide them through that last mile of how do you get bugs fixed and how do you remediate the process that led to the bug in the first place.
So we started
working with them, then the pandemic struck. And much like getting overwhelmed
in emergency rooms and hospitals, Zoom's intake of bugs shot through the roof and it was a huge deluge of reports. And so we, in about 10 weeks, got them a 40 percent reduction in their, you know, caseload of bugs, their bug volume, by getting a lot of these issues fixed. And we put processes in place before we left, such that they'd be able to keep up with the volume. But they were a publicly traded company that was running on two separate bug bounty platforms, and they still could not keep up. And that was because of that missing internal plumbing: they had a front door, and the bugs were piling up just on the other side of it.
Benjamin
Wittes: So like every
good thing in cybersecurity, good thing does not equal magic bullet.
Katie
Moussouris: Correct.
And also, spending a ton of money in cybersecurity doesn't necessarily mean
you're more secure. It just means you spent a lot of money.
Benjamin
Wittes: We are going
to leave it there. Katie Moussouris, you are a great American and a super
diverse and interesting mind, and it is always a pleasure to talk to you.
Katie
Moussouris: Great
talking to you too. Hopefully next time we can go on a field trip and shine
some lights on the embassy.
Benjamin
Wittes: The
Lawfare Podcast is produced in cooperation with the Brookings Institution.
You can get ad-free versions of this and other Lawfare podcasts by
becoming a material supporter of Lawfare using our website,
lawfaremedia.org/support. You'll also get access to special events and other
content available only to our supporters.
Have you rated
and reviewed the Lawfare Podcast? If not, please do so wherever you get
your podcasts and look out for our other podcast offerings.
This podcast is
edited by Jen Patja. Our theme song is from Alibi Music. As always, thanks for
listening.