Cybersecurity & Tech

The Lawfare Podcast: Bryan Choi on NIST's Software Un-Standards

Alan Z. Rozenshtein, Bryan H. Choi, Jen Patja
Thursday, March 7, 2024, 8:00 AM
Discussing NIST's history in setting information technology standards

Published by The Lawfare Institute
in Cooperation With
Brookings

Everyone agrees that the United States has a serious cybersecurity problem. But how to fix it—that's another question entirely. Over the past decade, a consensus has emerged across multiple administrations that NIST—the National Institute of Standards and Technology—is the right body to set cybersecurity standards for both the government and private industry.

Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, spoke with Bryan Choi, who argues that this faith is misplaced. Choi is an associate professor of both law and computer science and engineering at The Ohio State University. He just published a new white paper in Lawfare's ongoing Digital Social Contract paper series exploring NIST's history in setting information technology standards and why that history should make us skeptical that NIST can fulfill the cybersecurity demands that are increasingly being placed on it.

Please note that the transcript was auto-generated and may contain errors.

 

Transcript

[Audio Excerpt]

Bryan Choi

That's the basic framework that NIST has adopted for cybersecurity, software security, and AI. They've said, “You shall plan out,” right, “what are the risks? You shall then measure or detect when the risks occur, and then you shall remedy or mitigate the harm that arises once these events happen.” And it's a dramatic shift, right? It's a pivot from the types of standards that NIST was pushing out in the ‘60s, ‘70s, and ‘80s, where they were really trying to standardize and unify the way that software developers did their jobs. Now it feels like it's a pluralistic “do whatever you want to do, and we're going to try to come up with vocabulary that unifies what you're doing.” But it's not actually trying to tell you to do the same thing.

[Main Podcast]

Alan Rozenshtein

I'm Alan Rozenshtein, Associate Professor of Law at the University of Minnesota and Senior Editor at Lawfare, and this is the Lawfare Podcast for March 7th, 2024.

Everyone agrees that the United States has a serious cybersecurity problem, but how to fix it, that's another question entirely. Over the past decade, a consensus has emerged across multiple administrations that NIST, the National Institute of Standards and Technology, is the right body to set cybersecurity standards for both the government and private industry. My guest today argues that this faith is misplaced. Bryan Choi is an associate professor of both law and computer science and engineering at The Ohio State University. He just published a new white paper in Lawfare's ongoing Digital Social Contract paper series, exploring NIST's history in setting information technology standards, and why that history should make us skeptical that NIST can fulfill the cybersecurity demands that are increasingly being placed on it.

It's the Lawfare Podcast, March 7th: Bryan Choi on NIST's Software Un-Standards.

So Bryan, I want to start actually with a biographical question. You studied computer science in college and you have a joint appointment in the CS department at Ohio State. So you bring a lot of technical knowledge to your legal scholarship. It's honestly something that I, as someone who also writes in this field, am really envious of. And I want to ask, what do you think your technical expertise contributes to how you do your legal scholarship when it comes to law and tech, and in particular the regulation of software, which has become one of the things you're best known for? And also I want to ask more generally, do you think that there's enough technical expertise in legal academia to equip law professors and other scholars to deal with all the tech regulation issues that are here now and will only increase going forward? Or to put it another way, if you were giving me advice, right, as someone who doesn't have a CS background, but spent a bunch of time on Codecademy because he thinks it's fun, is that enough for me to spend the next 20 years of my life writing about AI regulation, let's say?

Bryan Choi

Yeah, those are two great questions, and let me take them in turn. On the first question, we often talk in law school about how we train our students to think like a lawyer. That's like canonical advice. And I think it's no different with computer science: you learn to think like an engineer, or you learn how to think like a programmer. When I went and started doing computer science and programming, it felt like I was learning a new language. And then when I went to law school, it felt like I was learning a whole other language. And so there is a benefit to really doing that formal training, because it then gets you in the mindset of, well, how do I build these systems? It doesn't feel like magic. It's more really the steps in which you have to think through the problem and break it down into modules, for instance, and then what can go wrong. How do these systems fail? You bring that kind of mindset and experience. So I think there is a lot of value to having that kind of training.

Now, you asked a question: could a casual person taking some Codecademy or other programming classes also have that mindset? I think sure. It depends on how much experience you have. Of course, your mileage may vary, but certainly I think any amount of experience or exposure with that kind of problem-solving aspect is helpful, just as there are people who haven't gone to law school, but have been around lawyers or have been thinking about policy questions. And to a certain extent, of course, they can also think through these problems in a useful, constructive way too.

The second question you had: are there enough people? Of course, the answer is never. You can always have more. Hiring deans, take note: you can always hire more of us, more people who are into this.

Alan Rozenshtein

That's right. Yeah.

Bryan Choi

And increasingly, right--James Grimmelmann, I think, does a great job of doing this--but software is everywhere. It used to be this niche field. Maybe it's just internet law. Maybe it's, is it even a class at all? But now, increasingly, it seems like it touches on every aspect of our daily lives. And so does having some technical knowledge help us with administrative law? Does it help us with environmental law? All sorts of problems are affected by software. And so when I talk to my engineering undergraduates and they ask, “What can I do with a technical degree if I go to law school?” it's like, anything, right? You don't have to just be a patent lawyer. You could really do any type of law you want, because that kind of training, I feel, is just so valuable these days.

Alan Rozenshtein

Okay. So let's get into the paper. And before we talk about NIST and what it can and cannot provide, let's talk about the problem first. At a high altitude, how are we doing? We can define “we” as the software industry or the government or society generally. How are we doing when it comes to cybersecurity? And is the problem getting worse or better? And how can we tell?

Bryan Choi

So the problem of cybersecurity seems to be getting worse in some ways, and being managed in other ways. There was just a news story about a ransomware gang being taken down. We've had those kinds of stories periodically. And of course, another gang comes along and takes its place. So it feels a little bit like whack-a-mole. But in 2002, Congress passed FISMA to try to help manage this problem of information security. And there are lots of questions of how do we get agencies to comply and purchase better software.

Alan Rozenshtein

And just to jump in, for our listeners who may not be familiar, what is FISMA?

Bryan Choi

FISMA is the Federal Information Security Management Act, I believe is the acronym. And it was an effort to improve information security management in the federal government. Now that's been expanded. So in 2013, more than a decade later, President Obama said, “Cybersecurity's gotten only worse!” So even after having all this kind of paperwork, compliance, trying to get the federal government in compliance with better security practices, the cybersecurity problem was only getting worse. And Melanie Teplinsky has some great writing on that topic. Then we have a bunch of attacks. We have a bunch of problems. President Obama says, “Well, let's try to figure this out through Congress.” Congress stalls, doesn't pass any legislation. And so President Obama then does it by executive order. He calls on NIST to create a cybersecurity framework, and NIST goes ahead and complies, creates a cybersecurity framework. And we're still circling the drain. The problems are not going away. If anything, they seem to be ever-present. Where are we in the big picture? It seems like there's a lot of energy, a lot of effort, a lot of information-sharing and coordination, and law enforcement actions. But the size of the problem is so huge. It just keeps metastasizing, and new problems continue to arise and the old problems don't fully go away. So I would say that I think this is a fruitful area to research in, because it doesn't seem like this is going to be solved anytime soon.

Alan Rozenshtein

It's certainly a full employment program for people like you and me who teach cybersecurity law. But I want to stay on this for a second and dig deeper into why this is such a hard problem. I always think about it like, look, when I buy a toaster, I'm pretty confident it doesn't blow up. When I go on an airplane, I can be pretty confident it's not going to fall out of the sky. Now, it turns out a bulkhead may fall off. That's a recent problem. But generally we've gotten very good at security and safety for just the vast majority of devices and practices. And yet there just seems to be something about cybersecurity and, more generally, software breaking in all ways, that seems to defy our ability to get our arms around it. And it's not like it's a new problem. This isn't a technology that's five years old. We've had these problems since the ‘50s and ‘60s. I know you've done a lot of great work in other articles touching on this. I'd love to get your thoughts generally on what it is that seems to make securing software not just a hard problem, but a super hard problem?

Bryan Choi

Sure. Yeah. This is going back to older work, not the current paper, but software is a double-edged sword. If you wanted to limit the plasticity and the capacity or the capability of software, you could do that, and you could build pretty robust programs, and you could check them, and do all the verification that you wanted, but they would be very limited programs. And the double-edged sword is that we like software because it enables so much complexity, enables so much functionality, because of the abstractness of it. It's a construct that is abstracted away from physical constraints. You can kind of world-build. That's the fantasy: that you can really build whatever you want in software and code. And because of that freedom, it opens up a level of complexity that is not typical in physically manufactured goods or products. That's why a toaster, right, is a limited object. It only does a very narrow set of things. Jonathan Zittrain has a wonderful book on that, “The Future of the Internet and How to Stop It.” He talks about generativity, and toasters are his leading example of something that is a single-use item. It does one thing very well. And if that's the case, you can product-test it, and you can fix all the problems with it, and then you don't expect it to blow up. But software is a different order of magnitude of complexity. And so you can't actually troubleshoot it or test it in a comprehensive way that gets you that kind of assurance.

And so that's the principal problem. And what the software industry has done as a result of that problem has been to cut corners. You have to cut corners if you can't actually do comprehensive testing. And so the question is, well, how do you cut corners? And it turns out they just kind of gave up and they said, “We're not going to go about this in a very systematic way. We're going to just ship product and we're going to fix problems on the backend, in an iterative manner.” And so that's why we have all these software updates on a regular basis. That's what's happening, as you're saying: “Because we can't get this perfect, we're just going to ad-hoc it and then we're just going to put out fires as they come up.” And because there's no systematic way of doing that, these problems continue to arise. And then when you fix it in that iterative way, you might fix some old problems, but you might introduce new problems as well.

Alan Rozenshtein

So would it be fair to say that the issue is that software keeps growing in complexity--because processing power increases, allowing us to do more things; the sophistication of our languages and tool sets increases; and the demands increase, right, as Marc Andreessen said, “Software is eating the world,” so we want software to do more stuff. So as software increases in complexity, the rate of increase in, let's say, those corner cases that you want to test has just outstripped whatever advances we have in testing. Presumably--tell me if I'm wrong, I hope I'm not wrong--in 2024, the ability to test software is better than it was in 1970. It's just not better enough relative to how much more complex and how much more mission-critical the software of 2024 is compared to that of 1970.

Bryan Choi

Yeah. And it's actually a computational problem. There's not enough time or processing power to really test every corner case, every branch of the computer program or the software system. And so you have to come up with some heuristic to do some subset of that.

Now there's an interesting question of whether, with machine learning and advanced AI techniques, that can be a step forward. Is that a silver bullet that will solve the testing problem in a different way than we've tried before? Maybe; the jury's out on that. There's been some, at least, preliminary work suggesting that it's good at detecting faults and errors, but at least in the traditional sense, you're never going to get comprehensive testing. So the question is, can you make do with some heuristic or some other shortcut to get good enough coverage of the types of errors that might commonly arise when you're running the system?
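To make the scale of that computational problem concrete, here is a minimal sketch, with invented numbers, of why exhaustive path testing collapses and why sampling heuristics are the fallback. Nothing here comes from NIST or from Choi's paper; it simply illustrates that a program with n independent branch points has on the order of 2^n execution paths.

```python
import random

def num_paths(num_branches: int) -> int:
    """A program with n independent binary branch points has 2**n execution paths."""
    return 2 ** num_branches

# Exhaustive testing blows up fast: 64 independent branches already yield
# more paths than any test farm could ever run.
for n in (10, 32, 64):
    print(f"{n} branches -> {num_paths(n):,} paths")

# The heuristic compromise: test a tiny random sample of paths and hope it
# covers the errors that commonly arise in practice.
def sample_paths(num_branches: int, budget: int, seed: int = 0) -> list[tuple]:
    rng = random.Random(seed)
    return [tuple(rng.choice((True, False)) for _ in range(num_branches))
            for _ in range(budget)]

suite = sample_paths(num_branches=64, budget=1_000)
coverage = len(suite) / num_paths(64)
print(f"tested {len(suite):,} of {num_paths(64):,} paths ({coverage:.1e} coverage)")
```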

Alan Rozenshtein

It would be quite pleasingly ironic if the way to figure out how these very complicated software systems are working is to take an even more complicated, even less scrutable system and bolt it on top. But we shall see. So you talked about the heuristics, right, that we have to use in lieu of being able to rigorously test. So let's talk about who's going to create those heuristics. Who's going to recommend them? I think that gets us to the heart of the paper, which is your really interesting analysis of what NIST can, and, perhaps more importantly, cannot do in this space. So let's talk about NIST, the National Institute of Standards and Technology, which is a subagency under the Department of Commerce. Just briefly, what is NIST, what is its history, and why is everyone talking about it recently?

Bryan Choi

Yeah, so let me start by actually telling you how I got to this project, the motivation of it. So I live in Ohio. I would say around 2017, 2018, there started to be talk about this Data Care Act, which was ultimately enacted in 2019. And so when I was starting here, people started to ask, is this a good thing? So what Ohio wanted to do was, it wanted to protect small businesses from liability for data breach litigation. And they said, “If you comply with one of these now-recognized cybersecurity frameworks--the NIST cybersecurity framework being one of the more prominent examples of these--if you comply with one of these cybersecurity frameworks, then you'll be absolved of liability. It's a safe harbor; you don't have to worry about it.” And it was seen as an incentive to comply with one of these cybersecurity frameworks. And before that, I hadn't really been paying attention to NIST. And I said, “Well, how would this work? What would you actually do differently in order to get this safe harbor?” And I was very skeptical of it. And someone would say, “Well, don't you trust NIST? NIST is great! So don't you think this could work? What do you think about this?” And I said, “I don't know what I think about this.” So I started to look into it. So then I approached the problem with a bit of skepticism. I said, “What could NIST do? I haven't really heard of NIST doing anything internet-wise for a while.” And as I started to dig into it, I realized actually NIST has done a lot. So the National Institute of Standards and Technology got its start in the early 20th century, in 1901, originally named the National Bureau of Standards. And it was a very respected organization. They came out with all kinds of weights and measures and metrics and a whole bunch of basic science research. This was the federal government at its best.

Alan Rozenshtein

These were literally the people that were like, “This is what an inch is,” right, or exactly how many centimeters or whatever.

Bryan Choi

Or these are the people who said, “You may think you know what an inch is, but we're going to standardize it, and you're not going to have each state determine what an inch is or what measures people can use.” If you go back to the 18th and 19th centuries, each state had its own conception of which measures were ideal. But in order to promote international competitiveness, the federal government said, “We want to have standard measures. This helps with both buying products and selling products internationally.” And so that's why NIST is situated in the Department of Commerce, because this was seen as a way of promoting international commerce.

Okay, so NIST, very well-respected, helps with the war efforts and so forth, gets into basic science, and anytime there's a big scientific or technological problem, the federal government would look to NIST to lead those efforts. And when computing technologies started to become prominent in the 1940s, again, NIST took the lead there. So they helped build some of the first computers. They helped the federal government navigate computing technology. They were IT support for the federal government; that was one of their major roles. And anytime there was a project like the census or any kind of big computing problem, they would help consult on those kinds of problems as well.

In 1965, NIST was tasked with coming up with federal information processing standards, and this was, again, a budget issue. The federal government decided that it was wasting too much money on redundant code and computer management. So why not consolidate that under NIST, and have them come up with standards that would make the federal government operate more efficiently? That was the main premise of the mission.

Alan Rozenshtein

Yeah. So it's great because you've gotten up to the ‘60s, and you write in your paper about how NIST played an important role in cybersecurity standards in the ‘60s and in the 1970s, but that its influence has waned quite dramatically since then. So what happened? Why were the ‘60s and ‘70s NIST's high watermark in this space?

Bryan Choi

So when it first got started, there was a lot of faith in government. I would say this was the high watermark of federal agency standard-setting. You think about highway safety, traffic safety, environmental safety--all these major legislative actions were happening in the 1960s. And so the federal government dutifully went off and started creating a bunch of standards. And this was no different. The problem, I would say, is that it just took so long and it was so expensive. NIST estimated it was spending about 80 percent of its budget coming up with these standards, developing them, and getting them through the rulemaking process. And there were just so many problems it needed to tackle. And if it's taking three to five years per standard to get it across the finish line, meanwhile the software industry in the 1980s was taking off. This is the era of microcomputers; Apple and Microsoft were competing; a whole bunch of innovation was happening in the computing space. And for NIST to try to keep up with that was just too ponderous. You think about all the stereotypes of big government being too slow and a bit too inefficient, and in some ways that was true. NIST was very careful, very diligent about its task, but in some ways that strength was also a weakness.

So in the 1980s, they continued to promote, push out standards. They were actually very productive in the early to mid-1980s. But by 1987, there was a sense that this was not the way. The Department of Defense was starting to say, “We can't manage all these standards. We need to privatize them.” There's a famous report written by a committee led by Fred Brooks that said, “We really should be pivoting away from writing our own code unless it's strictly necessary. We should be buying off-the-shelf commercial software, which is cheaper, better, more effective, does all the things that it's supposed to do. And to the extent that there isn't off-the-shelf software, we can write custom software, but that should be the exception, not the rule.” It took from about 1987 to about 1995 before that became federal policy. So it still limped along for a little while, but it was in those moments that the federal government was starting to pivot away from being the leader in computing standards.

For the rest of the ‘90s, NIST is not engaged in standard-setting. It's actually withdrawing from those activities. And so you think about the early ‘90s, the early internet era, Netscape and all that. NIST is in the background. They're still participating in standard-setting bodies, but they were not taking the leadership role the way they had been in the early decades.

Alan Rozenshtein

Now, as you write, more recently NIST has been engaged in setting some computer standards, primarily in cybersecurity, software development, and AI. How has that gone, and what differences have there been, to the extent there have been any, between how NIST has done its job more recently and how it would have done its job in the ‘70s and early ‘80s?

Bryan Choi

There's a sense of urgency that's forced NIST's hand. The White House across multiple administrations--Obama, Trump, Biden--has suddenly decided that NIST is this expert agency that can wave a magic wand and come up with standards that will solve problems in cybersecurity and AI and software quality. They haven't given it quite the budget, I think, that NIST would need, and the timelines have been extraordinarily short. It's like, within a year, you will come up with standards that solve all of AI, for instance. You'll make it trustworthy, you'll make it reliable, safe, fair, transparent, all these things. And you're going to do it within 12 to however many months. And it just seems so unrealistic. Now, NIST can't complain. They can't say no. So they have to do something. And what they've done is they've just adapted the FISMA framework that I mentioned before. So FISMA was this risk management framework. It says, “Evaluate and assess the risks that you might have to your information management systems. And then once you have a sense of the risk landscape, you'll come up with a plan to manage those risks. And then you will act accordingly if any of those risks come to bear.” That's the basic framework that NIST has adopted for cybersecurity, software security, and AI. They've said, “You shall plan out what are the risks. You shall then measure or detect when the risks occur, and then you shall remedy or mitigate the harm that arises once these events happen.” And it's a dramatic shift, right? It's a pivot from the types of standards that NIST was pushing out in the ‘60s, ‘70s, and ‘80s, where they were really trying to standardize and unify the way that software developers did their jobs. Now it feels like it's a pluralistic “do whatever you want to do, and we're going to try to come up with vocabulary that unifies what you're doing.” But it's not actually trying to tell you to do the same thing. So it's almost the exact opposite of what we would think a standard does.
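For readers who want to see the shape of the assess-plan-act loop Choi describes, here is a minimal sketch. The risk register, the fields, and the severity numbers are all invented for illustration; they mirror the FISMA-style framework in spirit only, not in any detail NIST specifies.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: float  # assessed probability, 0..1 (invented scale)
    impact: int        # 1 (minor) .. 5 (severe) (invented scale)
    response: str      # the planned mitigation

    @property
    def score(self) -> float:
        # "Once you have a sense of the risk landscape" -- rank the risks.
        return self.likelihood * self.impact

# Step 1: evaluate and assess the risks (this register is hypothetical).
register = [
    Risk("phishing", likelihood=0.6, impact=3, response="staff training, MFA"),
    Risk("ransomware", likelihood=0.2, impact=5, response="offline backups"),
    Risk("stale dependencies", likelihood=0.8, impact=2, response="patch cadence"),
]

# Step 2: plan -- address the highest-scoring risks first.
# Step 3: act accordingly if any of those risks come to bear.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.name}: score {risk.score:.1f} -> {risk.response}")
```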

Alan Rozenshtein

I have to say this all sounds quite underwhelming, and I suspect you agree, because the title of your paper is “NIST's Software Un-Standards.” These aren't actually standards, right?

Bryan Choi

They're not standards. NIST, in fact, disclaims that they're standards. They say, “It doesn't make sense to say, ‘I am complying with these frameworks,’ because compliance means very different things to different parties.” It's what I call a choose-your-own-adventure framework. You get to choose which elements you want to include in your plan. And then for each of the elements that you do include in your plan, you get to set the timing or the terms of what it means to do well on that particular term.

So I'll give you an example. When you discover a vulnerability in your software, you should probably remedy that, right? You should patch the vulnerability. Now, the question that comes up is how quickly you should patch your vulnerability, because it may be a very critical vulnerability, or it may not be a critical vulnerability at all. But there's a lot of uncertainty as to how much time you should take between the detection of the vulnerability and the patching of the vulnerability. Now, you would think NIST's cybersecurity framework would give you an answer. Should it be 30 days, 60 days, 90 days, a year, five years, a decade? How long should you have before that known vulnerability gets patched? And the answer is that NIST does not provide any answer. It says, “The organization shall determine for itself what the appropriate timing is,” and the organization shall determine all other aspects of this policy of how quickly you want to patch these vulnerabilities. In fact, the cybersecurity framework doesn't necessarily say you have to have this rule or policy in your plan at all. Presumably, it would be wise to do so. But if you didn't, NIST wouldn't bring the hammer down. They wouldn't say, “Oh, you're non-compliant with our framework.” There is no such thing. So that's why I call it choose-your-own-adventure, because you can really come up with whatever plan you want, and there is no such thing as non-compliance.
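A small sketch of what that choose-your-own-adventure quality looks like in practice. Everything below is hypothetical: the class, the field names, and the default windows are invented, and none of it comes from the cybersecurity framework itself. The point is that every parameter is organization-defined, so any values at all count as following the framework.

```python
from dataclasses import dataclass, field

@dataclass
class VulnRemediationPolicy:
    # The framework doesn't even require this control to be in the plan.
    include_in_plan: bool = True
    # Patch windows in days, by severity. The organization picks these;
    # 30 and 3650 are equally "compliant."
    patch_window_days: dict = field(default_factory=lambda: {
        "critical": 30, "high": 90, "low": 365,
    })

    def within_policy(self, severity: str, days_open: int) -> bool:
        if not self.include_in_plan:
            return True  # no control in the plan means nothing to violate
        return days_open <= self.patch_window_days.get(severity, 365)

lenient = VulnRemediationPolicy(patch_window_days={"critical": 3650})
print(lenient.within_policy("critical", days_open=1000))  # True -- still "compliant"
```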

Alan Rozenshtein

So going back earlier in our conversation, when you talked about how you got interested in NIST, going back to this example of the Ohio law that would provide a safe harbor: if I'm understanding this correctly, the safe harbor wouldn't really mean that much, because how can you even tell if the company is complying with the safe harbor, right? It's either so vague that you can't tell, or it's so meaningless that it's just a box-checking exercise that doesn't actually accomplish the goal. Are you as disappointed with all of this as I am? Or is there some wisdom to this approach?

Bryan Choi

So there's some disappointment, and perhaps there's some wisdom. What it does accomplish is it says, “You must have a plan. You must have some documentation.” And I think that is one of NIST's goals. There are organizations that are doing less than the minimum. They don't even have a plan. And so the way you measure compliance with the framework is that you have created some paperwork that is nominally nodding towards a cybersecurity framework. Now, the disappointing part of it, as you were suggesting, is that the substance could be almost anything, and all you have to do is write at the top “cybersecurity framework” or “cybersecurity plan,” and then presumably you're in compliance or doing something that looks like what NIST wants.

Of course, if you're a good faith actor, that might actually be pretty good. If you genuinely want your organization to be good at deterring cyberattacks, I think there is some reason to hope that the organization will be better after evaluating its risks and then taking some actions to fix the things that aren't good, right? But as lawyers, we're always trained to think about this from the shoes of a bad actor, the bad man--that's Oliver Wendell Holmes' framework. And so if all you care about is avoiding liability, then you're going to do the bare minimum. You're not going to actually improve your organization's security practices. And then you won't bear any of the liability costs either.

Now, my scholarship has tried to impose more of those costs on these software organizations, because I think that tort liability, and legal costs in general, are a great incentive to improve behavior. It's not necessarily the most efficient, and certainly there are alternatives, but I think it's a key component of the overall ecosystem. And so if you're allowing these organizations to escape the liability framework entirely by doing nothing or something de minimis, then I think that cheats the legal system out of a key lever that it has. And so that's why I think it's quite disappointing in those respects.

Alan Rozenshtein

So let's now talk about the lessons that you draw from this historical analysis of NIST, in particular for attempts by the government to use NIST to improve cybersecurity standards. And I'm going to divide it into two parts. One is, what should we do about NIST, or how should we use NIST? And the second is, what should we do generally, because maybe if not NIST, then maybe someone else. So let's talk about NIST first. Do you have any confidence that reliance on NIST will actually advance the ball, or will NIST just do what it's been doing? Again, maybe through no fault of its own, because it's been given a hard job, which is, look, you want standards? We'll give you standards. But it's a little bit like the great joke that the comedian Stephen Colbert once made about due process: “It's no big deal. Due process is just the process that we do.” It's a little bit like that. A NIST standard is just a standard that comes from NIST, even if it's actually an un-standard that doesn't say anything. Where do you think, again, to your point about good faith actors, NIST can meaningfully advance the ball here?

Bryan Choi

I think we have to think about the types of problems that NIST can do meaningful work on. So crypto seems to be one of those areas where NIST is really good.

Alan Rozenshtein

Crypto as in encryption, not cryptocurrencies.

Bryan Choi

That's correct, although maybe it extends to cryptocurrencies. But encryption, cryptographic technologies, seems to be an area where NIST has had impact, and the cryptographic community seems to accept NIST as a reputable go-between or convener. And so why is that the case? I have a hypothesis that crypto may be, for one, more mathematical, more limited in scope. You have one job to do: all you have to do is translate text into numbers, into some kind of encryption. You're doing a translation; that's all you have to do, and do it well. And then also, because it's highly mathematical and, perhaps, more technical, it's a smaller community. To be accepted into the crypto community, you join this cohort. And so there's a greater chance of consensus among that community about what constitutes good crypto. And so if you can find other areas like that, then perhaps there's more hope for NIST in a convening role, or even producing standards that provide consensus and uniformity the way that we would want a standard to do.
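As a concrete example of the kind of narrow, well-specified task Choi is pointing to, here is a minimal sketch of authenticated encryption with AES-GCM, a NIST-standardized construction (AES is FIPS 197; GCM is SP 800-38D). It assumes the third-party Python `cryptography` package is installed, and it is illustrative, not a vetted production recipe.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a NIST-approved AES key size
nonce = os.urandom(12)                     # 96-bit nonce, as SP 800-38D recommends
aead = AESGCM(key)

# "Translate text to numbers": encrypt and authenticate a message,
# binding it to some associated data that stays in the clear.
ciphertext = aead.encrypt(nonce, b"attack at dawn", b"msg-header")
plaintext = aead.decrypt(nonce, ciphertext, b"msg-header")
assert plaintext == b"attack at dawn"
```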

Alan Rozenshtein

And do you think that there are such areas? Are there maybe parts of AI and machine learning that are more technical, rather than these very difficult policy trade-offs between values X, Y, and Z?

Bryan Choi

That's one area where I have some possible optimism, which is, perhaps, AI and machine learning have similar attributes, where it's very mathematical, very technical. It's a very small, closed community of people who are really at the top of the field. I don't mean the people who are installing it on their laptops at home, but the people who are really thinking about the high theory of machine learning and deep neural nets. It's a very small community relative to software development as a whole. And so, perhaps in that area, NIST might have some comparative advantage to be able to develop consensus standards in a way that resembles crypto. That said, AI is a much broader task. It's not like encryption, where you're only doing a single narrow task. AI is being sold and marketed as being able to do almost anything you want, with all the problems attendant to that. And so perhaps that's a reason to have some question about it.

But I think we should at least see some areas where NIST is able to make some progress. So, for instance, NIST has done early work on developing standard data sets, for instance on handwritten digits or fingerprints, and that work was very instrumental in getting machine learning off the ground in those areas. Those data sets played an important role. Could NIST provide some kind of certification or convening role in that sense, in curating certain types of data sets that are more legitimate than, say, these data sets scraped from all of the internet, where we have no idea what's inside and whether the quality is good or bad?
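One of those digit data sets lives on in MNIST, the canonical handwritten-digit benchmark derived from NIST's image databases. Here is a minimal sketch of pulling it down with scikit-learn, assuming the package is installed and you are online (the first fetch downloads the full data set):

```python
from sklearn.datasets import fetch_openml

# MNIST traces back to NIST's curated handwritten-digit databases;
# OpenML hosts the assembled benchmark that scikit-learn fetches here.
mnist = fetch_openml("mnist_784", version=1, as_frame=False)
X, y = mnist.data, mnist.target
print(X.shape, y.shape)  # (70000, 784) pixel vectors and (70000,) digit labels
```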

Alan Rozenshtein

Or maybe benchmarks as well, things of that kind of technical training nature.

Bryan Choi

So perhaps testing benchmarks are another kind of data. Is that another way that NIST could contribute? So you could think of a few areas within AI and machine learning that might play to NIST's strengths, and I'm really interested in exploring that in future work.

Alan Rozenshtein

Okay. So we've talked about NIST, and I think we've been a little hard on NIST, but again, we've given them an impossible problem, and it's better that NIST do what they can do well than be asked to do an impossible thing. But let's put NIST aside, because I want to conclude this conversation by again zooming out and asking, okay, forget NIST. Joe Biden gives you a call: “Bryan, I've made you my all-things-software czar. Fix the problems for me.” Okay, we've talked about NIST, we've learned our lessons. But now, not limiting ourselves to that, what else can we do? And should we use NIST as--and this is what you end your paper on, and what I kind of want to push back on a little bit, because I thought that was very provocative in your paper--should we think of NIST as defining the ceiling of what we can do, whether we do it in NIST or call it the Department of Good Programming, right? Or should we say, “Look, there's something about NIST that limits it. We should make a new thing and be more ambitious, give it an infinite budget.” If what we've learned from machine learning is that you just need more compute, maybe the same is true for regulation. You just need more bodies. What does software czar Bryan Choi do?

Bryan Choi

Yeah. So the other interesting aspect of focusing on NIST for that question is that I think NIST is a really good example of an expert agency. It's probably the canonical example of an expert agency. There has been this tendency in the literature, and it's a recent trend, to say, “We should appoint some czar, some new agency. We should invent a new agency that will solve all our software or AI problems for us.” And in fact, some of the bills that were on the floor in Congress this year include “let's create a national AI Commission.” I've heard of proposals to create an FDA for algorithms or a Robotics Commission, or you hear various iterations of this, as if creating a new agency will be the silver bullet, the magic fix for the problem that has bedeviled us for decades. And I just find that so hard to believe. And I think we should stop talking about a fix in those terms, because I don't think it's productive. We have a whole bunch of existing agencies that have a lot of built-up expertise in their domains, and they have all struggled to work on this problem.

Here, the leading example, I think, is the FDA. The FDA has been dealing with medical device software for decades, and they just don't have a great approach to it. Often it's just that medical device software--that arm of the FDA--is less funded than the drug arm of the FDA, and so the review is a little more cursory anyway. And there's a second question of, once software is approved, what happens to it after that? And the FDA says, “Well, you should update the software, but we might take a look if we think that we should.” It's just such a piecemeal approach to medical device software as is. Now throw AI into the mix, and suddenly the FDA is saying, “Well, our existing approach doesn't work at all. We have to come up with a completely new review plan. We have to tear apart the rule book and come up with some entirely new thing, because AI is so different.” If the FDA is doing that, it suggests they don't really have an answer, even though they've been studying this since at least the early 2000s, and even before that, in the early 1990s, they started looking at this problem. So what is a new agency going to do differently--NIST or the FDA or the FTC are often invoked--than any of these existing agencies? It's going to inherit a bunch of political problems. It'll be its own domain. Once you create it, it'll never die, and then it'll just have overlapping jurisdiction with all of these other pre-existing agencies.

What do I think is a better solution? It depends on whether we think we can get some kind of unanimity or consensus in this area. And again, maybe AI is different, but at least with software, I'm not convinced that we're going to get a definitive answer about what constitutes good software practice. Instead, it's going to continue to be this mélange of approaches; different companies and different developers are going to have their own philosophies about how to develop software and how to test software. And I think we should take a pluralistic approach. So I've argued in other work that a professional care standard would work well for that. That's a bit of an idiosyncratic view that I've promoted. But the advantage that I see with it is that it embraces the existing landscape of software development and then overlays legal oversight on top of that without being too disruptive of the existing practice. So we have some legal oversight--not no legal oversight, the way it has been--and at the same time, we allow software development to continue mostly as it has.

Alan Rozenshtein

And the idea here is, just to be clear, that just as you and I, as lawyers, are professionals in the technical sense--there is a profession, it is self-regulating, there are certain standards, and those standards vary in their level of specificity according to how appropriate that is--so we should think of programmers and system designers also as professionals in that sense, and that we need to have a legal superstructure of liability that encourages them to regulate themselves, and then just hope, right, that they know enough. They're the experts, so if they're incentivized to spend a bunch of time thinking about it, they will figure it out. At a very crude level, is that a decent description of what you'd propose?

Bryan Choi

Yeah, that's right. So we want to incentivize software developers to at least avoid the worst mistakes, just as with medicine. We don't always know what's the best medical care. There could be a range of opinions. You ask different doctors, they might all have a different approach to treating cancer, for instance. The superstructure, the legal oversight, is to say, “All of those approaches are acceptable, as long as you all agree that it's not below the standard of--certainly no self-respecting doctor would recommend this.” And so that component has been missing, I think, in software regulation. Right now we just say, “Hey, you can go to some code camp, have minimal experience, you put out some code and it fails, and, well, nothing happens to you.” There's no kind of self-policing, no minimum baseline of what we consider to be malpractice, right, before the legal system gets involved.

Alan Rozenshtein

And I assume that, just as with medicine, where it's not either/or--we both have professional medical associations that set standards, but we also have an FDA that simply doesn't allow doctors to prescribe certain things until the FDA has signed off on them--we might have a similar situation here, where those things that a regulator like NIST can do, like figuring out what are the quote-unquote “good cryptographic primitives,” they would do, but for the rest of the stuff, you would just have to let the community of practitioners figure it out for themselves.

Bryan Choi

That's a really interesting suggestion for the role of agencies, because I don't want to cut agencies out of the ecosystem entirely. They have a role to play too. And so the question is, what is their role? What's their comparative advantage? One of them is that you are the centralized authority that can set rules. And I guess the fear that I have is that they would exercise that role in a ham-handed way. Take, for instance, password policy. NIST comes out with a password policy and that becomes the policy for everyone--“You shall have special characters and capital letters and so on”--without it necessarily being good policy. So what's a better use of that kind of centralized authority? It could be, as you suggest, that certain software is certified and they say, “That's good enough. We accept it for certain uses,” and medical device software could be an example. There might be other domain-specific software where you need that kind of certification, and if NIST, or some other agency, is able to provide that kind of assurance and update it frequently, periodically--they're not just going to say “once and done”--then perhaps that could be a useful role. That's separate from, say, a standard-setting authority or an enforcement authority, the way that we've thought of agencies. Right now we think of agencies in those terms, like the FTC is going to go after you if you have a bad privacy policy or if you use facial recognition in a bad way or something. There are certain ideas about what an agency should be doing.
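To make the ham-handedness concrete, here is a toy sketch of the composition-rule style of password policy being criticized. The rules below are invented stand-ins for that “special characters and capital letters” checklist, not any actual NIST requirement; notably, NIST's current guidance in SP 800-63B recommends against mandating composition rules of this kind.

```python
import re

# A one-size-fits-all composition checklist (the invented, ham-handed kind).
RULES = {
    "min_length":  lambda pw: len(pw) >= 8,
    "has_upper":   lambda pw: re.search(r"[A-Z]", pw) is not None,
    "has_digit":   lambda pw: re.search(r"\d", pw) is not None,
    "has_special": lambda pw: re.search(r"[^A-Za-z0-9]", pw) is not None,
}

def failed_rules(pw: str) -> list[str]:
    """Return the names of the checklist rules a password fails."""
    return [name for name, ok in RULES.items() if not ok(pw)]

print(failed_rules("Tr0ub4dor&3"))            # [] -- checklist satisfied
print(failed_rules("correct horse battery"))  # fails has_upper and has_digit,
                                              # despite being long and hard to guess
```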

Alan Rozenshtein

Well, I think this is a good place to wrap it up. It's a fascinating topic, a great paper. Thank you so much for writing it and for coming on the show to talk about it.

Bryan Choi

Thanks very much. And I'm looking forward to continuing to explore these topics, so stay tuned.

Alan Rozenshtein

The Lawfare Podcast is produced in cooperation with the Brookings Institution. You can get ad-free versions of this and other Lawfare podcasts by becoming a Lawfare material supporter through our website, lawfaremedia.org/support. You'll also get access to special events and other content available only to our supporters.

The podcast is edited by Jen Patja and your audio engineer this episode was Noam Osband of Goat Rodeo. Our music is performed by Sophia Yan. As always, thank you for listening.


Alan Z. Rozenshtein is an Associate Professor of Law at the University of Minnesota Law School, a senior editor at Lawfare, and a term member of the Council on Foreign Relations. Previously, he served as an Attorney Advisor with the Office of Law and Policy in the National Security Division of the U.S. Department of Justice and a Special Assistant United States Attorney in the U.S. Attorney's Office for the District of Maryland.
Bryan H. Choi is an Associate Professor of Law and Computer Science & Engineering at The Ohio State University.
Jen Patja is the editor and producer of The Lawfare Podcast and Rational Security. She currently serves as the Co-Executive Director of Virginia Civics, a nonprofit organization that empowers the next generation of leaders in Virginia by promoting constitutional literacy, critical thinking, and civic engagement. She is the former Deputy Director of the Robert H. Smith Center for the Constitution at James Madison's Montpelier and has been a freelance editor for over 20 years.
