Bolt-On vs Baked-In Cybersecurity

Herb Lin
Thursday, June 23, 2022, 8:01 AM

Real cybersecurity involves trade-offs in functional requirements.

Cybersecurity locks (methodshop; Pixabay, free for commercial use)

Published by The Lawfare Institute

A few weeks ago, the annual RSA Conference met in San Francisco. The conference is among the world’s largest cybersecurity events, and it thus provides a useful opportunity to reflect on current issues in cybersecurity.

One of the most prominent issues in cybersecurity is that of “baking” security into product development from the beginning, rather than “bolting on” security as an afterthought. A company that uses bolt-on security as its default product development practice is usually acting in accordance with its economic incentives. Effort devoted to security does nothing to advance the functionality of a new product. In an environment in which time-to-market is often the key to marketplace success, it makes a lot of economic sense to fix security problems if and when they manifest themselves after product launch rather than to spend the up-front effort preventing those problems from arising in the first place. A product manager may believe that the probability of a vulnerability being discovered is low, or that the economic loss resulting from its discovery would be low. Thus, the product manager may make what seems to be an economically rational decision to fix only those problems that are both discovered in the field and serious. If the product manager is right, the resulting costs will be lower than those incurred under a security-up-front, or baked-in, model.

But this bolt-on approach comes with important cybersecurity downsides. Fixing a security vulnerability discovered after the product has been finalized often entails revisiting important decisions made at early stages of product development, sometimes with the expensive result that much of the product must be redeveloped from those stages onward.

Bolt-on security leaves users vulnerable to security problems that could have been avoided. Even worse is the security outcome when a product without baked-in security achieves a high degree of marketplace acceptance and success. Security problems latent in a widely accepted product then affect a much larger user base, thus potentially exacerbating the consequences of such problems. And when a security problem does appear that must be remediated, the cost of remediation at the back end is likely to grow as the number of users of the product increases. Once a product’s user base is sufficiently large, there are societal costs that go beyond the sum of individual remediation costs.

It is thus nearly impossible to find anyone in the cybersecurity world who defends bolt-on security. Indeed, wandering around RSA listening to various talks and presentations reveals that nearly every product vendor claims that while others may do bolt-on security, they bake it in from the start.

Really? Let’s unpack that claim. Baked-in security, or security by design, as it is more formally known, calls for vendors to address security problems early in the product development process. Security thus becomes a criterion for product design on a par with other software attributes, including maintainability, reliability, usability, efficiency, adaptability, availability, portability, scalability, safety, fault tolerance, testability, reusability, and sustainability.

Often called the “-ilities” of software, these attributes are nonfunctional—that is, they do not help anyone get useful work done. Instead, it is a product’s functional requirements as they are ultimately implemented that users value. When a product innovator has a great idea, it is expressed in functional requirements as an articulation of what the product is supposed to do for the user. Conceptually and in principle, such expression occurs prior to product design.

To quote Fred Brooks from “The Mythical Man-Month”:

The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds castles in the air, creating by the exertion of the imagination .... Yet the program construct, unlike the poet’s words, is real in the sense that it moves and works, producing visible outputs separate from the construct itself .... The magic of myth and legend has come true in our time. One types the correct incantation on a keyboard, and a display screen comes to life, showing things that never were nor could be. 

In short, it is the manifestation of the functional requirements that animates the product. Whereas the ultimate constraint on what physical systems can do is the laws of physics, the ultimate constraint on what software can do is the human imagination, which is far less constraining.

But reality intrudes when software “-ilities” are considered, and many products have failed to reach fruition because one or more of the “-ilities” were not adequately addressed. And so it is with security, even baked-in security, which is often less than the panacea it is branded as being. When product designers receive the functional requirements from senior management in the C-suite, they are generally obligated to treat those functional requirements as boundary conditions on what the product must do. They are not free to relax or waive functional requirements, even if fulfilling a particular functional requirement may contraindicate good security architecture or practices. In practice, what baked-in security often means is that the product design team accepts the functional requirements and does the best job it can within the established constraints.

But what if the functional requirements themselves pose security challenges? At some level, they must. A rock is entirely immune to cybersecurity challenges, but it is not useful in any way either. To make the rock useful, we have to give it the ability to do certain things—and functional requirements are how we specify those things. Once the requirements-augmented rock can do some useful things, its functionality can be abused. For example, assume that a requirement is that the rock can be dropped only by certain authorized people. Thus, we must develop a mechanism to keep the rock locked to the table on which it usually rests. That mechanism can be activated only by a specific key, copies of which are given to every authorized person. Herb is one of those people. But when Herb loses his key and George finds it, George gains the ability to drop the rock. Herb is trustworthy and would drop the rock only to illustrate to interested students the phenomenon of gravity. But dastardly George would drop the rock to hurt cute, furry animals. In short, the rock and its supporting infrastructure, which together constitute The Rock™, have now become more vulnerable to attack and are thus more susceptible to abuse by the Georges of the world. 
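The rock example can be sketched in code. Below is a minimal, hypothetical Python model (all names invented for illustration) of the key-activated lock: the mechanism authorizes whoever *possesses* a valid key, not a particular person, so the new functionality creates exactly the abuse path the paragraph describes.

```python
# Hypothetical sketch of The Rock(TM): authorization by key possession.
# All names (RockLock, SECRET_KEY, etc.) are invented for illustration.

SECRET_KEY = "herb-key-001"  # copies issued to each authorized person


class RockLock:
    """A rock locked to its table; it unlocks only for the right key."""

    def __init__(self, key: str):
        self._key = key
        self.locked = True

    def unlock(self, presented_key: str) -> bool:
        # The mechanism checks only possession of a valid key, not the
        # identity or intent of its holder: Herb or George, it cannot tell.
        if presented_key == self._key:
            self.locked = False
        return not self.locked


rock = RockLock(SECRET_KEY)
rock.unlock("wrong-key")   # an invalid key leaves the rock locked
assert rock.locked
rock.unlock(SECRET_KEY)    # George presenting Herb's lost key: unlocks anyway
assert not rock.locked
```

The design flaw is baked into the functional requirement itself: “can be dropped only by people holding a key” is not the same as “can be dropped only by authorized people,” and no amount of careful implementation of the former yields the latter.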

Repeated conversations I’ve had with those responsible for security suggest that C-suite leadership rarely considers the cybersecurity implications of product innovation or functional requirements for said products. In this view, the security team is an internal service organization for the C-suite rather than a partner in making high-level decisions about products. The role of a service organization is essentially to salute when given an order and then to do the best job it can possibly do. By definition, orders from the C-suite are not subject to question—even if those orders entail high security risks. This description is admittedly a caricature, but it points to some of the essential features of the relationship between the C-suite and those responsible for security.

To move away from this paradigm, two things have to happen. First, the entire C-suite has to have more than a nodding acquaintance with the basics of security—in particular, they need to know how hard security is to get right—so that they can understand how and why security considerations might arise in a proposed product. Second, the security team—particularly the chief information security officer (CISO)—must understand the rudiments of the business that the C-suite leads so that they can understand the potential business issues at stake. The CEO is responsible for making the trade-off between functionality and security, while the CISO’s role is to ensure that the trade-off is an informed one.

Senior leadership is unaccustomed to making trade-offs between product functionality and security. But it likely makes similar trade-offs with respect to certain other nonfunctional product attributes. Consider the issue of cost. No competent CEO would insist that a proposed functional requirement remain in place regardless of the cost of implementing it or the expected return on investment. A chief financial officer is expected to be sufficiently conversant with the business to provide useful advice and input on the financial ramifications of any particular product proposal.

For security to be truly regarded as a critical attribute of products, a business must sometimes forgo an aspect of product functionality to gain security benefits. Not all the time, since that would result in an entirely nonfunctional product (such as a rock), but some of the time. So a useful question to ask companies that claim to care about security is, “Please describe an instance in which product functionality was sacrificed for better security.” If the response is not unambiguously clear, beware of security hucksters peddling appearances over substance.

Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
