A ‘Window Sticker’ for Software
How buyers can use performance measures to drive better security in software products.

In June 2017, a cyberattack known as NotPetya compromised the Ukrainian tax accounting software M.E.Doc and used its update mechanism to infect victim machines with destructive malware, which spread rapidly and incapacitated numerous global companies, including Maersk, causing billions of dollars in damage. Cybersecurity professionals believed the attack would serve as a wake-up call about the risks of vulnerable software.
And yet, more than eight years later, threat actors continue to exploit security weaknesses in software to gain access to customers' networks and data. In July 2025, a previously unknown ("zero-day") software flaw in Microsoft SharePoint was reportedly exploited to compromise a number of U.S. federal and state agencies, universities, and energy companies.
Multiple forms of best-practice guidance have been released in recent years in an effort to combat these attacks. The issue is whether they have been applied effectively. A lack of threat-informed design can skew prioritization and lead to a false sense of security. Likewise, best-practice adoption in name only (i.e., pro forma), legacy code complexity, and product-by-product variance can complicate the mapping of theory to practice.
Car buyers get to see a window sticker—known as a Monroney sticker—when making purchasing decisions. Software buyers could benefit from their own “window sticker” when making purchasing decisions so that, just like with a car, they can see different “crash test ratings,” the origin of parts, and which features are available, either standard or as an option. This article explores what such a “window sticker” might look like in the context of software products often exploited by malicious actors.
Historical Context
Disclosures to buyers are a staple of modern commerce. In 1958, frustrated by deceptive practices in automobile sales, Sen. Mike Monroney (D-Okla.) sponsored the Automobile Information Disclosure Act, which mandated disclosures about basic vehicle information. The window stickers visible on cars at auto dealers around the country are known as Monroney stickers, and their contents have grown over time—for example, through the inclusion of fuel efficiency and five-star safety ratings. These safety ratings include information such as crash test results, and in 2024 the U.S. National Highway Traffic Safety Administration finalized significant updates to incorporate driver assistance technologies. Figure 1 shows a Monroney sticker.
Figure 1. Sample Automobile Information Disclosure Act disclosure (Monroney) sticker.
Recently, important software-related transparency initiatives such as a software bill of materials have been introduced. What’s different about the “window sticker” concept is that it distills multiple best-practice frameworks and verification information into a single, repeatable view of a product’s overall security performance.
This brings practical context to specific security assertions by suppliers and thereby informs trade-off analysis in buying decisions and rip-and-replace decisions. Moreover, as suppliers understand that these measures are being repeatedly evaluated by buyers, they are likely to more actively manage how they perform against them—improving overall quality of products.
Design Principles
The aim of this dashboard is to bring together best-practice assertions and technical observability. In the wake of the 2020 compromise of SolarWinds software as part of the SUNBURST campaign, the U.S. government launched a wide-ranging effort to provide guidance and tools to help software vendors better understand and manage software risks. A proliferation of guidance has ensued. In February 2022, the U.S. National Institute of Standards and Technology (NIST) finalized Special Publication 800-218, the Secure Software Development Framework (SSDF), and a recent executive order has directed that it be updated. In 2023, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) released a Secure by Design framework. NIST also defined minimum verification requirements for software developers.
One important point of framing is to distinguish enterprise from product security. When companies perform due diligence on their suppliers, they often focus on enterprise security measures, as noted in recent guidance from CISA. Although enterprise security is important, customers also need to focus on how a provider approaches security for a product (i.e., software application). Enterprise security refers to practices to protect a company’s own infrastructure and operations, while product security refers to actions the software provider takes to ensure the products they deliver are secure against attackers.
As noted in the Secure by Design framework, good software security practices fall into two broad categories: (a) application hardening (i.e., how applications have been built to minimize the risk that threats and vulnerabilities could later be exploited), which is the primary focus of the NIST SSDF; and (b) application security features (i.e., how features align with or improve a customer's security posture, for example, through flexibility in integrating with existing customer identity and access control systems, or provision of detailed logging to support investigations when incidents do occur). Default settings (i.e., secure "out of the box" configurations) should also be considered. These practices are implemented across a life cycle of requirements-setting and design, code development, testing, release, and post-release updates for newly identified vulnerabilities.
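To make the distinction concrete, the following sketch (in Python, with entirely hypothetical configuration keys) shows how a vendor or buyer might check that a product ships with secure defaults "out of the box." It is an illustration of the idea rather than a reference implementation.

```python
# Illustrative sketch only: the configuration keys below are hypothetical,
# not drawn from any specific product. It shows how a buyer (or a vendor's
# release pipeline) might verify that shipped defaults are secure out of the box.

SECURE_DEFAULTS = {
    "mfa_required": True,              # application security feature: strong authentication
    "audit_logging_enabled": True,     # supports incident investigations
    "sso_integration_enabled": True,   # integrates with customer identity systems
    "legacy_protocols_enabled": False, # hardening: risky options off by default
}

def check_defaults(shipped_config: dict) -> list[str]:
    """Return a list of settings that deviate from the secure baseline."""
    findings = []
    for setting, expected in SECURE_DEFAULTS.items():
        actual = shipped_config.get(setting)
        if actual != expected:
            findings.append(f"{setting}: expected {expected}, shipped {actual}")
    return findings

if __name__ == "__main__":
    # Example: a hypothetical product configuration as it ships to customers.
    shipped = {
        "mfa_required": False,
        "audit_logging_enabled": True,
        "sso_integration_enabled": True,
        "legacy_protocols_enabled": True,
    }
    for finding in check_defaults(shipped):
        print("Insecure default:", finding)
```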
Challenges
The above frameworks mostly reflect what providers say about their product security. And there are (at least) four associated challenges:
Pro forma adoption. To be broadly applicable, recent software security best practices tend to be described at a certain level of abstraction. The result is that software product companies can claim to have implemented security practices when adoption has in reality been only superficial. While more rigorous standards exist, they tend to be specific to certain technology product types—such as IEC 62443-4-1 for industrial automation technology—and adoption is not mandatory. Responsible companies will make a good-faith effort for both reputational and regulatory reasons (e.g., False Claims Act exposure for federal vendors), but practices can be implemented unevenly.
Legacy code complexity and code constituency. It is much easier to implement best practices in newly developed software, so a persistent challenge is how to retrofit product security best practices into legacy software. And legacy software can be highly complex, often comprising tens to hundreds of millions of lines of code in multiple languages. Likewise, nearly all code today includes someone else's code directly, executes in someone else's environment (e.g., the cloud), or relies on someone else's code for proper operation (e.g., remote APIs). That means the software supply chain security problem extends back through dozens or even hundreds of upstream software suppliers, each of which must in turn manage the same risk with its own next-hop suppliers.
Product-by-product variance. Many software companies have been adhering to good product security practices based on risk-informed corporate decision-making, but mergers and acquisitions mean that a company may acquire products from firms where such practices were implemented less rigorously. Thus, evaluation generally needs to occur on a product-by-product rather than company-wide basis. Likewise, certain product security features may be available only as optional add-ons for an additional price.
Threat and risk. In a world where risk can never be eliminated and security choices must be made with limited resources, threat-informed product security is critically important to ensuring that software is built to be defensible against likely threats. Which threat categories should buyers worry about?
A baseline list of threat categories was articulated by the Enduring Security Framework, a public-private partnership managed by the Department of Defense, CISA, and other government agencies, in its 2022 Securing the Software Supply Chain Recommended Practices Guide for Developers:
- Adversary intentionally injecting malicious code or a developer unintentionally including vulnerable code within a product.
- Incorporating vulnerable third-party source code or binaries within a product either knowingly or unknowingly.
- Exploiting weaknesses within the build process used to inject malicious software within a component of a product.
- Modifying a product within the delivery mechanism, resulting in injection of malicious software within the original package, update, or upgrade bundle deployed by the customer.
Thus, a more complete view of a software product’s security state will include technical observability to illuminate whether software addresses these threats and challenges.
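As a rough illustration of how those threat categories might be tied to such technical observability, consider the sketch below (in Python). The mapping is illustrative only, reflecting this article's framing rather than any authoritative standard, and the technique names anticipate the "performance view" approaches described later.

```python
# Illustrative sketch only: pairs each of the four ESF threat categories with
# the kinds of technical evidence (discussed later in this article) a buyer
# could ask a supplier to provide. The mapping is this author's framing, not
# an authoritative standard.

THREAT_TO_EVIDENCE = {
    "malicious or vulnerable first-party code": ["binary analysis", "penetration testing"],
    "vulnerable third-party components": ["software composition analysis", "SBOM review"],
    "compromised build process": ["SLSA provenance", "build attestations"],
    "tampered delivery mechanism": ["artifact signing checks", "binary analysis"],
}

def coverage_gaps(evidence_provided: set[str]) -> list[str]:
    """Return threat categories for which no supporting evidence was supplied."""
    return [
        threat
        for threat, accepted in THREAT_TO_EVIDENCE.items()
        if not evidence_provided.intersection(accepted)
    ]

if __name__ == "__main__":
    # Example: a supplier provided an SBOM and SCA results, but nothing about its build.
    supplied = {"software composition analysis", "SBOM review"}
    print("Unaddressed threat categories:", coverage_gaps(supplied))
```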
Applying Design Principles to a Buyer’s Guide
So how do these design principles overlay onto a buyer’s guide? Figure 2 illustrates what a potential software security and safety “sticker” could look like.
Figure 2. Sample software safety and security disclosure sticker.
The key elements of the structure follow below.
Left-Hand Column—Process View
The top four categories in the left-hand column (Product Security Attestations/Pledges, Enterprise Security, Threat Modeling, and Supply Chain Risk Management) reflect best-practice-related assertions from the software providers—focused mostly on application of security processes (as opposed to effective implementation of those practices). SSDF is a relatively new framework, so it would not be surprising to see SSDF adoption as “in process.”
Notwithstanding that this article is focused on product security, enterprise security is still highly relevant to product security: A weakness in the former can be used as a stepping stone to compromise product infrastructure. This is expressly called out in both CISA's Secure by Design framework and the Supply Chain Levels for Software Artifacts (SLSA) framework, each of which emphasizes the importance of enterprise security standards such as CISA's Cybersecurity Performance Goals (CPGs) and the Center for Internet Security's Critical Security Controls (CSC) in protecting the build environment. Likewise, "product security" for cloud systems involves both development and deployment scenarios, with the latter being particularly dependent on enterprise security.
There are a few more targeted questions that can be asked to understand the provider's approach to threat-informed defense. First, is threat modeling performed on the product and its supporting infrastructure? In the U.S. Department of Defense's 2019 DevSecOps Reference Design guidance, which describes the key design components and processes needed to instantiate a secure software factory, threat modeling is the very first step of the first phase, the planning process. One recognized software-centric threat modeling approach is STRIDE, a model for identifying software threats originally developed by Microsoft researchers; the name is an acronym for six threat categories (spoofing identity, tampering with data, repudiation, information disclosure, denial of service, and elevation of privilege). STRIDE-LM extends the model with a seventh category, lateral movement. In terms of threats to software development infrastructure, the MITRE Corporation's Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework is the most comprehensive, authoritative mapping of threat actors to tactics, techniques, and procedures (TTPs) openly available today. More recently, a team of experts from leading software companies has developed the Open Software Supply Chain Attack Reference (OSC&R) framework. Similar to ATT&CK, OSC&R documents the life cycle of attacks against software supply chains.
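For readers unfamiliar with what a STRIDE exercise produces, the sketch below (in Python) enumerates candidate threats for a hypothetical "update service" component and flags categories with no recorded mitigation. The component, threats, and mitigations are invented for illustration; real threat modeling is far more detailed.

```python
# Illustrative sketch only: a toy STRIDE-style enumeration for a hypothetical
# "update service" component. Real threat modeling methods and tools go far
# deeper; this simply shows the shape of the exercise.

STRIDE = [
    "Spoofing identity",
    "Tampering with data",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

# Hypothetical candidate threats for the update service, one per STRIDE category.
UPDATE_SERVICE_THREATS = {
    "Spoofing identity": "Attacker impersonates the update server to clients",
    "Tampering with data": "Update package modified in transit or at rest",
    "Repudiation": "No signed logs proving which build was published",
    "Information disclosure": "Update metadata leaks internal hostnames",
    "Denial of service": "Update endpoint flooded, blocking security patches",
    "Elevation of privilege": "Updater runs as admin and loads unsigned plugins",
}

def untreated_threats(mitigations: dict[str, str]) -> list[str]:
    """List STRIDE categories with an identified threat but no recorded mitigation."""
    return [cat for cat in STRIDE if cat in UPDATE_SERVICE_THREATS and cat not in mitigations]

if __name__ == "__main__":
    mitigations = {
        "Tampering with data": "Packages signed; signatures verified before install",
        "Spoofing identity": "Server certificate pinning in the update client",
    }
    print("Threats still untreated:", untreated_threats(mitigations))
```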
Second is the question of supply chain cybersecurity. Cybersecurity experts also worry about software components being developed by internal groups or suppliers located in countries that harbor ill will toward the countries where buyers operate. Government authorities have grown increasingly concerned that software produced by research and development (R&D) teams physically located in adversary countries, and thereby subject to the jurisdiction and direction of hostile governments, poses an undue risk of product subversion. The concern is reflected in a number of regulations issued in the final days of the Biden administration, including rules on vendor access to bulk U.S. personally identifiable information (PII), connected vehicle software, and cybersecurity labeling of Internet of Things devices. Of course, it is also important to acknowledge some realities here: However much companies restrict R&D locations, if developers use open-source code (as most do), there is a decent chance that at least some contributors to those open-source projects are from these same countries, as reflected in this August 2025 report.
Middle Column—Performance View (Direct Technical Observability)
Given the above-described threats and practical challenges, it is increasingly important for buyers to have options for greater transparency and direct technical observability into software product security performance. The middle column reflects verifiable aspects of product security.
There are five main approaches:
- Software composition analysis (SCA) tools analyze code to create inventories of software libraries (generally limited to open source), dependencies, and related vulnerabilities.
- Complex binary file analysis examines software in its final packaged form. Unlike SCA, binary analysis tools look for indications of threat behavior such as embedded malware or tampering. Binary analysis also attempts to discover all open-source and commercial third-party software components embedded inside a software package, as well as hard-coded secrets.
- Penetration testing can be thought of as testing whether a configuration you believe to be secure actually holds up against attack. Application-related penetration testing comes in different flavors, including testing specific to web and mobile applications, and seeks to identify exploitable vulnerabilities in the product.
- Software bills of materials (SBOMs) are, as per CISA, formal records containing the details and supply chain relationships of various components used in building software. When vulnerabilities are later discovered in a given component, SBOMs can be used to identify software that is affected by the vulnerable component. Moreover, the European Union’s Cyber Resilience Act (CRA) requires providers to create SBOMs (and implement related Secure by Design practices), and its main obligations will take effect in December 2027.
- Supply Chain Levels for Software Artifacts (SLSA) framework. To the extent that SBOMs are considered "ingredient labels" for software, the SLSA Version 1.0 framework provides the "food safety handling guidelines" that build confidence in the build environment and hence in the integrity of the ingredient list (akin to clean factory protocols and tamper-proof seals). SLSA is an industry-developed specification for characterizing and measuring the security of software supply chain infrastructure and for generating provenance to support security attestations that can be independently verified.
For software composition analysis, binary analysis, pen testing, and software bill of materials analysis, acquirers might not only request evidence of these activities from software providers but also have the option of independently conducting these evaluations or verifying assertions because none are dependent on source code access.
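As a simple illustration of that kind of independent verification, the sketch below (in Python) parses a supplier-provided SBOM in CycloneDX JSON format and flags components that appear on a watch list. The file name and watch-list entries are hypothetical; in practice the watch list would be drawn from a vulnerability feed such as NVD or OSV.

```python
# Illustrative sketch only: reads a CycloneDX-format SBOM (JSON) and flags
# components that appear on a watch list. The watch list here is hypothetical;
# a real check would query a vulnerability database instead.

import json

# Hypothetical "known bad" component versions for demonstration purposes.
WATCH_LIST = {
    ("log4j-core", "2.14.1"),
    ("openssl", "1.1.1k"),
}

def flag_components(sbom_path: str) -> list[str]:
    """Return human-readable findings for SBOM components on the watch list."""
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)
    findings = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "")
        if (name, version) in WATCH_LIST:
            findings.append(f"{name} {version} (purl: {component.get('purl', 'n/a')})")
    return findings

if __name__ == "__main__":
    # Example usage against a supplier-provided SBOM file (hypothetical path).
    for finding in flag_components("product-sbom.cdx.json"):
        print("Component of concern:", finding)
```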
The middle column also reflects input from enterprise security ratings services such as Black Kite, BitSight, SecurityScorecard, and RiskRecon.
Right-Hand Column—Practice View
The right-hand column enumerates application hardening and application security features that have been deemed particularly impactful by U.S. government and authoritative industry sources, and whether they are included as a standard feature or are optional. In 2024, CISA announced a Secure by Design Pledge, whereby signatories pledged to implement a subset of particularly impactful Secure by Design practices. CISA has also referenced Minimum Viable Secure Product (MVSP) considerations, which are a list of essential application security controls that should ideally be implemented in enterprise products and services. MVSP is based on the experience of contributors in enterprise application security across a range of companies and driven by Dropbox’s Vendor Security Model Contract and Google’s Vendor Security Assessment Questionnaire.
The column also includes whether providers offer Security Health Self-Check Features. One of the biggest challenges in working with cloud service providers is ensuring that cloud technology is implemented securely by the buyer—in other words, minimizing user error within customer information technology teams. Tracking cloud asset posture can increasingly be achieved through already-included “look yourself in the mirror” features from cloud providers, for example, Microsoft’s Secure Score. Some cloud systems such as Google Cloud Platform have also started to map posture to controls frameworks published by NIST and the Center for Internet Security.
Providers would have flexibility to highlight additional or different features over time (much as Monroney sticker content has evolved). The right-hand column could be expanded to include features specific to cryptography and artificial intelligence (AI): for example, whether there is an inventory of cryptographic algorithms (useful when algorithms must be replaced as quantum computing renders current ones insecure), and whether there are protections against disclosure of user data to AI large language models. Likewise, it might also contain disclosures around end-of-life factors (e.g., dates after which the product will no longer be supported). Such a "sticker" could itself be digitally signed to protect against fake labels.
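To show what a digitally signed "sticker" might look like in practice, the following sketch (in Python, using the widely available cryptography package) represents a hypothetical sticker as JSON, signs it with an Ed25519 key, and verifies the signature on the buyer's side. All field names are invented for illustration, and a real scheme would also need to handle key distribution and trust.

```python
# Illustrative sketch only: represents a hypothetical "window sticker" as JSON
# and signs it so buyers can detect fake or altered labels. Field names are
# invented; signing uses the "cryptography" package (pip install cryptography).

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A minimal, hypothetical sticker covering the three columns described above.
sticker = {
    "product": "ExampleSuite 12.3",
    "process_view": {"ssdf_adoption": "in process", "threat_modeling": "STRIDE-LM"},
    "performance_view": {"binary_analysis_malware_found": False, "sbom_available": True},
    "practice_view": {"mfa_default": True, "detailed_logging": "optional add-on"},
}

# The vendor signs the canonicalized sticker; in practice the public key would
# be published or certified so buyers can verify it independently.
payload = json.dumps(sticker, sort_keys=True).encode("utf-8")
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(payload)

# Buyer-side verification: raises InvalidSignature if the sticker was altered.
public_key = private_key.public_key()
try:
    public_key.verify(signature, payload)
    print("Sticker signature verified.")
except InvalidSignature:
    print("Warning: sticker has been tampered with or is not authentic.")
```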
***
How should such a guide be interpreted? At a minimum, buyers would want to see illumination on (a) threat modeling (i.e., is there any?), ideally using an authoritative threat framework, as the foundation for product security; (b) best-practice product and enterprise security framework adoption, even if characterized as "in process"; (c) assurance that no malware, critical or high-severity vulnerabilities, or leaked secrets were detected in code at the time of shipment, based on technical analysis that buyers can independently verify at their option (although SBOMs and SLSA assertions may not yet always be available); and (d) enumeration of the application hardening and security features deemed particularly impactful by authoritative references.
Of course, product security due diligence falls within a broader life cycle of supply chain risk management, including inherent risk analysis, contractual terms, and continuous monitoring. Likewise, blind spots in some due diligence areas could potentially be addressed by elevated risk management in others—a lack of detail on SSDF conformance or threat modeling could necessitate increased binary and SBOM analysis, pen testing, and heightened continuous monitoring.
While software providers may be reluctant to voluntarily disclose the factors enumerated above en masse, nothing prevents buyers from creating their own “window sticker” equivalents by consistently applying these factors in their own due diligence processes. Moreover, as suppliers see these measures consistently applied in buyer due diligence, they are likely to more actively manage performance against them, improving the overall state of software security.