
Reducing Government Overclassification of National Security Information

Herb Lin
Thursday, February 16, 2023, 4:03 PM


Headquarters of the National Security Agency in Fort Meade, Maryland. (NSA, https://flic.kr/p/H4Kqu4; CC BY 2.0, https://creativecommons.org/licenses/by/2.0/)

Published by The Lawfare Institute
in Cooperation With
Brookings

Editor’s Note: This article is an expansion of the budget-based approach to classification that Lin presents in his Nov. 25, 2022, Boston Globe op-ed entitled “Want to make it ‘top secret’? Pay top dollar.”

Recent disclosures that President Joe Biden, former President Donald Trump, and former Vice President Mike Pence stored classified documents at home have shined a spotlight on what many people believe to be excessive government classification of information. (While many observers are arguing for a reevaluation of the classification process, at least one former government official has argued that current and former government officials have a responsibility to protect information designated as classified, whether that information is properly classified or not.)

The U.S. government system for classifying information is designed to protect sensitive information from falling into the wrong hands. Classified information is defined in Executive Order 13526—Classified National Security Information as government information that “could reasonably be expected to cause identifiable or describable damage to the national security” if improperly disclosed. Examples of such information could include secretly gathered information on a foreign nation’s nuclear weapons posture, the identities of U.S. intelligence agents, or clandestinely assembled profiles of foreign leaders. (Some such information has reportedly been contained in classified documents found at Trump’s Mar-a-Lago residence.) Classification of such information is often necessary to protect tradecraft and genuine state secrets.

The executive order also defines three categories of classified information: confidential, secret, and top secret. Information so classified is information whose unauthorized disclosure reasonably could be expected to cause damage (if classified as confidential), serious damage (if secret), or exceptionally grave damage (if top secret) to national security. At the same time, the order also specifies that classification shall not be used to “conceal violations of law, inefficiency, or administrative error”; “prevent embarrassment to a person, organization, or agency”; “restrain competition”; or “prevent or delay the release of information that does not require protection in the interest of the national security.” 

Nevertheless, individuals with the authority to classify information sometimes do so in violation of these principles for a number of reasons, such as maintaining secrecy or mystique, concealing misconduct or incompetence, adopting an overly cautious stance to avoid failing to protect sensitive information, or defaulting to secrecy as the result of other demands on the classifiers’ time and attention. 

Today, several million individuals hold clearances that grant access to classified information in the United States, a fact suggesting that the U.S. government and other parties generate and store an enormous volume of classified documents. For decades, government officials have concluded that there is too much classification of information and have expressed concern about a long-institutionalized tendency to overclassify nonsensitive information. 

For example, a 1956 report noted that the U.S. government system for classifying information exhibits “a tendency to ‘play safe’ and to classify information which should not be classified, or to assign too high a category to it. ... Further, there is a tendency to use the classification system to protect information which is not related to the national security.” In 1997, the bipartisan Moynihan-Combest commission found that the classification system “is used too often to deny the public an understanding of the policymaking process, rather than for the necessary protection of intelligence activities and other highly sensitive matters.” The 9/11 Commission found that overclassification is a threat to national security because it inhibits information sharing within the federal government and between the federal government and state and local agencies. Donald Rumsfeld noted in 2005 his long-held belief that “too much material is classified across the federal government as a general rule.” In 2013, Republican members of the House of Representatives Duncan Hunter and Martha Roby requested that the Government Accountability Office review the government’s classification systems and examine the degree to which material is classified even when such material does not impact national security. In the same year, Democratic Sen. Jeanne Shaheen called on the Obama administration to increase transparency by reducing the number of classified documents to reduce costs and to combat “a culture of secrecy that is antithetical to our democratic traditions and undermines public confidence in our institutions.” Even before these members of Congress officially voiced their concerns, the Public Interest Declassification Board, established by the implementing memorandum for Executive Order 13526, found that “present practices for classification and declassification of national security information are outmoded, unsustainable and keep too much information from the public.”

And the issue of overclassification has carried over to today. On Jan. 26, Director of National Intelligence Avril Haines noted that “[o]ver-classification undermines critical democratic objectives, such as increasing transparency to promote an informed citizenry and greater accountability,” undermines “the basic trust that the public has in its government,” and “negatively impacts national security.” 

In response to these criticisms, U.S. presidents and members of Congress have engaged in efforts to reform classification processes that reduce the amount of classified information produced and increase the rate at which classified information is officially declassified. The latest government effort was initiated last year by the National Security Council, “to determine how to overhaul the elaborate and often arbitrary classification system that Democrats and Republicans contend is undermining democracy and national security.”

How Classification Decisions Are Made

Whether this most recent effort will produce meaningful results remains to be seen. But it is worth examining why most previous government efforts have not resulted in an appreciable decrease in the production and retention of classified information. The explanation lies largely in how classification decisions are made. Based on a classification guide, someone with the appropriate authority designates an item of information as classified. That decision to classify is rarely challenged or even reviewed, and neither the classifier nor the agency for which the classifier works incurs any direct cost for a decision to classify something. Furthermore, the classifier rarely, if ever, suffers any negative consequences for such a decision. In short, acts of classifying information are a free good, and Economics 101 is quite clear that free goods are overused. Even worse, people are routinely punished for accidentally underclassifying information. The occurrence of a “data spill”—improperly transmitting classified information, including information marked at a lower classification than it should be—causes major headaches for those involved, and involvement in such incidents can count as adverse information that affects an individual’s ability to retain a security clearance.

Against this backdrop, it is easy to see the development of a mindset that says, “When in doubt, classify.” Mere exhortations directed at those responsible for classifying information to classify less are not likely to overcome the pro-classification incentives in place.

A Budget-Based Approach to Addressing Overclassification

What is needed is a powerful incentive on the side of less classification, and, in the U.S. government, an agency’s budget is the most powerful way to get that agency’s attention. If the act of classifying information were not free, the agency would have a significant incentive to reduce the number of decisions to classify information.

How much should an agency be charged for a decision to classify information? In fiscal year 2017 (the most recent year for which figures are available from the Information Security Oversight Office of the U.S. National Archives), the U.S. government made about 49 million decisions to classify information and spent about $18.4 billion on the classification system, which suggests a cost of about $370 per decision that designates information as classified. It is not, however, entirely clear what counts as a “decision to classify information.” Under the terms of Executive Order 13526, a decision to classify information should apply to a particular item of information, rather than a document that might contain many items of information, both classified and unclassified. On the other hand, it is easy to imagine that individuals using email on a network certified to handle secret information might simply mark an entire email with some level of classification, without going through the administrative hassle of marking individual paragraphs. In such cases, the “decision to classify information” might refer to the entire document.
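The arithmetic behind that per-decision figure is easy to reproduce. A quick sketch using the fiscal year 2017 numbers cited above:

```python
# Back-of-the-envelope cost per classification decision, using the FY2017
# figures from the Information Security Oversight Office cited above.
total_cost = 18.4e9   # dollars spent on the classification system
decisions = 49e6      # decisions to classify information

cost_per_decision = total_cost / decisions  # about $375, in the neighborhood
                                            # of the "$370 per decision" above
```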

For context, the Department of Defense’s budget authority for fiscal year 2017 was $606 billion, and the department accounts for the largest share of all national security spending by far. As a very rough estimate, the national security budget authority of other agencies, including the Departments of Energy, Homeland Security, Justice, and State, as well as the intelligence community, might account for another couple hundred billion dollars. These figures suggest that the classification system accounted for roughly 2 percent of national security expenditures.

This calculation assigns an equal value to all decisions to classify information as confidential, secret, or top secret. Viewed at the level of individual decisions, that’s clearly inappropriate—but from the perspective of the overall management of classification decisions, the dollar figure isn’t as important as the principle that an agency should incur a nontrivial cost for such a decision. Indeed, dollar figures could be assigned more or less arbitrarily on the basis of expert judgment. Agencies would be tempted to assign a very low dollar value to a decision to classify, but if they yielded to these temptations, they would be stating for the record that the value of classifying information was low—a concession that would cast serious doubt on their claims that disclosure of the information affected would damage national security.

I suggest an approach, drawn from my 2014 proposal, that creates serious economic incentives to reduce the volume of classified information produced. It rests on two principles:

  • First, classification should not be a free good, and a classification cost metric (CCM), described below, should be associated with any document containing information that is designated as classified.
  • Second, the agencies whose personnel actually make decisions about classification should benefit when the amount of classified information produced or retained is reduced. If implemented properly, this principle provides incentives for classification decisions that balance the value obtained from classifying information in any specific case against some cost associated with such classification. Furthermore, this principle drives the decision-making about classifying versus not classifying to the parties in the system that have the day-to-day responsibilities for such action.

A basic written document containing classified information is generally written as any other document would be, except that every paragraph, section heading, and figure has a specific classification associated with it and is marked as such. All information within a paragraph (or, in some cases, a portion of the paragraph, such as a bulleted item) is treated as classified at the paragraph’s marked level, even if only one particular piece of information within it is actually classified. (For those who have not seen such a document in real life, this link presents how Star Fleet Command might classify some basic facts about the U.S.S. Enterprise, NCC-1701-D, using the present-day classification system.)

Emails are a less formal type of document. In many cases, senders mark an entire message at a single overall classification level, based on intuition, without marking individual portions. This has the undesirable result that it is not at all clear which parts of the email are classified.

The CCM I propose is proportional to the number of words in each classified paragraph, weighted by the paragraph’s level of classification (most heavily for paragraphs marked “Top Secret,” least heavily for paragraphs marked “Confidential,” and zero for paragraphs marked as unclassified). This means that longer documents will have higher CCMs only if they contain more words in classified paragraphs; adding words to unclassified paragraphs wouldn’t change the CCM.  

This approach to calculating the CCM is easily automated once the proper weights have been determined. Scoring a document according to its CCM provides a way of judging the relative importance of different classified documents—a higher CCM means the document is more important from a classification perspective, and, thus, improper disclosure would be more consequential. 
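As a concrete sketch, a CCM of this kind is straightforward to compute mechanically. In the fragment below, the per-word weights are purely illustrative assumptions (actual values would be set by policy), and each paragraph carries a standard portion marking (U, C, S, or TS):

```python
# Illustrative CCM calculation. The per-word weights are assumptions for
# illustration only; actual weights would be set by policy.
WEIGHTS = {"U": 0, "C": 1, "S": 3, "TS": 10}  # hypothetical per-word weights

def ccm(paragraphs):
    """Compute the classification cost metric for a document.

    `paragraphs` is a list of (marking, text) pairs, where `marking` is the
    portion marking (U, C, S, or TS) applied to each paragraph.
    """
    return sum(WEIGHTS[marking] * len(text.split())
               for marking, text in paragraphs)

doc = [
    ("U",  "This overview paragraph is unclassified."),
    ("S",  "This paragraph contains secret information about sources."),
    ("TS", "This paragraph contains top secret signals intelligence."),
]
print(ccm(doc))  # 91: only the classified paragraphs contribute
```

Note that adding words to the unclassified paragraph leaves the score unchanged, exactly the property described above.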

The CCM can be used as the basis for limiting classification in two ways. First, the CCM could be used to establish a dollar value for each document. Thus, an actual budget associated with the production of classified information could be created and used to limit such production. Second, the CCM could be used to drive decisions about declassifying older documents. An office or agency could earn CCM credits toward classifying new documents by declassifying older documents. A new classified document could be issued only if sufficient credits had been accumulated from the declassification of old documents. 

To limit the production of classified information, each agency that produces classified information would need to establish a total budget for such production as a line item. The weightings above for paragraphs containing confidential, secret, and top secret information would be interpreted as dollar values per word in a classified paragraph, so that a document’s CCM is interpreted as its dollar value. The aggregate value of all classified documents produced in a fiscal year then becomes an expense that the entity must cover with its budget allocation for that year.

The last step is to compare the aggregate value of all classified documents in a given fiscal year to the line item for the production of classified information. At the end of the fiscal year, if the total classification cost of all classified documents produced is below the entity’s budget allocation, the office would be allowed to keep a fraction β of the cost underrun in the next fiscal year for discretionary but office-related purposes. In the simplest case, β = 1; that is, the office would be able to keep the entire cost underrun for the next fiscal year. Of course, Congress would have to agree to allow such an arrangement. But Congress could also treat such surpluses as reprogramming of existing budget authority, something it routinely permits below certain thresholds, subject to congressional approval and oversight.
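A minimal sketch of this year-end comparison, in which the budget figures and the retained fraction β are hypothetical inputs:

```python
# Year-end settlement sketch for a classification budget line item.
# The dollar figures and the retained fraction beta are hypothetical.
def year_end_settlement(budget, classification_cost, beta=1.0):
    """Return (carryover, overrun) in dollars for the fiscal year.

    carryover: the fraction beta of any underrun the office keeps next year.
    overrun:   any excess the office must explain or cover from elsewhere.
    """
    if classification_cost <= budget:
        return beta * (budget - classification_cost), 0.0
    return 0.0, classification_cost - budget

# An office with a $1,000,000 line item that produced $800,000 worth of
# classified documents keeps the full $200,000 underrun when beta = 1.
carry, over = year_end_settlement(1_000_000, 800_000, beta=1.0)
```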

If, by contrast, the agency exceeds the line item, it will need to explain the overrun to Congress or it will have to take money from other parts of its budget to cover the excess cost. That would provide significant incentives for the agency to stay within its budget and, thereby, to decrease the number of its decisions to classify information. 

Perhaps the best way to understand the use of a dollarized CCM is that it is a tax on the production of classified information and thus provides a clear incentive to produce less such information.

To support the declassification of older documents, an agency should be permitted to earn credits for the classification of current information by declassifying older classified documents. This would require the agency to establish a suitable exchange ratio R between old and new documents, with the CCM serving as the basis for operationalizing the use of this ratio. For example, if the exchange ratio R is 10, the classification of one new document with a CCM score of 10,000 (using the original CCM, that is, not interpreted as a dollar value) would require the declassification of other older classified documents with a total CCM score of 100,000. 

The approach of trading the declassification of older documents for the right to classify new ones could also be used as a hedge against overruns of the classification budget. Specifically, an agency could prepare for potential overruns by making an intensive effort to declassify old documents. The CCM scores for these documents could be totaled and put into an account against which future classified documents would be charged (at the appropriate exchange ratio) in the event of budget overruns.
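The credit mechanism might be sketched as a simple account, using the illustrative exchange ratio R = 10 and the CCM scores from the example above (the class and its figures are hypothetical):

```python
# Sketch of a declassification-credit account, assuming the exchange ratio
# R = 10 from the example above. All names and figures are illustrative.
class CreditAccount:
    def __init__(self, ratio):
        self.ratio = ratio    # old CCM points required per new CCM point
        self.balance = 0      # CCM points banked by declassifying old documents

    def declassify(self, old_ccm):
        """Bank credits by declassifying an old document with the given CCM."""
        self.balance += old_ccm

    def classify_new(self, new_ccm):
        """Charge a new classified document against the account, if covered."""
        cost = self.ratio * new_ccm
        if self.balance < cost:
            return False      # insufficient credits; document cannot be issued
        self.balance -= cost
        return True

acct = CreditAccount(ratio=10)
acct.declassify(100_000)      # declassify older documents totaling CCM 100,000
acct.classify_new(10_000)     # True: 10 * 10,000 credits are available
```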

Moving Forward

The above description of a cost-based approach to reducing the volume of classified information contains the bare bones of a proposal; many other nuances and answers to questions about the scheme are contained in my aforementioned proposal article in the Journal of National Security Law and Policy. 

As a matter of good management practice, no program, especially one with broad-ranging and significant effects on national security, should be adopted on a wide scale without extensive testing of the ideas and assumptions underlying it.

A reasonable but quite modest first step might entail the collection of data from offices and agencies about the actual volume of classified information they produce. Relatively simple computer programs could count the number of words contained in classified paragraphs and produce aggregate measures. Such information is not easily available today but would be a logical measure to take before enacting any approach that targets overclassification. 

A second step would be to implement this approach in a small number of agencies or offices simply as a scoring mechanism, with no explicit dollar costs. Without the dollar costs, of course, there are no explicit incentives for changing behavior. But with document scores available, an agency or office would have an increased awareness of the distribution of costs across documents and document producers.

A third step would be to determine plausible values for the cost per word contained in classified paragraphs without actually imposing any of the budget constraints described in the above proposal. This would provide policymakers with a sense of the fiscal stakes without having to impose hard budget limits.

After piloting this approach in a few agencies or offices (in full, with real budgetary consequences as described above), policymakers could assess its success and hopefully promulgate it more widely to make the costs of classifying information visible to the day-to-day decision-makers. In so doing, it would enable agencies and policymakers to better focus protection on the documents that are most sensitive by minimizing the restriction of nonsensitive information. Most importantly, tangible monetary incentives for agencies and offices to exercise restraint in classifying information could finally provide meaningful incentives to address the ongoing problem of governmental overclassification.


Dr. Herb Lin is senior research scholar for cyber policy and security at the Center for International Security and Cooperation and Hank J. Holland Fellow in Cyber Policy and Security at the Hoover Institution, both at Stanford University. His research interests relate broadly to policy-related dimensions of cybersecurity and cyberspace, and he is particularly interested in and knowledgeable about the use of offensive operations in cyberspace, especially as instruments of national policy. In addition to his positions at Stanford University, he is Chief Scientist, Emeritus for the Computer Science and Telecommunications Board, National Research Council (NRC) of the National Academies, where he served from 1990 through 2014 as study director of major projects on public policy and information technology, and Adjunct Senior Research Scholar and Senior Fellow in Cybersecurity (not in residence) at the Saltzman Institute for War and Peace Studies in the School for International and Public Affairs at Columbia University. Prior to his NRC service, he was a professional staff member and staff scientist for the House Armed Services Committee (1986-1990), where his portfolio included defense policy and arms control issues. He received his doctorate in physics from MIT.
