Lessons From 1955: A Framework for Navigating Technological Change

Kevin Frazier
Tuesday, July 8, 2025, 10:08 AM
A 1955 congressional report on automation offers a blueprint for AI governance: Focus on economic growth, human development, and adaptation.
American flags frame a view of the Capitol building (Photo: Sgt. Matt Hecht/Rawpixel, https://www.rawpixel.com/image/3578651, CC0 1.0)

In October 1955, as the American economy hummed with postwar prosperity and factories across the nation installed increasingly sophisticated machinery, Rep. Wright Patman (D-Texas) convened a congressional investigation. The Subcommittee on Economic Stabilization’s nine days of hearings on “automation and technological change” produced a report that reads today like a blueprint for managing artificial intelligence (AI)—a framework that warrants renewed consideration as policymakers struggle to come up with a positive vision for AI’s development and diffusion.

The close parallels between 1955’s automation inquiry and today’s consideration of the promises and perils of AI warrant a deep dive into the committee’s work. Witnesses then marveled at electronic computers capable of processing insurance claims, manufacturing machines that could operate with minimal oversight, and “thinking robots” that promised to revolutionize everything from weather forecasting to television repair. Similar sentiments pervade modern AI headlines.

Today, large language models write legal briefs, computer vision systems diagnose medical conditions, and chatbots handle customer service inquiries—the latest iteration of humanity’s ongoing effort to develop, adopt, and understand intelligent machines.

What makes the 1955 report extraordinary is not merely its prescience but its comprehensive framework for technological governance. Rather than focusing narrowly on the technology itself, the committee examined automation’s implications across multiple dimensions: economic growth, employment patterns, educational systems, community development, government responsibility, and social cohesion. Their 11 recommendations (condensed to seven here given overlapping themes among them) offer a clear guide to adaptive governance—tech-agnostic principles that address the fundamental challenges of managing changes brought on by innovation.

Study of—and perhaps adherence to—this historical framework takes on particular urgency today as Congress weighs a 10-year federal moratorium on a wide range of state AI regulations. Critics have, with good reason, faulted such preemptive legislation for failing to articulate a coherent federal alternative—clearing the field without offering a game plan or, according to some, effectively guaranteeing a 10-year regulatory vacuum in light of Congress’s recent penchant for inaction. 

The 1955 recommendations provide what current debates lack: a comprehensive approach that balances innovation with social responsibility. As AI reshapes industries from journalism to radiology, contemporary policymakers have largely abandoned this holistic approach. Many of the more than 1,000 AI-related bills pending before state legislatures and Congress demonstrate that current AI governance suffers from myopia: an excessive focus on hypothetical long-term risks or on dictating the proper uses of AI—a challenge properly left to social norms rather than legal mandates. Meanwhile, lawmakers neglect how best to respond to immediate employment disruptions and generally forgo study of adaptive regulatory frameworks. The policy discourse taking place on the Hill and in state capitals frequently devolves into simplistic pro- or anti-technology positions instead of grappling with a complex regulatory puzzle. The 1955 framework provides a nuanced path forward.

While this framework is not a perfect fit for all AI challenges—it does not directly address contemporary concerns about disinformation (though perhaps overblown) or algorithmic bias, for example—it offers a foundation that could guide federal AI policy toward more constructive ends. Rather than simply preventing state action, Congress could deploy these principles to create a proactive federal framework that addresses AI’s economic and social disruptions while preserving space for technological innovation.

Recommendation 1: The Foundation of Economic Dynamism

The committee’s primary recommendation was deceptively simple: Maintain “a good, healthy, dynamic, and prospering economy” to ensure displaced workers could find alternative employment. This insight reflects a fundamental truth about technological change that contemporary policy discussions often overlook: Innovation results in lasting disruption primarily when overall economic opportunity contracts. When the economy expands, by contrast, lawmakers can more readily ease workers through temporary disruptions.

Significant historical evidence supports this principle. The automation wave of the 1950s and 1960s coincided with one of the strongest periods of economic growth in American history, enabling millions of workers to jump from manufacturing to service industries. Conversely, technological changes during economic downturns—like the computerization that accelerated during the recessions of the early 1980s and 1990s—created more persistent displacement and social tension.

Current AI policy has largely inverted these priorities. Rather than ensuring that AI deployment occurs within a context of robust job creation, policymakers focus primarily on restriction and regulation. This approach risks repeating historical mistakes. The Luddites of early 19th-century England opposed textile machinery not because they were anti-technology per se but because technological change occurred during a period of economic stagnation and social disruption. Their concerns proved prophetic in the short term: Mechanization did eliminate traditional textile jobs and create genuine hardship for affected workers. However, the broader economic expansion that machinery enabled ultimately created far more opportunities than it destroyed.

An AI policy in line with the 1955 recommendations would prioritize removing barriers to entrepreneurship, reducing regulatory complexity for small businesses, and ensuring that AI tools enhance rather than replace human productivity across the economy. Put differently, lawmakers should collaborate with the engines of our economy—small and medium-sized businesses—to learn what they need to increase demand, spur productivity, and, crucially, retain and even grow their workforce. This might include tax incentives for companies that use AI to create new products and services rather than simply reducing headcount, streamlined permitting processes for AI-enabled startups, and public investment in infrastructure that supports AI-driven innovation.

Recommendation 2: Human Development as Technological Complement

The committee’s vision of meaningful employment, expanded educational access, and locally tailored workforce programs recognized that technological progress requires parallel investments in human development. These recommendations reflected a well-established lesson: Technological change aligns with the public interest when accompanied by systematic efforts to help workers adapt and thrive.

Contemporary AI policy has largely failed on these fronts. Federal workforce development programs remain fragmented across multiple agencies with minimal coordination. The Workforce Innovation and Opportunity Act, which served as the primary federal workforce legislation until Congress failed to reauthorize it last year, and related federal and state job training programs were designed for an economy where workers typically needed retraining once or twice in their careers. AI-driven change demands more flexible, continuous learning systems that can adapt rapidly to shifting skill requirements.

The 1955 emphasis on local program development proves particularly relevant for AI workforce policy. Different communities face distinct AI-related challenges based on their economic structure, educational resources, and demographic characteristics. Rural areas might benefit from AI-enabled remote work opportunities, while urban centers grapple with service-sector automation. Manufacturing regions could leverage AI to reshore production, while knowledge work centers must adapt to AI-augmented professional services.

Yet current policy discussions remain largely divorced from these local realities, focusing instead on abstract national frameworks. President Trump’s executive order on incorporating AI into public education, for example, relies on a body of federal officials to develop challenges for local students and educators to pursue. Though this executive order marks a positive step toward increasing AI competency across the U.S., it could benefit from a structure that more actively solicits input from officials attuned to the needs of their students and local communities.

An effective AI workforce policy would empower states and localities to experiment with different approaches to accelerating AI adoption across key industries and the local workforce while providing federal coordination and resources. This might include portable training accounts that workers can use across multiple career transitions, regional innovation hubs that help traditional industries integrate AI productively, and community college partnerships with AI companies to develop locally relevant curricula and corps of AI literacy workers who can educate their neighbors. Instead, the Department of Labor has closed the Job Corps centers tasked with similar work.

Recommendation 3: Government Leadership and Strategic Investment

The committee urged the government to lead by example in managing technological transitions while investing heavily in retraining and upskilling programs. Put differently, this recommendation calls on the federal government to adopt best practices for helping its employees adapt to a new technological era, with the hope that private employers would follow suit in taking care of their displaced employees. This vision of proactive, evidence-based governance contrasts sharply with current AI policy’s largely reactive character.

Federal agencies have been slow to examine AI’s impact on their own workforce, let alone develop comprehensive strategies for supporting displaced workers in the broader economy. The Office of Personnel Management has produced minimal guidance on AI adoption in federal employment. The Department of Labor took nearly two years after the introduction of ChatGPT to release its best practices for aligning AI adoption with job quality and worker well-being. And the recent Office of Management and Budget memos—which direct agencies to pursue AI maturation—lack thorough consideration of how this pursuit may disrupt federal employees who could soon be forced to look for new work as their jobs become redundant.

This governmental passivity reflects deeper institutional challenges. The 1955 committee operated during an era when federal agencies possessed a greater willingness to engage in long-term planning. The report called on Congress to legislate on a three- to five-year time horizon, if not longer. Such a forward-thinking outlook is less common on the Hill today: Congress legislates by continuing resolution far more often than it demonstrates a continued resolve to think about the well-being of future Americans.

Government investment in proven retraining and upskilling programs to aid its own employees affected by automation represents perhaps the most concrete area where current policy could immediately adopt the 1955 principles. The committee’s emphasis on supporting proven programs reflects important lessons about workforce development effectiveness. Not all retraining succeeds equally: Programs that combine technical skills with workplace readiness, involve employer partnerships, and provide ongoing support tend to produce better outcomes than purely academic or purely technical approaches.

Current federal workforce programs suffer from inadequate evaluation and limited scaling of successful models. The Trade Adjustment Assistance program, designed to help workers displaced by international trade, could provide a framework for AI displacement assistance, but it requires significant expansion and adaptation. Similarly, apprenticeship programs that combine AI-related technical skills with traditional trades could create pathways for workers in declining industries. As things stand, however, the federal government seems unlikely to heed the advice of the subcommittee by providing a template for leaning into innovation while also taking care of the well-being of its workforce.

Recommendation 4: Evidence-Based Policy Through Enhanced Data Collection

The committee’s call for investment in economic analysis and labor market studies with wide sharing of insights anticipated the data-driven policy revolution that has transformed fields from public health to criminal justice. However, AI policy suffers from a lack of systematic data collection and analysis.

The U.S. lacks comprehensive, standardized metrics on which jobs are most vulnerable to AI automation, which retraining programs prove most effective, and which communities face the greatest transition challenges. Without this foundational information, policy responses remain necessarily speculative. A focus on task-by-task assessments of the odds of a job being displaced is insufficient, yet it serves as the primary mode of analysis at many federal and state agencies. This approach overlooks the possibility that AI-native firms and industries may soon render certain roles, and even entire fields, obsolete. Anticipating which fields may disappear is no easy feat. It requires collaboration among technologists, economists, and entrepreneurs, among others. Such collaboration is in short supply.

Other shortcomings have exacerbated this data deficit. AI’s rapid evolution makes traditional economic measurement difficult. The technology’s general-purpose nature means its impacts cut across traditional industry boundaries in ways that existing data collection systems struggle to capture. Moreover, many AI applications remain proprietary, limiting researchers’ ability to assess their economic effects systematically.

Addressing these limitations requires substantial investment in new data infrastructure. This might include mandatory reporting requirements for companies deploying AI systems above certain thresholds, longitudinal studies tracking workers in AI-affected industries, and real-time labor market monitoring systems that can detect emerging displacement patterns quickly. The European Union’s AI Act includes some reporting requirements that could inform similar American approaches.

The 1955 emphasis on sharing research findings proves especially relevant. Contemporary AI policy suffers from fragmentation, with different agencies, academic institutions, and private organizations conducting parallel research with minimal coordination. Creating centralized clearinghouses for AI-related labor market research could accelerate learning and improve policy effectiveness across different levels of government.

Recommendation 5: Corporate Responsibility and Social Costs

Perhaps most boldly, the 1955 report argued that companies implementing automation should bear responsibility for the “human costs of displacement and retraining.” This principle of corporate social responsibility for technological change has been largely abandoned in contemporary AI policy, where technology companies face minimal obligations to address the employment consequences and negative social externalities of their innovations.

The contrast with mid-20th-century expectations is striking. During the automation wave of the 1950s and 1960s, major corporations routinely provided extensive retraining programs, early retirement packages, and relocation assistance for displaced workers. This occurred not purely from altruism but from recognition that sustainable technological change required broad social acceptance and stable labor relations.

Contemporary AI companies operate under dramatically different expectations. While major technology firms invest heavily in technical AI research and safety initiatives, they provide minimal support for worker transition programs. This represents a significant departure from the postwar social contract that the 1955 committee took for granted—the expectation that technological progress should benefit society broadly rather than concentrating gains among technology owners.

Reviving corporate responsibility for technological transition costs could take multiple forms. Tax policy could provide incentives for companies that invest in worker retraining rather than simple layoffs. Public procurement could favor companies that give preference to local workers. Companies that collaborate with local community colleges, high schools, and other civic institutions to host training sessions and demo new models could likewise receive benefits.

The challenge lies in designing such policies without stifling innovation or driving AI development overseas. The solution may involve graduated responsibilities based on company size and displacement scale, safe harbors for companies that meet specified worker support standards, and international coordination to prevent regulatory arbitrage.

Recommendation 6: Constructive Labor Relations

The committee urged organized labor to “continue to recognize that an improved level of living for all cannot be achieved by a blind defense of the status quo” while simultaneously demanding adequate transition support for affected workers. This balanced approach—welcoming technological progress while ensuring workers share its benefits—offers a model for contemporary labor relations around AI.

Current debates often devolve into simplistic pro- or anti-AI positions, missing opportunities for constructive engagement around AI deployment that benefits both workers and productivity. Some unions have responded to AI with staunch opposition—for instance, resisting automated trucking by insisting the federal government enforce outdated regulations that require humans to perform certain tasks. Others have attempted to negotiate AI deployment terms, as in the United Auto Workers’ recent contracts with automakers. The most successful approaches combine technological engagement with strong worker protections.

The Writers Guild of America’s 2023 strike settlement provides an instructive example. Rather than banning AI outright, the agreement establishes frameworks for how AI can be used in creative processes while protecting writers’ compensation and creative control. This approach recognizes AI as a tool that can enhance creativity while ensuring human writers remain central to the creative process.

Scaling such approaches across the economy requires substantial changes to labor law and union strategy. Current collective bargaining frameworks often poorly address technological change, focusing primarily on wages and benefits rather than work reorganization and skill development. Labor unions need new capabilities for understanding and engaging with AI technologies, while companies need incentives to involve workers in AI adoption decisions rather than imposing changes unilaterally.

Recommendation 7: Adaptive Governance and Institutional Learning

The final set of recommendations by the committee emphasized that government at all levels should acknowledge and welcome technological progress while attempting to minimize its costs through close and ongoing analysis rather than reflexive regulatory responses. This philosophy of adaptive governance proves especially relevant to AI policy, where rapid technological change outpaces traditional regulatory time frames.

Current AI governance suffers from both overregulation and underregulation simultaneously, granting undue attention to hypothetical long-term risks while inadequately encouraging the technology’s most beneficial uses. Many of the proposals pending before state legislatures, such as the RAISE Act in New York, target catastrophic risks of unknown likelihood while many Americans continue to wait for improvements to health care, the justice system, and education that could be unleashed by today’s AI under the proper regulatory incentives.

The 1955 framework suggests a more balanced approach that embraces AI’s potential while systematically addressing its disruptive effects. This requires two key steps: first, building governmental capacity for continuous monitoring of how AI is being deployed and to what ends; and, second, adjusting policy rapidly rather than attempting to predict AI’s trajectory through premature legislation that treats AI as a static technology.

Effective adaptive governance for AI might include sunset clauses that require periodic review and justification of AI regulations, as well as regular policy evaluations based on empirical evidence rather than theoretical predictions. Adaptive licensing frameworks, as proposed for drug regulation, provide one model for how regulatory agencies can maintain oversight while enabling innovation.

Contemporary Relevance and the Path Forward

The enduring relevance of the 1955 recommendations reflects their focus on fundamental governance principles rather than specific technological details. The committee understood that successfully managing technological change requires addressing multiple interconnected challenges simultaneously: economic growth, worker development, community resilience, institutional adaptation, and social cohesion.

Contemporary AI policy’s failure to adopt this comprehensive approach has produced predictable results: fragmented responses that address narrow technical concerns while ignoring broader social implications, regulatory frameworks that emphasize restriction over adaptation, and public debates that generate more heat than light about AI’s societal impacts.

The path forward requires recovering the 1955 vision of technological governance as a collaborative social project. This means moving beyond narrow debates about AI safety or regulation toward comprehensive approaches that address economic opportunity, worker development, and community resilience simultaneously. 


Kevin Frazier is an AI Innovation and Law Fellow at UT Austin School of Law and Senior Editor at Lawfare.