
Setting a Higher Bar: Professionalizing AI Engineering

Chinmayi Sharma
Tuesday, December 12, 2023, 12:00 PM

Requiring AI engineers to obtain licenses and comply with industry standards conscripts the experts who understand AI systems best to the cause of building them responsibly.

Man coding. (StockSnap, http://tinyurl.com/29w2yt8w; CC0 1.0 DEED, https://creativecommons.org/publicdomain/zero/1.0/deed.en)


A big year for artificial intelligence (AI) is coming to a close, and with it, a mixed bag of reviews: proclamations on the life-saving advantages of AI, fears about the existential threat of AI, warnings of the environmental impact of AI, promises about the manufacturing benefits of AI, protests about the labor implications of AI, hopes about the security benefits of AI, anxieties about the discriminatory impacts of AI, and both optimism and pessimism about the impact of AI on democracy. Suffice it to say, society is far from reaching consensus on what to make of the new world order. But, in the face of so much uncertainty, countries around the world still face the Sisyphean-seeming task of determining how to regulate AI in a way that maximizes its potential and minimizes its risks. 

The OpenAI fracas last month underscored the limitations of purely voluntary AI self-governance. When safety means forgoing a competitive advantage, companies are not likely to adopt the Anthropic model of cautious research. In other words, safety is probably going to be sacrificed at the altar of commercialization. As U.S. regulators explore options to address the problem at home—from licensing AI companies to establishing a new AI agency—there is one option missing from the conversation: professionalizing the Promethean engineers breathing life into AI.

What does professionalization mean? In short, it means establishing institutions and policies to ensure that the only people building AI are those who both are qualified to do so and are doing so in sanctioned ways. The longer version entails academic requirements at accredited universities; mandatory licenses to “practice” AI engineering; independent organizations that establish and update codes of conduct and technical practice guidelines; penalties, suspensions, or license revocations for failure to comply with codes of conduct and practice guidelines; and the application of a customary standard of care, also known as a malpractice standard, to individual engineer decisions in a court of law. Self-governance with the force of law. 

There’s a panoply of reasons to do this. It’s a familiar system used in medicine, law, accounting, and every other form of engineering, from civil to electrical. It allows the experts who know the technology best to set the bar for their profession and update it over time as both the technology and best practices for building it advance. It encourages information sharing across the industry, allowing engineers to collaborate on identifying risks and designing solutions with professionals outside of their companies. It provides engineers with a shield to push back on company directives to build more or faster when doing so violates standards—standards set by independent engineers, not corporate interests. It empowers the profession, and the public, to weed out those engineers who flagrantly run afoul of those standards.

The list goes on. But perhaps the most important reason to support the professionalization of AI engineers is this: It imbues the profession with a sense of integrity and charges the only people with the actual power to shape AI with a Hippocratic oath: to do no harm. As in medicine, this is not, and can never really be, a promise of outcomes—but, rather, it is an ideal to aspire to.

A Familiar Playbook

Society has drawn from the professionalization playbook countless times in the past, from doctors to accountants. AI engineers share a lot of similarities with these other professions, including the level of expertise needed, the fast-paced nature of the discipline, and the high stakes of the work done.

High Level of Expertise Required

Sometimes, subject matter can be so esoteric and inaccessible that the government, courts, and the public are all out of their depths trying to reckon with it. In those instances, policymakers wisely defer to experts in the field on questions of right and wrong. The government is not in the business of legislating on the appropriate uses of a stent in the artery versus cardiac bypass surgery. Regulation leaves those determinations to the cardiologists and cardiothoracic surgeons who took the relevant classes, obtained the required licenses, earned the available specialization certificates, demonstrated compliance with the most current practice guidelines, and possessed the actual experience using those specific treatment interventions. 

As with medicine, the right and wrong way to train or fine-tune a specific type of foundation model requires substantial expertise not just in linear regression or neural networks, but also in the intended application of the model, whether for medical diagnostics or national security. The range of expertise required by the field of AI is so deep and wide that it is likely beyond the ability of any one institution, whether an agency or the courts, to master. Professionalization channels the expertise that industry already possesses to set a high, defensible bar. An advantage here: Engineers, famously skeptical of technocrats, are more likely to comply with standards they had a hand in setting.

Some observers argue the standards are not there yet. AI is an art, not a science, they say, and engineers don’t follow a blueprint when they build systems. However, many professions, such as medicine and law, think of their fields as much art as science. How a physician treats a patient depends on unforeseen factors outside the physician’s control—the best-laid plans for surgery can be foiled by an unexpected discovery about the patient’s health made during the procedure itself. There can be infinite permutations of circumstances impossible to address in ex ante practice guidelines. But that doesn’t mean these professions are devoid of clear, specific, and enforceable standards. Professional standards develop organically over time, beginning with crude yet important minimum standards that caution against the most egregious practices and eventually developing into sophisticated codes of conduct and specialty-specific practice guidelines.

Others argue that, while theoretically possible, concrete standards for AI just don’t exist yet. While AI is indeed nascent, it has standards today and is developing new standards constantly. For one, AI is, at its core, complex predictive analytics. Yet today, individuals are building models without even a rudimentary understanding of the statistics necessary to ensure the model is designed in a way that suits its use case. Further, while defining fairness is an age-old question that is unlikely to be resolved any time soon, AI researchers have been exploring technological ways to assess biases inherent in a data set, test systems for discriminatory behavior, and retrain systems in less problematic ways. Designing heuristics to identify and rectify problematic behavior is not limited to ethical quandaries. For example, “red-teaming” AI (a term borrowed from cybersecurity) pressure-tests systems, finding ways they can be coerced to malfunction or break. Security researchers have practiced this for decades; the concept has only recently come into vogue in the AI context, but it is already enshrined in the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. No profession has achieved perfect and complete standards that reduce the likelihood of error to zero—but that does not undermine the value of setting a floor, prohibiting the worst behavior, and encouraging the adoption of whatever techniques the field has developed to minimize harm, even if just a bit.
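To make concrete what one such enforceable check might look like, here is a minimal sketch (not drawn from the article) of a common way to test a system for discriminatory behavior: comparing a model’s positive-prediction rates across demographic groups, sometimes called a demographic parity check. All names, data, and thresholds below are hypothetical illustrations, not a prescribed standard.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate between groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    # Positive-prediction rate per group, e.g. approval rate in a lending model.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical audit of a model's decisions on eight applicants in two groups.
    preds = [1, 0, 1, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)               # {'a': 0.75, 'b': 0.25}
    print(f"gap = {gap:.2f}")  # a large gap would flag the model for closer review
```

A practice guideline would not need to mandate this particular metric; the point is that simple, auditable checks of this kind already exist and can anchor a minimum floor of professional conduct.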

Keeping Up With Advancements in the Field 

Another advantage of leaning on industry experts to set and update codes of conduct and practice guidelines is that they are best positioned to stay on top of new developments in the field. The pace of AI feels unparalleled—new iterations of large language models (LLMs) and other forms of generative AI seem to come out daily. Technological improvements in AI, as with medicine, are based at least in part on scientific study, from mathematics to computer programming. Every day, researchers at academic institutions and companies around the world are experimenting with new ways to improve model behavior in terms of accuracy and efficiency as well as trust and safety. But not all research is created equal. Just as the medical profession sorts through new research to identify which studies are credible enough to inform binding practice guidelines, AI engineers are better suited than anyone to evaluate the exponentially growing number of artificial intelligence studies.

Regulation is famously, and by design, unable to update at the pace the science advances. While regulation can encourage general categories of responsible behavior, such as the red-teaming of models, it will not be able to prescribe good red-teaming, as opposed to perfunctory red-teaming, because the answer to that question will evolve over time. Nor will it be able to adjust its language quickly if, down the road, a different approach is found to be more effective at testing models for bad behavior than red-teaming is.

The Stakes of the Game

But why is it so important to establish such clear standards of care and ensure they evolve over time to reflect the most cutting-edge research in the field? Because, as is true for bridge engineers and criminal attorneys, the stakes are too high—not just for direct clients or customers, but for society at large.

On the one hand, getting the question of responsible AI wrong spells disaster on many fronts. AI has already convinced people to kill themselves, encouraged patients with eating disorders to starve themselves, directed law enforcement to apprehend misidentified people of color, and suppressed speech based on xenophobic stereotypes. These are the risks of underregulating AI—allowing it to proliferate without demanding an appropriate degree of care. 

Often, these harms take the form of negative externalities, impacts on parties outside of the relationship between the professional and the client. This happens when the direct client stands to lose less than third parties do and so fails to bargain for more caution or diligence. While a city commissions the bridge, its citizens suffer the collapse. While Enron might have hired the accounting firm, society paid the price for the accounting mistakes. In the AI context, the third parties most likely to be harmed by irresponsibly designed systems are often those who are unable to avoid the impact of AI on their lives, such as targets of law enforcement, and those who lack the consumer power and political capital to advocate for themselves.

On the other hand, getting the question of AI wrong could suffocate a technology that promises to advance cancer treatment, protect critical infrastructure from catastrophic cyber threats, improve access to education and health care, increase the efficiency of government benefits distribution, and design new proteins that could replace the need for animal products. The medical profession has studied, and criticized, the negative impact nonclinical government regulations have had on patient care. This is the risk of overregulating AI—imposing infeasible requirements on AI companies that fail to serve their goals while depleting organizational resources and stifling new competition. 

If companies are left to govern themselves, profit motives will leave harms at the margin unaddressed. If the U.S. government is left to govern, a lack of technical expertise may result in onerous regulations that squander the benefits this technology offers. However, if regulation divorces engineers’ perspectives from their employers’ and incentivizes them to see their profession through a public interest lens, a Goldilocks balance of sustainable standards may be reached.

Better for the Public

Society turns to regulation when it does not trust the status quo. With AI, the government has expressed time and again that while it values advancements in the technology and sees great potential in AI’s applications, it is concerned about the safety, security, and ethics of these systems. In other words, it fears for the public. Professionalizing engineers is more likely to serve society’s interests than alternative methods of regulation (or, worse, no regulation at all).

First, there is the question of timing. A core tenet of cybersecurity, trust and safety, privacy, and ethics is that systems need to be responsible by design. What does that mean? It means fairness Band-Aids at the tail end of the AI development process won’t get the job done—studies have shown that safeguards slapped on top of systems can be jailbroken relatively easily, and models can be retrained to be, well, evil.

Trying to hold companies liable for unsafe products is too little, too late. It forces consumers to wait until the preventable bad thing actually happens, and it puts the burden on them (or the government) to look under the hood of a notoriously opaque system to prove what went wrong. So, regulation’s best bet at ensuring AI is developed responsibly is to target the software lifecycle at its inception. And building AI starts with the engineer.

Some observers disagree and say that building AI starts with corporate board priorities, executive team directives, and product team strategy. However, none of these entities possess the technical expertise to judge the feasibility and advisability of what they want to build, how they want to build it, and the timeline on which it can be built. A hospital’s board sets the budget, the hospital’s administrators allocate resources, and the department chiefs hire physicians and set schedules. But each of these decisions is informed by, and can be rejected by, the realities of clinical practice on the ground—how many resources go to pediatrics versus geriatrics is informed by the size of the patient pool and the complexity of the conditions treated. A hospital may want to cut budgets, but it cannot do so if the decision would force physicians to compromise patient care and commit malpractice.

Similarly, professionalization empowers engineers to use their expertise as a shield against corporate interests, pushing back on development timelines that would compromise documentation and safety checks. More than that, professionalization helps shift the culture entirely to one of social responsibility. Law, medicine, accounting, and engineering may all have direct clients, but the degree to which their practitioners do their jobs with integrity and care has huge impacts on general public welfare. In the same vein, AI engineers should feel social accountability for their work, from deciding what kind of training data to use to what kind of post-deployment safety audits to conduct. They should feel prepared, as senior attorneys, attending physicians, and head contractors do, to put their official stamp of approval on a final product. The backstop on every decision, from the first decision, should be: do no harm.

Standards With Teeth

There won’t be an overnight culture change in a profession that has long had libertarian undertones and an industry that urges its engineers to “move fast and break things.” Professionalization benefits from the big stick of self-enforcement. Unlike voluntary standards, such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF), professional codes of conduct and practice guidelines are mandatory and enforceable in two ways: by professional tribunals and in a court of law.

In a professionalized world, AI engineers who fail to audit training data for privacy violations or problematic biases can be reported by their colleagues or the public to professional boards that investigate the complaint and issue a penalty where appropriate. The penalty can be probation or a fine or, in particularly egregious circumstances, suspension or revocation of a license. Other professions employ this form of self-enforcement. For example, several physicians lost their licenses to practice during the pandemic for spreading misinformation about the vaccine (though not without pushback). An ethics board in Washington, D.C., recommended that Rudy Giuliani be disbarred for supporting Donald Trump’s efforts to overturn the 2020 election.

AI engineers can also be held liable for malpractice in a court of law for failing to uphold the standards demanded by their profession and thereby causing harm. Today, lawsuits against big technology companies fight a many-headed hydra of barriers, from Section 230 to liability waivers, and courts often dismiss suits against technology companies before the parties even get to discovery. Creating a malpractice cause of action for the public to challenge irresponsible engineering decisions can bypass many of these barriers. Professionalizing AI engineering can also help plaintiffs surmount the challenge of proving causation—if standards demand documentation of decisions, like a physician’s requirement to record consultation notes, plaintiffs have a better chance of finding where in the development process something went awry. Moreover, subjecting engineers to individual liability is likely to generate an AI malpractice insurance market, ensuring that plaintiffs can recover when they are harmed and aren’t faced with judgment-proof defendants.

When you need a license to practice, and the license is contingent on compliance with certain standards, then failure to comply is gambling with your own livelihood. That’s a compelling incentive for very cautious behavior—when your license is on the line, you don’t cut it close; you steer well clear of noncompliance. 

Better for Engineers

This is a death sentence for AI engineers, you might say. Not so. Professionalization offers several benefits to engineers. For one, as discussed above, it provides engineers with a shield against upper management. Today, engineers are rewarded for building new things fast. Security, safety, and ethics are considerations that, at best, introduce delay and, at worst, caution against building the thing at all—they get in the way. AI engineers may have more bargaining power than other professionals in the technology industry, but at the end of the day, they are still beholden to the interests of the handful of corporate entities with the resources to hire them and the means to develop advanced AI. In a world of enforceable standards, a licensed AI engineer has the trump card on development decisions. And forcing an AI engineer to flout industry standards exposes the company to liability as well.

Which brings us to the second advantage for engineers: insurance on someone else’s dime. Just as hospitals and private practices pay for their physicians’ malpractice insurance, AI employers are likely to pay for their engineers’ coverage. For one, that is industry practice across all professions subject to malpractice liability, from doctors to lawyers. Two, AI engineers are in high demand, and they have substantial negotiating power to demand insurance coverage. This benefits AI engineers in two ways: They are protected financially from most malpractice lawsuits without expending their own money, and they are less likely to be put in risky situations because, as the insurance holder, the company is unlikely to expose itself to undue liability either. Another win for caution.

The threat of liability should result in more stringent adherence to standards, avoiding litigation entirely. But, on the off chance AI engineers are brought to court, professionalizing the field actually protects them. Rather than subjecting them to an ordinary standard of care, the same standard applied to everyone, they would be held to a customary standard of care. This means that a jury can’t look at a plaintiff with one leg shorter than the other after a fractured femur and assume negligence—it means the parties get to bring in physician experts to testify to the physician’s compliance with practice guidelines. In AI, as in medicine, outcomes are highly unpredictable, and errors are inevitable and often outside a developer’s control. A professional standard would hold AI engineers accountable for doing the best they can with the information available to them at the time.

Finally, in a professionalized world, engineers are freed from the confines of walled gardens, more able to share information or start their own companies. When a profession is forced to establish industry-wide concrete practice guidelines, certain types of information must be shared and cannot be withheld under the pretext of confidentiality. The American Medical Association officially condemns “[t]he intentional withholding of new medical knowledge, skills, and techniques from colleagues for reasons of personal gain” because it “is detrimental to the medical profession and to society.” The law followed suit, preventing surgical procedure patent-holders from asserting their rights against other medical professionals. So, just as a successful heart disease intervention must be shared with the profession at large, so too must AI companies be required to share information related to serious system risks or promising precautionary practices. Software engineering generally, and AI engineering in particular, was built on a foundation of open-source collaboration. Corporate influences have stymied this to some degree, but if a new gateway for information flows were opened, engineers would be primed to take advantage of it.

Professionalization can open doors for information and for competitors. In a world in which companies or products, rather than engineers, are licensed, regulation will only reinforce the power that incumbents have in the industry. It forces the available talent to coalesce around big players that society has every reason to believe are not making AI decisions that prioritize the public’s interest. Licensed AI engineers, by contrast, would be free to leave the major players, start their own ventures, and tout their verified credentials to investors and clients. This benefits not only engineers but also the public, by fostering a more competitive AI landscape.

Next Steps

To professionalize the field, either the federal government or the states would first need to pass legislation requiring engineers to obtain licenses to build AI. A legal licensing requirement would necessitate the creation of a licensing body composed of AI engineers tasked with establishing the requirements to obtain a license (Do you need specific degrees? Will there be a licensing exam? Do you need a few years of work experience?) and the requirements to maintain a license (Are there continuing education requirements? Does the license ever expire? What are grounds for losing your license?). Central to this effort would be the writing of mandatory codes of conduct and practice guidelines that all licensed professionals must abide by.

Relatedly, this licensing body would need to define what it means to “build AI”—who needs to get licensed? Rather than fixing a static definition in law, letting this body of experts define what constitutes “AI engineering” allows the definition to remain flexible and adapt to the changing technological landscape. Today, nurse practitioners are licensed to do work that was traditionally the exclusive purview of physicians—over time, society may be comfortable letting unlicensed software developers do work that today is trusted to only the qualified few.

The process would take years, of course. But that is true of all the regulatory proposals out there. AI has no quick fix. Whether a law requiring licenses passes tomorrow or never, the current discourse around AI regulation should at the very least be expanded to include expectations of AI engineers. If the goal of regulation is to incentivize building AI in more responsible, socially beneficial ways, then why not focus regulatory interventions on the link in the chain with the most control over, and expertise about, how AI is actually built?

While professionalization is not without its shortcomings, such as protectionist barriers to entry and extortionate insurance regimes, it may be a lot like democracy: the worst form of AI regulation, except for all the others.


Chinmayi Sharma is an Associate Professor at Fordham Law School. Her research and teaching focus on internet governance, platform accountability, cybersecurity, and computer crime/criminal procedure. Before joining academia, Chinmayi worked at Harris, Wiltshire & Grannis LLP, a telecommunications law firm in Washington, D.C., clerked for Chief Judge Michael F. Urbanski of the Western District of Virginia, and co-founded a software development company.
