Artificial intelligence is another reason for a new digital agency


The rapid pace at which artificial intelligence (AI) is developing contrasts sharply with the torpid process of protecting the public interests this technology affects. The oversight systems that the private sector and the government created to cope with the Industrial Revolution are not up to the AI revolution.

AI oversight needs a method that is as groundbreaking as the technology.

The American people, faced with the Industrial Revolution and its challenges, responded by introducing new concepts such as antitrust enforcement and regulatory oversight. Policymakers have yet to mount a comparable response to the digital revolution, and AI makes those realities even more daunting. We cannot allow the regulatory cruise control we have seen with digital platforms to be repeated for intelligent technology. Both consumer-facing digital services such as Google and Facebook and AI (which is led by the same companies) need a specialized federal agency staffed with appropriately compensated experts.

What worked before is insufficient

Dusting off what worked in the industrial era to protect consumers, competition, and national security is insufficient for the new challenges of the AI era. Understanding the effects of AI on society, the economy, and security requires specialized knowledge, and striking the balance between encouraging innovation and ensuring accountability for those effects is delicate work. Relying on outdated statutes and regulatory structures to respond to the speed and expanse of AI is expecting the impossible. It invites inevitable harm to the public interest as old systems fail to keep pace and private interests determine what is acceptable.

Stopping or slowing AI is like trying to stop the sun from rising. In the original information revolution that followed Gutenberg's printing press, the Catholic Church tried and failed to slow the new technology. If the threat of eternal damnation wasn't enough to stop the momentum of new ideas and economic opportunity back then, why do we think we can stop the AI revolution now?

AI oversight has drawn bipartisan attention from national policymakers. Senate Majority Leader Chuck Schumer has called for guidelines for testing and reviewing AI technology prior to its release. House Speaker Kevin McCarthy's office points to how he took a group of legislators to MIT to learn about AI. A presidential advisory committee report concluded, "direct and intentional action is required to realize AI's benefits and guarantee its equitable distribution across our society." The Biden administration's AI Bill of Rights was a start, but with rights come obligations, and the responsibilities of AI providers to protect those rights have yet to be established.

Federal Trade Commission (FTC) Chair Lina Khan, who has been appropriately aggressive in exercising her agency's authorities, observed, "There is no AI exception to the laws on the books." She is, of course, correct. However, the laws in place were written to address the problems of the industrial economy; the principal statute of Chair Khan's own agency was written in 1914.

Sectoral regulation, which relies on existing regulators such as the FTC and the Federal Communications Commission to address AI issues sector by sector, should not be mistaken for a national policy. Each of these agencies is accountable for AI's effects within its own sector, but a collection of agencies acting independently does not add up to a national AI policy.

The Commerce Department's National Telecommunications and Information Administration (NTIA) is running a process to solicit ideas about AI oversight. That is a significant step, but the answer is already apparent: a specialized body that can identify and enforce broad public interest obligations for AI companies.

New Regulatory Model

The real regulatory revolution is not the creation of a new agency but how it operates. The goal of AI oversight should be to promote AI innovation while protecting the public interest. The micromanagement style that characterized industrial-era utility regulation would slow AI innovation; oversight of AI must instead be agile.

A new regulatory paradigm works in three main parts:

  • Risk-based oversight: Not all AI poses the same risks. AI that aids online gaming or search options has a very different impact from AI that affects national or personal security. Oversight must be tailored to the risk involved, not one-size-fits-all.
  • Codes of conduct: Instead of rigid utility-style regulations, AI supervision must be innovative and agile. Once a risk is identified, behavioral obligations must be put in place to mitigate it. Arriving at such a code of conduct requires a new level of government-industry cooperation in which the new agency identifies an issue, convenes industry experts to work with the agency's experts on a behavioral code, and then determines whether that output is an acceptable answer.
  • Enforcement: The agency should be able to determine whether the code has been followed and issue penalties if it has not.

Unknowns

The future effects of AI remain unknown. What we do know is what history has taught us: failing to protect the public's interest in a rapidly changing technological environment has harmful effects.

We are once again witnessing the development and deployment of a new technology without regard for its consequences. Now is the time to create public interest standards to govern this new, powerful technology. If no force stands greater than those who want to use the technology for commercial gain, history will repeat itself: innovators will set the rules, and society will suffer the consequences.
