
US highlights measures to tackle challenges of AI

The White House has revealed a set of actions aimed at tackling the challenges associated with artificial intelligence (AI), prompted by the growing popularity of tools like ChatGPT.

The government is also increasingly concerned about potential risks such as discrimination, misinformation, and privacy violations.

According to the White House, the US government intends to implement policies that shape the procurement and utilization of AI systems by federal agencies. 

This move is expected to have a significant impact on the AI market and to shape how Americans interact with AI in government-related settings such as websites and security checkpoints.


Concerns associated with AI

The White House also announced that the National Science Foundation plans to allocate $140 million towards promoting AI research and development. 

These funds will be utilized to establish research centres dedicated to applying AI solutions to critical areas such as climate change, agriculture, and public health, as outlined by the administration.

On the same day, Vice President Kamala Harris and other officials from the administration held a meeting with the CEOs of Google, Microsoft, OpenAI (the creator of ChatGPT), and Anthropic. 

The purpose of the meeting was to emphasize the importance of ethical and responsible AI development. It coincided with a UK government inquiry examining the risks and benefits associated with AI.

During a conference call before the meeting, a senior Biden administration official told reporters that tech companies have an essential obligation to ensure the safety, security, and protection of people’s rights before deploying or releasing their AI products to the public.

The officials on the call highlighted several risks the general public faces from the widespread adoption of AI tools, including the misuse of AI-generated deepfakes and the spread of misinformation that could undermine democratic processes.

The White House’s concerns also include job displacement caused by increasing automation, biased decision-making resulting from algorithmic systems, safety hazards associated with autonomous vehicles, and the threat of malicious hackers leveraging AI-powered techniques.

In a statement issued after the meeting, the White House conveyed that President Biden and Vice President Harris emphasized that both existing and potential risks AI poses to individuals, society, and national security must be addressed in order to fully realize the benefits of AI advancements.

The statement also mentioned that President Biden stressed the fundamental responsibility of companies to ensure the safety and security of their products before deploying them or making them accessible to the public.

Tech companies to be held responsible under US laws

According to a statement from the White House regarding the meeting, Vice President Harris emphasized to the companies that they bear an “ethical, moral, and legal responsibility” to ensure the safety and security of their products. 

The statement also indicated that the companies would be held accountable under existing US laws. Furthermore, Harris hinted at the possibility of future regulations in the rapidly evolving AI industry.

In a statement, Harris expressed the need for collaboration among government, private companies, and society at large to address these challenges. 

She added that she and President Biden are committed to advancing potential new regulations and supporting legislation to ensure the safe and beneficial use of technological innovations.

White House Press Secretary Karine Jean-Pierre described the meeting as “honest” and “frank” in a conversation with reporters. She emphasized the significance of the CEOs’ presence at the meeting, highlighting the seriousness with which the administration approaches the issue.

Jean-Pierre stressed the importance of greater transparency from AI companies, including providing the public with the means to assess and evaluate their products, to ensure the safety and trustworthiness of AI systems.

The meeting reflects the federal government’s ongoing recognition of concerns arising from the rapid development and deployment of new AI tools and its efforts to address some of the associated risks.


Measures to address AI risks

Members of the Federal Trade Commission, testifying before Congress, have raised concerns that AI could greatly amplify fraud and scams. FTC Chair Lina Khan highlighted in a recent New York Times op-ed that the US government already possesses sufficient legal authority to regulate AI by leveraging its mandate to protect consumers and promote competition.

Last year, the Biden administration released its Blueprint for an AI Bill of Rights, which emphasized that developers should adhere to principles such as privacy, safety, and equal rights when creating new AI tools.

At the beginning of this year, the Commerce Department’s National Institute of Standards and Technology (NIST) published a voluntary AI Risk Management Framework, intended to help organizations and businesses govern, map, measure, and manage potential risks at various stages of the AI development cycle. 

Additionally, in April, the Department invited public input on the most effective policies for regulating AI, including considerations such as audits and industry self-regulation.

The United States government is not alone in its efforts to shape AI development. European officials are also engaged in active discussions and expect to finalize AI legislation, potentially within the year. Such legislation could have significant implications for AI companies worldwide.