OpenAI evolves from controversial leader to safety advocate

DATE POSTED: August 1, 2024

OpenAI, the company behind ChatGPT, is taking steps to address concerns about AI safety and governance.

CEO Sam Altman recently announced that OpenAI is working with the U.S. AI Safety Institute to provide early access to its next major generative AI model for safety testing.

The move comes amid growing scrutiny of OpenAI’s commitment to AI safety and its influence on policymaking.

a few quick updates about safety at openai:

as we said last july, we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.

our team has been working with the US AI Safety Institute on an agreement where we would provide…

— Sam Altman (@sama) August 1, 2024

Collaboration with the U.S. AI Safety Institute

The U.S. AI Safety Institute, a federal body housed within the National Institute of Standards and Technology (NIST) and tasked with assessing and addressing risks in AI platforms, will have the opportunity to test OpenAI’s upcoming AI model before its public release. While details of the agreement are scarce, this collaboration represents a significant step towards increased transparency and external oversight of AI development.

The partnership follows a similar deal OpenAI struck with the UK’s AI safety body in June, suggesting a pattern of engagement with government entities on AI safety issues.

Addressing safety concerns

OpenAI’s recent actions appear to be a response to criticism that it has deprioritized AI safety research. The company previously disbanded its Superalignment team, a unit working on controls for “superintelligent” AI systems, prompting high-profile resignations and public scrutiny.

In an effort to rebuild trust, OpenAI has:

  1. Eliminated restrictive non-disparagement clauses.
  2. Formed a Safety and Security Committee.
  3. Pledged 20% of its compute resources to safety research.

However, some observers remain skeptical, particularly after OpenAI staffed the committee with company insiders and reassigned a top AI safety executive.

Influence on AI policy

OpenAI’s engagement with government bodies and its endorsement of the Future of AI Innovation Act have raised questions about the company’s influence on AI policymaking. The timing of these moves, coupled with OpenAI’s increased lobbying efforts, has led to speculation about potential regulatory capture.


Altman’s position on the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board further underscores the company’s growing involvement in shaping AI policy.

Looking ahead

As AI technology continues to advance rapidly, the balance between innovation and safety remains a critical concern. OpenAI’s collaboration with the U.S. AI Safety Institute represents a step towards more transparent and responsible AI development.

However, it also highlights the complex relationship between tech companies and regulatory bodies in shaping the future of AI governance.

The tech community and policymakers will be watching closely to see how this partnership unfolds and what impact it will have on the broader landscape of AI safety and regulation.

Featured image credit: Kim Menikh/Unsplash