OpenAI has formed a new committee to oversee critical safety and security decisions related to its projects and operations. The committee consists of CEO Sam Altman and other senior executives, a makeup that has drawn immediate criticism.
OpenAI forms security committee with company insiders
OpenAI’s new Safety and Security Committee consists of Sam Altman, the company’s CEO; board members Bret Taylor, Adam D’Angelo and Nicole Seligman; Jakub Pachocki, chief scientist; Aleksander Madry, head of preparedness; Lilian Weng, head of safety systems; Matt Knight, head of security; and John Schulman, head of alignment science.
This committee will evaluate OpenAI’s security processes and measures over the next 90 days. Once the assessments are finalized, findings and recommendations will be presented to the board of directors, and some recommendations will be made public.
OpenAI has seen several high-profile departures from its safety teams in recent months, with former employees questioning the company’s priorities on AI safety. Daniel Kokotajlo resigned in April, saying he had lost confidence that the company would act responsibly as its AI systems grew more capable.
In May, OpenAI co-founder and chief scientist Ilya Sutskever left the company, reportedly over disagreements with Altman. His departure was allegedly prompted by Altman’s push to launch AI products rapidly at the expense of safety work.
More recently, Jan Leike, a former DeepMind researcher who was involved in the development of ChatGPT and InstructGPT, left his position because he felt OpenAI was not properly addressing safety and security issues.
AI policy researcher Gretchen Krueger left the company with similar concerns and called on OpenAI to increase its accountability and transparency.
While advocating for AI regulation, OpenAI has also sought to shape it. To this end, the company has devoted significant resources to lobbying. Altman is also among the members of the US Department of Homeland Security’s newly established Artificial Intelligence Safety and Security Board.
OpenAI also announced that it will bring in outside experts to counter criticism of the committee’s insider-heavy structure, among them cybersecurity expert Rob Joyce and former US Department of Justice official John Carlin. However, the company did not say how large this group of external experts will be or how much influence it will have over the committee.
Bloomberg writer Parmy Olson noted that in-house oversight committees of this kind often amount to little more than self-audits. Although OpenAI says the committee will address valid criticism, it is debatable whether a body composed largely of insiders can do so credibly.
CEO Sam Altman said back in 2016 that outside representatives would be given an important role in OpenAI’s governance. That plan never materialized, however, and now seems unlikely to.