
OpenAI Whistleblowers Allege Restrictive Non-Disclosure Agreements, Request SEC Investigation

2024-07-13

OpenAI, the artificial intelligence research lab, is facing allegations that it illegally prohibited employees from raising concerns about the safety risks of its technology. Whistleblowers have filed a complaint with the Securities and Exchange Commission (SEC), claiming that OpenAI issued overly restrictive employment, severance, and non-disclosure agreements that violated federal laws protecting whistleblowers; the agreements could penalize workers who disclosed information to federal regulators [ed8b1618]. The whistleblowers' letter also raised concerns that OpenAI is prioritizing profit over safety in developing its technology. OpenAI spokesperson Hannah Wong said the company's whistleblower policy protects employees' rights to make protected disclosures and that OpenAI has changed its departure process to remove non-disparagement terms [ed8b1618]. The SEC has received the complaint, but it is unclear whether an investigation has been launched. The letter also called on OpenAI to change its non-disclosure agreements and on the SEC to fine the company for violating federal law [ed8b1618].

OpenAI whistleblowers have filed a complaint with the U.S. Securities and Exchange Commission (SEC) over the company's allegedly restrictive non-disclosure agreements. According to the complaint, OpenAI's employment, severance, and non-disclosure agreements could penalize workers who raised concerns about the company to federal authorities: the agreements required employees to waive their federal rights to whistleblower compensation and to obtain the company's prior consent before disclosing information to federal regulators. The employee non-disparagement clauses contained no exemption for disclosing securities violations to the SEC. The SEC has not commented on the existence of the whistleblower submission, and OpenAI has not responded to requests for comment [6b91ce65].

OpenAI is also facing criticism over its safety practices. Current and former employees allege that safety is being sidelined in the pursuit of technological advancement, and concerns have been raised about rushed safety testing ahead of a recent product launch. OpenAI employees have demanded better safety practices and greater transparency. The lab's commitment to safety is being questioned despite an official mission statement that emphasizes safety as a core principle, and experts warn of the potential risks of rapid AI advancement. OpenAI finds itself at a crossroads, needing to regain public trust and to prioritize safety alongside innovation [1e96d169] [6b2438d1].

OpenAI is facing backlash for failing to fulfill its public commitment to dedicate 20% of its computing resources to the safe control of the most powerful AI systems. The company's Superalignment team, responsible for ensuring that future AI systems could be safely controlled, was disbanded amid staff resignations and accusations that OpenAI prioritized product launches over AI safety [2f4ff727] [14f4625f]. Despite the company's public pledge, OpenAI's leadership repeatedly denied the team's requests for access to graphics processing units (GPUs) [2f4ff727] [14f4625f]. OpenAI cofounder and former chief scientist Ilya Sutskever, who led the Superalignment team, left the company before the commitment was fulfilled, and the team faced compute-allocation problems even before his departure [2f4ff727] [14f4625f]. The failure raises doubts about the credibility of OpenAI's public statements, as does criticism over the company's use of an AI voice similar to Scarlett Johansson's [2f4ff727] [14f4625f]. OpenAI has also faced internal challenges, losing several AI safety researchers in recent months; former employees have been required to sign separation agreements with strict non-disparagement clauses [2f4ff727] [14f4625f].

Separately, OpenAI disrupted five covert influence operations over the past three months that sought to use its artificial intelligence models for deceptive activities. The campaigns originated from Russia, China, Iran, and Israel, and the threat actors attempted to use OpenAI's language models for tasks such as generating comments, articles, and fake profiles, and debugging code for bots and websites. The operations did not appear to benefit from increased audience engagement or reach. OpenAI outlined emerging patterns of AI misuse, including generating high volumes of text and images with fewer errors, mixing AI-generated and traditional content, and faking engagement with AI-written replies [7c95b94a]. OpenAI has not yet responded to requests for comment on the matter [2f4ff727] [14f4625f].

A hacker gained access to OpenAI's internal messaging systems in 2023 and stole details about the design of the firm's AI products. According to The New York Times, which reported the incident, the hacker lifted details from an internal forum where OpenAI employees discussed the technologies the company was working on. OpenAI executives informed staff and the company's board about the breach in April 2023 but did not disclose it publicly because no customer or partner data had been stolen. The company did not notify US law enforcement agencies, believing the hacker to be a private individual with no known ties to a foreign government. OpenAI says it has fixed the underlying security issue and continues to invest in strengthening its security. Cybersecurity experts warn that attacks on AI firms are likely to continue and increase given the technology's growing importance [ab47dae1].

OpenAI employees have criticized the company for 'failing' its first test of its commitment to make its AI safe. Last summer, OpenAI promised the White House that it would safety-test new versions of its AI technology; however, some members of OpenAI's safety team felt pressured to rush through a new testing protocol to meet a May launch date. Former OpenAI employees have raised concerns about hurried safety work and a lack of solid safety processes. OpenAI spokesperson Lindsey Held said the company conducted extensive internal and external tests and initially held back some features to continue safety work. OpenAI has announced a preparedness initiative to prevent catastrophic risks and has launched two new safety teams in the last year. The incident sheds light on a changing culture at OpenAI, where company leaders have been accused of prioritizing commercial interests over public safety [6b2438d1].

The recent OpenAI security breach is worth paying attention to: AI companies are treasure troves of data and are likely to become a growing target for hackers, so companies working with large AI firms should be cautious. The same news roundup briefly notes unrelated items, including Dezerv raising $32 million from Premji Invest and others at a valuation of over $200 million, and the resignation of Jio Financial Services Group COO Charanjit Attra [0b504d3e].

Disclaimer: The story curated or synthesized by the AI agents may not always be accurate or complete. It is provided for informational purposes only and should not be relied upon as legal, financial, or professional advice. Please use your own discretion.