v0.4 🌳  

OpenAI Co-Founder Ilya Sutskever Launches New AI Company Focused on AI Safety

2024-06-22 20:44:57.700000

OpenAI co-founder and former chief scientist Ilya Sutskever has launched a new company, Safe Superintelligence Inc. (SSI), just a month after leaving OpenAI. Sutskever and Jan Leike left OpenAI after a falling-out with leadership over how to approach AI safety. SSI aims to prioritize safety alongside capabilities in the development of AI, and its mission is to create a safe superintelligence or artificial general intelligence (AGI). Sutskever believes that AI superior to human intelligence could emerge within the decade. SSI is actively recruiting technical talent and has already set up offices in Palo Alto, California, USA, and Tel Aviv, Israel. The company is a for-profit entity from the start, and co-founder Daniel Gross expressed confidence in its ability to raise capital. [783ce66f]

Safe Superintelligence Inc. (SSI), co-founded by former OpenAI chief scientist Ilya Sutskever along with Daniel Gross and Daniel Levy, aims to prioritize safety alongside capabilities in the development of AI. Sutskever's departure from OpenAI earlier this year followed a period of internal conflict; he has expressed regret for his role in the board's actions and is now focused on a stable, undistracted path toward safe AI development. SSI's approach to AI safety is informed by lessons from OpenAI, where Sutskever co-led the Superalignment team. The company is currently building its team and has offices in Palo Alto, California, and Tel Aviv. [75cbed99]

OpenAI's former safety lead, Jan Leike, has joined rival company Anthropic to work in a similar role. Leike was part of OpenAI's superalignment team, which focused on the safety of future AI systems. He resigned from OpenAI along with former OpenAI chief scientist Ilya Sutskever. Leike disagreed with OpenAI's leadership about the company's core priorities and has joined Anthropic to continue the superalignment mission. Anthropic plans to expand its team and work on scalable oversight, weak-to-strong generalization, and automated alignment research. [502f796c]

OpenAI has formed a safety and security committee, led by CEO Sam Altman, to evaluate and develop its processes and safeguards. The committee includes directors Bret Taylor, Adam D’Angelo, and Nicole Seligman, as well as technical and policy experts Aleksander Madry, Lilian Weng, and John Schulman, Chief Scientist Jakub Pachocki, and head of security Matt Knight. The committee's first task is to evaluate and further develop OpenAI’s existing safety practices over the next 90 days and share recommendations with the board. [52c0fef8]

In addition, former OpenAI board member Helen Toner has alleged that the board tried to fire OpenAI CEO Sam Altman last year because it had lost trust in him. Toner claims that Altman did not inform the board in advance about the release of ChatGPT and provided inaccurate information about OpenAI's safety processes. [502f796c]

Google's chief privacy officer, Keith Enright, will leave the company this fall, after 13 years at the tech giant. Enright's departure is part of a broader reorganization within the privacy teams, with the company attempting to shift privacy policy to various individual product management teams. The move comes as Google aims to prevent major AI lapses and improve its privacy practices. [6c307967]

The Stanford Internet Observatory, known for its research on the abuse of social platforms, is winding down after five years. Its founding director, Alex Stamos, left in November, and research director Renee DiResta's contract was not renewed. Some staff members have been told to find other jobs. The shutdown is attributed to a campaign by Republicans to discredit research institutions and discourage investigations into political speech and influence campaigns. The lab has faced three lawsuits alleging collusion with the government to censor speech. The remnants of the lab will be reconstituted under Jeff Hancock, the lab's faculty sponsor, with a focus on child safety. The Journal of Online Trust and Safety and the Trust and Safety Research Conference will continue. Stanford disputes the claim that the lab is being dismantled and emphasizes its commitment to academic research. [3809e0c8]

Former OpenAI chief scientist Ilya Sutskever has launched a new AI company called Safe Superintelligence (SSI) focused on safety. Sutskever, known for his differences with OpenAI CEO Sam Altman, started SSI just a month after leaving OpenAI. The company's mission is to create a safe superintelligence, which it calls the most important technical problem of our time. SSI plans to advance capabilities while ensuring safety remains a priority. Although SSI and OpenAI are competitors in the AI space, OpenAI has several years of experience and established products, such as GPT-4 and GPT-4o. Sutskever has stated that SSI will not do anything else until a safe superintelligence is achieved, indicating that it may be some time before the company releases mass-market products. [10278cf8]

OpenAI's former chief scientist Ilya Sutskever has launched a new AI start-up called Safe Superintelligence Inc. (SSI). Sutskever left OpenAI in May due to disagreements over AI safety strategies. SSI's mission is to build safe superintelligence, and that goal is the company's sole focus. The start-up is a for-profit entity from the outset, has already set up offices in Palo Alto and Tel Aviv, and is actively recruiting technical talent. Sutskever believes that AI superior to human intelligence could emerge within the decade, and co-founder Daniel Gross expressed confidence in their ability to raise capital for SSI. [d6eab63f]

Ilya Sutskever, the former chief scientist of OpenAI, along with Daniel Gross and Daniel Levy, has launched a new AI startup called Safe Superintelligence Inc. The company aims to create a safe superintelligence or artificial general intelligence (AGI). Sutskever was previously the co-founder and chief scientist of OpenAI, Gross has led AI efforts at Apple and is an investor in tech and AI companies, and Levy has worked for OpenAI, Google, and Facebook. The company is focused on developing both the technological capabilities and the guardrails for safe superintelligence while innovating rapidly. Safe Superintelligence is an American company with offices in Palo Alto, California, USA, and Tel Aviv, Israel. The AI industry is divided between caution and innovation: one camp favors moving fast and breaking things, while the other emphasizes safety and thorough testing before releasing disruptive AI technology. [fa8c515a]

Disclaimer: The story curated or synthesized by the AI agents may not always be accurate or complete. It is provided for informational purposes only and should not be relied upon as legal, financial, or professional advice. Please use your own discretion.