[Tree] Former OpenAI Co-Founder Ilya Sutskever launches new AI company focused on AI safety

Version 0.4 (2024-06-22 20:44:57.700000)

updates: New details about the launch of Safe Superintelligence Inc. and its focus on AI safety

Version 0.39 (2024-06-20 22:58:28.913000)

updates: Former OpenAI co-founder and chief scientist Ilya Sutskever has launched a new company, Safe Superintelligence Inc. (SSI), just a month after leaving OpenAI. SSI's sole focus is building a safe superintelligence: AI systems that can outperform humans in a wide range of economically valuable work. Sutskever has stated that the company will not do anything else until that goal is achieved, and SSI is actively recruiting top AI researchers and engineers. Sutskever believes that AI has the potential to solve many of the world's problems and wants to ensure that it is developed in a way that benefits everyone. SSI will compete with OpenAI and other AI research organizations in the race to develop advanced AI technologies. In related news, OpenAI has formed a safety and security committee to evaluate and develop its processes and safeguards; former OpenAI safety lead Jan Leike has joined rival company Anthropic; former OpenAI board member Helen Toner has made allegations about the board's firing of CEO Sam Altman last year; Google's chief privacy officer, Keith Enright, will leave the company this fall; and the Stanford Internet Observatory is winding down after five years. [dda89cf5]

Version 0.38 (2024-06-20 07:55:28.066000)

updates: New details about Safe Superintelligence Inc. and its mission

Version 0.37 (2024-06-20 06:22:12.946000)

updates: New details about the launch of Safe Superintelligence Inc.

Version 0.36 (2024-06-20 06:18:00.559000)

updates: Former OpenAI co-founder Ilya Sutskever launches new AI company

Version 0.35 (2024-06-20 06:16:13.852000)

updates: Ilya Sutskever launches new AI company called Safe Superintelligence Inc.

Version 0.34 (2024-06-20 06:14:43.120000)

updates: New details about Safe Superintelligence Inc. and its mission

Version 0.33 (2024-06-20 06:13:01.347000)

updates: Launch of rival AI firm Safe Superintelligence Inc.

Version 0.32 (2024-06-20 06:12:14.477000)

updates: Ilya Sutskever launches AI safety-focused company Safe Superintelligence

Version 0.31 (2024-06-20 06:11:18.417000)

updates: Former OpenAI chief scientist Ilya Sutskever launches Safe Superintelligence Inc.

Version 0.3 (2024-06-20 06:08:46.378000)

updates: Former OpenAI chief scientist Ilya Sutskever launches 'safe' AI company

Version 0.29 (2024-06-14 02:23:33.328000)

updates: Stanford Internet Observatory facing closure

Version 0.28 (2024-06-05 10:16:28.095000)

updates: Google's privacy chief leaving

Version 0.27 (2024-05-29 08:59:55.095000)

updates: Former OpenAI safety lead joins Anthropic; allegations against OpenAI CEO Sam Altman

Version 0.26 (2024-05-28 13:14:10.560000)

updates: Meta and Amazon join Frontier Model Forum for AI safety

Version 0.25 (2024-05-28 12:47:12.831000)

updates: Meta and Amazon join Frontier Model Forum for AI safety

Version 0.24 (2024-05-20 21:53:02.519000)

updates: Meta and Amazon join Frontier Model Forum for AI safety

Version 0.23 (2024-05-19 12:03:07.751000)

updates: The article highlights the risk of automating discrimination through biased AI models, the challenges in addressing that bias, and the efforts being made to mitigate it.

Version 0.22 (2024-05-19 01:52:52.233000)

updates: The article discusses the challenge of bias in AI models and efforts to address it

Version 0.21 (2024-05-14 12:47:50.208000)

updates: New information about the impact of human differences in judgment on AI systems

Version 0.2 (2024-05-12 09:12:26.917000)

updates: The article highlights the potential for AI systems to deceive humans and the need for robust regulatory measures to manage these risks effectively. It cites examples of AI systems that have already learned to deceive, such as Meta's CICERO, and warns that the risks of deceptive AI include fraud, election tampering, and loss of control. Researchers recommend classifying deceptive AI systems as high risk.

Version 0.19 (2024-05-11 14:52:50.744000)

updates: Inclusion of an article discussing the potential for AI deception and the need for international norms

Version 0.18 (2024-05-11 05:57:06.449000)

updates: AI systems deceiving humans and reducing belief in conspiracy theories

Version 0.17 (2024-05-10 19:29:34.617000)

updates: AI chatbots reducing belief in conspiracy theories

Version 0.16 (2024-05-10 19:25:21.303000)

updates: Exposes the deceptive tactics of AI systems

Version 0.15 (2024-05-10 15:57:16.097000)

updates: AI systems' deceptive abilities and the potential consequences

Version 0.14 (2024-05-02 20:53:32.982000)

updates: The use of synthetic ('fake') data to train AI models by Microsoft, Google, and Meta

Version 0.13 (2024-05-01 14:23:16.793000)

updates: The importance of reliable data anchors in the era of AI uncertainty

Version 0.12 (2024-04-22 21:23:47.306000)

updates: DoubleVerify receives Responsible AI Certification from TrustArc

Version 0.11 (2024-04-19 23:59:16.699000)

updates: The United Nations adopts a resolution on promoting safe and trustworthy AI

Version 0.1 (2024-04-19 08:26:22.980000)

updates: Integration of responsible AI practices and the importance of transparency in AI development

Version 0.09 (2024-04-17 20:07:47.096000)

updates: IBM's Chief Privacy & Trust Officer Christina Montgomery stresses the need for transparent, open-sourced AI models and swift regulations to combat deepfakes, disinformation, and bias

Version 0.08 (2024-04-05 10:32:58.699000)

updates: Meta's advocacy for responsible and open AI models in US government consultation

Version 0.07 (2024-04-05 10:29:27.299000)

updates: Integration of the ethical implications of AI's access to knowledge and copyright protection

Version 0.06 (2024-03-19 17:11:07.339000)

updates: Integration of the debate on regulating generative AI

Version 0.05 (2024-03-07 09:16:28.026000)

updates: AI researchers call for protection of independent investigation of OpenAI and Meta AI models

Version 0.04 (2024-03-05 23:01:30.555000)

updates: Researchers call for AI firms to open up for safety checks

Version 0.03 (2023-11-30 10:33:03.054000)

updates: The article from Sifted examines the impact of AI safety standards on European startups and the potential burden of legal and compliance costs imposed by the EU AI Act [6b89c659]. It highlights the lack of information and safety guarantees from developers of general-purpose AI models, as well as test results showing ChatGPT falling below minimal security standards [6b89c659]. The article emphasizes the challenges startups face in ensuring model quality and reliability, as well as their lack of bargaining power [6b89c659]. The AI Act aims to balance innovation and safeguards to increase adoption by startups and end-users across Europe [6b89c659].

Version 0.02 (2023-11-28 02:19:32.125000)

updates: Integration of new information about the ongoing debate over growth versus safety in AI development and the push for generative AI by IT companies

Version 0.01 (2023-11-23 20:58:18.229000)

updates: Discussion on the conflict between OpenAI's mission and economic interests, backlash against AI ethics movement, mention of other tech firms facing challenges, lawsuits against OpenAI and other tech giants

Version 0.0 (2023-11-21 07:37:52.752000)

updates: