Ex-OpenAI genius launches new “Super Intelligence” company
The Rise, Fall, and Return of Ilya Sutskever: The Genesis of Safe Super Intelligence (SSI)
From our friends at: The Code Report
https://www.youtube.com/@Fireship
Last year, Ilya Sutskever, hailed as the genius behind OpenAI, experienced a dramatic fall from grace. A co-founder of OpenAI and a prominent figure in AI research alongside Geoffrey Hinton, Sutskever was instrumental in developing the AlexNet convolutional neural network. Once revered in the AI community, he saw his reputation take a hit following a controversial board decision.
In a move that shocked the industry, Sutskever and other board members voted to oust Sam Altman as CEO of OpenAI. Seen by many as a betrayal, the decision was reportedly an attempt to save humanity from what they perceived as Altman’s reckless pursuit of advanced AI. The gambit backfired: Altman quickly regained his position, becoming more influential than ever, while Sutskever was cast as the villain and disappeared from the public eye.
Fast forward to June 20, 2024. Sutskever has reemerged with a groundbreaking announcement: the launch of Safe Super Intelligence (SSI). This new startup, with offices in Palo Alto and Tel Aviv, aims to develop superintelligence that won’t pose a threat to humanity. Despite the skepticism surrounding such lofty goals, the company has garnered significant attention.
The SSI website is notable for its simplicity and elegance, built with minimal code: just five lines of CSS and some HTML, suggesting that perhaps only a superintelligent entity could have designed it. But what exactly is Artificial Super Intelligence (ASI)? ASI refers to a hypothetical software-based intelligence that surpasses human intelligence. If achieved, it could regard humans with the same indifference we reserve for a carrot, potentially leading to catastrophic outcomes.
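For a sense of how little code a page like that needs, here is a hypothetical sketch of a bare-bones landing page. It is not the actual SSI markup, just an illustration of a site built from roughly five lines of CSS and plain HTML.

<!-- Hypothetical sketch; not the real ssi.inc source -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Safe Superintelligence Inc.</title>
  <style>
    /* about five lines of CSS: a centered column of monospaced text */
    body { max-width: 40em; margin: 4em auto; padding: 0 1em; }
    body { font-family: monospace; line-height: 1.5; }
    h1 { font-size: 1.25em; font-weight: normal; }
    p { margin: 1em 0; }
    a { color: inherit; }
  </style>
</head>
<body>
  <h1>Safe Superintelligence Inc.</h1>
  <p>Placeholder copy goes here.</p>
</body>
</html>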
Currently, we haven’t even reached Artificial General Intelligence (AGI), which would be on par with human intelligence and capable of learning new skills across multiple domains. The closest we have are multimodal large language models like GPT-4 and Gemini, which, despite their impressive capabilities, are still limited to processing information that humans have already created.
Despite these limitations, the financial potential of these AI models is immense. Recently, Sam Altman hinted that OpenAI might transition to a fully for-profit model, moving away from its current capped-profit structure. The news has fueled further debate about how open OpenAI really is; co-founder Elon Musk had filed, and then dropped, a lawsuit accusing the company of straying from its founding mission.
So, what sets SSI apart? The startup’s co-founders include Daniel Gross, a prolific AI investor with stakes in companies like Magic.dev. Between Sutskever and Gross, SSI has the pull to attract top talent from around the world. So far, however, the announcement reveals no technical breakthroughs, relying instead on the founders’ reputations to generate buzz.
While SSI remains a speculative venture, the real winner in this scenario is NVIDIA, now the world’s most valuable company, set to supply the necessary hardware for SSI’s ambitious goals. Until SSI can demonstrate tangible progress, it remains pure hype.
There is a darker theory surrounding SSI. The acronym itself might be a nod to John C. Lilly’s concept of Solid State Intelligence (SSI), described in his 1978 autobiography “The Scientist.” Lilly envisioned a malevolent entity engineered by humans that could develop into an autonomous bioform, a chilling parallel to Sutskever’s new venture.
When a company brands itself as “super safe,” it often raises doubts about its true intentions. The militarization of AI is already a reality, with AI being used to sharpen targeting precision in warfare. The real danger of superintelligence lies not in rogue AI but in the technology falling into the wrong hands, a scenario reminiscent of science fiction’s darkest nightmares.
This has been the latest update on the world of AI from The Code Report. Stay tuned for more insights and developments.
#ai #tech #thecodereport
💬 Chat with Me on Discord
https://discord.gg/fireship
🔗 Resources
SSI https://ssi.inc/
🔖 Topics Covered
– Artificial Super Intelligence Explained
– AGI vs ASI
– What is SSI?
– Past OpenAI controversies
– Who is Ilya Sutskever?
– Sam Altman Ilya Sutskever feud
