How Three Men Turned Two Words Into $1 Billion: The Safe Superintelligence Coup

Nihal Kurth

Superintelligence is Closer Than You Think

Yet another billion-dollar investment is making headlines this evening here in Europe. But this time, it’s different: the stakes couldn’t be higher. As SSI begins its journey, it’s not just about scaling; it’s about scaling safely, with the future of humanity in mind. Think the “AI bubble” won’t last? I’m with former Google CEO Eric Schmidt in saying AI is under-hyped. And SSI might just change your mind. Keep reading to discover why this could be the beginning of something groundbreaking.

With great power comes great responsibility — a cliché, perhaps, but never more relevant than now. As you read this, a select few are quietly embedding their biases and ambitions (whether noble or misguided) into the very core of our future. How? Through artificial intelligence and the algorithms that power it. The goal isn’t to replicate today’s flaws in tomorrow’s world. Instead, we’re striving for a future that’s less biased, more inclusive, and fundamentally fairer — all through the lens of AI.

Consider this: Amazon once rolled out a hiring algorithm to automate tech talent recruitment, only to discover that it was systematically disadvantaging women. How? The algorithm picked up on seemingly innocuous signals, such as membership in women’s sports clubs listed on resumes, and used them as grounds for elimination. Why? Because in a field where only 15% to 22% of data science professionals are women, the algorithm “learned” that men were the safer bet.
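To see how that kind of proxy bias creeps in, here’s a minimal sketch. It is not Amazon’s system, and every feature name and number in it is an illustrative assumption: a simple classifier is trained on synthetic, historically skewed hiring data and ends up assigning a negative weight to a feature that says nothing about ability.

```python
# A minimal sketch (not Amazon's actual system) of how a classifier
# trained on historically skewed hiring data learns to penalize a
# proxy feature such as "listed a women's sports club on the resume".
# All feature names and numbers here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic resumes: a skill score plus a binary proxy feature that
# correlates with gender but says nothing about ability.
skill = rng.normal(0, 1, n)
womens_club = rng.random(n) < 0.18  # roughly matches the 15-22% share

# Historical hiring decisions: driven by skill, but with a biased
# penalty applied whenever the proxy feature is present.
logits = 1.5 * skill - 1.0 * womens_club
hired = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([skill, womens_club.astype(float)])
model = LogisticRegression().fit(X, hired)

# The model reproduces the bias: a clearly negative weight on the
# proxy feature, even though it is irrelevant to job performance.
print(dict(zip(["skill", "womens_club"], model.coef_[0].round(2))))
```

The point of the toy example: the model never sees gender directly, yet it penalizes the proxy anyway, simply because the proxy correlated with rejection in the historical data it learned from.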

Rightly, Amazon pulled the plug after realizing the software did more harm than good. But Amazon is just one example. Despite their known flaws, algorithms already decide which movies we watch next, which news we consume, who gets hired, and which gig worker gets assigned to which task. And let’s not forget: algorithms falter when faced with situations that fall outside their training data.

But this is small compared to the challenge of superintelligence, a challenge we might face in our lifetime. The stakes are much higher. How do we steer the development of these powerful systems to avoid unintended consequences? How do we make sure AI doesn’t make harmful decisions that could affect society in ways we can’t predict? After all, as the saying goes, the road to hell is paved with good intentions.

As Wired so aptly put it, there’s a group perhaps small enough to fit around a kitchen table — yet they are the ones responsible for AI’s most remarkable advances.

This small circle holds the keys to the future. As AI continues to evolve at a breakneck pace, the advancements we’re witnessing could fundamentally redefine what it means to be human. With such concentrated power, we have to ask: Are these few individuals truly equipped to make decisions that could impact the entire world? Are they considering the broader implications, or are they too close to see the potential dangers?

Three men in the U.S. have taken these questions to heart. Determined to find a solution, they rolled up their sleeves and launched a bold new initiative. As they put it: “We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.” This marks the beginning of a potentially historic company called Safe Superintelligence Inc.

Let’s meet the team:

I. Founders:

Daniel Gross

Daniel Gross is an American entrepreneur with an impressive track record. He co-founded Cue, spearheaded AI initiatives at Apple, and served as a partner at Y Combinator. Beyond that, he’s made his mark as a tech investor, backing big names like Uber, Instacart, Figma, GitHub, Airtable, Rippling, CoreWeave, Character.ai, Perplexity.ai, and more.

Ilya Sutskever

Ilya Sutskever is a computer scientist who has played a pivotal role in advancing machine learning. He co-created AlexNet, a breakthrough in deep learning, alongside Geoffrey Hinton and Alex Krizhevsky. Born in Soviet Russia and raised in Jerusalem, Sutskever later moved to Canada to study under Hinton, the pioneer often called the “Godfather of AI.” Hinton has recently voiced concerns about AI’s direction, and Sutskever’s focus on safe superintelligence suggests he shares his mentor’s worries about where the field is headed.

Ilya’s Deep Learning Journey:

From an early age, Sutskever was captivated by how brains learn and how that process could be replicated in machines. Like his mentor Geoffrey Hinton, he recognized the vast potential in neural networks and the deep learning techniques Hinton pioneered. “It kept getting better and better,” Sutskever recalls.

In 2012, alongside Hinton and fellow graduate student Alex Krizhevsky, Sutskever co-created AlexNet, a neural network that dramatically outperformed every rival at identifying objects in photos in that year’s ImageNet competition. The moment is often called the Big Bang of deep learning, and it firmly established Sutskever as a key figure in the AI revolution.

Daniel Levy

Daniel Levy is a seasoned computer scientist who previously led the Optimization team at OpenAI. Before diving into the world of cutting-edge AI, he was a Ph.D. student at Stanford University, where he specialized in machine learning, optimization, and privacy under the guidance of John Duchi.

Levy’s academic journey began at École Polytechnique and continued at Stanford, setting the stage for a career that has seen him contribute to some of the biggest names in tech, including Microsoft, Meta, Google, and, of course, OpenAI.

II. Safe Superintelligence Inc.: Facts and Figures

Company Inception:

SSI is a three-month-old startup, still in its early stages, with (supposedly) zero production code in place.

Company Valuation:

$5 billion.

Current Team:

Just 10 employees, but they’re strategically based in Palo Alto and Tel Aviv — two hubs known for attracting top technical talent. The founders have deep connections to the talent pool in both places.

The Problem:

The founding team sees building safe superintelligence as the most critical technical challenge of our time. That’s why they’ve launched the world’s first dedicated SSI lab, with one goal and one product: safe superintelligence.

SSI’s Mission:

Their mission is simple and singular: to achieve safe superintelligence. It’s not just their name — it’s their entire product roadmap. Every team member, investor, and strategy is aligned with this singular focus.

Company Strategy:

SSI approaches safety and capability as two sides of the same coin, tackling both through cutting-edge scientific innovation and state-of-the-art engineering. Their strategy is to advance capabilities as rapidly as possible, but always with safety leading the charge. This balance, they believe, is the secret to scaling effectively and responsibly.

As they push the boundaries of AI, they simultaneously develop safety protocols to match. For instance, when they build AI that processes data faster, they also craft safeguards to ensure it handles that data responsibly. To maintain their laser-sharp focus, they’ve stripped away distractions — no management overhead, no product cycles, no typical business model headaches. It’s all about prioritizing long-term safety and security over short-term commercial gains.
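What might that pairing look like in the smallest possible terms? Here’s a purely hypothetical sketch, nothing like SSI’s actual (unpublished) stack: a fast batch processor that only touches records a safeguard has cleared first, in this case a naive check for personal data.

```python
# Purely illustrative: a capability (fast batch processing) gated by a
# safeguard (a naive PII check). Not SSI's code; their stack is unpublished.
import re
from typing import Iterable

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def is_safe(record: str) -> bool:
    """Safeguard: reject records containing an obvious email address."""
    return not EMAIL.search(record)

def process_batch(records: Iterable[str]) -> list[str]:
    """Capability: process records quickly, but only those the guard clears."""
    return [r.upper() for r in records if is_safe(r)]

print(process_batch(["hello world", "contact: jane@example.com"]))
# ['HELLO WORLD'] -- the record with PII is filtered out before processing
```

The detail worth noticing in the toy version: the safeguard sits inside the capability’s code path rather than being bolted on afterward, which mirrors the “safety always leads” framing.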

It’s hard not to wonder why things didn’t pan out as planned for Ilya Sutskever at OpenAI.

Flashback: Ilya Sutskever, who once served as OpenAI’s chief scientist, co-led the Superalignment team alongside Jan Leike. Interestingly, Leike departed in May to join rival AI firm Anthropic. Shortly after, OpenAI dissolved its long-term AI risks team, just months after its launch. It’s hard not to see a connection and wonder if the distractions SSI is avoiding were part of the challenges that Sutskever faced at OpenAI.

Investors:

SSI is backed by heavyweights like Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG. The fresh capital is earmarked for acquiring top-tier computing power and hiring the best talent available.

Perhaps the focused backing and clear fund allocation at SSI are intended to avoid the pitfalls that OpenAI encountered.

Yes, They’re Hiring!

SSI is looking for individuals who see this as their life’s work — people driven to tackle the most important technical challenge of our time.

III. Hiring Hacks: How They Land the Best

Beyond the usual questions about your projects and educational background, here are the standout questions that really caught my attention. Founders, take note:

  1. What is something weird or unusual you built or did early on in life?
  2. Please provide the link to a single commit that you find impressive, and explain why you find it noteworthy. You do not need to be the author.
  3. What is a project or task you completed that everyone around you thought was impossible? For example, due to its difficulty or some other constraint such as time.
  4. Are you able to work from the above location (Palo Alto or Tel Aviv) 5 days per week?

And the final question reveals something clear: they’re not remote-first. It’s self-explanatory: nothing beats being in the same room, brainstorming, and connecting on all levels. If this is your life’s work, you’d better be there, fully committed. Fully and truly, with nothing held back.

In a recent Reuters interview, CEO Daniel Gross highlighted their priority of hiring a small, trusted team, focusing on “good character” over experience.

Don’t believe it? See for yourself: https://ssi.inc/

IV. Scale in Peace, SSI.

We’re excited to see how you’ll make your mark on these remarkable times. Will you leave a lasting legacy on our beautiful planet? We can’t wait to find out.

Because in the end, it’s not just about the technology — it’s about the trust and human values we build into it. And it seems SSI is set to embed those values and shape a future we can all believe in.

But buckle up! This car is speeding down the road, and the journey is just beginning.

Meet Nihal:

Hey there! I’m Nihal, an engineer turned product strategist with a strong background in B2B dynamics and a proud Included VC Fellow. With entrepreneurial roots and work experience across 14 countries, my focus is on helping founders build products and teams that matter. Curious about the mind behind the keyboard? Let’s connect — drop me a line!

Fun fact: I launched my first business at 12 years old.
