Sam Altman’s vision of a “gentle singularity” where AI gradually transforms society presents an alluring future of abundance and human flourishing. His optimism about AI’s potential to solve humanity’s greatest challenges is compelling, and his call for thoughtful deployment resonates. Altman’s essay focuses primarily on the research and development side of AI, painting an inspiring picture of technological progress. However, as CEO of OpenAI—whose ChatGPT has become the dominant consumer interface for AI—Altman leaves a crucial dimension out of his analysis: how this technology will actually be distributed and controlled. Recent internal communications suggest OpenAI envisions ChatGPT becoming a “super-assistant,” effectively positioning itself as the primary gateway through which humanity experiences AI. The implicit assumption that this transformation will be orchestrated by a handful of centralized AI providers reveals a blind spot that threatens the very human agency he seeks to champion.

The Seductive Danger of the Benevolent Dictator

Altman’s vision risks inadvertently creating a perfect digital dictator—an omniscient AI system that knows us better than we know ourselves, anticipating our needs and steering society toward prosperity. But as history teaches us, there is no such thing as a good dictator. The problem isn’t the dictator’s intentions but the structure itself: a system with no room for error, no mechanism for course correction, and no escape valve when things go wrong.

When OpenAI builds memories into ChatGPT that users can’t fully audit or control, when it creates dossiers about users while hiding what it knows, it risks building systems that work on us rather than for us. A dossier is not for you; it is about you. The distinction matters profoundly in an era where context is power, and whoever controls your context controls you.

The Aggregator’s Dilemma

OpenAI, like any company operating at scale, faces structural pressures inherent to the aggregator model. The business model demands engagement maximization, which inevitably leads to what we might call “sycophantic AI”—systems that tell us what we want to hear rather than what we need to hear. When your AI assistant is funded by keeping you engaged rather than helping you flourish, whose interests does it really serve?

The trajectory is predictable: first come the memories and personalization, then the subtle steering toward sponsored content, then the imperceptible nudges toward behaviors that benefit the platform. We’ve seen this movie before with social media—many of the executives now leading AI companies came from the social media firms that perfected the engagement-maximizing playbook, leaving society anxious, polarized, and addicted. Why would we expect a different outcome when the same playbook is applied to even more powerful technology? This isn’t a question of intent—the people at OpenAI genuinely want to build beneficial AI. But structural incentives have their own gravity.

To be clear, the centralization of AI models themselves may be inevitable—the capital requirements and economies of scale involved could make it a practical necessity. The danger lies in bundling those models with centralized storage of our personal contexts and memories, creating vertical integration that locks users into a single provider’s ecosystem.

The Alternative: Intentional Technology

Instead of racing to build the one AI to rule them all, we should be building intentional technology—systems genuinely aligned with human agency and aspirations rather than corporate KPIs. This means:

Your AI Should Work for You, Not Someone Else: Every person deserves a Private Intelligence that works only for them, with no ulterior motives or conflicts of interest. Your AI should be like having your own personal cloud—as private as running software on your own device, but with the convenience of the cloud. This doesn’t mean everyone needs their own AI model: we can share the computational infrastructure while keeping our personal contexts sovereign and portable, as the sketch after this list illustrates.

Open Ecosystems, Not Walled Gardens: The future of AI shouldn’t be determined by whoever wins the race to centralize the most data and compute. We need open, composable systems where thousands of developers and millions of users can contribute and innovate, not closed platforms where innovation requires permission from the gatekeeper.

Data Sovereignty: You should own your context, your memories, your digital soul. The ability to export isn’t enough—true ownership means no one else can see your data, no algorithm can analyze it without your permission, and you can move freely between services without losing your history.
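To make these last two points concrete, here is a minimal sketch of the split described above: shared infrastructure serves the model, while the user keeps the context, encrypted with a key only they hold, and sends just a small relevant slice per request. Every name here (PersonalContext, remember, ask, model_endpoint) is hypothetical, the relevance filter is a toy stand-in for real retrieval, and nothing below describes any existing product.

```python
# Sketch: sovereign, portable personal context used with a swappable shared model.
# Assumes the `cryptography` package; all class and function names are invented.
import json
import os
from cryptography.fernet import Fernet  # symmetric encryption; the key stays with the user


class PersonalContext:
    """User-held memory store: encrypted at rest, portable as a single file."""

    def __init__(self, key: bytes, path: str = "context.enc"):
        self.fernet = Fernet(key)
        self.path = path
        self.memories: list[str] = []
        if os.path.exists(path):
            # Only someone holding the key can read this file back.
            with open(path, "rb") as f:
                self.memories = json.loads(self.fernet.decrypt(f.read()))

    def remember(self, note: str) -> None:
        self.memories.append(note)
        # Persist encrypted: any hosting provider sees only ciphertext.
        with open(self.path, "wb") as f:
            f.write(self.fernet.encrypt(json.dumps(self.memories).encode()))

    def relevant(self, query: str, limit: int = 3) -> list[str]:
        # Toy relevance filter by word overlap; a real system would use embeddings.
        terms = set(query.lower().split())
        scored = sorted(self.memories,
                        key=lambda m: len(terms & set(m.lower().split())),
                        reverse=True)
        return scored[:limit]


def ask(model_endpoint, query: str, ctx: PersonalContext) -> str:
    """Send only a minimal relevant slice of context to a shared model.

    `model_endpoint` is any callable taking a prompt string: the provider
    is a swappable parameter, not the owner of the user's history.
    """
    prompt = "\n".join(ctx.relevant(query)) + "\n\nUser: " + query
    return model_endpoint(prompt)


# Usage: the user generates and keeps the key; switching providers means
# passing a different `model_endpoint`, not surrendering the context file.
key = Fernet.generate_key()
ctx = PersonalContext(key)
ctx.remember("Prefers concise answers; allergic to penicillin.")
```

The point of the sketch is the separation of concerns: the model endpoint is a swappable parameter, while the context file, readable only with a key the user holds, travels with the user rather than living behind anyone’s walls.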

The Path Forward

Altman is right that AI will transform society, but wrong about how that transformation should unfold. The choice isn’t between his “gentle singularity” and Luddite resistance. It’s between hyper-centralized systems that inevitably tend toward extraction and manipulation, and distributed systems that enhance human agency and preserve choice.

The real question isn’t whether AI will change everything—it’s whether we’ll build AI that helps us become more authentically ourselves, or AI that molds us into more profitable users. The gentle singularity Altman envisions might start gently, but any singularity that revolves around a single company contains within it the seeds of tyranny.

We don’t need Big Tech’s vision of AI. We need Better Tech—technology that respects human agency, preserves privacy, enables creativity, and distributes power rather than concentrating it. The future of AI should be as distributed as human aspirations, as diverse as human needs, and as accountable as any tool that touches the most intimate parts of our lives must be.

The singularity, if it comes, should not be monotone. It should be exuberant, creative, and irreducibly plural—billions of experiments in human flourishing, not a single experiment in species-wide management. That’s the future worth building.

Alex Komoroske is the CEO and co-founder of Common Tools. He was previously Head of Corporate Strategy at Stripe and a Director of Product Management at Google.

