On Monday, Chinese AI company DeepSeek released an impressive open-source large language model (LLM) called R1, which reportedly cost 95% less to train than competing models from companies like OpenAI. DeepSeek has made a significant leap in LLM development by enabling step-by-step reasoning without relying on massive supervised datasets. While I’m no AI expert, it’s clear that this marks a major breakthrough.
U.S. markets, however, didn’t seem to celebrate the news.

This release comes just days after the announcement of the Stargate Project, a massive initiative to invest $500 billion over four years in building AI infrastructure for OpenAI in the U.S. Stargate is a collaboration between OpenAI, NVIDIA, Oracle, and Microsoft, aiming to “build and develop AI—and in particular AGI—for the benefit of all humanity.”
But what does AGI that “benefits all humanity” look like, and who gets to decide? The tech world is experiencing a major “vibe” shift that will have an effect on how AI is developed and governed.
The “Don’t Be Evil” Era
Over the past decade, social media companies have wrestled with challenges like misinformation and hate speech, often resorting to heavy-handed censorship. Google’s early motto, “Don’t be evil,” is difficult to enforce when its products touch billions of lives daily. The Twitter Files revealed how some of these decisions were influenced, if not pressured, by the three-letter agencies.
This year, we’ve witnessed a significant shift in how platforms are governed. Elon’s approach to governing X and Zuck’s pivot at Meta both illustrate a move away from censorship-heavy policies. With record-low trust in corporate media, people are increasingly turning to independent platforms like Substack, as well as podcasts and social media, for information.
This shift signals a broader societal reaction. It’s no longer “cool” to be woke. When one side of the political spectrum pushes too far, the masses tend to swing back toward the center. Silicon Valley, once the epicenter of the woke movement, now seems to be pivoting to free speech absolutism. Many founders will follow the tide, chasing the money, until the pendulum inevitably swings back. My concern is how AI could amplify these cultural shifts in extreme and destabilizing ways.
AI That “Can’t Be Evil”
Instead of relying on a single person or corporation to “not be evil,” I’d prefer an AI system that can’t be evil at the infrastructure level. This means building AI on decentralized protocols that are transparent and censorship-resistant.
It’s become a meme to ask Chinese apps about Tiananmen Square in 1989, only to receive the same state-approved answers. While DeepSeek’s R1 model is technologically innovative, it represents a censored view of the world.
The same can be said about ClosedAI (OpenAI). A closed-source AGI from a company like OpenAI, trained on data curated by a select group of Silicon Valley elites, will be no more universally welcomed than ChatCCP (DeepSeek). Both the ClosedAI and ChatCCP models rest on questionable training data, which becomes consequential when we rely on AI “agents” to make decisions on our behalf.
Beyond the infrastructure and training, as AI agents become integral to our daily lives, economic questions arise about how they’ll operate. Governments may push Central Bank Digital Currencies (CBDCs) as the default transaction mechanism for AI systems. AI tied to CBDCs creates a system ripe for censorship and even potentially debanking users who don't comply.
Open-sourced AI models on crypto rails that support cryptocurrencies (over CBDCs) help ensure an AI system that “can’t be evil,” rather than relying on governments or corporations to “not be evil.”
Commoditization of LLMs
DeepSeek’s R1 demonstrates that AI doesn’t have to be prohibitively expensive or exclusive. Meta’s Llama is another open-source model that performs at a very high level. If AI continues moving in the open-source direction, with decreasing development costs, models may become commodities.
Some even speculate that this is why Apple has stayed out of the model race.
If this prediction holds, commoditized models could lead to a more diverse AI ecosystem instead of a single “God” model operated by one company.
I predict the future of AI will involve:
Small, task-specific models tailored for particular use cases.
Commoditized LLMs that are accessible to everyone and interchangeable.
AgentFi: an onchain, plug-and-play agent economy.
Final Thoughts
The race is on for AI supremacy. DeepSeek’s R1 feels like a “Sputnik moment” for U.S. AI companies. Investors are now questioning the value of different components of AI development.
The combination of closed and potentially censored AI and CBDCs foretells a dystopian future I’d rather not experience.
Instead, I support AI that is developed and operates transparently on decentralized protocols. AI being built on AO is a promising example of this. Learn more about how AO supports AI onchain below.
With AO mainnet launching on February 8, I’m excited to see its progress. Follow AO on X to stay up to date on developments and mainnet launch. This is not financial advice. Please do your own research.
Header image by ANTIPOLYGON YOUTUBE on Unsplash