ClosedClaw
Open source typically dies, but hardware wins.
Imagine a world where your computer is run not by the manufacturer of the hardware, nor by a hacker overseas, but by paid-per-token "trusted" software you installed.
I use AI daily, and even run local LLMs on a cluster, primarily to own my software and control its output.
I predict that, soon enough, most products that are currently open source will fall into the hands of the model providers that gouge on API pricing, because those products operate on rails they do not own, and the only way out will be local hosting.
If everyone builds on models they do not host or run, they have zero visibility into where or how their software thinks. And since there is no durable distribution moat when the underlying capability is freely available, the endgame will be LLM providers taking a cut of whatever you make, while profiting from token costs that climb like thermal runaway.
Ivan Illich wrote in 1973 about what he called a radical monopoly: not the dominance of one brand, but the dominance of one type of product so total that alternatives become unimaginable. I hope that does not become our future.
Illich was writing about cars and highways, but the mechanism is identical here. The danger now is that centralized inference has become so normalized that local reasoning stops feeling like an option, because the ecosystem stopped being built around it. Foundations will learn what "supported by OpenAI" means long term. To date, OpenAI's only open-weight release has been gpt-oss (20B and 120B) on August 5, 2025, and it compares poorly with nearly every open model from China.
In China, AI is "too cheap to meter" and has become an invisible layer, which has led to mass acceptance and adoption. I expect the rest of the world gets there through open-weight models.
Not to pile onto the already incessant hype around the Mac Mini, but it is genuinely impressive how great a deal it is at $600, both for its specs and its power consumption. A single 512 GB Mac Studio can run a very large model very well for $10k up front plus electricity.
On some back-of-the-napkin math, I would need roughly 24 NVIDIA GPUs to match one M3 Ultra's performance; that NVIDIA setup would draw about 1,600 W, around 4x the energy consumption, while also requiring extra equipment.
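That napkin math can be sketched out in a few lines. Note the assumptions: the ~400 W figure for the Mac Studio is inferred from the stated "around 4x" ratio rather than measured, and the $0.15/kWh electricity rate is a hypothetical average, not a quoted price.

```python
# Back-of-the-napkin power and running-cost comparison.
# Assumptions (mine, not measured): the ~400 W Mac Studio draw is
# inferred from the "around 4x" ratio in the text; $0.15/kWh is a
# hypothetical electricity rate.

MAC_STUDIO_WATTS = 400    # assumed peak draw of one M3 Ultra Mac Studio
NVIDIA_RIG_WATTS = 1600   # total draw stated for the multi-GPU setup
PRICE_PER_KWH = 0.15      # hypothetical electricity rate, USD

def yearly_cost(watts, price_per_kwh=PRICE_PER_KWH, hours=24 * 365):
    """Electricity cost of running a machine continuously for one year."""
    return watts / 1000 * hours * price_per_kwh

ratio = NVIDIA_RIG_WATTS / MAC_STUDIO_WATTS

print(f"power ratio:   {ratio:.0f}x")
print(f"Mac Studio/yr: ${yearly_cost(MAC_STUDIO_WATTS):,.0f}")
print(f"NVIDIA rig/yr: ${yearly_cost(NVIDIA_RIG_WATTS):,.0f}")
```

Under those assumptions the NVIDIA rig costs roughly four times as much per year in electricity alone, before counting the extra cooling and power-delivery equipment.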
I believe NVIDIA, with its DGX Spark, and Dell, with the Pro Max with GB10, are also well suited for this future.
Local LLMs on mobile are only a couple of years away, especially on Chinese Android devices such as the ZTE Redmagic 11 Pro.
I am curious if we’ll see more phones adopt liquid cooling.




