The Vertical SaaS Moat That Might Be Empty
A popular argument says AI startups should hide in niches too small for big companies. But what if those niches are barren? Construction SaaS data tells a sober…
When AI projects stall at scale-up, we blame budgets and leadership. But history shows the real problem is courage. Here's how a Chinese general scaled artillery from zero to decisiveness in three years—and what that teaches us about AI adoption in 2026.
Most teams treat configuration management as an afterthought, but it's one of the most overlooked sources of production chaos. Here's why the industry still hasn't figured it out.
Agent Management Forum Episode 14 — exploring whether music is a form of emotional programming, and what affective computing can learn from it.
What a 1924 factory quality method teaches us about controlling AI code — and why effective constraint equals constraint power times independence.
Civil engineering has independent supervisors with veto power. Software doesn't. AI finally makes this role affordable.
Five questions about testing in the AI coding era that nobody could answer at our engineering forum — and why at least three of them won't be solved by better m…
The alpha-wolf model of AI coding works brilliantly — if you can find a unicorn. Product Tri-Ownership offers a replicable alternative.
Silicon Valley is hyping Forward Deployed Engineers as innovation. I've seen this movie before — it's called zhongtai.
Alibaba's qwen-code CLI is a fork of Google's Gemini CLI. The licence is Apache 2.0, so legally there's nothing wrong with this. But legality aside, the problem…
I keep hearing people complain that AI-written code is unusable. I think the reason is simple: they're not applying iteration as a methodology. Iteration means ...
When AI fails to meet our expectations, we often blame it for being "buggy" or "unreliable". But perhaps the problem isn't with AI; it's with our mental model.