We're at an inflection point in AI that reminds me of the early web — full of promise, full of noise, and genuinely hard to predict.
The signal and the noise
Every week there's a new model, a new benchmark, a new startup claiming to have solved some fundamental problem. Most of it is noise. But underneath the hype cycle, something real is happening: software is becoming adaptive. Programs that were once static rule engines are starting to reason, to generalize, to surprise us.
That matters. Not because AI will replace programmers or designers or writers — the "replace" framing is almost always wrong. It matters because the building blocks of software are changing. And when the building blocks change, everything built on top of them shifts too.
Where I'm paying attention
Three areas feel genuinely important right now:
Developer tools. AI is changing how we write, test, and debug software. The compounding effects here are enormous — better tools make better software, which makes better tools.
Interfaces. The conversational interface is interesting but overrated. What's underrated is AI working behind the scenes — anticipating, organizing, filtering. The best AI features will be invisible.
Science. Protein folding was just the beginning. AI as a tool for scientific discovery is the most consequential application of this technology, and it gets a fraction of the attention it deserves.
What I'm skeptical about
I'm skeptical about the "AGI is 2 years away" crowd. Not because I doubt the trajectory, but because the framing misses the point. The interesting question isn't when machines match human intelligence — it's how we design systems where human and machine intelligence complement each other.
That's a design problem as much as an engineering one. And it's the problem I find most interesting right now.