Writing software with AI has become a big deal. If you work in software or data, you may be under pressure to start working with these tools, and all sorts of people have already started experimenting. There’s a wide spectrum of “vibe coding” practices: you could work with an AI agent, or you could simply ask an LLM for snippets of code. Across all of this, it’s natural to ask: does this really help us get more done?
To the extent the answer isn’t a categorical yes, can you be good at AI-assisted coding, as opposed to bad at it? What does that skill set actually look like?
Across many things you can do with AI, it seems to be true that if you already know what you’re doing, AI will help you move faster. If you don’t know what you’re doing, or you’re careless, AI can help you waste a lot of time by spinning in useless circles more quickly. “AI psychosis” often looks like an extreme version of this: you become confused in a way that makes you overexcited, and AI just helps you go overboard.
In my own experience, writing code with AI often makes me move much faster. Every once in a while, though, the AI acts like a debugging saboteur, helping me waste time by reinforcing a theory about what’s wrong that simply isn’t true.
You can definitely see the potential in all of these new tools, but it’s unclear whether they consistently help anyone move faster. I believe Anthropic’s own research suggests that AI coding hasn’t meaningfully improved productivity on average thus far.
That brings us to the question: Can you get good at AI coding in a way that makes you truly turbo-productive?
Personally, my interest in generative AI has grown as I’ve realized that you can be good or bad at using it. It’s a skill to cultivate. This also has interesting implications for the future of labor.
Right now, many people are experimenting with how to manage AI agents using tools like Claude Code. For example, you can release multiple AI agents to perform different tasks on your code: checking for security bugs, adding features, and more. People’s experiences seem consistent with the earlier point: if you do a good job thinking about what to ask for and then asking for it carefully, you tend to get good results. If you don’t, you can go down many blind alleys, accumulate technical debt, and have a bad experience.
I’ve noticed a few different schools of thought around this, along with a slightly unusual, almost mystical philosophy layered on top.
Some people are working on what you might call “mission control” systems. You can build an entire corporate-style bureaucracy of agents to supervise individual contributor agents... making sure they write good code, add tests, and follow all the practices you might expect in a software engineering organization. You can create meta-agents that supervise other agents, with different roles like performing code reviews on one another. These ideas are very new and very exciting, and I can certainly see the potential.
There’s also a school of thought that says this is all madness and will never really work: you just need to do a good job giving your agents clear, precise instructions.
Across all of these perspectives, there is a fascinating consistency in tone. Conversations about AI coding sometimes start to sound like a sports psychologist and a monk discussing mindset. Many of the situations where AI ends up helping you waste time seem to involve fatigue or emotional state. Sometimes you actually know what the right instruction is and you just fail to give it because you’re tired, distracted, or otherwise not thinking clearly.
This is very similar to patterns we see in less obviously intellectual areas of life. If you had time to think and were in a calm state of mind, you would know the right answer... but because you weren’t in that state, you made the wrong choice.
All of this is very much still in progress. AI moves incredibly fast. Vibe coding is very new, and this meta-conversation about being good at vibe coding is even newer. But I think both are fascinating and point to something strange: a lot of the right way to think about AI is about grooming your own thoughts, staying grounded, and developing good mental discipline.
And if you’re focused on outcomes and productivity, I can’t imagine this won’t become extremely relevant to the future of software and data employment very soon.
