No Vibe Coding on the Dancefloor!

When you hear stories out in the wild that sound like the prevailing AI hyper-productivity narrative, they often have unexpected twists in unexpected places. What follows is a story that paints this picture perfectly. As odd as it is in some ways, in others it is quite typical.

Below is a true story about a friend with some details blurred out...

Person A has a hobbyist interest in web development but works in a completely different industry. He heard about vibe coding, decided to take Claude Code for a test drive, and was immediately enraptured. Among other things, he vibe coded a bespoke CRM for his own work, a detail that on its own is surprisingly common.

Shortly after discovering Claude Code, he was scheduled to take a cruise with his girlfriend. There were plenty of opportunities to relax and enjoy various attractions, but what he really wanted to do was continue vibe coding on his phone. His girlfriend wanted to go dancing, and Person A didn’t want to get in trouble for vibe coding on the dance floor, so he needed a cover.

The solution? He decided to build an ERP system to make his girlfriend’s job easier. She works at a mostly blue-collar, supply-chain-oriented business where she holds sales and administrative responsibilities. By the end of the cruise, Person A had built an ERP system from the cruise ship dance floors, and one that this business might actually start using.

This is a business large enough to benefit from an ERP system, but one small enough to end up without one. Because Claude Code is fun and accessible to hobbyists, this business suddenly had software that could make a real difference.

Some of us will immediately think of potential horror stories around security, compliance, and the like, and those are certainly possible. Relative to the common conversation around those risks, though, the jobs of present software engineers were not significantly altered. It’s essentially a comically inexpensive software engineering service delivered to a business that otherwise wouldn’t have had access to the product.

AI discussion often orbits large corporations, and after all, they are the ones with money to buy tools. However, if the turbo-productivity narrative is real, it seems to be happening elsewhere: among people who weren’t experts before, on the margins of confidence in certain skills, suddenly able to function constructively in ways they couldn’t before. It’s happening at small organizations or even among individuals, often spontaneously, and it’s fascinating.

The AI productivity revolution? It’s not in the boardroom. It’s on a cruise dance floor.

It's not what you think, it's how you feel

Some think "vibe coding" tools like Claude Code are dangerous, others find them a revelation. Both are true, and its all about the state of mind you bring to using them.

If you surf social media looking for opinions, there are really two schools of thought about tools like Claude Code. There are people who are really excited. They’re having so much fun. They feel like they’re on the cusp of infinite productivity. On the other hand, there are people who are skeptical, scared, or even a little bit angry. They feel like these vibe coding tools are just going to produce mountains of technical debt, security problems, and similar issues... and that it’s all going to be a disaster we shouldn't be courting.

Both of these viewpoints are absolutely true. Which card you get from the dealer is going to depend on your state of mind. In important ways, it’s not what you think about Claude Code but rather how you feel when you use it.

If you look for more nuanced conversations, many people agree that Claude is good at telling you about best practices if you lead with information about your guardrails. “I don’t want to have technical debt. I have a sensitive API key. Security is important for what we’re building.” If you frame things this way, it does a good job helping you plan, and you’ll end up in the right place.

This fits with the tagline: You are the executive function of the AI. If you are a demigod that can now do anything in a big hurry, what do you want to do?

And this is a good segue to the danger in the other viewpoint. If you’re a little bit manic, if you’re having a little too much fun feeling powerful, if what you want to do is add 10,000 features a day all day just to bask in the glory of your superficial productivity, then you are exactly the person who is going to end up with mountains of technical debt, security problems, and all of these other widely-forecasted nightmares.

This is not really a problem with the tool. The problem is that when you thought about what to do with your demigodly powers, you didn’t think about wanting a secure app or leaving behind a clean codebase for other people. You chose what you wanted to do carelessly.

Thus, it’s really all about the frame of mind you bring to using these tools. If you’re running an organization, I hope you’re thinking about setting a cultural tone for everyone else. In many applications of AI, if you bring a grounded state of mind - if you’re thinking about what you really need to accomplish, what success looks like, what the risks are, and what the realistic timeline is - working with tools like Claude Code can be great.

If you’re manic and just here to have a good time driving the race car of infinite productivity, then you’re going to get in trouble.

It’s all about staying grounded.

You are the AI's executive function

In 2026 I will repeat again and again: maybe the best way to thrive in the age of AI is simply to stay grounded.

Today, I want to explore this through the lens of executive function, both in the psychological sense and in the corporate sense.

Here’s a simple observation: AI can do a lot, but it can’t decide what we want to use it for or take initiative on its own. At some point (at least for the foreseeable future) a human must make the decision to set up a process and direct AI toward a specific, desirable outcome.

This is where executive function comes in. On a personal level, it’s about asking yourself: What do I want to achieve? What could go wrong? What information should the AI focus on to accomplish this task? Whatever you might be doing with AI, you still need to define the goals and boundaries. If I ask AI to draft a business letter and provide pulp slasher fiction as context, I’ve failed as the executive function. I told the AI to focus on the wrong things, and the result will reflect that.

The same principle applies at the organizational level. If you’re the executive guiding an AI system, you need clarity and discipline. What is this system meant to accomplish? Which outcomes are desirable, and which are not? What should the AI pay attention to, and what should it avoid? These decisions about goals, priorities, and boundaries are at the core of executive function.

What's more, it's becoming clear you can’t safely outsource your executive function to AI. While AI can process limitless information, relying on it to decide what’s important or to regulate your emotions can be dangerous. We already see examples in the media: people developing what some call “AI psychosis.” As I have explained elsewhere and will explain again, it's not just about developing incorrect beliefs but about letting AI amplify destructive emotions, impulsivity, or false confidence to the point of serious consequences.

The corporate parallels are clear. AI systems deployed without oversight or guardrails can produce harmful outcomes, damaging a company’s reputation. This, too, is a failure of executive function: knowing what not to do, and maintaining systems of recognition and discipline, is as important as knowing what to do.

The takeaway is simple: humans remain the source of executive function. To improve your executive function, both for yourself and for AI, you need to cultivate mental and emotional health. Lead a balanced life, manage stress, maintain wholesome relationships, and take time to recharge. These are not just lifestyle tips, but rather they are increasingly the foundation of effective decision-making in the age of AI.

If you’re anxious about AI, don’t focus solely on mastering the technology. Focus on cultivating your mind so you can bring a calm, disciplined frame to the decisions you make with the technology. You’ve been handed a powerful tool. Understanding how to point it in the right direction, and understanding how your own state of mind affects those decisions, will make all the difference.

AI Red Flags: Circular Investments and the Adoption Gap

This post is part of my "Minor but Important Red Flags Around AI" series...

AI may feel new and trendy, but it actually has the longest history of hype cycles in all of IT, including two well-documented “AI winters” dating back to the 1960s. It might sound extreme to suggest we could experience another one, but the possibility is real.

From this perspective, the weak spot in the current AI megatrend isn’t the technology itself but rather penetration into everyday use at large businesses.

Yes, there are remarkable individual use cases, often coming from solo innovators or smaller businesses. At the other end of the spectrum, large enterprises are spending heavily: purchasing tools, signing contracts, and investing in infrastructure. However, when you talk to people inside these big organizations, you often hear: “My boss bought this and I don't use it.”

This gap - between buying AI and using AI effectively - is where red flags start to appear. Worse, there is another kind of red flag that fits in a little too well with this one.

The big-money investment deals at the top LLM startups have a couple of odd regularities. This morning, you may have read about Nvidia making a large investment in OpenAI. In the fine print, much of that investment will flow right back to Nvidia through purchases of data center infrastructure. Similarly, Microsoft’s investment in OpenAI was structured so a substantial portion returned to Microsoft through cloud computing credits. In many cases, these “investments” are explicitly services-for-equity.

Why should this concern business leaders?

In an environment where AI companies can attract positive attention by generating enormous revenue without profits, there’s a risk of circular economics: I give you a crisp dollar bill, you give me the dollar back, and we repeat... on paper, both of us show impressive revenue even though no real value was created.

This isn’t the same as a Ponzi scheme, and it doesn’t mean AI companies lack substance. But it does highlight a structural weakness. If too much of the industry’s growth is built on these circular deals, it creates an illusion of traction while masking the real question: Is AI adoption actually creating sustainable value in the broader economy?
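To make the bookkeeping concrete, here is a minimal sketch in Python using entirely hypothetical round numbers, not the terms of any specific deal. It only illustrates the accounting pattern described above: both parties book impressive top-line figures while relatively little new cash reaches the rest of the economy.

```python
# Toy illustration of circular investment accounting (hypothetical numbers only).
investment = 10.0            # equity investment in the AI lab, in $B (assumed)
recycled_to_investor = 8.0   # portion the lab spends back on the investor's own products, in $B (assumed)

investor_booked_sales = recycled_to_investor  # shows up as product revenue for the investor
lab_booked_funding = investment               # shows up as capital raised by the lab

# Headline numbers look great on both sides...
print(f"Investor books ${investor_booked_sales:.1f}B in sales")
print(f"Lab books ${lab_booked_funding:.1f}B in funding")

# ...but the cash that actually leaves the loop is much smaller.
print(f"Cash reaching the rest of the economy: ${investment - recycled_to_investor:.1f}B")
```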

That’s why I believe you should consider two superficially different metrics together:

  1. The prevalence of circular investment deals. How often are dollars being recycled back to investors instead of funding real expansion and use?

  2. Actual cubicle-level adoption. Not just whether companies are buying AI solutions, but whether they’re embedding them deeply enough to drive ongoing business results.

We’ll learn more over the coming months and years. Some organizations will achieve real penetration, while others may decide AI isn’t the right fit for their operations.

In the meantime, leaders should keep a skeptical eye on the difference between buying AI and using AI, between creating value and swirling it around in a circle.


Enhance, don't replace

We may be at an AI crossroads, especially with large language models. The pressure and the hype are still real, yet the verdict on the first generation of pilots seems to be that they're not going very well. There are plenty of technical and generally prosaic reasons, but I want to focus on one framing mistake I see again and again that I believe is an inevitable seed of failure: treating AI as a drop-in replacement instead of as a tool that enhances human work.

At the end of the day, to make any difference for your organization, AI needs to touch the human world. If you treat AI like an autonomous black box and don’t invest in the human side of the equation - training, culture, oversight - all of the standard LLM shortcomings are going to appear in ways that tell the story of a deficient interface with the humans involved.

A quick personal note: I started out very skeptical of generative AI. I initially did not think to ask too much of myself regarding how I used it, and there is much about the "replacement" frame that encourages this headspace. Over time I realized that there is an art to using these systems. Prompting is a skill. The model isn’t a person, but it is constructive to respect the need to communicate with it carefully, as one would with a person.

That realization unlocked a tremendous amount of value for me. If you want to capture similar value, you need to invest in ensuring people are proficient in using the tools you give them.

It's often helpful to exploit the power of hindsight and examine some other, related trends. As "data science" rose in profile, organizations began to hire various sorts of quantitative experts and often expected immediate business breakthroughs. In many cases, what happened instead was failed communication and cultural mismatch: brilliant people retreated into technical work that didn’t align with business priorities. The result was wasted time, missed opportunities, and frustrated managers.

Intelligence, natural or artificial, is not a one-dimensional panacea. The journey that produces a highly technical person brings a certain culture and set of priorities with it - and that culture might not align with your company’s. If you don’t explicitly bridge those gaps through communication, shared expectations, and oversight, you’ll end up with outputs that are technically impressive but not useful.

The same is true with LLMs. Prompting and oversight are a form of communication. If teams don’t learn to “talk” to the model in ways that reflect their business needs, they will get outputs they cannot use. That mismatch can cause real problems.

There was an illustrative, tragic incident at the National Eating Disorder Association: an LLM-powered chatbot intended to help people in crisis began providing advice on crash dieting and generally doing exactly the wrong things. This wasn’t simply a technical failure but also an operational one: inadequate supervision, insufficient monitoring, and a failure to treat the bot as part of a human-facing system that required training and oversight. If you were running a crisis hotline staffed by humans, you wouldn't presume that staff was a fire-and-forget solution that would get everything right forever without supervision... yet it appears this is how NEDA handled its AI system.

Technological disasters are rarely only about technology. The interface where humans and machines touch is almost always a critical failure point. This is very often true in cybersecurity, for example. You can design technically excellent systems, but if you don’t consider the broader system including the human element, outcomes range between meaninglessness and disaster.

So what should business leaders do?

  • Consider AI as a tool that enhances human capability.

  • Invest in training and develop cultural practices for interacting with AI tools.

  • Monitor outputs and build governance and oversight into workflows.

In short: enhance, don’t replace.

Two faces of AI adoption (and lack thereof)

Right about now I hear the same story over and over: "I'm under pressure to use #ArtificialIntelligence at work, but I receive no guidance on what to use it for and how."

This reflects another disconnect in how different groups talk about #AI adoption. At large companies, and the vendors that serve them, adoption is equated with buying tools and infrastructure. Only among smaller companies and individual people do questions about actually using the tools get much attention, and it may be that much of the true power-user productivity explosion is actually a small and neglected part of the AI economy.

There is an economic lens on AI adoption through which it is picking up steam, and an immediately practical lens through which it seems to be developing some unexamined stagnation.

The potent economics of vibe coding

People are very expensive and not necessarily good at their jobs. Most of us react to stories about high salaries with a "hell yeah!" but many of the people who will make decisions about adopting #ArtificialIntelligence are going to have the opposite perspective. There is a lot of skepticism about #VibeCoding that is not incorrect in and of itself, but the implicit conclusions about adoption are wrong for ignoring the economic half of the issue. Or maybe economics is really 100% of the issue...


Why OpenAI became a Delaware public benefit corporation

In this video, I give an update on OpenAI's long-running incorporation saga and analyze their choice to reincorporate their for-profit arm as a Delaware public benefit corporation (PBC). Along the way, I analyze recent trends in where firms incorporate and why. If you aren't tired of Elon Musk yet, he manages to appear in this story in a surprising variety of ways.

McKinsey’s “Lilli” and our possible reactions

You may have seen some viral articles about McKinsey & Company's new #ArtificialIntelligence tool Lilli and its reported wide use. In this video, I unpack this story and use it to discuss...

- why #AI may pile on more reasons you should prefer to work with a smaller company.

- how agentic AI is something one might discover naturally while refining and systematizing ad hoc #LLM usage.

- why good product sensibilities require sensitivity to how stakeholders think about AI, independent of whether the AI works well or not.

Arguing about what words mean: AI edition

Many of our uglier parlor room arguments about #ArtificialIntelligence are really projections of arguments we have been having about ourselves for thousands of years. In turn, the story of these arguments is that they involve things close to us which we do not understand, and to handle our fear of the unknown we construct two straw men to fight each other rather than confront how far away from enlightenment we might be.

(RAG / Agentic AI) is dead! Long live (RAG / Agentic AI)!

In this video, I argue that retrieval augmented generation and agentic artificial intelligence are both great (and related) design principles that solve important problems around large language models... yet, they are also buzzwords catching a little abuse as we rush to define AI product categories that might not last anyway. Keep the philosophy, but be prepared for the details to change.

The U.S. sells assets, too!

Macroeconomic accounting principles quietly frame our public conversation about trade - the US also sells assets abroad beyond cash, and this is not part of the trade deficit calculation. In this video, I examine recent events involving OpenAI, SoftBank, and the "Stargate" project, and dig up some old history on WeWork to argue that the big picture of global trade looks a little different when you consider how we measure it.

OpenAI abandons bid to go for-profit: Context & Analysis

OpenAI's corporate governance saga continues... In this video, I discuss the recent news that they have abandoned their effort to reincorporate with a for-profit status and discuss context like...

- their history as a not-for-profit

- the role of threatened lawsuits from Elon Musk

- SoftBank's "IPO-or-die" rider

- why becoming a public benefit corporation helps

- why becoming a public benefit corporation doesn't help

#ArtificialIntelligence

It's time to start worrying about OpenAI

I think it is time to start worrying about OpenAI a little. In this video, I discuss a number of warning signs and risk factors like...

1. commodification of LLMs

2. erosion of scientific leadership

3. lack of product identity

4. financial sustainability

5. limitations from irregular corporate governance

6. powerful enemies like Elon Musk waiting in the wings to attack 4 and 5

A subtle but important point is that these are not parallel risks, but risks that are interacting destructively.

Take care with the silent surge of AI

It may be that you can access a lot of value right now simply by taking a closer look at what your employees are already doing with #ArtificialIntelligence. These days, #LLM features are likely oozing out of software you've long been using, perhaps even including your operating system. There will be good in this you want to accentuate and risk in it you want to regulate with intention.

I also discuss related reasons why #DataGovernance and #DataPrivacy expertise can be important for #AI, to the extent they allow you to adopt new technologies with intention.