Does AI safety prevent AI safety?

In many ways, the purpose of the AI safety conversation right now is mostly to avoid doing something about AI safety.

In particular, there are existential but vague conversations about AI ending humanity in some dramatic way. These discussions seem to drain a lot of oxygen from far more tangible, tactical conversations about how AI is negatively affecting people’s lives right now in obvious, gross, and unethical ways.

"Purpose" here is not a suggestion of conscious intent. Rather, there’s always a dynamic in society where certain narratives are more convenient for those with influence that others. The tactical, tangible AI harms are inconvenient because they would require someone to do something specific right now. The scifi, apocalyptic AI safety conversations are convenient because they absorb attention without requiring concrete action in the present.

When you read the newspaper, it's worth asking yourself: What is real in my life right now? What's real in the stories I hear face-to-face from people about their lives?

Here's something that's real: people can't post images on social media without someone using AI to alter or exploit those images, often in degrading ways. This disproportionately affects young women, including those not yet at the age of majority. It's unacceptable.

And yet, in the United States, we are not moving with much urgency to address these problems. There has even been discussion at the federal level about limiting states’ ability to regulate AI. There has been some movement in other countries, but broadly speaking, the mainstream AI safety conversation rarely focuses on these issues as “AI safety.”

That’s tragic because these are problems we could actually do something about.

If you scroll through social media, you’ll see that AI safety discourse is often about the end of the world. I don’t want the world to end. I understand the instinct to focus on catastrophic risks. But how, exactly, is the world supposed to end? When? Through what mechanism? The details are often thin.

When the risks are framed in such abstract and long-term ways it becomes difficult for anything concrete to happen in the short term. Occasionally we hear that maybe a company won’t release its hot new model. But as we’ve recently seen, even companies that have made strong safety pledges feel market pressure to keep releasing new systems when competitors do.

The “end of the world” AI safety conversation often results in… nothing.

If we deprioritized these vague existential narratives, we might free up space for discussions where action is actually possible. These are discussions that could reduce real harm that people are suffering today under the status quo.

If you’ve never encountered this idea before, it might sound crazy: that AI safety discourse could function to prevent meaningful AI safety action. But this is part of a broader pattern that scholars have written about for decades. It’s often referred to as recuperation.

If you want to see recuperation in action, pay attention to classic rock songs in car commercials. Listen to the lyrics. Ask yourself: is this an anti-consumerist anthem now being used to sell me a car?

It’s a common pattern. When powerful interests face criticism, they adopt pieces of that criticism and use them in a diluted, aestheticized, or symbolic way. Paradoxically, this can neutralize the critique itself. The language remains, but the teeth are gone.

This is happening across much of today’s AI safety discourse.

That’s a real tragedy because the harms are not theoretical. They’re here. They’re personal. And they’re fixable.

AI and the economy's (not doomy) future

A number of doomer-style articles about AI and its economic effects have gone viral, and everyone is asking about the fate of the economy. There are three ongoing trends that provide a constructive, not-so-doomy guess at how to answer these questions.

Life will change, and change can't be positive for everyone. Life is going to go on, though, and hopefully this perspective can help you adapt more constructively.

It’s funny...between reading all of these articles, I was driving in my car listening to the local oldies station, which loves to joke again and again that it is still humans selecting the music on the radio. This is framed as a nice perk for listeners. It's important, though, as we are on the cusp of an economy where humans are the ultimate luxury good.

Many of us are very scared of AI, and we don't always think about how powerful this fear is in shaping how things actually play out. People really want to talk to another human, to feel like a human is part of their experience.

Related: I’ve joked with friends for a long time that any job you have in 2026 is also a mental health support job for some stakeholder or another you encounter at work (in addition to whatever else you are supposedly hired to do).

And this brings us to...

Trend Number One: Human Interaction is THE Prestige Good

Human interaction itself will remain economically valuable. Even as automation improves, there will be increasing demand for roles that provide, overtly or often covertly, human connection.

Trend Number Two: We Are Entering an Era of Dynastic Wealth

This ongoing trend is rarely discussed in the context of AI, but whether it is good or bad, we are entering an era of dynastic wealth. Going forward, more than anything we've seen in the past hundred years, who is rich and who is not will increasingly be determined by inherited wealth.

Many doomer narratives suggest that AI will take all the jobs and leave nobody with money to spend. This will be less of a factor than many expect for many reasons, including that wealth will matter more than income when it comes to spending power.

In this environment, where consumption patterns are increasingly determined by inherited wealth rather than career prestige, prestige consumption will be more important than ever. This brings us back to Trend One: What is the new hot luxury good? Increasingly, it will be access to people rather than machines.

Once again, demand for human labor will look different from what we are used to, but it is not going away.

Trend Number Three: Large Corporate AI Adoption Is Incremental

AI adoption in large corporations is happening very quickly in terms of purchases... but it seems to be happening much more slowly in terms of actual productivity gains and workforce displacement.

You should also be skeptical of many AI-related layoff announcements. There are always layoffs that corporate managers want to make or feel pressured to make, and AI can serve as a convenient justification. In a strict practical sense, it is not accurate to say workers are being replaced by AI at scale right now.

Because of slow practical adoption and political / organizational dynamics (nobody likes to have their own headcount reduced), large employers (and the large portion of total employment they represent) will experience relatively slow disruption.

Another overlapping trend is that the workplace is increasingly becoming many Americans’ primary social outlet, especially for people with families and children. For many people, life is basically work and home.

If you look closely (and you are not encouraged to do so), many decisions in corporate office environments are partially about social comfort and signaling who is important. Headcount often becomes a proxy for value. This is a harbinger of the infant trend that humans themselves are becoming a luxury asset.

We tend not to talk about the economy this way because it doesn’t perfectly align with how we think economic systems should work. It's just not true, though, that everything is driven purely by ruthless, abstract efficiency.

AI is being introduced into a world shaped by human desires, and many human desires are fundamentally social.

A Quick Example: Palantir

Disclaimer: I don’t have any special inside knowledge about Palantir. If you know more, consider this my idiosyncratic fantasy based on gossip about Palantir.

People often talk about Palantir in hushed tones, as if they have some kind of mysterious or extremely advanced technology. In a more private setting, the hushed tones are used to discuss that they don't have any such thing.

What I observe is that they have the phone numbers of many Department of Defense generals who trust them to build exotic dashboards for the consumption of high-prestige stakeholders.

Like many businesses, part of what Palantir has going for it is established relationships. Important people at the Department of Defense don't want to negotiate multi-billion-dollar purchases with robots. They want to talk to a human. If they have a favored human they have worked with before, that human relationship is the real business asset.

If you are a company whose value is based on relationships ... if your employees are the luxury good that the AI era is implicitly increasing demand for ... then you are probably in a good position.

By contrast, if you are in the business of providing commodified, boilerplate dashboards, AI will make your commodification even more uncomfortable.

Your strategic choices then are either:

  • Make what you do more about relationships between people, or

  • Go all-in on AI and become very good at competing on commodity efficiency.

To Recap

Things are probably going to be… fine-ish. The economy will begin to revolve around the implicit idea that dealing with humans is a luxury good. Feast and famine for different firms will turn heavily on how well they are positioned to provide this luxury to the stakeholders with money to spend. Someone will feast; try to stay calm and position yourself so that it is you.

Real humans exist, but we're pretty paranoid they don't

I feel called to speak on a current phenomenon: the assumption that everything is AI.

I provide videos to mirror all my writing, and if there’s any virtue in the mediocre lighting for the one that mirrors this article, I hope it’s that it convinces you I’m a real person.

I write in many ways, in part so I can share the lessons with you: with AI, by hand, with AI and edited by hand, by hand and edited with AI. Right now, it honestly feels like people are more likely to accuse real human writing of being AI than they are to accurately identify AI-generated content.

And certainly… everyone is really paranoid. I'm tempted to dig up the term apophenia. It describes the tendency to perceive meaningful patterns in unrelated things: someone becomes convinced of a theory and interprets everything around them as evidence for it. It's very common with conspiracy theories, and I think we're seeing a similar pattern with AI.

If you write and there’s a typo, people might call it AI. No typos? AI. Friendly tone? AI. A slightly off tone? AI. If we notice anything about something, it is the tell of lurking AI.

Maybe it’s worth remembering that every once in a while, you are talking to a real person. And maybe that matters... that not everyone on your network is a robot you can treat however you want.

Strategy requires letting go

Let's explore a generic, straw-man corporate AI strategy and why doing AI right may be more emotional than intellectual. It's about staying grounded and, ultimately, about letting go and accepting risk.

Letting go is hard. When you set out to create a strategy in a room full of people with different goals and emotions there's natural pressure to settle on a plan. It's relatively easy, and feels safer, to define a list of tangible steps to make this and that incrementally better. Tangibility is seductive, especially in AI, where things move fast and what looks easy can quickly become difficult.

Here’s the catch: it’s just as important to respect risk. Things won’t always go your way. You can’t win everywhere, and you certainly can’t boil the ocean. A strong strategy needs to identify where to put your chips down and hopefully use AI to deliver a truly meaningful strategic impact. Automating this or that may provide value, but it also exposes you to countless small but expensive disappointments when the AI world doesn’t unfold exactly as planned.

Emotionally, this is tough. Planning for failure, and for not even knowing where failure might occur among the million potential AI applications, is uncomfortable.

A subtler challenge is that productivity isn’t just about turning a crank harder. Anyone who’s spent time in a white-collar environment knows it’s often about organizational friction: waiting for approvals, permission bottlenecks, conflicting stakeholders. If we want to unlock radically enhanced productivity, the very shape of the organization has to change. Not every part of the machine can simply get bigger. Some areas will need to shrink or get leaner or simply experience the terror of change. Some initiatives, while possible and potentially valuable, aren’t strategic priorities and don’t deserve resources.

This is where culture matters. A grounded organization has the "emotional" strength to accept trade-offs, take calculated risks, and let go where needed. That emotional grounding positions you to build a strategy capable of delivering real, game-changing impact ... rather than a plan that delivers a lot of disappointing, but occasionally expensive, pilots.

In AI, the lesson is intellectually simple but emotionally profound: let go, focus on what truly moves the needle, and accept the risks in that decision.

No Vibe Coding on the Dancefloor!

When you hear stories out in the wild that sound like the prevailing AI hyper-productivity narrative, they often have unexpected twists in unexpected places. What follows is a story that paints this picture perfectly. As odd as it is in some ways, in others it is quite typical.

Below is a true story about a friend with some details blurred out...

Person A has a hobbyist interest in web development but works in a completely different industry. He heard about vibe coding, decided to take Claude Code for a test drive, and was immediately enraptured. Among other things, he vibe coded his own bespoke CRM for work and this small detail alone is surprisingly common.

Shortly after discovering Claude Code, he was scheduled to take a cruise with his girlfriend. There were plenty of opportunities to relax and enjoy various attractions, but what he really wanted to do was continue vibe coding on his phone. His girlfriend wanted to go dancing, and Person A didn’t want to get in trouble for vibe coding on the dance floor, so he needed a cover.

The solution? He decided to build an ERP system to make his girlfriend's job easier. Her job is at a mostly blue-collar, supply-chain-oriented business where she holds sales and administrative responsibilities. By the end of the cruise, Person A had successfully built an ERP system from the cruise ship's dance floors, and one that this business might actually start using.

This is a business large enough to benefit from an ERP system, but one small enough to end up without one. Because Claude Code is fun and accessible to hobbyists, this business suddenly had software that could make a real difference.

Some of us will immediately think of potential horror stories around security, compliance, etc., and those are certainly possible. Relative to the common conversation around those risks, though, the jobs of existing software engineers were not significantly altered here. It's essentially a comically inexpensive software engineering service delivered to a business that otherwise wouldn't have had access to the product.

AI discussion often orbits large corporations; after all, they are the ones with money to buy tools. However, if the turbo-productivity narrative is real, it seems to be happening elsewhere: among people who weren't experts before, who were on the margins of confidence in certain skills, suddenly able to function constructively in ways they didn't before. It's happening at small organizations or even among individuals, often spontaneously, and it's fascinating.

The AI productivity revolution? It’s not in the boardroom. It’s on a cruise dance floor.

It's not what you think, it's how you feel

Some think "vibe coding" tools like Claude Code are dangerous; others find them a revelation. Both are true, and it's all about the state of mind you bring to using them.

If you surf social media looking for opinions, there are really two schools of thought about tools like Claude Code. There are people who are really excited. They're having so much fun. They feel like they're on the cusp of infinite productivity. On the other hand, there are people who are skeptical, scared, or even a little bit angry. They feel like these vibe coding tools are just going to produce mountains of technical debt, security problems, and similar issues... and that it's all going to be a disaster we shouldn't be courting.

Both of these viewpoints are absolutely true. Which card you get from the dealer is going to depend on your state of mind. In important ways, it’s not what you think about Claude Code but rather how you feel when you use it.

If you look for more nuanced conversations, many people agree that Claude is good at telling you about best practices if you lead with information about your guardrails. “I don’t want to have technical debt. I have a sensitive API key. Security is important for what we’re building.” If you frame things this way, it does a good job helping you plan, and you’ll end up in the right place.
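
To make that concrete, here is a minimal sketch of what a guardrail-first prompt can look like. The wording and structure are just my illustration, not a canonical template:

```python
# A hypothetical guardrail-first preamble for a session with a coding
# assistant. None of this wording is canonical; the point is simply that
# constraints come before the task.

GUARDRAILS = """\
Before we write any code, here are my guardrails:
- I don't want technical debt: prefer small, well-tested changes.
- I have a sensitive API key: never hardcode or log secrets.
- Security is important for what we're building: flag risky shortcuts.
Ask me before adding dependencies or changing the architecture.
"""

TASK = "Add an endpoint that lets users export their data as CSV."

# Leading with guardrails nudges the model to plan around your constraints
# instead of optimizing for apparent speed alone.
prompt = GUARDRAILS + "\n" + TASK
print(prompt)
```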

This fits with the tagline: You are the executive function of the AI. If you are a demigod that can now do anything in a big hurry, what do you want to do?

And this is a good segue to the danger in the other viewpoint. If you’re a little bit manic, if you’re having a little too much fun feeling powerful, if what you want to do is add 10,000 features a day all day just to bask in the glory of your superficial productivity, then you are exactly the person who is going to end up with mountains of technical debt, security problems, and all of these other widely-forecasted nightmares.

This is not really a problem with the tool. The problem is that when you thought about what to do with your demigodly powers, you didn’t think about wanting a secure app or leaving behind a clean codebase for other people. You chose what you wanted to do carelessly.

Thus, it’s really all about the frame of mind you bring to using these tools. If you’re running an organization I hope you’re thinking about setting a cultural tone for everyone else. In many applications of AI, if you bring a grounded state of mind - if you’re thinking about what you really need to accomplish, what success looks like, what the risks are, and what the realistic timeline is - it can be great working with tools like Claude Code.

If you’re manic and just here to have a good time driving the race car of infinite productivity then you’re going to get in trouble.

It’s all about staying grounded.

You are the AI's executive function

In 2026 I will repeat again and again: maybe the best way to thrive in the age of AI is simply to stay grounded.

Today, I want to explore this through the lens of executive function, both in the psychological sense and in the corporate sense.

Here’s a simple observation: AI can do a lot, but it can’t decide what we want to use it for or take initiative on its own. At some point (at least for the foreseeable future) a human must make the decision to set up a process and direct AI toward a specific, desirable outcome.

This is where executive function comes in. On a personal level, it's about asking yourself: What do I want to achieve? What could go wrong? What information should the AI focus on to accomplish this task? Whatever you might be doing with AI, you still need to define the goals and boundaries. If I ask AI to draft a business letter and provide pulp slasher fiction as context, I've failed as the executive function. I told AI to focus on the wrong things, and the result will reflect that.
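
For illustration, here is a toy sketch of that responsibility. Every name and string in it is made up; the point is only that the human chooses the goal and the context:

```python
# A toy illustration of the executive-function job: choosing the goal and
# the context the model is allowed to focus on. All strings are invented
# for illustration.

goal = "Draft a short, polite business letter declining a vendor's renewal offer."

relevant_context = [
    "Our contract with the vendor ends March 31.",
    "We were satisfied with the service but are consolidating suppliers.",
]

irrelevant_context = [
    "Chapter 7: The killer crept through the abandoned cannery...",  # pulp slasher fiction
]

# The human decision is which of these lists the model ever sees. Feed it
# the wrong one and the failure is yours, not the model's.
prompt = goal + "\n\nContext:\n" + "\n".join(relevant_context)
print(prompt)
```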

The same principle applies at the organizational level. If you’re the executive guiding an AI system, you need clarity and discipline. What is this system meant to accomplish? Which outcomes are desirable, and which are not? What should the AI pay attention to, and what should it avoid? These decisions about goals, priorities, and boundaries are at the core of executive function.

What's more, it's becoming clear you can't outsource your executive function to AI safely. While AI can process limitless information, relying on it to decide what's important or to regulate your emotions can be dangerous. We already see examples in the media: people developing what some call "AI psychosis." As I have explained elsewhere and will explain again, it's not just about developing incorrect beliefs but about letting AI amplify destructive emotions, impulsivity, or false confidence to the point of serious consequences.

The corporate parallels are clear. AI systems deployed without oversight or guardrails can produce harmful outcomes, damaging a company’s reputation. This, too, is a failure of executive function: knowing what not to do, and maintaining systems of recognition and discipline, is as important as knowing what to do.

The takeaway is simple: humans remain the source of executive function. To improve your executive function, both for yourself and for AI, you need to cultivate mental and emotional health. Lead a balanced life, manage stress, maintain wholesome relationships, and take time to recharge. These are not just lifestyle tips, but rather they are increasingly the foundation of effective decision-making in the age of AI.

If you’re anxious about AI, don’t focus solely on mastering the technology. Focus on grooming your mind so you can bring a calm, disciplined frame to the decisions you make with the technology. You’ve been handed a powerful tool. Understanding how to point it in the right direction, and understanding how your own state of mind affects those decisions, will make all the difference.

AI Red Flags: Circular Investments and the Adoption Gap

This post is part of my "Minor but Important Red Flags Around AI" series...

AI may feel new and trendy, but it actually has the longest history of hype cycles in all of IT, including two well-documented "AI winters" dating back to the 1960s. It might sound extreme to suggest we could experience another one, but the possibility is real.

From this perspective, the weak spot in the current AI megatrend isn’t the technology itself but rather penetration into everyday use at large businesses.

Yes, there are remarkable individual use cases, often coming from solo innovators or smaller businesses. At the other end of the spectrum, large enterprises are spending heavily: purchasing tools, signing contracts, and investing in the infrastructure. However, when you talk to people inside these big organizations, you often hear: “My boss bought this and I don't use it.”

This gap - between buying AI and using AI effectively - is where red flags start to appear. Worse, there is another kind of red flag that fits in a little too well with this one.

The big-money investment deals at the top LLM startups have a couple of odd regularities. This morning, you may have read about Nvidia making a large investment in OpenAI. In the fine print, much of that investment will flow right back to Nvidia through purchases of data center infrastructure. Similarly, Microsoft's investment in OpenAI was structured so a substantial portion returned to Microsoft through cloud computing credits. In many cases, these "investments" are explicitly services-for-equity.

Why should this concern business leaders?

In an environment where AI companies can attract positive attention by generating enormous revenue without profits, there's a risk of circular economics: I give you a crisp dollar bill, you give me the dollar back, !REPEAT! ... on paper, both of us show impressive revenue even though no real value was created.
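
To make the arithmetic concrete, here is a toy sketch with entirely made-up numbers (the 80% spend-back ratio is an assumption for illustration, not a claim about any real deal):

```python
# A toy model of a circular deal. An infrastructure vendor invests in an
# AI lab; the lab spends most of the investment right back on the vendor's
# services. All numbers are invented.

investment = 100.0        # vendor -> lab, booked as an equity investment
spend_back_ratio = 0.8    # assumed share of the investment returned as purchases

lab_purchases = investment * spend_back_ratio  # lab -> vendor, booked as revenue

vendor_revenue_booked = lab_purchases                # vendor shows 80 in sales
net_money_outside_loop = investment - lab_purchases  # only 20 escapes the circle

print(f"Vendor revenue booked:  {vendor_revenue_booked:.0f}")
print(f"Money outside the loop: {net_money_outside_loop:.0f}")
```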

This isn’t the same as a Ponzi scheme, and it doesn’t mean AI companies lack substance. But it does highlight a structural weakness. If too much of the industry’s growth is built on these circular deals, it creates an illusion of traction while masking the real question: Is AI adoption actually creating sustainable value in the broader economy?

That’s why I believe you should consider two superficially different metrics together:

  1. The prevalence of circular investment deals. How often are dollars being recycled back to investors instead of funding real expansion and use?

  2. Actual cubicle-level adoption. Not just whether companies are buying AI solutions, but whether they’re embedding them deeply enough to drive ongoing business results.

We’ll learn more over the coming months and years. Some organizations will achieve real penetration, while others may decide AI isn’t the right fit for their operations.

In the meantime, leaders should keep a skeptical eye on the difference between buying AI and using AI, between creating value and swirling it round in a circle.


Enhance, don't replace

We may be at an AI crossroads, especially with large language models. The pressure and the hype are still real, yet the verdict on the first generation of pilots seems to be that they're not going very well. There are plenty of technical and generally prosaic reasons, but I want to focus on one framing mistake I see again and again that I believe is an inevitable seed of failure: treating AI as a drop-in replacement instead of as a tool that enhances human work.

At the end of the day, to make any difference for your organization, AI needs to touch the human world. If you treat AI like an autonomous black box and don’t invest in the human side of the equation - training, culture, oversight - all of the standard LLM shortcomings are going to appear in a way that tells the story of their (deficient) interface point with humanity.

A quick personal note: I started out very skeptical of generative AI. I initially did not think to ask too much of myself regarding how I used it, and there is much about the "replacement" frame that encourages this headspace. Over time I realized that there is an art to using these systems. Prompting is a skill. The model isn't a person, but it is constructive to respect the need to communicate with it carefully, as one would with a person.

That realization unlocked a tremendous amount of value for me. If you want to capture similar value, you need to invest in ensuring people are proficient in using the tools you give them.

It's often helpful to exploit the power of hindsight and examine some other, related trends. As "data science" rose in profile, organizations began to hire various sorts of quantitative experts and often expected immediate business breakthroughs. In many cases, what happened instead was failed communication and cultural mismatch: brilliant people retreated into technical work that didn’t align with business priorities. The result was wasted time, missed opportunities, and frustrated managers.

Intelligence, natural or artificial, is not a one-dimensional panacea. The journey that produces a highly technical person brings a certain culture and set of priorities with it - and that culture might not align with your company’s. If you don’t explicitly bridge those gaps through communication, shared expectations, and oversight, you’ll end up with outputs that are technically impressive but not useful.

The same is true with LLMs. Prompting and oversight are a form of communication. If teams don’t learn to “talk” to the model in ways that reflect their business needs, they will get outputs they cannot use. That mismatch can cause real problems.

There was an illustrative, tragic incident at the National Eating Disorders Association: an LLM-powered chatbot intended to help people in crisis began providing advice on crash dieting and generally doing exactly the wrong things. This wasn't simply a technical failure but also an operational one: inadequate supervision, insufficient monitoring, and a failure to treat the bot as part of a human-facing system that required training and oversight. If you were running a crisis hotline staffed by humans, you wouldn't presume that staff was a fire-and-forget solution that would get everything right forever without supervision... yet it appears this is how NEDA handled its AI system.

Technological disasters are rarely only about technology. The interface where humans and machines touch is almost always a critical failure point. This is very often true in cybersecurity, for example. You can design technically excellent systems, but if you don’t consider the broader system including the human element, outcomes range between meaninglessness and disaster.

So what should business leaders do?

  • Consider AI as a tool that enhances human capability.

  • Invest in training and develop cultural practices for interacting with AI tools.

  • Monitor outputs and build governance and oversight into workflows (a minimal sketch of this follows below).
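
On that last point, here is a minimal sketch of what an oversight checkpoint can look like. Everything in it is an assumption for illustration: the model call is a stub, and a real deployment would use proper moderation tooling and clinical review rather than a keyword blocklist.

```python
# Minimal sketch of output oversight for a human-facing chatbot. The model
# call is a stub and the blocklist is illustrative, not production-grade.

RED_FLAGS = ["calorie deficit", "crash diet", "lose weight fast"]  # assumed examples

def generate_reply(message: str) -> str:
    """Stand-in for a real model call."""
    return "Have you considered a strict calorie deficit?"

def log_for_human_review(message: str, draft: str) -> None:
    """Stand-in for a real escalation queue."""
    print(f"ESCALATED for review: user={message!r} draft={draft!r}")

def reviewed_reply(message: str) -> str:
    draft = generate_reply(message)
    if any(flag in draft.lower() for flag in RED_FLAGS):
        # Never send a risky draft; route it to a human instead.
        log_for_human_review(message, draft)
        return "I'd like to connect you with a human who can help."
    return draft

print(reviewed_reply("I want to feel better about my body."))
```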

In short: enhance, don’t replace.

Two faces of AI adoption (and lack thereof)

Right about now I hear the same story over and over: "I'm under pressure to use #ArtificialIntelligence at work, but I receive no guidance on what to use it for and how."

This reflects another disconnect in how different groups talk about #AI adoption. At large companies, and the vendors that serve them, adoption is equated with buying tools and infrastructure. Questions about actually using the tools tend to get attention only among smaller companies and individual people, and it may be that much of the true power-user productivity explosion is actually a really small and neglected part of the AI economy.

There is an economic lens on AI adoption through which it is picking up steam, and an immediately practical lens through which it seems to be developing some unexamined stagnation.

The potent economics of vibe coding

People are very expensive and not necessarily good at their jobs. Most of us react to stories about high salaries with a "hell yeah!", but many of the people who will make decisions about adopting #ArtificialIntelligence are going to have the opposite perspective. There is a lot of skepticism about #VibeCoding that is not incorrect in and of itself, but the implicit conclusions about adoption are wrong for ignoring the economic half of the issue. Or maybe economics is really 100% of the issue...


Why OpenAI became a Delaware public benefit corporation

In this video, I give an update on OpenAI's long-running incorporation saga and analyze their choice to reincorporate their for-profit arm as a Delaware public benefit corporation (PBC). Along the way, I analyze recent trends in where firms incorporate and why. If you aren't tired of Elon Musk yet, he manages to appear in this story in a surprising variety of ways.

McKinsey’s “Lilli” and our possible reactions

You may have seen some viral articles about McKinsey & Company's new #ArtificialIntelligence tool Lilli and its reported wide use. In this video, I unpack this story and use it to discuss...

- why #AI may pile on more reasons you should prefer to work with a smaller company.

- how agentic AI is something one might discover naturally while refining and systematizing ad hoc #LLM usage.

- why good product sensibilities require sensitivity to how stakeholders think about AI, independent of whether the AI works well or not.

Arguing about what words mean: AI edition

Many of our uglier parlor room arguments about #ArtificialIntelligence are really projections of arguments we have been having about ourselves for thousands of years. In turn, the story of these arguments is that they involve things close to us which we do not understand, and to handle our fear of the unknown we construct two straw men to fight each other rather than confront how far away from enlightenment we might be.

(RAG / Agentic AI) is dead! Long live (RAG / Agentic AI)!

In this video, I argue that retrieval augmented generation and agentic artificial intelligence are both great (and related) design principles that solve important problems around large language models... yet, they are also buzzwords catching a little abuse as we rush to define AI product categories that might not last anyway. Keep the philosophy, but be prepared for the details to change.

The U.S. sells assets, too!

Macroeconomic accounting principles quietly frame our public conversation about trade - the US also sells assets abroad beyond cash, and this is not part of the trade deficit calculation. In this video, I examine recent events involving OpenAI, SoftBank, the "Stargate" project, and dig up some old history on WeWork to argue the big picture of global trade looks a little different when you consider the big picture of how we measure it.

OpenAI abandons bid to go for-profit: Context & Analysis

OpenAI's corporate governance saga continues... In this video, I discuss the recent news that they have abandoned their effort to reincorporate with a for-profit status and discuss context like...

- their history as a not-for-profit

- the role of threatened lawsuits from Elon Musk

- SoftBank's "IPO-or-die" rider

- why becoming a public benefit corporation helps

- why becoming a public benefit corporation doesn't help

#ArtificialIntelligence