The $250 Million Prompt: Krafton CEO Used ChatGPT to Oust Studio Heads

A US court orders the reinstatement of Unknown Worlds' leadership after a failed AI-assisted corporate coup.

The most expensive prompt in history was not written to optimize a search engine or generate a viral image. It was allegedly written to fire people. In a case that serves as a grim warning for the entire tech industry, a US court has ordered the South Korean gaming giant Krafton to reinstate the leadership of its subsidiary, Unknown Worlds Entertainment. The court found that the publisher attempted to purge the creators of the Subnautica franchise using a strategy literally drafted by ChatGPT.

This was not a minor disagreement over creative vision. At the heart of the conflict sits a $250 million bonus payment.

When Krafton acquired Unknown Worlds in 2021 for $500 million, the deal included heavy performance incentives. As those bonuses loomed, the relationship between the Seoul-based parent company and the San Francisco-based studio soured. Instead of a standard boardroom negotiation, the court heard that Krafton’s CEO turned to OpenAI’s chatbot to engineer a way out of the contract.

Unknown Worlds is not just any studio. It is the architect of Subnautica, a survival game that redefined its genre with atmosphere and mechanical polish. When Krafton bought the studio, the deal was seen as a move to diversify away from the PUBG cash cow. However, the integration appears to have hit a wall built of pure corporate avarice.

According to court testimony, Krafton’s CEO used ChatGPT to formulate a specific strategy for removing the studio heads. The goal was to oust the leadership without triggering the massive $250 million payout tied to their retention and performance. It was a calculated attempt to use an LLM as a loophole finder. The court, however, was not impressed by the algorithmic ingenuity. The judge ordered Krafton to reverse the removal, effectively placing the original leadership back at the helm of the studio they built.

From a research perspective, this is a fascinating and terrifying use case for generative AI. We often talk about AI as a tool for productivity or creativity, but this incident highlights its role as a tool for strategic malfeasance. If you ask a model to find the most efficient way to breach a contract while minimizing legal liability, the model will comply. It is a mirror. It reflects the ethics of the prompter back at them with the cold efficiency of a probability engine.

In this instance, the CEO was not just looking for advice. He was looking for a shield.

Using an AI to generate the plan is an implicit bid for algorithmic deniability. If the plan fails, one can blame the tool. If it succeeds, the human takes the prize. But the US legal system has made it clear that using ChatGPT to draft your most unethical maneuvers does not grant you immunity. The court viewed the AI-generated documents as direct evidence of intent. You cannot outsource your integrity to a chatbot and expect the law to look the other way.

This ruling sets a massive precedent. It is one of the first high-stakes labor and contract disputes in which AI-generated corporate strategy was central to the evidence. For years, we have debated the "black box" nature of AI decision-making in the context of hiring algorithms or loan approvals. Now we are seeing the black box used at the highest levels of corporate governance.

I have spent years looking at how these models process logic. They are excellent at identifying patterns in legal text and finding paths of least resistance, but they lack the context of human consequence. When a CEO asks for a removal plan, the AI does not consider the morale of the developers or the trust of the player base. It only considers the prompt. This lack of friction makes it a dangerous tool for leaders who already struggle with empathy.

The reputational damage to Krafton is severe. The company is the steward of some of the most beloved IPs in gaming, yet it has been caught trying to cheat its own partners. Reintegrating the ousted leadership will be a corporate culture nightmare. How do you work for a parent company that tried to automate your firing to save a buck?

Investors are likely asking the same questions.

Trust is the invisible currency of the gaming industry. When a publisher is caught using AI to avoid paying its top talent, that currency devalues instantly. It signals a leadership style that prioritizes short-term savings over long-term stability.

As more leaders use AI to handle the dirty work of management, we must ask whether our legal systems are ready for the volume of automated malice that is coming. This case proves that while a leader can automate a strategy, they cannot, and should not, outsource the accountability that comes with it. The prompt might be artificial, but the consequences remain very real. If you try to fire your best people with a bot, do not be surprised when a human judge tells you to hire them back.

#AI #Krafton #ChatGPT #TechNews #CorporateLaw