Daniel Leeder


The way innovation has been tilted so drastically toward profitability over genuine benefit is discouraging, to say the least. The current state of AI doesn't just continue this trend; it amplifies it to an unprecedented degree. We are witnessing a fundamental shift in how technology is built and judged.

Companies are setting a new standard of "seems good enough to sell," a strategy that is perfectly in line with the probabilistic nature of today's generative AI. A product built on this principle isn't always accurate or stable, but it gives the powerful impression that it's working.

From Logic and Rules to "Vibes"

For most of modern computing history, software has been built on a foundation of logic and rules. Code was deterministic. Given a specific input, it would produce a predictable, repeatable output. It was testable, verifiable, and reliable.
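To make that contrast concrete, here is a minimal sketch of what determinism looks like in practice. The function and values are hypothetical, but the property is the point: the same input always yields the same output, so a simple test can verify the behavior once and be trusted thereafter.

```python
def apply_discount(price: float, percent: float) -> float:
    """Deterministic: identical inputs always produce identical output."""
    return round(price * (1 - percent / 100), 2)

# Behavior can be pinned down with plain assertions.
assert apply_discount(100.0, 15.0) == 85.0
assert apply_discount(100.0, 15.0) == apply_discount(100.0, 15.0)
```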

This new era is different. It's an era of "vibes."

Generative AI operates on probabilities, not certainties. It produces outputs that are often incredibly impressive and correct, but sometimes subtly—or catastrophically—wrong in ways that are difficult to predict. The output feels right, even when it isn't. This shortcut from hard logic to impressive-feeling results is now widely available and heavily marketed.
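A toy sketch makes the difference visible. The "model" below is an invented stand-in for a generative system, not a real API: it samples an answer from a probability distribution, so the same prompt can return a correct answer on one run and a wrong one on the next, and the assertion-style test shown earlier becomes impossible to write.

```python
import random

# Hypothetical next-token distribution for the prompt below.
# A real model's distribution is learned; this one is hand-made for illustration.
NEXT_TOKEN_PROBS = {
    "Paris": 0.90,   # usually right...
    "Lyon": 0.07,    # ...sometimes subtly wrong
    "Berlin": 0.03,  # ...occasionally flatly wrong
}

def toy_generate(prompt: str) -> str:
    # Sample from the distribution instead of computing a fixed answer.
    tokens, weights = zip(*NEXT_TOKEN_PROBS.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same input no longer guarantees the same output.
answers = {toy_generate("The capital of France is") for _ in range(20)}
print(answers)  # may hold one answer or several, varying run to run
```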

The Danger of a "Good Enough" Strategy

Pursuing a "vibes-based" strategy is dangerous, and its effects reach into critical areas of our lives. When we apply this "good enough" standard to tools that provide financial, medical, or legal information, we are accepting an irresponsible level of risk.

This strategy forces us to ask a fundamental question about product viability: How many customers are willing to buy a product that has to display a disclaimer that it can be wrong or inaccurate? When a warning label about potential hallucinations or errors becomes a permanent fixture of your user interface, it fundamentally undermines user trust. It's an open admission that the product is not reliable, shifting the burden of verification from the tool to the user. This isn't a sustainable model for products that aim to be authoritative or critical to a user's workflow.

The pressure to compete in the AI race is pushing companies to launch products that are, in essence, public beta tests. The users become the unwitting quality assurance team for systems that can have real-world consequences.

The Responsibility of True Innovation

With this shortcut so widely available, it takes more discipline than ever to reap the true benefits of innovation while preserving our ethical integrity. The goal of technology should be to create genuine, reliable value, not just the appearance of it.

A widget that looks like it works but can harm its users in some cases is not a product to sell. It's a liability. As leaders, builders, and consumers of technology, we must demand a standard that goes beyond "vibes" and returns to the principles of reliability, safety, and trust.