Daniel Leeder


One of the first and most fundamental lessons you learn in cybersecurity is the principle of least privilege. The idea is simple and robust: grant a system or user the minimal access required to function, and expand those permissions only when absolutely necessary. It's a proactive, security-first mindset.

The current state of most AI implementations, however, operates on the opposite principle: a policy of most privilege.
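To make the contrast concrete, here is a minimal sketch of the two postures. The principal names, actions, and permission sets are hypothetical, invented purely for illustration; real systems express the same idea through IAM policies, ACLs, or capability systems.

```python
# A minimal sketch of least privilege: access is denied by default,
# and every permission must be granted explicitly.
# (The principal and action names here are hypothetical.)
GRANTS = {
    "report-service": {"read:metrics"},
    "admin-console": {"read:metrics", "write:config"},
}

def least_privilege_allows(principal: str, action: str) -> bool:
    """Default-deny: anything not explicitly granted is refused."""
    return action in GRANTS.get(principal, set())

# The inverted posture: everything is allowed unless someone has
# already discovered the problem and forbidden it explicitly.
DENIALS = {("report-service", "write:config")}

def most_privilege_allows(principal: str, action: str) -> bool:
    """Default-allow: refuses only what is explicitly forbidden."""
    return (principal, action) not in DENIALS

# An action nobody anticipated: default-deny blocks it,
# default-allow waves it through.
assert not least_privilege_allows("report-service", "delete:database")
assert most_privilege_allows("report-service", "delete:database")
```

The difference is not in how many rules each side writes; it is in what happens to everything the rules never mention.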

Open by Default

A foundational Large Language Model is, by its nature, completely open. It is trained on a vast and largely unfiltered dataset—often a significant portion of the public internet—and then we, the creators and users, are tasked with finding and closing the infinite number of doors to harmful, malicious, or simply inaccurate content. We are playing a reactive game of whack-a-mole on a global scale.
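As a rough illustration of that reactive game, consider a blocklist-style output filter. The patterns below are invented for the example, and the point is precisely that no such list can ever be complete:

```python
import re

# A hypothetical blocklist: each pattern is a "door" someone has
# already discovered and closed. The list grows only after the
# fact; it can never be complete.
KNOWN_BAD = [
    re.compile(r"(?i)build a pipe bomb"),
    re.compile(r"(?i)steal (login )?credentials"),
]

def passes_filter(model_output: str) -> bool:
    """Default-allow: blocks only what we already thought to list."""
    return not any(p.search(model_output) for p in KNOWN_BAD)

print(passes_filter("Here is how to steal credentials..."))  # False: a known, closed door
print(passes_filter("A novel harmful request, phrased in a new way"))  # True: a door not yet found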

There is precedent for a technology that started completely open and has had to be progressively closed through discovery: the web itself.

For decades, we have been trying to make the web safe. We have filed lawsuits, stood up law enforcement agencies, and built multi-billion-dollar security industries: firewalls, VPNs, content registries, and countless other measures. Have we succeeded in making it a completely safe space? No. But a level of general regulation and a set of best practices have emerged from this constant effort.

The Unpluggable Hole

The lesson from the web is a sobering one for the age of AI. If we couldn't fully secure the web, a finite (though vast) collection of documents and servers, how can we expect to secure a generative model trained on that same web, capable of producing a virtually infinite amount of novel content?

The answer is, we can't. You can never find and plug every hole.

That is a reality every leader, developer, and user of this technology needs to understand and respect. It doesn't mean we should abandon AI, but it does mean we must approach it with a profound sense of responsibility.

We must remain aware of its inherent limitations, responsible for how we deploy it, and educated on how to use these powerful tools ethically and without causing harm. The security of this new era won't just come from the code; it will come from the culture and the critical thinking we build around it.