AuthZ for AI Agents

LLMs are only as useful as the knowledge and data they have access to.

This statement has widespread support, and we firmly believe in it too.

This becomes even more critical in architectures where multiple LLM-based applications interact. The impact of AI on data strategies will be immense. Eliminating data silos, integrating data platforms, and understanding data products are just a few essentials for effective AI implementation.

As the demand for effective data use grows, so will the market for new data solutions. One of the most promising areas, in our view, is Data Governance. Companies in heavily regulated industries are likely already familiar with the concept, and those who've experienced unintended information disclosures have learned its importance the hard way. With the advent of LLMs, even those who have so far remained uninterested will soon follow suit.

The more interconnected and integrated a data stack becomes, the more control it requires. When LLM-based applications interact without constant human supervision, there must be a way to control data access and prevent unintended disclosures. Some of our clients initially addressed this by pre-processing datasets for specific LLM applications, ensuring no unnecessary data was included. This approach made sense for early tests and proofs of concept but became inefficient as complexity increased.
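
As a rough sketch of that pre-processing approach (the application names, column lists, and pandas usage here are hypothetical, not taken from any client setup), each application gets its own statically filtered copy of the data:

```python
import pandas as pd

# Hypothetical allow-lists: the columns each LLM application may see.
ALLOWED_COLUMNS = {
    "support_bot": ["ticket_id", "subject", "status"],
    "finance_assistant": ["ticket_id", "invoice_amount", "payment_status"],
}

def preprocess_for_app(df: pd.DataFrame, app_name: str) -> pd.DataFrame:
    """Produce a per-application copy containing only the permitted columns."""
    return df[ALLOWED_COLUMNS[app_name]].copy()

# Each new application, or any change to the rules, means regenerating
# and storing yet another filtered copy of the dataset.
```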

Imagine a scenario where a specific dataset is needed for one LLM application but must be restricted for another. With the pre-processing approach, that means maintaining a separate, filtered copy of the dataset for each application, which quickly becomes unsustainable. Add another layer of complexity, such as a multi-agent architecture interacting with live data in production, and disaster looms.

We believe the solution to this challenge is a dynamic, rule-based system. Each application must be able to interrogate the data source and determine access permissions in real time. It's as if the application asks each piece of data, "What About You?" before proceeding.
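
A minimal sketch of what such a check could look like, assuming a simple in-process rule list (the application names, resource identifiers, and rule contents below are illustrative assumptions; a production setup would more likely delegate the decision to a dedicated policy engine):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class AccessRequest:
    app_name: str   # which LLM application or agent is asking
    resource: str   # e.g. a dataset, table, or document identifier
    purpose: str    # why the data is needed for this particular call

# A rule maps a request to an allow/deny decision.
Rule = Callable[[AccessRequest], bool]

# Hypothetical rules; in practice these would live in a central policy store.
RULES: list[Rule] = [
    lambda r: not (r.resource == "customer_pii" and r.app_name != "support_bot"),
    lambda r: not (r.resource == "salary_data" and r.purpose != "payroll"),
]

def is_allowed(request: AccessRequest) -> bool:
    """Evaluate every rule at request time; deny if any rule objects."""
    return all(rule(request) for rule in RULES)

# The application "asks" before touching the data, on every call.
request = AccessRequest(app_name="marketing_agent",
                        resource="customer_pii",
                        purpose="campaign_analysis")
if is_allowed(request):
    pass  # fetch and use the data
else:
    pass  # refuse, log the attempt, or fall back to a redacted view
```

Because the decision is made per request rather than baked into pre-filtered copies of the data, a single rule change takes effect for every application and agent immediately.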