Not many people have had a front-row seat to the evolution of the internet and the web quite like Dr. Tom Leighton. The former professor of Applied Mathematics at MIT, he co-founded Akamai back in 1998, a time when the web was still getting its footing. Now, he is the CEO, and he sits at the intersection of two of the most important technologies shaping the world: the internet and AI.
In this conversation with ETCIO, he shares his perspectives on how AI is changing the rules for internet infrastructure, the move to a decentralized, agentic future, and the new security frontiers that come with that.
How is AI redefining the internet infrastructure of the future and the way the web operates?With the rapid adoption of AI apps and agents, we are seeing a real transformation in how the web works. It will become all agent based, a fundamental shift from what we have experienced so far.
Overall, such a transformation will create demand for massive compute cycles and inference capabilities. The fundamental shift in the architecture, other than scale, will be a far more distributed model for compute delivery and storage infrastructure. Performance will matter more than ever as the users’ interaction with the web through typing and clicking may be replaced by a voice and video model. A distributed model for providing compute will be key to support the experience that users will want and expect.
This means building AI capabilities at the edge, closer to the users, and not just in a few core data centers in a few big cities but across hundreds of cities. With over 4,000 pops across 700 cities, Akamai is well positioned to deliver the performance needed for the new AI agents and applications.As AI adoption accelerates, the models themselves have come under attack? What are some of the major threats the AI models face today?
In theory, the models and the agents come with guardrails. However, the models can be tricked into giving up sensitive data or revealing the source code. Through a mere query or a prompt, adversaries can inject false information into the models to cause inaccurate output. You can convince the model that it’s now living in an alternative universe and the old rules are now incorrect.
Considering the model is learning on the go and getting updated based on new queries and prompts, maintaining the quality of input data has to be a continuous process. We’re now also seeing sophisticated AI attacks to spoof the agent to jump over its guardrails. This has significantly expanded the existing attack surface while creating a whole new kind of attack surface that the CIOs and CISOs didn’t have to think about earlier.
Often the models will work in a way that they can be continuously updating based on the queries made. They’re learning as they go. The danger there, of course, is that if the prompts coming in from a user are malicious and the model is learning based on that or updating continuous learning, now you’ve got a problem because the model has ingested bad stuff. And, so it may do bad things now, not what it was supposed to do.
How is Akamai helping secure at the critical point where the data is getting ingested into the AI model?
To combat the threats to AI, organizations need a firewall that is specifically designed to protect the AI tools because beyond the traditional attacks against APIs, there are now also sophisticated AI attacks to spoof the agent to jump over its guardrails. As a result, a whole new attack surface has emerged.
To mitigate these risks, Akamai has developed a first-generation AI firewall that sits between the user and the inference engine to inspect queries for legitimacy. The AI firewall monitors responses to prevent data leakage. When a user submits a query, the firewall verifies its legitimacy before it is ingested. It then checks the model’s response to ensure no sensitive data is being exposed.
As threats evolve with the attackers continuing to up their game and as AI becomes increasingly central to the digital experiences, we will continue to improve the firewall to address the newer threats.
How do you assess your unique relationship with hyperscalers who are both your partners and competitors?
While hyperscalers are our competitors for their delivery, security, and compute services, many of them are also among our largest customers. A hyperscaler might, for instance, run a media or commerce business that relies on Akamai for delivery and security. And now, with compute in the mix, they care about cost efficiency and placing certain applications closer to users for better performance.
Akamai’s strength lies in its distributed presence across 700 cities, which allows us to position applications closer to users than traditional data centers can. Hyperscalers tend to concentrate on a small number of very large data centers, whereas we operate a large number of smaller points of presence, and that distinction matters enormously for latency-sensitive, AI-driven experiences.
The speed at which new AI models are being launched and the existing ones getting upgraded is unprecedented? Where do you see the race heading and who is winning?
The advancements in LLMs are moving so fast, that it’s difficult to call out the winners yet. Significant investments are going into building and advancing the LLMs. And, while the large companies are doing well and might well be the future big leaders, there’s also room for the smaller and more nimble players. These companies can drive innovations that help the technology perform better and more economically. With substantial capital available in the market, many of them have a good chance to grow and build newer and more advanced capabilities.
Several technology experts and visionaries have called for pausing, or at least, slowing down of AI development for some time because of ethical and safety concerns. What is your take on the ethics debate, and do you think it’s now unavoidable?
Technology is going to keep developing, and in general, that is a good thing. Almost always when there have been big technological advancements, they have benefited the society tremendously even as bad actors sought to exploit them. But, that reality is not a reason to stop advancements. Rather, it needs staying vigilant, understanding what the malicious use cases could be and working towards proactively slowing them down or preventing them.
What is your prediction for the technology landscape and vision for the agentic web a decade from now?
This could be a genuine watershed moment and the web could radically transform in this timeframe. You may no longer be pointing, clicking, or typing URLs. There won’t be many sites, and it will be mostly agents. The protocols to support this agentic web are already being developed and the transition could happen faster than many expect.






