The Internet Needs a New Layer for AI Agents

We need a new layer for AI agents on the internet. That's not hype; it's an engineering reality we are racing toward faster than most people realize. The web we have today was built for humans clicking links and browsing content. That's not how AI agents operate. They need structured communication, dependable authentication, and machine-readable protocols that don't currently exist at scale.

I’ve been tracking this space for years and we’re reaching an inflection moment right now.

We are witnessing an explosion of autonomous AI systems. Companies are using agents for customer support, code development, research, and supply chain management. But these agents tend to work in silos. They can't consistently communicate with one another, authenticate identities, or negotiate tasks across platforms. The plumbing isn't there.

I’ll unpack below what “new layer” means in practice — the protocols, standards and infrastructure needed to make agent-to-agent communication function consistently across the open internet.

Why the Current Internet Falls Short for AI Agents

The web depends on protocols that are decades old. HTTP, HTML, and DNS work quite well for human users. But they were not built for autonomous software that makes decisions, carries out multi-step tasks, and coordinates with other systems.

That's the crux of the problem. When you visit a website, your browser renders HTML for your eyes. An AI agent doesn't render pages. It needs structured data, defined action endpoints, and permission frameworks. What it gets instead is web scraping, which is fragile, slow, and often a terms-of-service violation. That's how brittle this is. I've seen entire agent pipelines break because a site changed its layout.

Several architectural deficiencies make the present-day internet unsuited to agent-scale operations:

  • No universal identity scheme for agents. Agents cannot authenticate themselves to other agents or services.
  • No common task protocol. There is no standard way for agents to request, negotiate, and fulfill work across platforms.
  • No discovery mechanism. Agents can't find other agents or services without hard-coded integrations.
  • No trust framework. How does one agent validate the capabilities and permissions of another agent?
  • No value exchange or billing layer. Agents have no standard way to pay for services or negotiate prices on their own.

As a result, each company designs its own proprietary integration layer. That leads to fragmentation, much like the early internet before HTTP standardized web communication. And truthfully, it's tiring to see the same wheel reinvented time and again.

The internet requires a new layer to fill these key shortcomings for AI agents.

Tim Berners-Lee’s original web proposal was about people sharing information. What we need now is a similar vision for machine-to-machine agent communication. That’s a big ask, but it’s the correct ask.

The Emerging Protocols That Define This New Layer

Many organizations are already building pieces of this agent infrastructure. No single standard has emerged as dominant yet, although distinct patterns are forming. These protocols are the first building blocks of the new layer the internet needs for AI agents.

A prime example is the Model Context Protocol (MCP). Anthropic open-sourced MCP as a standard for how AI models communicate with external data sources and tools. Think of MCP as a USB-C port for AI: rather than building a bespoke integration for each tool, you get a universal connector. It defines how agents request context, call tools, and receive structured responses. I've set up a couple of MCP servers myself, and the developer experience is honestly very good compared to what existed before.
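To make that concrete, here is a minimal sketch of what exposing a single tool through MCP can look like, assuming the FastMCP interface from the official Python SDK; the server name and the check_stock tool are hypothetical stand-ins for whatever your service actually does.

```python
# Minimal MCP server sketch (assumes the official `mcp` Python SDK's FastMCP).
# The server name and the example tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")

@mcp.tool()
def check_stock(sku: str) -> dict:
    """Return stock information for a product SKU (stubbed data)."""
    # A real implementation would query your database or ERP system here.
    return {"sku": sku, "in_stock": True, "quantity": 42}

if __name__ == "__main__":
    # Serves the tool over stdio so MCP-compatible clients can discover and call it.
    mcp.run()
```

The appealing part is how little plumbing you write: tool discovery, input schemas, and structured responses come from the protocol rather than from bespoke glue code.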

Google's Agent-to-Agent (A2A) protocol tackles a different part of the puzzle. Where MCP links agents with tools, A2A focuses on agent-to-agent communication. It lets agents discover what other agents can do, negotiate tasks, and collaborate on complicated workflows. Google built A2A as a complement to MCP, not a rival, which is exactly the right impulse.

OpenAPI specs already provide machine-readable API descriptions, and they are evolving to better support agent use cases. Agents can read an OpenAPI spec to learn what an API does, what parameters it takes, and what response to expect.
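As a rough illustration, here is how an agent-side component might enumerate what an API offers by reading its OpenAPI document; the spec URL is a placeholder and the snippet assumes the spec is served as JSON.

```python
# Sketch: list the operations an API exposes by reading its OpenAPI spec.
# The URL is a placeholder; assumes the spec is available as JSON.
import requests

SPEC_URL = "https://api.example.com/openapi.json"  # hypothetical endpoint

spec = requests.get(SPEC_URL, timeout=10).json()

for path, methods in spec.get("paths", {}).items():
    for method, op in methods.items():
        # Skip path-level keys such as "parameters" that are not HTTP methods.
        if method.lower() not in {"get", "post", "put", "patch", "delete"}:
            continue
        summary = op.get("summary") or op.get("operationId", "")
        print(f"{method.upper():6} {path}  {summary}")
```

An agent can feed exactly this kind of inventory into its planning step to decide which endpoint to call for a given task.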

How do these protocols compare?

| Protocol | Primary Function | Scope | Developer | Status |
| --- | --- | --- | --- | --- |
| MCP | Agent-to-tool connection | Tool integration | Anthropic | Open standard, growing adoption |
| A2A | Agent-to-agent communication | Multi-agent coordination | Google | Early stage, open specification |
| OpenAPI | API description | Service documentation | OpenAPI Initiative | Mature, widely adopted |
| ActivityPub | Federated social messaging | Decentralized communication | W3C | Mature, limited agent use |
| JSON-LD | Linked data format | Semantic web data | W3C | Mature, foundational |

Comparable patterns can be found in the W3C Web of Things architecture, which describes how IoT devices discover each other and communicate. AI agents need the same kind of discovery and interaction standards, and that IoT playbook is more relevant than most people give it credit for.

No single protocol will deliver everything the internet's next layer for AI agents needs. What we need is a coordinated stack that draws from all of these. The hard part is getting rival organizations to actually coordinate, and historically that's tougher than the engineering itself.

Interoperability Frameworks: Making Agents Work Across Platforms

Protocols alone are not sufficient. You also need interoperability frameworks that let agents built with different tools genuinely cooperate.

This is where it gets practically difficult.

Consider how things stand today. An agent built with LangChain cannot natively communicate with an agent built with CrewAI or AutoGen. Each framework has its own abstractions, memory systems, and execution patterns. To make agents work across platforms, you need translation layers. And translation layers introduce failure points.

What Interoperability Really Means:

  1. Shared capability descriptions. Every agent has to publish what it can do in a standard format. Think of it as a resume other agents can read programmatically (see the sketch after this list).
  2. Standard message formats. Agents must agree on how to format requests, responses, and error messages.
  3. Shared state management. When agents collaborate on a task, they need a common view of its progress and status.
  4. Consistent error handling. Agents must be able to report failures in predictable ways so that other agents can adapt.
  5. Version negotiation. Protocols evolve over time. Agents must agree on which version of a protocol to use for a particular interaction.
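To ground the first item, here is one plausible shape for a machine-readable capability description. The field names are invented for illustration; they are not drawn from MCP, A2A, or any ratified standard.

```python
# Illustrative only: a capability card an agent might publish so peers can
# discover what it does. Field names are hypothetical, not from a standard.
capability_card = {
    "agent_id": "did:example:agent-9b2f",     # placeholder identifier
    "protocol_versions": ["1.0", "1.1"],       # enables version negotiation
    "capabilities": [
        {
            "name": "schedule_meeting",
            "description": "Creates a calendar event after checking availability.",
            "inputs": {"attendees": "list[str]", "duration_minutes": "int"},
            "outputs": {"event_id": "str", "start_time": "iso8601"},
            "error_codes": ["CALENDAR_UNAVAILABLE", "PERMISSION_DENIED"],
        }
    ],
}
```

Notice how a single document touches several of the items above: capabilities, message shapes, an error vocabulary, and supported protocol versions.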

The enterprise software world has handled comparable problems before. SOAP, REST, and GraphQL each standardized aspects of service communication. The new layer for AI agents should learn from those precedents, notably the part where REST prevailed because it was simpler than SOAP, not more powerful.

Semantic interoperability matters just as much. Two agents might both understand "schedule a meeting" yet interpret it in radically different ways. One will check calendar availability first; another will simply create an event. When I first started testing multi-agent systems, this surprised me. Some failure modes aren't visible until something fails silently. Shared ontologies and task definitions can help close these gaps, but we are still in the early days.

Interoperability must also work across corporate boundaries. An agent at Company A should be able to engage with an agent at Company B safely. This calls for agreed trust boundaries, data-sharing rules, and liability terms. And that last bit, liability, is where the lawyers start to make their money.

Infrastructure Requirements: Identity, Trust, and Discovery

The internet needs a new layer for AI agents, and that requires considerable infrastructure investment. Three pillars come to mind: identity, trust, and discovery.

Agent Identity

Every agent needs a verifiable identity. The vast majority of agents today authenticate with API credentials tied to human users. That's a workaround, not a solution, and it falls apart badly at scale. Agents need their own identity credentials that specify:

  • Who created the agent
  • What permissions it has
  • What organization it belongs to
  • What it can do
  • When the credentials expire

One interesting approach is W3C Decentralized Identifiers (DIDs). DIDs let entities generate self-sovereign identities without a central authority, so agents could use them to prove their identity to other agents and services. Fair warning: it's a difficult implementation, but the idea is sound.
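For a sense of what this looks like, here is a trimmed DID document in the general shape the W3C DID Core specification describes, written as a Python dict; the identifier and key material are fabricated placeholders, not real credentials.

```python
# A trimmed DID document, roughly in the shape described by W3C DID Core.
# The identifier and public key value are placeholders.
did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": "did:example:agent-9b2f",
    "verificationMethod": [
        {
            "id": "did:example:agent-9b2f#key-1",
            "type": "Ed25519VerificationKey2020",
            "controller": "did:example:agent-9b2f",
            "publicKeyMultibase": "z6Mk...placeholder",
        }
    ],
    "authentication": ["did:example:agent-9b2f#key-1"],
}
```

Another agent that trusts the DID method can resolve this document and use the listed key to verify signatures, which is exactly the hook the next section builds on.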

Reputation and Trust

Identity alone is not enough. You also need trust mechanisms. How does an agent decide whether to share data with another agent? Crucially, trust in agents is not the same as trust in humans. Agents require:

  • Cryptographic proofs of capabilities
  • Verifiable history of execution
  • Reputation scores based on previous performance
  • Backing from a known organization
  • Revocation mechanisms for when trust is broken

Without this layer, you're letting strangers into your systems on the honor system.
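As a sketch of the first item on that list, here is how a cryptographic proof of capability could work in principle, using Ed25519 signatures from the cryptography package; the claim format itself is invented for illustration.

```python
# Sketch: an agent signs a capability claim; a peer verifies it before trusting it.
# Uses the `cryptography` package; the claim fields are hypothetical.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuing side: the agent's operator signs a claim about what the agent may do.
private_key = Ed25519PrivateKey.generate()
claim = json.dumps({
    "agent": "did:example:agent-9b2f",
    "capability": "read:inventory",
    "expires": "2026-01-01T00:00:00Z",
}).encode()
signature = private_key.sign(claim)

# Verifying side: a peer checks the signature against the agent's public key
# (obtained out of band, e.g. from its DID document) before acting on the claim.
public_key = private_key.public_key()
try:
    public_key.verify(signature, claim)
    print("capability claim verified")
except InvalidSignature:
    print("do not trust this agent")
```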

Discovery Service

Agents need to find one another. Today, the approach is to hard-code API endpoints or rely on human-configured integrations. A proper discovery layer would let agents:

  • Find agents with specific capabilities
  • Compare price and performance benchmarks
  • Negotiate terms of service automatically
  • Set up communication channels dynamically

Think DNS, but for agent capabilities. An agent discovery service matches task descriptions to capable agents rather than domain names to IP addresses. This demands discovery infrastructure that is fast and secure, and this particular piece doesn't yet exist in any mature form.
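Since nothing mature exists here yet, the following is a deliberately naive sketch of what capability-based lookup might do: match a requested capability against a registry, then rank by advertised price within a latency budget. Every name and number is invented; a real registry would be distributed, authenticated, and much faster.

```python
# Naive discovery sketch: an in-memory registry of agents and their capabilities.
# All identifiers, prices, and latencies are illustrative.
registry = [
    {"agent": "did:example:translator-a", "capability": "translate:en-fr",
     "price_per_call": 0.002, "p95_latency_ms": 180},
    {"agent": "did:example:translator-b", "capability": "translate:en-fr",
     "price_per_call": 0.001, "p95_latency_ms": 450},
    {"agent": "did:example:scheduler-x", "capability": "schedule_meeting",
     "price_per_call": 0.010, "p95_latency_ms": 300},
]

def discover(capability: str, max_latency_ms: int = 500) -> list[dict]:
    """Return agents offering the capability, cheapest first, within a latency budget."""
    matches = [a for a in registry
               if a["capability"] == capability
               and a["p95_latency_ms"] <= max_latency_ms]
    return sorted(matches, key=lambda a: a["price_per_call"])

print(discover("translate:en-fr"))
```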

Real-World Challenges Blocking Adoption

The momentum is there, but there are big hurdles ahead. Building this new layer the internet needs for AI agents won't be easy, and I'd be doing you a disservice if I skipped over the hard parts.

The biggest risk is standards fragmentation. Different firms are pushing rival standards: Google has A2A, Anthropic has MCP, and Microsoft is behind AutoGen's protocols. Without coordination, we'll end up with incompatible ecosystems. Still, the early signs of cooperation are promising. Google built A2A to complement MCP, not supplant it. That's a better starting point than we had going into the browser wars.

The more autonomous agents you have, the more security risks you have.

When people browse the internet, they make judgment calls about dubious requests. Agents might not. Malicious actors could exploit protocol flaws to leak sensitive data, inject malicious instructions into multi-agent workflows, impersonate legitimate agents, or mount denial-of-service attacks against agent infrastructure. Security has to be fundamental to the design, not an afterthought. The OWASP Foundation has started working on AI-specific security issues, but agent-to-agent security frameworks remain immature. This is the space I'll be watching most intently over the next 18 months.

Economic model uncertainty is another significant challenge. Who pays when agents negotiate? How do you handle micropayments between agents performing small tasks? Traditional payment systems were not built for millions of tiny automated transactions, and the bookkeeping gets messy fast.

Another layer of complexity is created by regulatory uncertainty. In particular:

  • Who is liable when an agent makes a harmful decision?
  • How does data privacy legislation apply to data sharing between agents?
  • Can agents make binding agreements on behalf of organizations?
  • How do you audit agent behaviour across distributed systems?

And then there's latency and performance. A few seconds of load time is acceptable for human users. Agents in real-time workflows demand sub-second response times, sometimes well under 100ms. The infrastructure must support massive concurrent agent interactions with no loss in performance. That's a challenging engineering problem on its own, and it gets much tougher once you add security and identity verification.

You can't solve only the technical challenges and ignore security, economics, and regulation. The internet requires a new layer for AI agents that addresses all of these difficulties together, and that's a coordination problem as much as a technical one.

What Developers and Organizations Should Do Now

The fact is, you don't need to wait for perfect standards. There are concrete steps you can take now if you're building toward the new agent infrastructure layer. And frankly, waiting for consensus is a reliable way to get left behind.

For developers:

  • Adopt MCP today. It is the most advanced agent protocol with real adoption. Of the integration approaches I've tested, MCP consistently has the most pleasant developer experience. Build MCP servers for your service; it prepares you for the agent economy regardless of what other standards emerge.
  • Design agent-ready APIs. Add structured error messages, capability descriptions, and machine-readable documentation. Start with the OpenAPI Specification.
  • Implement proper authentication. Use OAuth 2.0 flows that support agent credentialing. Never share API keys between agents; it's a security nightmare waiting to happen.
  • Build idempotent operations. Agents will retry failed requests, so your services should handle duplicate requests gracefully (see the sketch after this list).
  • Test against multiple agent frameworks. Don't optimize for just one. Verify your integrations with LangChain, CrewAI, and AutoGen to ensure broad compatibility.
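On the idempotency point, here is a minimal sketch of deduplicating retried requests with an idempotency key, using Flask; the header name and in-memory store are illustrative, and a production version would persist keys with an expiry.

```python
# Sketch: dedupe agent retries with an idempotency key. The in-memory store is
# illustrative; use a persistent store with TTLs in production.
from flask import Flask, jsonify, request

app = Flask(__name__)
seen_results: dict[str, dict] = {}  # idempotency key -> previously returned result

@app.post("/orders")
def create_order():
    key = request.headers.get("Idempotency-Key")
    if key and key in seen_results:
        # The agent retried; return the original result instead of acting twice.
        return jsonify(seen_results[key]), 200

    order = {"order_id": "ord_123", "status": "created"}  # stand-in for real work
    if key:
        seen_results[key] = order
    return jsonify(order), 201
```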

For businesses:

  • Set agent governance policies. Define what your agents can and can't do before they are deployed, not after something goes wrong.
  • Invest in observability. You need to monitor agent behavior, trace inter-agent communications, and audit decisions. Get that instrumentation in place before you scale.
  • Participate in standards bodies. Help define agent protocols in working groups. Your use cases matter, and the people who show up to these meetings are the ones shaping the outcomes.
  • Start small with internal agents. Roll out agent-to-agent communication inside your company before you expose anything externally.
  • Budget for infrastructure changes. Agent traffic patterns are very different from human traffic patterns. Think roughly 10x the API call volume with tighter latency requirements.

There are also things it's too early for. Don't bet everything on a single protocol. Don't build complicated multi-agent systems without sufficient oversight. And don't put external-facing agents into production without security reviews. That last one is the mistake I see most often right now.

Conclusion

The internet requires a new layer for AI agents, and this is no longer a theoretical issue. It's an active engineering challenge with real solutions emerging. Protocols like MCP and A2A are leading the way. Identity frameworks such as DIDs offer promising foundations. Organizations around the world are recognizing that agent infrastructure is a competitive necessity, not a nice-to-have.

But we're still in the early stages. Some protocols will gain adoption, some will fade, and standards will keep changing. The point is to get involved now rather than wait for things to settle.

Crucially, this new layer must balance openness with security, standardization with flexibility, and innovation with governance. The companies and developers building it will determine how AI functions for decades to come. The choices made over the next two or three years will be very difficult to undo.

Your next steps are clear. Start supporting MCP in your services now. Design APIs that agents can consume. Set up governance mechanisms for the agents in your organization. And stay involved with the standards communities defining this new infrastructure layer. The next chapter of the web isn't about better websites; it's about better protocols for thinking machines, and that chapter is being written right now.

FAQ

What does “new layer for AI agents” actually mean?

It refers to a set of protocols, standards, and infrastructure that sit on top of the existing internet. Specifically, this layer handles agent identity, discovery, communication, and trust. Think of it like how HTTP added a layer for web browsing on top of TCP/IP. The internet needs a new layer for AI agents that serves a similar foundational role for autonomous software.

How is MCP different from regular APIs?

Regular APIs require custom integration code for each service. MCP provides a universal standard for connecting AI agents to tools and data sources. It’s like the difference between having a different charger for every phone versus one USB-C standard. MCP defines how agents discover capabilities, request actions, and receive structured responses consistently across services.

Will one protocol win, or will multiple coexist?

Multiple protocols will likely coexist, each handling different aspects of agent communication. MCP focuses on agent-to-tool connections. A2A handles agent-to-agent coordination. OpenAPI describes service capabilities. Similarly to how the web uses HTTP, DNS, TLS, and other protocols together, the internet needs a new layer for AI agents built from complementary standards.

What are the biggest security risks with AI agent infrastructure?

The primary risks include agent impersonation, prompt injection across agent chains, unauthorized data access, and cascading failures in multi-agent systems. Additionally, malicious agents could exploit trust relationships to access sensitive resources. Solid identity verification, encrypted communication, and behavior monitoring are essential safeguards.

How soon will this new agent layer be widely adopted?

Early adoption is happening now through MCP and similar protocols. Broad standardization will likely take three to five years. Nevertheless, developers should start building with these protocols today. Early movers will have significant advantages as the ecosystem matures. The internet needs a new layer for AI agents, and the foundation is being poured right now.

Do small companies need to worry about agent infrastructure?

Yes, although the urgency varies. If you offer APIs or digital services, agents will eventually consume them — and probably sooner than you expect. Preparing your services for agent interaction now is straightforward and worthwhile. Furthermore, small companies can gain real competitive advantages by being early adopters. Start with basic steps like adding structured API documentation and supporting MCP connections.
