On February 17th, the United States government formally declared that AI agents are infrastructure.
The Center for AI Standards and Innovation — CAISI, the newly rebranded successor to the Biden-era AI Safety Institute — announced its AI Agent Standards Initiative, a structured federal push to bring order to the fastest-growing category of software in the world. The announcement named OpenClaw — by that point approaching 190,000 GitHub stars, with 1,184 malicious skills having been stripped from its marketplace and a critical CVE patched just days earlier — as a primary example of why urgency was warranted.
That timing was not accidental.
What NIST is actually proposing
The initiative is built on three pillars.
The first is industry-led standards development and U.S. leadership in international standards bodies — ISO, ITU, and similar forums where technical specifications become global defaults. NIST wants American frameworks to be the ones that get adopted internationally, not Chinese or European ones.
The second is community-led open source protocol development — backing projects like the Model Context Protocol (MCP), now governed under the Linux Foundation after Anthropic donated it in December 2025, as the interoperability substrate for multi-agent systems. The ambition is a world where any agent can talk to any tool or service using common standards.
The third is AI agent security and identity research — the most technically substantive pillar. NIST’s NCCoE (National Cybersecurity Center of Excellence) published a concept paper on February 5th proposing a demonstration project to explore how identity and authorization practices can be applied to AI agents in enterprise settings. The approach draws on OAuth 2.0, policy-based access control, and Zero Trust architecture — existing enterprise security concepts applied to a category of software that did not meaningfully exist two years ago.
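The "service account" treatment the concept paper gestures at can be made concrete. Below is a minimal, hypothetical sketch of the pattern, not anything from the NIST paper itself: an agent is issued a short-lived, scope-limited token, and every request is re-verified in Zero Trust fashion. HMAC stands in for what a real OAuth 2.0 authorization server would do with asymmetric keys; all names are illustrative.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-only-secret"  # stand-in; a real issuer would hold asymmetric keys

def issue_agent_token(agent_id: str, scopes: list[str], ttl_s: int = 900) -> str:
    """Mint a short-lived, scope-limited token for an agent, OAuth-style.

    Hypothetical sketch: a real deployment would use an OAuth 2.0
    client-credentials grant against an enterprise authorization server.
    """
    claims = {"sub": agent_id, "scopes": scopes, "exp": time.time() + ttl_s}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def check_request(token: str, required_scope: str) -> bool:
    """Zero Trust style: every request re-validates signature, expiry, and scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]

token = issue_agent_token("calendar-agent-7", ["calendar:read"])
print(check_request(token, "calendar:read"))  # in-scope request passes
print(check_request(token, "email:send"))     # out-of-scope request is denied
```

The point of the sketch is the shape, not the crypto: the agent has a named identity, its permissions are enumerated rather than inherited from a user session, and nothing is trusted between requests.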
The public engagement timeline is specific: responses to CAISI’s Request for Information on AI Agent Security are due March 9. Responses to the NCCoE concept paper are due April 2. Starting in April, sector-specific listening sessions will focus on healthcare, finance, and education. The initiative will then coordinate with the National Science Foundation and other federal partners to produce guidance.
What the initiative does not include: any mandatory requirements, any enforcement mechanism, or any binding regulatory timeline. Like every NIST output, this will be voluntary — influential, but voluntary.
Why OpenClaw made it unavoidable
NIST does not typically name individual open source projects. This announcement did. The News9Live headline was not subtle: “US launches AI agent standards push as autonomous tools like OpenClaw spread fast.”
The CSO Online coverage went further, citing OpenClaw directly as an example of “a helpful agent which opens a door for attackers to roam unseen around a user’s applications and data.”
For anyone who has followed the last six weeks, the context is clear. OpenClaw’s security record entering February 2026 was genuinely alarming:
- CVE-2026-25253 — a CVSS 8.8 critical vulnerability enabling one-click remote code execution via cross-site WebSocket hijacking. Visiting a single malicious webpage is sufficient to have your authentication token stolen and your agent turned against you. Patched January 30th. Exploitation window: unknown.
- ClawHavoc — a coordinated supply chain campaign that placed 341 malicious skills (later expanding to 824+, then 1,184) in ClawHub, OpenClaw’s official skill marketplace. Infostealers, keyloggers, and reverse shells embedded in functional-looking code.
- 135,000+ exposed instances — SecurityScorecard found 40,214 publicly exposed OpenClaw deployments. A subsequent scan found 135,000. Of observed deployments, 63% were vulnerable; 12,812 were exploitable via remote code execution. Censys put the exposed instance count even higher.
- Six additional vulnerabilities — Endor Labs disclosed six new OpenClaw vulnerabilities on February 18th covering SSRF, missing authentication, and path traversal bugs, ranging from moderate to high severity.
- Corporate bans — Meta banned OpenClaw from corporate networks. Other companies followed without announcement.
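The CSWSH vector in the first bullet is worth unpacking, because it is a class of bug rather than a one-off: browsers send the page's Origin header with a WebSocket upgrade request but enforce nothing, so a local service must check it itself. A minimal sketch of the server-side check that closes the hole, illustrative of the bug class only and not OpenClaw's actual patch (the allowlist origins here are hypothetical):

```python
# Browsers attach the page's Origin header to WebSocket upgrade requests
# but apply no same-origin restriction -- the server must enforce it.
ALLOWED_ORIGINS = {"http://localhost:3000", "http://127.0.0.1:3000"}  # hypothetical UI origins

def accept_upgrade(headers: dict[str, str]) -> bool:
    """Reject WebSocket upgrades from any page not on the allowlist.

    Without this check, any webpage the user visits can open
    ws://localhost:<port>/ and drive the local agent with its credentials.
    """
    origin = headers.get("Origin", "")
    return origin in ALLOWED_ORIGINS

print(accept_upgrade({"Origin": "http://localhost:3000"}))  # legitimate local UI
print(accept_upgrade({"Origin": "https://evil.example"}))   # drive-by page is refused
print(accept_upgrade({}))                                   # non-browser client with no Origin is refused
```

Pairing the Origin check with a per-connection authentication token is the standard defense; either alone has failure modes.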
Microsoft’s security team published its assessment on February 19th in language that left little room for interpretation: “OpenClaw should be treated as untrusted code execution with persistent credentials. It is not appropriate to run on a standard personal or enterprise workstation.”
Cisco had called it “groundbreaking from a capability perspective but an absolute nightmare from a security perspective.” CrowdStrike had warned it “could be commandeered as a powerful AI backdoor agent capable of taking orders from adversaries.” Kaspersky had dubbed it “the biggest insider threat of 2026.” Sophos called the whole episode “a warning shot for enterprise AI security.”
When a piece of software accumulates that kind of citation list inside four weeks, federal standards bodies take notice.
The numbers behind the urgency
The regulatory intervention did not happen in a vacuum. The data on AI agent adoption — and the governance gap that trails behind it — is striking.
The Gravitee State of AI Agent Security 2026 report surveyed 919 practitioners across telecom, financial services, manufacturing, healthcare, and logistics. Headline findings:
- 3 million+ AI agents are now active in U.S. and UK organizations.
- 47.1% of those agents — roughly 1.5 million — are not actively monitored or secured.
- Only 14.4% of organizations have full IT and security approval for their entire agent fleet.
- 88% of organizations reported a confirmed or suspected security incident involving AI agents in the last year. In healthcare, that figure is 92.7%.
Gravitee CEO Rory Blundell described the 1.5 million unmonitored agents as “invisible risk” — a workforce “larger than the entire global employee count at Walmart” operating without oversight.
Microsoft’s own telemetry, published in its Cyber Pulse report, found that 80% of Fortune 500 companies are now running active AI agents — the majority built with low-code and no-code tools. Twenty-nine percent of employees are using unsanctioned AI agents. Only 47% of organizations have AI-specific security controls in place.
The governing framing from Microsoft: “AI agents are scaling faster than some companies can see them — and that visibility gap is a business risk.”
The gap NIST is stepping into is not theoretical. Millions of agents are already running in production with no identity, no authorization framework, and no monitoring. The question is whether a voluntary standards initiative can change that trajectory.
The skeptic case
Not everyone was impressed.
The most pointed critique came from Gary Phipps, head of customer success at Helmet Security, quoted in CSO Online: “From the time NIST announced the AI Risk Management Framework to its publication was roughly two years, during which the entire generative AI landscape was born, scaled, and began reshaping enterprise security.”
The implication is direct. OpenClaw went from GitHub repository to global security incident in under four months. The AI RMF — NIST’s last major AI output — took two years and emerged into a world that had largely already formed its own views.
Phipps added that NIST’s claim about “cementing US dominance at the technological frontier” was “a bold thing to say about an initiative whose first concrete deliverable is a listening session in April.” His summary: “Standards don’t create dominance: they follow it.”
The Hacker News reception was, characteristically, cold. Two threads were posted — both linking to the official NIST announcement — and both scored two points with zero comments. The community had more to say about OpenClaw’s security record in general (the ZeroLeaks security assessment PDF drew 61 points and 20 comments; the NanoClaw thread, 42 and 26), but the standards initiative itself landed with a quiet thud among developers.
One SC Media piece carried the most direct counter-thesis: “Don’t wait for NIST: Secure AI agents now or fall behind.” The argument was operational: enterprises waiting for formal guidance before deploying AI agent governance controls are already behind the attack surface. The winners in this cycle are deploying agents and securing them simultaneously, not sequencing compliance checklists after the standards body publishes.
There is also a political dimension worth naming. NIST’s Center for AI Standards and Innovation was not always called that. Until early 2025 it was the AI Safety Institute — a Biden-era creation with an explicit safety mandate. The Trump administration renamed it, deliberately dropping “safety” and substituting “standards and innovation.” OSTP Director Michael Kratsios described the rebranding as focusing on “what NIST is really good at,” framing the pivot away from safety evaluation toward standards-setting as a clarification of mission rather than a retreat from it.
Kratsios’s X post on the initiative’s announcement: “The future of AI is agentic, and America is leading the way to make it secure and interoperable.”
That is the frame the administration wants. Whether “interoperable” and “secure” can be separated from “safe” in a world where 135,000 OpenClaw instances are internet-exposed is a question the initiative does not directly answer.
The identity problem no one has solved
The most technically substantive part of the initiative — and the most relevant to OpenClaw — is the agent identity question.
RNWY, a company building what it calls “the trust layer for autonomous AI,” published a sharp analysis of the structural gap:
“There is no cryptographic signing of skills, no persistent publisher identity, and no chain of accountability from skill author to agent behavior. An OpenClaw agent talking to another agent on Moltbook has no way to verify that the other agent is who it claims to be — there is no mutual authentication protocol, no verifiable credentials, and no persistent reputation.”
This is not primarily a software bug. It is an architectural absence. OpenClaw was designed as a local, personal agent. Authentication was never part of the original scope. As the system evolved into something capable of agent-to-agent communication, marketplace skill distribution, and enterprise deployment, that absence became a structural liability.
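What the missing chain of accountability would look like can be sketched in a few lines. This is a hypothetical illustration, not a proposal from RNWY or OpenClaw: each publisher holds a key registered with the marketplace, each skill ships with a signature, and the runtime refuses unknown publishers or tampered code. HMAC stands in here for the asymmetric signatures (e.g. Ed25519) a real marketplace would use.

```python
import hashlib
import hmac

# Hypothetical publisher registry -- the "persistent publisher identity"
# the RNWY analysis says is absent from ClawHub.
PUBLISHER_KEYS = {"acme-tools": b"acme-demo-key"}

def sign_skill(publisher: str, skill_code: bytes) -> str:
    """Publisher signs the exact bytes of the skill at submission time."""
    return hmac.new(PUBLISHER_KEYS[publisher], skill_code, hashlib.sha256).hexdigest()

def verify_skill(publisher: str, skill_code: bytes, signature: str) -> bool:
    """Runtime check before loading a skill: known publisher, untampered code."""
    key = PUBLISHER_KEYS.get(publisher)
    if key is None:  # unknown publisher: no chain of accountability, refuse to load
        return False
    expected = hmac.new(key, skill_code, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

code = b"def summarize(text): ..."
sig = sign_skill("acme-tools", code)
print(verify_skill("acme-tools", code, sig))                 # intact skill verifies
print(verify_skill("acme-tools", code + b"#backdoor", sig))  # tampered skill fails
```

Nothing in this sketch is exotic; it is roughly what package registries like PyPI and npm have been retrofitting for years. The absence is a design-scope issue, not a research problem.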
The OSO blog put the core problem cleanly: “an AI agent that can really help you has to have real power, and anything with real power can be misused.” The design tradeoff is not a flaw — it is a consequence of utility. What you can do with an OpenClaw agent is precisely what makes it dangerous to run one carelessly.
Security researcher Simon Willison, who coined the term “prompt injection,” describes the architecture with his “lethal trifecta”: access to private data, exposure to untrusted content, ability to communicate externally. OpenClaw has all three by design. That is not an accident. That is the product.
Two approaches are emerging to fill the identity gap, and they come from different directions.
From the enterprise/government side: NIST’s NCCoE concept paper proposes OAuth 2.0 and policy-based access control as the foundation. Agents would be treated like service accounts — issued cryptographic credentials, governed by access policies, logged against those policies. The World Economic Forum issued a parallel call for a “Know Your Agent” standard with four requirements: establish identity, confirm permissions, maintain accountability, enable continuous monitoring. Hogan Lovells, the law firm, published a client alert treating NIST’s concept paper as an enterprise compliance planning opportunity.
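Stripped to code, the enterprise-side model is deny-by-default policy evaluation with every decision logged, which also happens to cover the WEF's "maintain accountability" and "enable continuous monitoring" requirements. A hypothetical sketch (agent names and actions are illustrative, not from the NCCoE paper):

```python
from dataclasses import dataclass, field

# Each agent is governed by an explicit policy; agents not in the table get nothing.
POLICIES = {"claims-triage-agent": {"records:read", "claims:annotate"}}

@dataclass
class AuditLog:
    """Accountability: an append-only record of every authorization decision."""
    entries: list[tuple[str, str, bool]] = field(default_factory=list)

    def record(self, agent: str, action: str, allowed: bool) -> None:
        self.entries.append((agent, action, allowed))

def authorize(agent: str, action: str, log: AuditLog) -> bool:
    """Policy-based access control: deny by default, log every call."""
    allowed = action in POLICIES.get(agent, set())
    log.record(agent, action, allowed)
    return allowed

log = AuditLog()
print(authorize("claims-triage-agent", "records:read", log))    # within policy
print(authorize("claims-triage-agent", "records:delete", log))  # outside policy: denied
print(len(log.entries))                                         # both decisions were logged
```

The design choice worth noticing is that denial is the default state; an agent missing from the policy table can do nothing, rather than everything.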
From the crypto/open-source side: ERC-8004, an Ethereum standard for AI agent identity backed by Coinbase, Google, and MetaMask, had 30,000+ agents registered as of January 2026. Clawlett, an OpenClaw wallet integration, implements ERC-8004 directly — giving agents on-chain financial credibility, verifiable reputation, and economic history. The vision is a decentralized identity layer where agents build persistent, cross-platform reputation through verifiable behavior rather than through a central registry.
These two approaches are not obviously compatible. One assumes enterprise infrastructure. The other assumes a blockchain. OpenClaw sits at the intersection with nothing, and the next major breach may determine which model the ecosystem converges on.
What already exists, and what NIST might be following
One thing worth clarifying: NIST is not arriving in a space with no standards.
The Agentic AI Foundation (AAIF), launched December 2025 under the Linux Foundation, brought together Anthropic (donating MCP), OpenAI (donating AGENTS.md), and Block (donating Goose), with Google, Microsoft, and AWS as supporters. AGENTS.md — a universal standard giving AI coding agents consistent project-specific guidance — has been adopted by 60,000+ open source projects. MCP is now the dominant interoperability protocol for multi-agent systems, adopted by OpenAI, Google, Microsoft, and AWS.
Industry is not waiting for NIST. The question is whether NIST-produced standards will be additive to what industry builds or redundant to it.
The AI RMF is instructive here. It took two years. When it published, enterprises adapted it into compliance checklists and mapped their existing risk processes to its framework language. It did not drive behavior so much as provide a common vocabulary for behavior that was already occurring. If the AI Agent Standards Initiative follows the same pattern, its guidance will land around 2028, codifying what the industry has already figured out — a long time from now in agent-years.
The Netizen blog made this point in the context of federal compliance: “For agencies operating under NIST 800-53, FISMA, or CMMC, agent skills introduce additional governance pressure. Supply Chain Risk Management explicitly addresses third-party software and distribution channels. If a skill marketplace does not provide transparency into submission vetting, version changes, and runtime behavior, agencies inherit documentation gaps.”
That framing matters. NIST standards are voluntary for the private sector. They are not voluntary for the federal government. Federal agencies cannot run OpenClaw on government hardware without authorization packages that do not currently exist. NIST producing agent-specific guidance makes that path shorter. That, more than any private sector influence, may be the initiative’s most concrete near-term impact.
What NIST can and cannot fix
What it can do:
Standards give security teams institutional cover. The most common barrier to AI agent governance is not technical — it is political. Security teams that want to impose logging, credential isolation, and monitoring on departmental agent deployments often lack the standing to require it. A NIST framework gives them that standing: “we are aligning with the federal standard.” That is not nothing.
Standards shape international norms. ISO adopts NIST frameworks. The EU AI Act draws on NIST language. If the AI Agent Standards Initiative produces a well-constructed identity and authorization model, it will influence regulation in jurisdictions that do have enforcement mechanisms — EU, UK, Canada — even if NIST itself cannot compel compliance.
Public comment processes surface real-world problems. The RFI due March 9th is not theater — NIST’s prior RFIs have surfaced specific technical issues that shaped final guidance. If the OpenClaw community, security researchers, and enterprise CISO offices file substantive responses, the resulting guidance will be better for it.
What it cannot do:
It cannot move faster than the threat. CVE-2026-25253 was patched in January. A listening session is scheduled for April. Standards guidance will emerge somewhere between 2027 and 2028. The ClawHavoc campaign was designed, executed, and partially contained before NIST held its first internal planning meeting on this initiative.
It cannot fix the design tradeoffs. OpenClaw’s power comes from the same properties that create its risk surface. A standard that requires agents to operate with minimal permissions will produce agents that are less useful. The tradeoff is real, and standards language cannot resolve it — it can only name it.
It cannot authenticate retroactively. The 135,000 exposed instances exist. The 1.5 million unmonitored enterprise agents exist. Standards for identity and authorization will apply to future deployments. The installed base is already out there.
The deeper question
NIST’s initiative is, at its core, a bet that the infrastructure analogy holds. Power grids have standards. Financial networks have standards. Internet protocols have standards. If AI agents become as embedded in daily economic activity as those systems — and the 3-million-agent number suggests that moment is closer than most expected — then standards become load-bearing infrastructure for trust.
The OpenClaw case is what happens when that infrastructure doesn’t exist. A single developer in Austria builds something genuinely useful, it spreads to 190,000 GitHub stars and millions of deployments faster than anyone can audit it, the attack surface explodes, and by the time institutions respond, the ecosystem is already shaped.
NIST is not going to prevent the next OpenClaw. It is trying to ensure that when the next one arrives, there is a framework for thinking about what questions to ask: Who authenticated this agent? What permissions does it actually need? Where is the log? Who can revoke it?
Those are not unreasonable questions. They are just arriving late.
The NIST public comment period for AI Agent Security closes March 9, 2026. The NCCoE concept paper on AI Agent Identity and Authorization accepts responses through April 2, 2026. Links to both can be found at nist.gov/caisi.
Sources: NIST AI Agent Standards Initiative · CSO Online · Gravitee State of AI Agent Security 2026 · Microsoft Security Blog (Feb 10) · Microsoft Security Blog (Feb 19) · RNWY Blog · Netizen Blog · CrowdStrike · SecurityWeek CVE-2026-25253 · OpenClaw Wikipedia