In April 2026, a bombshell dropped in the AI world: Hermes Agent, a star project from the well-known Silicon Valley AI laboratory Nous Research, was accused of “architectural-level plagiarism” by a little-known Chinese team called EvoMap. The “self-evolving agent” had accumulated nearly 100,000 GitHub stars in less than two months since its launch, and major Chinese tech companies were rushing to release one-click deployment solutions for it; yet at the very peak of its fame, it plunged into a credibility crisis. This is not merely a cross-Pacific intellectual property dispute; it functions more like a mirror, reflecting the fragility and anxiety of the open-source community in the AI era. When the cost of “AI-laundered code” approaches zero, and when self-evolving agents become the next technological high ground, we are compelled to ask: how much longer can the open-source spirit hold?
Technical Analysis: What Made Hermes Agent a Global Phenomenon?
Before diving into the controversy, it is necessary to first understand the technical foundations of Hermes Agent. The fact that this project went viral in such a short time was no accident.
The Base Model: The Accumulated Depth of the Hermes Series
The underlying foundation of Hermes Agent is built upon the Hermes large model family developed in-house by Nous Research. From the original Hermes 2 to the latest Hermes 4, the series has consistently been based on deep fine-tuning of open-source base models such as Llama and Qwen. Take Hermes 3 as an example: it is a frontier-level full-parameter fine-tuned version of the Llama-3.1 405B base model, focused on enhancing the model’s instruction-following, function-calling, structured output, and multi-turn conversation capabilities.
Hermes 4 goes a step further, introducing a “hybrid reasoning” architecture — the model can dynamically switch between standard responses and explicit reasoning via a think tag, achieving more than 90% accuracy on the MATH-500 benchmark and surpassing 50% on LiveCodeBench. This reasoning capability of “being fast when speed is needed and going deep when depth is required” provides a solid intellectual foundation for the agent’s autonomous decision-making.
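To make this concrete: hybrid-reasoning models of this kind typically emit their chain of thought inside explicit tags in the raw output, and the calling code decides what to surface. The sketch below assumes a <think>...</think> convention and invented function names; it illustrates the pattern, not Hermes 4’s documented API.

```python
import re

# Assumption: the model wraps explicit reasoning in <think>...</think> tags
# when its deep-reasoning mode is active; otherwise it answers directly.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(raw_response: str) -> tuple[str, str]:
    """Separate the hidden reasoning trace from the user-facing answer."""
    match = THINK_RE.search(raw_response)
    reasoning = match.group(1).strip() if match else ""
    answer = THINK_RE.sub("", raw_response).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>500 = 4 * 125, so the factorization is 2^2 * 5^3.</think>"
    "500 factors as 2^2 x 5^3."
)
print(answer)     # the user sees only the final answer
print(reasoning)  # the agent can log or reuse the reasoning trace
```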
Agent Architecture: From “Able to Talk” to “Able to Grow”
Unlike early ChatGPT plugins or simple tool-calling systems, the core selling point of Hermes Agent is “Self-Evolution.” Its technical architecture can be divided into three layers.
The model layer, built on the Hermes model family described above, provides language understanding and generation. The agent layer handles task planning, tool invocation, memory management, and error recovery; this is the most complex part of Hermes Agent and the focal point of the controversy. The interface layer provides a unified API gateway with plug-and-play integration for mainstream platforms such as Telegram, Discord, and Slack, and is compatible with MCP (Model Context Protocol).
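A deliberately simplified sketch of how these three layers might relate is shown below. The class names, the single run method, and the stubbed model call are all invented for illustration; they are not taken from the Hermes Agent codebase.

```python
from dataclasses import dataclass, field
from typing import Callable

class ModelLayer:
    """Model layer: wraps whichever Hermes checkpoint is deployed (stubbed here)."""
    def complete(self, prompt: str) -> str:
        return f"[model output for: {prompt}]"  # stand-in for a real model call

@dataclass
class AgentLayer:
    """Agent layer: task planning, tool invocation, memory, error recovery."""
    model: ModelLayer
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)
    memory: list[str] = field(default_factory=list)

    def run(self, task: str) -> str:
        plan = self.model.complete(f"Plan the steps for: {task}")
        self.memory.append(plan)  # remember what was decided for later turns
        tool_results = [tool(plan) for tool in self.tools.values()]  # naive dispatch
        return self.model.complete(f"Summarize: {plan} with results {tool_results}")

class InterfaceLayer:
    """Interface layer: one gateway fronting Telegram, Discord, Slack, MCP, etc."""
    def __init__(self, agent: AgentLayer):
        self.agent = agent

    def handle(self, channel: str, message: str) -> str:
        # channel-specific parsing and auth omitted; every channel funnels into run()
        return self.agent.run(message)

gateway = InterfaceLayer(AgentLayer(model=ModelLayer()))
print(gateway.handle("telegram", "summarize yesterday's error logs"))
```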
But what truly caught developers’ attention was its “memory-evolution” closed loop. Traditional large-model calls start from zero on every invocation, as if amnesiac; Hermes Agent instead ships with a cross-session persistent memory system that filters and compresses key information, accumulating user preferences and project knowledge across conversations. This means the agent comes to “understand you” better over time, gradually evolving from a newly met intern into a well-attuned long-term collaborator.
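What such a memory loop can look like in miniature is sketched below. The file format, threshold, and scoring rule are invented for the example and are not Hermes Agent’s actual implementation.

```python
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.json")  # hypothetical on-disk store

def load_memory() -> list[dict]:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(memories: list[dict], text: str, importance: float) -> None:
    """Filter before storing: only information above a threshold survives the session."""
    if importance >= 0.5:  # invented threshold, purely illustrative
        memories.append({"text": text, "importance": importance})

def compress(memories: list[dict], keep: int = 50) -> list[dict]:
    """Keep only the most important entries so the store does not grow without bound."""
    return sorted(memories, key=lambda m: m["importance"], reverse=True)[:keep]

def save_memory(memories: list[dict]) -> None:
    MEMORY_FILE.write_text(json.dumps(compress(memories), indent=2))

# One "session": load what previous sessions learned, add to it, persist again.
memories = load_memory()
remember(memories, "User prefers concise answers with code samples.", importance=0.9)
remember(memories, "Small talk about the weather.", importance=0.1)  # filtered out
save_memory(memories)
```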
Lightweight Design and Open-Source Strategy
In terms of architectural design, Hermes Agent deliberately sidesteps the layers of abstraction found in heavyweight frameworks such as LangChain, pursuing an “out-of-the-box” developer experience. Getting from installation to deployment takes only a few minutes, with production-ready essentials such as error handling, retry mechanisms, and logging built in.
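Those essentials are typically small, reusable pieces when they are designed in from the start. A generic retry-with-logging helper of the kind the framework is said to bundle might look like the sketch below; it is illustrative, not code from the project.

```python
import logging
import time
from typing import Callable, TypeVar

T = TypeVar("T")
log = logging.getLogger("agent")

def with_retries(fn: Callable[[], T], attempts: int = 3, backoff: float = 1.0) -> T:
    """Run fn, retrying on failure with exponential backoff and logging each attempt."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:  # in practice, catch narrower exception types
            log.warning("attempt %d/%d failed: %s", attempt, attempts, exc)
            if attempt == attempts:
                raise
            time.sleep(backoff * 2 ** (attempt - 1))
```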
More importantly, it is fully open-source and supports local private deployment. At a time when data privacy is increasingly sensitive, “data that never leaves your own infrastructure” has become a key selling point for enterprise-grade applications. This is precisely why Chinese cloud vendors such as Alibaba Cloud and Tencent Cloud moved swiftly to offer cloud deployment solutions for it.
From a technical standpoint, Hermes Agent genuinely represents a pivotal step in AI’s evolution from a “conversational tool” to a “long-term execution system.” But it is precisely this “self-evolution” capability — its proudest feature — that has pushed it into the eye of the plagiarism storm.
The Plagiarism Controversy: An “Architectural-Level Code Laundering” Across the Pacific
The Outbreak: A Chinese Team’s Accusation
On April 15, 2026, the Shenzhen AI team EvoMap published a lengthy post on the X platform accusing the core “self-evolution” mechanism of Hermes Agent of being “highly similar” to their Evolver engine — which they had open-sourced 36 days earlier — in terms of overall architecture, execution flow, and the design of key modules.
EvoMap is not an unknown quantity. The team of fewer than 20 people consists mostly of members born in the 1990s and 2000s. Founder Zhang Haoyang, a technologist born in the mid-1990s, became China’s youngest Unity developer at 14, started his first venture at 17, and later served as a technical planner on Tencent’s game Peacekeeper Elite. The Evolver engine was open-sourced on February 1, 2026 under the highly permissive MIT license; shortly after going live it topped the ClawHub trending charts and accumulated thousands of GitHub stars.
In their technical report, EvoMap noted that building Evolver had taken them months of work and countless all-nighters, while the Hermes Agent team, backed by far greater resources, had “reinvented” it in little more than a month.
The Core Evidence: More Than Just “Great Minds Think Alike”
EvoMap’s accusations are not without foundation. The technical comparison report they published lays out its core evidence in escalating order of severity, attempting to show that this is not coincidental technical convergence but systematic “architectural-level code laundering.”
First is the “translation-level” correspondence in the main loop. The most striking piece of evidence is the consistency between the two projects’ core evolution loops. Although Evolver is written in Node.js and Hermes Agent in Python, the two show a step-for-step correspondence in the core self-evolution flow that transcends programming language. EvoMap likens this cross-stack structural isomorphism to “one person writing a ten-step recipe in Chinese, another translating it into English, and then claiming to have independently invented a new dish.”
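Neither project’s source is reproduced here; the loop below is a purely hypothetical rendering of the kind of step sequence the report describes, with every name invented for illustration. The point of the recipe analogy is that a Node.js version with different identifiers would map onto this same shape step for step.

```python
from typing import Callable

# Every name below is hypothetical; only the step-for-step shape of the loop matters.
def evolution_cycle(
    plan: Callable[[str], str],
    execute: Callable[[str], str],
    evaluate: Callable[[str, str], float],
    reflect: Callable[[str, str, float], str],
    store: Callable[[str], None],
    task: str,
    threshold: float = 0.8,
    max_rounds: int = 5,
) -> str:
    result = ""
    for _ in range(max_rounds):
        p = plan(task)                       # 1. decompose the task
        result = execute(p)                  # 2. act with the available tools
        score = evaluate(result, task)       # 3. judge the outcome
        lesson = reflect(p, result, score)   # 4. distill what was learned
        store(lesson)                        # 5. persist the lesson for next runs
        if score >= threshold:               # 6. stop once the result is good enough
            break
        task = f"{task}\nLesson: {lesson}"   # 7. otherwise refine the task and loop
    return result

# Toy usage: every piece is a stand-in callable.
print(evolution_cycle(
    plan=lambda t: f"plan({t})",
    execute=lambda p: f"result of {p}",
    evaluate=lambda r, t: 0.9,
    reflect=lambda p, r, s: "worked on the first try",
    store=lambda lesson: None,
    task="triage the failing test",
))
```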
Second is the systematic replacement of core terminology. If the matching flow is the copied skeleton, the terminology swap is viewed as the specific technique of “laundering.” EvoMap listed more than ten pairs of corresponding core concepts. The community has called this renaming of everything while changing nothing a textbook case of “AI code laundering”: using a large model to fully absorb the original architecture, rewrite all the code, and replace every name, producing a “new” project with extremely low textual similarity to the original but identical structure and logic.
Third, and more important still, is the alleged transplantation of the three-layer memory system. An AI agent’s evolutionary capability depends heavily on its memory system. Evolver was designed with a three-layer memory architecture of factual memory, procedural memory, and search memory, considered the core mechanism behind its “gets smarter the more you use it” capability. Hermes Agent’s memory architecture is alleged to be identical. Somewhat ironically, Hermes Agent’s documentation generously cites other academic work such as Stanford’s DSPy and Berkeley’s GEPA, yet makes no mention of Evolver, the project whose architecture it most closely resembles.
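Evolver’s source is not quoted in the report, so the class below is only a hypothetical illustration of what separating factual, procedural, and search memory can look like, with deliberately naive retrieval; none of the names are taken from either project.

```python
from dataclasses import dataclass, field

@dataclass
class ThreeLayerMemory:
    """Hypothetical split into factual, procedural, and search memory."""
    facts: dict[str, str] = field(default_factory=dict)             # stable knowledge about the user or project
    procedures: dict[str, list[str]] = field(default_factory=dict)  # learned step-by-step skills
    search_cache: dict[str, str] = field(default_factory=dict)      # previously retrieved external results

    def recall(self, query: str) -> list[str]:
        """Naive retrieval: check each layer in turn for keys mentioning the query."""
        hits = [v for k, v in self.facts.items() if query in k]
        hits += [" -> ".join(steps) for k, steps in self.procedures.items() if query in k]
        hits += [v for k, v in self.search_cache.items() if query in k]
        return hits

memory = ThreeLayerMemory()
memory.facts["preferred language"] = "User prefers Python with type hints."
memory.procedures["deploy service"] = ["run tests", "build image", "push to registry"]
print(memory.recall("deploy"))  # ['run tests -> build image -> push to registry']
```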
The Timeline Mystery: A 39-Day “Magic Window”
Another central basis of EvoMap’s accusations is an extraordinarily short time window:
February 1, 2026: EvoMap open-sources the Evolver engine, publicly releasing the core GEP design.
February 25, 2026: The Hermes Agent main repository is publicly released for the first time, having previously been kept private for an extended period.
March 9, 2026: Nous Research creates a dedicated independent repository named hermes-agent-self-evolution specifically for Hermes Agent.
March 12, 2026: Hermes Agent v0.2.0 is released, with the “skills ecosystem” and self-evolution functionality fully online.
From the open-sourcing of Evolver to the launch of Hermes Agent’s self-evolution module, the interval was only 39 days. EvoMap argues that, given their team spent months refining Evolver, the probability of someone independently arriving at such a highly isomorphic and complex architecture in so short a window is very low.
The Deeper Concern: When AI Learns to “Launder Code,” Where Does the Open-Source World Go?
What is most disturbing about this incident is not traditional copy-paste plagiarism, but what may be an entirely new paradigm of infringement: AI code laundering, in which the cost of reproducing someone else’s design approaches zero and disregard for open-source provenance risks becoming the norm.
In the era of AI-assisted programming, the human in charge may have no idea that the AI underneath drew on someone else’s project. An engineer may simply have told Cursor “write me a self-evolution module”; the model, having encountered Evolver’s code during training, then produced a laundered version, and the developer may never realize any plagiarism took place. What makes this “unconscious plagiarism,” or structural plagiarism, so troubling is that it leaves almost no trace at the level of code text, rendering traditional code similarity detection tools ineffective, while at the level of architecture, workflow, data structures, and design patterns it is a systematic replication.
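Detecting this kind of copying requires comparing structure rather than text. As a rough sketch of the idea (nowhere near a production detector), Python’s standard ast module can strip identifier names so that two functions which differ only in naming collapse to the same structural fingerprint:

```python
import ast
import hashlib

class Normalizer(ast.NodeTransformer):
    """Strip names so only the structural skeleton of the code remains."""
    def visit_Name(self, node):
        return ast.copy_location(ast.Name(id="_", ctx=node.ctx), node)

    def visit_FunctionDef(self, node):
        self.generic_visit(node)
        node.name = "_"
        node.args = ast.arguments(posonlyargs=[], args=[], vararg=None,
                                  kwonlyargs=[], kw_defaults=[], kwarg=None,
                                  defaults=[])
        return node

def structural_fingerprint(source: str) -> str:
    tree = Normalizer().visit(ast.parse(source))
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()[:12]

original = "def score(items):\n    total = 0\n    for x in items:\n        total += x\n    return total"
laundered = "def evaluate(entries):\n    acc = 0\n    for e in entries:\n        acc += e\n    return acc"

# Textually different, structurally identical: the fingerprints match.
print(structural_fingerprint(original) == structural_fingerprint(laundered))  # True
```

A real tool would have to work across languages and tolerate reordered or restructured code, which is exactly why architectural-level detection remains an open problem.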
Evolver was open-sourced under the MIT license, meaning anyone can freely use, modify, commercialize, and even close-source it; the only requirement is attribution to the original authors. Hermes Agent is accused of failing to honor even this most basic requirement. That exposes a deeper problem: when innovative ideas, system architectures, and API designs carry the core value, and code text is merely one form of expression, are the open-source licenses we have relied on for decades still effective? Licenses such as MIT, Apache, and GPL protect specific expressions of code, not abstract architectural ideas. In an era when AI can readily understand and rewrite an architecture, how are original creators’ rights to be protected?
EvoMap’s reluctant response offers a bleak answer: the team announced that Evolver’s core modules would henceforth ship in obfuscated form, and that the license would change from MIT to the strongly copyleft GPL-3.0, raising the usage cost and legal risk for any future imitators. When a team that embraced open source is pushed toward closing its code by infringement, the “bad money drives out good” effect on the wider open-source ecosystem is incalculable.
The Hermes Agent incident may never reach a clear conclusion in a legal sense. The costs of cross-border enforcement are high, the definition of architectural plagiarism is ambiguous, and Nous Research’s strategy of silence may well allow it to walk away unscathed. But the warning this incident leaves behind deserves deep reflection from every AI practitioner and user.
As AI-assisted programming becomes standard practice, and as large models can understand and rewrite a complex system in seconds, there is an urgent need for new industry norms on several fronts.
At the technical level: code review tools capable of detecting architectural-level similarity rather than merely text-level similarity.
At the licensing level: new open-source licenses that protect architectural design and API semantics, filling the vacuum in existing legal frameworks.
At the ethical level: traceability mechanisms for AI-generated code, requiring developers to disclose training data sources and referenced projects when publishing AI-assisted work (one possible concrete form is sketched below).
At the community level: collective resistance to code laundering, so that projects which violate the open-source spirit pay a reputational price.
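On the traceability point, one lightweight possibility is a machine-readable provenance manifest published alongside AI-assisted code. The field names below are invented for illustration; no such standard currently exists.

```python
import json

# Hypothetical provenance manifest for an AI-assisted module; every field name
# is invented here for illustration and does not follow an existing standard.
provenance = {
    "module": "self_evolution",
    "ai_assisted": True,
    "tools_used": ["<assistant name and version>"],
    "prompts_summary": "Asked for a self-improving task loop with persistent memory.",
    "known_references": [
        {"project": "Evolver", "license": "MIT", "url": "https://example.org/evolver"},
    ],
    "human_review": {"reviewer": "<name>", "checked_architecture_overlap": True},
}

with open("PROVENANCE.json", "w") as f:
    json.dump(provenance, f, indent=2)
```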
This may be the saddest scene the open-source world has ever witnessed: original creators forced to close their source, while plagiarists bask in the spotlight. If this situation continues, the future development of AI will lose its most important soil — openness, collaboration, and mutual trust.

