In March 2026, at NVIDIA’s GTC conference, founder and CEO Jensen Huang announced that the world’s first co-packaged optics (CPO) switch, Spectrum X, had entered full-scale production, manufactured by TSMC. The switch shortens the electrical signal transmission distance from the centimeter range to within 1 millimeter, reducing transmission loss by 60%. Hot on its heels, the Quantum3400 CPO switch targeting the next-generation Rubin Ultra platform was unveiled at the same event. The announcement signals that optical interconnect technology in AI data centers is undergoing a fundamental architectural transformation — a shift from “electrical” to “optical.” At a time when demand for AI computing power is exploding and traditional copper interconnects are approaching their physical limits, whether CPO can truly break through the computing bottleneck has become the focal point of industry attention.

The Truth About the Computing Bottleneck: “Can’t Transmit” Is Harder to Solve Than “Can’t Compute”

To understand the value of CPO, one must first recognize a fundamental fact: in the era of large AI models, the core bottleneck of computing clusters is no longer the computational capability of individual chips, but the efficiency of data transmission between chips.

The volume of global data is expanding at a staggering rate. According to IDC forecasts, global data volume will surge from 213.56 ZB in 2025 to 527.47 ZB in 2029, a compound annual growth rate of 28% to 30% over 2024 to 2029. At the same time, the electrical signal attenuation of traditional copper interconnects is approaching its physical limit: as single-channel SerDes rates evolve toward 224G, electrical signal attenuation on conventional PCBs can reach as high as 97%. Inside AI data centers, a chip measuring roughly 2 square inches can already draw operating currents approaching 35,000 amperes and consume as much as 35 kilowatts. The growth of computing power is being held back by “connectivity.”
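As a quick sanity check, the growth rate implied by the two IDC endpoint figures can be computed directly. The sketch below uses only the 2025 and 2029 values quoted above; note that IDC's 28% to 30% figure covers the slightly different 2024 to 2029 window, so a somewhat lower number over 2025 to 2029 is not a contradiction.

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

# IDC endpoints quoted above: 213.56 ZB in 2025 -> 527.47 ZB in 2029.
growth_2025_2029 = cagr(213.56, 527.47, 4)
print(f"Implied CAGR, 2025-2029: {growth_2025_2029:.1%}")  # about 25%
```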

In 2023, mainstream training clusters were still at the scale of thousands of GPU cards; by 2026, clusters of tens of thousands of cards have become the standard configuration for leading vendors, and clusters of hundreds of thousands of cards are being deployed rapidly. As GPU clusters advance from hundreds of thousands of cards toward millions, the power consumption, transmission distance, and density bottlenecks of electrical signals carried over copper become increasingly pronounced. Traditional AI cluster interconnects rely on pluggable optical modules for optical-to-electrical conversion. This approach was manageable in the era of thousand-card clusters, but at the scale of ten thousand or even a hundred thousand cards its problems become impossible to ignore. A cluster used for large model training may require thousands of GPU cards, each configured with an 800G pluggable optical module; a single module already consumes 15 to 20 watts, and by some industry estimates, once thousands of modules are added together the “interconnect” alone accounts for one-third to one-half of the data center’s total electricity.
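To make the power arithmetic concrete, here is a back-of-envelope sketch. The cluster size, the number of modules per GPU, and the 18 W figure are illustrative assumptions (the text itself states only 15 to 20 W per 800G module); real fabrics vary with topology.

```python
def interconnect_power_kw(num_gpus: int, modules_per_gpu: int,
                          watts_per_module: float) -> float:
    """Total pluggable optical-module power for a cluster, in kilowatts."""
    return num_gpus * modules_per_gpu * watts_per_module / 1000

# Illustrative scenario: a 10,000-GPU cluster, a hypothetical 3 modules
# per GPU across the fabric, 18 W per module (mid-range of 15-20 W).
power = interconnect_power_kw(10_000, 3, 18.0)
print(f"Optical-module power alone: {power:.0f} kW")  # 540 kW
```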

Even more alarming is the runaway trend in power consumption. According to industry data, if NVIDIA’s GB300 NVL72 cluster adopts traditional solutions, the proportion of power consumed by optical interconnects alone will exceed 30%. McKinsey & Company projects that by 2030, meeting global artificial intelligence demand will require USD 5.2 trillion in data center investment, and addressing the challenges of power and bandwidth has become the central proposition for hyperscale enterprises seeking to ensure returns on investment. A single 64-bit data movement off-chip or across systems consumes approximately 1,000 picojoules of energy, whereas a single floating-point operation requires only about 10 picojoules — a difference of two orders of magnitude — forming a severe “memory wall” and “interconnect wall.” Traditional electronic packaging approaches such as ball grid arrays, constrained by signal integrity, PCB routing channels, and connector bottlenecks, are approaching the limits of their bandwidth scalability.
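The 1,000 pJ versus 10 pJ figures above can be folded into a simple energy-budget model showing why the “memory wall” dominates. The arithmetic-intensity parameter (floating-point operations performed per 64-bit word moved off-chip) is an illustrative knob, not a figure from the text.

```python
PJ_PER_64BIT_MOVE = 1000  # off-chip / cross-system move, from the text
PJ_PER_FLOP = 10          # one floating-point operation, from the text

def movement_energy_share(flops_per_word: float) -> float:
    """Fraction of total energy spent on data movement at a given
    arithmetic intensity (FLOPs per 64-bit word moved off-chip)."""
    compute_pj = flops_per_word * PJ_PER_FLOP
    return PJ_PER_64BIT_MOVE / (PJ_PER_64BIT_MOVE + compute_pj)

# Even doing 10 FLOPs per word moved, movement dominates the budget:
print(f"{movement_energy_share(10):.0%} of energy spent on movement")  # 91%
```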

What Is CPO: An Architectural Revolution From “Living Apart” to “Living Together”

In response to the bottlenecks described above, CPO technology offers an answer that appears simple yet is profoundly disruptive: move the optical engine into the same “house” as the switch chip.

In conventional solutions, the pluggable optical module and the switch chip are separate, connected through long circuit boards and optical fibers, with electrical signals needing to travel a distance of 15 to 30 centimeters. CPO technology directly integrates the photonic engine onto the same packaging substrate or within the same module, shortening the electrical signal transmission distance within multi-chip assemblies to just a few millimeters. This shift from “living apart” to “living together” delivers benefits that are comprehensive in scope.

First is a significant reduction in power consumption. NVIDIA’s 1.6 Tb/s CPO solution consumes only 9 watts — approximately 70% lower than conventional solutions — and the optical interconnect power consumption of large-scale clusters can be reduced by 84%. The core reason is that CPO eliminates the DSP, the highest-power component in a traditional pluggable module, converting electrical signals directly into optical signals without DSP- or ASIC-based retiming and equalization.
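The 70% figure implies a conventional 1.6 Tb/s pluggable module of roughly 30 W; that baseline is inferred from the text rather than stated in it. A minimal sketch of the per-port and cluster-level arithmetic:

```python
CPO_WATTS_1_6T = 9.0            # NVIDIA figure cited above
CONVENTIONAL_WATTS_1_6T = 30.0  # baseline implied by the ~70% claim: assumption

reduction = 1 - CPO_WATTS_1_6T / CONVENTIONAL_WATTS_1_6T
print(f"Per-port power reduction: {reduction:.0%}")  # 70%

# At cluster scale the savings compound with port count (port count illustrative):
ports = 100_000
saved_kw = ports * (CONVENTIONAL_WATTS_1_6T - CPO_WATTS_1_6T) / 1000
print(f"Saved across {ports:,} ports: {saved_kw:.0f} kW")  # 2100 kW
```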

Second is a leap in bandwidth density. CPO can raise bandwidth density from the 5 to 40 Gbps/mm of conventional solutions to 50 to 200 Gbps/mm, delivering a revolution in computing power per unit area. With this technology, switch panel bandwidth can readily reach 3.2T, 6.4T, or even 12.8T, whereas traditional modules are constrained by panel size and thermal dissipation capacity and struggle to break through this ceiling.
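Bandwidth density matters because a switch ASIC’s total I/O must escape through a finite package perimeter (the “beachfront”). The sketch below shows how much edge length a hypothetical 102.4 Tb/s ASIC would need at each end of the density ranges quoted above; the 102.4T target is an illustrative assumption, not a figure from the text.

```python
def edge_mm_needed(total_gbps: float, density_gbps_per_mm: float) -> float:
    """Package 'beachfront' length needed to escape a given total bandwidth."""
    return total_gbps / density_gbps_per_mm

TOTAL_GBPS = 102_400  # a hypothetical 102.4 Tb/s switch ASIC
print(f"At 40 Gbps/mm:  {edge_mm_needed(TOTAL_GBPS, 40):,.0f} mm of edge")   # 2,560 mm
print(f"At 200 Gbps/mm: {edge_mm_needed(TOTAL_GBPS, 200):,.0f} mm of edge")  # 512 mm
```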

Third is a breakthrough in signal integrity. CPO reduces electrical interconnect distances to the sub-hundred-micrometer scale, largely resolving the signal-integrity degradation that afflicts high-speed signals at 224G and above and paving the way for 3.2T, 6.4T, and even higher rates. At the same time, shortening the optical signal’s path to the chip from tens of centimeters to a few millimeters significantly reduces latency.

Silicon photonics is the most promising technology platform for realizing CPO. It uses high-refractive-index-contrast waveguides formed from silicon and silicon dioxide to achieve ultra-compact on-chip optical transmission, with low loss and support for small bending radii, while remaining compatible with mature CMOS manufacturing processes. However, achieving high-performance CPO still faces multiple technical challenges: how to efficiently couple the micrometer-scale mode of an optical fiber to a sub-micrometer-scale silicon waveguide; how to manage the polarization state of optical signals; and how to integrate a light source, since silicon’s indirect bandgap prevents it from emitting light efficiently. The industry currently addresses this last challenge primarily by integrating lasers made from III-V materials such as indium phosphide (InP).
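The fiber-to-waveguide coupling challenge can be quantified with the standard Gaussian mode-overlap formula: power coupling between two aligned Gaussian beams falls off sharply as their mode radii diverge. The radii below are typical illustrative values, not figures from the text, and real designs bridge the gap with spot-size converters or edge couplers.

```python
import math

def gaussian_coupling_efficiency(w1_um: float, w2_um: float) -> float:
    """Power coupling efficiency between two aligned Gaussian modes
    with 1/e^2 radii w1 and w2 (standard mode-overlap result)."""
    return (2 * w1_um * w2_um / (w1_um**2 + w2_um**2)) ** 2

# Typical values: single-mode fiber with ~10.4 um mode-field diameter
# (w ~ 5.2 um) vs. an on-chip spot-size-converter mode of ~1.5 um radius.
eta = gaussian_coupling_efficiency(5.2, 1.5)
print(f"Direct butt-coupling efficiency: {eta:.1%}, "
      f"loss: {-10 * math.log10(eta):.1f} dB")
```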

2026: The Turning Point Year in Which CPO Moves From “Feasible” to “Mass Production”

If 2024 and 2025 were CPO’s proof-of-concept and introduction phases, then 2026 is unquestionably the year CPO formally advances toward industrialization. According to industry data, global CPO-related orders in 2026 have already exceeded RMB 120 billion (approximately USD 16.67 billion), representing year-on-year growth of more than 300%, with order backlogs already scheduled out to nearly 2028.

In March 2026, at NVIDIA’s GTC conference, Jensen Huang announced that the world’s first CPO switch, Spectrum X, had entered full-scale production. Manufactured by TSMC, the switch supports large-scale generative AI workloads with a bandwidth of up to 409.6 Tb/s. NVIDIA simultaneously unveiled the Quantum3400 CPO switch, which uses a deeply co-packaged process to shorten electrical signal transmission distances from the centimeter scale to within 1 millimeter, providing critical interconnect support for the Rubin Ultra platform slated for mass production in the second half of 2026.

Almost simultaneously, TSMC announced that its silicon photonics integration platform, COUPE, is expected to enter mass production in 2026. The COUPE platform integrates optical engines and a variety of computing and control ASICs onto the same packaging substrate using SoIC technology, bringing components closer together to improve bandwidth and power efficiency while reducing electrical coupling losses. TSMC noted that technical breakthroughs in three key areas — wafer testing, fiber array unit assembly, and high-speed optical packaging assembly — will be decisive in determining whether CPO can be successfully scaled.

As early as 2025, Broadcom had launched a 51.2T CPO switch with directly integrated silicon photonic engines and partnered with Tencent on the first field deployment, reportedly cutting optical interconnect power consumption by more than 50%. Intel, drawing on years of accumulated expertise in silicon photonics and CPO, has already delivered product samples to customers. Domestic Chinese manufacturers are also accelerating their pursuit: Innolight, Tianfu Communication, Accelink Technologies, Huagong Zhengyuan, and others have made strategic moves into CPO-related technologies, and some have shown samples at the OFC exhibition.

From a market data perspective, some institutions forecast that the global optical module market will reach USD 28.75 billion in 2026, with AI data center-related applications accounting for more than 62%. The global high-speed cable market will reach USD 4.114 billion, up 39% year-on-year, with the 800G specification’s share rising from 36% to 43% to become the industry mainstream; 1.6T cables have officially begun mass production and are expected to exceed a 22% market share by 2028. 2026 is also the first year of large-scale volume shipments for 1.6T optical modules, with full-year shipments expected to exceed 10 million units and a full-year market size of USD 16.697 billion, up 29% year-on-year.

Market Outlook and Challenges: The Speed-of-Light Revolution Is No Smooth Road

Despite the broad prospects, the large-scale deployment of CPO still faces multiple obstacles.

From a market expectations standpoint, research institutions’ forecasts of the CPO market size diverge, primarily because each institution defines the scope of CPO differently. Some industry reports indicate that the global CPO market had not yet reached USD 500 million in 2024, but is expected to expand to USD 1 billion in 2025 and could surpass the USD 5 billion mark in 2027. Some institutions project that CPO technology will begin scaling up in volume from 800G and 1.6T ports in 2026 and 2027, at which point the global Ethernet optical module market is expected to grow 35% year-on-year to USD 18.9 billion.
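Taken at face value, the endpoint figures quoted above imply an extremely steep growth curve. Treating USD 0.5 billion as the 2024 base (the text says the market had not yet reached that level, so this is an upper-bound assumption):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two endpoint values."""
    return (end / start) ** (1 / years) - 1

# ~$0.5B (2024) -> ~$5B (2027): a 10x expansion in three years.
implied = cagr(0.5, 5.0, 3)
print(f"Implied CAGR, 2024-2027: {implied:.0%}")  # over 100% per year
```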

At the technical level, CPO faces a series of challenges including high-density fiber array packaging, laser integration reliability, thermal management, and coordinated chip-optical engine testing. Take power delivery and thermal management as an example: in advanced packaging designs, the power consumption of a single chip can reach as high as 35 kilowatts; if the number of such chips in a data center reaches the millions, existing power supply solutions will be unable to meet demand, and it may even be necessary to construct dedicated power generation facilities. In addition, the supply chain has not yet matured, and the complete ecosystem spanning design tools to manufacturing processes still needs time to build.
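The scale of the power problem follows directly from the numbers in the text: 35 kW per chip multiplied across a million-chip scenario. A quick check:

```python
WATTS_PER_CHIP = 35_000  # 35 kW per chip, from the text
NUM_CHIPS = 1_000_000    # the "millions of chips" scenario, lower bound

total_gw = WATTS_PER_CHIP * NUM_CHIPS / 1e9
print(f"Total chip power: {total_gw:.0f} GW")  # 35 GW
# For scale: a large nuclear reactor outputs roughly 1 GW of electricity,
# so this scenario alone would absorb the output of ~35 such reactors.
```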

At the business model level, the widespread adoption of CPO will have a profound impact on the existing optical module supply chain. Some industry insiders have put it bluntly: “For optical module companies, not doing CPO means waiting for death, while doing CPO means courting death” — because the technical difficulty is extreme and the business model will undergo fundamental change. CPO integrates the optical engine into the switch chip package, meaning that optical modules previously sold as independent products may be absorbed into the offerings of switch chip vendors, with supply chain value concentrating toward upstream core segments.

Competing technology routes also cannot be ignored. Linear Pluggable Optics (LPO) removes the DSP chip from the optical module, reducing some power consumption and latency while retaining the advantage of pluggability; although it makes certain compromises on reach and link diagnostics, its deployment flexibility poses a short-term challenge to CPO. Some institutions report that LPO deployment has already commenced and project shipments in the millions of units next year.

Nevertheless, an industry consensus is forming: in the short term (two to three years), CPO will primarily be used for internal interconnects within AI clusters at hyperscale cloud vendors, while traditional pluggable modules will continue to dominate long-distance interconnects and general-purpose data centers; in the long term (five or more years), as 1.6T and 3.2T rates become fully prevalent, the power consumption and form factor of pluggable modules will become unacceptable, and CPO will become the standard configuration for high-end switches.

The rise of CPO technology is, in essence, the inevitable result of AI computing power demand forcing a fundamental reconstruction of underlying infrastructure. As Moore’s Law approaches its physical limits, and as the pace at which GPU computing power doubles far outstrips the improvement in interconnect bandwidth, breaking through the “interconnect wall” has become the key to unlocking the latent potential of computing power. From NVIDIA’s mass-production CPO switches to TSMC’s COUPE platform, from Broadcom’s 51.2T solution to the comprehensive engagement of China’s domestic supply chain, 2026 has already become the turning point year in which CPO moves from the laboratory to the data center floor. This “speed-of-light revolution” may not replace all conventional solutions overnight, but it has unquestionably already pointed the way toward the future of AI computing power interconnects. As industry insiders have put it, the future competition in AI data centers is not only a contest of computing chips, but also a contest of connectivity technology. And in this contest, light is replacing electricity.

[Disclaimer]: The above content reflects analysis of publicly available information, expert insights, and BCC research. It does not constitute investment advice. BCC is not responsible for any losses resulting from reliance on the views expressed herein. Investors should exercise caution.