Crunch Time for AI: Photonic Chips Enter the Menu
In the diet of modern artificial intelligence, chips are the staple. For decades, CPUs, GPUs and, more recently, TPUs have powered the explosion of deep learning. But what if the future of AI isn’t just about faster silicon, but about harnessing the speed of light itself?
Two recent Nature papers, Hua et al. (2025)¹ and Ahmed et al. (2025)², offer a potent answer: photonic computing is no longer experimental garnish; it’s becoming the main course.
Appetite for Acceleration
AI’s computational needs are growing insatiably. OpenAI’s GPT-4 reportedly required some 25,000 NVIDIA A100 GPUs for training, consuming an estimated tens of gigawatt-hours of energy³. Meanwhile, NVIDIA’s CEO has projected that AI data centers could consume more power than entire nations by 2030⁴.
Transformer models, reinforcement learning agents, and real-time systems demand faster operations and lower energy consumption—something traditional electronics increasingly struggle to provide.
That’s where photonic computing enters.
In traditional chips (CPUs, GPUs), information travels as electrons through metal interconnects. Those electrons face resistance, generate heat, and slow down as circuits scale. Photonic chips instead transmit data with light: photons moving through waveguides face no electrical resistance, dissipate far less heat, and propagate at light speed.
In everyday terms: imagine comparing postal mail (electronics) with fiber-optic internet (photonics). One works, the other flies. But here’s the rub: fiber-optics send light signals over long distances through cables, while photonic chips must manipulate and compute with light on a silicon wafer a few centimeters wide. The challenge lies in scaling down optical systems to nanoscale waveguides, integrating them with electronic components, and maintaining precision and stability in an inherently analog domain.
Building an internet is one thing; building a microscopic light-based calculator is another entirely.
Hua et al.: PACE Yourself—500x Faster Inference
Hua and colleagues from Lightelligence Pte. Ltd. (Singapore) and Stanford University present PACE, a 64×64 photonic accelerator that achieves:
- Less than 5 ns latency per MAC cycle (vs. >2,300 ns on NVIDIA A10 GPU)
- ~7.6 ENOB (effective number of bits) of precision for optical matrix multiplications
- Demonstrated solutions to Ising optimization problems
In plain terms: for each multiply-accumulate cycle, PACE answers in under 5 ns where the GPU needs more than 2,300 ns, a speedup of roughly 500x. It’s like swapping out a bicycle for a bullet train.
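To see why Ising problems map so naturally onto this hardware, here is a minimal sketch in Python, using a toy random coupling matrix and a greedy spin-flip heuristic of our own choosing rather than the paper’s actual algorithm. The point is that every iteration is dominated by a 64×64 matrix-vector multiply, exactly the operation PACE performs optically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                 # matches PACE's 64x64 optical core
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                      # symmetric couplings (toy assumption)
np.fill_diagonal(J, 0)

spins = rng.choice([-1.0, 1.0], size=n)

def energy(s):
    return -0.5 * s @ J @ s            # Ising energy: E(s) = -1/2 s^T J s

# Greedy descent: each iteration's cost is one matrix-vector pass,
# the step a photonic accelerator executes in a few nanoseconds.
for _ in range(10_000):
    field = J @ spins                  # the MAC-heavy step
    gain = spins * field               # flipping spin i changes E by 2*s_i*field_i
    i = int(np.argmin(gain))
    if gain[i] >= 0:                   # no flip lowers the energy: local minimum
        break
    spins[i] *= -1.0

print(f"final energy: {energy(spins):.2f}")
```

Because the whole loop is matrix-vector multiplies, shaving each pass from thousands of nanoseconds to a handful compounds directly into the end-to-end speedup.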
Built using commercial silicon photonics and packaged with a 2.5D flip-chip design, the system blends light-speed math (photonic) with memory/control (electronic). This hybrid setup isn’t just efficient—it’s practical and scalable.
Ahmed et al.: Photonics Meets General AI
Ahmed and collaborators, from the photonic-computing startup Lightmatter, push the vision even further. Their universal photonic AI processor executes real-world models:
- ResNet for computer vision
- BERT for language understanding
- DeepMind’s Atari RL benchmarks for control
Here’s what matters: their chip doesn’t just run lab demos. It handles high-profile, large-scale models with accuracy comparable to electronic chips.
For non-technical readers, this is like building a brand-new kind of engine and watching it keep pace with the proven ones on the track. Photonics just became competitive for mainstream AI.
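As a rough mental model of that hybrid split, consider the hedged sketch below. It rests on our own assumptions: `photonic_matmul` is a hypothetical stand-in, not Lightmatter’s API, and rounding to a few effective bits is only a crude proxy for analog noise. Each layer’s linear algebra goes to the optical unit while activations and control stay digital:

```python
import numpy as np

def photonic_matmul(W, x, enob=8):
    """Hypothetical stand-in for an optical matrix-vector unit: computes
    W @ x, then rounds to ~enob effective bits to mimic analog precision."""
    y = W @ x
    scale = float(np.max(np.abs(y))) or 1.0
    q = 2 ** (enob - 1)
    return np.round(y / scale * q) / q * scale

# Toy two-layer network: matmuls "in light", everything else in electronics.
rng = np.random.default_rng(1)
W1 = rng.normal(size=(128, 64)) / np.sqrt(64)
W2 = rng.normal(size=(10, 128)) / np.sqrt(128)
x = rng.normal(size=64)

h = np.maximum(0.0, photonic_matmul(W1, x))    # ReLU stays digital
logits = photonic_matmul(W2, h)
print("predicted class:", int(np.argmax(logits)))
```

The empirical point of both papers is that this division of labor, with only seven or eight effective bits in the analog step, can keep models like ResNet and BERT close to digital accuracy.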
From Lab to Fab
Though their headlines suggest overlap, these two systems actually complement each other in demonstrating photonic computing’s versatility:
- PACE focuses on ultra-low-latency for structured optimization tasks
- Ahmed’s chip targets general AI inference workloads with broader model support
Anthony Rizzo, in his Nature commentary⁵, emphasizes this shift from academic proof of concept to scalable technology. Both systems are:
- Built with CMOS-compatible designs
- Packaged in PCIe card form factors
- Suitable for commercial-scale deployment
Rather than replacing electronics, Rizzo views photonics as a hybrid teammate—handling math-heavy tasks in light, while leaving logic and memory to electrons. The result? Faster, greener, and smarter computation.
Together, these chips suggest that AI systems could soon be redesigned from the ground up, optimized for light-speed logic instead of transistor-bound tradition.
Implications for BPA and AI Innovation
From Cognaptus’ perspective, these breakthroughs in photonic computing could reshape both business process automation (BPA) and AI startup innovation—not just in vision, but in real execution.
Performance: Latency No Longer Tolerated
Today’s BPA and automation systems often suffer from lag between perception and action: an AI document processor may take hundreds of milliseconds to classify content or trigger the next step. Photonic accelerators like PACE could cut that critical path to microseconds, unlocking real-time AI workflows that were previously unfeasible.
Take the case of a global courier service: during peak season, delays in automated address verification and risk scoring, even just 300 ms per transaction, can create compounding backlogs, as the sketch below illustrates. With photonic chips, verification and routing decisions could happen faster than packages hit the next conveyor belt.
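A toy back-of-envelope makes the compounding concrete. All numbers here are assumed for illustration, none come from the papers:

```python
# Assumed: parcels arrive every 250 ms; verification takes 300 ms today
# versus ~5 microseconds on photonic-class hardware.
arrival_interval = 0.250                     # seconds between parcels
latency_today, latency_photonic = 0.300, 5e-6

parcels_per_hour = 3600 / arrival_interval

def backlog_per_hour(latency):
    # Every parcel that takes longer than the arrival gap adds to the queue.
    return parcels_per_hour * max(0.0, latency - arrival_interval)

print(f"lag accrued per hour: today = {backlog_per_hour(latency_today):.0f} s, "
      f"photonic = {backlog_per_hour(latency_photonic):.0f} s")
```

Under these assumptions, a single verification lane falls twelve minutes further behind every hour today, and never falls behind at microsecond latencies.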
Industries like logistics, telecom, and finance could shift from “wait-and-react” to “see-and-act.”
Cost: From Luxury to Core Infrastructure
High-performance inference today still demands expensive GPUs or cloud credits. But once photonic chips become cheap and scalable, the entire business model changes. We’ve seen this before: in the 2010s, GPUs were niche tools for graphics rendering—then deep learning turned them into essential infrastructure. Cloud providers drove down cost, and suddenly, small startups could afford compute that once belonged to national labs.
Now imagine:
- On-premise RPA systems running on $500 photonic cards
- AI hardware bundled with software licenses for SMEs
- Consumer AI devices (like copilot appliances) built around light-speed processing
Just as cloud democratized compute, photonics could democratize low-latency, high-efficiency AI hardware—especially for startups that can’t afford big GPU clusters.
What This Means for the Future
We are standing at the edge of a shift not just in hardware—but in imagination. As photons replace electrons in the critical path of AI computation:
- Data centers may bend toward the speed of light, accelerating massive workloads without proportional energy costs
- Edge devices could grow smarter, faster, and cooler—finally achieving real-time inference without cloud dependence
- Algorithms themselves may evolve, redesigned to thrive on the parallelism and fluidity that light affords
Photonic chips are not just an upgrade; they are a rethinking of what’s possible in computation. As they move from prototype to production, they invite us to see computing—indeed, intelligence itself—through a brighter lens.
Cognaptus: Automate the Present, Incubate the Future.
1. Hua, S., Divita, E., Yu, S. et al. An integrated large-scale photonic accelerator with ultralow latency. Nature 640, 361–367 (2025). https://doi.org/10.1038/s41586-025-08786-6
2. Ahmed, S.R., Baghdadi, R., Bernadskiy, M. et al. Universal photonic artificial intelligence acceleration. Nature 640, 368–374 (2025). https://doi.org/10.1038/s41586-025-08854-x
3. SemiAnalysis. Inside OpenAI’s compute for GPT-4. https://www.semianalysis.com/p/gpt-4-training-openai-ai-infrastructure
4. Reuters. Nvidia CEO says AI data centers could consume more power than nations. https://www.reuters.com/technology/nvidia-ceo-says-ai-data-centers-could-consume-more-power-than-nations-2024-03-19/
5. Rizzo, A. A photonic processing boost for AI. Nature 640, 331–332 (2025). https://doi.org/10.1038/d41586-025-00945-5