Most computer science students at 21 are still wrestling with coursework. Boris Kriuk is running AI infrastructure inside the Hong Kong Government.
That gap sums up the distance between Kriuk and basically everyone else in his cohort. The Hong Kong-based AI researcher and entrepreneur has built a body of work that combines peer-reviewed theory with production deployments at a scale that usually takes decades to reach. His guiding line is short. AI should “define a field,” not just scale what already exists.
The headline project is Deep Workflow Orchestration, or DWO, which is now embedded inside the Hong Kong Government’s Electrical and Mechanical Services Department. DWO coordinates fleets of autonomous AI agents that monitor the kind of mechanical and electrical systems that quietly keep public infrastructure running. The hard part isn’t building the agents themselves. It’s wrangling unpredictable autonomous systems into something stable enough that regulators and engineers will actually sign off on it. That’s a problem most AI vendors have struggled to solve at any meaningful scale. Pulling it off at a governmental level before turning 21 puts Kriuk in rare company.
His research portfolio is unusually wide for someone his age. POSEIDON, short for Physics-Optimized Seismic Energy Inference and Detection Operating Network, builds seismological laws like the Gutenberg-Richter relationship directly into its neural architecture, so the model doesn't have to learn the physics from scratch. He paired the model with an open-source dataset of 2.8 million global earthquake events, which Kriuk describes as the largest of its kind. Physics-informed AI rarely operates at that scale, and the open release matters too: well-resourced AI labs tend to keep their datasets locked up.
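The article doesn't show POSEIDON's internals, but the general physics-informed idea can be sketched. The Gutenberg-Richter law says earthquake counts above magnitude M follow log10 N(M) = a − b·M; one common way to bake such a law into training is a penalty term that punishes predictions that drift off the line. Everything below (function name, parameters) is a hypothetical illustration of that pattern, not Kriuk's code:

```python
# Illustrative physics-informed loss term (hypothetical, not POSEIDON itself):
# the Gutenberg-Richter law states log10 N(M) = a - b*M for cumulative
# earthquake counts N above magnitude M. A model's predicted counts can be
# penalized for deviating from that line.
import numpy as np

def gutenberg_richter_penalty(magnitudes, predicted_counts, a, b):
    """Mean squared deviation of predicted cumulative counts from the
    Gutenberg-Richter line log10 N(M) = a - b*M."""
    expected_log_counts = a - b * np.asarray(magnitudes)
    log_pred = np.log10(np.clip(predicted_counts, 1e-9, None))
    return float(np.mean((log_pred - expected_log_counts) ** 2))

# Toy check: counts that sit exactly on the GR line incur zero penalty.
mags = np.array([4.0, 5.0, 6.0])
on_line = 10.0 ** (6.0 - 1.0 * mags)   # a=6, b=1
print(gutenberg_richter_penalty(mags, on_line, a=6.0, b=1.0))  # → 0.0
```

Added to a model's training loss, a term like this steers predictions toward physically plausible frequency-magnitude behavior instead of leaving the network to rediscover it from data.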
ELENA, his work on Epigenetic Learning through Evolved Neural Adaptation, borrows from biology to help neural architectures escape local optima in high-dimensional search spaces. The system pulls in ideas like mutation resistance, stability scores, and epigenetic tags to help models avoid getting stuck. It's an unusual framing for neural network optimization, a field that tends to lean on more conventional techniques.
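The article names the concepts but not the mechanics. As a rough sketch of how "epigenetic tags" and "stability scores" could steer an evolutionary search (all names and parameters below are invented for illustration): each gene carries a stability tag, and stable genes are shielded from mutation so the search keeps good partial solutions while still exploring elsewhere.

```python
# Hedged sketch of the epigenetic idea the article attributes to ELENA
# (invented names/parameters, not the actual system): each candidate carries
# per-gene "stability" tags that suppress mutation on genes that have
# proven useful, while unstable genes keep exploring.
import random

def epigenetic_mutate(genes, stability, rate=0.5, scale=1.0):
    """Mutate each gene with probability rate * (1 - stability[i]):
    fully stable genes (stability 1.0) are never touched."""
    out = []
    for g, s in zip(genes, stability):
        if random.random() < rate * (1.0 - s):
            g = g + random.gauss(0.0, scale)
        out.append(g)
    return out

random.seed(0)
genes = [1.0, 2.0, 3.0]
stability = [1.0, 0.0, 0.5]   # gene 0 fully protected, gene 1 free to drift
print(epigenetic_mutate(genes, stability))
```

The design intuition is the one the article describes: differential protection lets a population hold onto what works instead of churning everything at a uniform mutation rate.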
Then there's MorphBoost, released open-source on GitHub, which introduces what Kriuk calls adaptive tree morphing. The model self-organizes its splitting behavior during training, and in his benchmark testing it has beaten XGBoost on consistency and accuracy. That's a bold claim in a space XGBoost has dominated for close to a decade. The library is entrenched enough across machine learning workflows that displacing it requires more than incremental gains.
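MorphBoost's actual algorithm isn't described in the article. As a toy stand-in for the general idea of splits that "self-organize" during training (everything here is invented for illustration), here is a boosting loop of one-split stumps whose threshold migrates each round toward the point the model currently fits worst:

```python
# Toy illustration of adaptive splitting in a boosting loop (NOT MorphBoost's
# algorithm, which is not described in the article): each round, the stump's
# split threshold moves to the sample with the largest current residual.
import numpy as np

def fit_adaptive_stumps(x, y, rounds=30, lr=0.5):
    pred = np.zeros_like(y)
    model = []
    for _ in range(rounds):
        resid = y - pred
        t = x[np.argmax(np.abs(resid))]      # split migrates toward worst error
        left = x <= t
        right = ~left
        lv = resid[left].mean() if left.any() else 0.0
        rv = resid[right].mean() if right.any() else 0.0
        model.append((t, lr * lv, lr * rv))
        pred = pred + np.where(left, lr * lv, lr * rv)
    return model, pred

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2 * np.pi * x)
model, pred = fit_adaptive_stumps(x, y)
print(np.mean((y - pred) ** 2))   # squared error shrinks as stumps accumulate
```

Fixed split grids are the conventional approach; the contrast this sketch draws is that the candidate split set itself responds to the training signal.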
GeloVec, his computer vision work, addresses a quieter problem: spatial inconsistencies in attention maps. By using Chebyshev distance to enforce multi-dimensional smoothness, it improves semantic segmentation, one of the foundational tasks behind autonomous vehicles, medical imaging, and satellite analysis. The applications it touches are ones people actually depend on.
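The core idea can be sketched without knowing GeloVec's internals. The Chebyshev (L-infinity) distance between two vectors is the largest per-channel difference, d(u, v) = max_i |u_i − v_i|, and a smoothness penalty built on it punishes abrupt jumps between neighboring positions in an attention map. The function below is a hypothetical illustration of that pattern, not the published method:

```python
# Hedged sketch of a Chebyshev-distance smoothness penalty (names invented
# here, not GeloVec's implementation): abrupt jumps between spatially
# adjacent attention vectors are measured with the L-infinity norm.
import numpy as np

def chebyshev_smoothness_penalty(attn):
    """attn: (H, W, C) attention map. Sum of Chebyshev distances between
    horizontally and vertically adjacent positions."""
    dx = np.abs(attn[:, 1:] - attn[:, :-1]).max(axis=-1)   # horizontal jumps
    dy = np.abs(attn[1:, :] - attn[:-1, :]).max(axis=-1)   # vertical jumps
    return float(dx.sum() + dy.sum())

# A perfectly flat map has zero penalty; a map with one sharp seam does not.
flat = np.ones((4, 4, 3))
seam = flat.copy()
seam[:, 2:] = 0.0
print(chebyshev_smoothness_penalty(flat))  # → 0.0
print(chebyshev_smoothness_penalty(seam))  # → 4.0 (one unit jump per row)
```

Minimizing a term like this during training pushes attention toward spatially coherent regions, which is exactly the failure mode segmentation models care about.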
The pattern across all four is what you could fairly call physics-first AI. Dynamic, adaptive, and grounded in the laws of whatever system is being modeled. That's a notable position to take right now, with most of the field focused on making language models ever larger.
The commercial side is where it gets even more interesting. Across a reported 40-plus projects for Forbes Global 2000 companies, his systems have, by his own accounting, produced millions in operational savings by automating workflows and clearing out the repetitive tasks that quietly drain enterprise resources. The savings are coming from the unglamorous work of replacing manual processes with systems that actually run reliably in production. Most AI vendors promise that kind of bottom-line impact. Few actually deliver it.
What makes his market position in Asia-Pacific unusual is that few others appear to be operating at the same intersection. Most firms in the region either consult on AI strategy or resell foreign models built somewhere else. Kriuk publishes the research, builds the underlying systems, and ships the infrastructure himself. That vertical integration has left competitors hard-pressed to match his scope. You can hire a strategy consultancy. You can license a foreign model. You can’t easily replicate someone who does all three layers.
He's also collected recognition that belies his age: a Best Paper Award at the International Conference on Machine Learning and Cybernetics, multiple editorial board seats, and speaking slots at G20 WBAF, ACM, and IEEE events. He has reportedly published more research in large-scale climate AI and physics-informed neural networks than many established academics. That's the kind of output that usually signals a much longer career behind it.
His core argument is that the future of AI isn't ever-larger language models. It's dynamic, physics-informed, agentic systems that respond to reality instead of approximating it. That's a contrarian thesis in a moment dominated by generative AI. But contrarian theses look different when they're already shipping. With government systems running on his work and enterprise savings reportedly in the millions, the thesis has stopped being a prediction. It's already operational.