Latency is a tax on reality. Let's eliminate it.
I partner with a handful of elite teams on their hardest architectural challenges.
This funds my private R&D lab, where I have full skin in the game: risking my own capital.
If you're building an infrastructure-level system that must survive failure, evolve for years, and perform at the limits of hardware, let's talk. Available for fast one-off second opinions and focused deep audits.
📧 yura.nevsky@gmail.com
🔗 LinkedIn
Recommendation Engine for TikTok-like social app (2023)
Built a recommendation engine to serve 10K RPS with p99 < 20 ms on bare metal. The C++/Go solution combined SIMD-accelerated HNSW with Roaring Bitmaps. I was an early adopter of AWS Graviton (ARM instances), achieving 2x cost efficiency before it became the norm. The system now sustains p99 < 15 ms.
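For flavor, here is a minimal sketch of the core retrieval pattern, not the production code: HNSW supplies approximate nearest neighbors, and a Roaring bitmap of currently eligible item IDs post-filters them. It assumes hnswlib and CRoaring; every name and parameter is illustrative.

```cpp
// Sketch: ANN candidates from HNSW, post-filtered by a Roaring bitmap
// of eligible item IDs (e.g. items passing freshness/region checks).
// Assumes hnswlib and CRoaring are installed; parameters are illustrative.
#include <cstdint>
#include <random>
#include <vector>
#include "hnswlib/hnswlib.h"  // https://github.com/nmslib/hnswlib
#include "roaring.h"          // https://github.com/RoaringBitmap/CRoaring

int main() {
    const size_t dim = 64, n = 10000;
    hnswlib::L2Space space(dim);
    hnswlib::HierarchicalNSW<float> index(&space, n, /*M=*/16,
                                          /*ef_construction=*/200);

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(0.f, 1.f);
    std::vector<float> v(dim);
    for (size_t id = 0; id < n; ++id) {
        for (auto& x : v) x = uni(rng);
        index.addPoint(v.data(), id);
    }

    // Every third item happens to be eligible in this toy setup.
    roaring_bitmap_t* eligible = roaring_bitmap_create();
    for (uint32_t id = 0; id < n; id += 3) roaring_bitmap_add(eligible, id);

    // Over-fetch 100 candidates, keep the 10 nearest eligible ones.
    std::vector<float> query(dim, 0.5f);
    auto hits = index.searchKnnCloserFirst(query.data(), 100);
    std::vector<hnswlib::labeltype> results;
    for (const auto& [dist, label] : hits) {
        if (roaring_bitmap_contains(eligible, (uint32_t)label))
            results.push_back(label);
        if (results.size() == 10) break;
    }
    roaring_bitmap_free(eligible);
}
```

The over-fetch factor (100 candidates for 10 results here) is the main knob trading recall against tail latency.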
Accelerated a black-box PoW node that had no docs and limited source code. I reverse-engineered its gossip layer and wrote a custom booster in Go/C++. Optimizing with PGO + AVX2 yielded a 16% increase in validator rewards with zero protocol changes.
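Both techniques are standard, so here is a hedged illustration rather than the booster itself: an AVX2 hot loop of the kind PGO profiling tends to surface, with a GCC/Clang PGO recipe in the comments. The function is hypothetical.

```cpp
// Illustrative AVX2 hot loop (hypothetical, not the booster's code).
// PGO recipe with GCC/Clang:
//   1. g++ -O3 -mavx2 -fprofile-generate boost.cc -o boost
//   2. run against recorded traffic to collect profiles
//   3. g++ -O3 -mavx2 -fprofile-use boost.cc -o boost
#include <immintrin.h>
#include <cstddef>
#include <cstdint>

uint32_t sum_words(const uint32_t* data, std::size_t n) {
    __m256i acc = _mm256_setzero_si256();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {            // 8 x 32-bit lanes per step
        __m256i v = _mm256_loadu_si256(
            reinterpret_cast<const __m256i*>(data + i));
        acc = _mm256_add_epi32(acc, v);
    }
    alignas(32) uint32_t lanes[8];          // horizontal reduction
    _mm256_store_si256(reinterpret_cast<__m256i*>(lanes), acc);
    uint32_t total = 0;
    for (uint32_t lane : lanes) total += lane;
    for (; i < n; ++i) total += data[i];    // scalar tail
    return total;
}
```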
Designed end-to-end security: Zero Trust + Intel SGX + anti-DDoS + cold wallet protection + secure SDLC. Result: 100% compliance with Estonian regulations, zero incidents, and full protection of capital and proprietary code.
Engineered a system to query a 50 TB / 500B-row market data lake in real time, cutting latency from 60+ seconds to milliseconds. I designed the ClickHouse architecture (schema, clustering, OS tuning) long before ClickHouse became the industry standard. The result: < 50 ms query latency, a 1200x speedup.
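The schema shape, not the exact design, is the transferable part: a MergeTree table sorted by (symbol, ts) turns symbol/time-range queries into narrow range reads. A sketch using the clickhouse-cpp client, with hypothetical table and column names:

```cpp
// Illustrative ClickHouse schema and query via clickhouse-cpp
// (https://github.com/ClickHouse/clickhouse-cpp). Table and column
// names are hypothetical, not the original design.
#include <clickhouse/client.h>
#include <cstdio>

int main() {
    clickhouse::Client client(clickhouse::ClientOptions().SetHost("localhost"));
    client.Execute(
        "CREATE TABLE IF NOT EXISTS ticks ("
        "  ts     DateTime64(9),"
        "  symbol LowCardinality(String),"
        "  price  Float64,"
        "  qty    UInt32"
        ") ENGINE = MergeTree"
        " PARTITION BY toDate(ts)"
        " ORDER BY (symbol, ts)");  // primary-key prefix drives pruning

    // Touches only the granules for one symbol in a one-hour window.
    client.Select(
        "SELECT count() FROM ticks"
        " WHERE symbol = 'AAPL' AND ts >= now() - INTERVAL 1 HOUR",
        [](const clickhouse::Block& block) {
            if (block.GetRowCount() > 0)
                std::printf("%llu rows\n",
                            static_cast<unsigned long long>(
                                block[0]->As<clickhouse::ColumnUInt64>()->At(0)));
        });
}
```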
Distributed Financial DB for a NASDAQ-listed Fintech (2017)
Co-designed (with one other engineer) a geo-distributed financial processor that could survive a full datacenter failure. Built on ePaxos and RAMCloud principles, the system passed Jepsen, the de facto torture test for distributed systems.
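The property that makes ePaxos attractive for geo-distribution deserves a toy illustration: only conflicting commands (same key) need ordering, so unrelated payments can commit in parallel across datacenters. Below is a single-replica sketch of dependency collection, the principle rather than the protocol; all names are hypothetical.

```cpp
// Toy model of ePaxos-style dependency tracking: a command depends only
// on earlier commands touching the same key, so non-conflicting commands
// carry empty dependency sets and commute. A sketch of the principle,
// not the protocol.
#include <cstdio>
#include <map>
#include <set>
#include <string>

struct Command {
    int id;
    std::string key;
    std::set<int> deps;  // ids of earlier conflicting commands
};

class Replica {
    std::map<std::string, std::set<int>> seen_;
public:
    Command propose(int id, const std::string& key) {
        Command c{id, key, seen_[key]};  // depend on all prior conflicts
        seen_[key].insert(id);
        return c;
    }
};

int main() {
    Replica r;
    r.propose(1, "alice");
    Command b = r.propose(2, "bob");    // no conflict: commits in parallel
    Command c = r.propose(3, "alice");  // conflicts with 1: ordered after
    std::printf("cmd 2 deps: %zu, cmd 3 deps: %zu\n",
                b.deps.size(), c.deps.size());
}
```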
Built a scanner in 4 months to monitor 50K+ servers 24/7, even during DC failures. Deeply integrated with internal infra across 3 datacenters, it saved the company over $1M vs. a commercial solution. It's still in use today.
At my R&D lab, 1ms.tech, I treat trading as a science, not speculation. The process: form a hypothesis about a market inefficiency, test it rigorously, and accept only discoveries backed by undeniable evidence. It's the antithesis of overfitting models to random noise.
This is the ultimate application of my 15 years in systems engineering: building the architecture to turn market chaos into a laboratory for verifiable discoveries.
Adopted a bare-metal C approach in HPC: writing code as a direct conversation with the CPU (MESI, caches, NUMA, prefetching) to turn hardware behavior into predictable performance.
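Two small C++ examples of what that conversation looks like, purely illustrative: cache-line-aligned counters so MESI never ping-pongs a line between writers, and software prefetch during a pointer chase (__builtin_prefetch is a GCC/Clang builtin).

```cpp
// Illustrative only: per-core counters padded to cache-line size, and
// prefetching the next list node while the current one is processed.
#include <cstddef>
#include <cstdio>
#include <vector>

constexpr std::size_t kCacheLine = 64;

// alignas rounds sizeof up to 64: two counters never share a cache
// line, so concurrent writers don't invalidate each other's line
// (no false sharing, no MESI ping-pong).
struct alignas(kCacheLine) PaddedCounter {
    unsigned long value = 0;
};

struct Node {
    Node* next = nullptr;
    unsigned long payload = 0;
};

unsigned long walk(const Node* head) {
    unsigned long sum = 0;
    for (const Node* n = head; n != nullptr; n = n->next) {
        if (n->next)  // hint the next node into cache before we need it
            __builtin_prefetch(n->next, /*rw=*/0, /*locality=*/3);
        sum += n->payload;
    }
    return sum;
}

int main() {
    std::vector<Node> nodes(1000);
    for (std::size_t i = 0; i + 1 < nodes.size(); ++i) {
        nodes[i].next = &nodes[i + 1];
        nodes[i].payload = i;
    }
    std::printf("sum=%lu sizeof(PaddedCounter)=%zu\n",
                walk(nodes.data()), sizeof(PaddedCounter));
}
```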
Deepened my expertise in C++, modern Bazel, and binary-level tuning. I worked across the stack, from iOS (Swift, Obj-C) to Flutter (Dart), building systems, not just services.
Deployed Kubernetes across multiple datacenters and built high-concurrency systems with async Python, long before either was mainstream.
Shifted from full-time employment to independent consulting, making an early, successful bet on Go for high-load infrastructure.
Launched production SPAs with Angular.js and MongoDB in the pre-MEAN era, making a strategic bet on JSON-native infrastructure while others clung to the LAMP stack.
Built real-time platforms in Erlang and Ruby. Dev/prod parity with Vagrant: four years before Docker existed.
Worked with a talented team of engineers to develop a dynamic-schema API (conceptually similar to what later became GraphQL) for a VoD platform, and to build a 10 Gbps video CDN: extreme scale for the time.
At 14, I left formal schooling to work full-time as a software engineer: thanks to an exceptional team who trusted me, supported my growth, and gave me the opportunity to start my career years ahead of schedule.