About
I'm a Lead Engineer at Angara Ecommerce. I run the backend: high-traffic commerce, personalisation, payments, fulfillment, plus the infra under all of it.
Before this I spent four years at mavQ (ex-MTX Group). Mostly building HIPAA and SOC 2 SaaS products, a few platforms for US state governments, and some early AI stuff on top of document pipelines.
I'm happiest one layer below the application. Distributed systems, Rust, observability, tools that make other engineers' lives easier. On the side right now I'm working on Wiremap and CapybaraDB.
In production I've shipped in Rust, Go, TypeScript, Python, Java, C#, and C/C++. I've run workloads on AWS, GCP, Azure, and bare metal. Companies have ranged from seed-stage startups to enterprises with multi-million-dollar platforms.
Experience
-
Aug 2025 — present
Lead Engineer · Angara Ecommerce
Noida
- Re-architected 10+ core commerce modules: personalisation, product customisation, payments, fulfillment.
- Took the platform through 2x traffic growth and cut the AWS bill by 30% in the process.
- Running EKS and GKE with Helm, ArgoCD, Terraform. Sitting at 99.9%+ uptime.
- Lead 3+ cross-functional teams. Our goals map to conversion, AOV, and revenue.
- Shipped CRO work that lifted conversion by 12 to 18% on our main funnels.
-
Apr 2021 — Aug 2025
Senior Software Engineer · mavQ (ex-MTX Group)
Jaipur / Remote · 4 yrs 4 mos
- Built the distributed document processing and indexing stack. High ingest, low-latency querying.
- Shipped enterprise case management platforms with AI search, workflows, reporting, and call-centre integration.
- Owned architecture on several multi-million-dollar projects. Did PoCs and estimations during pre-sales.
- Most of this ran under HIPAA and SOC 2, some for US state government customers.
- Mentored engineers, ran technical interviews, owned on-call.
-
Jan 2020 — Apr 2021
Associate Software Engineer · Impledge Technologies
Noida · 1 yr 4 mos
- Did full-stack work across .NET, NestJS, Angular, React, Django.
- Shipped features to real customers on week-long cycles.
- Honestly, this was the year I learned the most, just by writing a lot of code.
Projects
-
Wiremap
An API security and codebase intelligence CLI. It's a single Rust binary you run against a Python, Node, Next, Angular or React codebase. It tells you your API surface, flags OWASP API Top 10 issues, and draws your module dependency graph. Everything runs locally. It doesn't phone home.
A single Rust binary that reads your codebase and tells you where your APIs are vulnerable. Ships as a curl-installable binary. No cloud. No phone home. Safe to run in regulated environments.
Why I built it
Most API-security tools want a cloud account and a copy of your source. That's a non-starter for half the companies I've worked at. I wanted something you run against your repo and get answers in under a minute.
How it works
- Tree-sitter parsers for each supported language handle the AST.
- A ruleset mapped to OWASP API Top 10 runs against the AST plus a call graph.
- Findings get written into an embedded SQLite DB that the bundled React + D3 UI reads via an Axum server on localhost.
- The binary is self-contained. UI, server, rules, parsers, all baked in.
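The rule pass boils down to walking resolved routes and flagging the ones that break a rule. Here is a toy sketch of that idea in Rust, with made-up node shapes and a single OWASP-API2-style check, not Wiremap's actual ruleset:

```rust
// Toy sketch of AST/call-graph rule matching (not Wiremap's real code).
// A route is flagged when no auth-like middleware appears on its call path.

#[derive(Debug)]
struct Route {
    path: String,
    middlewares: Vec<String>, // resolved from the call graph
}

/// OWASP API2 (Broken Authentication) style check: collect routes whose
/// middleware chain contains no auth-like entry.
fn find_unauthenticated_routes(routes: &[Route]) -> Vec<String> {
    routes
        .iter()
        .filter(|r| !r.middlewares.iter().any(|m| m.contains("auth")))
        .map(|r| r.path.clone())
        .collect()
}

fn main() {
    let routes = vec![
        Route { path: "/admin".into(), middlewares: vec!["logging".into()] },
        Route { path: "/login".into(), middlewares: vec!["rate_limit".into(), "auth_check".into()] },
    ];
    println!("{:?}", find_unauthenticated_routes(&routes)); // ["/admin"]
}
```

The real tool gets `Route`-like facts out of tree-sitter ASTs; the check itself stays this simple.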
What's interesting
- Zero runtime dependencies. Works in an airgap.
- Dependency graph visualisation uses D3 force layout.
- Rules are hot-reloadable during development.
-
CapybaraDB
Plug in your databases and files. Ask questions in English. Get answers back with citations. Under the hood, Go does ingest and orchestration, Rust does the search (tantivy for BM25, usearch for vectors, DuckDB for metadata). Results get fused with RRF, reranked, and streamed out via SSE.
A question-answering engine for your company's own data. You connect sources, it streams them in, you ask in English, it answers with citations that actually point to the right row or file.
Why I built it
Every company I've worked at has the same problem: nobody can find anything. The data's spread across Postgres, Mongo, a few S3 buckets, a fileshare, and four Notion workspaces. I wanted one endpoint that could answer questions across all of it without becoming a data-warehouse project.
How it works
- Ingest. Connectors stream in continuously: Postgres CDC, Mongo change streams, filesystem watchers, S3.
- Process. Chunk, dedupe via SHA-256, embed to vectors.
- Store. A Rust search core: tantivy for BM25, usearch for HNSW vectors, DuckDB for metadata filters.
- Query. LLM planner turns the question into a plan. Hybrid BM25+vector search merged with RRF, filtered, cross-encoder reranked, synthesised by an LLM, streamed token-by-token over SSE.
What's interesting
- Go handles orchestration, Rust handles the search core. They talk over gRPC.
- Multi-tenant via namespaces. Connector creds are AES-GCM encrypted.
- Cross-encoder rerank is what takes results from "pretty relevant" to "right answer".
-
Persistent Cache-DB
A key/value store that hits 320M records/sec. It uses memory-mapped I/O and a binary format I tuned for the access pattern I cared about. Zero syscalls on the read path. Survives crashes.
A persistent key/value store I wrote to find out how fast a read path can actually get when you stop making excuses for it.
Why I built it
I kept reading benchmark numbers that felt off. I wanted to feel for myself how much performance you leave on the table by using standard libraries.
How it works
- The whole store is one memory-mapped file. The kernel's page cache is my cache.
- Records are 32-byte headers, cache-line aligned. Each has a hash, TTL, flags, value offset, length, and CRC.
- A robin-hood hash table with bounded probe distance sits on top.
- Power-of-two table size so probe is a bitmask, not a modulo.
- Values are returned as zero-copy slices over the mmap. No allocation on the hot path.
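The power-of-two trick from the list above, probe index as a bitmask, looks like this. A deliberately tiny sketch: a 16-slot table with bounded linear probing, not the store's real robin-hood layout or 32-byte headers:

```rust
/// Minimal open-addressing lookup over a power-of-two table, so the slot
/// index is `hash & MASK` instead of `hash % len`. Illustration only.
const TABLE_BITS: u32 = 4; // 16 slots
const MASK: u64 = (1 << TABLE_BITS) - 1;
const MAX_PROBE: u64 = 8; // bounded probe distance

#[derive(Clone, Copy, PartialEq)]
enum Slot {
    Empty,
    Full(u64, u64), // (key hash, value offset into the mmap)
}

fn get(table: &[Slot; 16], hash: u64) -> Option<u64> {
    for i in 0..MAX_PROBE {
        match table[(hash.wrapping_add(i) & MASK) as usize] {
            Slot::Empty => return None, // hole means the key is absent
            Slot::Full(h, off) if h == hash => return Some(off),
            _ => {} // collision, keep probing
        }
    }
    None // probe bound exceeded
}

fn insert(table: &mut [Slot; 16], hash: u64, off: u64) {
    for i in 0..MAX_PROBE {
        let idx = (hash.wrapping_add(i) & MASK) as usize;
        if table[idx] == Slot::Empty {
            table[idx] = Slot::Full(hash, off);
            return;
        }
    }
}

fn main() {
    let mut t = [Slot::Empty; 16];
    insert(&mut t, 0xdead, 42);
    println!("{:?}", get(&t, 0xdead)); // Some(42)
}
```

The read path here really is one hash, one masked load, one compare. Robin-hood insertion on top keeps the worst-case probe distance small enough to bound.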
What I learned
- The read path is: one hash, one load, one branch, one slice. That's the budget.
- At this speed, every syscall is a catastrophe. Every alloc is a catastrophe.
- Most "slow" storage is slow because it apologises to the OS too often.
-
Rusty OS
A very small operating system in Rust. I wrote the bootloader, the kernel, paging, and a cooperative scheduler. It boots in a VM. I wrote it because I wanted to know what actually happens before main().
I wrote an operating system from scratch in Rust so I'd finally know what happens between pressing the power button and seeing a prompt.
Why I built it
Most engineers I know treat the OS as a black box. I was tired of that. I wanted to be the kind of person who could reason about paging, interrupts, and the jump from 16-bit to 64-bit without looking things up.
How it works
- BIOS loads my 512-byte MBR, which loads the stage-2 bootloader.
- Stage-2 flips into protected mode, sets up a GDT, enables A20.
- Then into long mode with paging enabled, identity-mapping the low 2GB.
- Kernel init: IDT, timer, keyboard, bump allocator.
- Cooperative round-robin scheduler. A toy shell on top.
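For a feel of what the paging setup implies, here is how a 64-bit virtual address splits into 4-level page-table indices. Pure arithmetic, not kernel code, and note the identity-mapped low 2GB typically uses 2MB pages, which skips the PT level:

```rust
/// Decompose an x86-64 virtual address into 4-level page-table indices.
/// Each level consumes 9 bits; the low 12 bits are the page offset.
fn walk_indices(vaddr: u64) -> (u64, u64, u64, u64, u64) {
    let pml4 = (vaddr >> 39) & 0x1ff;
    let pdpt = (vaddr >> 30) & 0x1ff;
    let pd = (vaddr >> 21) & 0x1ff;
    let pt = (vaddr >> 12) & 0x1ff;
    let off = vaddr & 0xfff;
    (pml4, pdpt, pd, pt, off)
}

fn main() {
    // With the low 2GB identity-mapped, vaddr == paddr for this address.
    println!("{:?}", walk_indices(0xb8000)); // VGA text buffer: (0, 0, 0, 184, 0)
}
```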
What I learned
- Reading no_std Rust is a whole different skill. You build your own vocabulary.
- Page tables are a beautiful data structure. Until you debug one.
- Every layer above this exists because of a specific problem this layer doesn't solve.
-
Observability Agent
A Kubernetes DaemonSet that runs one pod per node and ships metrics, logs, and traces. The hot-path collector is in Rust, the control-plane sidecar is in Go, and the rule DSL is in Python.
A Kubernetes-native agent that sits on every node and ships everything useful to a central collector. I got tired of running three different agents for three different signals.
Why I built it
Most clusters end up running a Prometheus agent, a log shipper, and a trace agent as three separate things. That's three deploys, three YAML files, three sets of resource limits. I wanted one.
How it works
- Deploys as a DaemonSet, so one pod per node automatically.
- The hot-path collector is in Rust: zero-alloc ring buffer, OTLP gRPC to the collector.
- The control-plane sidecar is in Go. It talks to the Kubernetes API, discovers pods, wires routing.
- A small Python service lets you author rules in a DSL and hot-reload them without redeploys.
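The "zero-alloc" part of the collector is just a fixed-capacity ring: all memory up front, push and pop never touch the allocator. A single-threaded sketch of the shape (the agent's real buffer is lock-free and shared between threads):

```rust
/// Fixed-capacity ring buffer: allocated once, no allocation on push/pop.
struct Ring<const N: usize> {
    buf: [u64; N],
    head: usize, // next write
    tail: usize, // next read
    len: usize,
}

impl<const N: usize> Ring<N> {
    fn new() -> Self {
        Ring { buf: [0; N], head: 0, tail: 0, len: 0 }
    }

    fn push(&mut self, v: u64) -> bool {
        if self.len == N {
            return false; // full: drop the sample, never block the hot path
        }
        self.buf[self.head] = v;
        self.head = (self.head + 1) % N;
        self.len += 1;
        true
    }

    fn pop(&mut self) -> Option<u64> {
        if self.len == 0 {
            return None;
        }
        let v = self.buf[self.tail];
        self.tail = (self.tail + 1) % N;
        self.len -= 1;
        Some(v)
    }
}

fn main() {
    let mut r: Ring<4> = Ring::new();
    r.push(1);
    r.push(2);
    println!("{:?} {:?}", r.pop(), r.pop()); // Some(1) Some(2)
}
```

Dropping on overflow instead of blocking is the important design choice: the hot path must never stall waiting for the exporter.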
What's interesting
- The split is intentional. Rust does what Rust does best (go fast, don't allocate). Go does control-plane work. Python is for the ergonomic layer.
- Exports to Prometheus, Loki, Tempo over OTLP. Nothing proprietary.
-
Human Protocol
A way to verify a request came from a human without cookies, tracking, or CAPTCHAs. Clients pay a small proof-of-work cost once per session. Bots running at scale pay it at every URL. The token is blind-signed so the verifier never learns who you are.
A protocol that makes bots too expensive to run without punishing humans. No CAPTCHAs, no fingerprinting, no cookies. Just asymmetric cost.
Why I built it
CAPTCHAs are a tax on humans and a rounding error for bot farms. I wanted to flip that. The idea is old (Hashcash, etc.), but I hadn't seen a version that was both privacy-preserving and stateless.
How it works
- Client asks the edge for something. Edge responds with a proof-of-work challenge.
- Client solves the challenge, pays ~200ms of CPU, sends the solution.
- Edge issues a blind-signed token. The signer never learns which user got which token.
- Client reuses the token across a bounded number of requests. Origin verifies the signature and serves.
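The challenge/solve/verify loop is ordinary proof-of-work: find a nonce whose hash has enough leading zero bits. A sketch of the mechanism, using std's `DefaultHasher` as a stand-in for a real cryptographic hash (illustration only, not the protocol's actual primitive):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// DefaultHasher is NOT cryptographic; it stands in for SHA-256-class
// hashing so the sketch stays dependency-free.
fn hash_of(challenge: &str, nonce: u64) -> u64 {
    let mut h = DefaultHasher::new();
    challenge.hash(&mut h);
    nonce.hash(&mut h);
    h.finish()
}

/// Client side: brute-force a nonce. Expected cost is ~2^bits hash calls.
fn solve(challenge: &str, bits: u32) -> u64 {
    (0u64..)
        .find(|&n| hash_of(challenge, n).leading_zeros() >= bits)
        .unwrap()
}

/// Edge side: one hash call, constant cost.
fn verify(challenge: &str, nonce: u64, bits: u32) -> bool {
    hash_of(challenge, nonce).leading_zeros() >= bits
}

fn main() {
    let nonce = solve("session-abc", 12); // ~4096 tries on average
    assert!(verify("session-abc", nonce, 12));
    println!("nonce: {nonce}");
}
```

The asymmetry is the whole point: solving costs ~2^bits hashes, verifying costs one.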
Why it works
- For a human: one challenge, invisible. Zero friction.
- For a bot crawling 10M URLs: that's weeks of CPU across a fleet. The economics stop working.
- Blind signatures mean you get verification without identification.
-
Lazy Script
A small scripting language I wrote in Python to understand how languages actually work. Hand-rolled lexer (no regex), Pratt parser for expressions, tree-walking interpreter. Closures and first-class functions.
A small scripting language I built to stop being mystified by languages. It has closures, first-class functions, and a hand-rolled everything.
Why I built it
Every time I hit an edge case in Python or JavaScript, I realised I didn't actually understand how the language got there. The only way to fix that is to write one yourself.
How it works
- Lexer. A cursor and a state machine. No regex. I wanted to feel every transition.
- Parser. Recursive descent for statements, Pratt parser for expression precedence.
- AST. Small, typed, easy to walk.
- Eval. Tree-walking interpreter with lexical closures.
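The Pratt technique is easiest to see on a toy grammar. This is the precedence mechanism in miniature, single-digit numbers with `+` and `*`, evaluating as it parses, not Lazy Script's actual grammar:

```rust
/// Minimal Pratt parser: each operator has a binding power, and a
/// recursive call only consumes operators that bind at least as tightly.
fn parse_expr(toks: &[char], pos: &mut usize, min_bp: u8) -> i64 {
    // A digit is always a valid prefix.
    let mut lhs = toks[*pos].to_digit(10).unwrap() as i64;
    *pos += 1;
    while *pos < toks.len() {
        let op = toks[*pos];
        let bp = match op {
            '+' => 1,
            '*' => 2,
            _ => break,
        };
        if bp < min_bp {
            break; // the caller binds tighter: hand control back
        }
        *pos += 1;
        let rhs = parse_expr(toks, pos, bp + 1); // bp + 1 => left-associative
        lhs = if op == '+' { lhs + rhs } else { lhs * rhs };
    }
    lhs
}

fn eval(src: &str) -> i64 {
    let toks: Vec<char> = src.chars().filter(|c| !c.is_whitespace()).collect();
    parse_expr(&toks, &mut 0, 0)
}

fn main() {
    println!("{}", eval("2 + 3 * 4")); // 14: * binds tighter than +
}
```

All of operator precedence lives in that one `bp < min_bp` comparison; adding an operator is one new match arm.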
What I learned
- Most "weird" language behaviour is actually obvious once you've written the parser.
- Closures are trivial at the interpreter level and hilariously painful at the compiler level.
- If you can implement let, you can implement most of the rest.
-
ESP32 AI Robot
A rover built on an ESP32-CAM. It navigates on its own, runs vision on-device, fires webhooks when it sees motion, and falls back to a Bluetooth gamepad if I want to drive it.
A small rover I built on weekends. It drives itself, sees motion, and yells over MQTT when something interesting happens.
Why I built it
I wanted to touch hardware again. Software-only projects start to feel very clean and very not-real after a while. This one breaks if the battery is low.
How it works
- ESP32-CAM for vision and Wi-Fi. Two DC motors driven by an L298N.
- Firmware in C++. A small Rust bridge runs on a laptop for the dashboard.
- On-device motion detection. When it fires, it publishes to MQTT and hits a webhook.
- Bluetooth gamepad as a fail-safe. If autonomy gets confused, I take over.
What was interesting
- Memory on an ESP32 is painfully tight. You feel every allocation.
- Wi-Fi drops mid-drive are their own category of bug.
- Getting the motor PWM and camera stream to coexist without jitter took longer than I'd like to admit.
-
FlowX
A no-code workflow builder. You drag and drop nodes on a canvas, and a real-time engine runs them as a DAG. Runs are versioned and replayable.
A no-code workflow builder. Business users draw the workflow. An engine behind it runs the workflow as a DAG in real time. Every run is versioned and replayable.
Why I built it
Product people kept asking engineering to build "just a small workflow for this one edge case" every week. So I built the thing that let them build it themselves.
How it works
- React + Konva canvas for the drag-and-drop editor. Outputs a graph as JSON.
- Node.js execution engine. The graph becomes a DAG, topologically sorted.
- Event-driven runtime. Each node emits events the engine reacts to.
- Every run gets a version stamp so you can replay any past run byte-for-byte.
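The topological-sort step is Kahn's algorithm. A sketch with numeric node ids (the real engine works on the JSON graph, and the node names in the comment are hypothetical):

```rust
use std::collections::VecDeque;

/// Kahn's algorithm: topologically order a DAG given as (from, to) edges.
/// Returns None when the graph has a cycle.
fn topo_sort(n: usize, edges: &[(usize, usize)]) -> Option<Vec<usize>> {
    let mut indeg = vec![0usize; n];
    let mut adj = vec![Vec::new(); n];
    for &(u, v) in edges {
        adj[u].push(v);
        indeg[v] += 1;
    }
    // Start from every node with no incoming edges.
    let mut q: VecDeque<usize> = (0..n).filter(|&i| indeg[i] == 0).collect();
    let mut order = Vec::with_capacity(n);
    while let Some(u) = q.pop_front() {
        order.push(u);
        for &v in &adj[u] {
            indeg[v] -= 1;
            if indeg[v] == 0 {
                q.push_back(v);
            }
        }
    }
    if order.len() == n { Some(order) } else { None } // None => cycle, reject
}

fn main() {
    // e.g. fetch(0) -> transform(1) -> notify(2); fetch(0) -> log(3)
    println!("{:?}", topo_sort(4, &[(0, 1), (1, 2), (0, 3)])); // Some([0, 1, 3, 2])
}
```

Returning `None` on a cycle is what lets the editor reject an invalid graph before a run ever starts.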
What was interesting
- Versioned runs turn incidents into "let me replay the exact thing" debugging.
- Getting the canvas to feel good to use took more work than the engine.
-
File Drop
File sharing without a middleman. The Ethereum contract stores ownership and access rules. IPFS stores the bytes. The client encrypts everything with AES before it leaves the browser.
A way to share files without trusting a server in the middle. Contract says who owns what, IPFS holds the bytes, the client encrypts everything before upload.
Why I built it
Dropbox and Drive are fine. But there's a different question: can file sharing work without a company in the middle at all? I wanted to see how close you can get with today's tools.
How it works
- Solidity contract stores file metadata: owner, ACL, content hash, encrypted key material.
- IPFS pins the encrypted blob. The CID goes on-chain.
- Before upload, the client generates an AES key and encrypts the file. The key is wrapped for each recipient using their public key.
- To read, a recipient fetches from IPFS, unwraps the key on-chain read, decrypts locally.
What was interesting
- Gas costs shape everything. You spend a lot of design effort just avoiding writes.
- The trust model is honest: if you lose your key, the file is gone. There's no support line.
Stack
Languages
- TypeScript · JavaScript (7 yrs)
- Python (7 yrs)
- Java (5 yrs)
- Go (4 yrs)
- Rust (4 yrs)
- C / C++ (3 yrs)
- C# (2 yrs)
Frameworks
- Angular (7 yrs)
- React (7 yrs)
- Node · NestJS (6 yrs)
- Next.js (4 yrs)
- Django · FastAPI (4 yrs)
- Spring Boot (3 yrs)
- Axum · Tokio (3 yrs)
- .NET · ASP.NET (2 yrs)
Data
- Postgres · MySQL (7 yrs)
- MongoDB (6 yrs)
- Redis (6 yrs)
- Elastic · OpenSearch (5 yrs)
- Kafka · RabbitMQ (5 yrs)
- S3 · GCS (6 yrs)
Cloud & Infra
- AWS (7 yrs)
- GCP (7 yrs)
- Azure (3 yrs)
- Bare metal (4 yrs)
- Kubernetes · Helm (5 yrs)
- Terraform · ArgoCD (4 yrs)
Things I've gotten good at
- HIPAA and SOC 2 SaaS. Multi-tenant platforms with audit trails, encryption, and PHI isolation done properly.
- US state-government SaaS. Case management and workflow tools that actually meet residency and accessibility rules.
- AI in production. RAG, hybrid search, LLM orchestration. Real users, not demos.
- Distributed systems. Event-driven architectures, idempotency, CDC pipelines, hot paths that don't blow up.
- Systems programming. Writing Rust where it matters. Kernels, memory-mapped storage, lock-free structures.
- Cloud-native platforms. Kubernetes-first, multi-cloud failover, keeping infra bills sane.
- Developer tools. CLIs, SDKs, linters, whatever helps the rest of the team move faster.