Portfolio

Works

Deployed apps, open-source experiments, and a curated collection of projects spanning AI, full-stack development, and more.

Live deployments

echo-vault-one.vercel.app

Echo Vault

Privacy-first journaling app with flexible LLM and embedding support. Uses RAG with PGVector for semantic search across entries, configurable time decay for conversational memory, and real-time AI-generated reflections via Celery + Redis streaming. Supports self-hosted or cloud model endpoints.

FastAPI
Next.js
PGVector
RAG
Celery
Redis
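The time-decay idea above can be sketched in a few lines. This is a minimal illustration, not Echo Vault's actual code: the real ranking runs inside PGVector as a SQL similarity query, and the function names and half-life parameter here are hypothetical. It shows how a raw cosine-similarity score can be down-weighted exponentially by entry age so that conversational memory favors recent entries.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def decayed_score(similarity, age_days, half_life_days=30.0):
    """Weight a similarity score by exponential time decay.

    half_life_days controls how fast old entries fade: an entry
    exactly half_life_days old scores half its raw similarity.
    """
    return similarity * 0.5 ** (age_days / half_life_days)

def rank_entries(query_vec, entries, half_life_days=30.0):
    """entries: list of (entry_id, embedding, age_days).
    Returns entry ids ordered by decayed similarity, best first."""
    scored = [
        (decayed_score(cosine_similarity(query_vec, emb), age, half_life_days), eid)
        for eid, emb, age in entries
    ]
    return [eid for _, eid in sorted(scored, reverse=True)]
```

With a 30-day half-life, a perfectly matching entry from two months ago can rank below a slightly weaker match written today, which is the intended behavior for a journaling memory.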

comic-ranker.vercel.app

Character Rank

Interactive ranking platform with Elo-based matchup system and 3D character visualization. Serverless backend with tRPC, full TypeScript, and offline-compatible data layer.

Next.js
TypeScript
tRPC
Three.js
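The Elo-based matchup system rests on a standard pair of formulas: an expected win probability from the rating gap, and a rating transfer proportional to how surprising the result was. A minimal sketch (the function names and K-factor are illustrative, not taken from the project's source):

```python
def expected_score(rating_a, rating_b):
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def update_elo(winner, loser, k=32.0):
    """Return updated (winner, loser) ratings after one matchup.

    The winner gains k * (1 - expected), so an upset over a much
    stronger opponent moves ratings far more than an expected win.
    """
    expected_win = expected_score(winner, loser)
    delta = k * (1.0 - expected_win)
    return winner + delta, loser - delta
```

For two equally rated characters, the expected score is 0.5, so a win transfers exactly k/2 points between them.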

fireblog-gray.vercel.app

Fireblog

Full-stack blogging platform with Google OAuth, real-time Firestore sync, and rich text editing.

Next.js
Tailwind
Firestore

Open source

github.com/akumar23/fleet

Fleet

Multi-cluster Kubernetes management CLI built in Go. Concurrent execution engine using worker pools runs operations across clusters in parallel. Supports batch apply/get/delete with configurable parallelism, dry-run safety, and multi-format output (table, JSON, YAML). Tested across 8 clusters.

Go
Kubernetes
CLI
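Fleet itself is written in Go, but the worker-pool pattern it describes (bounded parallelism across clusters, plus a dry-run guard) can be sketched language-agnostically. The sketch below uses Python's thread pool purely for illustration; the function and its parameters are hypothetical stand-ins for Fleet's actual CLI flags:

```python
from concurrent.futures import ThreadPoolExecutor

def run_across_clusters(clusters, operation, parallelism=4, dry_run=False):
    """Run operation(cluster) across clusters with a bounded worker pool.

    At most `parallelism` operations execute at once, mirroring a
    configurable worker pool; results are collected per cluster.
    """
    if dry_run:
        # Dry-run safety: report the plan without executing anything.
        return {c: "would run" for c in clusters}
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        futures = {c: pool.submit(operation, c) for c in clusters}
        return {c: f.result() for c, f in futures.items()}
```

Bounding the pool matters in practice: firing an apply at dozens of clusters simultaneously can overwhelm API servers, so the parallelism knob trades speed for safety.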

github.com/akumar23/Collision-Detection-Neural-Net

Collision Detection NN

Autonomous navigation system using a neural network trained for real-time obstacle avoidance. Achieves a 99% success rate in a simulated environment.

Python
scikit-learn
Pygame

github.com/akumar23/SlitherRL

SlitherRL

Reinforcement learning agent trained to play Snake. Learns entirely from game-state observations using deep Q-learning, with no hand-crafted heuristics.

Go
DQN
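The heart of Q-learning, deep or tabular, is one temporal-difference update. As a simplified sketch (a tabular dict standing in for the neural network a DQN actually uses, with hypothetical names and a generic action set rather than SlitherRL's own):

```python
ACTIONS = ["up", "down", "left", "right"]

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference update, the core of (deep) Q-learning.

    q: dict mapping (state, action) -> value. A DQN replaces this
    table with a neural network trained toward the same target:
    reward + gamma * max_a Q(next_state, a).
    """
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    target = reward + gamma * best_next
    current = q.get((state, action), 0.0)
    q[(state, action)] = current + alpha * (target - current)
```

Because the target is built only from observed rewards and the agent's own value estimates, no hand-crafted heuristics are needed; the policy emerges from the game state alone.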

More projects

AI & Machine Learning

4 projects

LLM Fine-Tuning Toolkit

Command-line toolkit that brings billion-parameter language model fine-tuning to consumer hardware through LoRA and 4-bit quantization, reducing memory requirements by 90%.

Python
PyTorch
HuggingFace
LoRA
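LoRA's memory savings come from freezing the base weight matrix W and training only a low-rank update B @ A. A back-of-the-envelope sketch of the trainable-parameter reduction (the function is illustrative, not from the toolkit, and shows only the LoRA side of the savings; 4-bit quantization shrinks the frozen weights separately):

```python
def lora_param_counts(d_in, d_out, rank):
    """Compare trainable parameters for one linear layer.

    Full fine-tune trains the whole d_out x d_in matrix W.
    LoRA trains only B (d_out x rank) and A (rank x d_in),
    applied as W + B @ A with W frozen.
    """
    full = d_in * d_out
    lora = rank * (d_in + d_out)
    return full, lora

# A rank-8 adapter on a 4096x4096 layer trains under 1% of the weights.
full, lora = lora_param_counts(4096, 4096, 8)
```

For a 4096x4096 attention projection, rank-8 LoRA trains about 65K parameters instead of roughly 16.8M, which is where the order-of-magnitude memory reduction on consumer GPUs comes from.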