Projects

Selected AI builds: Luna Hub, Live NPC, ChefByte, and an LLM-based floorplan generator.

Luna Hub: Your Personal AI Automation Platform

LunaHub.dev

Project Summary:

Supervisor-driven AI hub with Caddy as the single entry point, GitHub OAuth, OpenAI-compatible Agent API, and FastMCP. Extensions and services are auto-discovered, port-assigned, and secured with per-service API keys. Hub UI manages tools, presets, and an update queue that syncs configs and restarts cleanly.
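To make the discovery flow concrete, here is a minimal sketch of deterministic port assignment and per-service key generation. It is an illustration under assumptions rather than the actual Luna Hub source: the extensions directory layout, the port range, and the key length are all hypothetical.

  import hashlib
  import secrets
  from pathlib import Path

  PORT_BASE, PORT_SPAN = 9000, 1000  # assumed range, not the real config

  def assign_port(service_name: str) -> int:
      """Deterministic: the same service name always maps to the same port."""
      digest = hashlib.sha256(service_name.encode()).digest()
      return PORT_BASE + int.from_bytes(digest[:4], "big") % PORT_SPAN

  def discover_extensions(root: Path) -> dict:
      """Scan an extensions directory; give each service a port and an API key."""
      registry = {}
      for ext_dir in sorted(p for p in root.iterdir() if p.is_dir()):
          registry[ext_dir.name] = {
              "port": assign_port(ext_dir.name),
              "api_key": secrets.token_urlsafe(32),  # per-service secret
          }
      return registry

A real registry would also need to resolve hash collisions and persist keys across restarts, which is where the update queue and config sync come in.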

Features:

  • Caddy + GitHub OAuth front door for auth, Agent API, MCP, and supervisor API
  • OpenAI-compatible Agent API with FastMCP hubs (main + named hubs with API keys); see the sketch after this list
  • Extension discovery with deterministic port assignment and per-service API key generation
  • Supervisor orchestrates auth, Agent API, MCP, Hub UI dev server, and extension services
  • Update queue + config sync keep master_config and .env consistent; restart-safe
  • Hub UI Tool/Agent Preset manager for enabling tools per hub and creating scoped agents
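On the OpenAI-compatible bullet: compatibility here means exposing the same request/response shape as POST /v1/chat/completions, so standard OpenAI clients can point at the hub unchanged. A minimal FastAPI sketch of that shape, with an echo reply standing in for the real agent and the field handling simplified:

  import time
  from fastapi import FastAPI
  from pydantic import BaseModel

  app = FastAPI()

  class Message(BaseModel):
      role: str
      content: str

  class ChatRequest(BaseModel):
      model: str
      messages: list[Message]

  @app.post("/v1/chat/completions")
  def chat_completions(req: ChatRequest) -> dict:
      # A real hub would route to an agent with its enabled MCP tools here.
      reply = f"echo: {req.messages[-1].content}"
      return {
          "id": "chatcmpl-demo",
          "object": "chat.completion",
          "created": int(time.time()),
          "model": req.model,
          "choices": [{
              "index": 0,
              "message": {"role": "assistant", "content": reply},
              "finish_reason": "stop",
          }],
      }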

Stack:

  • Backend: FastAPI Agent API, FastMCP, Caddy
  • Frontend: React, Vite (Hub UI)
  • Infrastructure: Supervisor, GitHub OAuth, Docker

Luna Hub dashboard overview

Live NPC: Real-Time AI Societies for Games

huggingface.co/spaces/jbrinkw/live-npc

Why it matters:

Keeps immersion by blending navigation, survival, and conversation in one loop so humans can play alongside AI characters that feel present in the moment.

What it is:

A personality-first agent framework built for real-time worlds. Agents continuously perceive, navigate, coordinate, and speak in context—not just turn-taking chat. Any game that exposes tools matching the contract can host the agents with minimal glue.
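As a rough illustration of what that contract could look like, a host game might expose a tool surface like the following. The method names are invented for this sketch, not the actual spec.

  from typing import Protocol

  class GameTools(Protocol):
      """Illustrative tool surface a host game might expose to agents."""

      def perceive(self, agent_id: str) -> dict:
          """Return nearby entities, terrain, and events for this agent."""
          ...

      def move_to(self, agent_id: str, x: float, y: float) -> bool:
          """Start navigating toward a point; True if a path exists."""
          ...

      def say(self, agent_id: str, text: str) -> None:
          """Speak in-world so nearby players and agents can hear."""
          ...

      def use_item(self, agent_id: str, item: str) -> bool:
          """Interact with survival/crafting systems."""
          ...

Any engine that can implement a surface like this could, in principle, host the same agents with minimal glue.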

Highlights:

  • Standardized real-time tool layer: plug into any host game that matches the spec
  • Continuous perception + routing + actions with deterministic event/trigger loop
  • Personality-first architecture that drives both behavior and dialogue tone
  • Lightweight multi-agent testbed for latency, behavior loops, social dynamics, and human-facing conversation flow
  • JSON socket protocol with trigger coalescing and survival/crafting systems for stress-testing
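To make trigger coalescing concrete: when world events arrive faster than agents can reason, only the newest trigger of each type should survive to the next decision step. A small sketch assuming newline-delimited JSON messages; the type and payload shapes are illustrative, not the actual protocol:

  import json

  def coalesce_triggers(raw_lines: list) -> list:
      """Keep only the newest trigger of each type from a burst of events."""
      latest = {}
      for line in raw_lines:
          event = json.loads(line)
          latest[event["type"]] = event  # later events overwrite earlier ones
      return list(latest.values())

  burst = [
      '{"type": "player_nearby", "payload": {"dist": 9.0}}',
      '{"type": "hunger", "payload": {"level": 0.4}}',
      '{"type": "player_nearby", "payload": {"dist": 3.2}}',
  ]
  print(coalesce_triggers(burst))
  # -> one "player_nearby" event (the closer one) plus the "hunger" event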

Live NPC gameplay screenshot

ChefByte: AI-Powered Meal Planning & Nutrition Tracking

ChefByte.app

Project Motivation:

ChefByte is a free, low-ops food inventory and macro-tracking platform. It combines barcode scanning, automated nutrition/storage/expiration fill, macro-density recipe search, shopping automation, and smart-scale ingestion, all designed to run on a Supabase + Vercel free-tier footprint.

Results:

  • Barcode + manual entry with auto-filled nutrition/storage/expiration; macro-linked inventory
  • Macro-density recipe search with availability filters and meal-plan integration (see the sketch after this list)
  • Shopping automation: add-below-minimum, Walmart link generation, import-to-stock
  • Price intelligence with batch Walmart scraping and progress tracking
  • LiquidTrack device-key flow for smart scale ingestion (API key + Supabase RLS)
  • Serverless endpoints on Vercel; Supabase Postgres with per-user RLS for multi-tenant use
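As referenced in the recipe-search bullet, the core scoring idea fits in a few lines: rank recipes by protein per 100 kcal and keep only those the current inventory can cover. A simplified sketch with hypothetical field names and data:

  def macro_density_search(recipes: list, pantry: set) -> list:
      """Rank recipes by protein per 100 kcal, keeping only makeable ones."""
      available = [r for r in recipes if set(r["ingredients"]) <= pantry]
      return sorted(
          available,
          key=lambda r: r["protein_g"] / r["kcal"] * 100,
          reverse=True,
      )

  recipes = [
      {"name": "chicken bowl", "protein_g": 45, "kcal": 520,
       "ingredients": ["chicken", "rice"]},
      {"name": "pasta", "protein_g": 18, "kcal": 640,
       "ingredients": ["pasta", "butter"]},
  ]
  print(macro_density_search(recipes, pantry={"chicken", "rice", "pasta"}))
  # -> only the chicken bowl qualifies (the pasta needs butter)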

ChefByte dashboard overview

LLM-Powered Floorplan Generator (OLD)

Project Motivation:

This project was an attempt at laying the foundation for infinite, text-driven generation of virtual worlds. When I first had the idea in 2021, I was trying to figure out what the barrier was to full-stack, text-driven generation of video games. Most of the fundamental pieces were already in place: GPT-3 had recently come out and was good enough to outline the premise of a basic game and to recursively generate game mechanics and storyline; text-to-image was doing very well; and text-to-3D had just become usable. The one missing piece was a text-driven model with enough spatial understanding to dynamically generate the layout of a game world. This project was my attempt at creating that model by augmenting existing LLMs with heavy prompt engineering and automated error correction.

Results:

  • The before-and-after examples to the left show the improvement: in the before images, room placements are essentially random; the after layouts are significantly more coherent.
  • To get that result, I hardcoded the model's reasoning chain for generating room coordinates, breaking the thought process into roughly seven logical steps. For about half of those steps, I added error checking via a second LLM or a script that verified the new values lined up with the values the LLM originally planned (sketched below).
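Schematically, the pattern looks like this: each step produces an intermediate value, and a checker (a script here; a second LLM in some steps of the real chain) validates it against the original plan before the chain continues. The step bodies below are stubs standing in for LLM calls:

  def plan_room(brief: str) -> dict:
      """Step 1 stub: in the real chain, an LLM proposes size and adjacency."""
      return {"width": 4, "height": 3, "attach_to": "hallway"}

  def place_room(plan: dict) -> dict:
      """Step 2 stub: in the real chain, an LLM turns the plan into coordinates."""
      return {"x": 10, "y": 2, "width": 4, "height": 3}

  def check_against_plan(plan: dict, placed: dict) -> bool:
      """Scripted check: do the generated values line up with the plan?"""
      return placed["width"] == plan["width"] and placed["height"] == plan["height"]

  def generate_room(brief: str, max_retries: int = 3) -> dict:
      plan = plan_room(brief)
      for _ in range(max_retries):
          placed = place_room(plan)
          if check_against_plan(plan, placed):
              return placed
          # On failure, the real chain re-prompted the model with the mismatch.
      raise ValueError("no placement consistent with the plan")

  print(generate_room("small bedroom off the hallway"))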

Floorplan after