The goal was to explore whether an assistant could reliably handle interview-style interactions, such as system design discussions, multi-step coding problems, and deeper follow-up questioning, without hiding its behavior behind a closed SaaS.
The assistant supports both cloud and local LLMs, uses a bring-your-own-API-key model, and is intentionally opinionated so that behavior stays predictable under pressure.
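To make that concrete, the provider layer roughly boils down to one interface that both the cloud path (your own key, no third-party proxy) and the local path implement. This is an illustrative TypeScript sketch, not the actual code in the repo; the class names, endpoints, and OpenAI-style response shape are assumptions for the example.

    // Illustrative sketch, not the repo's actual code: one provider interface
    // that both a cloud backend (user-supplied key) and a local endpoint implement.
    interface ChatMessage {
      role: "system" | "user" | "assistant";
      content: string;
    }

    interface LLMProvider {
      complete(messages: ChatMessage[]): Promise<string>;
    }

    // Cloud path: the user brings their own API key; requests go straight to the
    // provider rather than through a third-party backend.
    class CloudProvider implements LLMProvider {
      constructor(private apiKey: string, private endpoint: string) {}

      async complete(messages: ChatMessage[]): Promise<string> {
        const res = await fetch(this.endpoint, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            Authorization: `Bearer ${this.apiKey}`,
          },
          body: JSON.stringify({ messages }),
        });
        if (!res.ok) throw new Error(`Cloud provider error: ${res.status}`);
        const data = await res.json();
        return data.choices[0].message.content; // assumes an OpenAI-style response shape
      }
    }

    // Local path: same interface, pointed at an OpenAI-compatible server on localhost.
    class LocalProvider implements LLMProvider {
      constructor(private endpoint = "http://localhost:11434/v1/chat/completions") {}

      async complete(messages: ChatMessage[]): Promise<string> {
        const res = await fetch(this.endpoint, {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model: "llama3", messages }),
        });
        if (!res.ok) throw new Error(`Local provider error: ${res.status}`);
        const data = await res.json();
        return data.choices[0].message.content;
      }
    }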
Most of the work went into managing context, follow-ups, and failure cases, rather than into optimizing for fast single-shot answers.
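The idea behind the context and follow-up handling is along these lines (again an illustrative sketch, not the repo's code): keep a rolling window of the conversation so follow-up questions resolve against earlier turns, and fail in a predictable way when a model call errors out instead of silently dropping the turn.

    // Illustrative only: a rolling context window so follow-ups resolve against
    // earlier turns, plus a predictable fallback when a model call fails.
    type Msg = { role: "system" | "user" | "assistant"; content: string };
    type CompleteFn = (messages: Msg[]) => Promise<string>;

    class InterviewSession {
      private history: Msg[] = [];

      constructor(private complete: CompleteFn, private maxTurns = 20) {}

      async ask(question: string): Promise<string> {
        this.history.push({ role: "user", content: question });
        // Keep only the most recent turns so the prompt stays within the context budget.
        if (this.history.length > this.maxTurns) {
          this.history = this.history.slice(-this.maxTurns);
        }
        try {
          const answer = await this.complete([
            { role: "system", content: "You are an interview practice assistant." },
            ...this.history,
          ]);
          this.history.push({ role: "assistant", content: answer });
          return answer;
        } catch {
          // Failure path: drop the failed turn and report it rather than guessing.
          this.history.pop();
          return "Model call failed; retry or switch providers.";
        }
      }
    }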
I used Antigravity heavily during development to iterate quickly, then refined and validated the behavior manually.
Repository: https://github.com/evinjohnn/natively-cluely-ai-assistant
Happy to answer questions about design tradeoffs, local versus cloud inference, or what worked and failed while building this.