Here's a related issue that took me a whole day to figure out: Claude Code's telemetry pings were causing total network failure when I used CC with a local LLM via llama-server.
I wanted to use local LLMs (~30B) with Claude Code on my M1 Max MacBook Pro for a privacy-sensitive project. I spun up Qwen3-30B-A3B via llama-server and pointed Claude Code at it. After an hour or so of use, my network connectivity was completely borked: the browser wouldn't load any web pages at all.
Some investigation showed that Claude Code assumes it's talking to the Anthropic API and sends event logging requests (/api/event_logging/batch) to the llama-server endpoint. The local server doesn't implement that route and returns 404s, but Claude Code retries aggressively. Each failed request leaves a TCP connection behind in TIME_WAIT state, and on macOS these can pile up until the ephemeral port range is exhausted. At that point my browser stopped loading pages, my CLI tools couldn't reach the internet, and the only option was to reboot my MacBook.
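The failure mode is easy to reproduce in miniature. Here's a minimal Python sketch (purely illustrative — not Claude Code's actual client code): a local server that 404s every request, standing in for llama-server's missing route, plus a client that retries with a fresh connection each time. Every retry consumes a distinct ephemeral port, because the just-closed connections linger in TIME_WAIT and their 4-tuples can't be reused yet:

```python
import http.server
import socket
import threading

# Tiny local server that 404s everything, standing in for a server
# that doesn't implement /api/event_logging/batch
class NotFound(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        self.send_error(404)

    def log_message(self, *args):  # silence request logging
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), NotFound)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

used_ports = set()
for _ in range(50):  # each "retry" opens a brand-new connection
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(b"POST /api/event_logging/batch HTTP/1.1\r\n"
              b"Host: localhost\r\nContent-Length: 0\r\n\r\n")
    s.recv(1024)                        # read the 404 response
    used_ports.add(s.getsockname()[1])  # record our ephemeral port
    s.close()  # connection now sits in TIME_WAIT; port can't be reused

print(len(used_ports))  # every failed retry burned a distinct ephemeral port
```

Fifty retries is harmless, but an aggressive retry loop running for an hour chews through tens of thousands of ports, and macOS's default ephemeral range is only a few thousand wide — at which point *every* process that needs an outbound connection starts failing.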
After some more digging (with Claude Code's help, of course) I found that the fix was to add this setting to my ~/.claude/settings.json:
{
  // ... other settings ...
  "env": {
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  }
  // ... other settings ...
}
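If you'd rather script the change than hand-edit the file, here's a small Python sketch (my own helper, not part of Claude Code) that merges the env var into ~/.claude/settings.json without clobbering existing settings. It assumes the file is plain JSON with no comments:

```python
import json
import os

# Claude Code's user-level settings file
path = os.path.expanduser("~/.claude/settings.json")
os.makedirs(os.path.dirname(path), exist_ok=True)

# Load existing settings if present (assumes plain JSON, no comments)
settings = {}
if os.path.exists(path):
    with open(path) as f:
        settings = json.load(f)

# Merge rather than overwrite, so other settings and env vars survive
settings.setdefault("env", {})["CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC"] = "1"

with open(path, "w") as f:
    json.dump(settings, f, indent=2)
```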
I added this to my local-LLM + Claude Code / Codex CLI guide here:
https://github.com/pchalasani/claude-code-tools/blob/main/do...
I don't know if others have hit this issue; hopefully this is helpful, or maybe there are other fixes I'm not aware of.