I experienced this firsthand. I'm a full-stack dev with 12+ years of experience, and even for me, hardening OpenClaw on a VPS took hours: UFW rules, fail2ban, SSH key-only auth with password login disabled, Docker isolation. And I knew what I was doing.
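To give a sense of what "hours" looks like, here's a rough sketch of just the SSH piece. The drop-in filename and output path are illustrative, not from any OpenClaw docs; on a real box this goes in /etc/ssh/sshd_config.d/, followed by `sshd -t` and a service reload, with a second session kept open so you don't lock yourself out:

```shell
#!/bin/sh
# Sketch of the SSH-hardening step: write a drop-in config that
# disables password and root login, leaving key-only auth.
# Illustrative only -- paths are placeholders for demo purposes.
set -eu

OUT="${1:-hardening.conf}"   # demo target; really /etc/ssh/sshd_config.d/99-hardening.conf

cat > "$OUT" <<'EOF'
# Key-only SSH: no passwords, no direct root login
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
EOF

echo "wrote $OUT:"
cat "$OUT"
```

And that's before UFW (`ufw default deny incoming`, `ufw allow ssh`, `ufw enable`) and fail2ban even enter the picture.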
The core problem the video highlights is real: OpenClaw gives an AI agent shell access, messaging access, and browser access. The default setup has none of the security guardrails you'd want. Most users either skip security entirely or make mistakes that leave them exposed.
After setting it up securely for myself and a few friends, I started automating the whole process: provisioning on Hetzner with a Docker sandbox, UFW, fail2ban, and SSH key auth pre-configured. I turned it into a small managed hosting service (runclaw.ai) because I kept seeing the same setup struggles everywhere.
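For the Docker sandbox part, the general shape is a locked-down container rather than a bare `docker run`. This is a generic sketch of the standard hardening flags, not OpenClaw's or runclaw.ai's actual configuration; the image name and network are placeholders:

```shell
# Illustrative sandboxing flags for an agent container:
#   --read-only                  immutable root filesystem
#   --tmpfs /tmp:...,noexec      writable scratch space, no executables
#   --cap-drop ALL               drop all Linux capabilities
#   --security-opt no-new-privileges   block setuid escalation
#   --pids-limit / --memory      resource ceilings against runaways
#   --network agent-net          isolated user-defined network
docker run -d --name agent \
  --read-only --tmpfs /tmp:rw,noexec,nosuid,size=64m \
  --cap-drop ALL --security-opt no-new-privileges \
  --pids-limit 256 --memory 1g \
  --network agent-net \
  some/agent-image:latest
```

None of this is exotic, but knowing which flags matter and which combinations break the agent is exactly the kind of thing most users won't figure out on their own.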
The broader point stands though: the security model for AI agents with system access is fundamentally unsolved. Sandboxing helps. Proper infrastructure helps. But prompt injection and trust boundaries are architectural problems that no amount of hosting can fix.