Monday, February 2, 2026
Running Claude Code Locally
People are discussing methods and guides for running Claude Code using local LLMs and open-source alternatives like Ollama, potentially for free or with cost savings, while reserving Claude for complex tasks.
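For context, guides like these typically work by pointing Claude Code at a local Anthropic-compatible endpoint via environment variables. A minimal sketch of that setup follows; the proxy address, token, and model name are placeholders, and the assumption that a translation layer (e.g. a LiteLLM proxy) sits in front of Ollama is mine, not something these guides are quoted on.

```shell
# Sketch: redirect Claude Code to a local backend instead of Anthropic's API.
# ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN are the env vars these guides
# typically rely on. Ollama speaks its own API, so an Anthropic-compatible
# proxy (e.g. LiteLLM) is assumed to be listening on the URL below.
export ANTHROPIC_BASE_URL="http://localhost:4000"   # assumed local proxy address
export ANTHROPIC_AUTH_TOKEN="local-dummy-key"       # local proxies usually ignore this
export ANTHROPIC_MODEL="qwen2.5-coder:32b"          # hypothetical local model name
claude   # launch Claude Code against the local backend
```

Reserving the real Claude API for complex tasks then just means unsetting these variables (or using a separate shell) when you want the hosted model.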
I love guides like this: "With this simple trick and a 250 000 CZK machine, you can save hundreds of crowns on Claude Code by installing a different LLM as the core and running Claude Code completely for free."
I've never used GLM for my own work, but when I'm teaching people how to use Claude Code I have them get the Z AI subscription. Insane value if you don't need gigabrain fully autonomous capabilities.
Smart split: local models for the high-volume routine stuff (briefings, summaries), Claude for the complex reasoning work. Same pattern I use: MLX for quick iterations, then Claude Code when I need it to actually think through architecture or debug tricky issues. Pure cost efficiency.