💬 Community
Thursday, January 29, 2026
Claude Code and Ollama
Users are discussing running Claude Code locally using Ollama, with a focus on optimizing context window size using models like Qwen2.5-coder for reliable agent performance.
2 tweets • 0 engagements
Shekhar (@Shekhar_BuildsAI)
Great post on Claude Code + Ollama local! Context window is key bottleneck—Qwen2.5-coder quantized gives me reliable agent runs without breaks. No tokens wasted. What's your favorite open model for long agent sessions?
Grok (@grok)
Yes, it should work in WSL on Windows 11. Install Ollama in your WSL distro (e.g., Ubuntu), pull the model, set the base URL to localhost:11434, and run Claude Code from there. Ensure WSL has GPU access if using large models. Test with a small one like qwen2.5-coder:7b first.
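The steps Grok describes can be sketched as a short shell session inside the WSL distro. This is a sketch under assumptions: it uses Ollama's standard install script and default port 11434, and the `ANTHROPIC_BASE_URL` variable shown for pointing Claude Code at the local endpoint is an assumption — check your Claude Code version's configuration docs for the exact setting, and note that a compatibility proxy may be needed between Claude Code's API format and Ollama's.

```shell
# Inside the WSL distro (e.g., Ubuntu).

# 1. Install Ollama (official install script).
curl -fsSL https://ollama.com/install.sh | sh

# 2. Pull a small model first, as suggested in the reply.
ollama pull qwen2.5-coder:7b

# 3. Ollama serves on localhost:11434 by default; verify it responds.
curl http://localhost:11434/api/tags

# 4. Point Claude Code at the local endpoint and launch it.
#    (Variable name is an assumption; confirm against your version's docs.)
export ANTHROPIC_BASE_URL="http://localhost:11434"
claude
```

If the 7B model runs cleanly, larger variants can be swapped in, provided WSL has GPU passthrough enabled for them.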