This guide explains how to run Claude Code on your computer for free by connecting it to a local AI model using Ollama.
How the Free Setup Works
Normally Claude Code works like this:
Computer → Claude Code → Anthropic servers → Claude model
This requires a paid subscription.
With the free setup it works like this:
Computer → Claude Code → Local AI model (via Ollama)
This removes the need for a subscription.
You run the AI model directly on your computer.
Step 1: Install Claude Code
First install Claude Code on your machine.
Go to the official Claude Code documentation, where you will see installation instructions for different systems.
For Mac:
Copy the install command and run it in Terminal.
For Windows:
Use the Windows install command in PowerShell.
Example command structure:

```shell
curl -fsSL https://claude.ai/install.sh | sh
```

After installation you should be able to run:

```shell
claude
```

in the terminal.
This launches Claude Code.
Step 2: Install Ollama
Next you need Ollama, which allows AI models to run locally.
Go to the official Ollama website (ollama.com).
Download the version for:
- Mac
- Windows
- Linux
Install it normally.
Once installed, Ollama runs as a background service that loads AI models locally.
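As a quick sanity check, the sketch below (not part of the official Ollama tooling) asks the service whether it is up. It assumes Ollama's default port, 11434; adjust if you changed it.

```python
# Sketch: check whether the local Ollama service is reachable.
# Assumes Ollama's default port 11434.
import urllib.request
import urllib.error

def ollama_is_running(host: str = "http://localhost:11434") -> bool:
    """Return True if the Ollama HTTP service answers on its port."""
    try:
        with urllib.request.urlopen(host, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_is_running())
```

If this prints `False`, start the Ollama app (or the `ollama serve` background service) before continuing.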
Step 3: Download a Local AI Model
Now you must download an open-source AI model.
Open your terminal and run:
```shell
ollama run model-name
```

Example models you can use:
- qwen
- deepseek
- gemma
- gpt-oss
Example:
```shell
ollama run gpt-oss:20b
```

This downloads the 20B-parameter GPT-OSS model.
The model file will be several gigabytes.
Once downloaded, the model runs locally on your computer.
You can test it immediately:
```shell
ollama run gpt-oss:20b
```

Then type:

```
hello
```

The model will reply, even without an internet connection.
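The same test can be driven from a script. The sketch below talks to Ollama's standard REST generation endpoint (`/api/generate`); the model name is just the example from above.

```python
# Sketch: query a local Ollama model over its REST API.
# Assumes Ollama's default /api/generate endpoint on port 11434;
# the model name is just the example used above.
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    """Payload shape expected by Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str, host: str = "http://localhost:11434") -> str:
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps(build_generate_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(ask("gpt-oss:20b", "hello"))
    except OSError:
        print("Ollama is not reachable on localhost:11434")
```

With `"stream": False`, Ollama returns one JSON object whose `response` field holds the full reply instead of streaming it token by token.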
Step 4: Confirm Your Installed Models
To check installed models run:
```shell
ollama list
```

You will see something like:

```
gpt-oss:20b    13GB
```

This confirms the model is installed.
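If you want to check your installed models from a script, a small parser over the `ollama list` output works. This is a sketch that assumes the tabular layout shown above; the exact columns can vary between Ollama versions.

```python
# Sketch: parse `ollama list` output into (name, size) pairs.
# Assumes a whitespace-separated table with a header row, as shown above.
def parse_ollama_list(output: str) -> list[tuple[str, str]]:
    models = []
    for line in output.strip().splitlines():
        parts = line.split()
        if not parts or parts[0].upper() == "NAME":  # skip header row
            continue
        name = parts[0]
        size = next((p for p in parts if p.upper().endswith("GB")), "?")
        models.append((name, size))
    return models

sample = """NAME         ID            SIZE   MODIFIED
gpt-oss:20b  abc123def456  13GB   2 days ago"""
print(parse_ollama_list(sample))  # [('gpt-oss:20b', '13GB')]
```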
Step 5: Configure Claude Code to Use the Local Model
Now you need to redirect Claude Code to your local model instead of Anthropic servers.
Run the following command:
```shell
export ANTHROPIC_BASE_URL=http://localhost:11434
```

This tells Claude Code to send requests to Ollama running locally.
Next run:
```shell
export ANTHROPIC_AUTH_TOKEN=dummy
```

This replaces the normal API key requirement.
The token can be any text; Claude Code only checks that one is set.
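These `export` lines only last for the current terminal session. To make the redirect permanent, you can append them to your shell profile (a sketch assuming bash; zsh users would use `~/.zshrc` instead):

```shell
# Persist the local-model redirect across terminal sessions (bash assumed).
echo 'export ANTHROPIC_BASE_URL=http://localhost:11434' >> ~/.bashrc
echo 'export ANTHROPIC_AUTH_TOKEN=dummy' >> ~/.bashrc
source ~/.bashrc
```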
Step 6: Start Claude Code Using the Local Model
Now launch Claude Code.
First navigate to your project folder:
```shell
cd your-project
```

Then run:

```shell
claude --model gpt-oss:20b
```

Claude Code will now start using the local model.
You will see the model name appear in the interface.
Step 7: Test the Setup
Try a simple command:
```
create a todo app in HTML
```

Claude Code will generate code.
Processing will take longer than with cloud models, but the system works completely offline.
No subscription required.
Example Workflow
Now you can use Claude Code locally for tasks like:
- writing code
- debugging scripts
- building automation
- generating documentation
- creating small apps
Example request:
build a simple task manager in python
Claude Code will generate files in your project directory.
Choosing the Right Model
Your computer hardware determines which models work well.
- Small models (7B to 20B parameters): good for laptops
- Medium models (30B to 70B parameters): require more RAM
- Large models (100B+ parameters): require powerful GPUs
Example recommendations:
| Hardware | Suggested Model |
|---|---|
| 16GB RAM | 7B–13B models |
| 24GB RAM | 20B models |
| 32GB+ RAM | 34B models |
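As a rough rule of thumb, a quantized model needs about half a gigabyte of RAM per billion parameters at 4-bit precision, plus working overhead. The sketch below encodes that heuristic; the 20% overhead figure is an assumption, not a measured value.

```python
# Rough RAM estimate for a quantized model: parameters x bits per parameter,
# plus ~20% overhead for context and runtime buffers (heuristic assumption).
def estimated_ram_gb(params_billions: float, bits_per_param: int = 4) -> float:
    weight_gb = params_billions * bits_per_param / 8  # 1B params at 8 bits ~ 1 GB
    return round(weight_gb * 1.2, 1)

for size in (7, 20, 34):
    print(f"{size}B model: ~{estimated_ram_gb(size)} GB RAM")
```

These estimates line up roughly with the table above once you leave headroom for the OS and your other applications.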
Performance Expectations
Local models are slower.
Example speeds:
| Task | Local Model |
|---|---|
| Simple prompt | 10–30 seconds |
| Code generation | 30–60 seconds |
| Large prompts | 1–2 minutes |
Cloud models like Claude Opus respond much faster.
But local models give you zero cost usage.
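These times follow directly from generation speed: a local model producing roughly 10 tokens per second (an assumed laptop-class rate, not a benchmark) takes about a minute for a 600-token answer. A trivial sketch:

```python
# Back-of-envelope latency: tokens to generate / tokens per second.
# The 10 tok/s rate is an assumption for laptop-class hardware.
def estimated_seconds(tokens: int, tokens_per_second: float) -> float:
    return round(tokens / tokens_per_second, 1)

print(estimated_seconds(600, 10))  # 60.0
```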
Advantages of This Setup
- No subscription required
- No API cost
- Works offline
- Unlimited usage
- Good for experimentation
Limitations
- Slower responses
- Lower reasoning quality compared to top models
- Requires decent hardware
- Large models require significant RAM
Best Use Cases
This setup works best for:
- Learning AI development
- Building automation scripts
- Testing coding agents
- Experimenting with AI workflows
- Running private AI systems locally
Example Development Workflow
Many developers use this stack:
- Claude Code
- Ollama
- A local AI model
Then combine with:
- n8n for automation
- Python scripts
- local APIs
- development environments
This creates a fully local AI development environment.
Key Idea
You do not need to pay monthly to experiment with AI coding tools.
Instead of:
Claude Code → Paid API
You run:
Claude Code → Ollama → Local AI model
Everything runs on your machine.
🚀 Next Steps: Combine Claude Code with Other Free Tools
Now that you have Claude Code running locally for free, here’s what to explore next:
- Run Clawdbot Locally – Add Discord bot capabilities
- Build n8n Workflows – Automate your AI development workflow
