Local models
- This guide shows how to set up local models.
- You should already be familiar with the quickstart guide.
- You should also quickly skim the global configuration guide and the yaml configuration files guide.
Examples
- Issue #303 has several examples of how to use local models.
- We also welcome concrete examples of how to use local models contributed to this guide via pull request.
Using litellm
Currently, models are supported via litellm by default.
There are typically two steps to using local models:
- Editing the agent config file to add settings like `custom_llm_provider` and `api_base`.
- Either ignoring errors from cost tracking or updating the model registry to include your local model.
Setting API base/provider
If you use local models, you most likely need to add some extra keywords to the litellm call.
This is done with the `model_kwargs` dictionary, which is passed directly to `litellm.completion`.
In other words, this is how we invoke litellm:
litellm.completion(
model=model_name,
messages=messages,
**model_kwargs
)
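For illustration, here is a minimal standalone sketch of such a call against a local model served behind an OpenAI-compatible endpoint; the model name and `api_base` are placeholders for your own setup:

```python
import litellm

# Hypothetical local model behind an OpenAI-compatible endpoint;
# the model name and api_base are placeholders.
response = litellm.completion(
    model="my-local-model",
    messages=[{"role": "user", "content": "Say hello in one word."}],
    custom_llm_provider="openai",
    api_base="http://localhost:8000/v1",
)
print(response.choices[0].message.content)
```

Within mini-swe-agent you do not write this call yourself; you only supply the extra keywords through `model_kwargs`.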
You can set model_kwargs in an agent config file like the following one:
Default configuration file
agent:
system_template: |
You are a helpful assistant that can interact with a computer.
Your response must contain exactly ONE bash code block with ONE command (or commands connected with && or ||).
Include a THOUGHT section before your command where you explain your reasoning process.
Format your response as shown in <format_example>.
<format_example>
Your reasoning and analysis here. Explain why you want to perform the action.
```bash
your_command_here
```
</format_example>
Failure to follow these rules will cause your response to be rejected.
instance_template: |
Please solve this issue: {{task}}
You can execute bash commands and edit files to implement the necessary changes.
## Recommended Workflow
This workflow should be done step-by-step so that you can iterate on your changes and any possible problems.
1. Analyze the codebase by finding and reading relevant files
2. Create a script to reproduce the issue
3. Edit the source code to resolve the issue
4. Verify your fix works by running your script again
5. Test edge cases to ensure your fix is robust
6. Submit your changes and finish your work by issuing the following command: `echo COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT`.
Do not combine it with any other command. <important>After this command, you cannot continue working on this task.</important>
## Important Rules
1. Every response must contain exactly one action
2. The action must be enclosed in triple backticks
3. Directory or environment variable changes are not persistent. Every action is executed in a new subshell.
However, you can prefix any action with `MY_ENV_VAR=MY_VALUE cd /path/to/working/dir && ...` or write/load environment variables from files
<system_information>
{{system}} {{release}} {{version}} {{machine}}
</system_information>
## Formatting your response
Here is an example of a correct response:
<example_response>
THOUGHT: I need to understand the structure of the repository first. Let me check what files are in the current directory to get a better understanding of the codebase.
```bash
ls -la
```
</example_response>
## Useful command examples
### Create a new file:
```bash
cat <<'EOF' > newfile.py
import numpy as np
hello = "world"
print(hello)
EOF
```
### Edit files with sed:
{%- if system == "Darwin" -%}
<important>
You are on MacOS. For all the below examples, you need to use `sed -i ''` instead of `sed -i`.
</important>
{%- endif -%}
```bash
# Replace all occurrences
sed -i 's/old_string/new_string/g' filename.py
# Replace only first occurrence
sed -i 's/old_string/new_string/' filename.py
# Replace first occurrence on line 1
sed -i '1s/old_string/new_string/' filename.py
# Replace all occurrences in lines 1-10
sed -i '1,10s/old_string/new_string/g' filename.py
```
### View file content:
```bash
# View specific lines with numbers
nl -ba filename.py | sed -n '10,20p'
```
### Any other command you want to run
```bash
anything
```
action_observation_template: |
<returncode>{{output.returncode}}</returncode>
{% if output.output | length < 10000 -%}
<output>
{{ output.output -}}
</output>
{%- else -%}
<warning>
The output of your last command was too long.
Please try a different command that produces less output.
If you're looking at a file you can try using head, tail or sed to view a smaller number of lines selectively.
If you're using grep or find and it produced too much output, you can use a more selective search pattern.
If you really need to see something from the full command's output, you can redirect output to a file and then search in that file.
</warning>
{%- set elided_chars = output.output | length - 10000 -%}
<output_head>
{{ output.output[:5000] }}
</output_head>
<elided_chars>
{{ elided_chars }} characters elided
</elided_chars>
<output_tail>
{{ output.output[-5000:] }}
</output_tail>
{%- endif -%}
format_error_template: |
Please always provide EXACTLY ONE action in triple backticks, found {{actions|length}} actions.
If you want to end the task, please issue the following command: `echo COMPLETE_TASK_AND_SUBMIT_FINAL_OUTPUT`
without any other command.
Else, please format your response exactly as follows:
<response_example>
Here are some thoughts about why you want to perform the action.
```bash
<action>
```
</response_example>
Note: In rare cases, if you need to reference a similar format in your command, you might have
to proceed in two steps, first writing TRIPLEBACKTICKSBASH, then replacing them with ```bash.
step_limit: 0.
cost_limit: 3.
mode: confirm
environment:
env:
PAGER: cat
MANPAGER: cat
LESS: -R
PIP_PROGRESS_BAR: 'off'
TQDM_DISABLE: '1'
model:
model_kwargs:
drop_params: true
In the last section, you can add
model:
model_name: "my-local-model"
model_kwargs:
custom_llm_provider: "openai"
api_base: "https://..."
...
...
Updating the default mini configuration file
You can use the `MSWEA_MINI_CONFIG_PATH` setting to set the path to the default mini configuration file.
This will allow you to override the default configuration file with your own.
See the global configuration guide for more details.
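For example, assuming it can be set persistently like other global settings (the path below is a placeholder for your own copy of the config):

```bash
mini-extra config set MSWEA_MINI_CONFIG_PATH /path/to/my_mini.yaml
```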
If this is not enough, our model class should be simple to modify:
Complete model class
import json
import logging
import os
from collections.abc import Callable
from dataclasses import asdict, dataclass, field
from pathlib import Path
from typing import Any, Literal
import litellm
from tenacity import (
before_sleep_log,
retry,
retry_if_not_exception_type,
stop_after_attempt,
wait_exponential,
)
from minisweagent.models import GLOBAL_MODEL_STATS
from minisweagent.models.utils.cache_control import set_cache_control
logger = logging.getLogger("litellm_model")
@dataclass
class LitellmModelConfig:
model_name: str
model_kwargs: dict[str, Any] = field(default_factory=dict)
litellm_model_registry: Path | str | None = os.getenv("LITELLM_MODEL_REGISTRY_PATH")
set_cache_control: Literal["default_end"] | None = None
"""Set explicit cache control markers, for example for Anthropic models"""
cost_tracking: Literal["default", "ignore_errors"] = os.getenv("MSWEA_COST_TRACKING", "default")
"""Cost tracking mode for this model. Can be "default" or "ignore_errors" (ignore errors/missing cost info)"""
class LitellmModel:
def __init__(self, *, config_class: Callable = LitellmModelConfig, **kwargs):
self.config = config_class(**kwargs)
self.cost = 0.0
self.n_calls = 0
if self.config.litellm_model_registry and Path(self.config.litellm_model_registry).is_file():
litellm.utils.register_model(json.loads(Path(self.config.litellm_model_registry).read_text()))
@retry(
stop=stop_after_attempt(int(os.getenv("MSWEA_MODEL_RETRY_STOP_AFTER_ATTEMPT", "10"))),
wait=wait_exponential(multiplier=1, min=4, max=60),
before_sleep=before_sleep_log(logger, logging.WARNING),
retry=retry_if_not_exception_type(
(
litellm.exceptions.UnsupportedParamsError,
litellm.exceptions.NotFoundError,
litellm.exceptions.PermissionDeniedError,
litellm.exceptions.ContextWindowExceededError,
litellm.exceptions.APIError,
litellm.exceptions.AuthenticationError,
KeyboardInterrupt,
)
),
)
def _query(self, messages: list[dict[str, str]], **kwargs):
try:
return litellm.completion(
model=self.config.model_name, messages=messages, **(self.config.model_kwargs | kwargs)
)
except litellm.exceptions.AuthenticationError as e:
e.message += " You can permanently set your API key with `mini-extra config set KEY VALUE`."
raise e
def query(self, messages: list[dict[str, str]], **kwargs) -> dict:
if self.config.set_cache_control:
messages = set_cache_control(messages, mode=self.config.set_cache_control)
response = self._query([{"role": msg["role"], "content": msg["content"]} for msg in messages], **kwargs)
try:
cost = litellm.cost_calculator.completion_cost(response, model=self.config.model_name)
if cost <= 0.0:
raise ValueError(f"Cost must be > 0.0, got {cost}")
except Exception as e:
cost = 0.0
if self.config.cost_tracking != "ignore_errors":
msg = (
f"Error calculating cost for model {self.config.model_name}: {e}, perhaps it's not registered? "
"You can ignore this issue from your config file with cost_tracking: 'ignore_errors' or "
"globally with export MSWEA_COST_TRACKING='ignore_errors'. "
"Alternatively check the 'Cost tracking' section in the documentation at "
"https://klieret.short.gy/mini-local-models. "
" Still stuck? Please open a github issue at https://github.com/SWE-agent/mini-swe-agent/issues/new/choose!"
)
logger.critical(msg)
raise RuntimeError(msg) from e
self.n_calls += 1
self.cost += cost
GLOBAL_MODEL_STATS.add(cost)
return {
"content": response.choices[0].message.content or "", # type: ignore
"extra": {
"response": response.model_dump(),
},
}
def get_template_vars(self) -> dict[str, Any]:
return asdict(self.config) | {"n_model_calls": self.n_calls, "model_cost": self.cost}
The other part that you most likely need to figure out is cost tracking.
There are two ways to do this with litellm:
- You set up a litellm proxy server, which gives you a lot of control over all the LM calls (see the sketch below)
- You update the model registry (next section)
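For the first option, a minimal sketch might look like the following, assuming the proxy extras are installed (`pip install 'litellm[proxy]'`); the model name, config file name, and `api_base` are placeholders:

```bash
# Write a minimal proxy config and start the proxy; all names are placeholders.
cat > litellm_proxy_config.yaml <<'EOF'
model_list:
  - model_name: my-local-model
    litellm_params:
      model: openai/my-local-model
      api_base: http://localhost:8000/v1
EOF
litellm --config litellm_proxy_config.yaml &
```

You would then point `api_base` in your agent config at the proxy (it typically listens on http://localhost:4000) and let it handle routing and spend tracking. The next sections focus on the second option.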
Cost tracking
If you run with the above, you will most likely get an error about missing cost information.
If you do not need cost tracking, you can ignore these errors, ideally by editing your agent config file to add:
model:
cost_tracking: "ignore_errors"
...
...
Alternatively, you can set the global setting:
export MSWEA_COST_TRACKING="ignore_errors"
However, note that this is a global setting, and will affect all models!
The best way to handle the cost issue, though, is to register your local model with litellm via a model registry.
LiteLLM gets its cost and model metadata from a bundled registry file. If that data is outdated or missing your desired model, you can override or extend it by providing a custom registry file.
The model registry JSON file should follow LiteLLM's format:
{
"my-custom-model": {
"max_tokens": 4096,
"input_cost_per_token": 0.0001,
"output_cost_per_token": 0.0002,
"litellm_provider": "openai",
"mode": "chat"
},
"my-local-model": {
"max_tokens": 8192,
"input_cost_per_token": 0.0,
"output_cost_per_token": 0.0,
"litellm_provider": "ollama",
"mode": "chat"
}
}
Model names
Model names are case sensitive. Please make sure you have an exact match.
There are two ways of setting the path to the model registry:
- Set `LITELLM_MODEL_REGISTRY_PATH` (e.g., `mini-extra config set LITELLM_MODEL_REGISTRY_PATH /path/to/model_registry.json`)
- Set `litellm_model_registry` in the agent config file:
model:
litellm_model_registry: "/path/to/model_registry.json"
...
...
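If you want to verify that your registry file is well-formed and picked up, you can load it the same way the model class above does (a quick sketch; the path and model name are placeholders):

```python
import json
from pathlib import Path

import litellm

# Register the custom registry file (same call as in the model class above);
# the path and model name are placeholders for your own setup.
litellm.utils.register_model(json.loads(Path("/path/to/model_registry.json").read_text()))

# litellm keeps its cost metadata in litellm.model_cost,
# so your entry should now be visible there.
print(litellm.model_cost["my-local-model"])
```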
Concrete examples
Generating SWE-bench trajectories with vLLM
This example shows how to generate SWE-bench trajectories using vLLM as the local inference engine.
First, launch a vLLM server with your chosen model. For example:
vllm serve ricdomolm/mini-coder-1.7b &
By default, the server will be available at http://localhost:8000.
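To quickly check that the server is up, you can list the models it exposes through its OpenAI-compatible API (assuming the default address):

```bash
curl http://localhost:8000/v1/models
```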
Second, edit the mini-swe-agent SWE-bench config file located in src/minisweagent/config/extra/swebench.yaml to include your local vLLM model:
model:
model_name: "hosted_vllm/ricdomolm/mini-coder-1.7b" # or hosted_vllm/path/to/local/model
model_kwargs:
api_base: "http://localhost:8000/v1" # adjust if using a non-default port/address
If you need a custom registry, as detailed above, create a registry.json file:
cat > registry.json <<'EOF'
{
"ricdomolm/mini-coder-1.7b": {
"max_tokens": 40960,
"input_cost_per_token": 0.0,
"output_cost_per_token": 0.0,
"litellm_provider": "hosted_vllm",
"mode": "chat"
}
}
EOF
Now you’re ready to generate trajectories! Let's solve the django__django-11099 instance of SWE-bench Verified:
LITELLM_MODEL_REGISTRY_PATH=registry.json mini-extra swebench \
--output test/ --subset verified --split test --filter '^(django__django-11099)$'
You should now see the generated trajectory in the test/ directory.