# Agent Tools

LocalGPT's agent has access to 7 built-in tools for interacting with your system.

## Tool Overview

| Tool | Description |
|------|-------------|
| `bash` | Execute shell commands |
| `read_file` | Read file contents |
| `write_file` | Create or overwrite files |
| `edit_file` | Make targeted edits to files |
| `memory_search` | Search the memory index |
| `memory_get` | Read specific content from memory files |
| `web_fetch` | Fetch content from URLs |

## bash

Execute shell commands and return the output. All commands run inside the shell sandbox by default.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| `command` | string | The command to execute |
| `working_dir` | string | Optional working directory |

Example:

```json
{
  "name": "bash",
  "arguments": {
    "command": "ls -la ~/projects",
    "working_dir": "~"
  }
}
```

Notes:

  • Commands run inside a kernel-enforced sandbox (Landlock + seccomp on Linux, Seatbelt on macOS)
  • Sandbox restricts writes to the workspace directory, blocks network access, and denies credential directories
  • Timeout after 120 seconds by default (configurable via sandbox.timeout_secs)
  • Output capped at 1MB (configurable via sandbox.max_output_bytes)
  • Tilde (~) is expanded automatically
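The two limits above map to configuration keys. A hypothetical `config.toml` fragment (the key names come from the notes above; the section layout is assumed):

```toml
[sandbox]
timeout_secs = 120          # kill commands that run longer than this
max_output_bytes = 1048576  # truncate captured output beyond 1 MB
```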

## read_file

Read the contents of a file.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| `path` | string | Path to the file |
| `offset` | integer | Line number to start from (optional) |
| `limit` | integer | Maximum lines to read (optional) |

Example:

```json
{
  "name": "read_file",
  "arguments": {
    "path": "~/projects/app/src/main.rs",
    "offset": 100,
    "limit": 50
  }
}
```

Notes:

  • Returns line numbers with content
  • Handles large files with offset/limit
  • Tilde expansion supported
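The offset/limit semantics can be pictured with a short sketch (illustrative only; the real tool is not implemented in Python and `read_lines` is a made-up name):

```python
def read_lines(path, offset=1, limit=None):
    # Return "N: content" strings, mirroring the numbered output described above.
    # `offset` is a 1-based line number; `limit` caps how many lines come back.
    with open(path) as f:
        lines = f.read().splitlines()
    selected = lines[offset - 1:]
    if limit is not None:
        selected = selected[:limit]
    return [f"{offset + i}: {text}" for i, text in enumerate(selected)]
```

A call with `offset=100, limit=50` would thus return lines 100 through 149, each prefixed with its line number.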

## write_file

Create a new file or overwrite an existing one.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| `path` | string | Path to the file |
| `content` | string | Content to write |

Example:

```json
{
  "name": "write_file",
  "arguments": {
    "path": "~/projects/app/README.md",
    "content": "# My App\n\nA description of my app."
  }
}
```

Notes:

  • Creates parent directories if needed
  • Overwrites existing files completely
  • Use edit_file for partial changes
  • Writes are restricted to the workspace directory
  • Protected files (LocalGPT.md, .localgpt_manifest.json, IDENTITY.md) cannot be written

## edit_file

Make targeted string replacements in a file.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| `path` | string | Path to the file |
| `old_string` | string | Text to find |
| `new_string` | string | Text to replace with |

Example:

```json
{
  "name": "edit_file",
  "arguments": {
    "path": "~/projects/app/config.toml",
    "old_string": "port = 8080",
    "new_string": "port = 3000"
  }
}
```

Notes:

  • Finds and replaces exact string matches
  • Only replaces first occurrence
  • Returns error if string not found
  • Preserves file formatting
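The first-occurrence-only and error-on-miss behaviour can be sketched in a few lines (illustrative Python, not the tool's actual code):

```python
def edit_file_once(text, old_string, new_string):
    # Replace only the first occurrence; error if the string is absent,
    # matching the edit_file notes above.
    if old_string not in text:
        raise ValueError("old_string not found")
    return text.replace(old_string, new_string, 1)
```

Because only the first match is replaced, an `old_string` that appears more than once should include enough surrounding context to pin down the intended occurrence.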

## memory_search

Search the memory index for relevant content.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| `query` | string | Search query |
| `limit` | integer | Maximum results (optional, default: 10) |

Example:

```json
{
  "name": "memory_search",
  "arguments": {
    "query": "rust error handling",
    "limit": 5
  }
}
```

Returns:

  • Matching chunks with file paths
  • Relevance scores
  • Surrounding context
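The shape of the result can be illustrated with a toy scorer (crude term overlap; the real index scores relevance far more carefully, and `memory_search` here is just a stand-in function):

```python
def memory_search(chunks, query, limit=10):
    # chunks: list of (path, text) pairs. Score each chunk by the fraction
    # of query terms it contains and return the top `limit` matches.
    terms = set(query.lower().split())
    scored = []
    for path, text in chunks:
        words = set(text.lower().split())
        score = len(terms & words) / len(terms)
        if score > 0:
            scored.append({"path": path, "score": score, "snippet": text[:80]})
    scored.sort(key=lambda r: r["score"], reverse=True)
    return scored[:limit]
```

Each result carries a file path, a relevance score, and a snippet of context, matching the return fields listed above.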

## memory_get

Read specific content from memory files. Use after memory_search to pull only the needed lines.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| `path` | string | Path to memory file (MEMORY.md or memory/*.md) |
| `start_line` | integer | Line number to start from (optional) |
| `end_line` | integer | Line number to end at (optional) |

Example:

```json
{
  "name": "memory_get",
  "arguments": {
    "path": "memory/2024-01-15.md",
    "start_line": 10,
    "end_line": 25
  }
}
```

Notes:

  • Safe snippet read from memory files
  • Use line ranges to keep context small
  • Works with MEMORY.md and daily logs

## web_fetch

Fetch content from a URL.

Parameters:

| Name | Type | Description |
|------|------|-------------|
| `url` | string | URL to fetch |

Example:

```json
{
  "name": "web_fetch",
  "arguments": {
    "url": "https://api.github.com/repos/owner/repo"
  }
}
```

Notes:

  • HTTP GET request only
  • Response capped at 1MB by default (configurable via tools.web_fetch_max_bytes)
  • Respects timeouts
  • Returns error for non-2xx responses
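The response-handling rules above can be sketched as a pure function (illustrative only, with the network call itself left out; `handle_response` is a made-up name):

```python
def handle_response(status, body, max_bytes=1_048_576):
    # Mirrors the web_fetch notes: non-2xx statuses are errors,
    # and the body is capped at max_bytes (1 MB by default).
    if not 200 <= status < 300:
        raise RuntimeError(f"HTTP {status}")
    return body[:max_bytes]
```

The default cap here corresponds to the `tools.web_fetch_max_bytes` setting mentioned above.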

## Provider Tool Support

All LLM providers in LocalGPT support tool calling:

| Provider | Tool Calling |
|----------|--------------|
| Claude CLI | Native support |
| Anthropic API | Native support |
| OpenAI | Native support |
| Ollama | Supported (v0.1.2+); requires Ollama models with tool calling capability |
| GLM (Z.AI) | Native support |

## Tool Execution Flow

When the AI wants to use a tool:

```
User: "What files are in my project?"
         │
         ▼
┌─────────────────┐
│ AI decides to   │
│ use bash tool   │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Tool Request:   │
│ bash            │
│ "ls ~/project"  │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ LocalGPT        │
│ executes cmd    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Tool Result:    │
│ file1.rs        │
│ file2.rs        │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ AI formats      │
│ response        │
└────────┬────────┘
         │
         ▼
"Your project contains file1.rs and file2.rs"
```
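The round trip above can be sketched as a loop (stubbed model and tool executor; the real protocol, message format, and names differ):

```python
def run_agent(model, tools, user_message):
    # Keep calling the model until it answers in plain text
    # instead of requesting another tool call.
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = model(messages)
        if reply.get("tool") is None:
            return reply["content"]                           # final answer
        result = tools[reply["tool"]](**reply["arguments"])   # execute tool
        messages.append({"role": "tool", "content": result})  # feed result back

def stub_model(messages):
    # Toy model: request `ls` once, then summarise the tool result.
    if messages[-1]["role"] == "user":
        return {"tool": "bash", "arguments": {"command": "ls ~/project"}}
    return {"tool": None,
            "content": f"Your project contains {messages[-1]['content']}"}
```

The loop terminates when the model stops requesting tools, which is exactly the handoff shown in the last two boxes of the diagram.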

## Safety Considerations

These measures reduce risk but do not eliminate it. LLMs are probabilistic systems — no prompt or tooling arrangement can guarantee that an AI agent will never take an unintended action.

  • Shell commands run inside a kernel-enforced sandbox — write access limited to workspace, network denied, credentials blocked
  • File tools (write_file, edit_file, read_file) are path-validated and restricted to the workspace
  • Protected files — the agent cannot write to LocalGPT.md, .localgpt_manifest.json, or IDENTITY.md (see LocalGPT.md)
  • No sudo escalation is performed automatically
  • Web requests are outbound only with SSRF protection
  • Memory stays entirely local

Always review agent actions, especially in sensitive environments. The sandbox and protections are a safety net, not a substitute for human oversight.