Building llmsh: Natural Language Commands for the Terminal
As someone who spends a lot of time in the terminal, I often find myself needing to construct complex shell commands. Sometimes I know what I want to do, but I’m not sure of the exact syntax or flags needed. Other times, I remember a command exists but can’t recall its name. This friction was slowing me down, especially when working on automation tasks or exploring new systems.
That’s when I decided to build llmsh - a zsh plugin that uses LLMs to transform natural language descriptions into ready-to-run shell commands. The idea was simple: describe what you want to do, press a hotkey, and get concrete command suggestions that you can review and execute.
The Problem
Before building llmsh, my workflow for finding or constructing commands looked like this:
- Try to remember the exact command syntax
- If that fails, search through command history (`history | grep`)
- If that fails, search online or check man pages
- Construct the command, test it, and refine it
This process was time-consuming and broke my flow. I wanted something that could:
- Understand natural language descriptions
- Work entirely within the terminal
- Integrate seamlessly with my existing workflow
- Support both local and remote LLM instances
- Allow me to review suggestions before executing
Why LLMs for Command Generation?
Large Language Models excel at understanding context and generating syntactically correct code. They’ve been trained on vast amounts of code and documentation, making them surprisingly good at generating shell commands from natural language descriptions.
The key insight was that command generation is a perfect use case for LLMs:
- Shell commands have well-defined syntax
- LLMs understand context (file operations, system administration, etc.)
- They can suggest multiple alternatives
- They can explain what commands do
Architecture and Design
llmsh is built as a zsh plugin that integrates with Oh-My-Zsh. Here’s how it works:
Core Components
- Python API Client (`llmsh_api.py`): Handles communication with Ollama-compatible endpoints
- Zsh Plugin (`llmsh.plugin.zsh`): Provides the shell integration and keybinding
- fzf Integration: Allows fuzzy selection of command suggestions
- Configuration System: Uses XDG config directory for clean setup
The Flow
User types description → Presses Ctrl+O →
Python client sends request to Ollama →
LLM generates command suggestions →
fzf displays options →
User selects command →
Command inserted into prompt
Key Design Decisions
Ollama Compatibility: I chose to support Ollama-compatible APIs because:
- Ollama makes it easy to run models locally
- It supports remote instances with authentication
- The API is simple and well-documented
- It’s free and open-source
- Data sovereignty: You maintain full control over your data, which is crucial for a tool that processes command descriptions and potentially sensitive system information
fzf Integration: Using fzf for command selection provides:
- Fuzzy search through suggestions
- Keyboard navigation
- Visual feedback
- Familiar interface for terminal users
XDG Config: Storing configuration in ~/.config/llmsh/ instead of in ~/.zshrc has several benefits:
- Keeps `~/.zshrc` clean
- Follows XDG directory standards
- Makes configuration portable
- Easier to manage and version control
Implementation Details
Python API Client
The Python client handles the LLM communication, sending natural language queries to Ollama-compatible endpoints and receiving command suggestions in return. The client supports bearer token authentication for remote instances, configurable timeouts, comprehensive error handling and logging, and can request multiple command suggestions at once.
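Since the endpoints are Ollama-compatible, the request shape the client sends follows the standard Ollama generate API. As a rough sketch of that exchange (the model name and prompt are just examples, and a real client would JSON-escape the prompt properly):

```shell
# Build the JSON body that gets POSTed to an Ollama-compatible
# /api/generate endpoint. Field names follow the Ollama REST API;
# the model name is only an example, and a real client should
# JSON-escape the prompt rather than splicing it in with printf.
build_payload() {
  local model="$1" prompt="$2"
  printf '{"model": "%s", "prompt": "%s", "stream": false}' "$model" "$prompt"
}

build_payload "llama3" "show disk usage sorted by size"

# Sending it by hand (remote instances add the bearer token from the config):
#   curl -s http://localhost:11434/api/generate \
#        -H "Authorization: Bearer $LLMSH_TOKEN" \
#        -d "$(build_payload llama3 'show disk usage sorted by size')"
```

The response contains the generated text in a `response` field, which the client parses into individual command suggestions.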
Zsh Integration
The zsh plugin provides seamless shell integration. When triggered, it captures the current command line buffer, sends it to the Python client for processing, displays the returned suggestions in fzf for selection, and then inserts the chosen command back into the prompt, ready for execution or further editing.
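The wiring for this can be sketched as a zle widget along these lines (function name, binding, and the `$LLMSH_DIR` variable are illustrative, not the plugin's actual internals):

```shell
# Sketch of the zsh widget flow: buffer -> Python client -> fzf -> buffer.
llmsh-widget() {
  local suggestions selected
  # 1. Hand the current command-line buffer to the Python client,
  #    which prints one suggestion per line. ($LLMSH_DIR is a placeholder.)
  suggestions="$(python3 "$LLMSH_DIR/llmsh_api.py" "$BUFFER")" || return
  # 2. Let the user fuzzy-pick one suggestion.
  selected="$(printf '%s\n' "$suggestions" | fzf --height 40%)" || return
  # 3. Put the chosen command back on the prompt for review or editing.
  BUFFER="$selected"
  zle reset-prompt
}

# Register the widget and bind it to Ctrl+O (only meaningful under zsh).
if [ -n "$ZSH_VERSION" ]; then
  zle -N llmsh-widget
  bindkey '^O' llmsh-widget
fi
```

The key property is that the suggestion only lands in the buffer, never executes automatically: the user still has to press Enter.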
Configuration
Configuration is stored in ~/.config/llmsh/config.zsh using environment variables. This approach makes it easy to switch between different Ollama instances, use different models for different use cases, configure timeouts and suggestion counts, and share configurations across systems. The XDG config directory keeps everything organized and separate from shell configuration files.
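A config file along these lines lives at ~/.config/llmsh/config.zsh; only `LLMSH_TOKEN` is named elsewhere in this post, so treat the other variable names as illustrative placeholders rather than the plugin's exact ones:

```shell
# ~/.config/llmsh/config.zsh
# Variable names other than LLMSH_TOKEN are illustrative placeholders.
export LLMSH_HOST="http://localhost:11434"  # Ollama-compatible endpoint
export LLMSH_MODEL="llama3"                 # model used for suggestions
export LLMSH_TIMEOUT=30                     # request timeout in seconds
export LLMSH_SUGGESTIONS=3                  # number of commands to request
export LLMSH_TOKEN=""                       # bearer token for remote instances
```

Swapping endpoints or models then means editing one file, with no changes to ~/.zshrc.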
Practical Use Cases
Here are some real-world scenarios where llmsh has been helpful:
Finding Files
Instead of remembering find syntax:
Description: "find all large files over 500MB"
Result: find . -type f -size +500M
System Administration
For system monitoring tasks:
Description: "show disk usage sorted by size"
Result: du -h | sort -rh | head -20
Text Processing
For data manipulation:
Description: "count lines in all Python files"
Result: find . -name "*.py" -exec wc -l {} + | tail -1
Network Operations
For network troubleshooting:
Description: "show all listening ports"
Result: netstat -tulpn | grep LISTEN
Challenges and Solutions
Challenge: Response Time
Problem: LLM API calls can be slow, especially with remote instances.
Solution:
- Added a spinner to show progress
- Made timeout configurable
- Support for local Ollama instances for faster responses
- Caching could be added in the future
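The spinner from the first bullet can be sketched as a small shell helper (illustrative, not the plugin's actual code): run the slow call in the background and animate until it finishes.

```shell
# Run a command while animating a spinner on stderr; the command's
# exit status is preserved. Uses bash substring expansion.
with_spinner() {
  "$@" &
  local pid=$! frames='|/-\' i=0
  while kill -0 "$pid" 2>/dev/null; do
    printf '\r%s' "${frames:$((i++ % 4)):1}" >&2
    sleep 0.1
  done
  printf '\r' >&2
  wait "$pid"
}
```

Writing the animation to stderr keeps stdout clean, so the command's output can still be captured normally.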
Challenge: Command Quality
Problem: LLMs sometimes generate incorrect or unsafe commands.
Solution:
- Always show suggestions in fzf for review before execution
- User must explicitly select and execute commands
- Multiple suggestions allow comparison
- Clear documentation that users should review commands
Challenge: Authentication
Problem: Remote Ollama instances need secure authentication.
Solution:
- Bearer token support via `LLMSH_TOKEN`
- Token stored in config file (not in shell history)
- Support for both authenticated and unauthenticated instances
Lessons Learned
Start Simple
The initial version was much simpler than the current implementation. I started with basic Ollama integration and added features incrementally:
- Basic API client
- Zsh integration
- fzf integration
- Configuration system
- Error handling and logging
This iterative approach made the project manageable and allowed me to use it while developing it.
User Experience Matters
Even for a terminal tool, UX is important:
- Clear error messages
- Visual feedback (spinner during API calls)
- Familiar interfaces (fzf)
- Configurable defaults
These small touches make the tool more pleasant to use.
Documentation is Crucial
Good documentation makes tools accessible:
- Clear installation instructions
- Example use cases
- Configuration guide
- Troubleshooting section
The README and setup script make it easy for others to use the tool.
Integration Over Isolation
Building llmsh as a zsh plugin rather than a standalone script:
- Integrates with existing workflows
- Uses familiar tools (fzf, zsh)
- Doesn’t require learning new interfaces
- Feels natural to terminal users
Future Improvements
Some ideas for future enhancements:
- Command History Integration: Learn from previously executed commands
- Context Awareness: Consider current directory, git status, etc.
- Command Explanation: Show what each command does before execution
- Multi-line Commands: Support for complex command pipelines
- Model Switching: Easy switching between different LLM models
- Caching: Cache common queries for faster responses
Conclusion
Building llmsh was a great learning experience that solved a real problem in my daily workflow. It demonstrates how LLMs can enhance terminal productivity without replacing the user’s control or understanding.
The key takeaways:
- LLMs excel at code generation tasks - Shell commands are a perfect fit
- Integration matters - Working within existing tools (zsh, fzf) feels natural
- User control is essential - Always review suggestions before execution
- Simple tools can be powerful - A small plugin can significantly improve workflow
If you’re interested in trying llmsh, you can find it on GitHub. The installation is straightforward, and it works with any Ollama-compatible endpoint.
The terminal is a powerful tool, and sometimes the best way to make it more powerful is to add a layer of intelligence that understands what you’re trying to accomplish, not just what you’re typing.
Note: llmsh is open-source and available under the MIT license. Always review generated commands before executing them, especially when working with production systems or sensitive data.