Local. Autonomous. Unrestricted. Transform your local AI into a powerful System Agent with full file access, long-term memory, and real-time web connectivity. Works with Ollama and LM Studio.
Smart VRAM recovery automatically unloads idle models when GPU memory fills up. The agent autonomously decides which facts to remember, persisting them locally in ~/.config/xkaliber-agent/.
A real-time [X MEMS] counter and a flashing [SAVING...] indicator show memory activity. A one-click wipe button instantly clears all session data and the on-disk vector database.
Run shell commands directly on your host machine. A private sudo UI field handles root privileges safely via sudo -S, without ever writing the password to logs or neural memory.
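The stdin-based pattern behind sudo -S can be sketched as follows. This is an illustrative shell sketch of the general technique, not the agent's actual implementation; the variable name is hypothetical.

```shell
# Illustrative sketch of the sudo -S pattern (not the agent's actual code).
# The UI field's value is piped to sudo's stdin; -p '' suppresses the prompt:
#   printf '%s\n' "$SUDO_PASS" | sudo -S -p '' systemctl status ssh
# The variable is cleared immediately so the password never lingers:
SUDO_PASS="example-password"
unset SUDO_PASS
[ -z "${SUDO_PASS:-}" ] && echo "password cleared"
```

Because the password travels over a pipe and the variable is unset right away, nothing is left behind in shell history or logs.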
Real-time web search via DuckDuckGo with no API keys required. Inject live data into the neural stream for up-to-date responses.
Link your account via QR code to send autonomous notifications directly to your phone. Perfect for alerts and system updates.
High-quality, 100% offline Text-to-Speech synthesis. Fast and reliable voice generation without any internet connection.
Full Read, Write, List, and Delete capabilities. Navigate and manipulate local directory structures seamlessly.
Drag-and-drop images or text files for instant vision analysis. Works with Llava, Bakllava, and other vision models.
Turn your host into a localized AI server. Mobile-optimized with dynamic viewports, anti-zoom scaling, and touch-friendly UI.
Toggle the Sys-Access harness for true AI agency. The agent runs in a continuous loop, selecting tools as needed to complete complex requests.
Make raw HTTP requests to interact with external REST APIs or local services. Seamless integration with your workflow.
After installation, type 'xagent' to launch the agent in your terminal. Full-featured command-line interface for power users.
Download and install Ollama from the official website. Ollama is required to run local AI models.
# Download from: https://ollama.com/download
# Or use winget: winget install Ollama.Ollama
Download the all-minilm model for embeddings (required for memory functionality).
ollama pull all-minilm
# Recommended models for best experience:
ollama pull gemma3
ollama pull qwen2.5
ollama pull qwen3
Ensure you have Node.js v18+ installed for running Electron.
# Download from: https://nodejs.org/
# Verify installation:
node --version
npm --version
Clone the repository and install npm dependencies.
git clone https://github.com/sneha-yadav1111/xkaliber-agent.git
cd xkaliber-agent
npm install
Start the Electron application.
npm start
Package the application into a standalone .exe file.
npm run dist
Install Ollama using Homebrew or download from the website.
# Using Homebrew (recommended):
brew install ollama
# Or download from: https://ollama.com/download
Download the all-minilm model and recommended LLMs.
ollama pull all-minilm
# Recommended models:
ollama pull gemma3
ollama pull qwen2.5
ollama pull qwen3
Install Node.js using Homebrew.
brew install node
# Verify:
node --version
npm --version
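To check the v18+ requirement programmatically, you can parse the version string yourself. A small sketch; the hard-coded version below stands in for the real output of `node --version`.

```shell
# Minimal check that a Node.js version string meets the v18 minimum.
ver="v18.19.0"                 # substitute: ver="$(node --version)"
major="${ver#v}"               # strip the leading "v"
major="${major%%.*}"           # keep only the major component
if [ "$major" -ge 18 ]; then
  echo "Node.js OK"
else
  echo "Node.js too old; v18+ required"
fi
```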
Clone the repository and install dependencies.
git clone https://github.com/sneha-yadav1111/xkaliber-agent.git
cd xkaliber-agent
npm install
Start the Electron application.
npm start
Package the application into a .app file.
npm run dist
Install Ollama using the official installation script.
curl -fsSL https://ollama.com/install.sh | sh
Download the all-minilm model and recommended LLMs.
ollama pull all-minilm
# Recommended models:
ollama pull gemma3
ollama pull qwen2.5
ollama pull qwen3
Install Node.js using your package manager.
# Ubuntu/Debian:
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
# Arch:
sudo pacman -S nodejs npm
# Fedora:
sudo dnf install nodejs
Clone the repository and install dependencies.
git clone https://github.com/sneha-yadav1111/xkaliber-agent.git
cd xkaliber-agent
npm install
Start the Electron application.
npm start
For command-line access, ensure xagent is in your PATH.
# After installation, type:
xagent
# This launches the agent in your terminal
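If the command is not found, append its install directory to PATH. The directory below is hypothetical; use wherever your installation actually placed the binary.

```shell
# Hypothetical install location -- adjust to where the binary actually lives.
dir="/opt/xkaliber-agent/bin"
case ":$PATH:" in
  *":$dir:"*) echo "already on PATH" ;;
  *)          PATH="$PATH:$dir"; export PATH; echo "added to PATH" ;;
esac
```

The `case` guard avoids appending the same directory twice if you re-run the snippet.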
Create .AppImage or .deb package.
npm run dist
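`npm run dist` is conventionally wired to electron-builder, which reads its packaging targets from package.json. A hypothetical `build` section might look like the fragment below; the field values are illustrative, not necessarily what this repository ships.

```json
{
  "build": {
    "appId": "com.example.xkaliber-agent",
    "linux": {
      "target": ["AppImage", "deb"]
    }
  }
}
```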