# Install ATOM
ATOM is your private, local AI terminal: no cloud, no surveillance, no nonsense.
## Requirements

- Python 3.10+
- Ollama (for local LLMs like Gemma or Mistral)
- Node.js + npm (for the frontend UI)
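Before installing anything, it can help to confirm the prerequisites are actually on your `PATH`. The version flags below are the standard ones for each tool; nothing here is ATOM-specific:

```bash
# Sanity-check the prerequisites (standard version flags)
python3 --version   # should print 3.10 or newer
node --version
npm --version
ollama --version    # only needed if you plan to run local models
```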
## Quick Start (One-Liner)

```bash
git clone https://github.com/cipherswitch/atom && cd atom && ./run.sh
```
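If `./run.sh` fails with a permission error, the execute bit was probably lost during checkout; restoring it is a plain shell fix, not anything ATOM-specific:

```bash
# Make the launcher executable, then start ATOM
chmod +x run.sh && ./run.sh
```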
## Manual Setup
If you prefer setting things up step-by-step:
### 1. Clone the Repo

```bash
git clone https://github.com/cipherswitch/atom
cd atom
```
### 2. Backend Setup (FastAPI)

```bash
cd backend
pip install -r requirements.txt
```

Then run it:

```bash
uvicorn main:app --reload
```
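To check that the backend came up, you can probe FastAPI's auto-generated docs page. This assumes Uvicorn's default port of 8000 and that ATOM doesn't override the default `/docs` route:

```bash
# FastAPI serves interactive API docs at /docs by default
curl -I http://localhost:8000/docs
```

A `200 OK` here means the API is listening; adjust the port if you started Uvicorn with `--port`.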
### 3. Frontend Setup (Vite + React)

```bash
cd ../frontend
npm install
npm run dev
```
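Vite prints its local URL when the dev server starts. If something else is already using port 5173, you can pin the port with Vite's standard `--port` flag, passed through `npm run dev` with `--`; this is generic Vite behavior, not ATOM-specific:

```bash
# Keep the dev server on the port the rest of this guide assumes
npm run dev -- --port 5173
```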
### 4. Optional: Ollama + Models

Install [Ollama](https://ollama.com) and pull a model:

```bash
ollama run gemma:7b
```
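`ollama run` downloads the model on first use and then opens an interactive chat. If you only want the weights available for ATOM without chatting in the terminal, the standard `pull` and `list` subcommands cover that:

```bash
ollama pull gemma:7b   # download the model without starting a chat
ollama list            # confirm which models are available locally
```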
Supported models:

- Gemma
- Mistral
- Mixtral
- Phi-2 / Phi-3
- LLaMA 3 (experimental)
## Test It Out
Once both servers are running:
- Visit http://localhost:5173
- Drop in a file
- Ask ATOM something like:
  - "Summarize this PDF."
  - "Who is mentioned in the doc?"
## Features You Just Enabled
- File chunking + memory injection
- Local LLM chat
- Modular tool support (e.g., TTS, search)
- Persona tone + streaming (optional)
- Real-time logs + model switching