# Development Setup

## Prerequisites
| Tool | Version | Notes |
|---|---|---|
| Rust | 1.88+ | Pinned via `rust-toolchain` in the repo root |
| Node.js | 20+ | For the dashboard frontend |
| pnpm | latest | Dashboard package manager |
| protobuf-compiler | — | Required by gRPC / Raft transport |
| cmake | — | Required by some native dependencies |
| libclang-dev | — | Required by `bindgen` (Linux only) |
### macOS

```sh
brew install protobuf cmake node
npm i -g pnpm
rustup install 1.88.0
```
### Ubuntu / Debian

```sh
sudo apt install -y protobuf-compiler cmake libclang-dev
npm i -g pnpm
rustup install 1.88.0
```

Note that `apt` does not install Node.js here: install Node.js 20+ first (for example via NodeSource or nvm) so that `npm` is available.
## Clone and Build

```sh
git clone https://github.com/ai-akashic/Memorose.git
cd Memorose
cargo build
```
The workspace contains four crates: `memorose-common`, `memorose-core`, `memorose-server`, and `memorose-gateway`. A plain `cargo build` compiles all of them.
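During iteration it is usually faster to build or test a single crate rather than the whole workspace. Cargo's standard `-p` package filter handles this; the crate names below come from the workspace list above:

```sh
# Build only the server crate (and its dependencies)
cargo build -p memorose-server

# Run the test suite for a single crate
cargo test -p memorose-core
```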
For the dashboard frontend:
```sh
cd dashboard
pnpm install
pnpm build
```
## Configuration
Copy the example files and fill in your LLM API key:
```sh
cp .env.example .env
cp config.example.toml config.toml
```
At minimum, set one of these in `.env`:
```sh
# Gemini (default)
GOOGLE_API_KEY=your_key_here

# Or OpenAI
# LLM_PROVIDER=openai
# OPENAI_API_KEY=your_key_here
```
For development without real LLM calls, enable mock mode in `config.toml`:

```toml
[development]
use_mock_llm = true
```
## Running Locally

### Standalone Mode (single node)
The fastest way to get a running instance:
```sh
./scripts/start_cluster.sh start --mode standalone
```
This starts one server node on port 3000 and the dashboard on port 3100.
### Cluster Mode (3 local nodes)
Simulates a 3-node Raft cluster on one machine:
```sh
./scripts/start_cluster.sh start --clean --build
```
### Other Commands

```sh
# Check process status
./scripts/start_cluster.sh status

# Stop everything
./scripts/start_cluster.sh stop

# Restart with fresh data
./scripts/start_cluster.sh restart --clean
```
### Dashboard Dev Server
For frontend development with hot reload:
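The exact script name depends on `dashboard/package.json`; assuming the usual Next.js convention, a hot-reloading dev server can be started with:

```sh
cd dashboard
pnpm dev   # assumed script name; check package.json if it differs
```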
## Ports

| Service | Port | Notes |
|---|---|---|
| API (node 1) | 3000 | Primary server |
| API (node 2) | 3001 | Cluster mode only |
| API (node 3) | 3002 | Cluster mode only |
| Gateway | 8080 | Stateless request router |
| Dashboard | 3100 | Next.js frontend |
| Raft | 5001–5003 | Internal consensus |
| Metrics | 9090 | Prometheus endpoint |
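Once a node is up, the metrics port offers a quick liveness check. The `/metrics` path follows the Prometheus convention and is an assumption here, not confirmed by the docs:

```sh
# Peek at the exported metrics (path assumed to follow Prometheus convention)
curl -s http://127.0.0.1:9090/metrics | head
```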
## Docker

A `docker-compose.yml` is provided for running the full stack: the gateway, two server nodes, and the dashboard. Make sure your `.env` file has the required API keys before running.
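Assuming the compose file lives at the repo root, the stack can be brought up with the standard Compose commands:

```sh
docker compose up -d --build   # build images and start in the background
docker compose logs -f         # follow logs from all services
docker compose down            # stop and remove the containers
```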
When running the dashboard in Docker, set `DASHBOARD_API_ORIGIN` to point to the backend container (e.g., `http://memorose:3000`). Otherwise the dashboard will try to reach `127.0.0.1:3000` inside its own container and fail.
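A minimal sketch of how that override might look in the compose file; the service names `memorose` and `dashboard` are assumptions and may differ in the actual `docker-compose.yml`:

```yaml
# Hypothetical excerpt from docker-compose.yml
services:
  dashboard:
    environment:
      - DASHBOARD_API_ORIGIN=http://memorose:3000
```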