“Plug in AI. That’s it.”
Plug a LaRuche node into your local network and AI becomes available to connected devices. Zero configuration, zero cloud dependency, and privacy-first by design.
Local network:

LaRuche Core (LLM/RAG) <---- LAND protocol (mDNS + HTTP) ----> LaRuche Pro (VLM/Code)
          |                                                            |
          +--------------------- Swarm intelligence -------------------+
Clients:
- VS Code extension
- Web UI
- CLI / SDK
- IoT integrations
laruche/
|-- land-protocol/ # Core LAND protocol library
| `-- src/
| |-- lib.rs
| |-- capabilities.rs
| |-- manifest.rs
| |-- discovery.rs
| |-- auth.rs
| |-- qos.rs
| |-- swarm.rs
| `-- error.rs
|
|-- laruche-node/ # Node daemon
| `-- src/main.rs
|
|-- laruche-client/ # Rust client SDK
| `-- src/lib.rs
|
|-- laruche-cli/ # CLI tool
| `-- src/main.rs
|
`-- laruche-dashboard/ # Web dashboard
`-- src/
|-- main.rs
`-- templates/dashboard.html
Windows notes:
- MSVC toolchain: `rustup default stable-x86_64-pc-windows-msvc`
- GNU toolchain: add `C:\msys64\mingw64\bin` to `PATH`, then `rustup default stable-x86_64-pc-windows-gnu`

Pull a model with Ollama, then build and run:

ollama pull mistral
cargo check --workspace
cargo run -p laruche-node
Then open http://localhost:8419/dashboard in a browser.
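Once the node is up, a quick smoke test from another shell (assumes the default API port 8419; the fallback message fires when no node is reachable):

```shell
# Smoke-test a running node; /health returns plain text "OK".
HEALTH=$(curl -s http://localhost:8419/health || echo "no node reachable on :8419")
echo "health: $HEALTH"
```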
Discovery and health:
- Nodes re-announce every 2s to keep nodes visible; mDNS entries expire after 45s (from land-protocol).
- `/swarm` includes the node port and merged capabilities (HTTP + mDNS fallback).
- `/health` returns plain text `OK`.
- A peer is removed after 10 missed polls (at the slower 12s cadence: 6 polls).

The node loads configuration in this order:
1. `laruche.toml` (or `LARUCHE_CONFIG=<path>`)
2. Environment variables:
   - `LARUCHE_NAME` (default: random `laruche-xxxxxx`)
   - `LARUCHE_TIER` (`nano|core|pro|max`)
   - `LARUCHE_PORT` (default: 8419)
   - `LARUCHE_DASH_PORT` (default: 8420)
   - `OLLAMA_URL` (default: `http://127.0.0.1:11434`)
   - `LARUCHE_MODEL` (default model name)
   - `LARUCHE_CAP` (primary capability, example: `llm`)
   - `LARUCHE_CAP2` + `LARUCHE_MODEL2` (optional second capability/model)

Example `laruche.toml`:

node_name = "laruche-salon"
tier = "core"
ollama_url = "http://127.0.0.1:11434"
default_model = "mistral"
api_port = 8419
dashboard_port = 8420
[[capabilities]]
capability = "llm"
model_name = "mistral"
model_size = "7B"
quantization = "Q4_K_M"
[[capabilities]]
capability = "code"
model_name = "deepseek-coder"
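For quick experiments the same settings can come from environment variables instead of `laruche.toml`. A minimal sketch using the variable names from the list above (values mirror the example config):

```shell
# Illustrative values; variable names are from the configuration list above.
export LARUCHE_NAME=laruche-salon
export LARUCHE_TIER=core
export LARUCHE_PORT=8419
export LARUCHE_DASH_PORT=8420
export OLLAMA_URL=http://127.0.0.1:11434
export LARUCHE_CAP=llm
export LARUCHE_MODEL=mistral
# then start the node: cargo run -p laruche-node
```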
API endpoints:
- `GET /` - Node status (CPU/RAM, queue, capabilities)
- `GET /health` - Health check (`OK`)
- `GET /nodes` - Discovered peers (mDNS view)
- `GET /swarm` - Collective view (self + peers, with port)
- `GET /models` - Local Ollama models
- `GET /swarm/models` - Models across the swarm
- `POST /infer` - Inference request
- `GET /activity` - Recent node activity log
- `POST /auth/request` - Auth request (POC)
- `POST /auth/approve` - Approve an auth request (POC)
- `GET /dashboard` - Embedded dashboard

Example `POST /infer` body:

{
"prompt": "Explain ownership in Rust",
"capability": "code",
"model": "deepseek-coder",
"qos": "normal",
"max_tokens": 1024,
"temperature": 0.7
}
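The same request can be sent with curl. A sketch assuming a node on the default API port 8419 (the fallback message fires when no node is reachable):

```shell
# POST the example body to /infer on a local node (default port 8419 assumed).
RESP=$(curl -s -X POST http://localhost:8419/infer \
  -H 'Content-Type: application/json' \
  -d '{
        "prompt": "Explain ownership in Rust",
        "capability": "code",
        "model": "deepseek-coder",
        "qos": "normal",
        "max_tokens": 1024,
        "temperature": 0.7
      }' || echo "no node reachable on :8419")
echo "$RESP"
```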
cd laruche-vscode
npm install
npm run compile
Then launch the Extension Development Host with F5 from VS Code.
use laruche_client::{Cap, LaRuche};

#[tokio::main]
async fn main() {
    // Find a LaRuche node on the local network (mDNS discovery).
    let client = LaRuche::discover().await.unwrap();
    // Route the prompt to a node advertising the `code` capability.
    let answer = client.ask_with("Write a Rust iterator", Cap::Code).await.unwrap();
    println!("{}", answer.text);
}
The workspace pulls `land-protocol` from GitHub via `workspace.dependencies`.