
LaRuche - Networked Edge AI System


“Plug in AI. That’s it.”

Plug a LaRuche node into your local network and AI becomes available to connected devices. Zero configuration, zero cloud dependency, and privacy-first by design.

Architecture

Local network

  LaRuche Core (LLM/RAG) <---- LAND protocol (mDNS + HTTP) ----> LaRuche Pro (VLM/Code)
            |                                                           |
            +---------------------- Swarm intelligence ------------------+

Clients
  - VS Code extension
  - Web UI
  - CLI / SDK
  - IoT integrations
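
The LAND protocol advertises nodes and their capabilities over mDNS, as shown in the diagram above. As a rough sketch of what such an announcement might carry, the snippet below builds mDNS-style TXT key/value pairs for a node; the service type `_laruche._tcp.local.` and the field names are assumptions for illustration, not the actual constants from land-protocol:

```rust
use std::collections::BTreeMap;

/// Illustrative LAND announcement; field names are assumptions.
struct Announcement {
    node_name: String,
    tier: String,
    api_port: u16,
    capabilities: Vec<String>,
}

impl Announcement {
    /// Render the announcement as mDNS-style TXT key/value pairs.
    fn txt_records(&self) -> BTreeMap<String, String> {
        let mut txt = BTreeMap::new();
        txt.insert("tier".into(), self.tier.clone());
        txt.insert("port".into(), self.api_port.to_string());
        txt.insert("caps".into(), self.capabilities.join(","));
        txt
    }
}

fn main() {
    let ann = Announcement {
        node_name: "laruche-salon".into(),
        tier: "core".into(),
        api_port: 8419,
        capabilities: vec!["llm".into(), "code".into()],
    };
    // A browser on the LAN would discover something like
    // `<node_name>._laruche._tcp.local.` and read these TXT records:
    println!("{}._laruche._tcp.local. -> {:?}", ann.node_name, ann.txt_records());
}
```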

Workspace structure

laruche/
|-- land-protocol/        # Core LAND protocol library
|   `-- src/
|       |-- lib.rs
|       |-- capabilities.rs
|       |-- manifest.rs
|       |-- discovery.rs
|       |-- auth.rs
|       |-- qos.rs
|       |-- swarm.rs
|       `-- error.rs
|
|-- laruche-node/         # Node daemon
|   `-- src/main.rs
|
|-- laruche-client/       # Rust client SDK
|   `-- src/lib.rs
|
|-- laruche-cli/          # CLI tool
|   `-- src/main.rs
|
`-- laruche-dashboard/    # Web dashboard
    `-- src/
        |-- main.rs
        `-- templates/dashboard.html

Quick start

Prerequisites

- Rust toolchain (cargo)
- Ollama installed and running locally (default URL: http://127.0.0.1:11434)

Windows notes:

1. Pull a model

ollama pull mistral

2. Build

cargo check --workspace

3. Run a node

cargo run -p laruche-node

4. Open the dashboard

http://localhost:8419/dashboard

Current implementation notes

Node configuration

The node loads configuration in this order:

  1. laruche.toml (or LARUCHE_CONFIG=<path>)
  2. Environment variables (override file values)
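
The override logic can be sketched as follows. This is a minimal illustration of the precedence order, with the file values hard-coded in place of real TOML parsing; the `LARUCHE_DEFAULT_MODEL` and `LARUCHE_API_PORT` variable names are assumptions, not documented settings:

```rust
use std::env;

/// Subset of the node configuration used in this sketch.
#[derive(Debug, PartialEq)]
struct NodeConfig {
    default_model: String,
    api_port: u16,
}

fn load_config() -> NodeConfig {
    // 1. Values as they would come from laruche.toml.
    let mut cfg = NodeConfig {
        default_model: "mistral".into(),
        api_port: 8419,
    };
    // 2. Environment variables override file values.
    if let Ok(m) = env::var("LARUCHE_DEFAULT_MODEL") {
        cfg.default_model = m;
    }
    if let Ok(p) = env::var("LARUCHE_API_PORT") {
        if let Ok(p) = p.parse() {
            cfg.api_port = p;
        }
    }
    cfg
}

fn main() {
    env::set_var("LARUCHE_API_PORT", "9000");
    println!("{:?}", load_config()); // api_port overridden to 9000
}
```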

Environment variables

LARUCHE_CONFIG points the node at an alternate configuration file; other environment variables override individual values from the file.

Example laruche.toml

node_name = "laruche-salon"
tier = "core"
ollama_url = "http://127.0.0.1:11434"
default_model = "mistral"
api_port = 8419
dashboard_port = 8420

[[capabilities]]
capability = "llm"
model_name = "mistral"
model_size = "7B"
quantization = "Q4_K_M"

[[capabilities]]
capability = "code"
model_name = "deepseek-coder"
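
The `capability` strings above presumably map onto a typed enum inside land-protocol's capabilities module. A sketch of that mapping, with type names assumed for illustration (the `vlm` variant comes from the LaRuche Pro tier in the architecture diagram):

```rust
/// Capabilities a node can advertise; variant set inferred from this
/// README, actual protocol definitions may differ.
#[derive(Debug, PartialEq)]
enum Capability {
    Llm,
    Code,
    Vlm,
}

impl std::str::FromStr for Capability {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s {
            "llm" => Ok(Capability::Llm),
            "code" => Ok(Capability::Code),
            "vlm" => Ok(Capability::Vlm),
            other => Err(format!("unknown capability: {other}")),
        }
    }
}

fn main() {
    // Parse the capability strings used in the example laruche.toml.
    let cap: Capability = "code".parse().unwrap();
    println!("{:?}", cap);
}
```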

API

Core endpoints

Example inference request

{
  "prompt": "Explain ownership in Rust",
  "capability": "code",
  "model": "deepseek-coder",
  "qos": "normal",
  "max_tokens": 1024,
  "temperature": 0.7
}
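
For illustration, the request body above could be assembled like this using only the standard library; a real client would use a serde-derived type, and the `inference_body` helper is hypothetical. The `qos`, `max_tokens`, and `temperature` fields are fixed to the example's values:

```rust
/// Build the documented inference request body as a JSON string.
/// `{:?}` on &str emits a quoted, escaped string, which is valid JSON
/// for ordinary text.
fn inference_body(prompt: &str, capability: &str, model: &str) -> String {
    format!(
        "{{\"prompt\":{:?},\"capability\":{:?},\"model\":{:?},\
         \"qos\":\"normal\",\"max_tokens\":1024,\"temperature\":0.7}}",
        prompt, capability, model
    )
}

fn main() {
    let body = inference_body("Explain ownership in Rust", "code", "deepseek-coder");
    println!("{}", body);
}
```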

VS Code extension

cd laruche-vscode
npm install
npm run compile

Then launch the Extension Development Host with F5 from VS Code.

Rust client SDK

use laruche_client::{Cap, LaRuche};

#[tokio::main]
async fn main() {
    // Find a LaRuche node on the local network via LAND discovery.
    let client = LaRuche::discover().await.unwrap();
    // Route the prompt to a node advertising the Code capability.
    let answer = client.ask_with("Write a Rust iterator", Cap::Code).await.unwrap();
    println!("{}", answer.text);
}

Notes