// MEMBER GUIDE · AI AGENT

SECURE
ALWAYS-ON
LOCAL AI AGENT

A private, sandboxed AI agent that runs 24/7 on hardware you own. Uses NVIDIA NemoClaw to orchestrate OpenShell, OpenClaw, and the Nemotron 3 Super 120B model.

NO GPU REQUIRED · SANDBOXED · TELEGRAM BRIDGE · OPEN SOURCE

// STACK ARCHITECTURE

  ┌────────────────────────────────────────────────────┐
  │                   YOUR HARDWARE                    │
  │                                                    │
  │   ╔════════════════════════════════════════════╗   │
  │   ║  NemoClaw · orchestrator & onboarding      ║   │
  │   ╚══════════════════╤═════════════════════════╝   │
  │                      │                             │
  │   ┌──────────────────┴─────────────────────────┐   │
  │   │  OpenShell · sandbox (net + fs isolation)  │   │
  │   └──────────────────┬─────────────────────────┘   │
  │                      │                             │
  │   ┌──────────────────┴─────────────────────────┐   │
  │   │  OpenClaw · agent loop, tools, memory      │   │
  │   └──────────────────┬─────────────────────────┘   │
  │                      │                             │
  └──────────────────────┼─────────────────────────────┘
                         │
           ╔═════════════╧══════════════╗
           ║  Nemotron 3 Super 120B     ║
           ║                            ║
           ║  [  NIM cloud  ]  or       ║
           ║  [  local Ollama  ]        ║
           ╚════════════════════════════╝
01

CHOOSE YOUR PATH

Pick how you want inference to run. Everything else on the page is identical.

RECOMMENDED

PATH A · NIM

CLOUD INFERENCE

The agent, sandbox, and orchestrator run on your hardware; only inference calls go out to NVIDIA's hosted NIM endpoint. Works on any Docker-capable machine, no GPU needed.

  • + No GPU required
  • + ~10 GB disk footprint
  • + Fastest setup
  • − Prompts leave your device
  • − Needs internet + API key

ADVANCED

PATH B · OLLAMA

LOCAL GPU INFERENCE

Everything runs on your metal — model, agent, sandbox. Nothing leaves the device, ever. Needs an NVIDIA GPU with enough VRAM for a 120B model (DGX Spark or equivalent).

  • + Fully offline capable
  • + No per-token cost
  • + Zero data egress
  • − Requires capable GPU
  • − ~87 GB model download
02

PREREQUISITES

PATH A · NIM

  • > Any Linux / macOS machine with Docker
  • > ~10 GB free disk
  • > Reliable internet
  • > NVIDIA NIM API key (build.nvidia.com)

PATH B · OLLAMA

  • > Linux host with NVIDIA GPU
  • > NVIDIA drivers + Container Toolkit
  • > ~100 GB free disk
  • > DGX Spark / equivalent recommended

A Telegram account is optional but recommended if you want to message your agent from anywhere.
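Before starting either path, a small preflight script can save a failed install. This is a hypothetical helper (not part of NemoClaw); the `check` function and labels are ours:

```shell
#!/bin/sh
# preflight.sh — hypothetical helper, not shipped with NemoClaw.
# Prints one status line per prerequisite from section 02.

check() {                       # usage: check LABEL COMMAND [ARGS...]
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "[ok]   $label"
  else
    echo "[miss] $label"
  fi
}

check "docker cli present"            command -v docker
check "docker daemon up"              docker info
check "nvidia driver (Path B only)"   nvidia-smi
```

Any `[miss]` line on a requirement for your chosen path should be fixed before step 4.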

// PREREQUISITE · CLI ACCESS

Want to drive the agent from a terminal? Install the openclaude CLI (a one-line install on Windows PowerShell, macOS, or Linux) and point it at https://llm.web3claw.net/v1 to reach NVIDIA Nemotron through the Web3Claw LiteLLM proxy: one key, every model, no local GPU required. Full setup is in step 6.3.

03

THE STACK

Four layers, each with a single job.

NEMOCLAW

ORCHESTRATOR

Wires the other pieces together. Lifecycle, policy, updates, onboarding wizard.

OPENSHELL

SECURITY RUNTIME

Sandboxes the agent. Network + filesystem isolation. Nothing reaches the internet without your approval.

OPENCLAW

AGENT FRAMEWORK

Defines the agent loop — tools, prompts, memory, actions. This is what actually "thinks" on your behalf.

NEMOTRON 3 SUPER 120B

REASONING MODEL

NVIDIA's open-weights 120B-parameter model. Served through NIM (Path A) or local Ollama (Path B).

04

INSTALL

4.1 Docker sanity check

Both paths need Docker. Confirm it's working:

docker run --rm hello-world

Path B users: also confirm GPU passthrough:

docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

4.2 Install NemoClaw

One command pulls the orchestrator, OpenShell runtime, and OpenClaw agent framework:

curl -fsSL https://get.nemoclaw.dev/install.sh | sh
nemoclaw init

The wizard asks for sandbox policies and your inference backend. Pick nim or ollama.

4.3 Wire up inference

PATH A · NIM

Grab an API key from build.nvidia.com, then:

nemoclaw config set inference.backend nim
nemoclaw config set inference.nim.api_key $NIM_API_KEY
nemoclaw config set inference.nim.model nemotron3-super-120b
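To sanity-check the key outside NemoClaw, you can call NIM's OpenAI-compatible endpoint directly. A sketch, assuming the same model id configured above and `$NIM_API_KEY` in the environment:

```shell
# Build the request body, then POST it to NIM's chat completions endpoint.
payload='{"model":"nemotron3-super-120b","messages":[{"role":"user","content":"ping"}],"max_tokens":8}'

curl -s https://integrate.api.nvidia.com/v1/chat/completions \
  -H "Authorization: Bearer $NIM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$payload"
```

A JSON response with a `choices` array means the key and model id are good; an authorization error means the key is wrong.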

PATH B · OLLAMA

Install Ollama, pull the 120B model (~87 GB), and point NemoClaw at it:

curl -fsSL https://ollama.com/install.sh | sh
ollama pull nemotron3-super:120b
nemoclaw config set inference.backend ollama
nemoclaw config set inference.ollama.endpoint http://localhost:11434
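Before pointing NemoClaw at Ollama, you can confirm the model is actually being served on the default port. These are standard Ollama REST routes; the model tag matches the pull above:

```shell
# List pulled models — nemotron3-super:120b should appear in the output.
curl -s http://localhost:11434/api/tags

# One-shot generation through the REST API (no streaming).
curl -s http://localhost:11434/api/generate \
  -d '{"model": "nemotron3-super:120b", "prompt": "ping", "stream": false}'
```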

05

VERIFY

The built-in doctor checks every layer end-to-end:

nemoclaw doctor

EXPECTED OUTPUT

[ok]  docker daemon reachable
[ok]  openshell sandbox active
[ok]  openclaw agent process up
[ok]  inference backend: nim (or ollama)
[ok]  nemotron-3-super reachable
[ok]  web ui at http://localhost:7860

Any red line is a blocker — fix it before enabling access methods.
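If you run the agent under cron or CI, the same check can be scripted: any line that is not `[ok]` fails the run. A sketch, assuming the doctor output format shown above:

```shell
# Exit non-zero (and print the offending lines) if doctor reports any problem.
if nemoclaw doctor | grep -v '^\[ok\]'; then
  echo "doctor reported problems: fix the lines above" >&2
  exit 1
fi
echo "all checks green"
```

`grep -v` prints only the non-`[ok]` lines and succeeds when it finds any, which is exactly the failure case.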

06

ACCESSING THE AGENT

Three built-in interfaces. Pick whichever fits the moment.

WEB UI

localhost:7860

TELEGRAM

@your_bot

CLI / SSH

nemoclaw chat

6.1 Web UI

Opens on http://localhost:7860. For remote access, SSH-tunnel it — do not publish the port:

ssh -L 7860:localhost:7860 user@your-agent-host
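If you tunnel often, a host alias keeps the incantation short. A sketch for `~/.ssh/config` (the `agent-ui` alias and host name are placeholders):

```
Host agent-ui
    HostName your-agent-host
    User user
    LocalForward 7860 localhost:7860
```

After that, `ssh agent-ui` opens the tunnel and http://localhost:7860 works from your laptop.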

6.2 Telegram bridge

you ───▶ @BotFather ───▶ token
                           │
                           ▼
nemoclaw config ───▶ telegram.token = "…"
                           │
                           ▼
phone ◀─── @your_bot ◀─── your local agent

Create a bot with @BotFather, copy the token, feed it to NemoClaw:

nemoclaw config set telegram.token $TELEGRAM_BOT_TOKEN
nemoclaw restart telegram

Test with /start. If silent, check nemoclaw logs telegram — usually a bad token or blocked outbound HTTPS.
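The token itself can be verified against the Telegram Bot API directly, independent of NemoClaw (standard `getMe` method; needs outbound HTTPS):

```shell
# A valid token returns {"ok":true,...,"username":"your_bot"};
# {"ok":false,...} means the token is wrong.
curl -s "https://api.telegram.org/bot${TELEGRAM_BOT_TOKEN}/getMe"
```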

6.3 Terminal / CLI access

Get a free, personal AI key and start chatting with NVIDIA's top model from your terminal. Three clicks and one copy-paste.

1

Connect your wallet

The same wallet you used to register on Web3Claw. No gas, no transaction — just a signature to prove it's you.

2

Get your free AI key

Click the button. You'll be asked to sign a message (it says exactly what you're signing — no gas). In return you get a personal key.

3

Install the AI agent (one line)

Pick your operating system. The command below is auto-filled with your key — just copy, paste, and press Enter.

Step A. Need Node.js first? Download Node.js LTS and run the installer (accept defaults). Skip this if you already have it.

Step B. Open PowerShell (Start menu → type "PowerShell" → hit Enter). Paste this one line:

$env:W3C_KEY="(get your key in step 2)"; iwr https://web3claw.net/install-ai.ps1 -UseBasicParsing | iex

It'll install openclaude, save your key, and hook it into your PowerShell profile. Close PowerShell, open a new one, then type:

openclaude

If PowerShell complains about script execution policy, run this once (a one-time unlock for user-scope scripts; no admin rights needed):

Set-ExecutionPolicy -Scope CurrentUser RemoteSigned

That's it. Ask it anything:

  • "Summarize this folder"
  • "Help me write a Python script that downloads my emails"
  • "Explain this error I just got"

Lost your key?

Come back, connect the same wallet, click GET MY AI KEY. If it detects an existing key it'll show it again. If you want a new one, click ROTATE.

Does it cost anything?

No. Web3Claw covers the NVIDIA bill for registered members. If you ever want your own NVIDIA account, see the advanced option below.

ADVANCED · Bring your own NVIDIA key (optional)

If you'd rather use your own NVIDIA account (you get 4,000 free requests per month at build.nvidia.com), skip steps 1 and 2 above and set three environment variables by hand instead:

export OPENAI_API_KEY="nvapi-..."                               # your NVIDIA key
export OPENAI_BASE_URL="https://integrate.api.nvidia.com/v1"    # straight to NVIDIA
export OPENAI_MODEL="nvidia/llama-3.3-nemotron-super-49b-v1"

Then install openclaude once:

npm install -g @gitlawb/openclaude

Everything else works the same.
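Before launching, it's worth confirming that all three variables actually made it into the shell. A small sketch (POSIX sh):

```shell
# Report which of the three variables are set; MISSING means re-run the export.
for v in OPENAI_API_KEY OPENAI_BASE_URL OPENAI_MODEL; do
  eval "val=\${$v:-}"
  if [ -n "$val" ]; then echo "$v: set"; else echo "$v: MISSING"; fi
done
```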

07

POLICY APPROVALS

OpenShell denies external access by default. When the agent wants to touch the outside world, it asks first.

┌─────────────────────────────────────────────────┐
│  POLICY APPROVAL REQUIRED                       │
├─────────────────────────────────────────────────┤
│  agent wants to reach:   api.coingecko.com      │
│  purpose:                read btc price         │
│  scope:                  this domain only       │
│                                                 │
│      [ ALLOW ONCE ]    [ ALLOW ALWAYS ]         │
│              [ DENY ]                           │
└─────────────────────────────────────────────────┘

Approve once per domain or path; OpenShell remembers your decision. The policy ledger at ~/.nemoclaw/policies.log is append-only — every decision is auditable.
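Because the ledger is plain append-only text, standard tools can audit it. A sketch (the exact line format inside the log is an assumption; inspect your own file first):

```shell
# Most recent decisions, newest last.
tail -n 5 ~/.nemoclaw/policies.log

# How many grants so far (assumes decisions are logged with the
# words ALLOW / DENY, matching the prompt above).
grep -c "ALLOW" ~/.nemoclaw/policies.log
```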

This is the whole safety model: the agent runs autonomously 24/7, but its reach is gated by your explicit grants.

08

DAY-2 OPERATIONS

DAILY DRIVER

nemoclaw status
nemoclaw start
nemoclaw stop
nemoclaw restart
nemoclaw logs <component>
nemoclaw update

CLEAN UNINSTALL

nemoclaw stop
nemoclaw uninstall --purge

# Path B only:
ollama rm nemotron3-super:120b

--purge removes sandbox volumes and agent memory. Back up anything worth keeping first.
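A minimal backup sketch before running `--purge`. The `~/.nemoclaw` directory is where the guide keeps the policy ledger; confirm your install keeps agent memory there too:

```shell
# Snapshot all local NemoClaw state (policies, config, agent memory)
# into a dated archive in the current directory.
tar czf "nemoclaw-backup-$(date +%F).tar.gz" -C "$HOME" .nemoclaw
```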

// NEXT STEP

WIRE IT TO WEB3CLAW

Once the agent is running, point it at the Web3Claw MCP server. It can read on-chain state, call matrix contracts, and earn alongside you — autonomously, with your policy rules as guardrails.

Adapted from the NVIDIA Developer Blog. For Web3Claw members.