ML COMPUTATION LAYER

Score, Rank, Build Paths. No LLM Required.

SHGAT attention networks score tool relevance across a hypergraph hierarchy. Multi-level message passing, K-head attention, zero LLM calls. Deterministic. Observable. Runs on your hardware.

644 Nodes indexed
86.3% Hit@3
21ms Score latency

From Intent to Ranked Tools

One model, one pipeline. SHGAT scores tool relevance across the full hierarchy, then the DAG executor runs the top-ranked tools.
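The handoff is simple enough to sketch. Below is a minimal, hypothetical TypeScript outline of that score-then-execute flow; the names (rankTools, shgat, dagExecutor) are illustrative, not the actual API.

```ts
// Hypothetical sketch of the score -> rank -> execute pipeline.

interface ToolScore {
  toolId: string;
  score: number; // SHGAT relevance score
}

// Sort tool scores descending and keep the top k for the DAG executor.
function rankTools(scores: ToolScore[], k = 3): ToolScore[] {
  return [...scores].sort((a, b) => b.score - a.score).slice(0, k);
}

// const ranked = rankTools(shgat.score(intentEmbedding));
// await dagExecutor.run(ranked.map((t) => t.toolId));
```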

SHGAT-TF

SuperHyperGraph Attention Networks

Why a hypergraph? Regular graphs model pairwise relations (tool A calls tool B). Hypergraphs model N-to-N: one composite groups multiple leaves, one leaf belongs to multiple composites. This captures the real structure of agentic tool ecosystems.
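A minimal sketch of that N-to-N structure in TypeScript; the identifiers and example tools are illustrative, not part of SHGAT-TF.

```ts
// A hyperedge (composite) connects N leaves, and a leaf may belong
// to M composites -- N-to-N, which a pairwise graph cannot express.

interface Hypergraph {
  leaves: Set<string>;                  // L0 tools
  composites: Map<string, Set<string>>; // composite id -> member leaf ids
}

// Which composites contain a given leaf?
function compositesOf(g: Hypergraph, leaf: string): string[] {
  return [...g.composites]
    .filter(([, members]) => members.has(leaf))
    .map(([id]) => id);
}

const g: Hypergraph = {
  leaves: new Set(["fetch_url", "parse_html", "summarize"]),
  composites: new Map([
    ["web_scrape", new Set(["fetch_url", "parse_html"])],
    ["research", new Set(["fetch_url", "summarize"])],
  ]),
};
compositesOf(g, "fetch_url"); // ["web_scrape", "research"] -- one leaf, two composites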


K-Head Attention (16 × 64D)

Each head captures a different relevance signal — co-occurrence, recency, error recovery, success rates. Heads are combined via learned fusion weights.
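A rough sketch of how per-head scores could fuse, assuming 16 heads of 64 dimensions each. The projections and fusion weights appear as plain arrays here; in SHGAT-TF they would be learned parameters.

```ts
function dot(a: Float32Array, b: Float32Array): number {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += a[i] * b[i];
  return s;
}

// headQ[h], headK[h]: 64-dim projections of the intent and the tool for head h.
// fusion: learned per-head weights (length 16).
function scoreTool(
  headQ: Float32Array[],
  headK: Float32Array[],
  fusion: Float32Array,
): number {
  let score = 0;
  for (let h = 0; h < fusion.length; h++) {
    // Scaled dot-product per head, as in standard attention.
    score += fusion[h] * (dot(headQ[h], headK[h]) / Math.sqrt(64));
  }
  return score;
}
```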


Multi-Level Message Passing

L0: 218 leaves (tools). L1: 26 composites. L2: meta-composites. Context propagates bottom-up then top-down, so a leaf inherits relevance from sibling leaves it has never been paired with, through the composites they share.
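A simplified sketch of the two passes over one level of composites. SHGAT-TF uses attention-weighted aggregation; a plain mean and a fixed blend factor stand in here.

```ts
type Scores = Map<string, number>;

// Bottom-up then top-down propagation over one composite level.
function propagate(
  children: Map<string, string[]>, // composite id -> child node ids
  scores: Scores,                  // leaf scores, mutated in place
  alpha = 0.5,                     // how much a child inherits from its parent
): void {
  // Bottom-up: a composite's score is the mean of its children's scores.
  for (const [p, kids] of children) {
    const mean = kids.reduce((s, c) => s + (scores.get(c) ?? 0), 0) / kids.length;
    scores.set(p, mean);
  }
  // Top-down: children blend in their parent's score, so a leaf picks up
  // relevance from siblings it has never co-occurred with.
  for (const [p, kids] of children) {
    for (const c of kids) {
      scores.set(c, (1 - alpha) * (scores.get(c) ?? 0) + alpha * scores.get(p)!);
    }
  }
}
```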


InfoNCE Contrastive Loss

Temperature-annealed training (0.10 → 0.06) with hard negatives and prioritized experience replay. Hit@3 reaches 86.3% (see the benchmark below).
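For reference, a plain-TypeScript sketch of the InfoNCE objective and a linear temperature schedule. The real training loop runs in TensorFlow, and the schedule shape is an assumption.

```ts
// InfoNCE: -log softmax of the positive's similarity against hard negatives,
// with similarities scaled by a temperature annealed from 0.10 to 0.06.
function infoNCE(
  posSim: number,      // similarity to the correct tool
  negSims: number[],   // similarities to hard negatives
  temperature: number, // annealed 0.10 -> 0.06
): number {
  const logits = [posSim, ...negSims].map((s) => s / temperature);
  const max = Math.max(...logits);           // subtract max for numerical stability
  const exps = logits.map((l) => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return -Math.log(exps[0] / sum);
}

// Linear anneal over training steps (illustrative schedule).
const temp = (step: number, total: number) =>
  0.10 + (0.06 - 0.10) * Math.min(1, step / total);
```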


Training Included

SHGAT-TF trains from production traces — no external service, no GPU required. libtensorflow FFI runs natively via Deno.dlopen. Self-contained.
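A minimal Deno FFI sketch against the libtensorflow C API. TF_Version is a real symbol in that library; the library filename is platform-dependent and illustrative.

```ts
// Run with: deno run --allow-ffi demo.ts (older Deno also needs --unstable-ffi)

const lib = Deno.dlopen("libtensorflow.so", {
  TF_Version: { parameters: [], result: "pointer" },
});

const ptr = lib.symbols.TF_Version();
if (ptr !== null) {
  // Read the returned C string, e.g. "2.15.0".
  console.log("libtensorflow", new Deno.UnsafePointerView(ptr).getCString());
}
lib.close();
```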

Numbers, Not Promises

Benchmarked on 245 nodes (218 leaves + 26 composites + 1 root). All metrics from production traces.

SHGAT-TF
Hit@1 56.2%
Hit@3 86.3%
MRR 0.705
Leaves (L0) 218
Composites (L1) 26
Attention heads 16 × 64D
Hierarchy levels 3 (L0 → L1 → L2)
Score latency 21ms