# TX-M-72B

Mesh-augmented model. Runs across the TARX network.

TX-M-72B is TARX's flagship distributed model, designed to run across the mesh network of TARX users. It combines local inference with network-augmented capability, so a device with as little as 8 GB of RAM can draw on the full 72B-parameter model.
## Model Details
| Property | Value |
|---|---|
| Parameters | 72B |
| Architecture | Mesh-distributed |
| Minimum Local RAM | 8 GB (with mesh) |
| Context Length | 128K tokens |
| License | Apache 2.0 |
## How It Works
TX-M-72B uses TARX's mesh network to distribute inference across multiple nodes:
- Local Processing: Your device handles initial processing and context
- Mesh Augmentation: Complex reasoning is distributed across the network
- Privacy Preserved: Conversation content is encrypted; only embeddings are shared
- Automatic Fallback: Works offline with reduced capability
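The routing behavior described above can be sketched in a few lines. This is an illustrative sketch only: `route_query`, the complexity score, and the threshold are hypothetical names and values, not the actual TARX internals.

```python
def route_query(query_complexity: float, mesh_nodes_available: int) -> str:
    """Illustrative routing decision for a mesh-augmented model.

    query_complexity is an assumed 0.0-1.0 score; the 0.7 cutoff for
    "complex reasoning" is a placeholder, not a documented TARX value.
    """
    COMPLEXITY_THRESHOLD = 0.7
    if query_complexity < COMPLEXITY_THRESHOLD:
        return "local"            # simple queries stay on-device
    if mesh_nodes_available > 0:
        return "mesh"             # complex reasoning distributed across nodes
    return "local-reduced"        # offline: works with reduced capability
```

The key property the sketch captures is the automatic fallback: a complex query prefers the mesh, but still runs locally (with reduced capability) when no nodes are reachable.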
## Capabilities

- ✅ Everything TX-16G does, plus:
- ✅ 128K context window
- ✅ Expert-level reasoning
- ✅ Complex code generation
- ✅ Multi-document synthesis
- ✅ Research-grade analysis
## Requirements
| Requirement | Value |
|---|---|
| TARX Desktop | v1.0.0+ |
| Mesh Network | Enabled |
| Internet | Required for mesh features |
| Local RAM | 8 GB minimum |
## Usage

### With TARX Desktop

TX-M-72B is automatically available when the mesh network is enabled:

Settings → Network → Enable Mesh
The model activates automatically for complex queries when mesh nodes are available.
### API Access

```python
# TX-M-72B is accessed through the standard TARX API;
# mesh routing is handled automatically.
from tarx import Client

client = Client()
response = client.chat(
    model="tx-m-72b",
    messages=[{"role": "user", "content": "Analyze this research paper..."}],
)
```
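Since mesh features require an internet connection, callers may want to fall back to the fully local TX-16G model when the mesh is unreachable. The sketch below assumes a `ConnectionError`-style failure; `chat_with_fallback` is a hypothetical helper, not part of the TARX API, and the actual exception types the `tarx` client raises are not documented here.

```python
def chat_with_fallback(chat_fn, messages,
                       preferred="tx-m-72b", fallback="tx-16g"):
    """Try the mesh-augmented model first; fall back to the local model.

    chat_fn is any callable with a (model=..., messages=...) signature,
    e.g. a TARX client's chat method.
    """
    try:
        return chat_fn(model=preferred, messages=messages)
    except ConnectionError:
        # Mesh unreachable: degrade gracefully to the offline model.
        return chat_fn(model=fallback, messages=messages)
```

This keeps the fallback policy in application code, where the choice of models and the definition of "unreachable" can be adjusted without touching call sites.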
## Mesh Network
TX-M-72B leverages the TARX mesh network:
- Distributed Computing: Inference spread across participating nodes
- Privacy First: Zero-knowledge proofs verify computation without exposing data
- Incentivized: Mesh contributors earn TARX tokens
- Resilient: No single point of failure
## Performance
| Benchmark | TX-M-72B | GPT-4 | Claude 3.5 |
|---|---|---|---|
| MMLU | TBD | 86.4 | 88.7 |
| HumanEval | TBD | 67.0 | 92.0 |
| MATH | TBD | 42.5 | 71.1 |
Benchmarks in progress; results expected Q1 2026.
## Comparison
| Feature | TX-16G | TX-M-72B |
|---|---|---|
| Parameters | 14B | 72B |
| Context | 32K | 128K |
| Runs Offline | ✅ Full | ⚠️ Reduced |
| Mesh Required | ❌ | ✅ For full capability |
| Best For | General use | Complex reasoning |
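The comparison table reduces to a simple selection rule. The sketch below is illustrative only: `pick_model` is a hypothetical helper, and the context limits come from the table above (TX-16G: 32K tokens).

```python
def pick_model(prompt_tokens: int, mesh_enabled: bool,
               complex_task: bool) -> str:
    """Choose between TX-16G and TX-M-72B per the comparison table."""
    if prompt_tokens > 32_000:          # beyond TX-16G's 32K context
        return "tx-m-72b"
    if complex_task and mesh_enabled:   # mesh needed for full 72B capability
        return "tx-m-72b"
    return "tx-16g"                     # general use, fully offline
```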
## Links

Built by TARX | tarx.com