TX-M-72B

Mesh-augmented model. Runs across the TARX network.

TX-M-72B is TARX's flagship distributed model, designed to run across the mesh network formed by TARX users' devices. It combines local inference with network-augmented computation to reach performance beyond what local hardware alone provides.

Model Details

| Property | Value |
|---|---|
| Parameters | 72B |
| Architecture | Mesh-distributed |
| Minimum Local RAM | 8 GB (with mesh) |
| Context Length | 128K tokens |
| License | Apache 2.0 |

How It Works

TX-M-72B uses TARX's mesh network to distribute inference across multiple nodes:

  1. Local Processing: Your device handles initial processing and context
  2. Mesh Augmentation: Complex reasoning is distributed across the network
  3. Privacy Preserved: Conversation content is encrypted; only embeddings are shared
  4. Automatic Fallback: Works offline with reduced capability
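The four steps above can be sketched as a simple routing function. This is a minimal illustration only; the function name, signature, and return strings are assumptions, not actual TARX internals:

```python
def route_query(prompt: str, mesh_nodes: list[str]) -> str:
    """Toy illustration of steps 1-4: local context first, mesh when available."""
    # Step 1: local processing always happens on-device.
    context = prompt.strip()
    # Steps 2-3: with reachable nodes, heavy reasoning is distributed
    # (in the real system only encrypted embeddings would leave the device).
    if mesh_nodes:
        return f"mesh[{len(mesh_nodes)} nodes]: {context}"
    # Step 4: automatic fallback to local-only inference, reduced capability.
    return f"local-only: {context}"
```

The key design point the steps describe is that the local path always works; the mesh only augments it when nodes are reachable.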

Capabilities

  • βœ… Everything TX-16G does, plus:
  • βœ… 128K context window
  • βœ… Expert-level reasoning
  • βœ… Complex code generation
  • βœ… Multi-document synthesis
  • βœ… Research-grade analysis

Requirements

| Requirement | Value |
|---|---|
| TARX Desktop | v1.0.0+ |
| Mesh Network | Enabled |
| Internet | Required for mesh features |
| Local RAM | 8 GB minimum |

Usage

With TARX Desktop

TX-M-72B is automatically available when the mesh network is enabled:

Settings β†’ Network β†’ Enable Mesh

The model activates automatically for complex queries when mesh nodes are available.
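As a rough mental model of this activation rule, a heuristic like the following could decide which model serves a query. The threshold and the "complexity" test are illustrative assumptions; the actual routing logic is not documented:

```python
def pick_model(prompt: str, mesh_available: bool) -> str:
    # Toy heuristic: treat long prompts as "complex" purely for illustration.
    is_complex = len(prompt.split()) > 200
    if is_complex and mesh_available:
        return "tx-m-72b"   # mesh-augmented model for complex queries
    return "tx-16g"         # local model otherwise
```

Note that the fallback model is always the local one, consistent with the offline-with-reduced-capability behavior described above.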

API Access

```python
# TX-M-72B is accessed through the standard TARX API.
# Mesh routing is handled automatically.

from tarx import Client

client = Client()
response = client.chat(
    model="tx-m-72b",
    messages=[{"role": "user", "content": "Analyze this research paper..."}]
)
```

Mesh Network

TX-M-72B leverages the TARX mesh network:

  • Distributed Computing: Inference spread across participating nodes
  • Privacy First: Zero-knowledge proofs verify computation without exposing data
  • Incentivized: Mesh contributors earn TARX tokens
  • Resilient: No single point of failure
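The resilience claim can be pictured as dispatch-with-retry across nodes: if any one node fails, another is tried, and only when every node is unreachable does the system fall back to local inference. A toy sketch, in which the node list, error types, and `send` callback are assumptions rather than TARX code:

```python
import random

def dispatch(task: str, nodes: list[str], send) -> str:
    """Try mesh nodes in random order; any single node failure is tolerated."""
    for node in random.sample(nodes, len(nodes)):
        try:
            return send(node, task)  # succeed on the first healthy node
        except ConnectionError:
            continue                 # no single point of failure
    # Every node failed: signal the caller to fall back to local inference.
    raise RuntimeError("no mesh node reachable; falling back to local mode")
```

Randomizing the node order is one simple way to avoid depending on any fixed node, which is the property the "no single point of failure" bullet describes.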

Performance

| Benchmark | TX-M-72B | GPT-4 | Claude 3.5 |
|---|---|---|---|
| MMLU | TBD | 86.4 | 88.7 |
| HumanEval | TBD | 67.0 | 92.0 |
| MATH | TBD | 42.5 | 71.1 |

Benchmarks in progress; results expected Q1 2026.

Comparison

| Feature | TX-16G | TX-M-72B |
|---|---|---|
| Parameters | 14B | 72B |
| Context | 32K | 128K |
| Runs Offline | βœ… Full | βœ… Reduced |
| Mesh Required | ❌ | βœ… For full capability |
| Best For | General use | Complex reasoning |

Built by TARX | tarx.com
