Nebula-S-3B

Nebula-S-3B is an internal reasoning model package with custom runtime components.

This package intentionally does not include upstream lineage, source training records, or private provenance. Those records are maintained separately in restricted internal release files.

Contents

  • core/: model weights, tokenizer, and generation configuration
  • runtime_weights.safetensors: runtime weight artifact
  • modeling_nebula.py: local runtime loader
  • nebula_runtime.py: import-friendly loader alias
  • release_metadata.json: neutral package metadata
  • release_manifest.internal.json: file checksums for this release

Install

python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

Smoke test

Run this from inside the extracted model directory:

python modeling_nebula.py .
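The smoke test above exercises the full loader. If you only want to sanity-check the runtime weight artifact itself, the safetensors container format can be inspected directly: the file begins with an 8-byte little-endian header length, followed by that many bytes of JSON describing each tensor. The sketch below is a generic illustration, independent of the Nebula loaders; the tiny synthetic file it builds is for demonstration only, not a real checkpoint.

```python
import json
import struct
import tempfile

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file.

    Layout: 8 bytes (little-endian unsigned header length), then that
    many bytes of JSON, then the raw tensor data.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Demonstration on a minimal synthetic file (not a real checkpoint).
header = {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
payload = json.dumps(header).encode()
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(payload)) + payload + b"\x00" * 8)
    tmp_path = f.name

parsed = read_safetensors_header(tmp_path)
print(sorted(parsed))  # tensor names found in the header
```

A header that parses cleanly and lists the expected tensor names is a cheap first check before the heavier strict runtime-weight validation described later in this README.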

Local usage

from nebula_runtime import load_model

model, tokenizer = load_model("./Nebula-S-3B")

messages = [{"role": "user", "content": "Solve: what is 2+2?"}]

if getattr(tokenizer, "chat_template", None):
    prompt = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=False,
    )
else:
    prompt = "User: Solve: what is 2+2?\nAssistant:"

inputs = tokenizer(
    prompt,
    add_special_tokens=False,
    return_tensors="pt",
).to(next(model.parameters()).device)

text = model.generate(
    inputs["input_ids"],
    inputs["attention_mask"],
    tokenizer,
    max_new_tokens=512,
    temperature=0,
)

print(text)

Creating a tuned successor release

This downloadable package is an inference artifact. To create a tuned successor release, use the approved restricted training workspace rather than modifying this folder in place.

Recommended internal flow:

  1. Create a new release ID, for example nebula_s_3b_v0_1_1.
  2. Add approved examples or correction data to the internal training dataset.
  3. Train a candidate runtime artifact in the restricted training environment.
  4. Compare the candidate against this release on fixed evaluation prompts and tasks.
  5. Repackage the candidate with the internal packaging tool.
  6. Run package validation: smoke load, leak scan, strict runtime-weight validation, checksum manifest, and license/notice review.
  7. Promote only the sanitized downloadable package.
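The checksum step in the validation list can be done with a few lines of standard-library Python. This is a generic sketch, not the internal packaging tool; it assumes the manifest maps relative file paths to SHA-256 hex digests, which is one common layout for a file like release_manifest.internal.json but is not confirmed here.

```python
import hashlib
import json
import pathlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large weight files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_manifest(root, manifest_name="release_manifest.internal.json"):
    """Return (path, expected, actual) mismatches; an empty list means OK."""
    root = pathlib.Path(root)
    manifest = json.loads((root / manifest_name).read_text())
    mismatches = []
    for rel_path, expected in manifest.items():
        actual = sha256_of(root / rel_path)
        if actual != expected:
            mismatches.append((rel_path, expected, actual))
    return mismatches
```

Run it against a candidate package before promotion; any non-empty result should block the release.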

Do not upload private provenance, source training records, optimizer state, source data paths, or build logs with this package.

License and Access Restrictions

This model package is provided under the Decompute Internal Evaluation License.

Use is limited to internal evaluation or other expressly authorized evaluation. Commercial use, redistribution, sublicensing, hosted inference, reverse engineering, fine-tuning, distillation, derivative model creation, benchmark publication, and use in competing products are prohibited without prior written permission from Decompute Inc.

Access to this repository does not grant ownership or any implied commercial rights.

Evaluation Results

The following results are from Decompute internal evaluations of Nebula-S-3B.

Benchmark        Score
GPQA             86.85
HMMT Nov 2025    80.00
GSM8K            92.00
MMLU-Pro         83.00

These scores are reported from internal evaluation runs. Evaluation settings, prompts, decoding parameters, and extraction methods may affect comparability with public leaderboard results.
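As a concrete illustration of why extraction methods affect comparability: a naive "last number in the completion" extractor and a stricter "answer marker" extractor can score the very same output differently. The helpers below are hypothetical and are not the extractors used in the internal runs.

```python
import re

def extract_last_number(text):
    """Naive extractor: take the last number anywhere in the completion."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", text)
    return nums[-1] if nums else None

def extract_after_marker(text, marker="The answer is"):
    """Stricter extractor: accept only a number right after a fixed marker."""
    m = re.search(re.escape(marker) + r"\s*(-?\d+(?:\.\d+)?)", text)
    return m.group(1) if m else None

completion = "2 + 2 equals 4, so the final result is 4."
print(extract_last_number(completion))   # "4": counted as correct
print(extract_after_marker(completion))  # None: marker absent, counted wrong
```

The same model completion is marked correct by one extractor and incorrect by the other, so scores are only comparable when the extraction method is held fixed.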
