# BWSK OPT-350M

OPT-350M (331M parameters) trained in six variants (3 BWSK modes × 2 experiments: fine-tune and from-scratch) on WikiText-2, with full convergence training and early stopping.

All model weights, configs, and training results are consolidated in this single repository.

## What is BWSK?

BWSK is a framework that classifies every neural network operation as S-type (information-preserving, reversible, coordination-free) or K-type (information-erasing, synchronization point) using combinator logic. This classification enables reversible backpropagation through S-phases to save memory, and CALM-based parallelism analysis.
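
The reversibility claim for S-type operations can be illustrated with an additive coupling block, the standard building block of reversible networks. This is a minimal stand-alone sketch, not BWSK's actual implementation: because the inputs can be reconstructed exactly from the outputs, backpropagation through such a block does not need to store its activations, which is the source of the memory savings reported below.

```python
# Reversible additive coupling (an S-type pattern):
#   forward: y1 = x1 + f(x2); y2 = x2 + g(y1)
# The inverse recovers (x1, x2) exactly from (y1, y2), so activations
# can be recomputed during the backward pass instead of being stored.
def coupling_forward(x1, x2, f, g):
    y1 = x1 + f(x2)
    y2 = x2 + g(y1)
    return y1, y2

def coupling_invert(y1, y2, f, g):
    x2 = y2 - g(y1)
    x1 = y1 - f(x2)
    return x1, x2
```

Any functions `f` and `g` may be used (including non-invertible ones), since the coupling structure itself guarantees invertibility.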

## Model Overview

| Property | Value |
|---|---|
| Base Model | facebook/opt-350m |
| Architecture | Transformer (causal_lm) |
| Parameters | 331M |
| Dataset | WikiText-2 |
| Eval Metric | Perplexity |

## S/K Classification

| Type | Ratio |
|---|---|
| S-type (information-preserving) | 89.1% |
| K-type (information-erasing) | 10.9% |
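
A ratio like the one above is just a tally over the model's operation graph. The sketch below shows the idea with illustrative operation names; these sets are hypothetical examples, not BWSK's actual taxonomy or API.

```python
# Hypothetical S/K tally over a list of ops. The set contents are
# illustrative stand-ins for a real per-operation classification.
S_TYPE = {"residual_add", "permutation", "orthogonal_rotate", "concat"}
K_TYPE = {"softmax", "mean_reduce", "relu", "argmax"}

def sk_ratio(ops):
    """Return (S fraction, K fraction) over the classified ops."""
    s = sum(1 for op in ops if op in S_TYPE)
    k = sum(1 for op in ops if op in K_TYPE)
    total = s + k
    return s / total, k / total
```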

## Fine-tune Results

| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 2.5453 | 15.94 | 15.58 | 8.2 GB | 9.7m | 3 |
| BWSK Analyzed | 2.9235 | 15.97 | 15.65 | 8.2 GB | 13.1m | 4 |
| BWSK Reversible | 2.5020 | 15.92 | 15.54 | 7.4 GB | 11.4m | 3 |

Memory savings (reversible vs conventional): 9.2%
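
For reference, perplexity is the exponential of the average per-token negative log-likelihood (cross-entropy, in nats) on the evaluation set. Note that Final Loss in the table is the training loss, so `exp(Final Loss)` is not expected to equal the validation perplexity.

```python
import math

def perplexity(avg_nll: float) -> float:
    """Perplexity from average per-token cross-entropy in nats."""
    return math.exp(avg_nll)

# Implied val cross-entropy for the reversible fine-tune run:
# ln(15.92) ~ 2.768 nats/token.
```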

## From Scratch Results

| Mode | Final Loss | Val Perplexity | Test Perplexity | Peak Memory | Time | Epochs |
|---|---|---|---|---|---|---|
| Conventional | 7.6294 | 1714.18 | 1784.14 | 8.2 GB | 19.6m | 5 |
| BWSK Analyzed | 7.4390 | 1716.11 | 1786.86 | 8.2 GB | 19.6m | 5 |
| BWSK Reversible | 7.0635 | 1718.39 | 1783.87 | 7.4 GB | 23.0m | 5 |

Memory savings (reversible vs conventional): 9.3%

## Repository Structure

```
├── README.md
├── results.json
├── finetune-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── finetune-bwsk-reversible/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-conventional/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
├── scratch-bwsk-analyzed/
│   ├── model.safetensors
│   ├── config.json
│   └── training_results.json
└── scratch-bwsk-reversible/
    ├── model.safetensors
    ├── config.json
    └── training_results.json
```

## Usage

Load a specific variant:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned conventional variant
model = AutoModelForCausalLM.from_pretrained(
    "tzervas/bwsk-opt-350m", subfolder="finetune-conventional"
)
tokenizer = AutoTokenizer.from_pretrained(
    "tzervas/bwsk-opt-350m", subfolder="finetune-conventional"
)

# Load the from-scratch BWSK reversible variant
model = AutoModelForCausalLM.from_pretrained(
    "tzervas/bwsk-opt-350m", subfolder="scratch-bwsk-reversible"
)
```

## Training Configuration

| Setting | Value |
|---|---|
| Optimizer | AdamW |
| LR (fine-tune) | 2e-05 |
| LR (from-scratch) | 1e-04 |
| LR Schedule | Cosine with warmup |
| Max Grad Norm | 1.0 |
| Mixed Precision | AMP (float16) |
| Early Stopping Patience | 3 |
| Batch Size | 2 |
| Sequence Length | 512 |
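
The cosine-with-warmup schedule can be sketched as follows; this is a minimal stand-alone version for illustration (the actual runs presumably use a library scheduler such as Hugging Face's `get_cosine_schedule_with_warmup`):

```python
import math

def cosine_warmup_lr(step: int, total_steps: int, warmup_steps: int,
                     base_lr: float) -> float:
    """Linear warmup from 0 to base_lr, then cosine decay to 0."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For example, with the fine-tune LR of 2e-05 the schedule starts at 0, peaks at 2e-05 when warmup ends, and decays back toward 0 at the final step.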


## Citation

```bibtex
@software{zervas2026bwsk,
  author = {Zervas, Tyler},
  title = {BWSK: Combinator-Typed Neural Network Analysis},
  year = {2026},
  url = {https://github.com/tzervas/ai-s-combinator},
}
```

## License

MIT
