# SmolLM2-135M LoRA Adapter for GSM8K
This is a LoRA adapter for HuggingFaceTB/SmolLM2-135M fine-tuned on the GSM8K dataset for mathematical reasoning.
## Model Description
- Base Model: HuggingFaceTB/SmolLM2-135M
- Training Method: Standard (baseline) LoRA fine-tuning
- Dataset: GSM8K (Grade School Math 8K)
- Task: Mathematical word problem solving
- Exact Match Accuracy: 2.15%
## Training Details

The adapter was trained on GSM8K with a standard (baseline) LoRA fine-tuning setup. The configuration is summarized below; a sketch of the corresponding PEFT setup follows the list.

### Training Configuration
- Method: LoRA (Low-Rank Adaptation)
- Rank: 16
- Alpha: 32
- Target Modules: q_proj, k_proj, v_proj, o_proj
- Dropout: 0.1
- Epochs: 3
- Batch Size: 4 (gradient accumulation of 4, effective batch size 16)
- Learning Rate: 3e-4
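
For reference, the hyperparameters above map onto a PEFT setup roughly as follows. This is a minimal sketch, not the actual training script; the output path and any arguments not listed in the table above are illustrative assumptions.

```python
# Sketch of the LoRA configuration described above (not the exact training script).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

base_model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M")

lora_config = LoraConfig(
    r=16,                      # rank
    lora_alpha=32,             # alpha
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)

# Hyperparameters matching the list above; other arguments are placeholders.
training_args = TrainingArguments(
    output_dir="smollm2-gsm8k-baseline",    # placeholder path
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,           # effective batch size 16
    learning_rate=3e-4,
)
```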
## Usage

### Loading the Adapter
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-135M",
    device_map="auto",
    torch_dtype="auto"
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "CrystalRaindropsFall/smollm2-gsm8k-baseline")

# Inference
prompt = (
    "Question: Janet's ducks lay 16 eggs per day. She eats three for breakfast every morning "
    "and bakes muffins for her friends every day with four. She sells the remainder at the "
    "farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every "
    "day at the farmers' market?\nAnswer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
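
For repeated inference, the adapter can optionally be folded into the base weights so the PEFT wrapper is no longer needed. This is a brief sketch using PEFT's standard `merge_and_unload()`, assuming the `model` and `tokenizer` loaded above; the save directory is a placeholder and this step is not required for the usage shown.

```python
# Optionally merge the LoRA weights into the base model for standalone inference.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("smollm2-gsm8k-merged")   # placeholder path
tokenizer.save_pretrained("smollm2-gsm8k-merged")
```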
### Using with Pipeline
```python
from transformers import AutoTokenizer, pipeline
from peft import AutoPeftModelForCausalLM

# Load the base model with the adapter already applied
model = AutoPeftModelForCausalLM.from_pretrained(
    "CrystalRaindropsFall/smollm2-gsm8k-baseline",
    device_map="auto"
)

# Load the tokenizer from the base model
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M")

# Create pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Generate
result = pipe(
    "Question: A robe takes 2 bolts of blue fiber and half that much white fiber. "
    "How many bolts in total does it take?\nAnswer:"
)
print(result[0]["generated_text"])
```
## Performance

Evaluated on 512 samples from the GSM8K test set (the exact-match metric is sketched below the table):

| Metric | Score |
|---|---|
| Exact Match | 2.15% |
| Format Correct | 100% |
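
Exact match is computed by comparing the final number in the generated answer with the reference answer. The snippet below is a minimal sketch of that metric; the regex-based extraction and the `#### <number>` suffix convention for references follow the standard GSM8K release and are assumptions about the evaluation script, which is not included in this repo.

```python
import re

def extract_final_number(text: str):
    """Return the last number that appears in a string, or None."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return matches[-1] if matches else None

def exact_match(prediction: str, reference: str) -> bool:
    # GSM8K references end with "#### <answer>"; compare the final numbers.
    gold = extract_final_number(reference.split("####")[-1])
    pred = extract_final_number(prediction)
    if gold is None or pred is None:
        return False
    return float(pred) == float(gold)
```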
## Limitations
- Trained on grade school level math problems
- May struggle with problems requiring external knowledge
- Performance depends on problem complexity and wording
- Best used with the base model's standard generation settings
## Acknowledgments
- Base model: HuggingFaceTB/SmolLM2-135M
- Dataset: GSM8K by Cobbe et al.
- Training framework: Hugging Face PEFT
## License
Apache 2.0 (following the base model's license)