whole-model

Fine-tuned google/gemma-3-1b-pt model from Gemma Garage

This model was fine-tuned using Gemma Garage, a platform for fine-tuning Gemma models with LoRA.

Model Details

  • Base Model: google/gemma-3-1b-pt
  • Fine-tuning Method: LoRA (Low-Rank Adaptation)
  • Training Platform: Gemma Garage
  • Fine-tuned on: 2025-07-25
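
LoRA (Low-Rank Adaptation) freezes the base weights and trains a pair of small low-rank matrices whose product is added to each adapted weight matrix. A minimal NumPy sketch of the idea — the shapes, rank, and scaling factor below are illustrative, not taken from this model's actual configuration:

```python
import numpy as np

# LoRA idea: instead of updating a full weight matrix W (d_out x d_in),
# train two small matrices B (d_out x r) and A (r x d_in) with r << min(d_out, d_in).
# The effective weight is W + (alpha / r) * (B @ A).
# All values here are illustrative assumptions, not this model's config.

d_out, d_in, r, alpha = 8, 8, 2, 4
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # trainable low-rank factor
B = np.zeros((d_out, r))            # zero-initialized, so training starts from W unchanged

W_effective = W + (alpha / r) * (B @ A)

# Trainable parameters per adapted matrix: r*(d_in + d_out) instead of d_in*d_out.
print(r * (d_in + d_out), "vs", d_in * d_out)
```

Because the adapter here is merged into the full model ("whole-model"), no separate adapter weights are needed at inference time.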

Usage

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("LucasFMartins/whole-model")
model = AutoModelForCausalLM.from_pretrained("LucasFMartins/whole-model")

# Generate text
inputs = tokenizer("Your prompt here", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Training Details

This model was fine-tuned using the Gemma Garage platform with the following configuration:

  • Request ID: 43a3a2fd-ada0-40f1-9a29-9f4050d94bcf
  • Training completed on: 2025-07-25 21:39:44 UTC

For more information about Gemma Garage, visit our GitHub repository.

  • Format: Safetensors
  • Model size: 0.7B params
  • Tensor types: F32, BF16, U8
