Model usage

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "FlameF0X/Qwen2-0.2B-pt"

# Load the tokenizer and model; device_map="auto" places the weights on the
# available GPU (or CPU), and torch_dtype="auto" keeps the checkpoint's dtype.
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)

# Tokenize the prompt and move the input tensors to the model's device.
prompt = "The future of Artificial Intelligence is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion; this is a base (pre-trained) model, so it continues
# the text rather than following chat-style instructions.
output_ids = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1
)

# Decode the full sequence (prompt + completion) back to text.
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(f"--- Base Model Completion ---\n{response}")
Safetensors · Model size: 0.2B params · Tensor type: F32
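
At F32 precision each parameter takes 4 bytes, so 0.2B parameters occupy roughly 0.2e9 × 4 ≈ 0.8 GB of weight memory; loading in float16 halves that to about 0.4 GB. A minimal sketch of a half-precision load (the dtype choice here is an assumption about your hardware, not a requirement of the checkpoint):

import torch
from transformers import AutoModelForCausalLM

# float16 weights: 0.2e9 params × 2 bytes ≈ 0.4 GB instead of ≈ 0.8 GB in F32.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    "FlameF0X/Qwen2-0.2B-pt",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)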

Model tree for FlameF0X/Qwen2-0.2B-pt

Finetunes: 1 model
Quantizations: 1 model

Datasets used to train FlameF0X/Qwen2-0.2B-pt