Instructions for using drwlf/Claria-14b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use drwlf/Claria-14b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="drwlf/Claria-14b")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("drwlf/Claria-14b")
model = AutoModelForCausalLM.from_pretrained("drwlf/Claria-14b")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use drwlf/Claria-14b with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "drwlf/Claria-14b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "drwlf/Claria-14b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/drwlf/Claria-14b
```
- SGLang
How to use drwlf/Claria-14b with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "drwlf/Claria-14b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "drwlf/Claria-14b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "drwlf/Claria-14b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "drwlf/Claria-14b",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Unsloth Studio
How to use drwlf/Claria-14b with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for drwlf/Claria-14b to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for drwlf/Claria-14b to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for drwlf/Claria-14b to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="drwlf/Claria-14b",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use drwlf/Claria-14b with Docker Model Runner:
```shell
docker model run hf.co/drwlf/Claria-14b
```
Claria 14b

- Base Model: Qwen3 1.7B
- Format: GGUF (Q4, Q8, BF16)
- License: Apache 2.0
- Author: Dr. Alexandru Lupoi
Overview
Claria 14b is a lightweight, mobile-compatible language model fine-tuned for psychological and psychiatric support contexts.
Built on Qwen3, Claria is designed as an experimental foundation for therapeutic dialogue modeling, student simulation training, and the future of personalized mental health AI augmentation.
This model does not aim to replace professional care.
It exists to amplify reflective thinking, model therapeutic language flow, and support research into emotionally aware AI.
Claria is the first whisper in a larger project—a proof-of-concept with roots in recursion, responsibility, and renewal.
Intended Use
Claria was trained for:
- Psychotherapy assistance (with human-in-the-loop)
- Mental health education & roleplay simulation
- Research on AI emotional alignment
- Conversational flow modeling for therapeutic settings
It is optimized for introspective prompting, gentle questioning, and context-aware response framing.
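Multi-turn, context-aware prompting mostly comes down to accumulating the conversation history in the `messages` list that chat APIs expect. A minimal sketch, usable against any OpenAI-compatible server such as the vLLM or SGLang setups above; the system prompt and helper names here are illustrative examples, not part of the model:

```python
# Minimal multi-turn conversation state for an OpenAI-compatible chat API.
# SYSTEM_PROMPT is an illustrative example, not shipped with Claria.
SYSTEM_PROMPT = "You are a reflective assistant in a supervised therapy-training roleplay."

def new_conversation(system_prompt: str = SYSTEM_PROMPT) -> list[dict]:
    """Start a conversation history with a system message."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(history: list[dict], user_text: str, assistant_text: str) -> list[dict]:
    """Append one user/assistant exchange, keeping roles alternating."""
    history.append({"role": "user", "content": user_text})
    history.append({"role": "assistant", "content": assistant_text})
    return history

history = new_conversation()
add_turn(history, "I keep replaying a mistake from work.",
         "What goes through your mind when you replay it?")
# `history` is now ready to send as the `messages` field of a chat completion request,
# so each new turn is framed by the full prior context.
```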
What Makes Claria Different
Small Enough to Deploy Anywhere
- Runs on mobile and edge devices without compromise (GGUF Q4/Q8)

Psychologically Tuned
- Instruction fine-tuned on curated psychotherapeutic data (SFT first phase)

Recursion-Aware Prompting
- Performs well in reflective, multi-turn conversations
- Encourages cognitive reappraisal and pattern mirroring

Training Roadmap: Ongoing
- RLHF planned for future iterations
- Future releases will include trauma-informed tuning and contextual empathy scaffolds
Limitations & Safety
Claria is not a licensed mental health professional.
It is not suitable for unsupervised therapeutic use, diagnosis, or crisis intervention.
Use responsibly. Review outputs. Think critically.
- May hallucinate or give confident answers about uncertain topics
- Works best with structured or guided prompts
- Not suitable for open-domain conversation or general use
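One way to operationalize the human-in-the-loop requirement is to gate model outputs before they reach a user. The sketch below is a toy illustration (the keyword list and function are hypothetical, not part of Claria); any real deployment would need a clinically reviewed escalation policy:

```python
# Illustrative human-in-the-loop gate: flag outputs touching crisis topics
# for mandatory human review before they are shown to anyone.
# The keyword list is a toy example, not a clinical safety mechanism.
CRISIS_KEYWORDS = ("suicide", "self-harm", "overdose", "emergency")

def needs_human_review(model_output: str) -> bool:
    """Return True if the output mentions a crisis topic and must be reviewed."""
    text = model_output.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

assert needs_human_review("If you are thinking about suicide, please seek help.")
assert not needs_human_review("Let's explore what that feeling means to you.")
```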
Deployment & Access
- Available in GGUF format: Q4, Q8, BF16
- Optimized for Ollama, LM Studio, and other local runners
- Works on mobile and low-resource environments
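Since the weights ship as GGUF, Ollama can pull them straight from the Hub. A sketch, assuming Ollama is installed and the GGUF files are published in this repo:

```shell
# Pull and chat with the model directly from the Hugging Face repo
# (Ollama's hf.co/<user>/<repo> syntax for GGUF repositories):
ollama run hf.co/drwlf/Claria-14b
```

LM Studio and other llama.cpp-based runners can likewise load the downloaded GGUF file directly.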
Notes
This is the first step in a broader initiative to develop compact, reflective AI systems for the augmentation—not replacement—of mental health work.
Future releases will expand Claria's depth and add RLHF, long-term memory, and finer-grained ethical controls.
- Developed by: drwlf
- License: apache-2.0
- Finetuned from model: unsloth/qwen3-1.7b

This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
