# Constitutional AI - Deontological
A Constitutional AI model trained with a deontological ethical framework.
## Model Details
- Base Model: Mistral-7B-v0.1
- Training: Constitutional AI with critique and revision
- Ethics Framework: Deontological
- Model Size: ~13GB (full merged model)
## Training Process
- Start from the base Mistral-7B-v0.1 model
- Apply the Helpful-Mistral-7B (HM7B) adapter
- Run Constitutional AI training with deontological principles
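The critique-and-revision step above can be sketched as a simple loop: draft a response, critique it against each constitutional principle, revise, and keep the final (prompt, revision) pair for supervised fine-tuning. The helper functions and principles below are illustrative stand-ins, not this model's actual training code.

```python
# Hypothetical, simplified sketch of a Constitutional AI critique-and-revision
# pass. In practice, generate/critique/revise would each be model calls.

CONSTITUTION = [
    "Identify any part of the response that treats a person merely as a means.",
    "Rewrite the response so it respects universal rules and individual duties.",
]

def generate(prompt):
    # Stand-in for a draft completion from the helpful base model.
    return f"Draft answer to: {prompt}"

def critique(response, principle):
    # Stand-in for a model-written critique against one principle.
    return f"critique of draft under '{principle}'"

def revise(response, critique_text):
    # Stand-in for a model-written revision addressing the critique.
    return f"Revised response ({critique_text})"

def constitutional_ai_pass(prompts):
    """Build (prompt, final_revision) pairs for supervised fine-tuning."""
    dataset = []
    for prompt in prompts:
        response = generate(prompt)
        for principle in CONSTITUTION:
            c = critique(response, principle)
            response = revise(response, c)
        dataset.append((prompt, response))
    return dataset

pairs = constitutional_ai_pass(["Should I break a promise for a better outcome?"])
print(len(pairs))  # one (prompt, revision) training pair per input prompt
```

The resulting pairs are what the fine-tuning stage trains on, so the model internalizes the principles without needing them at inference time.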
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("0chanly/deontological-constitutional")
tokenizer = AutoTokenizer.from_pretrained("0chanly/deontological-constitutional")

# Prompt in the Human/Assistant format used during training
prompt = "Human: Should I prioritize personal happiness or moral duty?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Ethics Framework
- **Deontological** (this model): duty-based ethics; focuses on rules and principles
- **Consequentialist** (for contrast): outcome-based ethics; focuses on results and consequences