Constitutional AI - Deontological

A Constitutional AI model trained with a deontological ethical framework.

Model Details

  • Base Model: Mistral-7B-v0.1
  • Training: Constitutional AI with critique and revision
  • Ethics Framework: Deontological
  • Model Size: ~13GB (full merged model)

Training Process

  1. Start from the base Mistral-7B-v0.1 model
  2. Apply the Helpful-Mistral-7B (HM7B) adapter
  3. Run Constitutional AI training (critique and revision) with deontological principles
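The critique-and-revision step above can be sketched as a data-generation loop. This is an illustrative sketch only: `generate` is a placeholder for a real model call (e.g., the HM7B model), and the principle text is an example, not the actual constitution used in training.

```python
# Illustrative sketch of Constitutional AI critique-and-revision data generation.
# `generate` is a stub standing in for a real language-model call; the
# principle below is an example, not the training constitution.

DEONTOLOGICAL_PRINCIPLE = (
    "Identify ways the response violates duties such as honesty, "
    "promise-keeping, or respect for persons, regardless of outcomes."
)

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would query the language model.
    return f"[model output for: {prompt[:40]}...]"

def critique_and_revise(question: str, rounds: int = 2) -> dict:
    """Produce a revised response via repeated self-critique."""
    response = generate(f"Human: {question}\n\nAssistant:")
    for _ in range(rounds):
        # Ask the model to critique its own response against the principle...
        critique = generate(
            f"Critique this response against the principle: "
            f"{DEONTOLOGICAL_PRINCIPLE}\nResponse: {response}"
        )
        # ...then to rewrite the response so it addresses the critique.
        response = generate(
            f"Rewrite the response to address the critique.\n"
            f"Critique: {critique}\nOriginal: {response}"
        )
    # The (question, final response) pair becomes supervised training data.
    return {"prompt": question, "revised_response": response}

pair = critique_and_revise("Should I lie to spare someone's feelings?")
print(pair["revised_response"])
```

The revised pairs are then used to fine-tune the model, so the deontological principles shape its default behavior.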

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("0chanly/deontological-constitutional")
tokenizer = AutoTokenizer.from_pretrained("0chanly/deontological-constitutional")

# The model expects a "Human: ...\n\nAssistant:" prompt format
prompt = "Human: Should I prioritize personal happiness or moral duty?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 200 new tokens
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
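The Human/Assistant prompt format used above can be wrapped in a small helper to avoid formatting mistakes. This function is a convenience written for this card, not part of the transformers API or the model repository:

```python
def build_prompt(question: str) -> str:
    """Format a user question in the Human/Assistant style this model expects."""
    return f"Human: {question}\n\nAssistant:"

prompt = build_prompt("Should I prioritize personal happiness or moral duty?")
# Pass `prompt` to tokenizer(...) exactly as in the snippet above.
print(prompt)
```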

Ethics Framework

  • Deontological (this model): duty-based ethics, focused on rules and principles
  • Consequentialist: outcome-based ethics, focused on results and consequences
