YoseAli committed on
Commit 4583934 · verified · 1 Parent(s): ab4b945

Add comprehensive dataset documentation for deployment

Files changed (1): README.md (+227 -26)
README.md CHANGED
@@ -1,28 +1,229 @@
  ---
- dataset_info:
-   features:
-   - name: instruction
-     dtype: string
-   - name: input
-     dtype: string
-   - name: output
-     dtype: string
-   - name: source
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 778771415
-     num_examples: 761501
-   - name: test
-     num_bytes: 86678215
-     num_examples: 84612
-   download_size: 416615666
-   dataset_size: 865449630
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: test
-     path: data/test-*
  ---
  ---
+ license: mit
+ language:
+ - am
+ tags:
+ - amharic
+ - llm
+ - training
+ - ethiopia
+ - instruction-tuning
+ - african-languages
+ - deployment
+ - production
+ size_categories:
+ - 100K<n<1M
+ task_categories:
+ - text-generation
+ - question-answering
+ - text2text-generation
+ - conversational
+ pretty_name: "Amharic LLM Training Dataset"
  ---
+
+ # Amharic LLM Training Dataset
+
+ **A complete, production-ready Amharic dataset for large language model training and deployment.**
+
+ ## 🚀 Quick Start for Deployment
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the complete dataset
+ dataset = load_dataset("YoseAli/amharic-llm-training-data")
+
+ # Access splits
+ train_data = dataset["train"]  # 761,501 samples
+ test_data = dataset["test"]    # 84,612 samples
+
+ print(f"Training samples: {len(train_data):,}")
+ print(f"Test samples: {len(test_data):,}")
+ ```
+
+ ## 📊 Dataset Information
+
+ - **Total samples**: 846,113
+ - **Training samples**: 761,501
+ - **Test samples**: 84,612
+ - **Language**: Amharic (am)
+ - **Format**: Instruction-response pairs
+ - **Quality**: Curated and validated
+ - **Sources**: Multi-source compilation (Walia-LLM, AYA, M2Lingual, AmQA, Masakhane)
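As a quick sanity check, the split sizes above add up to the stated total and correspond to a roughly 90/10 train/test split; a minimal sketch using the published counts:

```python
# Split sizes as published on this dataset card
train_n, test_n = 761_501, 84_612
total = train_n + test_n

print(total)                     # -> 846113
print(round(test_n / total, 3))  # -> 0.1
```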
+
+ ## 🎯 Production Deployment
+
+ This dataset is optimized for:
+
+ - ✅ **LLM Fine-tuning**: Ready for transformer model training
+ - ✅ **Production Deployment**: Validated and production-ready
+ - ✅ **Scalable Training**: Supports distributed training
+ - ✅ **Quality Assured**: Curated from multiple high-quality sources
+
+ ## 💻 Usage Examples
+
+ ### Basic Loading
+ ```python
+ from datasets import load_dataset
+
+ # Load dataset
+ dataset = load_dataset("YoseAli/amharic-llm-training-data")
+ training_data = dataset["train"]
+
+ # Convert to a list of dicts for custom processing
+ train_list = training_data.to_list()
+ ```
+
+ ### Training Format
+ ```python
+ # Format raw samples into a single instruction-tuning prompt string.
+ # The optional `input` field is appended as extra context when present.
+ def format_for_training(example):
+     if 'instruction' in example and 'output' in example:
+         context = f"\n\n### Input:\n{example['input']}" if example.get('input') else ""
+         text = f"### Instruction:\n{example['instruction']}{context}\n\n### Response:\n{example['output']}"
+     elif 'text' in example:
+         text = example['text']
+     else:
+         text = str(example)
+     return {"text": text}
+
+ # Apply formatting to every split
+ formatted_dataset = dataset.map(format_for_training)
+ ```
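To see what the template produces, here is the same prompt layout applied to a toy example (pure Python; the Amharic strings are placeholders, not real dataset rows):

```python
# Toy sample in the dataset's instruction/output shape
example = {"instruction": "ሰላምታ ጻፍ", "output": "ሰላም!"}

text = f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"
print(text)
```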
+
+ ### Streaming for Large Scale
+ ```python
+ # Stream instead of downloading everything up front
+ dataset = load_dataset("YoseAli/amharic-llm-training-data", streaming=True)
+ train_stream = dataset["train"]
+
+ # Process in batches of 1,000 examples
+ for batch in train_stream.iter(batch_size=1000):
+     # Your training code here
+     pass
+ ```
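The streaming loop above yields one batch of examples at a time; the chunking pattern itself can be sketched in plain Python (toy data, no `datasets` dependency):

```python
def iter_batches(rows, batch_size):
    # Yield successive fixed-size chunks; the last one may be smaller
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

batches = list(iter_batches(list(range(10)), 4))
print([len(b) for b in batches])  # -> [4, 4, 2]
```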
+
+ ## 🏗️ Model Training Pipeline
+
+ ### 1. Data Preparation
+ ```python
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ # Load dataset and tokenizer
+ dataset = load_dataset("YoseAli/amharic-llm-training-data")
+ tokenizer = AutoTokenizer.from_pretrained("your-base-model")
+
+ # The raw columns are instruction/input/output, so build a single
+ # "text" column before tokenizing (see "Training Format" above)
+ def to_text(example):
+     return {"text": f"### Instruction:\n{example['instruction']}\n\n### Response:\n{example['output']}"}
+
+ dataset = dataset.map(to_text)
+
+ # Tokenize; pad dynamically at batch time rather than here
+ def tokenize_function(examples):
+     return tokenizer(examples["text"], truncation=True, max_length=1024)
+
+ tokenized_dataset = dataset.map(
+     tokenize_function, batched=True,
+     remove_columns=dataset["train"].column_names,
+ )
+ ```
+
+ ### 2. Training Setup
+ ```python
+ from transformers import (AutoModelForCausalLM, DataCollatorForLanguageModeling,
+                           Trainer, TrainingArguments)
+
+ # Load model
+ model = AutoModelForCausalLM.from_pretrained("your-base-model")
+
+ # For causal-LM fine-tuning, this collator pads each batch and
+ # copies input_ids into labels
+ data_collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)
+
+ # Training arguments
+ training_args = TrainingArguments(
+     output_dir="./amharic-model",
+     num_train_epochs=3,
+     per_device_train_batch_size=4,
+     gradient_accumulation_steps=4,
+     learning_rate=2e-5,
+     save_steps=500,
+     logging_steps=100,
+ )
+
+ # Create trainer
+ trainer = Trainer(
+     model=model,
+     args=training_args,
+     train_dataset=tokenized_dataset["train"],
+     eval_dataset=tokenized_dataset["test"],
+     data_collator=data_collator,
+ )
+
+ # Start training
+ trainer.train()
+ ```
+
+ ## 📋 Data Format
+
+ Each sample contains:
+
+ ```json
+ {
+   "instruction": "Task or question in Amharic",
+   "input": "Additional context (optional)",
+   "output": "Expected response in Amharic",
+   "source": "Original source dataset (when available)"
+ }
+ ```
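Before training, it can be worth dropping malformed rows; a minimal validity check against the four schema fields (field names are from the dataset schema; the sample values are placeholders):

```python
REQUIRED_FIELDS = ("instruction", "input", "output", "source")

def is_valid(example):
    # A row is usable if every schema field is present and is a string
    return all(isinstance(example.get(f), str) for f in REQUIRED_FIELDS)

good = {"instruction": "ጥያቄ", "input": "", "output": "መልስ", "source": "walia"}
bad = {"instruction": "ጥያቄ", "output": "መልስ"}  # missing input/source

print(is_valid(good), is_valid(bad))  # -> True False
```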
+
+ ## 🌍 Dataset Sources
+
+ This dataset combines multiple high-quality Amharic language resources:
+
+ 1. **Walia-LLM**: Amharic instruction-following dataset
+ 2. **AYA Amharic**: Cohere's multilingual dataset (Amharic subset)
+ 3. **M2Lingual**: Cross-lingual dialogue dataset
+ 4. **AmQA**: Amharic question-answering from Wikipedia
+ 5. **Masakhane NLU**: African languages NLU benchmark
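Since each row carries a `source` field, the per-source mix can be inspected with a simple counter; a sketch over toy rows (the label strings here are assumptions — the real values are whatever the `source` column actually contains):

```python
from collections import Counter

# Toy rows mirroring the `source` column; real label strings may differ
rows = [
    {"source": "walia"}, {"source": "aya"}, {"source": "walia"},
    {"source": "amqa"}, {"source": "m2lingual"},
]

counts = Counter(r["source"] for r in rows)
print(counts.most_common(1))  # -> [('walia', 2)]
```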
+
+ ## 🚀 Deployment Ready
+
+ This dataset is ready for:
+
+ - **Production LLM training**
+ - **Commercial applications**
+ - **Research and development**
+ - **Community model building**
+ - **Educational purposes**
+
+ ## 📈 Performance Expectations
+
+ Models fine-tuned on this dataset are expected to develop:
+
+ - Strong Amharic language understanding
+ - Good instruction-following capabilities
+ - Diverse response generation
+ - Cultural and contextual awareness
+
+ ## 📄 Citation
+
+ If you use this dataset in your work, please cite:
+
+ ```bibtex
+ @dataset{amharic_llm_training_data,
+   title={Amharic LLM Training Dataset},
+   author={YoseAli},
+   year={2024},
+   url={https://huggingface.co/datasets/YoseAli/amharic-llm-training-data},
+   note={Complete Amharic LLM training dataset from multiple curated sources}
+ }
+ ```
+
+ ## 📜 License
+
+ MIT License - free for research and commercial use.
+
+ ## 🤝 Acknowledgments
+
+ - The EthioNLP community for the Walia-LLM dataset
+ - Cohere for the AYA multilingual dataset
+ - The Masakhane community for African NLP resources
+ - All contributors to the original source datasets
+
+ ## 📞 Contact
+
+ For questions, issues, or collaboration:
+ - Repository: [GitHub](https://github.com/Yosef-Ali/amharic-all-dataset-fine-tuning)
+ - Hugging Face: [@YoseAli](https://huggingface.co/YoseAli)
+
+ ---
+
+ **Ready for production deployment! 🚀**