---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- tr
tags:
- speech
- audio
- dataset
- tts
- asr
- merged-dataset
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: "data.jsonl"
  default: true
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: null
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: emotion
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train
    num_examples: 41427
  config_name: default
---

# TR-Full_dataset

This is a merged speech dataset containing 41,427 audio segments from 88 source datasets.

## Dataset Information

- **Total Segments**: 41,427
- **Speakers**: 222
- **Languages**: tr
- **Emotions**: neutral, angry, sad, happy
- **Original Datasets**: 88

## Dataset Structure

Each example contains:
- `audio`: Audio file (WAV format, original sampling rate preserved)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `emotion`: Detected emotion (one of: neutral, angry, sad, happy)
- `language`: Language code (`tr` for every segment in this dataset)
64
+ ## Usage
65
+
66
+ ### Loading the Dataset
67
+
68
+ ```python
69
+ from datasets import load_dataset
70
+
71
+ # Load the dataset
72
+ dataset = load_dataset("Codyfederer/tr-full-dataset")
73
+
74
+ # Access the training split
75
+ train_data = dataset["train"]
76
+
77
+ # Example: Get first sample
78
+ sample = train_data[0]
79
+ print(f"Text: {sample['text']}")
80
+ print(f"Speaker: {sample['speaker_id']}")
81
+ print(f"Language: {sample['language']}")
82
+ print(f"Emotion: {sample['emotion']}")
83
+
84
+ # Play audio (requires audio libraries)
85
+ # sample['audio']['array'] contains the audio data
86
+ # sample['audio']['sampling_rate'] contains the sampling rate
87
+ ```

### Alternative: Load from JSONL

```python
import json

from datasets import Audio, Dataset, Features, Value

# Load the JSONL file
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

features = Features({
    "audio": Audio(sampling_rate=None),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "emotion": Value("string"),
    "language": Value("string"),
})

dataset = Dataset.from_list(rows, features=features)
```
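
Once the rows are plain dictionaries, simple filtering and per-speaker statistics need nothing beyond the standard library. A minimal sketch, using made-up records in place of real `data.jsonl` entries:

```python
from collections import Counter

# Toy records standing in for parsed data.jsonl rows (values are hypothetical)
rows = [
    {"text": "merhaba", "speaker_id": "speaker_0", "emotion": "neutral"},
    {"text": "evet", "speaker_id": "speaker_0", "emotion": "happy"},
    {"text": "hayir", "speaker_id": "speaker_1", "emotion": "angry"},
]

# Keep only segments with a given emotion
neutral_rows = [r for r in rows if r["emotion"] == "neutral"]

# Count segments per speaker
per_speaker = Counter(r["speaker_id"] for r in rows)

print(len(neutral_rows), dict(per_speaker))
# 1 {'speaker_0': 2, 'speaker_1': 1}
```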

### Files

The repository includes:
- `data.jsonl` - Main dataset file with all columns (JSON Lines)
- `*.wav` - Audio files under `audio_XXX/` subdirectories
- `load_dataset.txt` - Python script for loading the dataset (rename to `.py` to use)

JSONL keys:
- `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion
- `language`: Language code (`tr`)

## Speaker ID Mapping

Speaker IDs have been made unique across all merged datasets to avoid conflicts. For example:
- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`

Original dataset information is preserved in the metadata for reference.
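
The remapping described above can be sketched as follows. This is an illustrative reconstruction, not the exact merge script: `remap_speakers` and the toy records are hypothetical.

```python
from itertools import count

def remap_speakers(sources):
    """Assign globally unique speaker IDs across several source datasets.

    `sources` is a list of record lists; each record carries a source-local
    `speaker_id`. Local IDs are remapped to fresh global IDs per source, so
    `speaker_0` from two different sources never collides.
    """
    next_id = count()
    merged = []
    for rows in sources:
        mapping = {}  # source-local speaker_id -> global speaker_id
        for row in rows:
            local = row["speaker_id"]
            if local not in mapping:
                mapping[local] = f"speaker_{next(next_id)}"
            merged.append({**row, "speaker_id": mapping[local]})
    return merged

# Two sources that both use speaker_0 / speaker_1 locally:
a = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
b = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
merged_ids = [r["speaker_id"] for r in remap_speakers([a, b])]
print(merged_ids)
# ['speaker_0', 'speaker_1', 'speaker_2', 'speaker_3']
```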

## Data Quality

This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
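
As a rough illustration of what segment-level quality filtering can look like: the `keep_segment` helper and both thresholds below are hypothetical, not the Vyvo Dataset Builder's actual criteria, which are not published with this card.

```python
# Hypothetical duration bounds for a segment-level quality filter
MIN_SECONDS = 1.0   # assumed lower bound: drop clipped fragments
MAX_SECONDS = 30.0  # assumed upper bound: drop overly long segments

def keep_segment(num_samples: int, sampling_rate: int) -> bool:
    """Return True if a segment's duration falls inside the accepted range."""
    duration = num_samples / sampling_rate
    return MIN_SECONDS <= duration <= MAX_SECONDS

# 16 kHz examples: a 0.5 s fragment is rejected, a 5 s segment is kept
print(keep_segment(8_000, 16_000), keep_segment(80_000, 16_000))
# False True
```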

## License

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

## Citation

```bibtex
@dataset{vyvo_merged_dataset,
  title={TR-Full_dataset},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/tr-full-dataset}
}
```

This dataset was created using the Vyvo Dataset Builder tool.