Modalities: Audio
Languages: Chinese
License:
Dataset Viewer
The dataset viewer is not available for this subset: the split names for the 'default' config could not be parsed (SplitsNotFoundError). Per the worker traceback, the root cause is that pandas could not read the metadata CSV during streaming:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbd in position 40: invalid start byte
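The decode error above means the metadata CSV is not UTF-8 encoded, which the `datasets` loader requires. For a Chinese-language dataset, byte 0xbd suggests a legacy encoding such as GBK. A minimal repair sketch, assuming the file is GBK/GB18030 (the function name and candidate list are illustrative, not part of the dataset):

```python
from pathlib import Path

def reencode_to_utf8(path, candidates=("utf-8-sig", "gb18030", "big5")):
    """Try candidate encodings in order and rewrite the file as UTF-8.

    Returns the encoding that succeeded. 'gb18030' is a superset of
    GBK/GB2312, which are common for Chinese CSVs saved by Windows tools.
    """
    raw = Path(path).read_bytes()
    for enc in candidates:
        try:
            text = raw.decode(enc)
        except UnicodeDecodeError:
            continue  # this candidate does not fit; try the next one
        Path(path).write_text(text, encoding="utf-8")
        return enc
    raise ValueError(f"none of {candidates} could decode {path}")
```

After re-encoding the metadata file and pushing it back to the repository, the viewer should be able to parse the splits again.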


000 Dataset Description

This dataset consists of recordings of the following 50 Mandarin sentences:

s001. 今天天气很好,适合出门散步。
s002. 我喜欢在早晨喝一杯热茶。
s003. 计算机科学是一门有趣的学科。
s004. 我们可以通过互联网获取大量信息。
s005. 语言是人类交流的重要工具。
s006. 学习外语可以开阔视野。
s007. 人工智能正在改变我们的生活。
s008. 传统文化需要得到传承和保护。
s009. 现代科技让沟通变得更加便捷。
s010. 健康饮食对身体非常重要。
s011. 阅读能帮助我们放松心情。
s012. 运动是保持活力的好方法。
s013. 音乐可以表达丰富的情感。
s014. 自然风光总是让人心旷神怡。
s015. 教育是社会发展的重要基石。
s016. 环境保护是我们共同的责任。
s017. 科技进步带来了许多便利。
s018. 友谊是人生中宝贵的财富。
s019. 创新是推动社会前进的动力。
s020. 旅行可以让我们认识不同的文化。
s021. 时间是有限的,我们应该珍惜。
s022. 努力工作能带来成就感。
s023. 睡眠质量直接影响健康。
s024. 家庭教育对孩子成长至关重要。
s025. 社会媒体改变了人们的交流方式。
s026. 艺术创作需要灵感和耐心。
s027. 科学探索永无止境。
s028. 诚信是做人的基本原则。
s029. 城市规划需要考虑可持续发展。
s030. 饮食文化反映了地域特色。
s031. 历史是人类经验的积累。
s032. 音乐节拍能让人心情愉悦。
s033. 交通便利促进了经济发展。
s034. 语言学习需要持续的练习。
s035. 团队合作能提高工作效率。
s036. 技术进步带来了新的职业机会。
s037. 绿色能源是未来的发展方向。
s038. 情绪管理对心理健康很重要。
s039. 法律是社会秩序的保障。
s040. 阅读经典书籍可以提升修养。
s041. 体育锻炼有助于增强体质。
s042. 技术创新离不开人才培养。
s043. 美食是人们生活中的一大享受。
s044. 良好的沟通能减少误解。
s045. 科技进步也带来了新的挑战。
s046. 人与自然应当和谐共生。
s047. 文化多样性丰富了人类社会。
s048. 家庭教育应注重品格培养。
s049. 学习是一种终身的习惯。
s050. 感谢你听完这段录音。
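If the recordings are paired with these transcripts via the `datasets` AudioFolder convention (which is what the loader error above suggests), the metadata CSV would look roughly like the fragment below. The `file_name` column is required by AudioFolder; the `transcription` column name and the `audio/` paths are assumptions for illustration:

```csv
file_name,transcription
audio/s001.wav,今天天气很好,适合出门散步。
audio/s002.wav,我喜欢在早晨喝一杯热茶。
audio/s003.wav,计算机科学是一门有趣的学科。
```

The file must be saved as UTF-8 for the hub's loader to read it.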

000 Issues Encountered & Solutions

During dataset preparation I encountered two main challenges. First, the original recordings contained significant background noise. To address this, I used the 'Voice Extractor' (beta) feature in Audacity; this AI-driven tool separates speech from ambient noise, letting me suppress constant hums and minor disturbances without noticeably degrading sound quality.

The second issue was inconsistent audio segmentation. When I initially used Audacity's 'Silence Detector', the audio was sometimes cut in the middle of a word or left with excessive silence. My solution was a hybrid approach: I first ran the automatic detector to obtain approximate boundaries, then manually inspected and fine-tuned each segment by zooming in on the waveform, ensuring that cuts occur only at natural pauses between sentences.
