# AirFM-DDA Dataset

## Dataset Summary

This repository contains the precomputed CSI evaluation set used by the AirFM-DDA validation pipeline.
The data is derived from the processed DeepMIMO test split and stores binary PyTorch sample files under `samples/`, together with a root-level `manifest.json` that records sample ordering, split metadata, and per-sample paths.

This repository is intended to be used together with the model checkpoints in:
https://huggingface.co/SII-kejia/AirFM-DDA-Model
## What Is Included

Current repository contents:

- `samples/`: precomputed CSI samples stored as `.pt` files
- `manifest.json`: dataset manifest used by the released validation loader
- `.gitattributes`: Hugging Face LFS tracking metadata
- `README.md`: this dataset card
Dataset scale:

- Total saved samples: 47,256
- Approximate on-disk size: 122 GB
- Save dtype: `float32`
- Maximum temporal length (`T`): 80
- Maximum subcarrier coverage (`K`): 128
Per-folder counts:

| Folder | Samples |
|---|---|
| `samples/city_18_denver_3p5/cfg1` | 8,863 |
| `samples/city_18_denver_3p5/cfg2` | 8,863 |
| `samples/city_19_oklahoma_3p5/cfg1` | 8,222 |
| `samples/city_19_oklahoma_3p5/cfg2` | 8,222 |
| `samples/city_23_beijing_3p5/cfg1` | 4,570 |
| `samples/city_23_beijing_3p5/cfg2` | 4,570 |
| `samples/city_27_rio_de_janeiro_3p5/cfg1` | 1,973 |
| `samples/city_27_rio_de_janeiro_3p5/cfg2` | 1,973 |
## Source and Generation Settings

The current release was generated with the following recorded settings from `manifest.json`:

- Source multipath root: processed DeepMIMO `test` data
- Frame configuration file: `frame_structure_configs_test2.yaml`
- Validation ratio: 0.1
- Split seed: 42
- CSI generation seed: 59
- Number of paths used in CSI generation: 25
- Storage layout: `samples/<city_folder>/<cfg_name>/csi_sample_XXXXXXXX.pt`

This repository stores precomputed evaluation artifacts, not the original raw DeepMIMO source files.
## Directory Layout

```
AirFM-DDA-dataset/
├── README.md
├── manifest.json
└── samples/
    ├── city_18_denver_3p5/
    │   ├── cfg1/
    │   └── cfg2/
    ├── city_19_oklahoma_3p5/
    │   ├── cfg1/
    │   └── cfg2/
    ├── city_23_beijing_3p5/
    │   ├── cfg1/
    │   └── cfg2/
    └── city_27_rio_de_janeiro_3p5/
        ├── cfg1/
        └── cfg2/
```
## Sample Format

Each `.pt` file stores one serialized PyTorch dictionary with the following keys:

- `CSI_sample`: saved CSI tensor
- `mask_TK`: boolean mask tensor
- `Rx_ant_ind`: receive-antenna index tensor
- `cfg_tensor`: frame/configuration tensor
- `meta`: per-sample metadata dictionary
The released validation code expects:

- `CSI_sample` to have shape `[2, T, K, S]`
- `manifest.json` to exist at the dataset root
- `manifest["samples"][i]["relative_path"]` to point to the corresponding `.pt` file
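A quick sanity check against these expectations might look like the sketch below (our own snippet, not part of the release; it assumes the dataset has already been downloaded to `./AirFM-DDA-dataset` as described under Recommended Usage):

```python
import json
from pathlib import Path

import torch

root = Path("./AirFM-DDA-dataset")  # local dataset root
manifest = json.loads((root / "manifest.json").read_text(encoding="utf-8"))

entry = manifest["samples"][0]
sample = torch.load(root / entry["relative_path"], map_location="cpu", weights_only=False)

csi = sample["CSI_sample"]
assert csi.ndim == 4 and csi.shape[0] == 2, f"unexpected CSI shape {tuple(csi.shape)}"
_, T, K, S = csi.shape
assert T <= 80 and K <= 128  # maxima recorded under "Dataset scale" above
assert sample["mask_TK"].dtype == torch.bool  # documented as a boolean mask
```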
The manifest additionally records, for every sample:

- `sample_index`
- `relative_path`
- `source_val_index`
- `source_batch_index`
- `source_in_batch`
- `shape_key`
- `city_folder`
- `cfg_name`
- `row_index`
- `cfg_index`
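For illustration, the following sketch (ours, not part of the released tooling) uses the `city_folder` and `cfg_name` fields to recompute the per-folder counts shown in the table above:

```python
import json
from collections import Counter
from pathlib import Path

root = Path("./AirFM-DDA-dataset")
manifest = json.loads((root / "manifest.json").read_text(encoding="utf-8"))

# Tally samples per (city_folder, cfg_name); totals should match the per-folder table.
counts = Counter((e["city_folder"], e["cfg_name"]) for e in manifest["samples"])
for (city, cfg), n in sorted(counts.items()):
    print(f"{city}/{cfg}: {n}")
```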
## Recommended Usage

### Option 1: Download with Hugging Face CLI

```bash
hf download SII-kejia/AirFM-DDA-dataset --repo-type dataset --local-dir ./AirFM-DDA-dataset
```
After downloading, the local directory should contain both:

- `./AirFM-DDA-dataset/manifest.json`
- `./AirFM-DDA-dataset/samples/`
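To confirm the download is complete before pointing any code at it, a minimal check such as the following (our own convenience snippet) cross-references the manifest against the files on disk:

```python
import json
from pathlib import Path

root = Path("./AirFM-DDA-dataset")
manifest = json.loads((root / "manifest.json").read_text(encoding="utf-8"))

# Every relative_path listed in the manifest should resolve to an existing .pt file.
missing = [
    e["relative_path"]
    for e in manifest["samples"]
    if not (root / e["relative_path"]).is_file()
]
print(f"{len(manifest['samples'])} samples listed, {len(missing)} missing")
```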
### Option 2: Download from Python

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="SII-kejia/AirFM-DDA-dataset",
    repo_type="dataset",
    local_dir="./AirFM-DDA-dataset",
)
```
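Since the full snapshot is roughly 122 GB, it can be convenient to fetch only the manifest plus a subset of city folders for a first look; `snapshot_download` accepts an `allow_patterns` argument for this (the folder below is just an example choice):

```python
from huggingface_hub import snapshot_download

# Partial download for inspection: the manifest plus one city folder.
snapshot_download(
    repo_id="SII-kejia/AirFM-DDA-dataset",
    repo_type="dataset",
    local_dir="./AirFM-DDA-dataset",
    allow_patterns=["manifest.json", "samples/city_27_rio_de_janeiro_3p5/*"],
)
```

Note that the released validation loader expects every manifest entry to resolve, so a partial download is suitable for inspection rather than for a full validation run.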
### Option 3: Load One Sample with PyTorch

```python
import json
from pathlib import Path

import torch

# Resolve the first sample through the root manifest, then load it on CPU.
root = Path("./AirFM-DDA-dataset")
manifest = json.loads((root / "manifest.json").read_text(encoding="utf-8"))
first_rel_path = manifest["samples"][0]["relative_path"]
sample = torch.load(root / first_rel_path, map_location="cpu", weights_only=False)

print(sample.keys())
print(sample["CSI_sample"].shape)
print(sample["mask_TK"].dtype)
```
## Using with the Released Validation Pipeline

The AirFM-DDA validation pipeline in the released codebase expects a dataset root containing `manifest.json` and `samples/` exactly as provided here.
A matching loader looks up the root manifest and then loads individual files using `relative_path`. In other words, point the validation code at the dataset root, not directly at `samples/`.
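As a rough illustration of what such a loader does (our sketch, not the released implementation), a manifest-driven `torch.utils.data.Dataset` could look like this:

```python
import json
from pathlib import Path

import torch
from torch.utils.data import Dataset


class AirFMDDASamples(Dataset):
    """Minimal manifest-driven reader; a sketch, not the released loader."""

    def __init__(self, root: str):
        self.root = Path(root)  # dataset root containing manifest.json and samples/
        manifest = json.loads((self.root / "manifest.json").read_text(encoding="utf-8"))
        self.entries = manifest["samples"]

    def __len__(self) -> int:
        return len(self.entries)

    def __getitem__(self, i: int) -> dict:
        # Each .pt file holds the dict described under "Sample Format".
        path = self.root / self.entries[i]["relative_path"]
        return torch.load(path, map_location="cpu", weights_only=False)


ds = AirFMDDASamples("./AirFM-DDA-dataset")
print(len(ds), ds[0]["CSI_sample"].shape)
```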
## Notes and Limitations

- This is a binary-artifact dataset composed of `.pt` files, so the Hugging Face dataset viewer is not expected to provide an interactive table preview.
- This release is designed for AirFM-DDA evaluation and reproducible validation, not as a raw-source DeepMIMO redistribution.
- The repository currently contains the precomputed evaluation split used by the AirFM-DDA workflow; it is not presented as a full train/val/test benchmark package.