bhogan committed · Commit 3f6dc50 · verified · Parent(s): 044c592

Update README.md
configs:
  - split: validation
    path: data/validation-*
license: apache-2.0
---
# Q Code Pretraining Corpus

This dataset provides a high-quality corpus of Q programming language code and documentation, curated for pretraining large language models and code models. It is designed to maximize coverage of Q syntax, idioms, and real-world usage for robust domain-adaptive pretraining.

## 📊 Dataset Overview

- **Total Data**: Over 1.6 million Q tokens, 5+ million characters
- **Documents**: 342 training chunks, 39 validation chunks
- **Source Types**:
  - Open-source Q repositories (MIT/Apache 2.0 licenses)
  - Official KDB+/Q documentation and tutorials
  - Hand-curated code snippets and scripts
- **Format**: Cleaned, deduplicated, chunked for efficient pretraining

## 🎯 Key Features

- **Q-Only**: All data is pure Q language (no mixed Python or non-code noise)
- **Permissive Licensing**: All source code is MIT or Apache 2.0, suitable for both research and commercial use
- **Coverage**: Includes code from analytics, time-series, database queries, and utilities
- **Filtered & Scored**: LLM-assisted quality scoring plus manual review for top-tier data fidelity
- **Chunked & Ready**: Delivered as 4k-token chunks for immediate use with Hugging Face, TRL, or custom pipelines

52
+ ## πŸ—οΈ Dataset Structure
53
+
54
+ Each record is a text chunk, containing code or documentation in Q.
55
+
56
+ Splits:
57
+ - `train`: Main corpus for pretraining (342 chunks)
58
+ - `validation`: Holdout set for evaluation (39 chunks)
59
+
60
+ Sample record:
61
+ ```python
62
+ {
63
+ "text": str # Raw Q code or documentation chunk
64
+ }
65
+ ```
## 🧑‍💻 Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full Q pretraining dataset
dataset = load_dataset("bhogan/q-pretraining-corpus")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
```

### Example: Previewing Data

```python
sample = dataset["train"][0]
print(sample["text"])
```

### Training Usage

This dataset is designed for language model pretraining using next-token prediction or masked language modeling objectives, and supports efficient training with Hugging Face Transformers, TRL, or custom frameworks.

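For the next-token-prediction objective, tokenized chunks are typically concatenated and re-split into fixed-length blocks before training. A minimal sketch of that packing step, assuming documents arrive as lists of token ids (the `pack_sequences` name and the block size are illustrative, not part of the dataset):

```python
def pack_sequences(token_ids_per_doc: list[list[int]], block_size: int = 4096) -> list[list[int]]:
    """Concatenate tokenized documents and split into fixed-size blocks,
    the usual preprocessing for next-token-prediction pretraining.
    Any leftover tail shorter than block_size is dropped.
    """
    flat = [tok for doc in token_ids_per_doc for tok in doc]
    usable = (len(flat) // block_size) * block_size
    return [flat[i : i + block_size] for i in range(0, usable, block_size)]
```

In practice this is the same pattern as the common `group_texts` preprocessing used with Hugging Face tokenized datasets.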
## 🔀 About Q Programming Language

Q is a vector and array programming language developed by Kx Systems for high-performance analytics, finance, and time-series applications.

It features:
- Concise, functional, array-oriented syntax
- Powerful built-in operators for large-scale data manipulation
- Industry adoption in trading, banking, and real-time analytics

## 📁 Source Repositories

Major open-source Q repos included:
- DataIntellectTech/TorQ
- psaris/qtips
- psaris/funq
- KxSystems/ml
- finos/kdb
- LeslieGoldsmith/qprof
- jonathonmcmurray/reQ
- ...and more

All with permissive licenses (MIT or Apache 2.0).

## 📈 Data Preparation & Filtering

- **Automated Scoring**: Qwen-2.5-32B was used to score each file (0–10) for quality and relevance; only files scoring ≥4 were included.
- **Manual Review**: Additional cleaning to remove non-Q files or low-value content.
- **Deduplication**: Duplicate and boilerplate code removed.

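The scoring and deduplication steps above might be combined along these lines. Here `score_fn` is a hypothetical stand-in for the LLM-based scorer (Qwen-2.5-32B in the card), and duplicates are dropped by exact content hash; the real pipeline's interfaces may differ:

```python
import hashlib

def filter_and_dedup(files: list[str], score_fn, min_score: int = 4) -> list[str]:
    """Keep files whose 0-10 quality score meets the threshold,
    then drop exact duplicates by SHA-256 content hash."""
    kept, seen = [], set()
    for text in files:
        if score_fn(text) < min_score:
            continue  # below the quality threshold
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate of an earlier file
        seen.add(digest)
        kept.append(text)
    return kept
```

Near-duplicate and boilerplate removal would need fuzzier matching (e.g. shingling), but exact hashing already catches verbatim copies across repositories.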
## 📝 Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{q_pretraining_corpus_2024,
  title={Q Code Pretraining Corpus},
  author={Brendan Rappazzo Hogan},
  year={2024},
  url={https://huggingface.co/datasets/bhogan/q-pretraining-corpus},
  note={Dataset for domain-adaptive pretraining of language models on the Q programming language}
}
```

**Associated Paper:** [Link to paper will be added here]