---
dataset_info:
  features:
  - name: image_id
    dtype: string
  - name: label
    dtype: int32
  - name: clip_model
    dtype: string
  - name: clip_features
    list: float32
  - name: vector_dim
    dtype: int32
  - name: timestamp
    dtype: timestamp[ns]
  splits:
  - name: clip_vit_b32_train
    num_bytes: 2723761042
    num_examples: 1281167
  - name: clip_vit_laion_b32_train
    num_bytes: 2789100559
    num_examples: 1281167
  - name: clip_vit_laion_b32_validation
    num_bytes: 108850000
    num_examples: 50000
  - name: clip_vit_b16_train
    num_bytes: 2777570056
    num_examples: 1281167
  - name: clip_vit_b16_validation
    num_bytes: 108400000
    num_examples: 50000
  - name: clip_vit_l14_train
    num_bytes: 4090766231
    num_examples: 1281167
  - name: clip_vit_l14_validation
    num_bytes: 159650000
    num_examples: 50000
  - name: clip_vit_laion_bigg14_train
    num_bytes: 6728689084
    num_examples: 1281167
  - name: clip_vit_laion_bigg14_validation
    num_bytes: 262600000
    num_examples: 50000
  - name: clip_vit_b32_validation
    num_bytes: 108400000
    num_examples: 50000
  - name: clip_vit_b32_test
    num_bytes: 216800000
    num_examples: 100000
  - name: clip_vit_b16_test
    num_bytes: 216800000
    num_examples: 100000
  - name: clip_vit_laion_b32_test
    num_bytes: 217700000
    num_examples: 100000
  - name: clip_vit_l14_test
    num_bytes: 319300000
    num_examples: 100000
  - name: clip_vit_laion_h14_test
    num_bytes: 422500000
    num_examples: 100000
  download_size: 25438949728
  dataset_size: 21250886972
configs:
- config_name: default
  data_files:
  - split: clip_vit_b32_train
    path: data/clip_vit_b32_train-*
  - split: clip_vit_b32_validation
    path: data/clip_vit_b32_validation-*
  - split: clip_vit_laion_b32_train
    path: data/clip_vit_laion_b32_train-*
  - split: clip_vit_laion_b32_validation
    path: data/clip_vit_laion_b32_validation-*
  - split: clip_vit_b16_train
    path: data/clip_vit_b16_train-*
  - split: clip_vit_b16_validation
    path: data/clip_vit_b16_validation-*
  - split: clip_vit_l14_train
    path: data/clip_vit_l14_train-*
  - split: clip_vit_l14_validation
    path: data/clip_vit_l14_validation-*
  - split: clip_vit_laion_bigg14_train
    path: data/clip_vit_laion_bigg14_train-*
  - split: clip_vit_laion_bigg14_validation
    path: data/clip_vit_laion_bigg14_validation-*
  - split: clip_vit_b32_test
    path: data/clip_vit_b32_test-*
  - split: clip_vit_b16_test
    path: data/clip_vit_b16_test-*
  - split: clip_vit_laion_b32_test
    path: data/clip_vit_laion_b32_test-*
  - split: clip_vit_l14_test
    path: data/clip_vit_l14_test-*
  - split: clip_vit_laion_h14_test
    path: data/clip_vit_laion_h14_test-*
task_categories:
- feature-extraction
- image-feature-extraction
license: mit
tags:
- features
- image_features
- extracted_features
- precomputed_features
- imagenet
- imagenet_features
- clip_vit
- variants
size_categories:
- 1M<n<10M
---

# Update: 10/2/2025

After grilling me for 20 minutes, Claude said I'm not being careful enough with my dataset curation, so I've included the preparer script as well.

Claude Sonnet 4.5 is kind of a chad.

# Update: 9/26/2025

Having to download this whole repo is annoying, so I'm making sure the splits are named train/validation/test (where they exist), prefixed with the CLIP model name.
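With that naming, pulling one model's features looks like the sketch below. The repo id is a placeholder (substitute the actual one), and the `load_dataset` call is commented out since it needs network access:

```python
def split_name(clip_model: str, phase: str) -> str:
    """Build a split name like "clip_vit_b32_validation".

    Assumption: the pattern is <clip_model>_<train|validation|test>,
    as listed in the YAML header above.
    """
    return f"{clip_model}_{phase}"

# Usage (needs the `datasets` library and network access; repo id is a placeholder):
# from datasets import load_dataset
# ds = load_dataset("<this-repo-id>", split=split_name("clip_vit_b32", "validation"))
```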


# Older non-dated updates

Everything was extracted with torch configured for deterministic mode, using seed 42 on an A100 in Colab; so if the results vary from expectation, it's down to CUDA.

It's a little quirky:
* Most of the models have train, validation, and test splits, but some are missing one or more.
* Most of the splits have a proper `image_id` md5 hash for verification.
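The `image_id` hash can be used to verify a locally downloaded image against its record. A minimal sketch, assuming the id is the md5 hex digest of the raw image bytes (check the included preparer script for the exact recipe):

```python
import hashlib

def md5_hex(raw: bytes) -> str:
    # Assumption: image_id is md5(raw image bytes) as a hex string;
    # confirm against the preparer script before relying on this.
    return hashlib.md5(raw).hexdigest()

# Hypothetical usage against one record:
# ok = md5_hex(open("some_image.jpg", "rb").read()) == record["image_id"]
```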

The prompts used were the direct, literal class names; no "a photo of" template or similar variation, just the classification text.
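So, to reproduce matching text-side features, encode the bare class name rather than a template. A small sketch of the prompt construction (the template example is only there to show what was *not* done):

```python
def make_prompts(class_names, template=None):
    # This dataset used the literal class text (template=None),
    # not "a photo of a {}" style templates.
    if template is None:
        return list(class_names)
    return [template.format(name) for name in class_names]

print(make_prompts(["goldfish", "tench"]))            # literal, as used here
print(make_prompts(["goldfish"], "a photo of a {}"))  # templated, NOT used here
```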

This is a series of CLIP-ViT feature maps extracted from a 256x256 cropped-and-resized ImageNet variant hosted here on Hugging Face.

I ran the processor at 224x224 and then extracted features from the entire dataset batch-sequentially, while simultaneously capturing the classifiers and classifications associated with each image for downstream testing and assessment.

Academic and research use only.

clip-vit-large-patch14 variations do exist in the splits. 

clip-vit-bigG is the 1280-dim variation and it does exist; it took quite a while to extract, and it is in fact missing its test split. Sorry about that.

There are several variants of clip-vit-base (OpenAI and LAION releases); each was extracted using the same process as the others.
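For sizing downstream heads, the feature width varies by backbone. The bigG width (1280) is stated above; the others are the standard CLIP projection dims and are listed here as assumptions; verify against the `vector_dim` field in each record:

```python
# Feature (projection) width per variant in this repo.
# bigG's 1280 is stated in the card; the rest are the standard CLIP
# projection dims. Double-check against `vector_dim` in each record.
CLIP_FEATURE_DIM = {
    "clip_vit_b32": 512,
    "clip_vit_b16": 512,
    "clip_vit_laion_b32": 512,
    "clip_vit_l14": 768,
    "clip_vit_laion_bigg14": 1280,
    "clip_vit_laion_h14": 1024,
}
```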