ryokamoi committed on
Commit b97dea3 · verified · 1 Parent(s): 29ca982

Update README.md

Files changed (1): README.md +6 -5
README.md CHANGED
@@ -117,7 +117,7 @@ configs:
  🌐 <a href="https://visonlyqa.github.io/">Project Website</a> | 📄 <a href="https://arxiv.org/abs/2412.00947">Paper</a> | 🤗 <a href="https://huggingface.co/collections/ryokamoi/visonlyqa-674e86c7ec384b629bb97bc3">Dataset</a> | 🔥 <a href="https://github.com/open-compass/VLMEvalKit">VLMEvalKit</a>
  </p>

- This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information".
+ This repository contains the code and data for the paper "VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information" (COLM 2025).

  VisOnlyQA is designed to evaluate the visual perception capability of large vision language models (LVLMs) on geometric information of scientific figures. The evaluation set includes 1,200 multiple choice questions in 12 visual perception tasks on 4 categories of scientific figures. We also provide a training dataset consisting of 70k instances.

@@ -132,10 +132,11 @@ VisOnlyQA is designed to evaluate the visual perception capability of large visi
  </p>

  ```bibtex
- @misc{kamoi2024visonlyqa,
- title={VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
- author={Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
- year={2024},
+ @inproceedings{kamoi2025visonlyqa,
+ author = {Ryo Kamoi and Yusen Zhang and Sarkar Snigdha Sarathi Das and Ranran Haoran Zhang and Rui Zhang},
+ title = {VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information},
+ year = {2025},
+ booktitle = {COLM 2025},
  }
  ```