Yassine Ouali
youali
AI & ML interests
ML, ∀ subject ∈ adjacent(ML)
Organizations
None yet
LLMs
- Ziya2: Data-centric Learning is All LLMs Need
  Paper • 2311.03301 • Published • 20
- Co-training and Co-distillation for Quality Improvement and Compression of Language Models
  Paper • 2311.02849 • Published • 8
- MFTCoder: Boosting Code LLMs with Multitask Fine-Tuning
  Paper • 2311.02303 • Published • 12
- ADaPT: As-Needed Decomposition and Planning with Language Models
  Paper • 2311.05772 • Published • 15
Multimodal/Vision LLMs
- GLaMM: Pixel Grounding Large Multimodal Model
  Paper • 2311.03356 • Published • 37
- CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding
  Paper • 2311.03354 • Published • 8
- CogVLM: Visual Expert for Pretrained Language Models
  Paper • 2311.03079 • Published • 28
- UnifiedVisionGPT: Streamlining Vision-Oriented AI through Generalized Multimodal Framework
  Paper • 2311.10125 • Published • 6
Models: 0 (none public yet)
Datasets: 0 (none public yet)