I built a unified wrapper for llmcompressor, llama.cpp & coremltools. Looking for LLM users to help me break it!
Hey everyone,
If you regularly quantize models, you know the pain of jumping between different libraries depending on your target format. I’ve been working on Qwodel, a Python package that acts as a clean, single-API wrapper around llmcompressor, coremltools, and llama.cpp.
The goal is to let you export to AWQ, GGUF, or Apple's CoreML without rewriting your pipeline or fighting with dependencies.
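To make the idea concrete, here is a minimal sketch of what a single-entry-point export API could look like. This is purely illustrative — the function names (`export_model`, the `_export_*` backends) are assumptions, not Qwodel's actual API; the real package wires these stubs to llmcompressor, llama.cpp, and coremltools.

```python
# Hypothetical sketch: one entry point dispatching to per-format backends.
# All names here are illustrative, not Qwodel's real API surface.

from typing import Callable, Dict


def _export_awq(model_path: str, out_dir: str) -> str:
    # Placeholder for an llmcompressor-based AWQ export.
    return f"{out_dir}/model-awq"


def _export_gguf(model_path: str, out_dir: str) -> str:
    # Placeholder for a llama.cpp convert + quantize step.
    return f"{out_dir}/model.gguf"


def _export_coreml(model_path: str, out_dir: str) -> str:
    # Placeholder for a coremltools conversion.
    return f"{out_dir}/model.mlpackage"


_BACKENDS: Dict[str, Callable[[str, str], str]] = {
    "awq": _export_awq,
    "gguf": _export_gguf,
    "coreml": _export_coreml,
}


def export_model(model_path: str, fmt: str, out_dir: str = ".") -> str:
    """Single entry point: pick the backend by target format name."""
    try:
        backend = _BACKENDS[fmt]
    except KeyError:
        raise ValueError(
            f"unsupported format {fmt!r}; choose from {sorted(_BACKENDS)}"
        )
    return backend(model_path, out_dir)
```

The point of the pattern is that callers only ever change the `fmt` string, so the surrounding pipeline stays identical across targets.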
I need heavy LLM users to test this and tell me exactly where it fails.
If you are currently converting heavy models or custom fine-tunes, please try running them through Qwodel. I am specifically looking for:
- Edge cases: Which model architectures cause the wrapper to crash?
- Dependency conflicts: Did it break your existing environment?
- Missing pass-throughs: Are there specific llama.cpp or llmcompressor arguments you need exposed in the Qwodel API?
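On the pass-through point, one common pattern is to forward any unrecognized keyword arguments straight to the underlying tool instead of rejecting them. A hedged sketch of that idea, with placeholder names (`export_gguf`, `converter`, and the flag spellings are all assumptions, not llama.cpp's or Qwodel's actual interfaces):

```python
# Illustrative pass-through pattern: wrapper-level options are named
# parameters, everything else flows to the backend command untouched.
# The "converter" command and flag names are placeholders.

from typing import List


def export_gguf(model_path: str, out_path: str, **backend_kwargs) -> List[str]:
    """Assemble a backend command, forwarding unknown kwargs as CLI flags.

    Underscores become dashes; a True value becomes a bare flag.
    """
    cmd = ["converter", model_path, "-o", out_path]
    for key, value in backend_kwargs.items():
        flag = "--" + key.replace("_", "-")
        if value is True:
            cmd.append(flag)          # boolean switch, e.g. --pure
        else:
            cmd += [flag, str(value)]  # valued option, e.g. --n-threads 8
    return cmd
```

With this shape, a user who needs an argument the wrapper never named explicitly can still reach it, e.g. `export_gguf("m.safetensors", "m.gguf", n_threads=8, pure=True)`.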
Read the Docs & Quickstart: http://docs.qwodel.com
Report Bugs / Contribute: www.github.com/qwodel/qwodel
Drop your error logs in the GitHub issues. Brutal feedback is welcome!
Hi @kinderasteroid,
This looks really useful. Having a single API to export to AWQ, GGUF, and CoreML is a great idea; managing dependencies across multiple quantization tools can become quite complex. Thanks for building and sharing Qwodel.