This model is a 7B [Self-RAG](https://selfrag.github.io/) model that generates outputs together with reflection tokens.
Self-RAG is trained on our instruction-following corpora with interleaving passages and reflection tokens using the standard next-token prediction objective, enabling efficient and stable learning with fine-grained feedback.
At inference, we leverage reflection tokens covering diverse aspects of generations to sample the output that best aligns with users' preferences.
See full descriptions in [our paper](https://arxiv.org/abs/2310.11511).

## Usage

Here, we show an easy way to quickly download our model from HuggingFace and run it with `vllm` on pre-given passages. Make sure to install the dependencies listed in [self-rag/requirements.txt](https://github.com/AkariAsai/self-rag/blob/main/requirements.txt).
To run our full inference pipeline with a retrieval system and fine-grained tree decoding, please use [our code](https://github.com/AkariAsai/self-rag).

```py
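# Minimal sketch of loading this model with vllm and querying it with a
# pre-given passage. The model id "selfrag/selfrag_llama2_7b", the example
# query/passage, and the prompt template are assumptions based on the
# Self-RAG repository; adjust them to your setup.
try:
    from vllm import LLM, SamplingParams
    HAS_VLLM = True
except ImportError:  # vllm not installed; the prompt helper below still works
    HAS_VLLM = False

def format_prompt(input, paragraph=None):
    # Instruction-style prompt; a retrieved passage, if given, is appended
    # inside [Retrieval]<paragraph>...</paragraph> markers.
    prompt = "### Instruction:\n{0}\n\n### Response:\n".format(input)
    if paragraph is not None:
        prompt += "[Retrieval]<paragraph>{0}</paragraph>".format(paragraph)
    return prompt

query = "Can you tell me the difference between llamas and alpacas?"
passage = "The alpaca (Lama pacos) is a species of South American camelid."

if HAS_VLLM:
    # Greedy decoding; keep special (reflection) tokens in the output so they
    # can be inspected or used for re-ranking.
    model = LLM("selfrag/selfrag_llama2_7b", dtype="half")
    sampling_params = SamplingParams(
        temperature=0.0, top_p=1.0, max_tokens=100, skip_special_tokens=False
    )
    preds = model.generate([format_prompt(query, passage)], sampling_params)
    print(preds[0].outputs[0].text)
else:
    print(format_prompt(query, passage))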