alime-reranker-large-zh
The alime reranker model: a cross-encoder that scores the relevance of Chinese query–passage pairs for re-ranking.
Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Query-passage pairs to score: the first pair is relevant, the second is not.
pairs = [["西湖在哪？", "西湖风景名胜区位于浙江省杭州市"], ["今天天气不错", "你吃饭了吗"]]

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("Pristinenlp/alime-reranker-large-zh")
model = AutoModelForSequenceClassification.from_pretrained("Pristinenlp/alime-reranker-large-zh").to(device)
model.eval()

with torch.no_grad():
    inputs = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt", max_length=512).to(device)
    # One relevance logit per pair; higher means more relevant.
    scores = model(**inputs, return_dict=True).logits.view(-1).float()

print(scores.tolist())
```
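The model emits one unnormalized relevance logit per pair, so re-ranking a set of candidate passages for a query amounts to scoring each (query, passage) pair and sorting by logit; a sigmoid can map the logits to 0–1 relevance scores if needed. A minimal sketch of that sorting step, with hypothetical logit values standing in for real model output:

```python
import torch

def rank_by_score(passages, scores):
    """Return passages sorted by descending relevance score."""
    order = torch.tensor(scores).argsort(descending=True)
    return [passages[i] for i in order]

# Hypothetical logits of the kind the reranker above would produce for one query.
passages = ["西湖风景名胜区位于浙江省杭州市", "你吃饭了吗"]
logits = [4.2, -3.1]

ranked = rank_by_score(passages, logits)
probs = torch.sigmoid(torch.tensor(logits)).tolist()  # optional 0-1 relevance scores
print(ranked)
print([round(p, 3) for p in probs])
```

Sorting raw logits and sorting sigmoid scores give the same order, since the sigmoid is monotonic; normalize only when you need comparable 0–1 scores across queries.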
Evaluation results

All scores are self-reported MTEB results.

| Dataset | map | mrr |
|---|---|---|
| MTEB CMedQAv1 (test) | 82.322 | 84.914 |
| MTEB CMedQAv2 (test) | 84.086 | 86.901 |
| MTEB MMarcoReranking | 35.497 | 35.292 |
| MTEB T2Reranking | 68.258 | 78.642 |