
sentenceTransformer_nepali_embedding

This is a sentence-transformers model fine-tuned from jangedoo/all-MiniLM-L6-v2-nepali on the json dataset (3,385 Nepali anchor/positive pairs). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: jangedoo/all-MiniLM-L6-v2-nepali
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: nep
  • License: apache-2.0
  • Model Size: 22.7M parameters (F32)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
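
For reference, this pipeline is masked mean pooling over BertModel token embeddings followed by L2 normalization. Below is a minimal sketch of the same computation with plain transformers, assuming the Hub repo exposes the underlying BERT weights at its root (as sentence-transformers repos typically do); the input sentence is illustrative:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ritesh-07/fine_tuned_model_03")
bert = AutoModel.from_pretrained("ritesh-07/fine_tuned_model_03")

# "उदाहरण वाक्य" = "example sentence" (illustrative input)
batch = tokenizer(["उदाहरण वाक्य"], padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embs = bert(**batch).last_hidden_state        # (batch, seq, 384)

mask = batch["attention_mask"].unsqueeze(-1).float()    # (batch, seq, 1)
mean_pooled = (token_embs * mask).sum(1) / mask.sum(1)  # Pooling: mean over real tokens
embeddings = F.normalize(mean_pooled, p=2, dim=1)       # Normalize(): unit length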

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ritesh-07/fine_tuned_model_03")
# Run inference
sentences = [
    # "What kind of declaration of truth is required to cancel a passport?"
    'राहदानी रद्द गर्न कस्तो सत्यताको घोषणा चाहिन्छ?',
    # "To cancel a passport, the applicant must declare that the stated details are true
    # and that they have not done anything that constitutes an offence under prevailing law."
    'राहदानी रद्द गर्न निवेदकले उल्लेखित विवरण साँचो भएको र प्रचलित कानून बमोजिम अपराध ठहरिने कुनै काम नगरेको सत्यताको घोषणा गर्नुपर्छ।',
    # "The citizenship team may cancel the field-inquiry report if it contains false facts or incomplete information."
    'नागरिकता टोलीले गलत तथ्य वा अपूर्ण जानकारी भएमा सर्जमिनको प्रतिवेदन रद्द गर्न सक्छ।',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
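
Since semantic search is a listed use case, here is a minimal retrieval sketch with the same model; the query and documents reuse the example sentences above, and everything else is illustrative:

query = 'राहदानी रद्द गर्न कस्तो सत्यताको घोषणा चाहिन्छ?'
corpus = [
    'राहदानी रद्द गर्न निवेदकले उल्लेखित विवरण साँचो भएको र प्रचलित कानून बमोजिम अपराध ठहरिने कुनै काम नगरेको सत्यताको घोषणा गर्नुपर्छ।',
    'नागरिकता टोलीले गलत तथ्य वा अपूर्ण जानकारी भएमा सर्जमिनको प्रतिवेदन रद्द गर्न सक्छ।',
]

query_embedding = model.encode([query])    # shape (1, 384)
corpus_embeddings = model.encode(corpus)   # shape (2, 384)

# Cosine similarity (the model's configured similarity function)
scores = model.similarity(query_embedding, corpus_embeddings)  # shape (1, 2)
best = int(scores[0].argmax())
print(corpus[best], float(scores[0][best]))

# Because training used MatryoshkaLoss (dims 384/256/128/64), shorter
# embeddings can be requested by reloading with truncate_dim, e.g.:
# model_128 = SentenceTransformer("ritesh-07/fine_tuned_model_03", truncate_dim=128)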

Evaluation

Metrics

The tables below report the same information-retrieval metrics at each Matryoshka embedding dimension (384, 256, 128, and 64); the dimension labels match the dim_* evaluators in the training logs.

Information Retrieval (dim_384)

| Metric               | Value  |
|:---------------------|-------:|
| cosine_accuracy@1    | 0.2891 |
| cosine_accuracy@3    | 0.5013 |
| cosine_accuracy@5    | 0.6154 |
| cosine_accuracy@10   | 0.7772 |
| cosine_precision@1   | 0.2891 |
| cosine_precision@3   | 0.1671 |
| cosine_precision@5   | 0.1231 |
| cosine_precision@10  | 0.0777 |
| cosine_recall@1      | 0.2891 |
| cosine_recall@3      | 0.5013 |
| cosine_recall@5      | 0.6154 |
| cosine_recall@10     | 0.7772 |
| cosine_ndcg@10       | 0.5114 |
| cosine_mrr@10        | 0.4288 |
| cosine_map@100       | 0.4379 |

Information Retrieval (dim_256)

| Metric               | Value  |
|:---------------------|-------:|
| cosine_accuracy@1    | 0.2971 |
| cosine_accuracy@3    | 0.5225 |
| cosine_accuracy@5    | 0.626  |
| cosine_accuracy@10   | 0.7772 |
| cosine_precision@1   | 0.2971 |
| cosine_precision@3   | 0.1742 |
| cosine_precision@5   | 0.1252 |
| cosine_precision@10  | 0.0777 |
| cosine_recall@1      | 0.2971 |
| cosine_recall@3      | 0.5225 |
| cosine_recall@5      | 0.626  |
| cosine_recall@10     | 0.7772 |
| cosine_ndcg@10       | 0.5196 |
| cosine_mrr@10        | 0.4391 |
| cosine_map@100       | 0.4483 |

Information Retrieval (dim_128)

| Metric               | Value  |
|:---------------------|-------:|
| cosine_accuracy@1    | 0.2891 |
| cosine_accuracy@3    | 0.504  |
| cosine_accuracy@5    | 0.6127 |
| cosine_accuracy@10   | 0.7772 |
| cosine_precision@1   | 0.2891 |
| cosine_precision@3   | 0.168  |
| cosine_precision@5   | 0.1225 |
| cosine_precision@10  | 0.0777 |
| cosine_recall@1      | 0.2891 |
| cosine_recall@3      | 0.504  |
| cosine_recall@5      | 0.6127 |
| cosine_recall@10     | 0.7772 |
| cosine_ndcg@10       | 0.5134 |
| cosine_mrr@10        | 0.4313 |
| cosine_map@100       | 0.4398 |

Information Retrieval (dim_64)

| Metric               | Value  |
|:---------------------|-------:|
| cosine_accuracy@1    | 0.2812 |
| cosine_accuracy@3    | 0.4934 |
| cosine_accuracy@5    | 0.6101 |
| cosine_accuracy@10   | 0.7639 |
| cosine_precision@1   | 0.2812 |
| cosine_precision@3   | 0.1645 |
| cosine_precision@5   | 0.122  |
| cosine_precision@10  | 0.0764 |
| cosine_recall@1      | 0.2812 |
| cosine_recall@3      | 0.4934 |
| cosine_recall@5      | 0.6101 |
| cosine_recall@10     | 0.7639 |
| cosine_ndcg@10       | 0.504  |
| cosine_mrr@10        | 0.423  |
| cosine_map@100       | 0.4317 |
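
These tables can be regenerated with sentence-transformers' InformationRetrievalEvaluator, one pass per Matryoshka dimension. A minimal sketch, where queries, corpus, and relevant_docs are placeholders to be built from a held-out split of anchor/positive pairs:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder eval data: query id -> text, doc id -> text,
# query id -> set of relevant doc ids
queries = {"q1": "राहदानी रद्द गर्न कस्तो सत्यताको घोषणा चाहिन्छ?"}
corpus = {"d1": "राहदानी रद्द गर्न निवेदकले उल्लेखित विवरण साँचो भएको र प्रचलित कानून बमोजिम अपराध ठहरिने कुनै काम नगरेको सत्यताको घोषणा गर्नुपर्छ।"}
relevant_docs = {"q1": {"d1"}}

for dim in (384, 256, 128, 64):
    # truncate_dim evaluates embeddings cut to the first `dim` components
    model = SentenceTransformer("ritesh-07/fine_tuned_model_03", truncate_dim=dim)
    evaluator = InformationRetrievalEvaluator(
        queries=queries, corpus=corpus, relevant_docs=relevant_docs,
        name=f"dim_{dim}",
    )
    print(evaluator(model))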

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 3,385 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 1000 samples:

    |         | anchor                                              | positive                                           |
    |:--------|:----------------------------------------------------|:----------------------------------------------------|
    | type    | string                                               | string                                               |
    | details | min: 18 tokens, mean: 49.31 tokens, max: 103 tokens  | min: 17 tokens, mean: 81.7 tokens, max: 256 tokens   |
  • Samples:

    | anchor | positive |
    |:-------|:---------|
    | राहदानी नियमावली, २०७७ मा दस्तुर बुझाउने प्रक्रिया कस्तो छ? | राहदानी नियमावली, २०७७ मा दस्तुर तोकिएको बैङ्कमा बुझाई रसिद निवेदनमा संलग्न गर्नुपर्छ। |
    | दफा ३ को उपदफा (६) मा विदेशी नागरिकसँग विवाह गरेकी नेपाली महिलाको सन्तानले कसरी नागरिकता प्राप्त गर्छ? | दफा ३ को उपदफा (६) मा विदेशी नागरिकसँग विवाह गरेकी नेपाली महिला नागरिकबाट नेपालमा जन्मिएको व्यक्तिले, यदि निजको आमा र बाबु दुवै नेपाली नागरिक रहेछन् भने, वंशजको आधारमा नेपालको नागरिकता प्राप्त गर्नेछ। |
    | दफा ३ को उपदफा (४) मा कस्तो व्यवस्था थपिएको छ? | दफा ३ को उपदफा (४) मा थपिएको व्यवस्था अनुसार, संवत् २०७२ साल असोज ३ गतेभन्दा अघि जन्मको आधारमा नेपालको नागरिकता प्राप्त गरेको नागरिकको सन्तानले, यदि बाबु र आमा दुवै नेपालको नागरिक रहेछन् भने, निजको उमेर सोह्र वर्ष पूरा भएपछि वंशजको आधारमा नेपालको नागरिकता प्राप्त गर्नेछ। |
  • Loss: MatryoshkaLoss with these parameters (a construction sketch follows below):
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            384,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
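
A minimal sketch of building this loss with sentence-transformers, mirroring the parameters above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("jangedoo/all-MiniLM-L6-v2-nepali")

# Inner loss: in-batch negatives over (anchor, positive) pairs
inner = MultipleNegativesRankingLoss(model)

# Wrapper also optimizes embeddings truncated to 256/128/64 dims, equal weights
loss = MatryoshkaLoss(model, inner,
                      matryoshka_dims=[384, 256, 128, 64],
                      matryoshka_weights=[1, 1, 1, 1])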
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: False
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates
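
As a sketch, the non-default values above map onto SentenceTransformerTrainingArguments roughly as follows; output_dir, the save_strategy assumption, and the dataset/loss/evaluator wiring are illustrative rather than taken from the card:

from sentence_transformers import (SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="fine_tuned_model_03",  # illustrative
    eval_strategy="epoch",
    save_strategy="epoch",             # assumed: load_best_model_at_end needs matching save/eval strategies
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

# train_dataset: a datasets.Dataset with "anchor" and "positive" columns;
# model, loss, and evaluator as in the sketches above (illustrative wiring)
trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=train_dataset,
                                     loss=loss, evaluator=evaluator)
trainer.train()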

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: False
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • hub_revision: None
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • liger_kernel_config: None
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

| Epoch      | Step   | Training Loss | dim_384_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-----------|:-------|:--------------|:-----------------------|:-----------------------|:-----------------------|:----------------------|
| 1.0        | 7      | -             | 0.4635                 | 0.4673                 | 0.4674                 | 0.4406                |
| 1.4528     | 10     | 2.6919        | -                      | -                      | -                      | -                     |
| 2.0        | 14     | -             | 0.4977                 | 0.5140                 | 0.4963                 | 0.4759                |
| 2.9057     | 20     | 1.0521        | -                      | -                      | -                      | -                     |
| 3.0        | 21     | -             | 0.5111                 | 0.5242                 | 0.5130                 | 0.5017                |
| **4.0**    | **28** | **-**         | **0.5114**             | **0.5196**             | **0.5134**             | **0.504**             |
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.13
  • Sentence Transformers: 4.1.0
  • Transformers: 4.54.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.9.0
  • Datasets: 4.0.0
  • Tokenizers: 0.21.2

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}