
Deployment Scripts for Medguide (Built with Gradio)

This document provides instructions for deploying the Medguide model for inference using Gradio.

  1. Set up the Conda environment: Follow the instructions in the PKU-Alignment/align-anything repository to configure your Conda environment.

  2. Configure the model path: After setting up the environment, update the MODEL_PATH variable in deploy_medguide_v.sh to point to your local Medguide model directory.

  3. Verify inference script parameters: Check the following parameter in multimodal_inference.py:

    # NOTE: Replace with your own model path if not loaded via the API base
    model = ''
    

    These scripts follow an OpenAI-compatible server approach: deploy_medguide_v.sh launches the Medguide model locally and exposes it on port 8231, so clients can reach it through the specified API base URL.
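To illustrate the approach above, here is a minimal sketch of how a client could build a request against the local OpenAI-compatible endpoint. The API base URL (port 8231) comes from deploy_medguide_v.sh; the endpoint path, payload shape, and empty model string mirror the standard OpenAI chat-completions convention and the `model = ''` parameter above, and this is an illustrative assumption, not code from multimodal_inference.py.

```python
# Sketch: building a chat-completions request for the locally deployed
# Medguide server. API_BASE and MODEL are assumptions -- adjust them to
# match deploy_medguide_v.sh and your local model path.
import json
import urllib.request

API_BASE = "http://localhost:8231/v1"  # port exposed by deploy_medguide_v.sh
MODEL = ""  # set to your own model path if not loaded via the API base


def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request for the local server."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,  # matches the streamed-output mode used below
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    req = build_chat_request("What are common symptoms of anemia?")
    # urllib.request.urlopen(req) would stream the response once the
    # server launched by deploy_medguide_v.sh is running.
```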

  4. Running Inference:

    • Streamed Output:
      bash deploy_medguide_v.sh
      python multimodal_inference.py
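As a companion to the streamed-output mode, the sketch below shows one way to decode the chunked stream an OpenAI-compatible server typically emits. The `data: {...}` server-sent-events line format and the `[DONE]` terminator are assumptions about the server's wire format, not code taken from multimodal_inference.py.

```python
# Sketch: decoding streamed (SSE-style) chunks from an OpenAI-compatible
# server. Each "data:" line carries a JSON chunk whose delta holds the
# next piece of generated text; "data: [DONE]" ends the stream.
import json


def extract_stream_text(raw_lines):
    """Yield text deltas from 'data: {...}' SSE lines, stopping at [DONE]."""
    for line in raw_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separator lines
        body = line[len("data:"):].strip()
        if body == "[DONE]":
            break
        chunk = json.loads(body)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta


if __name__ == "__main__":
    sample = [
        'data: {"choices": [{"delta": {"content": "Hel"}}]}',
        'data: {"choices": [{"delta": {"content": "lo"}}]}',
        "data: [DONE]",
    ]
    print("".join(extract_stream_text(sample)))  # prints "Hello"
```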
      