How to use depth-anything/Depth-Anything-V2-Small with DepthAnythingV2:
```python
# Install from https://github.com/DepthAnything/Depth-Anything-V2
# Load the model and infer depth from an image
import cv2
import torch
from huggingface_hub import hf_hub_download
from depth_anything_v2.dpt import DepthAnythingV2

# instantiate the model
model = DepthAnythingV2(encoder="vits", features=64, out_channels=[48, 96, 192, 384])

# download and load the weights
filepath = hf_hub_download(repo_id="depth-anything/Depth-Anything-V2-Small", filename="depth_anything_v2_vits.pth", repo_type="model")
state_dict = torch.load(filepath, map_location="cpu")
model.load_state_dict(state_dict)
model.eval()

raw_img = cv2.imread("your/image/path")
depth = model.infer_image(raw_img)  # HxW raw depth map in numpy
```
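The `depth` array returned by `infer_image` is a raw float map, so it usually needs to be normalized before it can be saved or displayed as an image. A minimal sketch, using a random NumPy array as a stand-in for the model output (replace it with the `depth` from the snippet above):

```python
import numpy as np

# Stand-in for the model output: an HxW float32 depth map.
# In practice, use `depth = model.infer_image(raw_img)` instead.
depth = np.random.rand(480, 640).astype(np.float32) * 20.0

# Min-max normalize to 0-255 and cast to uint8 for saving/display
# (e.g. with cv2.imwrite or matplotlib).
depth_min, depth_max = depth.min(), depth.max()
depth_u8 = ((depth - depth_min) / (depth_max - depth_min) * 255.0).astype(np.uint8)

print(depth_u8.shape, depth_u8.dtype, depth_u8.min(), depth_u8.max())
```

Note the values are relative depth, not metric distances, so this normalization is for visualization only.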
Corresponding PR: https://github.com/huggingface/huggingface.js/pull/785