---
title: SAM3 Detection Browser
emoji: 🔍
colorFrom: blue
colorTo: green
sdk: static
pinned: false
models:
- facebook/sam3
---
# 🔍 SAM3 Object Detection Browser
A simple, interactive web application for browsing and exploring object detection results from Meta's [SAM3 (Segment Anything Model 3)](https://huggingface.co/facebook/sam3).
## Features
- **Visual Detection Display**: View images with bounding boxes overlaid on detected objects
- **Confidence Filtering**: Adjust confidence threshold with a slider to filter detections
- **Statistics Dashboard**: See overall detection statistics including:
- Total filtered images
- Images with detections
- Total detections
- Average detections per image
- **Flexible Navigation**: Browse through datasets with pagination
- **Real-time Updates**: Filters update visualization instantly
- **Color-coded Confidence**: Bounding boxes and scores are color-coded (see the sketch after this list):
  - 🟢 Green: High confidence (≥70%)
  - 🟠 Orange: Medium confidence (40-70%)
  - 🔴 Red: Low confidence (<40%)
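To make the filtering and color-coding concrete, here is a minimal Python sketch of the same logic (the Space itself implements this in JavaScript; the function names are illustrative, and the thresholds mirror the list above):

```python
# Minimal sketch of the confidence filter and color rule used by the browser
# (illustrative only; the Space implements this in JavaScript).

def filter_detections(objects: dict, min_score: float) -> dict:
    """Keep only detections whose score is at least `min_score`."""
    keep = [i for i, s in enumerate(objects["score"]) if s >= min_score]
    return {
        "bbox": [objects["bbox"][i] for i in keep],
        "category": [objects["category"][i] for i in keep],
        "score": [objects["score"][i] for i in keep],
    }

def confidence_color(score: float) -> str:
    """Map a score to the overlay color described above."""
    if score >= 0.7:
        return "green"
    if score >= 0.4:
        return "orange"
    return "red"

example = {"bbox": [[10, 20, 100, 50]], "category": [0], "score": [0.82]}
print(filter_detections(example, 0.5), confidence_color(0.82))
```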
## How to Use
1. **Enter a Dataset ID**: Input the ID of any HuggingFace dataset that contains object detection results from SAM3
- Default: `davanstrien/newspapers-image-predictions`
- Format: `username/dataset-name`
2. **Select Split**: Choose which dataset split to view (train/validation/test); the sketch after these steps shows how to list a dataset's available splits
3. **Adjust Filters**:
- Move the confidence slider to filter detections by minimum confidence score
- Check "Show only images with detections" to hide images without any detected objects
4. **Browse**: Navigate through pages using Previous/Next buttons
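If you are unsure which splits a dataset exposes, a quick check with the `datasets` library (assuming it is installed) looks like this:

```python
# List the splits a dataset exposes before entering it in the browser.
# The dataset ID below is the browser's default; swap in your own.
from datasets import get_dataset_split_names

splits = get_dataset_split_names("davanstrien/newspapers-image-predictions")
print(splits)  # list of split names, e.g. ["train"]
```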
## Dataset Requirements
This browser works with any HuggingFace dataset that has:
- An `image` column containing images
- An `objects` column with the structure:
```python
{
"bbox": [[x, y, width, height], ...], # Bounding box coordinates
"category": [0, 0, ...], # Category indices
"score": [0.8, 0.6, ...] # Confidence scores
}
```
This matches the output format from the [detect-objects.py](https://huggingface.co/datasets/uv-scripts/sam3/blob/main/detect-objects.py) script.
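As a rough compatibility check before pointing the browser at a dataset, something like the following verifies the columns and the `objects` structure (the dataset ID is just the default example; swap in your own):

```python
# Rough schema check for the `image` and `objects` columns described above
# (requires the `datasets` library; dataset ID is an example).
from datasets import load_dataset

ds = load_dataset("davanstrien/newspapers-image-predictions", split="train")
row = ds[0]

assert "image" in ds.column_names and "objects" in ds.column_names
objects = row["objects"]
assert set(objects) >= {"bbox", "category", "score"}
assert len(objects["bbox"]) == len(objects["category"]) == len(objects["score"])
print(f"{len(objects['score'])} detections in the first row")
```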
## Example Datasets
- [davanstrien/newspapers-image-predictions](https://huggingface.co/datasets/davanstrien/newspapers-image-predictions) - Photograph detections in historical newspapers
## Creating Your Own Detection Dataset
Use the SAM3 detection script to create your own datasets:
```bash
# Detect objects in your images
uv run https://huggingface.co/datasets/uv-scripts/sam3/raw/main/detect-objects.py \
your-dataset \
your-output-dataset \
--class-name "photograph" \
--confidence-threshold 0.5
```
Then view the results by entering `your-output-dataset` in this browser!
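For a quick local spot check before opening the browser, a small sketch along these lines draws the first image's boxes with Pillow (assuming `datasets` and `Pillow` are installed; `your-output-dataset` is the placeholder from the command above):

```python
# Optional local preview: draw the first image's bounding boxes with Pillow.
# Replace "your-output-dataset" with the dataset you pushed above.
from datasets import load_dataset
from PIL import ImageDraw

ds = load_dataset("your-output-dataset", split="train")
row = ds[0]
image = row["image"].convert("RGB")
draw = ImageDraw.Draw(image)

# bbox format is [x, y, width, height], as described in Dataset Requirements.
for (x, y, w, h), score in zip(row["objects"]["bbox"], row["objects"]["score"]):
    color = "green" if score >= 0.7 else "orange" if score >= 0.4 else "red"
    draw.rectangle([x, y, x + w, y + h], outline=color, width=3)

image.save("preview.png")
```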
## Technical Details
- **Technology**: Pure HTML/CSS/JavaScript (no backend required)
- **Data Source**: HuggingFace Datasets API (parquet endpoint; see the sketch below)
- **Rendering**: HTML5 Canvas for bounding box overlays
- **Performance**: Loads 12 images per page for optimal performance
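For reference, the page-loading flow can be approximated in a few lines of Python. This is only a sketch: the `default` config name and the exact API path are assumptions, and it needs `requests`, `pandas`, and `pyarrow`:

```python
# Sketch of the page-loading flow: list the auto-converted parquet files for a
# dataset via the Hub API, read the first file, and take one 12-row "page".
# (Config name "default" and the URL path are assumptions.)
import io

import pandas as pd
import requests

dataset = "davanstrien/newspapers-image-predictions"
files = requests.get(
    f"https://huggingface.co/api/datasets/{dataset}/parquet/default/train"
).json()  # list of parquet file URLs

df = pd.read_parquet(io.BytesIO(requests.get(files[0]).content))
page = df.iloc[0:12]  # the browser shows 12 images per page
print(page.columns.tolist(), len(page))
```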
## Related Projects
- [SAM3 Detection Script](https://huggingface.co/datasets/uv-scripts/sam3) - Create detection datasets
- [SAM3 Model](https://huggingface.co/facebook/sam3) - The base model
- [UV Scripts Organization](https://huggingface.co/uv-scripts) - More ready-to-run ML scripts
## License
MIT License - Feel free to use and modify for your own projects!