
All HF Hub posts

Nymbo
posted an update 1 day ago
We should really have a release date range slider on the /models page. Tired of "trending/most downloaded" being the best way to sort and still seeing models from 2023 on the first page just because they're embedded in enterprise pipelines and get downloaded repeatedly. "Recently Created/Recently Updated" don't solve the discovery problem considering the amount of noise to sift through.

Slight caveat: Trending actually does have some recency bias, but it's not strong/precise enough.
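The proposed release-date slider is easy to sketch client-side. A minimal pure-Python illustration (the model records and field layout here are made up for the example, not the Hub's actual API):

```python
from datetime import date

# Hypothetical model records: (model_id, created, downloads)
models = [
    ("org/old-workhorse", date(2023, 3, 1), 9_000_000),
    ("org/fresh-model", date(2025, 11, 2), 120_000),
    ("org/mid-model", date(2024, 6, 15), 800_000),
]

def by_downloads_in_range(records, start, end):
    """Sort by downloads, but only among models created inside [start, end]."""
    in_range = [r for r in records if start <= r[1] <= end]
    return sorted(in_range, key=lambda r: r[2], reverse=True)

recent = by_downloads_in_range(models, date(2024, 1, 1), date(2025, 12, 31))
print([m[0] for m in recent])  # ['org/mid-model', 'org/fresh-model']
```

The heavily downloaded 2023 model never reaches the result list, which is exactly the discovery behavior the slider would give.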
OzTianlu
posted an update 2 days ago
Arcade-3B – SmolReasoner
NoesisLab/Arcade-3B
Arcade-3B is a 3B instruction-following and reasoning model built on SmolLM3-3B. It is the public release from the ARCADE project at NoesisLab, which investigates the State–Constraint Orthogonality Hypothesis: standard Transformer hidden states conflate factual content and reasoning structure in the same subspace, and explicitly decoupling them improves generalization.
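The decoupling idea can be illustrated with a toy orthogonality penalty between two learned subspace projections. This is a generic sketch of the technique, not ARCADE's implementation; the matrix names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 4  # hidden size, subspace size

# Two projections: one for "factual content", one for "reasoning structure"
W_fact = rng.normal(size=(d, k))
W_reas = rng.normal(size=(d, k))

def orthogonality_penalty(A, B):
    """||A^T B||_F^2: zero iff the two column spaces are orthogonal."""
    return float(np.sum((A.T @ B) ** 2))

# Random projections overlap, so the penalty is positive...
print(orthogonality_penalty(W_fact, W_reas) > 0.0)  # True

# ...while explicitly orthogonal bases drive it to ~0
Q, _ = np.linalg.qr(rng.normal(size=(d, 2 * k)))
print(round(orthogonality_penalty(Q[:, :k], Q[:, k:]), 6))  # 0.0
```

Adding such a penalty to the training loss is one straightforward way to push the two kinds of information into separate subspaces.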
prithivMLmods
posted an update 3 days ago
QIE-2509-Object-Remover-Bbox-v3 is a more stable version of the Qwen Image Edit visual grounding-based object removal model. The app was previously featured in HF Spaces of the Week and is now updated with the latest Bbox-v3 LoRA adapter.

🤗 Demo: prithivMLmods/QIE-Object-Remover-Bbox
🤗 LoRA: prithivMLmods/QIE-2509-Object-Remover-Bbox-v3
๐Ÿค— Collection: https://huggingface.co/collections/prithivMLmods/qwen-image-edit-layout-bbox

To learn more, visit the app page or the respective model pages.
unmodeled-tyler
posted an update 1 day ago
LINK: https://github.com/unmodeled-tyler/vessel-browser

Hey Hugging Face!

It's been quiet from me over here for the last few weeks, but I've been busy building! I just submitted my project to the Hermes Agent Hackathon, and wanted to share it with all of you.

This is Vessel Browser - an AI-native web browser that runs locally on Linux and is operated by your personal AI agent via an MCP server. Vessel is built from the ground up around the agent as a first-class citizen, with a visible human-in-the-loop UI and three levels of permissions.

Your agent finds, reads, and organizes the web for you, based on what you actually care about - not what a platform's algorithm thinks you care about.

Once your agent finds what it's looking for, it can organize bookmarked pages into custom folders with summaries for later browsing, take screenshots with highlighted text, and integrate with Obsidian for long-term, browsing-related memory.
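A tiered permission model like Vessel's can be sketched as a simple gate; the level names below are hypothetical (the post doesn't name the three levels), so treat this as the general pattern rather than Vessel's code:

```python
from enum import IntEnum

class Permission(IntEnum):
    # Hypothetical names; Vessel's actual three levels may differ
    OBSERVE = 1   # agent may read pages only
    SUGGEST = 2   # agent may propose actions, human confirms
    ACT = 3       # agent may click and navigate autonomously

def gate(action_level: Permission, granted: Permission) -> bool:
    """Human-in-the-loop check: allow only actions at or below the granted level."""
    return action_level <= granted

print(gate(Permission.ACT, Permission.SUGGEST))      # False: needs human confirmation
print(gate(Permission.OBSERVE, Permission.SUGGEST))  # True
```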

Check it out!
BibbyResearch
posted an update 3 days ago
We are working on the largest dataset and pre-trained models for text-to-speech and speech-to-text for Marwari, a low-resource language of India.
danielhanchen
posted an update 3 days ago
We collaborated with NVIDIA to teach you about Reinforcement Learning and RL environments. 💚 Learn:

• Why RL environments matter + how to build them
• When RL is better than SFT
• GRPO and RL best practices
• How verifiable rewards and RLVR work

Blog: https://unsloth.ai/blog/rl-environments
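The "verifiable rewards" idea in the last bullet is easy to sketch: a reward function that checks an answer programmatically instead of scoring it with a learned reward model. A generic RLVR-style example (not code from the blog):

```python
import re

def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Return 1.0 if the last number in the completion matches the ground truth.

    Deterministic and cheap to compute, which is what makes the reward
    'verifiable' and hard for the policy to reward-hack.
    """
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion)
    if not numbers:
        return 0.0
    return 1.0 if numbers[-1] == ground_truth else 0.0

print(verifiable_reward("The answer is 42.", "42"))  # 1.0
print(verifiable_reward("Probably 41?", "42"))       # 0.0
```

In a GRPO setup, rewards like this are computed for a group of sampled completions per prompt and the policy is updated toward the above-average ones.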
robtacconelli
posted an update about 14 hours ago
🧬 Midicoth: diffusion-based lossless compression – no neural net, no GPU, no training data

What if reverse diffusion could compress text – without a neural network?
Midicoth brings score-based denoising into classical compression. It treats prior smoothing as forward noise and reverses it with Tweedie's formula on a binary tree: 3 denoising steps plus James-Stein shrinkage, applied after all model blending. ~2,000 lines of C, single CPU core.

Beats every dictionary compressor we tested:
enwik8 (100 MB) → 1.753 bpb (−11.9% vs xz, −15% vs Brotli, −24.5% vs bzip2)
alice29.txt → 2.119 bpb (−16.9% vs xz)
Outperforms xz, zstd, Brotli, bzip2, gzip on all inputs

PAQ/CMIX still win with hundreds of models + LSTMs. LLM compressors win with pre-trained knowledge. Midicoth closes the gap with pure statistics – no mixer, no gradient descent, just counting.
The Tweedie denoising layer adds 2.3–2.7% on every file tested – the most consistent component in the ablation. Adding SSE or logistic mixers made things worse. In the online setting, count-based beats gradient-based.
No external dependencies. Fully deterministic. Bit-exact encode/decode. ~60 KB/s throughput.
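Not Midicoth's actual estimator (that lives in the linked C code), but the "just counting, plus shrinkage" idea can be sketched in a few lines; the `strength` parameter and prior here are illustrative choices:

```python
def shrunk_probability(n1: int, n0: int, prior: float = 0.5, strength: float = 4.0) -> float:
    """Count-based estimate of P(bit = 1), shrunk toward a prior.

    A James-Stein-style shrinkage: with few observations the estimate stays
    near the prior; as counts grow, the raw frequency dominates. Illustrative
    only; Midicoth's exact estimator differs.
    """
    n = n0 + n1
    raw = n1 / n if n else prior
    w = n / (n + strength)          # shrinkage weight grows with evidence
    return w * raw + (1 - w) * prior

print(round(shrunk_probability(1, 0), 3))    # 0.6  (one observation: pulled toward 0.5)
print(round(shrunk_probability(90, 10), 3))  # 0.885 (strong evidence: near the raw 0.9)
```

No gradients, no mixer state: the estimate is a deterministic function of the counts, which is what keeps encode and decode bit-exact.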
💻 Code: https://github.com/robtacconelli/midicoth
📄 Paper: Micro-Diffusion Compression -- Binary Tree Tweedie Denoising for Online Probability Estimation (2603.08771)
⭐ Space: robtacconelli/midicoth

If you ever wondered whether diffusion ideas belong in data compression – here's proof they do. ⭐ appreciated!
kanaria007
posted an update about 21 hours ago
✅ Article highlight: *Federated SI* (art-60-044, v0.1)

TL;DR:
Most real systems do not live inside a single SI-Core. Cities, hospital networks, grid operators, transit systems, vendors, and neighboring institutions all run under different governance, trust, and legal boundaries.

This note sketches *Federated SI*: how multiple SI-Cores coordinate without pretending to share one brain. The focus is on portable artifacts, explicit trust boundaries, negotiated goals, limited memory exchange, and graceful failure when cooperation partially breaks.

Read:
kanaria007/agi-structural-intelligence-protocols

Why it matters:
• makes cross-operator coordination explicit instead of hiding it inside ad hoc APIs
• supports cooperation under separate trust anchors, legal regimes, and policy surfaces
• treats failure modes seriously: partitions, vetoes, degraded cooperation, partial visibility
• keeps governance portable via normalized verdicts, pinned bindings, and export-safe artifacts

What's inside:
• why "one SI-Core sees everything" is the wrong default
• federation objects such as federated SIRs, goal surfaces, memory views, and consent records
• negotiation across cities, hospitals, utilities, and other institutional stacks
• operational labels vs exported governance verdicts (ACCEPT / DEGRADE / REJECT)
• deterministic, auditable exchange rules for cross-run / cross-vendor comparison
• failover, mutual aid, and graceful degradation when trust or connectivity breaks

Key idea:
Intelligence at institution scale is not a single runtime. It is a *federation of governed runtimes* that must negotiate, coordinate, and fail safely without collapsing auditability.
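One way to picture a normalized, export-safe verdict exchange is a record type plus a deterministic combination rule. The field names and the "most conservative verdict wins" rule below are illustrative assumptions, not the article's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExportedVerdict:
    # Toy "export-safe" record; field names are hypothetical
    source_core: str        # which SI-Core issued the verdict
    binding: str            # pinned policy/version it was issued under
    verdict: str            # normalized: ACCEPT / DEGRADE / REJECT

VALID = {"ACCEPT", "DEGRADE", "REJECT"}

def exchange(verdicts):
    """Deterministic cross-core combination: the most conservative verdict wins."""
    order = {"REJECT": 0, "DEGRADE": 1, "ACCEPT": 2}
    assert all(v.verdict in VALID for v in verdicts)
    return min(verdicts, key=lambda v: order[v.verdict]).verdict

vs = [ExportedVerdict("city-core", "policy@v3", "ACCEPT"),
      ExportedVerdict("hospital-core", "policy@v7", "DEGRADE")]
print(exchange(vs))  # DEGRADE
```

Because the rule is a pure function of the exported records, two operators replaying the same exchange get the same result, which is what keeps the coordination auditable.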
W8Yi
posted an update 1 day ago
I built a **TCGA WSI feature dataset using UNI2-h**.

The official release currently has incomplete coverage (see discussion):
MahmoodLab/UNI2-h-features#2

To make the features easier to use for research, I generated a new dataset:

W8Yi/tcga-wsi-uni2h-features

Key differences from the official release:

• **All detected tissue tiles are encoded** (not a sampled subset)
• **Features can be downloaded per slide** instead of large ZIP archives
• **QC overlay images** are provided for visual inspection
• **UNI2-h 1536-D tile embeddings** stored in H5 format
• Organized by TCGA project for easier use in MIL / retrieval pipelines

Example layout:

```
TCGA-HNSC/
  features/*.h5
  vis/*__overlay.png
```
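Reading a per-slide file might look like the sketch below. The dataset key names (`features`, `coords`) are assumptions, so inspect one real file with `h5py` first; to keep the example self-contained it writes a toy file and reads it back:

```python
import h5py
import numpy as np

# Write a toy per-slide feature file in the described layout.
# Key names are assumptions; check the real files for the actual names.
with h5py.File("slide.h5", "w") as f:
    f.create_dataset("features", data=np.random.rand(32, 1536).astype("float32"))
    f.create_dataset("coords", data=np.random.randint(0, 10_000, size=(32, 2)))

# Read it back, e.g. as a bag of tile embeddings for a MIL pipeline.
with h5py.File("slide.h5", "r") as f:
    feats = f["features"][:]   # (n_tiles, 1536) UNI2-h embeddings
    coords = f["coords"][:]    # tile positions on the slide

print(feats.shape)  # (32, 1536)
```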


Hope this helps others working on computational pathology and TCGA WSI research.
karstenskyt
posted an update about 5 hours ago
🚀 **We just opened the doors to luxury-lakehouse!** Our full 12-page open-source soccer analytics platform is now live and publicly accessible on Hugging Face Spaces.

We successfully migrated our entire backend Streamlit dashboard to a Docker Space. Instead of being locked behind workspace accounts, anyone can now directly explore our models, 38M+ tracking frames, and advanced metrics (xT, PAUSA, DEFCON) in the browser.

To prepare for the public release, we ran a massive optimization sprint (what used to be days of normal engineering, now condensed into a few hours of "Claude Time"):

⚡ **Performance:** Eliminated O(n²) bottlenecks, dropping ingestion pipeline runs from 36 minutes to <30 seconds.

🧠 **UX:** Applied a deep Cognitive Interface Audit across all surfaces to ensure the data is intuitive and human-readable.
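The usual shape of an O(n²) ingestion fix is a nested-loop lookup replaced by a hash index built once up front. A generic illustration (not the project's actual code; the frame/event join is invented for the example):

```python
# Quadratic: for every tracking frame, scan all events for a matching timestamp.
def join_quadratic(frames, events):
    return [(f, e) for f in frames for e in events if e["t"] == f["t"]]

# Linear: index events by timestamp once, then probe in O(1) per frame.
def join_indexed(frames, events):
    by_t = {}
    for e in events:
        by_t.setdefault(e["t"], []).append(e)
    return [(f, e) for f in frames for e in by_t.get(f["t"], [])]

frames = [{"t": i} for i in range(1000)]
events = [{"t": i * 2} for i in range(1000)]

# Identical output, but n + n dict operations instead of n * n comparisons.
assert join_quadratic(frames, events) == join_indexed(frames, events)
print(len(join_indexed(frames, events)))  # 500
```

At tens of millions of frames, that asymptotic change alone accounts for minutes-to-seconds speedups like the one described.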

The Hugging Face ecosystem now natively handles our entire public-facing presentation layer: Models, Datasets, Gradio demos, and the full Streamlit app.

๐—ง๐—ฟ๐˜† ๐˜๐—ต๐—ฒ ๐—น๐—ถ๐˜ƒ๐—ฒ ๐—ฎ๐—ฝ๐—ฝ: luxury-lakehouse/soccer-analytics-app

๐—˜๐˜…๐—ฝ๐—น๐—ผ๐—ฟ๐—ฒ ๐˜๐—ต๐—ฒ ๐—ฎ๐—ฟ๐—ฐ๐—ต๐—ถ๐˜๐—ฒ๐—ฐ๐˜๐˜‚๐—ฟ๐—ฒ:
luxury-lakehouse