kmhf

community
Activity Feed

AI & ML interests

None defined yet.

jbilcke-hf
posted an update 1 day ago
I made a code sniping agent to detect when new AI papers with code (and weights) are released, and then automatically create a Gradio demo on Hugging Face 🧙

Here are some examples generated 100% automatically:
https://huggingface.co/collections/jbilcke-hf/sniped

I call this agent CheatCode (https://github.com/jbilcke-hf/CheatCode) because it skips so many steps that it kinda feels like breaking the rules of the AI tech release game 😅

As with any experimental technology, there is still room for improvement 👩🏻‍🔬:

- Currently the demos are all generated in one go and not built or tested by the agent itself. A more robust version should loop over the deployed app to fix build/runtime issues.
- There is still a bit of human curation done to avoid making demos for things that can't really be demonstrated on ZeroGPU (e.g. tasks taking several minutes)
- Some papers can actually be showcased in a variety of ways, which isn’t really supported (see Demo 2)


lysandre
posted an update about 1 month ago
We're kick-starting the process of Transformers v5 with @ArthurZ and @cyrilvallez!

v5 should be significant: we're using it as a milestone for performance optimizations, saner defaults, and a much cleaner code base worthy of 2025.

Fun fact: v4.0.0-rc-1 came out on Nov 19, 2020, nearly five years ago!
jbilcke-hf
posted an update about 2 months ago
Did you know you can use AI-Toolkit by Ostris (https://github.com/ostris/ai-toolkit) to train AI image and video models directly inside a Hugging Face space?

The benefit of doing so is that you get a nice UI, so you don't have to deal with JSON files and CLI shenanigans!

I have created a ready-to-use template you can deploy to your own HF Space to train generative models in a few clicks.

All you have to do is duplicate my space into your own private space by going here: jbilcke-hf/ai-toolkit

This space requires a good GPU and, most importantly, persistent storage, as everything is stored in /data.

Multi-GPU training isn't supported in the UI yet, but it is planned (https://discord.com/channels/1144033039786188891/1166417204015808603/1404851082361835623)

P.S.: Don't forget to set your space to private to make sure only you can access, delete and run training jobs!
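
If you'd rather kick off the duplication from code than from the website, here is a minimal sketch using huggingface_hub's duplicate_space. The hardware and storage values are illustrative assumptions, not requirements of the template; pick whatever your training runs actually need:

```python
from huggingface_hub import duplicate_space

# Duplicate the template into your own private Space.
# Hardware/storage values below are illustrative guesses, not hard requirements.
repo = duplicate_space(
    "jbilcke-hf/ai-toolkit",
    private=True,           # keep your training jobs to yourself
    hardware="a10g-large",  # AI-Toolkit wants a decent GPU
    storage="large",        # persistent storage, since everything lives in /data
)
print(repo)  # URL of your new private Space
```
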
jbilcke-hf
posted an update 4 months ago
Are you looking to run a robot simulator, or maybe long robot policy training tasks, but you don't have a GPU at home?

Well.. you can run MuJoCo inside a Hugging Face space!

All you have to do is clone this space:
jbilcke-hf/train-robots-with-mujoco

Don't forget to pick an Nvidia GPU for your space, to be able to get some nice OpenGL renders!

Are you new to MuJoCo and/or JupyterLab notebooks?

You can get started with this tutorial (select "Open from URL" then paste the URL to this notebook):
jbilcke-hf/train-robots-with-mujoco

Happy robot hacking! 🦾
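
If you're wondering what headless MuJoCo rendering looks like in practice, here is a minimal sketch using the official mujoco Python bindings. The scene XML is just an illustrative box falling onto a plane, and MUJOCO_GL=egl is one common way to get offscreen OpenGL on a headless GPU machine:

```python
import os
os.environ.setdefault("MUJOCO_GL", "egl")  # offscreen OpenGL rendering without a display

import mujoco

# A tiny illustrative scene: a box free-falling onto a plane.
XML = """
<mujoco>
  <worldbody>
    <light pos="0 0 3"/>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 0.5">
      <joint type="free"/>
      <geom type="box" size="0.1 0.1 0.1" rgba="0.8 0.2 0.2 1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)
renderer = mujoco.Renderer(model, height=240, width=320)

for _ in range(200):          # short rollout
    mujoco.mj_step(model, data)

renderer.update_scene(data)
frame = renderer.render()     # (240, 320, 3) uint8 array you can display or save
print(frame.shape)
```
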
reach-vb
posted an update 5 months ago
Excited to onboard FeatherlessAI on Hugging Face as an Inference Provider - they bring a fleet of 6,700+ LLMs on-demand on the Hugging Face Hub 🤯

Starting today, you can access all those LLMs (OpenAI compatible) on HF model pages and via OpenAI client libraries too! 💥

Go, play with it today: https://huggingface.co/blog/inference-providers-featherless

P.S. They're also bringing on more GPUs to support all your concurrent requests!
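
For the client-library route, here is a minimal sketch using huggingface_hub's InferenceClient with the featherless-ai provider. It assumes a valid token in the HF_TOKEN environment variable, and the model id is just an example of a model the provider might serve:

```python
import os
from huggingface_hub import InferenceClient

# Route the request through the Featherless AI provider (token read from HF_TOKEN).
client = InferenceClient(provider="featherless-ai", api_key=os.environ["HF_TOKEN"])

response = client.chat_completion(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example model id
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```
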
freddyaboulton
posted an update 5 months ago
Time is running out! ⏰

Less than 24 hours to participate in the MCP Hackathon and win thousands of dollars in prizes! Don't miss this opportunity to showcase your skills.

Visit Agents-MCP-Hackathon/AI-Marketing-Content-Creator to register!

freddyaboulton
posted an update 5 months ago
🚨 NotebookLM Dethroned?! 🚨

Meet Fluxions vui: The new open-source dialogue generation model.
🤯 100M Params, 40k hours audio!
🎙️ Multi-speaker audio
😂 Non-speech sounds (like [laughs]!)
📜 MIT License

Is this the future of content creation? Watch the video and decide for yourself!

https://huggingface.co/spaces/fluxions/vui-space
https://huggingface.co/fluxions/vui
jbilcke-hf
posted an update 5 months ago
Hi everyone,

I've seen some unsuccessful attempts at running Wan2GP inside a Hugging Face Space, which is a shame as it is a great Gradio app!

So here is a fork that you can use, with some instructions on how to do this:

jbilcke-hf/Wan2GP_you_must_clone_this_space_to_use_it#1

Note: some things like persistent models/storage/custom LoRAs might not be fully working out of the box. If you need those, you might have to dig into the Wan2GP codebase and see how to tweak the storage folder. Happy hacking!

reach-vb
posted an update 5 months ago
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! 💥

as you know, we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. Now that we are certain the backend can scale even with big models like Llama 4/Qwen 3, we're moving to the next phase of inviting impactful orgs and users on the Hub over. As a big part of the open source ML community, we would love to onboard you next and create some excitement about it in the community too!

in terms of actual steps, it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.

p.s. you'd need the latest hf_xet version of the huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage

p.p.s. this is fully backwards compatible so everything will work as it should! 🤗
julien-c
posted an update 6 months ago
BOOOOM: Today I'm dropping TINY AGENTS

the 50 lines of code Agent in JavaScript 🔥

I spent the last few weeks working on this, so I hope you will like it.

I've been diving into MCP (Model Context Protocol) to understand what the hype was all about.

It is fairly simple, but still quite powerful: MCP is a standard API to expose sets of Tools that can be hooked to LLMs.

But while doing that came my second realization:

Once you have an MCP Client, an Agent is literally just a while loop on top of it. 🤯

➡️ read it exclusively on the official HF blog: https://huggingface.co/blog/tiny-agents
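
The actual Tiny Agents code is the JavaScript on the blog post above; purely as an illustration of the "while loop on top of an MCP client" idea, here is a self-contained Python sketch with a stubbed MCP client and a placeholder LLM call (everything named here is hypothetical):

```python
class StubMCPClient:
    """Stand-in for a real MCP client: exposes tools and executes tool calls."""
    def list_tools(self):
        return [{"name": "add", "description": "Add two numbers"}]

    def call_tool(self, name, args):
        if name == "add":
            return args["a"] + args["b"]
        raise ValueError(f"unknown tool: {name}")


def call_llm(messages, tools):
    """Placeholder for a chat-completion call that supports tool calling."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "add", "args": {"a": 2, "b": 3}}}
    return {"content": f"The answer is {messages[-1]['content']}"}


def run_agent(client, prompt):
    messages = [{"role": "user", "content": prompt}]
    tools = client.list_tools()
    while True:  # the agent really is just this loop
        reply = call_llm(messages, tools)
        if "tool_call" in reply:
            call = reply["tool_call"]
            result = client.call_tool(call["name"], call["args"])
            messages.append({"role": "tool", "content": str(result)})
        else:
            return reply["content"]


print(run_agent(StubMCPClient(), "What is 2 + 3?"))
```
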
freddyaboulton
posted an update 7 months ago
Ever wanted to share your AI creations with friends? ✨

Screenshots are fine, but imagine letting others play with your ACTUAL model!

Introducing Gradio deep links 🔗 - now you can share interactive AI apps, not just images.

Add a gr.DeepLinkButton to any app and get shareable URLs that let ANYONE experiment with your models.
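
Here is a minimal sketch of what that looks like in a Blocks app, assuming a Gradio version that ships gr.DeepLinkButton:

```python
import gradio as gr

def greet(name):
    return f"Hello, {name}!"

with gr.Blocks() as demo:
    name = gr.Textbox(label="Name")
    greeting = gr.Textbox(label="Greeting")
    name.submit(greet, name, greeting)
    gr.DeepLinkButton()  # gives visitors a URL that reopens the app in its current state

demo.launch()
```
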

freddyaboulton
posted an update 8 months ago
Privacy matters when talking to AI! 🔇

We've just added a microphone mute button to FastRTC in our latest update (v0.0.14). Now you control exactly what your LLM hears.

Plus lots more features in this release! Check them out:
https://github.com/freddyaboulton/fastrtc/releases/tag/0.0.14
julien-c
posted an update 8 months ago
Important notice 🚨

For Inference Providers who have built support for our Billing API (currently: Fal, Novita, HF-Inference – with more coming soon), we've started enabling Pay as you go (=PAYG)

What this means is that you can use those Inference Providers beyond the free included credits, and they're charged to your HF account.

You can see it on this view: any provider that does not have a "Billing disabled" badge is PAYG-compatible.
freddyaboulton
posted an update 8 months ago
Getting WebRTC and WebSockets right in Python is very tricky. If you've tried to wrap an LLM in a real-time audio layer, then you know what I'm talking about.

That's where FastRTC comes in! It makes WebRTC and WebSocket streams super easy, with minimal code and overhead.

Check out our org: hf.co/fastrtc
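
For a feel of how little code that takes, here is a minimal echo-stream sketch based on FastRTC's Stream and ReplyOnPause handlers (the handler just sends the user's audio straight back):

```python
import numpy as np
from fastrtc import Stream, ReplyOnPause

def echo(audio: tuple[int, np.ndarray]):
    # audio is (sample_rate, samples); yield it back unchanged
    yield audio

stream = Stream(handler=ReplyOnPause(echo), modality="audio", mode="send-receive")

if __name__ == "__main__":
    stream.ui.launch()  # built-in Gradio UI (the mic mute button ships with v0.0.14+)
```
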
lysandre
posted an update 8 months ago
SmolVLM-2 and SigLIP-2 are now part of transformers in dedicated releases!

They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.

This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).

Starting with SmolVLM-2 & SigLIP2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available for use by installing from the tag itself. These tags will continue to be updated with fixes applied to these models.

Going forward, continue expecting software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements and bug fixes. Accompanying these software releases, we'll release tags offering brand new models as fast as possible, to make them accessible to all immediately.