Hugging Face
Search, inspect, and run Hugging Face models and datasets from an agent.
Install Hugging Face — pick your client, copy, paste.

{
  "mcpServers": {
    "huggingface": {
      "command": "npx",
      "args": [
        "-y",
        "@huggingface/mcp-server"
      ],
      "env": {
        "HF_TOKEN": "${HF_TOKEN}"
      }
    }
  }
}

Paste under mcpServers. Fully quit and reopen Claude after editing.
# export HF_TOKEN=hf_your_token
claude mcp add huggingface -- npx -y @huggingface/mcp-server

Run from your repo. Commit .mcp.json to share with your team.
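Before launching any client, it helps to confirm the token is actually exported. A minimal pre-flight sketch (the function name is ours, and the `hf_` prefix check is a convention-based heuristic, not an API guarantee):

```shell
# check_hf_token: fail if HF_TOKEN is missing; warn if it lacks the usual hf_ prefix.
check_hf_token() {
  if [ -z "${HF_TOKEN:-}" ]; then
    echo "HF_TOKEN is not set" >&2
    return 1
  fi
  case "$HF_TOKEN" in
    hf_*) echo "HF_TOKEN looks like a Hugging Face token" ;;
    *)    echo "warning: HF_TOKEN does not start with hf_" >&2 ;;
  esac
}
```

Run `check_hf_token` in the same shell you start your MCP client from, since a token exported in one terminal is invisible to a client launched elsewhere.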
{
  "mcpServers": {
    "huggingface": {
      "command": "npx",
      "args": [
        "-y",
        "@huggingface/mcp-server"
      ],
      "env": {
        "HF_TOKEN": "${HF_TOKEN}"
      }
    }
  }
}

Global path: ~/.cursor/mcp.json. Reload window after editing.
{
  "servers": {
    "huggingface": {
      "command": "npx",
      "args": [
        "-y",
        "@huggingface/mcp-server"
      ],
      "env": {
        "HF_TOKEN": "${HF_TOKEN}"
      }
    }
  }
}

VS Code uses the "servers" key (not "mcpServers").
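If you already maintain an mcpServers config for another client, the VS Code rename is mechanical. A sketch (the helper name is ours, not part of any tool):

```python
import json

def to_vscode_config(config: dict) -> dict:
    """Convert a generic "mcpServers" config to VS Code's "servers" key.

    The server entries themselves are unchanged; only the top-level key differs.
    """
    return {"servers": config.get("mcpServers", {})}

claude_style = {
    "mcpServers": {
        "huggingface": {
            "command": "npx",
            "args": ["-y", "@huggingface/mcp-server"],
            "env": {"HF_TOKEN": "${HF_TOKEN}"},
        }
    }
}
print(json.dumps(to_vscode_config(claude_style), indent=2))
```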
{
  "mcpServers": {
    "huggingface": {
      "command": "npx",
      "args": [
        "-y",
        "@huggingface/mcp-server"
      ],
      "env": {
        "HF_TOKEN": "${HF_TOKEN}"
      }
    }
  }
}

Open via Cascade → hammer icon → Configure.
{
  "mcpServers": {
    "huggingface": {
      "command": "npx",
      "args": [
        "-y",
        "@huggingface/mcp-server"
      ],
      "env": {
        "HF_TOKEN": "${HF_TOKEN}"
      }
    }
  }
}

Open via the Cline sidebar → MCP Servers → Edit.
{
  "experimental": {
    "modelContextProtocolServers": [
      {
        "transport": {
          "type": "stdio",
          "command": "npx",
          "args": [
            "-y",
            "@huggingface/mcp-server"
          ],
          "env": {
            "HF_TOKEN": "${HF_TOKEN}"
          }
        }
      }
    ]
  }
}

Continue uses modelContextProtocolServers with a transport block.
# ~/.codex/config.toml
[mcp_servers.huggingface]
command = "npx"
args = [
  "-y",
  "@huggingface/mcp-server",
]
env = { HF_TOKEN = "${HF_TOKEN}" }

Codex uses TOML. Each server is a [mcp_servers.<name>] subtable.
{
  "context_servers": {
    "huggingface": {
      "command": {
        "path": "npx",
        "args": [
          "-y",
          "@huggingface/mcp-server"
        ]
      },
      "env": {
        "HF_TOKEN": "${HF_TOKEN}"
      }
    }
  }
}

Zed calls them "context_servers". Settings live-reload on save.
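Zed also nests the launcher differently from the other JSON clients: "command" is an object with "path" and "args", with "env" as a sibling. A conversion sketch from the common flat entry shape (the helper name is ours):

```python
def to_zed_entry(entry: dict) -> dict:
    """Convert a flat {command, args, env} server entry to Zed's shape:
    "command" becomes an object holding "path" and "args"; "env" stays a sibling.
    """
    zed = {
        "command": {
            "path": entry["command"],
            "args": entry.get("args", []),
        }
    }
    if "env" in entry:
        zed["env"] = entry["env"]
    return zed

flat = {
    "command": "npx",
    "args": ["-y", "@huggingface/mcp-server"],
    "env": {"HF_TOKEN": "${HF_TOKEN}"},
}
print(to_zed_entry(flat))
```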
{
  "name": "Hugging Face",
  "transport": "stdio",
  "command": "npx",
  "args": [
    "-y",
    "@huggingface/mcp-server"
  ],
  "env": {
    "HF_TOKEN": "${HF_TOKEN}"
  }
}

Enable Developer mode (paid plans) and enter these values in the UI.
Quick answer
What it does
Exposes Hugging Face Hub search, model metadata, dataset info, and the Inference API as MCP tools.
Best for
- Model discovery and triage
- License + card lookups
- Experimental inference
- Dataset exploration
Not for
- Production inference traffic
- Private enterprise models without access
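The "model discovery and triage" use case boils down to filter-and-rank over Hub metadata. A standalone sketch of that logic over hypothetical, hard-coded entries (the model ids and numbers are invented for illustration; real queries would go through the server's search tools or the huggingface_hub client):

```python
# Hypothetical metadata rows; the field names mirror what model cards expose,
# but every value here is made up.
MODELS = [
    {"id": "org-a/txt2img-3b", "license": "apache-2.0", "params_b": 3.0, "downloads": 900_000},
    {"id": "org-b/txt2img-12b", "license": "apache-2.0", "params_b": 12.0, "downloads": 1_500_000},
    {"id": "org-c/txt2img-8b", "license": "openrail", "params_b": 8.0, "downloads": 400_000},
    {"id": "org-d/txt2img-1b", "license": "mit", "params_b": 1.0, "downloads": 250_000},
]

def triage(models, max_params_b: float, top_n: int):
    """Keep models under the parameter budget, ranked by downloads."""
    eligible = [m for m in models if m["params_b"] <= max_params_b]
    return sorted(eligible, key=lambda m: m["downloads"], reverse=True)[:top_n]

for m in triage(MODELS, max_params_b=10.0, top_n=3):
    print(m["id"], m["license"], f'{m["params_b"]}B')
```

Note that the most-downloaded model overall (the hypothetical 12B one) is excluded by the parameter budget before ranking, which is exactly the ordering the sample prompt below asks the agent to respect.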
Setup recipe
1. Install
Copy the install snippet for your client from the Install section above.
2. Set required secrets
Set HF_TOKEN in your shell environment before launching your MCP client.
3. Try a minimum working prompt
Find candidate open-source models for a task:
Find the 3 most-downloaded open-source text-to-image models under 10B parameters released in the last 12 months. For each return: model id, license, parameter count, and one-sentence description.

Tested with: Claude Desktop, Cursor.
Tools & permissions
Tools list pending verification. The server exposes tools over MCP; we haven’t yet parsed its capability manifest into this page. Check the GitHub repo for the authoritative list.
Security & scope
- Access scope: Public-read access to the Hugging Face Hub; optional write access to spaces/models if a token with write scope is set in env.
- Gotchas: Gated models require the authenticated account to have accepted the gate — the MCP does not auto-click through.
Agent prompt pack — copy into Claude, Cursor, or ChatGPT.

Recommend the best MCP servers for [task: e.g. ai & machine learning work] in [client: Claude].
Constraints:
- Prefer tools that are [official | open-source | read-only] — pick what matters for my use case.
- Exclude MCPs that require [e.g. a paid plan, OAuth-only flows, remote-only transport].
- Return at most 3 picks, ranked.
For each pick include:
1. One-sentence rationale.
2. The ready-to-paste install snippet for my client.
3. Any required secrets I need to create before installing.
Cross-check the top-mcps.com listing: https://top-mcps.com/top-mcps-for-ai-machine-learning
Compare Hugging Face MCP vs [Context7 MCP] for the following job: [describe the job, e.g. "let an agent create GitHub issues on bug triage"].
Judge them on:
- Setup time and complexity (what a new user hits first).
- Auth model (none / API key / OAuth 2.1) and credential risk.
- Transport (stdio / Streamable HTTP / SSE) and where the server runs.
- Required secrets and the blast radius if they leak.
- Operational risk in an unattended agent loop.
- Which one is "good enough" for a weekend prototype vs. production.
End with one sentence: which should I pick for my scenario, which is: [my scenario].
References:
- https://top-mcps.com/mcp/huggingface
- top-mcps.com listing for Context7
Install the Hugging Face MCP server for my [client: Claude] at the default config path for that client. Use the exact install snippet published at https://top-mcps.com/mcp/huggingface (fetch https://top-mcps.com/mcp/huggingface.json for the canonical server.json if you can read URLs).
Before finishing:
1. Create the required secrets (HF_TOKEN) and put them in the appropriate env block — do not hard-code them.
2. Restart or reload the client so it picks up the new server.
3. Verify the server is connected (green / running state) and at least one tool is listed.
4. If anything fails, read the client's MCP logs and report the exact error — do not silently retry.
Confirm when done and list the tools the server now exposes.
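Step 1 of the install prompt above ("do not hard-code them") can be verified mechanically. A sketch that flags env values which look like literal Hugging Face tokens rather than ${VAR} placeholders (the function name is ours, and the hf_ prefix check is a heuristic):

```python
def find_hardcoded_tokens(config: dict) -> list:
    """Return (server, var) pairs whose env value looks like a literal secret
    instead of a "${VAR}" placeholder."""
    findings = []
    for name, server in config.get("mcpServers", {}).items():
        for var, value in server.get("env", {}).items():
            # Flag literal-looking tokens and anything that isn't a placeholder.
            if value.startswith("hf_") or not value.startswith("${"):
                findings.append((name, var))
    return findings

good = {"mcpServers": {"huggingface": {"env": {"HF_TOKEN": "${HF_TOKEN}"}}}}
bad = {"mcpServers": {"huggingface": {"env": {"HF_TOKEN": "hf_abc123"}}}}
print(find_hardcoded_tokens(good))  # empty: placeholder is fine
print(find_hardcoded_tokens(bad))   # flags the literal token
```

A check like this could run in CI over a committed .mcp.json before a literal token ever reaches the repo.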
What changed — 2 updates tracked.
- Refreshed install snippets and fact sheet; verified for 2026.
- Initial directory listing.