Important
This feature is in Beta.
Model Context Protocol (MCP) servers act as bridges that let AI agents access external data and tools. Instead of building these connections from scratch, you can use Databricks managed MCP servers to instantly connect your agents to data stored in Unity Catalog, vector search indexes, and custom functions.
Available managed servers
Databricks provides three types of managed MCP servers that work out of the box:
| MCP server | Description | URL pattern |
| --- | --- | --- |
| Vector search | Query Vector Search indexes to find relevant documents and data | `https://<workspace-hostname>/api/2.0/mcp/vector-search/{catalog}/{schema}` |
| Unity Catalog functions | Run Unity Catalog functions, such as custom Python or SQL tools | `https://<workspace-hostname>/api/2.0/mcp/functions/{catalog}/{schema}` |
| Genie space | Query Genie spaces to get insights from structured data tables | `https://<workspace-hostname>/api/2.0/mcp/genie/{genie_space_id}` |
Note
The managed MCP server for Genie invokes Genie as an MCP tool, which means conversation history is not passed when invoking the Genie APIs. As an alternative, you can use Genie in a multi-agent system.
Example: Customer support agent
Imagine you want to build an agent that helps with customer support. You could connect it to multiple managed MCP servers:
- Vector search (`https://<workspace-hostname>/api/2.0/mcp/vector-search/prod/customer_support`): searches support tickets and documentation
- Genie space (`https://<workspace-hostname>/api/2.0/mcp/genie/{billing_space_id}`): queries billing data and customer information
- UC functions (`https://<workspace-hostname>/api/2.0/mcp/functions/prod/billing`): runs custom functions for account lookups and updates
This gives your agent access to both unstructured data (support tickets) and structured data (billing tables) plus custom business logic.
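A minimal sketch of how these endpoints might be assembled in code. The `prod.customer_support` and `prod.billing` schemas and the Genie space ID are hypothetical placeholders from the example above:

```python
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # assumes default workspace authentication is configured
host = w.config.host

# Hypothetical names for illustration; replace with your own
billing_space_id = "<billing-genie-space-id>"
support_agent_mcp_urls = [
    f"{host}/api/2.0/mcp/vector-search/prod/customer_support",  # support tickets and docs
    f"{host}/api/2.0/mcp/genie/{billing_space_id}",             # billing insights
    f"{host}/api/2.0/mcp/functions/prod/billing",               # account lookups and updates
]
```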
Notebooks: Build an agent with managed MCP servers
The following notebooks show how to author LangGraph and OpenAI agents that call MCP tools.
- LangGraph MCP tool-calling agent
- OpenAI MCP tool-calling agent
Local IDE: Build an agent with managed MCP servers
Connecting to an MCP server on Databricks is similar to connecting to any other remote MCP server: you can use standard SDKs, such as the MCP Python SDK. The main difference is that Databricks MCP servers are secure by default and require clients to specify authentication. The `databricks-mcp` Python library simplifies authentication in custom agent code.
The simplest way to develop agent code is to run it locally and authenticate to your workspace. Use the following steps to build an AI agent that connects to a Databricks MCP server.
Set up your environment
Use OAuth to authenticate to your workspace. Run the following in a local terminal:
```bash
databricks auth login --host https://<your-workspace-hostname>
```
Create a profile name when prompted and remember this name.
Ensure you have a local environment with Python 3.12 or above, then install dependencies:
```bash
pip install -U "mcp>=1.9" "databricks-sdk[openai]" "mlflow>=3.1.0" "databricks-agents>=1.0.0" "databricks-mcp"
```
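Optionally, confirm your profile works before continuing. A quick sketch, assuming the profile name you chose above:

```python
from databricks.sdk import WorkspaceClient

# Placeholder profile name; use the one you created with `databricks auth login`
w = WorkspaceClient(profile="YOUR_DATABRICKS_CLI_PROFILE")
print(f"Authenticated to {w.config.host} as {w.current_user.me().user_name}")
```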
Test your local environment connection
Validate your connection to the MCP server by listing your Unity Catalog tools and executing the built-in Python code interpreter tool. Run the following snippet to do so. Serverless compute must be enabled in your workspace to run this snippet.
```python
from databricks_mcp import DatabricksMCPClient
from databricks.sdk import WorkspaceClient

# TODO: Update to the Databricks CLI profile name you specified when
# configuring authentication to the workspace.
databricks_cli_profile = "YOUR_DATABRICKS_CLI_PROFILE"
assert (
    databricks_cli_profile != "YOUR_DATABRICKS_CLI_PROFILE"
), "Set databricks_cli_profile to the Databricks CLI profile name you specified when configuring authentication to the workspace"
workspace_client = WorkspaceClient(profile=databricks_cli_profile)
workspace_hostname = workspace_client.config.host
mcp_server_url = f"{workspace_hostname}/api/2.0/mcp/functions/system/ai"

# The snippet below uses the Unity Catalog functions MCP server to expose built-in
# AI tools under `system.ai`, like the `system.ai.python_exec` code interpreter tool
def test_connect_to_server():
    mcp_client = DatabricksMCPClient(
        server_url=mcp_server_url, workspace_client=workspace_client
    )
    tools = mcp_client.list_tools()
    print(
        f"Discovered tools {[t.name for t in tools]} "
        f"from MCP server {mcp_server_url}"
    )

    result = mcp_client.call_tool(
        "system__ai__python_exec", {"code": "print('Hello, world!')"}
    )
    print(
        f"Called system__ai__python_exec tool and got result "
        f"{result.content}"
    )


if __name__ == "__main__":
    test_connect_to_server()
```
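The same client pattern works for the other managed servers. For example, here is a sketch, assuming a hypothetical `prod.customer_support` schema that contains at least one vector search index, that lists the tools a vector search MCP server exposes:

```python
from databricks_mcp import DatabricksMCPClient
from databricks.sdk import WorkspaceClient

w = WorkspaceClient(profile="YOUR_DATABRICKS_CLI_PROFILE")

# Hypothetical catalog/schema; each vector search index in the schema
# is surfaced as a separate MCP tool
vs_server_url = f"{w.config.host}/api/2.0/mcp/vector-search/prod/customer_support"
vs_client = DatabricksMCPClient(server_url=vs_server_url, workspace_client=w)
print([t.name for t in vs_client.list_tools()])
```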
Create your agent
Build on the snippet above to define a basic single-turn agent that uses tools. Save the agent code locally as a file named `mcp_agent.py`:

```python
import json
import uuid
import asyncio
from typing import Any, Callable, List
from pydantic import BaseModel

import mlflow
from mlflow.pyfunc import ResponsesAgent
from mlflow.types.responses import ResponsesAgentRequest, ResponsesAgentResponse
from databricks_mcp import DatabricksMCPClient
from databricks.sdk import WorkspaceClient

# 1) CONFIGURE YOUR ENDPOINTS/PROFILE
LLM_ENDPOINT_NAME = "databricks-claude-3-7-sonnet"
SYSTEM_PROMPT = "You are a helpful assistant."
DATABRICKS_CLI_PROFILE = "YOUR_DATABRICKS_CLI_PROFILE"
assert (
    DATABRICKS_CLI_PROFILE != "YOUR_DATABRICKS_CLI_PROFILE"
), "Set DATABRICKS_CLI_PROFILE to the Databricks CLI profile name you specified when configuring authentication to the workspace"

workspace_client = WorkspaceClient(profile=DATABRICKS_CLI_PROFILE)
host = workspace_client.config.host

# Add more MCP server URLs here if desired, e.g.
# f"{host}/api/2.0/mcp/vector-search/prod/billing"
# to include vector search indexes under the prod.billing schema, or
# f"{host}/api/2.0/mcp/genie/<genie_space_id>"
# to include a Genie space
MANAGED_MCP_SERVER_URLS = [
    f"{host}/api/2.0/mcp/functions/system/ai",
]

# Add custom MCP servers hosted on Databricks Apps
CUSTOM_MCP_SERVER_URLS = []


# 2) HELPER: convert between ResponsesAgent "message dict" and ChatCompletions format
def _to_chat_messages(msg: dict[str, Any]) -> List[dict]:
    """
    Take a single ResponsesAgent-style dict and turn it into one or more
    ChatCompletions-compatible dict entries.
    """
    msg_type = msg.get("type")
    if msg_type == "function_call":
        return [
            {
                "role": "assistant",
                "content": None,
                "tool_calls": [
                    {
                        "id": msg["call_id"],
                        "type": "function",
                        "function": {
                            "name": msg["name"],
                            "arguments": msg["arguments"],
                        },
                    }
                ],
            }
        ]
    elif msg_type == "message" and isinstance(msg["content"], list):
        return [
            {
                "role": "assistant" if msg["role"] == "assistant" else msg["role"],
                "content": content["text"],
            }
            for content in msg["content"]
        ]
    elif msg_type == "function_call_output":
        return [
            {
                "role": "tool",
                "content": msg["output"],
                "tool_call_id": msg["tool_call_id"],
            }
        ]
    else:
        # fallback for plain {"role": ..., "content": "..."} or similar
        return [
            {
                k: v
                for k, v in msg.items()
                if k in ("role", "content", "name", "tool_calls", "tool_call_id")
            }
        ]


# 3) "MCP SESSION" + TOOL-INVOCATION LOGIC
def _make_exec_fn(
    server_url: str, tool_name: str, ws: WorkspaceClient
) -> Callable[..., str]:
    def exec_fn(**kwargs):
        mcp_client = DatabricksMCPClient(server_url=server_url, workspace_client=ws)
        response = mcp_client.call_tool(tool_name, kwargs)
        return "".join([c.text for c in response.content])

    return exec_fn


class ToolInfo(BaseModel):
    name: str
    spec: dict
    exec_fn: Callable


def _fetch_tool_infos(ws: WorkspaceClient, server_url: str) -> List[ToolInfo]:
    print(f"Listing tools from MCP server {server_url}")
    infos: List[ToolInfo] = []
    mcp_client = DatabricksMCPClient(server_url=server_url, workspace_client=ws)
    mcp_tools = mcp_client.list_tools()
    for t in mcp_tools:
        schema = t.inputSchema.copy()
        if "properties" not in schema:
            schema["properties"] = {}
        spec = {
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description,
                "parameters": schema,
            },
        }
        infos.append(
            ToolInfo(
                name=t.name, spec=spec, exec_fn=_make_exec_fn(server_url, t.name, ws)
            )
        )
    return infos


# 4) "SINGLE-TURN" AGENT CLASS
class SingleTurnMCPAgent(ResponsesAgent):
    def _call_llm(self, history: List[dict], ws: WorkspaceClient, tool_infos):
        """
        Send the current history to the LLM, returning the raw response dict.
        """
        client = ws.serving_endpoints.get_open_ai_client()
        flat_msgs = []
        for msg in history:
            flat_msgs.extend(_to_chat_messages(msg))
        return client.chat.completions.create(
            model=LLM_ENDPOINT_NAME,
            messages=flat_msgs,
            tools=[ti.spec for ti in tool_infos],
        )

    def predict(self, request: ResponsesAgentRequest) -> ResponsesAgentResponse:
        ws = WorkspaceClient(profile=DATABRICKS_CLI_PROFILE)

        # 1) build initial history: system + user
        history: List[dict] = [{"role": "system", "content": SYSTEM_PROMPT}]
        for inp in request.input:
            history.append(inp.model_dump())

        # 2) call LLM once
        tool_infos = [
            tool_info
            for mcp_server_url in (MANAGED_MCP_SERVER_URLS + CUSTOM_MCP_SERVER_URLS)
            for tool_info in _fetch_tool_infos(ws, mcp_server_url)
        ]
        tools_dict = {tool_info.name: tool_info for tool_info in tool_infos}

        llm_resp = self._call_llm(history, ws, tool_infos)
        raw_choice = llm_resp.choices[0].message.to_dict()
        raw_choice["id"] = uuid.uuid4().hex
        history.append(raw_choice)

        tool_calls = raw_choice.get("tool_calls") or []
        if tool_calls:
            # (we only support a single tool in this "single-turn" example)
            fc = tool_calls[0]
            name = fc["function"]["name"]
            args = json.loads(fc["function"]["arguments"])
            try:
                tool_info = tools_dict[name]
                result = tool_info.exec_fn(**args)
            except Exception as e:
                result = f"Error invoking {name}: {e}"

            # 4) append the "tool" output
            history.append(
                {
                    "type": "function_call_output",
                    "role": "tool",
                    "id": uuid.uuid4().hex,
                    "tool_call_id": fc["id"],
                    "output": result,
                }
            )

            # 5) call LLM a second time and treat that reply as final
            followup = (
                self._call_llm(history, ws, tool_infos=[]).choices[0].message.to_dict()
            )
            followup["id"] = uuid.uuid4().hex

            assistant_text = followup.get("content", "")
            return ResponsesAgentResponse(
                output=[
                    {
                        "id": uuid.uuid4().hex,
                        "type": "message",
                        "role": "assistant",
                        "content": [{"type": "output_text", "text": assistant_text}],
                    }
                ],
                custom_outputs=request.custom_inputs,
            )

        # 6) if no tool_calls at all, return the assistant's original reply
        assistant_text = raw_choice.get("content", "")
        return ResponsesAgentResponse(
            output=[
                {
                    "id": uuid.uuid4().hex,
                    "type": "message",
                    "role": "assistant",
                    "content": [{"type": "output_text", "text": assistant_text}],
                }
            ],
            custom_outputs=request.custom_inputs,
        )


mlflow.models.set_model(SingleTurnMCPAgent())

if __name__ == "__main__":
    req = ResponsesAgentRequest(
        input=[{"role": "user", "content": "What's the 100th Fibonacci number?"}]
    )
    resp = SingleTurnMCPAgent().predict(req)
    for item in resp.output:
        print(item)
```
Deploy your agent
When you're ready to deploy an agent that connects to managed MCP servers, use the standard agent deployment process.
Make sure to specify all the resources your agent needs access to at logging time. For example, if your agent uses the following MCP server URLs:
- `https://<your-workspace-hostname>/api/2.0/mcp/vector-search/prod/customer_support`
- `https://<your-workspace-hostname>/api/2.0/mcp/vector-search/prod/billing`
- `https://<your-workspace-hostname>/api/2.0/mcp/functions/prod/billing`

You must specify as resources all the vector search indexes your agent needs in the `prod.customer_support` and `prod.billing` schemas, as well as all the Unity Catalog functions in `prod.billing`.
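For that example, the resource list might look like the following sketch. The index and function names are hypothetical placeholders; list every index and function your agent can reach through those servers:

```python
from mlflow.models.resources import DatabricksFunction, DatabricksVectorSearchIndex

# Hypothetical resource names for the example schemas above
resources = [
    DatabricksVectorSearchIndex(index_name="prod.customer_support.tickets_index"),
    DatabricksVectorSearchIndex(index_name="prod.billing.invoices_index"),
    DatabricksFunction(function_name="prod.billing.lookup_account"),
]
```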
If your agent connects to MCP servers on Databricks to discover and run tools, make sure to log the resources needed by these MCP servers with your agent. Databricks recommends installing the `databricks-mcp` PyPI package to simplify this process.

In particular, if you use managed MCP servers, you can call `databricks_mcp.DatabricksMCPClient().get_databricks_resources(<server_url>)` to retrieve the resources needed by the managed MCP server. If your agent queries a custom MCP server hosted on Databricks Apps, you can configure authorization by explicitly including the server as a resource when logging your model.
For example, to deploy the agent defined above, run the following snippet, assuming you saved the agent code definition in `mcp_agent.py`:
```python
import os

from databricks.sdk import WorkspaceClient
from databricks import agents
import mlflow
from mlflow.models.resources import DatabricksFunction, DatabricksServingEndpoint, DatabricksVectorSearchIndex

from mcp_agent import LLM_ENDPOINT_NAME
from databricks_mcp import DatabricksMCPClient

# TODO: Update this to your Databricks CLI profile name
databricks_cli_profile = "YOUR_DATABRICKS_CLI_PROFILE"
assert (
    databricks_cli_profile != "YOUR_DATABRICKS_CLI_PROFILE"
), "Set databricks_cli_profile to the Databricks CLI profile name you specified when configuring authentication to the workspace"
workspace_client = WorkspaceClient(profile=databricks_cli_profile)
host = workspace_client.config.host

# Configure MLflow and the Databricks SDK to use your Databricks CLI profile
current_user = workspace_client.current_user.me().user_name
mlflow.set_tracking_uri(f"databricks://{databricks_cli_profile}")
mlflow.set_registry_uri(f"databricks-uc://{databricks_cli_profile}")
mlflow.set_experiment(f"/Users/{current_user}/databricks_docs_example_mcp_agent")
os.environ["DATABRICKS_CONFIG_PROFILE"] = databricks_cli_profile

MANAGED_MCP_SERVER_URLS = [
    f"{host}/api/2.0/mcp/functions/system/ai",
]

# Log the agent defined in mcp_agent.py
here = os.path.dirname(os.path.abspath(__file__))
agent_script = os.path.join(here, "mcp_agent.py")
resources = [
    DatabricksServingEndpoint(endpoint_name=LLM_ENDPOINT_NAME),
    DatabricksFunction(function_name="system.ai.python_exec"),
    # --- Uncomment and edit the following line to include custom MCP servers
    # hosted on Databricks Apps ---
    # DatabricksApp(app_name="app-name")
]
for mcp_server_url in MANAGED_MCP_SERVER_URLS:
    mcp_client = DatabricksMCPClient(
        server_url=mcp_server_url, workspace_client=workspace_client
    )
    resources.extend(mcp_client.get_databricks_resources())

with mlflow.start_run():
    logged_model_info = mlflow.pyfunc.log_model(
        artifact_path="mcp_agent",
        python_model=agent_script,
        resources=resources,
    )

# TODO: Specify your UC model name here
UC_MODEL_NAME = "main.default.databricks_docs_mcp_agent"
registered_model = mlflow.register_model(logged_model_info.model_uri, UC_MODEL_NAME)
agents.deploy(
    model_name=UC_MODEL_NAME,
    model_version=registered_model.version,
)
```
Next steps
- Connect external services like Cursor and Claude Desktop to managed MCP servers.