

Best practices for serverless workspaces

Note

Serverless workspaces are in Private Preview.

This page lists important best practices for creating, managing, and securing serverless workspaces. These guidelines focus on use cases, cost management, security, and platform requirements.

What are serverless workspaces?

Serverless workspaces are lightweight, fast-to-deploy environments that support only serverless compute. They can be provisioned through the account console, API, or Terraform. See Create a serverless workspace.

Note

Serverless workspaces always use default storage for the workspace's root storage, unlike workspaces with classic compute enabled, which use a customer-owned cloud storage bucket.
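
When provisioning through the API, a serverless workspace is typically distinguished by a compute-mode setting in the workspace creation request. The sketch below is illustrative only: the `compute_mode` field and its `SERVERLESS` value are assumptions based on the preview and may differ in your account.

```json
{
  "workspace_name": "serverless-analytics",
  "compute_mode": "SERVERLESS"
}
```

Because serverless workspaces always use default storage for root storage, no storage configuration is required in the request.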

Common use cases for serverless workspaces

  • Analysis-driven workspaces: Ideal for data analysis or visualization workflows that use AI/BI dashboards, Genie, Databricks Apps, serverless SQL warehouses, or notebooks, without needing classic compute resources.

  • Fully serverless workloads: Organizations that rely exclusively on serverless compute can eliminate the overhead of classic compute by using serverless workspaces as their default environment.

  • Composable environments: Serverless workspaces can easily be created and destroyed, making them well-suited for short-lived use cases such as internal training, testing new Azure Databricks features, or onboarding new teams.

  • Deploy workspaces without cloud permissions: Databricks account admins who lack the cloud permissions required to provision traditional workspaces can still deploy serverless workspaces. Admins can manage the workspace without relying on external cloud infrastructure.

General best practices

  • Review serverless compute limitations: Before migrating workloads to a serverless workspace, review the current serverless compute limitations to ensure compatibility with your use case. See Serverless compute limitations.

Best practices for default storage

  • Use default storage for simplified data management: Serverless workspaces support creating catalogs backed by default storage. Working with default storage does not require separate storage credentials or external locations, which makes it ideal for environments where Azure Databricks admins cannot provision or manage cloud infrastructure.

  • Use default storage for Delta Sharing between Azure Databricks customers: When sharing data between Azure Databricks accounts using Delta Sharing, default storage can be used as the underlying storage layer. This eliminates the need to configure customer-managed storage with custom network access policies.

  • Review default storage limitations: Before migrating data to default storage, review the current limitations to ensure compatibility with your use case. See default storage limitations.
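
Creating a catalog on default storage follows standard Unity Catalog SQL. A minimal sketch, assuming a workspace with default storage enabled (the catalog and schema names here are hypothetical):

```sql
-- Omitting MANAGED LOCATION lets the catalog use the workspace's
-- default storage; no storage credential or external location is needed.
CREATE CATALOG IF NOT EXISTS analytics_default;

-- Schemas and managed tables created in this catalog inherit
-- default storage as their underlying storage layer.
CREATE SCHEMA IF NOT EXISTS analytics_default.reports;
```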

Security best practices

For instructions on how to connect to resources in your private network, see Configure private connectivity to resources in your VNet.

  • Configure a customer-managed key (CMK) for serverless workspaces: During or after workspace creation, you can specify a customer-managed key to encrypt managed services data. In addition to managed services, this key also encrypts data in the workspace-specific catalog and the workspace root storage backed by default storage. Serverless workspaces support only the managed services key; a separate storage key does not apply. See Enable customer-managed keys for managed services.
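
When creating a workspace through the API, the managed services key is referenced by the ID of a key configuration registered in your account. A sketch of the relevant request fragment, where the field name follows the account API convention and the ID is a placeholder, not a real value:

```json
{
  "workspace_name": "serverless-analytics",
  "managed_services_customer_managed_key_id": "<key-configuration-id>"
}
```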