This article contains Python user-defined function (UDF) examples. It shows how to register and invoke UDFs, and it provides caveats about the evaluation order of subexpressions in Spark SQL.
In Databricks Runtime 14.0 and above, you can use Python user-defined table functions (UDTFs) to register functions that return entire relations instead of scalar values. See Python user-defined table functions (UDTFs).
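As a minimal sketch of that pattern (the class name and output schema here are illustrative, not from the linked article), a UDTF yields rows from its eval method and is invoked like a relation:
from pyspark.sql.functions import lit, udtf

# Minimal UDTF sketch (Databricks Runtime 14.0 and above); names are illustrative
@udtf(returnType="num: int, squared: int")
class SquareNumbers:
    def eval(self, start: int, end: int):
        # Each yield emits one row of the resulting relation
        for num in range(start, end + 1):
            yield (num, num * num)

# The UDTF returns a relation (multiple rows and columns), not a scalar value
SquareNumbers(lit(1), lit(3)).show()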
Note
In Databricks Runtime 12.2 LTS and below, Python UDFs and Pandas UDFs are not supported on Unity Catalog compute that uses standard access mode. Scalar Python UDFs and Pandas UDFs are supported in Databricks Runtime 13.3 LTS and above for all access modes.
In Databricks Runtime 13.3 LTS and above, you can register scalar Python UDFs to Unity Catalog using SQL syntax. See User-defined functions (UDFs) in Unity Catalog.
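As a sketch of that SQL registration syntax, run here from Python (the main.default catalog and schema and the function name are placeholders):
# Register a scalar Python UDF to Unity Catalog using SQL (Databricks Runtime 13.3 LTS and above)
spark.sql("""
CREATE OR REPLACE FUNCTION main.default.squared_py(x INT)
RETURNS INT
LANGUAGE PYTHON
AS $$
  return x * x
$$
""")

spark.sql("SELECT main.default.squared_py(4) AS id_squared").show()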
Register a function as a UDF
def squared(s):
    return s * s
spark.udf.register("squaredWithPython", squared)
You can optionally set the return type of your UDF. The default return type is StringType.
from pyspark.sql.types import LongType
def squared_typed(s):
    return s * s
spark.udf.register("squaredWithPython", squared_typed, LongType())
Call the UDF in Spark SQL
spark.range(1, 20).createOrReplaceTempView("test")
%sql select id, squaredWithPython(id) as id_squared from test
Use UDF with DataFrames
from pyspark.sql.functions import udf
from pyspark.sql.types import LongType
squared_udf = udf(squared, LongType())
df = spark.table("test")
display(df.select("id", squared_udf("id").alias("id_squared")))
Alternatively, you can declare the same UDF using annotation syntax:
from pyspark.sql.functions import udf
@udf("long")
def squared_udf(s):
    return s * s
df = spark.table("test")
display(df.select("id", squared_udf("id").alias("id_squared")))
Evaluation order and null checking
Spark SQL (including SQL and the DataFrame and Dataset API) does not guarantee the order of evaluation of subexpressions. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. For example, logical AND and OR expressions do not have left-to-right “short-circuiting” semantics.
Therefore, it is dangerous to rely on the side effects or order of evaluation of Boolean expressions, and the order of WHERE and HAVING clauses, since such expressions and clauses can be reordered during query optimization and planning. Specifically, if a UDF relies on short-circuiting semantics in SQL for null checking, there is no guarantee that the null check happens before the UDF is invoked. For example:
spark.udf.register("strlen", lambda s: len(s), "int")
spark.sql("select s from test1 where s is not null and strlen(s) > 1") # no guarantee
This WHERE clause does not guarantee that the strlen UDF is invoked after nulls are filtered out.
To perform proper null checking, we recommend that you do either of the following:
- Make the UDF itself null-aware and do null checking inside the UDF
- Use IF or CASE WHEN expressions to do the null check and invoke the UDF in a conditional branch
spark.udf.register("strlen_nullsafe", lambda s: len(s) if s is not None else -1, "int")
spark.sql("select s from test1 where s is not null and strlen_nullsafe(s) > 1")  # ok
spark.sql("select s from test1 where if(s is not null, strlen(s), null) > 1")  # ok
Service credentials in Scalar Python UDFs
Scalar Python UDFs can use Unity Catalog service credentials to securely access external cloud services. This is useful for integrating operations such as cloud-based tokenization, encryption, or secret management directly into your data transformations.
Service credentials for scalar Python UDFs are supported only on SQL warehouses and general compute.
To create a service credential, see Create service credentials.
To access the service credential, use the databricks.service_credentials.getServiceCredentialsProvider() utility in your UDF logic to initialize cloud SDKs with the appropriate credential. All code must be encapsulated in the UDF body.
@udf
def use_service_credential():
    from databricks.service_credentials import getServiceCredentialsProvider
    from azure.mgmt.web import WebSiteManagementClient

    # Assuming there is a service credential named 'testcred' set up in Unity Catalog,
    # and that subscription_id is defined elsewhere in the UDF body
    web_client = WebSiteManagementClient(
        credential=getServiceCredentialsProvider('testcred'),
        subscription_id=subscription_id,
    )
    # Use web_client to perform operations
Service credentials permissions
The creator of the UDF must have ACCESS permission on the Unity Catalog service credential.
UDFs that run in No-PE scope, also known as dedicated clusters, require MANAGE permissions on the service credential.
Default credentials
When used in Scalar Python UDFs, Databricks automatically uses the default service credential from the compute environment variable. This behavior allows you to securely reference external services without explicitly managing credential aliases in your UDF code. See Specify a default service credential for a compute resource.
Default credential support is only available in Standard and Dedicated access mode clusters. It is not available in DBSQL.
You must install the azure-identity package to use the DefaultAzureCredential provider. To install the package, see Notebook-scoped Python libraries or Compute-scoped libraries.
@udf
def use_service_credential():
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient

    # DefaultAzureCredential automatically picks up the default service credential for the compute
    web_client_default = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)
    # Use web_client_default to perform operations
Get task execution context
Use the TaskContext PySpark API to get context information such as the user's identity, cluster tags, the Spark job ID, and more. See Get task context in a UDF.
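As a minimal sketch, the standard TaskContext accessors can be read inside a UDF (the 'user' local property key below is an assumption; see the linked article for the exact properties available):
from pyspark.sql.functions import udf
from pyspark.taskcontext import TaskContext

@udf("string")
def task_info():
    # TaskContext.get() returns the context of the task currently running the UDF
    ctx = TaskContext.get()
    # partitionId() and stageId() are standard accessors; the 'user' property key is illustrative
    return f"partition={ctx.partitionId()}, stage={ctx.stageId()}, user={ctx.getLocalProperty('user')}"

display(spark.range(3).select(task_info().alias("task_context")))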
Limitations
The following limitations apply to PySpark UDFs:
- File access restrictions: On Databricks Runtime 14.2 and below, PySpark UDFs on shared clusters cannot access Git folders, workspace files, or Unity Catalog Volumes.
- Broadcast variables: PySpark UDFs on standard access mode clusters and serverless compute do not support broadcast variables.
- Service credentials: Service credentials are available only in Batch Unity Catalog Python UDFs and Scalar Python UDFs. They are not supported in standard Unity Catalog Python UDFs.
- Service credentials: Service credentials are not supported on serverless or dedicated compute.
- Memory limit on serverless: PySpark UDFs on serverless compute have a memory limit of 1GB per PySpark UDF. Exceeding this limit results in an error of type UDF_PYSPARK_USER_CODE_ERROR.MEMORY_LIMIT_SERVERLESS.
- Memory limit on standard access mode: PySpark UDFs on standard access mode have a memory limit based on the available memory of the instance type chosen. Exceeding available memory results in an error of type UDF_PYSPARK_USER_CODE_ERROR.MEMORY_LIMIT.