Hi Jona, you can definitely pass parameters from a Synapse pipeline to a Notebook activity and then use them inside your PySpark code. Here's how you can do it:
Step 1: Define Parameters in the Notebook
At the top of your notebook, add a parameters cell and declare the parameters there with default values. In Synapse Studio, select the cell, open the ... (more commands) menu on the cell toolbar, and choose Toggle parameter cell:

```python
# Parameters cell - these defaults are used when the notebook runs on its own.
# When the pipeline runs the notebook, its Base parameters override these values.
param1 = "default_value_1"
param2 = "default_value_2"
```
Note: dbutils.widgets.get("param1") is the Azure Databricks pattern, not Synapse; in Synapse notebooks the parameters cell above is the supported way to receive pipeline values.
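Because both parameters in this example are declared as strings, convert them before using them as anything else. A minimal sketch, assuming param1 carries an ISO date and param2 a row count (the defaults are only there so the snippet runs standalone):

```python
from datetime import datetime

# In the notebook these come from the parameters cell; defaults shown for a standalone run.
param1 = "2024-01-31"
param2 = "500"

run_date = datetime.strptime(param1, "%Y-%m-%d")  # parse the date string
row_count = int(param2)                           # cast the numeric string

print(run_date, row_count)
```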
Step 2: Pass Parameters in the Pipeline
In your Synapse pipeline, select the Notebook activity.
In the Settings tab, under Base parameters, add parameters whose names match the variables in the notebook's parameters cell and assign them pipeline expressions such as:
@pipeline().parameters.param1
Example
Pipeline parameters:
- param1: string
- param2: string

Notebook activity Base parameters:

| Name   | Value                         |
|--------|-------------------------------|
| param1 | @pipeline().parameters.param1 |
| param2 | @pipeline().parameters.param2 |
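As a quick sanity check before (or alongside) the pipeline, you can also pass the same values by calling the notebook from another Synapse notebook with mssparkutils.notebook.run; the notebook name and values below are placeholders:

```python
from notebookutils import mssparkutils

# Run the parameterized notebook directly; the dictionary keys must match the
# variable names in its parameters cell. Name, timeout, and values are placeholders.
result = mssparkutils.notebook.run(
    "MyParamNotebook",                                     # notebook in the same workspace
    90,                                                    # timeout in seconds
    {"param1": "test-value-1", "param2": "test-value-2"},  # overrides for the parameters cell
)
print(result)
```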
Notebook code (the parameters cell plus a check that the values arrived):

```python
# param1 and param2 are set in the parameters cell; the pipeline's Base parameters
# replace the defaults at run time.
print(f"Received param1: {param1}")
print(f"Received param2: {param2}")
```
Hope this helps. If this answers your query, please click Accept Answer and Yes for "Was this answer helpful?". And if you have any further questions, do let us know.