jobs command group

Note

This information applies to Databricks CLI versions 0.205 and above. The Databricks CLI is in Public Preview.

Databricks CLI use is subject to the Databricks License and Databricks Privacy Notice, including any Usage Data provisions.

The jobs command group within the Databricks CLI allows you to create, edit, and delete jobs. See Lakeflow Jobs.

databricks jobs cancel-all-runs

Cancel all active runs of a job. The runs are canceled asynchronously, so canceling them doesn't prevent new runs from starting.

databricks jobs cancel-all-runs [flags]

Arguments

None

Options

--all-queued-runs

    Optional boolean parameter to cancel all queued runs.

--job-id int

    The canonical identifier of the job to cancel all runs of.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags
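
For example, to cancel every active run of one job (the job ID is illustrative):

```shell
# Cancel all active runs of job 123. Queued runs are left alone unless
# --all-queued-runs is also passed. The job ID is a placeholder.
databricks jobs cancel-all-runs --job-id 123
```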

databricks jobs cancel-run

Cancel a run.

Cancels a job run or a task run. The run is canceled asynchronously, so it may still be running when this request completes.

databricks jobs cancel-run RUN_ID [flags]

Arguments

RUN_ID

    The canonical identifier of the run to cancel. This field is required.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--no-wait

    Do not wait to reach TERMINATED or SKIPPED state.

--timeout duration

    Maximum amount of time to reach TERMINATED or SKIPPED state (default 20m0s).

Global flags

databricks jobs create

Create a new job.

databricks jobs create [flags]

Arguments

None

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags
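
A minimal sketch of creating a single-task notebook job with an inline JSON request body. The job name, task key, notebook path, and cluster ID are placeholders:

```shell
# Create a job with one notebook task on an existing cluster.
# All values below are illustrative.
databricks jobs create --json '{
  "name": "nightly-etl",
  "tasks": [
    {
      "task_key": "main",
      "notebook_task": { "notebook_path": "/Workspace/Shared/etl" },
      "existing_cluster_id": "1234-567890-abcde123"
    }
  ]
}'
```

The command returns the new job's ID, which the other commands in this group accept. A larger request body is usually easier to keep in a file and pass as `--json @job.json`.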

databricks jobs delete

Delete a job.

databricks jobs delete JOB_ID [flags]

Arguments

JOB_ID

    The canonical identifier of the job to delete. This field is required.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks jobs delete-run

Delete a non-active run. Returns an error if the run is active.

databricks jobs delete-run RUN_ID [flags]

Arguments

RUN_ID

    ID of the run to delete.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks jobs export-run

Export and retrieve the job run task.

databricks jobs export-run RUN_ID [flags]

Arguments

RUN_ID

    The canonical identifier for the run. This field is required.

Options

--views-to-export ViewsToExport

    Which views to export. Supported values: [ALL, CODE, DASHBOARDS]

Global flags

databricks jobs get

Retrieves the details for a single job.

Large arrays in the results will be paginated when they exceed 100 elements. A request for a single job will return all properties for that job, and the first 100 elements of array properties (tasks, job_clusters, environments and parameters). Use the next_page_token field to check for more results and pass its value as the page_token in subsequent requests. If any array properties have more than 100 elements, additional results will be returned on subsequent requests. Arrays without additional results will be empty on later pages.

databricks jobs get JOB_ID [flags]

Arguments

JOB_ID

    The canonical identifier of the job to retrieve information about. This field is required.

Options

--page-token string

    Use next_page_token returned from the previous GetJob response to request the next page of the job's array properties.

Global flags
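
For example, to retrieve a job and then fetch the next page of its array properties when `next_page_token` is non-empty (the job ID is illustrative; the token placeholder stands in for a real value from the previous response):

```shell
# First request: returns all scalar properties plus the first 100
# elements of each array property, and possibly a next_page_token.
databricks jobs get 123 -o json

# Follow-up request for the remaining array elements, using the token
# returned by the previous call. The token here is a placeholder.
databricks jobs get 123 -o json --page-token "<next_page_token value>"
```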

databricks jobs get-run

Retrieves the metadata of a job run.

Large arrays in the results will be paginated when they exceed 100 elements. A request for a single run will return all properties for that run, and the first 100 elements of array properties (tasks, job_clusters, job_parameters and repair_history). Use the next_page_token field to check for more results and pass its value as the page_token in subsequent requests. If any array properties have more than 100 elements, additional results will be returned on subsequent requests. Arrays without additional results will be empty on later pages.

databricks jobs get-run RUN_ID [flags]

Arguments

RUN_ID

    The canonical identifier of the run for which to retrieve the metadata. This field is required.

Options

--include-history

    Include the repair history in the response.

--include-resolved-values

    Include resolved parameter values in the response.

--page-token string

    Use next_page_token returned from the previous GetRun response to request the next page of the run's array properties.

Global flags

databricks jobs get-run-output

Retrieve the output and metadata of a single task run. When a notebook task returns a value through the dbutils.notebook.exit() call, you can use this command to retrieve that value. Databricks restricts this API to returning the first 5 MB of the output. To return a larger result, you can store job results in a cloud storage service.

This command validates that the run_id parameter is valid and returns an HTTP status code 400 if the run_id parameter is invalid. Runs are automatically removed after 60 days. If you want to reference them beyond 60 days, you must save old run results before they expire.

databricks jobs get-run-output RUN_ID [flags]

Arguments

RUN_ID

    The canonical identifier for the run.

Options

Global flags

databricks jobs list

Retrieve a list of jobs.

databricks jobs list [flags]

Arguments

None

Options

--expand-tasks

    Whether to include task and cluster details in the response.

--limit int

    The number of jobs to return.

--name string

    A filter on the list based on the exact (case-insensitive) job name.

--offset int

    The offset of the first job to return, relative to the most recently created job.

--page-token string

    Use next_page_token or prev_page_token returned from the previous request to list the next or previous page of jobs respectively.

Global flags
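
For example, to list jobs matching an exact name, with task details, as JSON (the name and limit are illustrative):

```shell
# List up to 25 jobs whose name is exactly "nightly-etl"
# (case-insensitive), including task and cluster details.
databricks jobs list --name "nightly-etl" --limit 25 --expand-tasks -o json
```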

databricks jobs list-runs

List job runs in descending order by start time.

databricks jobs list-runs [flags]

Arguments

None

Options

--active-only

    If active_only is true, only active runs are included in the results; otherwise, lists both active and completed runs.

--completed-only

    If completed_only is true, only completed runs are included in the results; otherwise, lists both active and completed runs.

--expand-tasks

    Whether to include task and cluster details in the response.

--job-id int

    The job for which to list runs.

--limit int

    The number of runs to return.

--offset int

    The offset of the first run to return, relative to the most recent run.

--page-token string

    Use next_page_token or prev_page_token returned from the previous request to list the next or previous page of runs respectively.

--run-type RunType

    The type of runs to return. Supported values: [JOB_RUN, SUBMIT_RUN, WORKFLOW_RUN]

--start-time-from int

    Show runs that started at or after this value.

--start-time-to int

    Show runs that started at or before this value.

Global flags
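
For example, to list completed runs of one job within a time window. The timestamps are UTC epoch milliseconds, as the Jobs API expects; the job ID and window values are illustrative:

```shell
# Completed runs of job 123 that started between two UTC timestamps
# (epoch milliseconds). All values are placeholders.
databricks jobs list-runs --job-id 123 --completed-only \
  --start-time-from 1735689600000 --start-time-to 1735776000000
```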

databricks jobs repair-run

Re-run one or more job tasks. Tasks are re-run as part of the original job run. They use the current job and task settings, and can be viewed in the history for the original job run.

databricks jobs repair-run RUN_ID [flags]

Arguments

RUN_ID

    The job run ID of the run to repair. The run must not be in progress.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--latest-repair-id int

    The ID of the latest repair.

--no-wait

    Do not wait to reach TERMINATED or SKIPPED state.

--performance-target PerformanceTarget

    The performance mode on a serverless job. Supported values: [PERFORMANCE_OPTIMIZED, STANDARD]

--rerun-all-failed-tasks

    If true, repair all failed tasks.

--rerun-dependent-tasks

    If true, repair all tasks that depend on the tasks in rerun_tasks, even if they were previously successful.

--timeout duration

    Maximum amount of time to reach TERMINATED or SKIPPED state (default 20m0s).

Global flags
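
A common repair is to re-run only the tasks that failed, plus anything downstream of them (the run ID is illustrative):

```shell
# Re-run the failed tasks of run 456, and also re-run any tasks that
# depend on them even if those succeeded. The run ID is a placeholder.
databricks jobs repair-run 456 --rerun-all-failed-tasks --rerun-dependent-tasks
```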

databricks jobs reset

Overwrite all settings for the given job. Use the databricks jobs update command to update job settings partially.

databricks jobs reset [flags]

Arguments

None

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks jobs run-now

Run a job and return the run_id of the triggered run.

databricks jobs run-now JOB_ID [flags]

Arguments

JOB_ID

    The ID of the job to be executed. This field is required.

Options

--idempotency-token string

    An optional token to guarantee the idempotency of job run requests.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--no-wait

    Do not wait to reach TERMINATED or SKIPPED state.

--performance-target PerformanceTarget

    The performance mode on a serverless job. Supported values: [PERFORMANCE_OPTIMIZED, STANDARD]

--timeout duration

    Maximum amount of time to reach TERMINATED or SKIPPED state (default 20m0s).

Global flags
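
For example, to trigger a job and wait for it to finish (the job ID and token are illustrative). The idempotency token makes retries of the request safe: repeating the command with the same token returns the existing run instead of starting another one:

```shell
# Trigger job 123 and wait up to 45 minutes for a terminal state.
# The job ID and idempotency token are placeholders.
databricks jobs run-now 123 --idempotency-token backfill-2024-06-01 --timeout 45m
```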

databricks jobs submit

Create and trigger a one-time run. This allows you to submit a workload directly without creating a job.

databricks jobs submit [flags]

Arguments

None

Options

--budget-policy-id string

    The user-specified ID of the budget policy to use for this one-time run.

--idempotency-token string

    An optional token that can be used to guarantee the idempotency of job run requests.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--no-wait

    Do not wait to reach TERMINATED or SKIPPED state.

--run-name string

    An optional name for the run.

--timeout duration

    Maximum amount of time to reach TERMINATED or SKIPPED state (default 20m0s).

--timeout-seconds int

    An optional timeout applied to each run of this job.

Global flags
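
A minimal sketch of a one-time run submitted directly, without creating a job first. The run name, task key, notebook path, and cluster ID are placeholders:

```shell
# Submit a one-time run with a single notebook task.
# All values below are illustrative.
databricks jobs submit --json '{
  "run_name": "ad-hoc-backfill",
  "tasks": [
    {
      "task_key": "backfill",
      "notebook_task": { "notebook_path": "/Workspace/Shared/backfill" },
      "existing_cluster_id": "1234-567890-abcde123"
    }
  ]
}'
```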

databricks jobs update

Add, update, or remove specific settings of an existing job. Use reset to overwrite all job settings.

databricks jobs update JOB_ID [flags]

Arguments

JOB_ID

    The canonical identifier of the job to update. This field is required.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags
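
For example, to change only a couple of settings while leaving the rest of the job untouched. This sketch assumes the request body follows the Jobs API update shape, where changed settings go under `new_settings`; the job ID and values are illustrative:

```shell
# Partially update job 123: change only the name and timeout.
# The job ID and settings values are placeholders.
databricks jobs update 123 --json '{
  "new_settings": {
    "name": "nightly-etl-v2",
    "timeout_seconds": 3600
  }
}'
```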

databricks jobs get-permission-levels

Get job permission levels.

databricks jobs get-permission-levels JOB_ID [flags]

Arguments

JOB_ID

    The job for which to get or manage permissions.

Options

Global flags

databricks jobs get-permissions

Get the permissions of a job. Jobs can inherit permissions from their root object.

databricks jobs get-permissions JOB_ID [flags]

Arguments

JOB_ID

    The job for which to get or manage permissions.

Options

Global flags

databricks jobs set-permissions

Set job permissions.

Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.

databricks jobs set-permissions JOB_ID [flags]

Arguments

JOB_ID

    The job for which to get or manage permissions.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags
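
Because this command replaces all direct permissions, the request body should list every grant you want to keep. A sketch, assuming the Jobs permissions API body shape (`access_control_list` entries with a principal and a `permission_level`); the job ID, user, and group names are illustrative:

```shell
# Replace the direct permissions on job 123 with exactly these grants.
# The user and group names are placeholders.
databricks jobs set-permissions 123 --json '{
  "access_control_list": [
    { "user_name": "someone@example.com", "permission_level": "CAN_MANAGE" },
    { "group_name": "data-engineers", "permission_level": "CAN_MANAGE_RUN" }
  ]
}'
```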

databricks jobs update-permissions

Update the permissions on a job. Jobs can inherit permissions from their root object.

databricks jobs update-permissions JOB_ID [flags]

Arguments

JOB_ID

    The job for which to get or manage permissions.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

Global flags

--debug

    Whether to enable debug logging.

-h or --help

    Display help for the Databricks CLI, the current command group, or the current command.

--log-file string

    The file to write output logs to. If this flag is not specified, output logs are written to stderr.

--log-format format

    The log format type, text or json. The default value is text.

--log-level string

    The log level to write at. If this flag is not specified, logging is disabled.

-o, --output type

    The command output type, text or json. The default value is text.

-p, --profile string

    The name of the profile in the ~/.databrickscfg file to use to run the command. If this flag is not specified, the profile named DEFAULT is used, if it exists.

--progress-format format

    The format to display progress logs: default, append, inplace, or json.

-t, --target string

    If applicable, the bundle target to use.