experiments command group

Note

This information applies to Databricks CLI versions 0.205 and above. The Databricks CLI is in Public Preview.

Databricks CLI use is subject to the Databricks License and Databricks Privacy Notice, including any Usage Data provisions.

The experiments command group within the Databricks CLI allows you to create, edit, delete, and manage experiments in MLflow. See Organize training runs with MLflow experiments.

databricks experiments create-experiment

Create an experiment with the specified name and return the ID of the newly created experiment. The command fails if another experiment with the same name already exists.

Throws RESOURCE_ALREADY_EXISTS if an experiment with the given name exists.

databricks experiments create-experiment NAME [flags]
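
For example, the following command creates an experiment at a hypothetical workspace path (substitute your own experiment name):

databricks experiments create-experiment /Users/someone@example.com/my-experiment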

Arguments

NAME

    Experiment name.

Options

--artifact-location string

    Location where all artifacts for the experiment are stored.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments create-run

Create a new run within an experiment. A run is usually a single execution of a machine learning or data ETL pipeline. MLflow uses runs to track the params (mlflowParam), metrics (mlflowMetric), and tags (mlflowRunTag) associated with a single execution.

databricks experiments create-run [flags]
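
For example, the following command creates a run named my-run in an experiment; the experiment ID shown is a placeholder:

databricks experiments create-run --experiment-id 123456789 --run-name my-run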

Arguments

None

Options

--experiment-id string

    ID of the associated experiment.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--run-name string

    The name of the run.

--start-time int

    Unix timestamp in milliseconds of when the run started.

--user-id string

    ID of the user executing the run.

Global flags

databricks experiments delete-experiment

Mark an experiment and associated metadata, runs, metrics, params, and tags for deletion. If the experiment uses FileStore, artifacts associated with the experiment are also deleted.

databricks experiments delete-experiment EXPERIMENT_ID [flags]
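
For example, the following command marks an experiment for deletion; the experiment ID shown is a placeholder:

databricks experiments delete-experiment 123456789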

Arguments

EXPERIMENT_ID

    ID of the associated experiment.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments delete-run

Mark a run for deletion.

databricks experiments delete-run RUN_ID [flags]
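
For example, the following command marks a run for deletion; the run ID shown is a placeholder:

databricks experiments delete-run a1b2c3d4e5f6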

Arguments

RUN_ID

    ID of the run to delete.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments delete-runs

Bulk delete runs in an experiment that were created prior to or at the specified timestamp. Deletes at most max_runs runs per request.

databricks experiments delete-runs EXPERIMENT_ID MAX_TIMESTAMP_MILLIS [flags]
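
For example, the following command deletes up to 100 runs created at or before a placeholder timestamp in a placeholder experiment:

databricks experiments delete-runs 123456789 1704067200000 --max-runs 100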

Arguments

EXPERIMENT_ID

    The ID of the experiment containing the runs to delete.

MAX_TIMESTAMP_MILLIS

    The maximum creation timestamp in milliseconds since the UNIX epoch for deleting runs. Only runs created prior to or at this timestamp are deleted.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--max-runs int

    An optional positive integer indicating the maximum number of runs to delete.

Global flags

databricks experiments delete-tag

Delete a tag on a run. Tags are run metadata that can be updated during a run and after a run completes.

databricks experiments delete-tag RUN_ID KEY [flags]
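
For example, the following command deletes a tag named my-tag from a run; the run ID shown is a placeholder:

databricks experiments delete-tag a1b2c3d4e5f6 my-tag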

Arguments

RUN_ID

    ID of the run that the tag was logged under. Required.

KEY

    Name of the tag. Maximum size is 255 bytes. Required.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments get-by-name

Get metadata for an experiment with the specified name.

This command returns deleted experiments, but prefers the active experiment if an active experiment and a deleted experiment share the same name. If multiple deleted experiments share the same name, the API returns one of them.

Throws RESOURCE_DOES_NOT_EXIST if no experiment with the specified name exists.

databricks experiments get-by-name EXPERIMENT_NAME [flags]
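
For example, the following command gets an experiment by its hypothetical workspace path:

databricks experiments get-by-name /Users/someone@example.com/my-experiment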

Arguments

EXPERIMENT_NAME

    Name of the associated experiment.

Options

Global flags

databricks experiments get-experiment

Get metadata for an experiment with the specified ID. This command works on deleted experiments.

databricks experiments get-experiment EXPERIMENT_ID [flags]
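
For example, the following command gets an experiment as JSON; the experiment ID shown is a placeholder:

databricks experiments get-experiment 123456789 -o json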

Arguments

EXPERIMENT_ID

    ID of the associated experiment.

Options

Global flags

databricks experiments get-history

Get a list of all values for the specified metric for a given run.

databricks experiments get-history METRIC_KEY [flags]
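
For example, the following command lists all logged values of a metric named accuracy for a run; the run ID shown is a placeholder:

databricks experiments get-history accuracy --run-id a1b2c3d4e5f6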

Arguments

METRIC_KEY

    Name of the metric.

Options

--max-results int

    Maximum number of Metric records to return per paginated request.

--page-token string

    Token indicating the page of metric histories to fetch.

--run-id string

    ID of the run from which to fetch metric values.

--run-uuid string

    Deprecated, use --run-id instead. ID of the run from which to fetch metric values.

Global flags

databricks experiments get-run

Get the metadata, metrics, params, and tags for a run. If multiple metrics with the same key are logged for a run, only the value with the latest timestamp is returned.

If there are multiple values with the latest timestamp, the maximum of these values is returned.

databricks experiments get-run RUN_ID [flags]
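
For example, the following command gets a run as JSON; the run ID shown is a placeholder:

databricks experiments get-run a1b2c3d4e5f6 -o json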

Arguments

RUN_ID

    ID of the run to fetch. Must be provided.

Options

--run-uuid string

    Deprecated, use --run-id instead. ID of the run to fetch.

Global flags

databricks experiments list-artifacts

List artifacts for a run. Takes an optional artifact_path prefix; if specified, the response contains only artifacts with that prefix. A maximum of 1000 artifacts is retrieved for Unity Catalog volumes. Use databricks fs ls to list artifacts in Unity Catalog volumes, which supports pagination.

databricks experiments list-artifacts [flags]
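
For example, the following command lists artifacts under a hypothetical model path for a run; the run ID shown is a placeholder:

databricks experiments list-artifacts --run-id a1b2c3d4e5f6 --path model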

Arguments

None

Options

--page-token string

    The token indicating the page of artifact results to fetch.

--path string

    Filter artifacts matching this path (a relative path from the root artifact directory).

--run-id string

    ID of the run whose artifacts to list.

--run-uuid string

    Deprecated, use --run-id instead. ID of the run whose artifacts to list.

Global flags

databricks experiments list-experiments

Get a list of all experiments.

databricks experiments list-experiments [flags]
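
For example, the following command lists up to 10 active experiments as JSON:

databricks experiments list-experiments --max-results 10 --view-type ACTIVE_ONLY -o json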

Arguments

None

Options

--max-results int

    Maximum number of experiments desired.

--page-token string

    Token indicating the page of experiments to fetch.

--view-type ViewType

    Qualifier for type of experiments to be returned. Supported values: ACTIVE_ONLY, ALL, DELETED_ONLY

Global flags

databricks experiments log-batch

Log a batch of metrics, params, and tags for a run. If any data fails to be persisted, the server responds with an error (non-200 status code). For overwrite behavior and request limits, see Experiments.

databricks experiments log-batch [flags]
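
For example, the following command logs one metric and one param in a single request. The run ID and values are placeholders, and the field names are assumed to follow the MLflow log-batch request body (metrics entries with key, value, and timestamp; params entries with key and value):

databricks experiments log-batch --json '{"run_id": "a1b2c3d4e5f6", "metrics": [{"key": "accuracy", "value": 0.91, "timestamp": 1704067200000}], "params": [{"key": "lr", "value": "0.01"}]}'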

Arguments

None

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--run-id string

    ID of the run to log under.

Global flags

databricks experiments log-inputs

Note

This command is experimental.

Log inputs, such as datasets and models, to an MLflow run.

databricks experiments log-inputs RUN_ID [flags]
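
For example, the following command logs inputs for a run by reading the request body from a hypothetical local file; the run ID is a placeholder, and inputs.json is assumed to contain a valid log-inputs request body (such as a datasets list):

databricks experiments log-inputs a1b2c3d4e5f6 --json @inputs.json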

Arguments

RUN_ID

    ID of the run to log under.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments log-metric

Log a metric for a run. A metric is a key-value pair (string key, float value) with an associated timestamp. Examples include the various metrics that represent ML model accuracy. A metric can be logged multiple times.

databricks experiments log-metric KEY VALUE TIMESTAMP [flags]
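
For example, the following command logs the value 0.91 for a metric named accuracy at step 3; the run ID and timestamp are placeholders:

databricks experiments log-metric accuracy 0.91 1704067200000 --run-id a1b2c3d4e5f6 --step 3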

Arguments

KEY

    Name of the metric.

VALUE

    Double value of the metric being logged.

TIMESTAMP

    Unix timestamp in milliseconds at the time metric was logged.

Options

--dataset-digest string

    Dataset digest of the dataset associated with the metric.

--dataset-name string

    The name of the dataset associated with the metric.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--model-id string

    ID of the logged model associated with the metric, if applicable.

--run-id string

    ID of the run under which to log the metric.

--run-uuid string

    Deprecated, use --run-id instead. ID of the run under which to log the metric.

--step int

    Step at which to log the metric.

Global flags

databricks experiments log-model

Log a model.

Note

This command is experimental.

databricks experiments log-model [flags]
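
For example, the following command logs a model for a run by passing the contents of a hypothetical MLmodel.json file; the run ID is a placeholder, and the file is assumed to contain the model's MLmodel definition in JSON format:

databricks experiments log-model --run-id a1b2c3d4e5f6 --model-json "$(cat MLmodel.json)"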

Arguments

None

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--model-json string

    MLmodel file in json format.

--run-id string

    ID of the run to log under.

Global flags

databricks experiments log-param

Log a param used for a run. A param is a key-value pair (string key, string value). Examples include hyperparameters used for ML model training and constant dates and values used in an ETL pipeline. A param can be logged only once for a run.

databricks experiments log-param KEY VALUE [flags]
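
For example, the following command logs a param named lr with the value 0.01; the run ID shown is a placeholder:

databricks experiments log-param lr 0.01 --run-id a1b2c3d4e5f6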

Arguments

KEY

    Name of the param. Maximum size is 255 bytes.

VALUE

    String value of the param being logged. Maximum size is 500 bytes.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--run-id string

    ID of the run under which to log the param.

--run-uuid string

    Deprecated, use --run-id instead. ID of the run under which to log the param.

Global flags

databricks experiments restore-experiment

Restore an experiment marked for deletion. This also restores associated metadata, runs, metrics, params, and tags. If the experiment uses FileStore, underlying artifacts associated with the experiment are also restored.

Throws RESOURCE_DOES_NOT_EXIST if the experiment was never created or was permanently deleted.

databricks experiments restore-experiment EXPERIMENT_ID [flags]
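
For example, the following command restores an experiment marked for deletion; the experiment ID shown is a placeholder:

databricks experiments restore-experiment 123456789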

Arguments

EXPERIMENT_ID

    ID of the associated experiment.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments restore-run

Restore a deleted run. This also restores associated metadata, runs, metrics, params, and tags.

Throws RESOURCE_DOES_NOT_EXIST if the run was never created or was permanently deleted.

databricks experiments restore-run RUN_ID [flags]
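
For example, the following command restores a deleted run; the run ID shown is a placeholder:

databricks experiments restore-run a1b2c3d4e5f6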

Arguments

RUN_ID

    ID of the run to restore.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments restore-runs

Bulk restore runs in an experiment that were deleted no earlier than the specified timestamp. Restores at most max_runs per request.

databricks experiments restore-runs EXPERIMENT_ID MIN_TIMESTAMP_MILLIS [flags]
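
For example, the following command restores up to 50 runs deleted at or after a placeholder timestamp in a placeholder experiment:

databricks experiments restore-runs 123456789 1704067200000 --max-runs 50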

Arguments

EXPERIMENT_ID

    The ID of the experiment containing the runs to restore.

MIN_TIMESTAMP_MILLIS

    The minimum deletion timestamp in milliseconds since the UNIX epoch for restoring runs. Only runs deleted no earlier than this timestamp are restored.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--max-runs int

    An optional positive integer indicating the maximum number of runs to restore.

Global flags

databricks experiments search-experiments

Search for experiments that satisfy the specified search criteria.

databricks experiments search-experiments [flags]
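
For example, the following command searches active experiments whose name contains a hypothetical substring; the filter syntax is assumed to follow MLflow search filter expressions:

databricks experiments search-experiments --filter "name LIKE '%my-experiment%'" --view-type ACTIVE_ONLY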

Arguments

None

Options

--filter string

    String representing a SQL filter condition.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--max-results int

    Maximum number of experiments desired.

--page-token string

    Token indicating the page of experiments to fetch.

--view-type ViewType

    Qualifier for type of experiments to be returned. Supported values: ACTIVE_ONLY, ALL, DELETED_ONLY

Global flags

databricks experiments search-runs

Search for runs that satisfy expressions. Search expressions can use metric (mlflowMetric) and param (mlflowParam) keys.

databricks experiments search-runs [flags]
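
For example, the following command searches runs in a placeholder experiment that have an accuracy metric above 0.9; the field names are assumed to follow the MLflow search-runs request body:

databricks experiments search-runs --json '{"experiment_ids": ["123456789"], "filter": "metrics.accuracy > 0.9", "max_results": 10}'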

Arguments

None

Options

--filter string

    A filter expression over params, metrics, and tags that allows returning a subset of runs.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--max-results int

    Maximum number of runs desired.

--page-token string

    Token for the current page of runs.

--run-view-type ViewType

    Whether to display only active, only deleted, or all runs. Supported values: ACTIVE_ONLY, ALL, DELETED_ONLY

Global flags

databricks experiments set-experiment-tag

Set a tag on an experiment. Experiment tags are metadata that can be updated.

databricks experiments set-experiment-tag EXPERIMENT_ID KEY VALUE [flags]
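
For example, the following command sets a tag named team to the value ml on an experiment; the experiment ID shown is a placeholder:

databricks experiments set-experiment-tag 123456789 team ml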

Arguments

EXPERIMENT_ID

    ID of the experiment under which to log the tag. Must be provided.

KEY

    Name of the tag. Keys up to 250 bytes in size are supported.

VALUE

    String value of the tag being logged. Values up to 64KB in size are supported.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments set-tag

Set a tag on a run. Tags are run metadata that can be updated during a run and after a run completes.

databricks experiments set-tag KEY VALUE [flags]
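
For example, the following command sets a tag named stage to the value staging on a run; the run ID shown is a placeholder:

databricks experiments set-tag stage staging --run-id a1b2c3d4e5f6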

Arguments

KEY

    Name of the tag. Keys up to 250 bytes in size are supported.

VALUE

    String value of the tag being logged. Values up to 64KB in size are supported.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--run-id string

    ID of the run under which to log the tag.

--run-uuid string

    Deprecated, use --run-id instead. ID of the run under which to log the tag.

Global flags

databricks experiments update-experiment

Update an experiment.

databricks experiments update-experiment EXPERIMENT_ID [flags]
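
For example, the following command renames an experiment to a hypothetical new workspace path; the experiment ID shown is a placeholder:

databricks experiments update-experiment 123456789 --new-name /Users/someone@example.com/renamed-experiment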

Arguments

EXPERIMENT_ID

    ID of the associated experiment.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--new-name string

    If provided, the experiment's name is changed to the new name.

Global flags

databricks experiments update-run

Update a run.

databricks experiments update-run [flags]
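
For example, the following command marks a run as finished and records an end time; the run ID and timestamp are placeholders:

databricks experiments update-run --run-id a1b2c3d4e5f6 --status FINISHED --end-time 1704067200000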

Arguments

None

Options

--end-time int

    Unix timestamp in milliseconds of when the run ended.

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

--run-id string

    ID of the run to update.

--run-name string

    Updated name of the run.

--run-uuid string

    Deprecated, use --run-id instead. ID of the run to update.

--status UpdateRunStatus

    Updated status of the run. Supported values: FAILED, FINISHED, KILLED, RUNNING, SCHEDULED

Global flags

databricks experiments get-permission-levels

Get experiment permission levels.

databricks experiments get-permission-levels EXPERIMENT_ID [flags]
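
For example, the following command gets the permission levels available for an experiment; the experiment ID shown is a placeholder:

databricks experiments get-permission-levels 123456789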

Arguments

EXPERIMENT_ID

    The experiment for which to get or manage permissions.

Options

Global flags

databricks experiments get-permissions

Get the permissions of an experiment. Experiments can inherit permissions from their root object.

databricks experiments get-permissions EXPERIMENT_ID [flags]
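
For example, the following command gets the permissions of an experiment as JSON; the experiment ID shown is a placeholder:

databricks experiments get-permissions 123456789 -o json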

Arguments

EXPERIMENT_ID

    The experiment for which to get or manage permissions.

Options

Global flags

databricks experiments set-permissions

Set experiment permissions.

Sets permissions on an object, replacing existing permissions if they exist. Deletes all direct permissions if none are specified. Objects can inherit permissions from their root object.

databricks experiments set-permissions EXPERIMENT_ID [flags]
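
For example, the following command grants CAN_READ on an experiment to a hypothetical group; the experiment ID is a placeholder, and the request body fields are assumed to follow the Databricks permissions API (access_control_list entries with a principal and a permission_level):

databricks experiments set-permissions 123456789 --json '{"access_control_list": [{"group_name": "data-scientists", "permission_level": "CAN_READ"}]}'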

Arguments

EXPERIMENT_ID

    The experiment for which to get or manage permissions.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

databricks experiments update-permissions

Update experiment permissions. Experiments can inherit permissions from their root object.

databricks experiments update-permissions EXPERIMENT_ID [flags]
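
For example, the following command adds CAN_EDIT on an experiment for a hypothetical user; the experiment ID is a placeholder, and the request body is assumed to have the same shape as for set-permissions:

databricks experiments update-permissions 123456789 --json '{"access_control_list": [{"user_name": "someone@example.com", "permission_level": "CAN_EDIT"}]}'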

Arguments

EXPERIMENT_ID

    The experiment for which to get or manage permissions.

Options

--json JSON

    The inline JSON string or the @path to the JSON file with the request body.

Global flags

Global flags

--debug

    Whether to enable debug logging.

-h, --help

    Display help for the Databricks CLI, the related command group, or the related command.

--log-file string

    The file to write output logs to. If this flag is not specified, output logs are written to stderr.

--log-format format

    The log format type, text or json. The default value is text.

--log-level string

    The log level. If this flag is not specified, logging is disabled.

-o, --output type

    The command output type, text or json. The default value is text.

-p, --profile string

    The name of the profile in the ~/.databrickscfg file to use to run the command. If this flag is not specified, the profile named DEFAULT is used if it exists.

--progress-format format

    The format to display progress logs: default, append, inplace, or json.

-t, --target string

    If applicable, the bundle target to use.
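
For example, the following command combines global flags to list experiments as JSON using a hypothetical configuration profile named DEV:

databricks experiments list-experiments -o json -p DEV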