Azure Databricks CLI: A Complete Practical Guide

Many data engineers start by building Databricks pipelines directly in notebooks using the web interface. That works well early on — but as projects grow and teams adopt software engineering practices, you need to develop pipelines locally, use version control, and automate deployments. That’s where the Databricks CLI comes in.

What is the Databricks CLI?

The Databricks CLI is an open-source command-line interface built on top of the Databricks REST APIs. Instead of clicking through the UI or writing HTTP requests manually, you type a command — and the CLI handles the rest.

Main use cases

  • Deploying and managing Databricks Asset Bundles (DABs)
  • Automating job creation, cluster management, and permissions
  • Authenticating and switching between multiple workspaces
  • Integrating Databricks into CI/CD pipelines (Azure DevOps, GitHub Actions)
  • AI-driven development using the newly launched AI Dev Kit (ADK)

How does the CLI talk to Databricks?

When you run a CLI command, it reads your .databrickscfg configuration file, builds an HTTPS request using your credentials, and sends it to the Databricks Control Plane — the brain of your workspace that manages jobs, clusters, and permissions.

The CLI never directly touches your data plane. It communicates exclusively with the control plane via REST APIs.
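Under the hood, every CLI command maps to a REST endpoint on the control plane. As a rough sketch (the endpoint shown is the Jobs API 2.1 list call; the host and token environment variables are placeholders you would set yourself), the following two invocations are roughly equivalent:

```shell
# High-level CLI call
databricks jobs list

# Roughly the same request issued manually against the control plane
# (assumes DATABRICKS_HOST and DATABRICKS_TOKEN are exported; Jobs API 2.1)
curl -s \
  -H "Authorization: Bearer $DATABRICKS_TOKEN" \
  "$DATABRICKS_HOST/api/2.1/jobs/list"
```

The CLI saves you from building these requests, handling pagination, and parsing responses by hand.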

How to install the Databricks CLI

Mac / Linux

curl -fsSL \
  https://raw.githubusercontent.com/databricks/setup-cli/main/install.sh \
  | sh

The same command also upgrades an existing installation.

Windows

# Install
winget install Databricks.DatabricksCLI

# Upgrade
winget upgrade Databricks.DatabricksCLI

Verify installation

databricks --version

The .databrickscfg configuration file

This file stores your workspace connection details. It lives in your home directory: /Users/<username>/.databrickscfg on macOS, /home/<username>/.databrickscfg on Linux, and C:\Users\<username>\.databrickscfg on Windows.

[DEFAULT]
host = https://adb-1234567890.12.azuredatabricks.net
token = dapi1234567890abcdef

[DEV]
host = https://adb-0987654321.15.azuredatabricks.net
token = dapi9876543210fedcba

[TEST]
host = https://adb-0987654321.15.azuredatabricks.net
azure_client_id = <x>
azure_tenant_id = <y>
azure_client_secret = <z>

Each block in square brackets is a profile. To use a specific profile:

databricks jobs list --profile DEV
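Besides the --profile flag, the CLI also honors the DATABRICKS_CONFIG_PROFILE environment variable, which is convenient in scripts where repeating the flag on every command gets noisy:

```shell
# One-off command against the DEV profile
databricks clusters list --profile DEV

# Or set a default profile for the whole shell session
export DATABRICKS_CONFIG_PROFILE=DEV
databricks clusters list
```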

Authentication options

1. Personal Access Token (PAT)

Generate a token in Databricks UI under Settings → Developer → Access Tokens. Good for local development and learning. Not recommended for production — tokens are user-based and need manual rotation.

2. Service Principal + Client Secret

Create a Microsoft Entra ID service principal, assign it to the workspace, and generate a scoped token. Best for CI/CD pipelines and production — authentication is not tied to a user account.
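In a CI/CD pipeline you typically skip the config file entirely and supply the service principal's credentials as environment variables. A sketch with placeholder values (the ARM_* variable names follow the CLI's Azure authentication conventions):

```shell
# Workspace URL plus Entra ID service principal credentials
export DATABRICKS_HOST="https://adb-0987654321.15.azuredatabricks.net"
export ARM_CLIENT_ID="<application-id>"
export ARM_TENANT_ID="<tenant-id>"
export ARM_CLIENT_SECRET="<client-secret>"

# Sanity check: prints the identity the CLI resolved
databricks current-user me
```

In Azure DevOps or GitHub Actions, the secret would come from a secure variable or secrets store rather than being exported in plain text.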

3. Microsoft Entra ID (OAuth)

The modern recommended approach for Azure. Avoids storing static tokens in config files.

databricks auth login --host https://adb-123456780.12.azuredatabricks.net
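After logging in, you can confirm which profiles are configured and whether their credentials are still valid:

```shell
# List all profiles from .databrickscfg along with their auth status
databricks auth profiles
```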

How CLI commands are structured

Most commands follow the same pattern: resource → subcommand → flags

databricks <resource> <subcommand> [flags]

# Examples
databricks jobs list
databricks jobs create
databricks jobs delete --job-id 123
databricks jobs run-now --job-id 123
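Most subcommands can also emit JSON, which makes the CLI easy to chain with tools like jq (assumed to be installed separately); for example, pulling out only the job IDs — the job_id field name here follows the Jobs API response shape:

```shell
# Emit the job list as JSON and extract the IDs with jq
databricks jobs list --output json | jq '.[].job_id'
```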

Exploring commands with --help

# See all top-level commands
databricks --help

# See subcommands for a resource
databricks jobs --help

# See options for a specific subcommand
databricks jobs create --help

Other ways to use the Databricks CLI

Besides local installation, you can use the CLI via the Web Terminal inside your Databricks workspace (no local install needed), or via the %sh magic command inside a Databricks notebook to run CLI commands as part of a job or workflow.
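In a notebook, a cell starting with the %sh magic runs on the cluster driver. Recent Databricks Runtime versions ship the CLI preinstalled; on older runtimes you would install it in the cell first, and depending on the runtime you may also need to configure authentication explicitly:

```shell
%sh
# Runs on the cluster driver node
databricks --version
databricks jobs list
```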

Summary

The Databricks CLI is the essential bridge between your local machine and your Databricks workspace — and the foundation for everything from deploying asset bundles to running production CI/CD pipelines. Once you understand the install, auth options, and command structure, the whole Databricks ecosystem becomes much easier to work with.
