Databricks is a data intelligence platform that combines warehouse storage, SQL compute, and cluster resources. The Databricks connector lets you build Lovable apps and dashboards on top of your existing Databricks data without exporting CSVs or waiting on engineering tickets. With Databricks connected, your app can:
  • Run SQL queries against your warehouse data
  • List and manage SQL warehouses
  • List clusters in your Databricks workspace
  • Build live dashboards that query data in real time
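For context, "run SQL queries" maps to Databricks' SQL Statement Execution API (`POST /api/2.0/sql/statements/`). A minimal sketch of the request a query helper might construct; the warehouse ID and SQL below are placeholders, not real values:

```python
# Sketch: build a Databricks SQL Statement Execution API request.
# warehouse_id and the SQL text are placeholders, not real values.

def build_statement_request(workspace_url: str, warehouse_id: str, sql: str) -> dict:
    """Return the URL and JSON body for a synchronous statement execution."""
    return {
        "url": f"{workspace_url}/api/2.0/sql/statements/",
        "body": {
            "warehouse_id": warehouse_id,
            "statement": sql,
            "wait_timeout": "30s",  # block up to 30s for small queries
        },
    }

req = build_statement_request(
    "https://dbc-abc123.cloud.databricks.com",
    "abcdef1234567890",
    "SELECT count(*) AS dau FROM analytics.daily_active_users",
)
```

In practice you never send this request yourself: Lovable's gateway attaches the service principal's token and forwards the call.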
Authentication uses a Databricks service principal. Your credentials are stored securely in Lovable’s gateway and are never exposed to the browser or your app’s frontend code.

Common use cases and example apps

Example app: Live KPI dashboard
Prompt: Build a dashboard that queries our Databricks warehouse and shows MRR, DAU, and churn rate. Auto-refresh every 5 minutes.
Description: Replace static slides with a live dashboard on your warehouse data. The app queries Databricks directly and displays key metrics that stay up to date without manual exports.

Example app: Revenue pipeline tracker
Prompt: Build a pipeline tracker that pulls revenue and deal data from our Databricks tables and shows a funnel view with filters by region and quarter.
Description: Give RevOps a self-serve view of pipeline data that lives in the warehouse. The app queries Databricks tables where your CRM data lands and presents it in a structured, filterable view.

Example app: Team metrics explorer
Prompt: Build a metrics explorer where users pick a team and date range, then see charts for their key metrics pulled from Databricks.
Description: Let teams explore their own metrics without filing data requests. The app runs parameterized SQL queries and renders results as charts, scoped to each team’s data.

Example app: Data quality monitor
Prompt: Build an internal tool that runs data quality checks against our warehouse tables and flags anomalies.
Description: Catch data issues before they reach downstream consumers. The app runs validation queries on a schedule and surfaces failures in a clean internal view.

Example app: Executive summary bot
Prompt: Build a Slack bot that answers natural language data questions by querying our Databricks warehouse.
Description: Turn your warehouse into a conversational interface for leadership. The app translates questions into SQL, queries Databricks, and posts formatted answers to Slack.

How Databricks connections work

The Databricks connector uses service principal authentication (M2M OAuth). Instead of connecting as an individual user, you create a service principal in Databricks with access to specific tables and views, then provide its credentials to Lovable.
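For reference, Databricks M2M OAuth is a standard client-credentials flow: the client ID and secret are exchanged at the workspace's `/oidc/v1/token` endpoint for a short-lived access token. Lovable's gateway performs this exchange for you; the sketch below only illustrates the shape of the request (credentials are placeholders, and real secrets should never appear in app code):

```python
# Sketch of the client-credentials exchange the gateway performs.
# client_id and client_secret below are placeholders.
import base64

def build_token_request(workspace_url: str, client_id: str, client_secret: str) -> dict:
    """Return URL, headers, and form data for the Databricks M2M token endpoint."""
    basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": f"{workspace_url}/oidc/v1/token",
        "headers": {"Authorization": f"Basic {basic}"},
        "data": {"grant_type": "client_credentials", "scope": "all-apis"},
    }

req = build_token_request(
    "https://dbc-abc123.cloud.databricks.com", "my-client-id", "my-secret"
)
```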

What this means for data access

The service principal’s permissions determine what data is available to everyone who uses that connection. Lovable does not filter results based on the individual user’s Databricks permissions. For example, if you create a service principal with access to HR tables, everyone with access to that connection in Lovable can query HR data. Recommended approach: one service principal per access role. Create separate service principals scoped to different data:
  • databricks-engineering: full warehouse access, only engineers get this connection in Lovable
  • databricks-sales: pipeline and revenue tables only, sales team gets this connection
  • databricks-company: company-wide safe metrics, everyone gets this connection
Lovable controls who can use each connection. Databricks controls what each service principal can query. Together, they provide role-based data access without requiring per-user OAuth.
You can create multiple Databricks connections in a workspace, each with a different service principal and different access settings.
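On the Databricks side, scoping each service principal comes down to Unity Catalog grants. A hedged sketch that generates GRANT statements for a role layout like the one above; the schema names are hypothetical, so adapt them to your own catalog:

```python
# Sketch: generate Unity Catalog GRANT statements per service principal.
# Schema names below are hypothetical examples.

ROLE_GRANTS = {
    "databricks-engineering": ["main.raw", "main.staging", "main.analytics"],
    "databricks-sales": ["main.pipeline", "main.revenue"],
    "databricks-company": ["main.company_metrics"],
}

def grant_statements(role_grants: dict) -> list:
    """One USE SCHEMA + SELECT grant per (principal, schema) pair."""
    stmts = []
    for principal, schemas in role_grants.items():
        for schema in schemas:
            stmts.append(f"GRANT USE SCHEMA ON SCHEMA {schema} TO `{principal}`;")
            stmts.append(f"GRANT SELECT ON SCHEMA {schema} TO `{principal}`;")
    return stmts

stmts = grant_statements(ROLE_GRANTS)
```

The principals also need USE CATALOG on the enclosing catalog; run the generated statements in a Databricks SQL editor as a metastore admin or schema owner.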
The Databricks connector uses Lovable’s gateway architecture for secure OAuth handling and automatic token refresh. See Gateway-based connectors for details on authentication and usage limits.

How to connect Databricks

Workspace admins and owners can connect Databricks.

Prerequisites

Before connecting, make sure you have:
  • A Databricks workspace with at least one SQL warehouse
  • A service principal configured in Databricks with an OAuth secret (see Databricks M2M OAuth setup)
  • The service principal’s client ID and client secret
  • Your Databricks workspace URL (e.g. https://dbc-abc123.cloud.databricks.com)
  • Lovable workspace admin or owner role

Set up your Databricks connection

1

Navigate to Databricks connector

Go to Settings → Connectors → Shared connectors and select Databricks.
2

Add a new connection

Click Add connection.
3

Name the connection

In Display name, enter a name for the connection (for example, Databricks Engineering or Databricks Sales). Use a name that reflects the access level of the service principal.
4

Enter your credentials

  • Workspace URL: your Databricks workspace URL (e.g. https://dbc-abc123.cloud.databricks.com)
  • Client ID: the service principal’s OAuth client ID
  • Client secret: the service principal’s OAuth client secret
5

Create the connection

Click Create. Lovable verifies the credentials and connects to your Databricks workspace.
When connected, you can link the connection to projects and start building apps that query your Databricks data.

Configure who has access

After creating a connection, you can choose who in your workspace can use it. See Connection-level access for details. This is especially important for Databricks, since the service principal’s access level determines what data is visible. Restricting connection access to the right team ensures that only authorized people can build with that data.

Building a semantic layer

Every Databricks use case benefits from a semantic layer: a shared definition of what your key metrics mean, which tables to use, and what assumptions they carry. What counts as a “daily active user”? How is MRR calculated? Which view should be used for churn, and does it exclude trials? Without this shared context, each app or dashboard risks computing the same metric differently.

If you already have a semantic layer

If your Databricks workspace already has a semantic layer (for example, dbt metrics, Unity Catalog tags, or a YAML definitions file), point Lovable to it:
Use our semantic layer at catalog.schema.metrics_definitions when computing any KPIs. MRR is defined there as monthly_recurring_revenue.

If you don’t have one yet

You can build a semantic layer quickly in Lovable using a dedicated project. Create a new project, connect it to Databricks, and ask the agent to explore your warehouse and draft definitions:
Let's build a full semantic layer for our Databricks warehouse. Please start by exploring the tables and forming your own understanding of the data. Create a sample dashboard that showcases your analysis of the key metrics, the tables they use, and the assumptions behind each query. We will walk through them together and correct anything that is wrong. Use these tables as the main starting point: [table names]
Drop in any existing context you have (prior dashboards, a data dictionary, a dbt schema file) and Lovable will incorporate it. Ask the agent to save the output as Markdown or YAML files in its project directory:
Save the finalized metric definitions as YAML files in this project. Other projects will reference them.
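As an illustration, a saved definitions file might look like the following. Every metric name, table, and query here is hypothetical; the agent-drafted version will reflect your own schema:

```yaml
# metrics/mrr.yaml — hypothetical example of a saved metric definition
metric: monthly_recurring_revenue
abbreviation: MRR
owner: finance
source_table: main.analytics.subscriptions
definition: >
  Sum of active subscription amounts, normalized to monthly,
  excluding trials and one-time charges.
sql: |
  SELECT date_trunc('month', billing_date) AS month,
         sum(monthly_amount) AS mrr
  FROM main.analytics.subscriptions
  WHERE status = 'active' AND is_trial = false
  GROUP BY 1
assumptions:
  - Annual plans are divided by 12.
  - Currency is normalized to USD upstream.
```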
Once saved, other Lovable projects in the same Lovable workspace can reference that project’s knowledge to get consistent metric definitions out of the box.

Limitations

  • No per-user data scoping. Everyone using a connection sees the same data (the service principal’s data). Create separate service principals per access role as a workaround.
  • No automatic caching. Query results are not cached by default. You can ask Lovable to add caching logic to your app at your chosen interval.
  • Published apps are publicly accessible. Connection-level access controls who can build and edit, not who can use the published app. If your app surfaces sensitive data, add your own authentication layer before publishing.
  • Customer-managed cost controls. Lovable does not impose query cost caps. Use Databricks-side controls like warehouse auto-stop, query timeouts, and per-warehouse budgets to manage costs. See Databricks usage and cost monitoring for details.
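The caching limitation above is straightforward to work around inside the app: wrap query calls in a small time-to-live (TTL) cache so the warehouse is hit at most once per interval. A minimal sketch, where `run_query` is a stand-in for whatever your app uses to reach Databricks:

```python
# Sketch: TTL cache around a query function so repeated requests
# within the interval reuse the last result instead of re-querying.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (timestamp, value)

    def get_or_compute(self, key: str, compute):
        """Return the cached value if still fresh; otherwise recompute and store."""
        now = time.monotonic()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]
        value = compute()
        self._store[key] = (now, value)
        return value

cache = TTLCache(ttl_seconds=300)  # refresh at most every 5 minutes

def run_query(sql: str):
    # Placeholder: a real app would send this to Databricks instead.
    return {"rows": [], "sql": sql}

result = cache.get_or_compute("dau", lambda: run_query("SELECT ..."))
```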
How to unlink projects

Editors and above can remove specific projects from a connection without deleting the connection entirely. The connection will remain available for other projects. To unlink projects:
1

Navigate to connectors

Go to Settings → Connectors → Shared connectors and select Databricks.
2

Open the connection

Open the connection you want to manage.
3

Select projects

Under Linked projects, check the projects you want to unlink.
4

Confirm

Click Unlink projects and confirm.
When unlinked, those projects will no longer have access to Databricks through this connection. If a project needs Databricks access again, you can link it to any available connection.

How to delete a connection

Workspace admins and owners can delete connections.
Deleting a connection is permanent and cannot be undone. It will remove the credentials from all linked projects, and any apps using this connection will stop working until a new connection is added.
Before deleting, review the Linked projects section to see which projects are currently using the connection. To delete a connection:
1

Navigate to connectors

Go to Settings → Connectors → Shared connectors and select Databricks.
2

Open the connection

Open the connection you want to remove.
3

Review linked projects

Review the Linked projects section.
4

Delete

Under Delete this connection, click Delete and confirm.

FAQ

Can I scope data to individual users?
No. Lovable enforces who on your team can use a connection. The service principal’s access level determines what data is queryable. If the service principal can see HR tables, everyone with access to that connection can query HR tables. Create separate service principals per access role to scope data.

How do I control query costs?
Lovable does not impose query cost caps. Use Databricks-side controls to manage costs: warehouse auto-stop, query timeouts, and per-warehouse budgets. We recommend starting with a small warehouse and scaling up as needed. See Databricks usage and cost monitoring for details.

Does Lovable copy or cache my Databricks data?
Not by default. Lovable queries Databricks at runtime with no automatic data replication or caching. Caching is opt-in: you can ask Lovable to add caching logic to your app at an interval you choose.

Who can see data in a published app?
The published app uses the service principal to query Databricks. Anyone with the app URL can see the results. Connection-level access only controls who can build and edit the project, not who can use the published app. If the data is sensitive, add your own authentication layer in the app before publishing.

Are my credentials exposed to the app or browser?
No. The service principal credentials are stored server-side in Lovable’s gateway and are never exposed to the browser or your app’s frontend code.