- Run SQL queries against your warehouse data
- List and manage SQL warehouses
- List clusters in your Databricks workspace
- Build live dashboards that query data in real time
Common use cases and example apps
| Example app | Example prompt | Description |
|---|---|---|
| Live KPI dashboard | Build a dashboard that queries our Databricks warehouse and shows MRR, DAU, and churn rate. Auto-refresh every 5 minutes. | Replace static slides with a live dashboard on your warehouse data. The app queries Databricks directly and displays key metrics that stay up to date without manual exports. |
| Revenue pipeline tracker | Build a pipeline tracker that pulls revenue and deal data from our Databricks tables and shows a funnel view with filters by region and quarter. | Give RevOps a self-serve view of pipeline data that lives in the warehouse. The app queries Databricks tables where your CRM data lands and presents it in a structured, filterable view. |
| Team metrics explorer | Build a metrics explorer where users pick a team and date range, then see charts for their key metrics pulled from Databricks. | Let teams explore their own metrics without filing data requests. The app runs parameterized SQL queries and renders results as charts, scoped to each team’s data. |
| Data quality monitor | Build an internal tool that runs data quality checks against our warehouse tables and flags anomalies. | Catch data issues before they reach downstream consumers. The app runs validation queries on a schedule and surfaces failures in a clean internal view. |
| Executive summary bot | Build a Slack bot that answers natural language data questions by querying our Databricks warehouse. | Turn your warehouse into a conversational interface for leadership. The app translates questions into SQL, queries Databricks, and posts formatted answers to Slack. |
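The data quality monitor pattern above (run validation queries, flag anomalies) boils down to applying a set of named checks to query results. A minimal sketch, assuming the rows come back as dicts from a SQL query; the check names, columns, and thresholds are illustrative placeholders:

```python
# Hypothetical check definitions: (check name, column, predicate that must hold).
CHECKS = [
    ("non_negative_revenue", "revenue", lambda v: v >= 0),
    ("mrr_within_bounds", "mrr", lambda v: 0 <= v <= 10_000_000),
]

def run_checks(rows, checks=CHECKS):
    """Apply each check to every row; return a list of failure records."""
    failures = []
    for i, row in enumerate(rows):
        for name, column, ok in checks:
            if column in row and not ok(row[column]):
                failures.append({"row": i, "check": name, "value": row[column]})
    return failures
```

In a real app, the rows would come from a scheduled warehouse query and the failures would feed the internal view described above.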
How Databricks connections work
The Databricks connector uses service principal authentication (M2M OAuth). Instead of connecting as an individual user, you create a service principal in Databricks with access to specific tables and views, then provide its credentials to Lovable.

What this means for data access
The service principal’s permissions determine what data is available to everyone who uses that connection. Lovable does not filter results based on the individual user’s Databricks permissions. For example, if you create a service principal with access to HR tables, everyone with access to that connection in Lovable can query HR data.

Recommended approach: one service principal per access role. Create separate service principals scoped to different data:

- databricks-engineering: full warehouse access; only engineers get this connection in Lovable
- databricks-sales: pipeline and revenue tables only; the sales team gets this connection
- databricks-company: company-wide safe metrics; everyone gets this connection
You can create multiple Databricks connections in a workspace, each with a different service principal and different access settings.
How to connect Databricks
Workspace admins and owners can connect Databricks.

Prerequisites
Before connecting, make sure you have:

- A Databricks workspace with at least one SQL warehouse
- A service principal configured in Databricks with an OAuth secret (see Databricks M2M OAuth setup)
- The service principal’s client ID and client secret
- Your Databricks workspace URL (e.g. https://dbc-abc123.cloud.databricks.com)
- Lovable workspace admin or owner role
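Under the hood, M2M OAuth exchanges the client ID and secret for a short-lived access token via the workspace's token endpoint using the standard client-credentials grant. A minimal sketch of what that request looks like, assuming the documented Databricks endpoint path and scope; it only constructs the request, without sending anything:

```python
import base64
import urllib.parse

def build_token_request(workspace_url: str, client_id: str, client_secret: str):
    """Build the pieces of an OAuth M2M (client-credentials) token request.

    Returns (url, headers, body); sending the POST and refreshing tokens
    is handled for you by Lovable's gateway.
    """
    token_url = workspace_url.rstrip("/") + "/oidc/v1/token"
    credentials = f"{client_id}:{client_secret}".encode()
    headers = {
        "Authorization": "Basic " + base64.b64encode(credentials).decode(),
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": "all-apis",
    })
    return token_url, headers, body
```

You never perform this exchange yourself; it is shown only to clarify why the workspace URL, client ID, and client secret are all required.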
Set up your Databricks connection
Navigate to Databricks connector
Go to Settings → Connectors → Shared connectors and select Databricks.
Name the connection
In Display name, name the connection (for example, Databricks Engineering or Databricks Sales). Use a name that reflects the access level of the service principal.

Enter your credentials
- Workspace URL: your Databricks workspace URL (e.g. https://dbc-abc123.cloud.databricks.com)
- Client ID: the service principal’s OAuth client ID
- Client secret: the service principal’s OAuth client secret
Configure who has access
After creating a connection, you can choose who in your workspace can use it. See Connection-level access for details. This is especially important for Databricks, since the service principal’s access level determines what data is visible. Restricting connection access to the right team ensures that only authorized people can build with that data.

Building a semantic layer
Every Databricks use case benefits from a semantic layer: a shared definition of what your key metrics mean, which tables to use, and what assumptions they carry. What counts as a “daily active user”? How is MRR calculated? Which view should be used for churn, and does it exclude trials? Without this shared context, each app or dashboard risks computing the same metric differently.

If you already have a semantic layer
If your Databricks workspace already has a semantic layer (for example, dbt metrics, Unity Catalog tags, or a YAML definitions file), point Lovable to it:

If you don’t have one yet
You can build a semantic layer quickly in Lovable using a dedicated project. Create a new project, connect it to Databricks, and ask the agent to explore your warehouse and draft definitions:

Limitations
- No per-user data scoping. Everyone using a connection sees the same data (the service principal’s data). Create separate service principals per access role as a workaround.
- No automatic caching. Query results are not cached by default. You can ask Lovable to add caching logic to your app at your chosen interval.
- Published apps are publicly accessible. Connection-level access controls who can build and edit, not who can use the published app. If your app surfaces sensitive data, add your own authentication layer before publishing.
- Customer-managed cost controls. Lovable does not impose query cost caps. Use Databricks-side controls like warehouse auto-stop, query timeouts, and per-warehouse budgets to manage costs. See Databricks usage and cost monitoring for details.
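Since query results are not cached automatically, asking Lovable to add caching logic typically amounts to a simple time-to-live cache around the query call. A minimal sketch under that assumption; the key, interval, and query function are placeholders you would adapt to your app:

```python
import time

class TTLCache:
    """Cache query results for a fixed interval (e.g. 300 seconds)."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get_or_compute(self, key, compute):
        now = time.monotonic()
        entry = self._store.get(key)
        if entry and entry[0] > now:
            return entry[1]        # still fresh: skip hitting the warehouse
        value = compute()          # e.g. run the SQL query against Databricks
        self._store[key] = (now + self.ttl, value)
        return value
```

A dashboard that refreshes every 5 minutes would wrap its query in `get_or_compute("kpis", run_query)` with a 300-second TTL, so repeated page loads within the interval reuse the cached result and keep warehouse costs down.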
How to unlink projects from a connection
Editors and above can remove specific projects from a connection without deleting the connection entirely. The connection will remain available for other projects. To unlink projects:
When unlinked, those projects will no longer have access to Databricks data through this connection. If a project needs access again, you can link it to any available connection.
How to delete a connection
Workspace admins and owners can delete connections. Before deleting, review the Linked projects section to see which projects are currently using the connection. To delete a connection:

FAQ
Does Lovable enforce my Databricks permissions?
No. Lovable enforces who on your team can use a connection. The service principal’s access level determines what data is queryable. If the service principal can see HR tables, everyone with access to that connection can query HR tables. Create separate service principals per access role to scope data.
What if someone runs an expensive query?
Lovable does not impose query cost caps. Use Databricks-side controls to manage costs: warehouse auto-stop, query timeouts, and per-warehouse budgets. We recommend starting with a small warehouse and scaling up as needed. See Databricks usage and cost monitoring for details.
Is my data cached or stored in Lovable?
Not by default. Lovable queries Databricks at runtime with no automatic data replication or caching. Caching is opt-in: you can ask Lovable to add caching logic to your app at an interval you choose.
What happens when I publish an app that queries Databricks?
The published app uses the service principal to query Databricks. Anyone with the app URL can see the results. Connection-level access only controls who can build and edit the project, not who can use the published app. If the data is sensitive, add your own authentication layer in the app before publishing.
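One common shape for such an authentication layer is a bearer-token gate that every data-serving route checks before querying the warehouse. A minimal sketch, with a hypothetical `is_authorized` helper and a placeholder token; a production app would use a real login flow or identity provider instead:

```python
import hmac

APP_TOKEN = "replace-with-a-secret"  # placeholder; load from app config, never hardcode

def is_authorized(request_headers: dict) -> bool:
    """Check a Bearer token from the request headers in constant time."""
    auth = request_headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    # hmac.compare_digest avoids leaking the token length/content via timing
    return hmac.compare_digest(auth.removeprefix("Bearer "), APP_TOKEN)
```

The gate runs before any Databricks query, so the service principal's data is only ever returned to callers who present the token.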
Can someone leak the Databricks credentials?
No. The service principal credentials are stored server-side in Lovable’s gateway and are never exposed to the browser or your app’s frontend code.