When the feature flag is enabled, these permissions are enforced on both the frontend (disabled buttons with tooltips) and backend (403 responses from API endpoints). When disabled, legacy `can_csv` behavior is preserved.
**Migration behavior:** All three new permissions are granted to every role that currently has `can_csv`, preserving existing access. Admins can then selectively revoke individual export permissions from specific roles as needed.
### Deck.gl MapBox viewport and opacity controls are functional
The Deck.gl MapBox chart's **Opacity**, **Default longitude**, **Default latitude**, and **Zoom** controls were previously non-functional — changing them had no effect on the rendered map. These controls are now wired up correctly.
---
title: AWS IAM Authentication
sidebar_label: AWS IAM Authentication
sidebar_position: 15
---
# AWS IAM Authentication for AWS Databases
Superset supports IAM-based authentication for **Amazon Aurora** (PostgreSQL and MySQL) and **Amazon Redshift**. IAM auth eliminates the need for database passwords — Superset generates a short-lived auth token using temporary AWS credentials instead.
Cross-account IAM role assumption via STS `AssumeRole` is supported, allowing a Superset deployment in one AWS account to connect to databases in a different account.
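For reference, the underlying flow can be sketched with plain boto3 (an illustrative sketch, not Superset's actual code — the ARN, external ID, hostname, and usernames below are hypothetical):

```python
import boto3

# 1. Assume the cross-account role (skip this step for same-account setups).
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::222222222222:role/SupersetDBAccess",  # hypothetical ARN
    RoleSessionName="superset-iam-auth",
    ExternalId="my-external-id",  # hypothetical external ID
)["Credentials"]

# 2. Mint a short-lived auth token that replaces the database password.
rds = boto3.client(
    "rds",
    region_name="us-east-1",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
token = rds.generate_db_auth_token(
    DBHostname="mydb.cluster-abc123.us-east-1.rds.amazonaws.com",
    Port=5432,
    DBUsername="superset_iam_user",
)
```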
## Prerequisites
- Enable the `AWS_DATABASE_IAM_AUTH` feature flag in `superset_config.py`. IAM authentication is gated behind this flag; if it is disabled, connections using `aws_iam` fail with *"AWS IAM database authentication is not enabled."*
```python
FEATURE_FLAGS = {
    "AWS_DATABASE_IAM_AUTH": True,
}
```
- `boto3` must be installed in your Superset environment:
```bash
pip install boto3
```
- The Superset server's IAM role (or static credentials) must have permission to call `sts:AssumeRole` (for cross-account) or the same-account permissions for the target service:
  - **Aurora (PostgreSQL and MySQL)**: `rds-db:connect` (the standard RDS/Aurora IAM-auth permission)
  - **Redshift (provisioned)**: `redshift:GetClusterCredentials`
  - **Redshift Serverless**: `redshift-serverless:GetCredentials` and `redshift-serverless:GetWorkgroup`
- SSL must be enabled on the Aurora / Redshift endpoint (required for IAM token auth).
## Configuration
IAM authentication is configured via the **encrypted_extra** field of the database connection. Access this field in the **Advanced** → **Security** section of the database connection form, under **Secure Extra**.
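A hypothetical example of the Secure Extra JSON — the `aws_iam` key is referenced by the error message above, but the nested field names (`role_arn`, `external_id`, `region`) are illustrative assumptions, not a confirmed schema:

```json
{
  "aws_iam": {
    "role_arn": "arn:aws:iam::222222222222:role/SupersetDBAccess",
    "external_id": "my-external-id",
    "region": "us-east-1"
  }
}
```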
Configure the database connection in Superset using the `role_arn` and `external_id` from the cross-account role's trust policy, as in the example above.
## Credential Caching
STS credentials are cached in memory keyed by `(role_arn, region, external_id)` with a 10-minute TTL. This reduces the number of STS API calls when multiple queries are executed with the same connection. Tokens are refreshed automatically before expiry.
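A minimal sketch of the described scheme (illustrative only, not Superset's actual implementation):

```python
import time

# Cache entries: (role_arn, region, external_id) -> (inserted_at, credentials)
_CREDS_CACHE: dict[tuple[str, str, str], tuple[float, dict]] = {}
TTL_SECONDS = 600  # 10-minute TTL, as described above

def get_cached_credentials(role_arn: str, region: str, external_id: str) -> dict | None:
    entry = _CREDS_CACHE.get((role_arn, region, external_id))
    if entry and time.monotonic() - entry[0] < TTL_SECONDS:
        return entry[1]
    return None  # miss or expired — caller fetches fresh STS credentials
```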
To control which user account is used for rendering thumbnails and warming up caches, configure
`THUMBNAIL_EXECUTORS` and `CACHE_WARMUP_EXECUTORS`. Each accepts a list of executor types (which
resolve to an owner, creator, modifier, or the currently-logged-in user) and/or a `FixedExecutor`
pinned to a specific username. By default, thumbnails render as the current user
(`ExecutorType.CURRENT_USER`) and cache warmup runs as the chart/dashboard owner
(`ExecutorType.OWNER`).
To force both to run as a dedicated service account (`admin` in this example):
```python
from superset.tasks.types import ExecutorType, FixedExecutor
THUMBNAIL_EXECUTORS = [FixedExecutor("admin")]
CACHE_WARMUP_EXECUTORS = [FixedExecutor("admin")]
```
Use a dedicated read-only service account here rather than a personal admin account, so that
thumbnail rendering and cache warmup tasks don't fail if a specific user's credentials change.
Additional Selenium WebDriver configuration can be set using `WEBDRIVER_CONFIGURATION`. You can
implement a custom function to authenticate Selenium. The default function uses the `flask-login`
session cookie. Here's an example of a custom function signature:
```python
def auth_driver(driver: WebDriver, user: "User") -> WebDriver:
    # Perform any custom authentication here (e.g. set session cookies on
    # the driver for the given user), then return the authenticated driver.
    ...
```

Then on configuration:

```python
WEBDRIVER_AUTH_FUNC = auth_driver
```
## ETag Support for Thumbnails
Thumbnail and screenshot endpoints return `ETag` response headers based on the cached content digest. Clients can use conditional requests to avoid downloading unchanged images:
```
GET /api/v1/chart/42/thumbnail/
If-None-Match: "abc123..."
→ 304 Not Modified (if unchanged)
→ 200 OK (with new image if changed)
```
This is particularly useful for embedded dashboards and external integrations that periodically poll for updated screenshots — unchanged thumbnails return immediately with no payload.
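A polling client can be sketched with `requests` (the host and chart ID are illustrative; authentication is assumed to be handled by the session):

```python
import requests

session = requests.Session()  # assumes auth cookies or a Bearer token are set up
url = "https://superset.example.com/api/v1/chart/42/thumbnail/"
etag = None

headers = {"If-None-Match": etag} if etag else {}
resp = session.get(url, headers=headers)
if resp.status_code == 304:
    pass  # unchanged — nothing to download
elif resp.ok:
    etag = resp.headers.get("ETag")  # remember for the next poll
    image_bytes = resp.content
```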
## Distributed Coordination Backend
Superset supports an optional distributed coordination backend, configured via `DISTRIBUTED_COORDINATION_CONFIG`.
For public OAuth2 clients that cannot securely store a client secret, enable Proof Key for Code Exchange (PKCE) by adding `code_challenge_method` to the `remote_app` configuration:
```python
OAUTH_PROVIDERS = [
    {
        'name': 'myProvider',
        'remote_app': {
            'client_id': 'myClientId',
            'client_secret': 'mySecret',  # may be empty for pure public clients
            'code_challenge_method': 'S256',
            # ... other remote_app settings (api_base_url, etc.)
        },
    },
]
```
PKCE (`S256`) is recommended for all OAuth2 flows, even when a client secret is present, as it protects against authorization code interception attacks.
## LDAP Authentication
FAB supports authenticating user credentials against an LDAP server.
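A minimal configuration sketch (the server URL and DNs are illustrative; `AUTH_LDAP_SERVER`, `AUTH_LDAP_SEARCH`, and `AUTH_LDAP_UID_FIELD` are standard FAB settings):

```python
# superset_config.py
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP
AUTH_LDAP_SERVER = "ldap://ldap.example.com"
AUTH_LDAP_SEARCH = "ou=people,dc=example,dc=com"  # base DN for user lookups
AUTH_LDAP_UID_FIELD = "uid"                       # attribute matched against the username
```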
The Superset CLI allows you to import and export datasources from and to YAML. Datasources include
databases. The data is expected to be organized in the following hierarchy:
:::info
Superset's ZIP-based import/export also covers **dashboards**, **charts**, and **saved queries**, exercised through the UI and REST API. The [Dashboard Import Overwrite Behavior](#dashboard-import-overwrite-behavior) and [UUIDs in API Responses](#uuids-in-api-responses) sections below document the behavior shared across all asset types.
:::
```text
├──databases
|  ├──database_1
|  └── ... (more databases)
```
:::note
When you export a database connection, the `masked_encrypted_extra` field (used for sensitive connection parameters such as service account JSON, OAuth tokens, and other encrypted credentials) is included in the export. When importing on another instance, these values are decrypted and re-encrypted using the destination instance's `SECRET_KEY`. Ensure the receiving instance has a valid `SECRET_KEY` configured before importing.
:::
## Exporting Datasources to YAML
You can print your current datasources to stdout by running:
```bash
superset export_datasources
```

## Importing Datasources

To import datasources from a YAML file, run `superset import_datasources -p <path>`.
The optional username flag **-u** sets the user used for the datasource import.

## Dashboard Import Overwrite Behavior
When importing a dashboard ZIP with the **overwrite** option enabled, any existing charts that are part of the dashboard are **replaced** rather than duplicated:
- Charts whose UUID matches a chart already present in the target instance are overwritten
- The full chart configuration (query, visualization type, columns, metrics) is replaced by the imported version
If you import without the overwrite flag, existing charts with conflicting UUIDs are left unchanged and the import skips those objects. Use overwrite when you want to push a fully updated dashboard (including chart definitions) from a development or staging environment to production.
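For example, pushing a bundle with overwrite enabled through the REST API (host and file name are illustrative):

```bash
curl -X POST "https://superset.example.com/api/v1/dashboard/import/" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -F "formData=@dashboard_export.zip" \
  -F "overwrite=true"
```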
## UUIDs in API Responses
The REST API POST endpoints for **datasets**, **charts**, and **dashboards** include the auto-generated `uuid` field in the response body:
```json
{
  "id": 42,
  "uuid": "b8a8d5c3-1234-4abc-8def-0123456789ab",
  ...
}
```
UUIDs remain stable across import/export cycles and can be used for cross-environment workflows — for example, recording a UUID when creating a chart in development and using it to identify the matching chart after importing into production.
## Legacy Importing Datasources
### From older versions of Superset to current version
## MCP Server Configuration

All MCP settings go in `superset_config.py`. Defaults are defined in the `superset/mcp_service/` module.

| Setting | Default | Description |
|---------|---------|-------------|
| `MCP_SERVICE_URL` | `None` | Public base URL for MCP-generated links (set this when behind a reverse proxy) |
| `MCP_DEBUG` | `False` | Enable debug logging |
| `MCP_DEV_USERNAME` | -- | Superset username for development mode (no auth) |
| `MCP_RBAC_ENABLED` | `True` | Enforce Superset's role-based access control on MCP tool calls. When `True`, each tool checks that the authenticated user has the required FAB permission before executing. Disable only for testing or trusted-network deployments. |
### Authentication
| Setting | Default | Description |
|---------|---------|-------------|
| `MCP_USER_RESOLVER` | `None` | Custom function `(app, access_token) -> username` to extract a Superset username from a validated JWT token. When `None`, the default resolver checks `preferred_username`, `username`, `email`, and `sub` claims in that order. |
### Response Size Guard
Event store options in `MCP_STORE_CONFIG`:

| Option | Default | Description |
|--------|---------|-------------|
| `event_store_max_events` | `100` | Maximum events retained per session |
| `event_store_ttl` | `3600` | Event TTL in seconds |
### Tool Search
By default the MCP server exposes a lightweight tool-search interface instead of advertising every tool at once. This reduces the initial context sent to the LLM by ~70%, which lowers cost and latency. The AI client discovers tools on demand by calling `search_tools` and then invokes them via `call_tool`.
```python
MCP_TOOL_SEARCH_CONFIG = {
    "enabled": True,
    "strategy": "bm25",  # "bm25" (natural language) or "regex"
    # ... additional options, described below
}
```

| Option | Default | Description |
|--------|---------|-------------|
| `max_results` | `5` | Maximum tools returned per search query |
| `always_visible` | See above | Tools that always appear in `list_tools`, regardless of search |
| `include_schemas` | `False` | When `False` (default, "summary mode"), search results omit `inputSchema` entirely and include a lightweight `parameters_hint` listing top-level parameter names. Set to `True` to include the full `inputSchema` in search results. Full schemas are always used when a tool is actually invoked via `call_tool`. |
| `compact_schemas` | `True` | Strip `$defs` / `$ref` and replace with `{"type": "object"}` in search results to reduce token cost. Only takes effect when `include_schemas=True` — ignored in summary mode. |
| `max_description_length` | `300` | Truncate tool descriptions in search results (0 = no truncation). Applies in both summary and full-schema modes. |
:::tip
Set `enabled: False` to revert to the traditional "show all tools at once" behavior, which some clients or workflows may prefer.
:::
Tool search reduces the initial token cost from ~15–20K tokens (full catalog) down to ~4–5K tokens (pinned tools + search interface) — roughly 70–75% savings at the start of each conversation.
### Session & CSRF
These values are flat-merged into the Flask app config used by the MCP server process:
```python
# Illustrative sketch — the keys shown are standard Flask / Flask-WTF settings,
# since these values are flat-merged into the Flask app config.
MCP_CSRF_CONFIG = {
    "WTF_CSRF_ENABLED": True,
    "SESSION_COOKIE_SAMESITE": "None",
    "SESSION_COOKIE_SECURE": True,
}
```
---
## Access Control
### RBAC Enforcement
The MCP server respects Superset's full role-based access control (RBAC). Every authenticated user can only access the data and operations their Superset roles permit — the same rules that apply in the Superset UI apply through MCP.
Each tool declares one or more required FAB permissions. The table below maps tool groups to their permission requirements:
| Tool group | Required FAB permission |
|------------|------------------------|
| `list_charts`, `get_chart_info`, `get_chart_data`, `get_chart_preview`, `generate_chart`, `update_chart` | `can_read` on `Chart` (read), `can_write` on `Chart` (mutate) |
| `list_dashboards`, `get_dashboard_info`, `generate_dashboard`, `add_chart_to_existing_dashboard` | `can_read` on `Dashboard` (read), `can_write` on `Dashboard` (mutate) |
| `list_datasets`, `get_dataset_info`, `create_virtual_dataset` | `can_read` on `Dataset` (read), `can_write` on `Dataset` (mutate) |
| `list_databases`, `get_database_info` | `can_read` on `Database` |
| `execute_sql` | `can_execute_sql_query` on `SQLLab` |
| `open_sql_lab_with_context` | `can_read` on `SQLLab` |
| `save_sql_query` | `can_write` on `SavedQuery` |
| `health_check` | None (public) |
To disable RBAC checking globally (for trusted-network deployments or testing), set:
```python
# superset_config.py
MCP_RBAC_ENABLED = False
```
:::warning
Disabling RBAC removes all permission checks from MCP tool calls. Only do this on isolated, internal deployments where all MCP users are trusted admins.
:::
### Audit Log
All MCP tool calls are recorded in Superset's action log. You can view them at **Settings → Action Log** (admin only). Each log entry records:
- The tool name (e.g., `mcp.generate_chart.db_write`)
- The authenticated user
- A timestamp
This makes MCP activity fully auditable alongside regular Superset activity. The action log uses the same event logger as the rest of Superset, so existing log ingestion pipelines (e.g., sending logs to Elasticsearch or a SIEM) capture MCP events automatically.
### Middleware Pipeline
Every MCP request passes through a middleware stack before reaching the tool function. The default stack (assembled in `build_middleware_list()` in `server.py`) is:
| Middleware | Purpose | Default |
|------------|---------|---------|
| `StructuredContentStripperMiddleware` | Strips `structuredContent` from responses for Claude.ai bridge compatibility | Enabled |
| `LoggingMiddleware` | Logs each tool call with user, parameters, and duration | Enabled |
| `GlobalErrorHandlerMiddleware` | Catches unhandled exceptions and sanitizes sensitive data before it reaches the client | Enabled |
| `ResponseSizeGuardMiddleware` | Estimates token count, warns at 80% of limit, blocks at limit | Enabled (configurable via `MCP_RESPONSE_SIZE_CONFIG`) |
| `ResponseCachingMiddleware` | Caches read-heavy tool responses (in-memory or Redis) | Disabled (enable via `MCP_CACHE_CONFIG`) |
Additional middleware classes (`RateLimitMiddleware`, `FieldPermissionsMiddleware`, `PrivateToolMiddleware`) are implemented in `superset/mcp_service/middleware.py` but are not added to the default pipeline. They are available for operators who want to layer them in via a custom startup path.
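A hypothetical sketch of layering one in — only the class and module names come from the description above; the wiring of a custom startup path (and the middleware constructor arguments) are assumptions:

```python
# Hypothetical custom startup path — not a documented API.
from superset.mcp_service.middleware import RateLimitMiddleware

def custom_middleware_list():
    # Start from the default stack, then append an optional middleware.
    from superset.mcp_service.server import build_middleware_list
    middleware = build_middleware_list()
    middleware.append(RateLimitMiddleware())  # constructor args assumed default
    return middleware
```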
### Error Sanitization
The `GlobalErrorHandlerMiddleware` automatically redacts sensitive information from all error messages before they reach the LLM client. The following are replaced with generic messages:
- **Database connection strings** — replaced with a generic connection error message
- **API keys and tokens** — redacted from error traces
- **File system paths** — stripped to prevent information disclosure
- **IP addresses** — removed from error context
This ensures that a misconfigured database connection or an unexpected exception never leaks credentials or internal topology to the LLM or its users. All regex patterns used for redaction are bounded to prevent ReDoS attacks.
---
## Performance
### Connection Pooling
Each MCP server process maintains its own SQLAlchemy connection pool to the database. For multi-worker deployments, total open connections = **workers × pool size**.
```python
# superset_config.py
SQLALCHEMY_POOL_SIZE = 5
SQLALCHEMY_MAX_OVERFLOW = 10
SQLALCHEMY_POOL_TIMEOUT = 30
SQLALCHEMY_POOL_RECYCLE = 3600 # Recycle connections after 1 hour
```
For a 3-pod Kubernetes deployment with the defaults above, expect up to 3 × (5 + 10) = 45 connections. Size your database's `max_connections` accordingly.
### Response Caching
Enable response caching for read-heavy workloads (dashboards/datasets that don't change frequently). With the in-memory backend (default when `MCP_STORE_CONFIG` is disabled), caching is per-process. Use Redis-backed caching for consistent cache hits across multiple pods:
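A hypothetical configuration sketch — the field names below (`enabled`, `backend`, `redis_url`, `ttl`) are illustrative assumptions, not confirmed `MCP_CACHE_CONFIG` keys:

```python
# superset_config.py — illustrative only; check the MCP cache defaults for real field names
MCP_CACHE_CONFIG = {
    "enabled": True,
    "backend": "redis",                   # assumed: "memory" or "redis"
    "redis_url": "redis://redis:6379/2",  # assumed key
    "ttl": 300,                           # assumed: entries expire after 5 minutes
}
```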
Mutating tools (`generate_chart`, `update_chart`, `execute_sql`, `generate_dashboard`) are always excluded from caching regardless of this setting.
---
## Troubleshooting
### Server won't start
---
## Audit Events
All MCP tool calls are logged to Superset's event logger, the same system used by the web UI (viewable at **Settings → Action Log**). Each event captures:
- **User**: the resolved Superset username from the JWT or dev config
- **Timestamp**: when the operation ran
This means MCP activity is auditable alongside normal user activity. No additional configuration is required — logging is on by default whenever the event logger is enabled in your Superset deployment.
## Tool Pagination
MCP list tools (`list_datasets`, `list_charts`, `list_dashboards`, `list_databases`) use **offset pagination** via `page` (1-based) and `page_size` parameters. Responses include `page`, `page_size`, `total_count`, `total_pages`, `has_previous`, and `has_next`. To iterate through all results:
```python
# Example: fetch all charts across pages
all_charts = []
page = 1
while True:
    result = mcp.list_charts(page=page, page_size=50)
    all_charts.extend(result["charts"])
    if not result.get("has_next"):
        break
    page += 1
```
## Security Best Practices
- **Use TLS** for all production MCP endpoints -- place the server behind a reverse proxy with HTTPS
There are two approaches to making dashboards publicly accessible:
3. Edit each dashboard's properties and add the "Public" role
4. Only dashboards with the Public role explicitly assigned are visible to anonymous users
See the [Public role documentation](/admin-docs/security/#public) for more details.
#### Embedding a Public Dashboard
This flag only hides the logout button when Superset detects it is running inside an iframe. Users accessing Superset directly (not embedded) will still see the logout button regardless of this setting.
:::note
When embedding with SSO, also set `SESSION_COOKIE_SAMESITE = 'None'` and `SESSION_COOKIE_SECURE = True`. See [Security documentation](/admin-docs/security/securing_superset) for details.
:::
### App Branding
The application name shown in the browser title bar and navigation can be
set via the `brandAppName` theme token:
```python
THEME_DEFAULT = {
    "token": {
        "brandAppName": "Acme Analytics",
        # ... other tokens
    }
}
```
Or in the theme CRUD UI JSON editor:
```json
{
  "token": {
    "brandAppName": "Acme Analytics"
  }
}
```
The existing `APP_NAME` Python config key continues to work for backward compatibility.
`brandAppName` takes precedence when both are set, and allows different themes to carry different brand names.
Email and alert/report notification subjects are driven by backend settings such as
`EMAIL_REPORTS_SUBJECT_PREFIX` and `APP_NAME`, not by this theme token.
### Migration from Configuration to UI
When `ENABLE_UI_THEME_ADMINISTRATION = True`:
Available chart types for `echartsOptionsOverridesByChartType`:
- `echarts_heatmap` - Heatmaps
- `echarts_mixed_timeseries` - Mixed time series
### Array Property Overrides
Array properties (such as color palettes) are fully supported in overrides. Arrays are **replaced entirely** rather than merged, so specify the complete array:
```python
THEME_DEFAULT = {
    "token": { ... },
    "echartsOptionsOverrides": {
        # Replace the default color palette for all ECharts visualizations
        # (the palette values below are illustrative)
        "color": ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"],
    },
}
```
To strive for data consistency (regardless of the timezone of the client) the Apache Superset backend tries to ensure that any timestamp sent to the client has an explicit (or semi-explicit as in the case with [Epoch time](https://en.wikipedia.org/wiki/Unix_time) which is always in reference to UTC) timezone encoded within.
The challenge however lies with the slew of [database engines](/user-docs/databases#installing-drivers-in-docker) which Apache Superset supports and various inconsistencies between their [Python Database API (DB-API)](https://www.python.org/dev/peps/pep-0249/) implementations combined with the fact that we use [Pandas](https://pandas.pydata.org/) to read SQL into a DataFrame prior to serializing to JSON. Regrettably Pandas ignores the DB-API [type_code](https://www.python.org/dev/peps/pep-0249/#type-objects) relying by default on the underlying Python type returned by the DB-API. Currently only a subset of the supported database engines work correctly with Pandas, i.e., ensuring timestamps without an explicit timestamp are serialized to JSON with the server timezone, thus guaranteeing the client will display timestamps in a consistent manner irrespective of the client's timezone.
The **sql_lab** role grants access to SQL Lab. Note that while **Admin** users have access
to all databases by default, both **Alpha** and **Gamma** users need to be given access on a per database basis.
Beyond the base `sql_lab` role, two additional SQL Lab permissions must be explicitly granted for users who need these capabilities:
| Permission | Feature |
|------------|---------|
| `can_estimate_query_cost` on `SQLLab` | Estimate query cost before running |
| `can_format_sql` on `SQLLab` | Format SQL using the database's dialect |
Grant these in **Security → List Roles** by adding the permissions to the relevant role.
### Public
The **Public** role is the most restrictive built-in role, designed specifically for anonymous/unauthenticated users.
By combining Superset's configurable safeguards with strong database-level security practices, you can achieve a more robust and layered security posture.
**Dataset Sample Access**: The `get_samples()` endpoint now enforces datasource-level access control. Users can only fetch sample rows from datasets they have been explicitly granted access to — the same permission check applied when running chart queries. This closes a prior gap where unauthenticated or under-privileged access could retrieve sample data.
### REST API for user & role management
Flask-AppBuilder supports a REST API for user CRUD operations.
### Row Level Security
Using Row Level Security filters (under the **Security** menu) you can create filters
that are assigned to a particular dataset, as well as a set of roles.
If you want members of the Finance team to only have access to
rows where `department = "finance"`, you could:
- Create a Row Level Security filter with that clause (`department = "finance"`)
- Then assign the clause to the **Finance** role and the dataset it applies to
The **clause** field, which can contain arbitrary text, is then added to the generated
SQL statement's WHERE clause. So you could even do something like create a filter
for the last 30 days and apply it to a specific role, with a clause
like `date_field > DATE_SUB(NOW(), INTERVAL 30 DAY)`. It can also support
multiple conditions: `client_id = 6` AND `advertiser="foo"`, etc.
All relevant Row level security filters will be combined together (under the hood,
the different SQL clauses are combined using AND statements). This means it's
possible to create a situation where two roles conflict in such a way as to limit a table subset to empty.
For example, the filters `client_id=4` and `client_id=5`, applied to a role,
will result in users of that role having `client_id=4` AND `client_id=5`
added to their query, which can never be true.

RLS clauses also support **Jinja templating** when `ENABLE_TEMPLATE_PROCESSING` is enabled, so you can write dynamic filters such as
`user_id = '{{ current_username() }}'` to restrict rows based on the logged-in user.
#### Filter Types
There are two types of RLS filters:
- **Regular** — The filter clause is applied when the querying user belongs to one of the
roles assigned to the filter. Use this to restrict what specific roles can see.
- **Base** — The filter clause is applied to **all** users _except_ those in the assigned
roles. Use this to define a default restriction that privileged roles (e.g. Admin) are
exempt from. For example, a Base filter with clause `1 = 0` and the Admin role would
hide all rows from everyone except Admin — useful as a deny-by-default baseline.
#### Group Keys and Filter Combination
All applicable RLS filters are combined before being added to the query. The combination
rules are:
- Filters that share the **same group key** are combined with **OR** (any match within
the group is sufficient).
- Different filter groups (different group keys, or no group key) are combined with
**AND** (all groups must match).
- Filters with **no group key** are each treated as their own group and are always AND'd.
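A worked illustration with hypothetical filters: suppose a user matches two filters sharing the group key `region` (`region = 'US'` and `region = 'EU'`) plus one ungrouped filter (`department = 'finance'`). The resulting WHERE clause combines them as:

```sql
WHERE (region = 'US' OR region = 'EU')
  AND (department = 'finance')
```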
| Method | Description | Endpoint |
|--------|-------------|----------|
| `POST` | [Create a new dashboard](/developer-docs/api/create-a-new-dashboard) | `/api/v1/dashboard/` |
| `GET` | [Get metadata information about this API resource (dashboard--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-dashboard-info) | `/api/v1/dashboard/_info` |
| `POST` | [Create a copy of an existing dashboard](/developer-docs/api/create-a-copy-of-an-existing-dashboard) | `/api/v1/dashboard/{id_or_slug}/copy/` |
| `DELETE` | [Delete a dashboard](/developer-docs/api/delete-a-dashboard) | `/api/v1/dashboard/{pk}` |
| `PUT` | [Update a dashboard](/developer-docs/api/update-a-dashboard) | `/api/v1/dashboard/{pk}` |
| `POST` | [Compute and cache a screenshot (dashboard-pk-cache-dashboard-screenshot)](/developer-docs/api/compute-and-cache-a-screenshot-dashboard-pk-cache-dashboard-screenshot) | `/api/v1/dashboard/{pk}/cache_dashboard_screenshot/` |
| `PUT` | [Update chart customizations configuration for a dashboard.](/developer-docs/api/update-chart-customizations-configuration-for-a-dashboard) | `/api/v1/dashboard/{pk}/chart_customizations` |
| `PUT` | [Update colors configuration for a dashboard.](/developer-docs/api/update-colors-configuration-for-a-dashboard) | `/api/v1/dashboard/{pk}/colors` |
| `GET` | [Export dashboard as example bundle](/developer-docs/api/export-dashboard-as-example-bundle) | `/api/v1/dashboard/{pk}/export_as_example/` |
| `DELETE` | [Remove the dashboard from the user favorite list](/developer-docs/api/remove-the-dashboard-from-the-user-favorite-list) | `/api/v1/dashboard/{pk}/favorites/` |
| `POST` | [Mark the dashboard as favorite for the current user](/developer-docs/api/mark-the-dashboard-as-favorite-for-the-current-user) | `/api/v1/dashboard/{pk}/favorites/` |
| `PUT` | [Update native filters configuration for a dashboard.](/developer-docs/api/update-native-filters-configuration-for-a-dashboard) | `/api/v1/dashboard/{pk}/filters` |
| `GET` | [Get a computed screenshot from cache (dashboard-pk-screenshot-digest)](/developer-docs/api/get-a-computed-screenshot-from-cache-dashboard-pk-screenshot-digest) | `/api/v1/dashboard/{pk}/screenshot/{digest}/` |
| `GET` | [Get a list of charts](/developer-docs/api/get-a-list-of-charts) | `/api/v1/chart/` |
| `POST` | [Create a new chart](/developer-docs/api/create-a-new-chart) | `/api/v1/chart/` |
| `GET` | [Get metadata information about this API resource (chart--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-chart-info) | `/api/v1/chart/_info` |
| `GET` | [Get a list of datasets](/developer-docs/api/get-a-list-of-datasets) | `/api/v1/dataset/` |
| `POST` | [Create a new dataset](/developer-docs/api/create-a-new-dataset) | `/api/v1/dataset/` |
| `GET` | [Get metadata information about this API resource (dataset--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-dataset-info) | `/api/v1/dataset/_info` |
| `GET` | [Get a dataset](/developer-docs/api/get-a-dataset) | `/api/v1/dataset/{id_or_uuid}` |
| `GET` | [Get charts and dashboards count associated to a dataset](/developer-docs/api/get-charts-and-dashboards-count-associated-to-a-dataset) | `/api/v1/dataset/{id_or_uuid}/related_objects` |
| `DELETE` | [Delete a dataset](/developer-docs/api/delete-a-dataset) | `/api/v1/dataset/{pk}` |
| `GET` | [Get a dataset](/developer-docs/api/get-a-dataset) | `/api/v1/dataset/{pk}` |
| `PUT` | [Update a dataset](/developer-docs/api/update-a-dataset) | `/api/v1/dataset/{pk}` |
| `DELETE` | [Delete a dataset column](/developer-docs/api/delete-a-dataset-column) | `/api/v1/dataset/{pk}/column/{column_id}` |
| `DELETE` | [Delete a dataset metric](/developer-docs/api/delete-a-dataset-metric) | `/api/v1/dataset/{pk}/metric/{metric_id}` |
| `PUT` | [Refresh and update columns of a dataset](/developer-docs/api/refresh-and-update-columns-of-a-dataset) | `/api/v1/dataset/{pk}/refresh` |
| `GET` | [Get charts and dashboards count associated to a dataset](/developer-docs/api/get-charts-and-dashboards-count-associated-to-a-dataset) | `/api/v1/dataset/{pk}/related_objects` |
| `GET` | [Get distinct values from field data (dataset-distinct-column-name)](/developer-docs/api/get-distinct-values-from-field-data-dataset-distinct-column-name) | `/api/v1/dataset/distinct/{column_name}` |
| `POST` | [Duplicate a dataset](/developer-docs/api/duplicate-a-dataset) | `/api/v1/dataset/duplicate` |
| `GET` | [Get all schemas from a database](/developer-docs/api/get-all-schemas-from-a-database) | `/api/v1/database/{pk}/schemas/` |
| `GET` | [Get database select star for table (database-pk-select-star-table-name)](/developer-docs/api/get-database-select-star-for-table-database-pk-select-star-table-name) | `/api/v1/database/{pk}/select_star/{table_name}/` |
| `GET` | [Get database select star for table (database-pk-select-star-table-name-schema-name)](/developer-docs/api/get-database-select-star-for-table-database-pk-select-star-table-name-schema-name) | `/api/v1/database/{pk}/select_star/{table_name}/{schema_name}/` |
| `DELETE` | [Delete a SSH tunnel](/developer-docs/api/delete-a-ssh-tunnel) | `/api/v1/database/{pk}/ssh_tunnel/` |
| `POST` | [Re-sync all permissions for a database connection](/developer-docs/api/re-sync-all-permissions-for-a-database-connection) | `/api/v1/database/{pk}/sync_permissions/` |
| `GET` | [Get names of databases currently available](/developer-docs/api/get-names-of-databases-currently-available) | `/api/v1/database/available/` |
| `GET` | [Download database(s) and associated dataset(s) as a zip file](/developer-docs/api/download-database-s-and-associated-dataset-s-as-a-zip-file) | `/api/v1/database/export/` |
<summary><strong>Datasources</strong> (2 endpoints) — Query datasource metadata and column values.</summary>
| Method | Description | Endpoint |
|--------|-------------|----------|
| `GET` | [Get possible values for a datasource column](/developer-docs/api/get-possible-values-for-a-datasource-column) | `/api/v1/datasource/{datasource_type}/{datasource_id}/column/{column_name}/values/` |
| `POST` | [Validate a SQL expression against a datasource](/developer-docs/api/validate-a-sql-expression-against-a-datasource) | `/api/v1/datasource/{datasource_type}/{datasource_id}/validate_expression/` |
</details>
<details>
<summary><strong>Advanced Data Type</strong> (2 endpoints) — Advanced data type operations and conversions.</summary>
| Method | Description | Endpoint |
|--------|-------------|----------|
| `GET` | [Return an AdvancedDataTypeResponse](/developer-docs/api/return-an-advanced-data-type-response) | `/api/v1/advanced_data_type/convert` |
| `GET` | [Return a list of available advanced data types](/developer-docs/api/return-a-list-of-available-advanced-data-types) | `/api/v1/advanced_data_type/types` |
| `POST` | [Create a new permanent link (explore-permalink)](/developer-docs/api/create-a-new-permanent-link-explore-permalink) | `/api/v1/explore/permalink` |
| `POST` | [Create a new permanent link (sqllab-permalink)](/developer-docs/api/create-a-new-permanent-link-sqllab-permalink) | `/api/v1/sqllab/permalink` |
| `GET` | [Get permanent link state for SQLLab editor.](/developer-docs/api/get-permanent-link-state-for-sql-lab-editor) | `/api/v1/sqllab/permalink/{key}` |
| `GET` | [Get a list of themes](/developer-docs/api/get-a-list-of-themes) | `/api/v1/theme/` |
| `POST` | [Create a theme](/developer-docs/api/create-a-theme) | `/api/v1/theme/` |
| `GET` | [Get metadata information about this API resource (theme--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-theme-info) | `/api/v1/theme/_info` |
| `DELETE` | [Delete a theme](/developer-docs/api/delete-a-theme) | `/api/v1/theme/{pk}` |
| `GET` | [Get a theme](/developer-docs/api/get-a-theme) | `/api/v1/theme/{pk}` |
| `PUT` | [Update a theme](/developer-docs/api/update-a-theme) | `/api/v1/theme/{pk}` |
| `PUT` | [Set a theme as the system dark theme](/developer-docs/api/set-a-theme-as-the-system-dark-theme) | `/api/v1/theme/{pk}/set_system_dark` |
| `PUT` | [Set a theme as the system default theme](/developer-docs/api/set-a-theme-as-the-system-default-theme) | `/api/v1/theme/{pk}/set_system_default` |
| `DELETE` | [Delete security user registrations by pk](/developer-docs/api/delete-security-user-registrations-by-pk) | `/api/v1/security/user_registrations/{pk}` |
| `GET` | [Get security user registrations by pk](/developer-docs/api/get-security-user-registrations-by-pk) | `/api/v1/security/user_registrations/{pk}` |
| `PUT` | [Update security user registrations by pk](/developer-docs/api/update-security-user-registrations-by-pk) | `/api/v1/security/user_registrations/{pk}` |
| `GET` | [Get distinct values from field data (security-user-registrations-distinct-column-name)](/developer-docs/api/get-distinct-values-from-field-data-security-user-registrations-distinct-column-name) | `/api/v1/security/user_registrations/distinct/{column_name}` |
| `GET` | [Get related fields data (security-user-registrations-related-column-name)](/developer-docs/api/get-related-fields-data-security-user-registrations-related-column-name) | `/api/v1/security/user_registrations/related/{column_name}` |
Frontend assets (TypeScript, JavaScript, CSS, and images) must be compiled in order for the web UI to render properly.
First, be sure you are using the following versions of Node.js and npm:
- `Node.js`: Version 22 (LTS)
- `npm`: Version 10
We recommend using [nvm](https://github.com/nvm-sh/nvm) to manage your node environment:
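For example (version numbers per the requirements above):

```bash
nvm install 22
nvm use 22
```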