A plain string value (e.g. FAB_API_KEY_PREFIXES = "sst_") would iterate
as individual characters ['s','s','t','_'], matching far too many tokens.
Wrap strings in a list at the config-read boundary so CompositeTokenVerifier
always receives a proper sequence regardless of how the config is set.
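A minimal sketch of the boundary normalization (function name hypothetical; the real plumbing sits at Superset's config-read boundary):

```python
from collections.abc import Sequence

def normalize_prefixes(value):
    """Coerce FAB_API_KEY_PREFIXES to a list of prefix strings.

    A bare string like "sst_" must become ["sst_"]; iterating the
    string itself would yield the characters 's', 's', 't', '_'.
    """
    if isinstance(value, str):
        return [value]
    if isinstance(value, Sequence):
        return list(value)
    return []
```

CompositeTokenVerifier then receives a proper sequence whether the config was set to `"sst_"` or `["sst_"]`.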
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Remove mock_sm.find_user_with_relationships.return_value = None from
_mock_sm_ctx: load_user_with_relationships delegates to the global
security_manager (not app.appbuilder.sm), so setting it on mock_sm had
no effect and broke MagicMock(spec=[]) tests.
- Add _patch_load_user_not_found() helper that patches
superset.mcp_service.auth.load_user_with_relationships directly.
- Apply it to the 3 JWT-path tests that expect ValueError("not found"):
test_jwt_access_token_skips_api_key_auth,
test_namespaced_claim_without_api_key_client_id_is_ignored,
test_unnamespaced_passthrough_claim_does_not_trigger_api_key_path.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Empty-string prefixes match every Bearer token (DoS/misclassification vector).
Non-string entries cause TypeError in str.startswith(). Filter both in __init__,
warn on invalid entries, and only store valid non-empty string prefixes.
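Sketch of the filter (function name illustrative; the real check lives in `CompositeTokenVerifier.__init__`):

```python
import logging

logger = logging.getLogger(__name__)

def filter_prefixes(raw_prefixes):
    """Keep only non-empty string prefixes.

    An empty string matches every Bearer token via startswith(""),
    and non-string entries raise TypeError inside str.startswith().
    """
    valid = []
    for prefix in raw_prefixes:
        if isinstance(prefix, str) and prefix:
            valid.append(prefix)
        else:
            logger.warning("Ignoring invalid API key prefix: %r", prefix)
    return valid
```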
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
_mock_sm_ctx now sets find_user_with_relationships.return_value = None so
JWT/dev-user lookups that delegate through the (now refactored)
load_user_with_relationships → security_manager.find_user_with_relationships
path behave as "user not found" in unit tests that don't patch the DB — matching
the behavior of the previous direct db.session.query() implementation.
Without this, tests that expected ValueError("not found") received a truthy
MagicMock() from the unspecified mock method, causing them to fail.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fixes a gap identified in code review: the standalone load_user_with_relationships()
in auth.py duplicated SecurityManager.find_user() logic but dropped two FAB behaviors:
- auth_username_ci (case-insensitive username lookup)
- MultipleResultsFound guard (username uniqueness not guaranteed at DB level in all FAB versions)
It also hard-coded User/Group models instead of sm.user_model.
Changes:
- Add SupersetSecurityManager.find_user_with_relationships() to security/manager.py,
mirroring FAB's find_user() (auth_username_ci, MultipleResultsFound handling,
self.user_model) and adding eager loading of roles and group.roles via joinedload
- Simplify load_user_with_relationships() in auth.py to a thin delegate to the
new method, removing the duplicated query logic and raw Group/User imports
- Add regression test asserting find_user_with_relationships() exists on the SM
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- _tool_allowed_for_current_user (server.py): catch PermissionError
alongside ValueError so invalid API keys return False instead of
propagating through the tool-search permission filter
- _setup_user_context (auth.py): catch PermissionError alongside
ValueError so g.user is cleared and the error is logged consistently
regardless of which failure type get_user_from_request() raises
- _resolve_user_from_api_key (auth.py): require client_id=="api_key"
(set by CompositeTokenVerifier) in addition to API_KEY_PASSTHROUGH_CLAIM
to prevent an external IdP JWT that happens to include the claim name
from being misclassified as an API-key pass-through (DoS vector)
- _resolve_user_from_jwt_context (auth.py): same client_id guard so
a rogue-claim JWT continues through JWT resolution instead of deferring
to the API-key path (which would raise PermissionError for the user)
- _resolve_user_from_api_key (auth.py): raise PermissionError (not
return None) when the pass-through claim is present but the raw token
is absent — fail closed rather than falling through to weaker auth
- Tests: set client_id="api_key" on _passthrough_access_token helper;
update test_jwt_context_with_api_key_passthrough_returns_none docstring;
add test for namespaced claim on non-API-key client_id being ignored
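The combined guard can be sketched as follows (the claim constant is from auth.py; the flat claims-dict shape and the `raw_token` key are illustrative stand-ins):

```python
# Claim name from auth.py; "raw_token" is an illustrative stand-in
# for wherever the pass-through raw key actually travels.
API_KEY_PASSTHROUGH_CLAIM = "_superset_mcp_api_key_passthrough"

def classify_token(claims: dict) -> str:
    """Route a verified token to the API-key path or the JWT path."""
    if (
        claims.get(API_KEY_PASSTHROUGH_CLAIM)
        # client_id == "api_key" is set only by CompositeTokenVerifier
        and claims.get("client_id") == "api_key"
    ):
        if not claims.get("raw_token"):
            # Fail closed: claim present but no raw key to validate.
            raise PermissionError("API key pass-through without raw token")
        return "api_key"
    # An IdP JWT that merely contains the claim name stays on the JWT path.
    return "jwt"
```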
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Use superset.mcp_service.auth.has_request_context as patch target in
test_mcp_auth_hook_clears_stale_g_user tests; patching flask.has_request_context
has no effect on the module-level import already bound in auth.py
- Update test_jwt_access_token_skips_api_key_auth docstring to reference
API_KEY_PASSTHROUGH_CLAIM instead of the legacy _api_key_passthrough name
- Add noqa: BLE001 to broad exception catch in mcp_config.py to document
that the wide catch is intentional (JWT libs raise many types, secrets guard)
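The patch-target rule ("patch where it's looked up, not where it's defined") in miniature, using stand-in modules:

```python
import sys
import types
from unittest import mock

# Stand-ins for flask and auth.py: auth binds the name at import time.
flaskish = types.ModuleType("flaskish")
flaskish.has_request_context = lambda: True
sys.modules["flaskish"] = flaskish

authmod = types.ModuleType("authmod")
exec(
    "from flaskish import has_request_context\n"
    "def in_ctx():\n"
    "    return has_request_context()\n",
    authmod.__dict__,
)
sys.modules["authmod"] = authmod

# Patching the source module misses the name already bound in authmod...
with mock.patch("flaskish.has_request_context", return_value=False):
    assert authmod.in_ctx() is True
# ...patching the importing module's copy is what takes effect.
with mock.patch("authmod.has_request_context", return_value=False):
    assert authmod.in_ctx() is False
```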
DetailedJWTVerifier and JWTVerifier have no circular-import or optional-
dependency reason to be imported inline — fastmcp is already pulled in
at module top via composite_token_verifier, and authlib is already a
hard dependency. Moving them up for consistency with the rest of the
module's imports.
``superset init`` calls ``appbuilder.add_permissions(update_perms=True)``
before ``sync_role_definitions()`` (cli/main.py:84), which forces FAB to
walk all registered baseviews — including ``ApiKeyApi`` (registered when
``FAB_API_KEY_ENABLED=True``) — and create their PVMs via
``add_permissions_view``. The explicit ``add_permission_view_menu`` calls
in ``create_custom_permissions`` were redundant.
With ``"ApiKey"`` already in ``ADMIN_ONLY_VIEW_MENUS``, the role
predicate ``_is_admin_only`` gates the auto-created PVMs to Admin.
Per Daniel Gaspar's review: "Adding ApiKey to ADMIN_ONLY_VIEW_MENUS
should just work when FAB_API_KEY_ENABLED is True".
The API_KEY_PASSTHROUGH_CLAIM constant in auth.py and CompositeTokenVerifier
in mcp_config.py have no circular-import or optional-dependency reason to
be imported inline. Moved them to module top.
Three independent bugs let MCP requests presenting Bearer tokens with the
sst_ prefix authenticate as MCP_DEV_USERNAME without any validation under
streamable-http:
1. _resolve_user_from_api_key read the token from flask.request.headers,
but the streamable-http transport never pushes a Flask request context
— has_request_context() was always False, so the function returned
None before validating, falling through to the dev-user fallback.
Now reads the token from FastMCP's per-request AccessToken (which the
CompositeTokenVerifier already populated) and fails closed when the
key is invalid.
2. CompositeTokenVerifier was only installed when MCP_AUTH_ENABLED=True.
With FAB_API_KEY_ENABLED=True alone, no transport-level verifier
existed at all. The factory now builds an API-key-only verifier in
that case (jwt_verifier=None) that rejects non-API-key Bearer tokens
at the transport instead of silently accepting them.
3. The pass-through AccessToken was minted with scopes=[], which would
make FastMCP's RequireAuthMiddleware 403 every API-key request when
MCP_REQUIRED_SCOPES is non-empty. Pass-through now propagates
self.required_scopes.
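The dispatch logic, schematically (simplified stand-ins; the real verifier integrates with FastMCP and returns its AccessToken model):

```python
from dataclasses import dataclass, field

@dataclass
class AccessToken:  # simplified stand-in for FastMCP's token model
    token: str
    client_id: str
    scopes: list = field(default_factory=list)

class CompositeTokenVerifierSketch:
    def __init__(self, api_key_prefixes, jwt_verifier=None, required_scopes=()):
        self.api_key_prefixes = list(api_key_prefixes)
        self.jwt_verifier = jwt_verifier  # None => API-key-only mode
        self.required_scopes = list(required_scopes)

    async def verify_token(self, token: str):
        if any(token.startswith(p) for p in self.api_key_prefixes):
            # Pass through with the marker client_id and the required
            # scopes so scope middleware does not 403 the request;
            # real key validation happens later at the Flask layer.
            return AccessToken(
                token=token, client_id="api_key", scopes=self.required_scopes
            )
        if self.jwt_verifier is not None:
            return await self.jwt_verifier.verify_token(token)
        return None  # reject non-API-key tokens in API-key-only mode
```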
Also addresses Daniel's review comment on superset/security/manager.py:
adds "ApiKey" to ADMIN_ONLY_VIEW_MENUS so the FAB ApiKeyApi PVMs are
gated to Admin instead of leaking to Alpha and Gamma.
Renames the pass-through claim from _api_key_passthrough to the
namespaced _superset_mcp_api_key_passthrough (exported as
API_KEY_PASSTHROUGH_CLAIM) so a custom claim from an external IdP can't
accidentally divert a JWT into the API-key validation path.
Tests updated to mock get_access_token instead of app.test_request_context
(the simulated Flask context was the reason the prior tests passed while
production failed). New tests cover API-key-only verifier mode, scope
propagation on pass-through, and the namespaced-claim isolation.
Wire CompositeTokenVerifier into create_default_mcp_auth_factory,
add _api_key_passthrough detection in _resolve_user_from_jwt_context,
create ApiKey permissions in create_custom_permissions, and update
test_auth_api_key with pass-through and non-matching prefix tests.
Two fixes for MCP API key authentication:
1. superset init now creates ApiKey FAB permissions (can_list, can_create,
can_get, can_delete) when FAB_API_KEY_ENABLED=True. Previously, because
Superset uses AppBuilder(update_perms=False), FAB skipped permission
creation during blueprint registration and superset init never picked
them up, causing 403 errors on /api/v1/security/api_keys/.
2. CompositeTokenVerifier allows API key tokens (e.g. sst_...) to coexist
with JWT auth on the MCP transport layer. Previously, when
MCP_AUTH_ENABLED=True, the JWTVerifier rejected all non-JWT Bearer
tokens at the transport layer before they could reach the Flask-level
_resolve_user_from_api_key() handler. The composite verifier detects
API key prefixes and passes them through with a marker claim, letting
the existing auth priority chain handle validation.
When the feature flag is enabled, these permissions are enforced on both the frontend (disabled buttons with tooltips) and backend (403 responses from API endpoints). When disabled, legacy `can_csv` behavior is preserved.
**Migration behavior:** All three new permissions are granted to every role that currently has `can_csv`, preserving existing access. Admins can then selectively revoke individual export permissions from specific roles as needed.
### Deck.gl MapBox viewport and opacity controls are functional
The Deck.gl MapBox chart's **Opacity**, **Default longitude**, **Default latitude**, and **Zoom** controls were previously non-functional — changing them had no effect on the rendered map. These controls are now wired up correctly.
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
---
title: AWS IAM Authentication
sidebar_label: AWS IAM Authentication
sidebar_position: 15
---
# AWS IAM Authentication for AWS Databases
Superset supports IAM-based authentication for **Amazon Aurora** (PostgreSQL and MySQL) and **Amazon Redshift**. IAM auth eliminates the need for database passwords — Superset generates a short-lived auth token using temporary AWS credentials instead.
Cross-account IAM role assumption via STS `AssumeRole` is supported, allowing a Superset deployment in one AWS account to connect to databases in a different account.
## Prerequisites
- Enable the `AWS_DATABASE_IAM_AUTH` feature flag in `superset_config.py`. IAM authentication is gated behind this flag; if it is disabled, connections using `aws_iam` fail with *"AWS IAM database authentication is not enabled."*
```python
FEATURE_FLAGS = {
"AWS_DATABASE_IAM_AUTH": True,
}
```
- `boto3` must be installed in your Superset environment:
```bash
pip install boto3
```
- The Superset server's IAM role (or static credentials) must have permission to call `sts:AssumeRole` (for cross-account) or the same-account permissions for the target service:
- **Redshift Serverless**: `redshift-serverless:GetCredentials` and `redshift-serverless:GetWorkgroup`
- SSL must be enabled on the Aurora / Redshift endpoint (required for IAM token auth).
## Configuration
IAM authentication is configured via the **encrypted_extra** field of the database connection. Access this field in the **Advanced** → **Security** section of the database connection form, under **Secure Extra**.
**3. Configure the database connection in Superset** using the `role_arn` and `external_id` from the trust policy (as shown in the configuration example above).
## Credential Caching
STS credentials are cached in memory keyed by `(role_arn, region, external_id)` with a 10-minute TTL. This reduces the number of STS API calls when multiple queries are executed with the same connection. Tokens are refreshed automatically before expiry.
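The caching scheme can be sketched as follows (function and cache names hypothetical):

```python
import time

_STS_CACHE: dict = {}
_TTL_SECONDS = 600  # 10-minute TTL

def get_cached_credentials(role_arn, region, external_id, fetch):
    """Return STS credentials cached by (role_arn, region, external_id),
    calling fetch() only when the entry is missing or older than the TTL."""
    key = (role_arn, region, external_id)
    entry = _STS_CACHE.get(key)
    now = time.monotonic()
    if entry is None or now - entry[0] >= _TTL_SECONDS:
        _STS_CACHE[key] = (now, fetch())
    return _STS_CACHE[key][1]
```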
The Superset CLI allows you to import and export datasources to and from YAML. Datasources include
databases. The data is expected to be organized in the following hierarchy:
:::info
Superset's ZIP-based import/export also covers **dashboards**, **charts**, and **saved queries**, exercised through the UI and REST API. The [Dashboard Import Overwrite Behavior](#dashboard-import-overwrite-behavior) and [UUIDs in API Responses](#uuids-in-api-responses) sections below document the behavior shared across all asset types.
:::
```text
├──databases
| ├──database_1
```

The optional username flag **-u** sets the user used for the datasource import.
When importing a dashboard ZIP with the **overwrite** option enabled, any existing charts that are part of the dashboard are **replaced** rather than duplicated. This applies to:
- Charts whose UUID matches a chart already present in the target instance
- The full chart configuration (query, visualization type, columns, metrics) is replaced by the imported version
If you import without the overwrite flag, existing charts with conflicting UUIDs are left unchanged and the import skips those objects. Use overwrite when you want to push a fully updated dashboard (including chart definitions) from a development or staging environment to production.
## UUIDs in API Responses
The REST API POST endpoints for **datasets**, **charts**, and **dashboards** include the auto-generated `uuid` field in the response body:
```json
{
"id": 42,
"uuid": "b8a8d5c3-1234-4abc-8def-0123456789ab",
...
}
```
UUIDs remain stable across import/export cycles and can be used for cross-environment workflows — for example, recording a UUID when creating a chart in development and using it to identify the matching chart after importing into production.
## Legacy Importing Datasources
### From older versions of Superset to current version
All MCP settings go in `superset_config.py`. Defaults are defined in `superset/m
| `MCP_SERVICE_URL` | `None` | Public base URL for MCP-generated links (set this when behind a reverse proxy) |
| `MCP_DEBUG` | `False` | Enable debug logging |
| `MCP_DEV_USERNAME` | -- | Superset username for development mode (no auth) |
| `MCP_PARSE_REQUEST_ENABLED` | `True` | Pre-parse MCP tool inputs from JSON strings into objects. Set to `False` for clients (Claude Desktop, LangChain) that do not double-serialize arguments — this produces cleaner tool schemas for those clients |
### Authentication
---
## Audit Events
All MCP tool calls are logged to Superset's event logger, the same system used by the web UI (viewable at **Settings → Action Log**). Each event captures:
- **User**: the resolved Superset username from the JWT or dev config
- **Timestamp**: when the operation ran
This means MCP activity is auditable alongside normal user activity. No additional configuration is required — logging is on by default whenever the event logger is enabled in your Superset deployment.
## Tool Pagination
MCP list tools (`list_datasets`, `list_charts`, `list_dashboards`, `list_databases`) use **offset pagination** via `page` (1-based) and `page_size` parameters. Responses include `page`, `page_size`, `total_count`, `total_pages`, `has_previous`, and `has_next`. To iterate through all results:
```python
# Example: fetch all charts across pages
all_charts = []
page = 1
while True:
    result = mcp.list_charts(page=page, page_size=50)
    all_charts.extend(result["charts"])
    if not result.get("has_next"):
        break
    page += 1
```
## Security Best Practices
- **Use TLS** for all production MCP endpoints -- place the server behind a reverse proxy with HTTPS
There are two approaches to making dashboards publicly accessible:
3. Edit each dashboard's properties and add the "Public" role
4. Only dashboards with the Public role explicitly assigned are visible to anonymous users
See the [Public role documentation](/admin-docs/security/#public) for more details.
#### Embedding a Public Dashboard
This flag only hides the logout button when Superset detects it is running inside an iframe. Users accessing Superset directly (not embedded) will still see the logout button regardless of this setting.
:::note
When embedding with SSO, also set `SESSION_COOKIE_SAMESITE = 'None'` and `SESSION_COOKIE_SECURE = True`. See [Security documentation](/admin-docs/security/securing_superset) for details.
To help make the problem somewhat tractable—given that Apache Superset has no
To strive for data consistency (regardless of the timezone of the client) the Apache Superset backend tries to ensure that any timestamp sent to the client has an explicit (or semi-explicit as in the case with [Epoch time](https://en.wikipedia.org/wiki/Unix_time) which is always in reference to UTC) timezone encoded within.
The challenge however lies with the slew of [database engines](/user-docs/databases#installing-drivers-in-docker) which Apache Superset supports and various inconsistencies between their [Python Database API (DB-API)](https://www.python.org/dev/peps/pep-0249/) implementations combined with the fact that we use [Pandas](https://pandas.pydata.org/) to read SQL into a DataFrame prior to serializing to JSON. Regrettably Pandas ignores the DB-API [type_code](https://www.python.org/dev/peps/pep-0249/#type-objects) relying by default on the underlying Python type returned by the DB-API. Currently only a subset of the supported database engines work correctly with Pandas, i.e., ensuring timestamps without an explicit timestamp are serialized to JSON with the server timezone, thus guaranteeing the client will display timestamps in a consistent manner irrespective of the client's timezone.
For example the following is a comparison of MySQL and Presto,
| `POST` | [Create a new dashboard](/developer-docs/api/create-a-new-dashboard) | `/api/v1/dashboard/` |
| `GET` | [Get metadata information about this API resource (dashboard--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-dashboard-info) | `/api/v1/dashboard/_info` |
| `POST` | [Create a copy of an existing dashboard](/developer-docs/api/create-a-copy-of-an-existing-dashboard) | `/api/v1/dashboard/{id_or_slug}/copy/` |
| `DELETE` | [Delete a dashboard](/developer-docs/api/delete-a-dashboard) | `/api/v1/dashboard/{pk}` |
| `PUT` | [Update a dashboard](/developer-docs/api/update-a-dashboard) | `/api/v1/dashboard/{pk}` |
| `POST` | [Compute and cache a screenshot (dashboard-pk-cache-dashboard-screenshot)](/developer-docs/api/compute-and-cache-a-screenshot-dashboard-pk-cache-dashboard-screenshot) | `/api/v1/dashboard/{pk}/cache_dashboard_screenshot/` |
| `PUT` | [Update chart customizations configuration for a dashboard.](/developer-docs/api/update-chart-customizations-configuration-for-a-dashboard) | `/api/v1/dashboard/{pk}/chart_customizations` |
| `PUT` | [Update colors configuration for a dashboard.](/developer-docs/api/update-colors-configuration-for-a-dashboard) | `/api/v1/dashboard/{pk}/colors` |
| `GET` | [Export dashboard as example bundle](/developer-docs/api/export-dashboard-as-example-bundle) | `/api/v1/dashboard/{pk}/export_as_example/` |
| `DELETE` | [Remove the dashboard from the user favorite list](/developer-docs/api/remove-the-dashboard-from-the-user-favorite-list) | `/api/v1/dashboard/{pk}/favorites/` |
| `POST` | [Mark the dashboard as favorite for the current user](/developer-docs/api/mark-the-dashboard-as-favorite-for-the-current-user) | `/api/v1/dashboard/{pk}/favorites/` |
| `PUT` | [Update native filters configuration for a dashboard.](/developer-docs/api/update-native-filters-configuration-for-a-dashboard) | `/api/v1/dashboard/{pk}/filters` |
| `GET` | [Get a computed screenshot from cache (dashboard-pk-screenshot-digest)](/developer-docs/api/get-a-computed-screenshot-from-cache-dashboard-pk-screenshot-digest) | `/api/v1/dashboard/{pk}/screenshot/{digest}/` |
| `GET` | [Get a list of charts](/developer-docs/api/get-a-list-of-charts) | `/api/v1/chart/` |
| `POST` | [Create a new chart](/developer-docs/api/create-a-new-chart) | `/api/v1/chart/` |
| `GET` | [Get metadata information about this API resource (chart--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-chart-info) | `/api/v1/chart/_info` |
| `GET` | [Get a list of datasets](/developer-docs/api/get-a-list-of-datasets) | `/api/v1/dataset/` |
| `POST` | [Create a new dataset](/developer-docs/api/create-a-new-dataset) | `/api/v1/dataset/` |
| `GET` | [Get metadata information about this API resource (dataset--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-dataset-info) | `/api/v1/dataset/_info` |
| `GET` | [Get a dataset](/developer-docs/api/get-a-dataset) | `/api/v1/dataset/{id_or_uuid}` |
| `GET` | [Get charts and dashboards count associated to a dataset](/developer-docs/api/get-charts-and-dashboards-count-associated-to-a-dataset) | `/api/v1/dataset/{id_or_uuid}/related_objects` |
| `DELETE` | [Delete a dataset](/developer-docs/api/delete-a-dataset) | `/api/v1/dataset/{pk}` |
| `GET` | [Get a dataset](/developer-docs/api/get-a-dataset) | `/api/v1/dataset/{pk}` |
| `PUT` | [Update a dataset](/developer-docs/api/update-a-dataset) | `/api/v1/dataset/{pk}` |
| `DELETE` | [Delete a dataset column](/developer-docs/api/delete-a-dataset-column) | `/api/v1/dataset/{pk}/column/{column_id}` |
| `DELETE` | [Delete a dataset metric](/developer-docs/api/delete-a-dataset-metric) | `/api/v1/dataset/{pk}/metric/{metric_id}` |
| `PUT` | [Refresh and update columns of a dataset](/developer-docs/api/refresh-and-update-columns-of-a-dataset) | `/api/v1/dataset/{pk}/refresh` |
| `GET` | [Get charts and dashboards count associated to a dataset](/developer-docs/api/get-charts-and-dashboards-count-associated-to-a-dataset) | `/api/v1/dataset/{pk}/related_objects` |
| `GET` | [Get distinct values from field data (dataset-distinct-column-name)](/developer-docs/api/get-distinct-values-from-field-data-dataset-distinct-column-name) | `/api/v1/dataset/distinct/{column_name}` |
| `POST` | [Duplicate a dataset](/developer-docs/api/duplicate-a-dataset) | `/api/v1/dataset/duplicate` |
| `GET` | [Get all schemas from a database](/developer-docs/api/get-all-schemas-from-a-database) | `/api/v1/database/{pk}/schemas/` |
| `GET` | [Get database select star for table (database-pk-select-star-table-name)](/developer-docs/api/get-database-select-star-for-table-database-pk-select-star-table-name) | `/api/v1/database/{pk}/select_star/{table_name}/` |
| `GET` | [Get database select star for table (database-pk-select-star-table-name-schema-name)](/developer-docs/api/get-database-select-star-for-table-database-pk-select-star-table-name-schema-name) | `/api/v1/database/{pk}/select_star/{table_name}/{schema_name}/` |
| `DELETE` | [Delete a SSH tunnel](/developer-docs/api/delete-a-ssh-tunnel) | `/api/v1/database/{pk}/ssh_tunnel/` |
| `POST` | [Re-sync all permissions for a database connection](/developer-docs/api/re-sync-all-permissions-for-a-database-connection) | `/api/v1/database/{pk}/sync_permissions/` |
| `GET` | [Get names of databases currently available](/developer-docs/api/get-names-of-databases-currently-available) | `/api/v1/database/available/` |
| `GET` | [Download database(s) and associated dataset(s) as a zip file](/developer-docs/api/download-database-s-and-associated-dataset-s-as-a-zip-file) | `/api/v1/database/export/` |
<summary><strong>Datasources</strong> (2 endpoints) — Query datasource metadata and column values.</summary>
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | [Get possible values for a datasource column](/developer-docs/api/get-possible-values-for-a-datasource-column) | `/api/v1/datasource/{datasource_type}/{datasource_id}/column/{column_name}/values/` |
| `POST` | [Validate a SQL expression against a datasource](/developer-docs/api/validate-a-sql-expression-against-a-datasource) | `/api/v1/datasource/{datasource_type}/{datasource_id}/validate_expression/` |
</details>
<details>
<summary><strong>Advanced Data Type</strong> (2 endpoints) — Advanced data type operations and conversions.</summary>
| Method | Endpoint | Description |
|--------|----------|-------------|
| `GET` | [Return an AdvancedDataTypeResponse](/developer-docs/api/return-an-advanced-data-type-response) | `/api/v1/advanced_data_type/convert` |
| `GET` | [Return a list of available advanced data types](/developer-docs/api/return-a-list-of-available-advanced-data-types) | `/api/v1/advanced_data_type/types` |
| `POST` | [Create a new permanent link (explore-permalink)](/developer-docs/api/create-a-new-permanent-link-explore-permalink) | `/api/v1/explore/permalink` |
| `POST` | [Create a new permanent link (sqllab-permalink)](/developer-docs/api/create-a-new-permanent-link-sqllab-permalink) | `/api/v1/sqllab/permalink` |
| `GET` | [Get permanent link state for SQLLab editor.](/developer-docs/api/get-permanent-link-state-for-sql-lab-editor) | `/api/v1/sqllab/permalink/{key}` |
| `GET` | [Get a list of themes](/developer-docs/api/get-a-list-of-themes) | `/api/v1/theme/` |
| `POST` | [Create a theme](/developer-docs/api/create-a-theme) | `/api/v1/theme/` |
| `GET` | [Get metadata information about this API resource (theme--info)](/developer-docs/api/get-metadata-information-about-this-api-resource-theme-info) | `/api/v1/theme/_info` |
| `DELETE` | [Delete a theme](/developer-docs/api/delete-a-theme) | `/api/v1/theme/{pk}` |
| `GET` | [Get a theme](/developer-docs/api/get-a-theme) | `/api/v1/theme/{pk}` |
| `PUT` | [Update a theme](/developer-docs/api/update-a-theme) | `/api/v1/theme/{pk}` |
| `PUT` | [Set a theme as the system dark theme](/developer-docs/api/set-a-theme-as-the-system-dark-theme) | `/api/v1/theme/{pk}/set_system_dark` |
| `PUT` | [Set a theme as the system default theme](/developer-docs/api/set-a-theme-as-the-system-default-theme) | `/api/v1/theme/{pk}/set_system_default` |
| `DELETE` | [Delete security user registrations by pk](/developer-docs/api/delete-security-user-registrations-by-pk) | `/api/v1/security/user_registrations/{pk}` |
| `GET` | [Get security user registrations by pk](/developer-docs/api/get-security-user-registrations-by-pk) | `/api/v1/security/user_registrations/{pk}` |
| `PUT` | [Update security user registrations by pk](/developer-docs/api/update-security-user-registrations-by-pk) | `/api/v1/security/user_registrations/{pk}` |
| `GET` | [Get distinct values from field data (security-user-registrations-distinct-column-name)](/developer-docs/api/get-distinct-values-from-field-data-security-user-registrations-distinct-column-name) | `/api/v1/security/user_registrations/distinct/{column_name}` |
| `GET` | [Get related fields data (security-user-registrations-related-column-name)](/developer-docs/api/get-related-fields-data-security-user-registrations-related-column-name) | `/api/v1/security/user_registrations/related/{column_name}` |
Extension developers have access to pre-built UI components via `@apache-superset/core/components`. Browse all available components on the [UI Components](/developer-docs/components/) page and filter by **Extension Compatible** to see components available to extensions.
Superset provides granular, permission-based controls for data export, image export, and clipboard operations. These replace the legacy `can_csv` permission with three fine-grained permissions that can be assigned independently to roles.
## Feature Flag
Granular export controls are gated behind the `GRANULAR_EXPORT_CONTROLS` feature flag. When the flag is disabled, the legacy `can_csv` permission behavior is preserved.
| `can_export_data` | `Superset` | CSV, Excel, and JSON data exports from charts, dashboards, and SQL Lab |
| `can_export_image` | `Superset` | Screenshot (JPEG/PNG) and PDF exports from charts and dashboards |
| `can_copy_clipboard` | `Superset` | Copy-to-clipboard operations in SQL Lab and the Explore data pane |
## Default Role Assignments
The migration grants all three new permissions (`can_export_data`, `can_export_image`, `can_copy_clipboard`) to every role that currently has `can_csv`. This preserves existing behavior — no role loses access during the upgrade.
After the migration, admins can selectively revoke individual export permissions from any role to restrict access. For example, to prevent Gamma users from exporting data or images while still allowing clipboard operations, revoke `can_export_data` and `can_export_image` from the Gamma role.
## Configuration Steps
1. **Enable the feature flag** in `superset_config.py`:
```python
FEATURE_FLAGS = {
"GRANULAR_EXPORT_CONTROLS": True,
}
```
2. **Run the database migration** to register the new permissions:
```bash
superset db upgrade
```
3. **Initialize permissions** so roles are populated:
```bash
superset init
```
4. **Verify role assignments** in **Settings > List Roles**. Confirm that each role has the expected permissions from the table above.
5. **Customize as needed**: Grant or revoke individual export permissions on any role through the role editor.
## User Experience
When a user lacks a required export permission:
- **Menu items** (CSV, Excel, JSON, screenshot) appear **disabled** with an info tooltip icon explaining the restriction
- **Buttons** (SQL Lab download, clipboard copy) appear **disabled** with a tooltip on hover
- **API endpoints** return **403 Forbidden** when the corresponding permission is missing
## API Enforcement
The following API endpoints enforce granular export permissions when the feature flag is enabled:
by clicking the **Connect** button in the bottom right corner of the modal window.
Congratulations, you've just added a new data source in Superset!
### Sharing a Database Connection
When adding a new database, you can share the connection with other Superset users. Shared connections appear in other users' database lists, making it easier to collaborate on the same data without requiring each user to configure the same connection separately.
To share a connection, enable the **Share connection with other users** option in the **Advanced** tab of the database connection modal before saving. You can change sharing settings later by editing the database connection.
### Registering a new table
Now that you’ve configured a data source, you can select specific tables (called **Datasets** in Superset)
We register the **cleaned_sales_data** table from the **examples** database.
To finish, click the **Add** button in the bottom right corner. You should now see your dataset in the list of datasets.
### Organizing Datasets into Folders
The Datasets list view supports **folders** for organizing datasets into groups. To create and manage folders:
1. In the **Datasets** list, click the **Folders** panel on the left sidebar.
2. Click **+ New Folder** to create a top-level folder, or drag an existing folder to nest it.
3. Drag dataset rows onto a folder to move them in, or right-click a dataset and select **Move to folder**.
Folders are per-user organizational aids — they do not affect dataset access permissions or how other users see the datasets.
### Uploading Files via the OS File Manager (PWA)
When Superset is installed as a **Progressive Web App (PWA)** from your browser, your operating system will offer Superset as an option when opening CSV, Excel (`.xls`/`.xlsx`), and Parquet files. Double-clicking or right-clicking a supported file and selecting "Open with Superset" navigates directly to the upload workflow for that file.
To install Superset as a PWA, look for the install icon in your browser's address bar (Chrome, Edge) when visiting your Superset instance over HTTPS. PWA installation requires HTTPS and a valid manifest — your admin needs to confirm the app manifest is served correctly.
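Under the hood, OS-level file handling is declared through the `file_handlers` member of the web app manifest. The sketch below shows the general shape of such a declaration; the `action` URL and exact MIME-type mappings are assumptions for illustration, not Superset's actual manifest:

```json
{
  "name": "Superset",
  "file_handlers": [
    {
      "action": "/upload",
      "accept": {
        "text/csv": [".csv"],
        "application/vnd.ms-excel": [".xls"],
        "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet": [".xlsx"],
        "application/vnd.apache.parquet": [".parquet"]
      }
    }
  ]
}
```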
### Customizing column properties
Now that you've registered your dataset, you can configure column properties
Access to dashboards is managed via owners and permissions. Non-owner access can be granted
through dataset permissions or dashboard-level roles (using the `DASHBOARD_RBAC` feature flag).
For detailed information on configuring dashboard access, see the
[Dashboard Access Control](/admin-docs/security/#dashboard-access-control) section.
### AG Grid Interactive Table
The **AG Grid Interactive Table** chart type is Superset's fully featured data grid, suited to large paginated datasets where the standard Table chart is not enough.
#### Server-Side Column Filters
AG Grid supports server-side column filters that query the full dataset — not just the loaded page. Filters are applied before data is sent to the browser, so results are correct even across millions of rows.
**Available filter types:**
| Column type | Filter options |
|---|---|
| Text | Contains, equals, starts with, ends with |
| Number | Equals, not equal, less than, greater than, between |
| Date | Before, after, between, blank |
| Set | Select from a list of distinct values |
**AND / OR logic:** Each column supports combining multiple conditions with AND or OR. Filters from different columns are always combined with AND.
**Interaction with pagination:** Server-side filters run as WHERE clauses in the underlying SQL query, so pagination always operates over the already-filtered result set.
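The combination rule can be sketched as follows: conditions within a column are joined by that column's chosen AND/OR, and the per-column clauses are always joined with AND. This is an illustrative sketch, not Superset's actual SQL generation:

```python
# Illustrative sketch of how per-column filter conditions combine into a
# WHERE clause: conditions inside one column use that column's AND/OR
# choice, while different columns are always joined with AND.
def build_where(filters: dict[str, dict]) -> str:
    column_clauses = []
    for column, spec in filters.items():
        joiner = f" {spec.get('operator', 'AND')} "
        conditions = joiner.join(f"{column} {cond}" for cond in spec["conditions"])
        column_clauses.append(f"({conditions})")
    return " AND ".join(column_clauses)
```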
#### Time Shift (Time Comparison)
AG Grid Interactive Table supports **Time Shift** (time comparison), matching the behavior of the standard Table chart. In the **Advanced Analytics** → **Time Comparison** section of the chart configuration, enter a shift expression (e.g., `1 year ago`, `minus 7 days`) to add comparison columns showing values from the offset period. Dashboard-level time range overrides apply to both the base and comparison periods.
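Conceptually, a shift expression offsets the chart's time window backwards and runs the same query over the offset window. A minimal sketch of that idea for a day-based shift (illustrative only, not Superset's shift-expression parser):

```python
from datetime import date, timedelta

# Minimal sketch: a "minus 7 days" comparison period is the base time range
# shifted back by the same offset. Illustrative, not Superset's parser.
def shifted_window(start: date, end: date, shift_days: int) -> tuple[date, date]:
    offset = timedelta(days=shift_days)
    return start - offset, end - offset
```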
### Dynamic Currency Formatting
Chart metric values can display currencies dynamically rather than using a fixed currency code. To enable:
1. Open the dataset editor for your dataset (**Datasets → Edit**).
2. In the **Advanced** tab, set **Currency Code Column** to the name of a column in your dataset that contains ISO 4217 currency codes (e.g., `USD`, `EUR`, `GBP`).
3. In the Explore chart configuration, open the metric's **Number format** section and select **Auto-detect** for currency.
When Auto-detect is active, each row uses the currency code from the designated column, so a single chart can display values in multiple currencies — each formatted correctly for its currency.
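The per-row behavior can be sketched like this; the symbol table and function are illustrative, not Superset's actual formatter:

```python
# Illustrative per-row currency formatting: each row is rendered with the
# symbol for its own ISO 4217 code instead of one fixed chart-level currency.
SYMBOLS = {"USD": "$", "EUR": "€", "GBP": "£"}

def format_value(value: float, code: str) -> str:
    symbol = SYMBOLS.get(code, code + " ")  # fall back to the raw code
    return f"{symbol}{value:,.2f}"
```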
### ECharts Option Editor
For ECharts-based chart types (line, bar, area, scatter, pie, and others), Explore includes an advanced **ECharts Option Editor** that accepts raw JSON overrides for the underlying ECharts configuration.
Access it via the **Customize** tab → **ECharts Options** section at the bottom of the panel. The JSON you enter is deep-merged on top of Superset's generated ECharts config, so you can override specific options without rewriting the entire config.
**Example:** override the legend position and add a custom title:
```json
{
  "legend": { "bottom": 0 },
  "title": { "text": "My Custom Title", "left": "center" }
}
```
:::caution
ECharts option overrides bypass Superset's validation layer. Invalid option keys are silently ignored by ECharts. Overrides that conflict with Superset-generated options (e.g., `series`) may produce unexpected results.
:::
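A deep merge keeps sibling keys from the generated config and replaces only the keys you supply. A minimal sketch of those semantics (assumed for illustration; the editor's exact merge rules, e.g. for arrays, may differ):

```python
# Sketch of deep-merging a JSON override onto a generated config: nested
# dicts merge key-by-key, while scalars and lists in the override replace
# the corresponding base value outright.
def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```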
### Table Chart: Exporting Filtered Data
When the **Search Box** is visible in a Table chart, the **Download** action exports only the rows currently visible after the search filter is applied — not the full underlying dataset. This matches the visual output and is intentional. To export the full dataset regardless of search state, use the **Download as CSV** option from the chart's three-dot menu in the dashboard or from the Explore chart toolbar before applying a search filter.
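The export's row selection can be sketched as the same substring match the Search Box applies. This is illustrative; Superset's actual matching may differ in case handling or which columns it scans:

```python
# Illustrative sketch: Download exports only the rows still visible after
# the search filter, i.e. rows where any cell contains the search text.
def visible_rows(rows: list[dict], search: str) -> list[dict]:
    needle = search.lower()
    return [
        row for row in rows
        if any(needle in str(value).lower() for value in row.values())
    ]
```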
:::resources
- [Dashboard Customization](https://docs.preset.io/docs/dashboard-customization) - Advanced dashboard styling and layout options
- [Blog: BI Dashboard Best Practices](https://preset.io/blog/bi-dashboard-best-practices/)
:::