Compare commits


17 Commits

Author SHA1 Message Date
Amin Ghadersohi
3e997cd5c2 fix(mcp): normalize FAB_API_KEY_PREFIXES from config before passing to CompositeTokenVerifier
A plain string value (e.g. FAB_API_KEY_PREFIXES = "sst_") would iterate
as individual characters ['s','s','t','_'], matching far too many tokens.
Wrap strings in a list at the config-read boundary so CompositeTokenVerifier
always receives a proper sequence regardless of how the config is set.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:08:34 +00:00
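A minimal sketch of the normalization this commit describes, applied at the config-read boundary (the helper name is hypothetical; the surrounding verifier wiring is assumed):

```python
from typing import Optional, Sequence, Union

def _normalize_prefixes(value: Union[str, Sequence[str], None]) -> list[str]:
    # A bare "sst_" would otherwise iterate as ['s', 's', 't', '_']
    # and match far too many tokens.
    if isinstance(value, str):
        return [value]
    return list(value or [])

# Usage at the boundary (config key from the commit message):
# prefixes = _normalize_prefixes(app.config.get("FAB_API_KEY_PREFIXES"))
# verifier = CompositeTokenVerifier(api_key_prefixes=prefixes, ...)
```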
Amin Ghadersohi
c1fcabbc55 fix(mcp): fix stale patch target in auth tests and update stale docstring
- Remove mock_sm.find_user_with_relationships.return_value = None from
  _mock_sm_ctx: load_user_with_relationships delegates to the global
  security_manager (not app.appbuilder.sm), so setting it on mock_sm had
  no effect and broke MagicMock(spec=[]) tests.
- Add _patch_load_user_not_found() helper that patches
  superset.mcp_service.auth.load_user_with_relationships directly.
- Apply it to the 3 JWT-path tests that expect ValueError("not found"):
  test_jwt_access_token_skips_api_key_auth,
  test_namespaced_claim_without_api_key_client_id_is_ignored,
  test_unnamespaced_passthrough_claim_does_not_trigger_api_key_path.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 19:05:42 +00:00
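One plausible shape of the `_patch_load_user_not_found()` helper described above (the exact exception message is an assumption):

```python
from contextlib import contextmanager
from unittest.mock import patch

@contextmanager
def _patch_load_user_not_found():
    # Patch the module-level name in auth.py, not app.appbuilder.sm,
    # so the delegation to the global security_manager is intercepted.
    with patch(
        "superset.mcp_service.auth.load_user_with_relationships",
        side_effect=ValueError("User not found"),
    ) as mocked:
        yield mocked
```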
Amin Ghadersohi
fd80f76661 fix(mcp): validate api_key_prefixes in CompositeTokenVerifier — filter empty/non-string entries
Empty-string prefixes match every Bearer token (DoS/misclassification vector).
Non-string entries cause TypeError in str.startswith(). Filter both in __init__,
warn on invalid entries, and only store valid non-empty string prefixes.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-14 16:34:36 +00:00
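A sketch of the `__init__` filtering under the rules stated above (helper form hypothetical; since another commit in this compare removes prefix values from log messages, only the entry's type is logged here):

```python
import logging
from typing import Any, Iterable, Optional

logger = logging.getLogger(__name__)

def _filter_prefixes(api_key_prefixes: Optional[Iterable[Any]]) -> list[str]:
    """Keep only non-empty string prefixes; warn on everything else."""
    valid: list[str] = []
    for prefix in api_key_prefixes or []:
        if not isinstance(prefix, str) or not prefix:
            # Empty strings match every Bearer token; non-strings raise
            # TypeError inside str.startswith().
            logger.warning("Ignoring invalid API key prefix of type %s", type(prefix))
            continue
        valid.append(prefix)
    return valid
```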
Amin Ghadersohi
5bb315591b fix(mcp): fix stale patch target in auth tests and update stale docstring
_mock_sm_ctx now sets find_user_with_relationships.return_value = None so
JWT/dev-user lookups that delegate through the (now refactored)
load_user_with_relationships → security_manager.find_user_with_relationships
path behave as "user not found" in unit tests that don't patch the DB — matching
the behavior of the previous direct db.session.query() implementation.

Without this, tests that expected ValueError("not found") received a truthy
MagicMock() from the unspecified mock method, causing them to fail.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-13 23:04:02 +00:00
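The corresponding one-line change inside the helper, in isolation (mock name from the commits above):

```python
from unittest.mock import MagicMock

mock_sm = MagicMock()
# Mirror the previous direct db.session.query() behavior: no user row found.
mock_sm.find_user_with_relationships.return_value = None
```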
Amin Ghadersohi
d8db1c9230 refactor(mcp): delegate load_user_with_relationships to SecurityManager.find_user_with_relationships
Fixes a gap identified in code review: the standalone load_user_with_relationships()
in auth.py duplicated SecurityManager.find_user() logic but dropped two FAB behaviors:
- auth_username_ci (case-insensitive username lookup)
- MultipleResultsFound guard (username uniqueness not guaranteed at DB level in all FAB versions)
It also hard-coded User/Group models instead of sm.user_model.

Changes:
- Add SupersetSecurityManager.find_user_with_relationships() to security/manager.py,
  mirroring FAB's find_user() (auth_username_ci, MultipleResultsFound handling,
  self.user_model) and adding eager loading of roles and group.roles via joinedload
- Simplify load_user_with_relationships() in auth.py to a thin delegate to the
  new method, removing the duplicated query logic and raw Group/User imports
- Add regression test asserting find_user_with_relationships() exists on the SM

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-13 22:10:36 +00:00
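A rough sketch of the thin delegate (the import path for the global security manager and the exact error message are assumptions):

```python
from superset.extensions import security_manager  # global SM, not app.appbuilder.sm

def load_user_with_relationships(username: str):
    user = security_manager.find_user_with_relationships(username)
    if user is None:
        raise ValueError(f"User '{username}' not found")
    return user
```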
Amin Ghadersohi
202b19951a fix(mcp): harden auth — PermissionError propagation, passthrough client_id guard, fail-closed on missing token
- _tool_allowed_for_current_user (server.py): catch PermissionError
  alongside ValueError so invalid API keys return False instead of
  propagating through the tool-search permission filter
- _setup_user_context (auth.py): catch PermissionError alongside
  ValueError so g.user is cleared and the error is logged consistently
  regardless of which failure type get_user_from_request() raises
- _resolve_user_from_api_key (auth.py): require client_id=="api_key"
  (set by CompositeTokenVerifier) in addition to API_KEY_PASSTHROUGH_CLAIM
  to prevent an external IdP JWT that happens to include the claim name
  from being misclassified as an API-key pass-through (DoS vector)
- _resolve_user_from_jwt_context (auth.py): same client_id guard so
  a rogue-claim JWT continues through JWT resolution instead of deferring
  to the API-key path (which would raise PermissionError for the user)
- _resolve_user_from_api_key (auth.py): raise PermissionError (not
  return None) when the pass-through claim is present but the raw token
  is absent — fail closed rather than falling through to weaker auth
- Tests: set client_id="api_key" on _passthrough_access_token helper;
  update test_jwt_context_with_api_key_passthrough_returns_none docstring;
  add test for namespaced claim on non-API-key client_id being ignored

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-05-13 21:26:42 +00:00
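A condensed sketch of the two `_resolve_user_from_api_key` guards (AccessToken attribute access and error messages assumed; the claim name comes from a commit further down this list):

```python
API_KEY_PASSTHROUGH_CLAIM = "_superset_mcp_api_key_passthrough"

def _resolve_user_from_api_key(access_token):
    claims = getattr(access_token, "claims", None) or {}
    if API_KEY_PASSTHROUGH_CLAIM not in claims:
        return None  # not an API-key token at all
    if getattr(access_token, "client_id", None) != "api_key":
        return None  # external IdP JWT carrying the claim name: ignore it
    raw_key = claims[API_KEY_PASSTHROUGH_CLAIM]
    if not raw_key:
        # Fail closed rather than falling through to weaker auth.
        raise PermissionError("API-key pass-through claim without a raw token")
    ...  # validate raw_key against the stored FAB API keys
```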
Amin Ghadersohi
d675a97686 refactor(mcp): extract duplicated app context + sm setup into helper
Add _mock_sm_ctx() context manager to eliminate repeated boilerplate
(g.user = None / app.appbuilder = MagicMock() / appbuilder.sm = mock_sm)
across seven API key auth unit tests.
2026-05-13 19:57:41 +00:00
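Plausible shape of the helper, given the boilerplate it replaces (argument names assumed):

```python
from contextlib import contextmanager
from unittest.mock import MagicMock
from flask import g

@contextmanager
def _mock_sm_ctx(app, mock_sm: MagicMock):
    with app.app_context():
        g.user = None
        app.appbuilder = MagicMock()
        app.appbuilder.sm = mock_sm
        yield mock_sm
```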
Amin Ghadersohi
ba97a29468 fix(mcp): fix stale patch target in auth tests and update stale docstring
- Use superset.mcp_service.auth.has_request_context as patch target in
  test_mcp_auth_hook_clears_stale_g_user tests; patching flask.has_request_context
  has no effect on the module-level import already bound in auth.py
- Update test_jwt_access_token_skips_api_key_auth docstring to reference
  API_KEY_PASSTHROUGH_CLAIM instead of the legacy _api_key_passthrough name
- Add noqa: BLE001 to broad exception catch in mcp_config.py to document
  that the wide catch is intentional (JWT libs raise many types, secrets guard)
2026-05-13 06:34:46 +00:00
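The patch-target fix in miniature: the name bound at import time in auth.py is what must be patched, not the flask original.

```python
from unittest.mock import patch

with patch(
    "superset.mcp_service.auth.has_request_context", return_value=False
):
    ...  # exercise the hook; the module-level binding is now intercepted
```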
Amin Ghadersohi
637f74d0d8 Potential fix for pull request finding
Co-authored-by: Copilot Autofix powered by AI <175728472+Copilot@users.noreply.github.com>
2026-05-08 16:04:09 -04:00
Amin Ghadersohi
5649b28495 refactor(mcp): hoist JWT verifier imports to module top
DetailedJWTVerifier and JWTVerifier have no circular-import or optional-
dependency reason to be imported inline — fastmcp is already pulled in
at module top via composite_token_verifier, and authlib is already a
hard dependency. Move them up for consistency with the rest of the
module's imports.
2026-05-08 14:57:06 -04:00
Amin Ghadersohi
aaabec317e fix(security): drop redundant explicit ApiKey perm creation
``superset init`` calls ``appbuilder.add_permissions(update_perms=True)``
before ``sync_role_definitions()`` (cli/main.py:84), which forces FAB to
walk all registered baseviews — including ``ApiKeyApi`` (registered when
``FAB_API_KEY_ENABLED=True``) — and create their PVMs via
``add_permissions_view``. The explicit ``add_permission_view_menu`` calls
in ``create_custom_permissions`` were redundant.

With ``"ApiKey"`` already in ``ADMIN_ONLY_VIEW_MENUS``, the role
predicate ``_is_admin_only`` gates the auto-created PVMs to Admin.

Per Daniel Gaspar's review: "Adding ApiKey to ADMIN_ONLY_VIEW_MENUS
should just work when FAB_API_KEY_ENABLED is True".
2026-05-08 14:47:58 -04:00
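An abridged illustration of that gating (set contents and predicate simplified from superset/security/manager.py):

```python
ADMIN_ONLY_VIEW_MENUS = {
    "ApiKey",  # FAB auto-creates ApiKeyApi PVMs; this keeps them Admin-only
    # ... the rest of Superset's admin-only view menus
}

def _is_admin_only(pvm) -> bool:  # simplified sketch of the role predicate
    return pvm.view_menu.name in ADMIN_ONLY_VIEW_MENUS
```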
Amin Ghadersohi
9ece5f42d7 refactor(mcp): hoist API key auth imports to module top
The API_KEY_PASSTHROUGH_CLAIM constant in auth.py and CompositeTokenVerifier
in mcp_config.py have no circular-import or optional-dependency reason to
be imported inline. Moved them to module top.
2026-05-08 14:36:16 -04:00
Amin Ghadersohi
76ad5e1bf7 fix(mcp): validate API keys via FastMCP AccessToken and lock down ApiKey perms
Three independent bugs let MCP requests presenting Bearer tokens with the
sst_ prefix authenticate as MCP_DEV_USERNAME without any validation under
streamable-http:

1. _resolve_user_from_api_key read the token from flask.request.headers,
   but the streamable-http transport never pushes a Flask request context
   — has_request_context() was always False, so the function returned
   None before validating, falling through to the dev-user fallback.
   Now reads the token from FastMCP's per-request AccessToken (which the
   CompositeTokenVerifier already populated) and fails closed when the
   key is invalid.

2. CompositeTokenVerifier was only installed when MCP_AUTH_ENABLED=True.
   With FAB_API_KEY_ENABLED=True alone, no transport-level verifier
   existed at all. The factory now builds an API-key-only verifier in
   that case (jwt_verifier=None) that rejects non-API-key Bearer tokens
   at the transport instead of silently accepting them.

3. The pass-through AccessToken was minted with scopes=[], which would
   make FastMCP's RequireAuthMiddleware 403 every API-key request when
   MCP_REQUIRED_SCOPES is non-empty. Pass-through now propagates
   self.required_scopes.

Also addresses Daniel's review comment on superset/security/manager.py:
adds "ApiKey" to ADMIN_ONLY_VIEW_MENUS so the FAB ApiKeyApi PVMs are
gated to Admin instead of leaking to Alpha and Gamma.

Renames the pass-through claim from _api_key_passthrough to the
namespaced _superset_mcp_api_key_passthrough (exported as
API_KEY_PASSTHROUGH_CLAIM) so a custom claim from an external IdP can't
accidentally divert a JWT into the API-key validation path.

Tests updated to mock get_access_token instead of app.test_request_context
(the simulated Flask context was the reason the prior tests passed while
production failed). New tests cover API-key-only verifier mode, scope
propagation on pass-through, and the namespaced-claim isolation.
2026-05-08 14:26:05 -04:00
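A sketch of the pass-through minting after fixes (1) and (3); the AccessToken import path and field names are assumptions, not quotes from the patch:

```python
from fastmcp.server.auth import AccessToken  # import path assumed

API_KEY_PASSTHROUGH_CLAIM = "_superset_mcp_api_key_passthrough"

def mint_passthrough(raw_token: str, required_scopes: list[str]) -> AccessToken:
    return AccessToken(
        token=raw_token,
        client_id="api_key",  # marker later required by the Flask-level auth
        scopes=list(required_scopes or []),  # was scopes=[]; fix (3)
        claims={API_KEY_PASSTHROUGH_CLAIM: raw_token},
    )
```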
Amin Ghadersohi
38b9ea5484 fix(mcp): remove prefixes from log to satisfy CodeQL
Remove API key prefixes from log message to avoid CodeQL
false positive about clear-text logging of sensitive data.
2026-05-08 11:42:45 -04:00
Amin Ghadersohi
135579b8e1 fix(mcp): add type annotations to test fixtures and parameters
Address code review feedback: add explicit type annotations
to all new test function parameters and fixture return types.
2026-05-08 11:42:45 -04:00
Amin Ghadersohi
b9aee62f5f fix(mcp): wire composite verifier and add ApiKey permission sync
Wire CompositeTokenVerifier into create_default_mcp_auth_factory,
add _api_key_passthrough detection in _resolve_user_from_jwt_context,
create ApiKey permissions in create_custom_permissions, and update
test_auth_api_key with pass-through and non-matching prefix tests.
2026-05-08 11:42:45 -04:00
Amin Ghadersohi
7ad0b5e3f8 fix(mcp): create ApiKey permissions on init and support API keys with JWT auth
Two fixes for MCP API key authentication:

1. superset init now creates ApiKey FAB permissions (can_list, can_create,
   can_get, can_delete) when FAB_API_KEY_ENABLED=True. Previously, because
   Superset uses AppBuilder(update_perms=False), FAB skipped permission
   creation during blueprint registration and superset init never picked
   them up, causing 403 errors on /api/v1/security/api_keys/.

2. CompositeTokenVerifier allows API key tokens (e.g. sst_...) to coexist
   with JWT auth on the MCP transport layer. Previously, when
   MCP_AUTH_ENABLED=True, the JWTVerifier rejected all non-JWT Bearer
   tokens at the transport layer before they could reach the Flask-level
   _resolve_user_from_api_key() handler. The composite verifier detects
   API key prefixes and passes them through with a marker claim, letting
   the existing auth priority chain handle validation.
2026-05-08 11:42:45 -04:00
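A rough sketch of the composite detection described in (2); the async verify_token signature follows the MCP TokenVerifier protocol, and mint_passthrough is the hypothetical helper sketched under an earlier commit:

```python
class CompositeTokenVerifier:  # abridged sketch, not the real class body
    def __init__(self, api_key_prefixes, jwt_verifier=None, required_scopes=None):
        self.api_key_prefixes = api_key_prefixes
        self.jwt_verifier = jwt_verifier
        self.required_scopes = required_scopes or []

    async def verify_token(self, token: str):
        if any(token.startswith(p) for p in self.api_key_prefixes):
            # Pass through with a marker claim; the Flask-level auth chain
            # does the real validation against stored FAB API keys.
            return mint_passthrough(token, self.required_scopes)
        if self.jwt_verifier is None:
            return None  # API-key-only mode rejects non-API-key Bearer tokens
        return await self.jwt_verifier.verify_token(token)
```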
664 changed files with 24712 additions and 94542 deletions

.github/CODEOWNERS

@@ -36,10 +36,6 @@
**/*.geojson @villebro @rusackas
/superset-frontend/plugins/legacy-plugin-chart-country-map/ @villebro @rusackas
# Notify translation maintainers of changes to translations
/superset/translations/ @sfirke
# Notify PMC members of changes to extension-related files
/docs/developer_portal/extensions/ @michael-s-molina @villebro @rusackas

.github/labeler.yml

@@ -77,11 +77,6 @@
- any-glob-to-any-file:
- 'superset/translations/zh/**'
"i18n:czech":
- changed-files:
- any-glob-to-any-file:
- 'superset/translations/cs/**'
"i18n:traditional-chinese":
- changed-files:
- any-glob-to-any-file:
@@ -127,11 +122,6 @@
- any-glob-to-any-file:
- 'superset/translations/sk/**'
"i18n:latvian":
- changed-files:
- any-glob-to-any-file:
- 'superset/translations/lv/**'
"i18n:ukrainian":
- changed-files:
- any-glob-to-any-file:


@@ -59,15 +59,6 @@ build-assets() {
say "::endgroup::"
}
build-embedded-sdk() {
cd "$GITHUB_WORKSPACE/superset-embedded-sdk"
say "::group::Build embedded SDK bundle for E2E tests"
npm ci
npm run build
say "::endgroup::"
}
build-instrumented-assets() {
cd "$GITHUB_WORKSPACE/superset-frontend"
@@ -136,20 +127,6 @@ playwright_testdata() {
superset load_test_users
superset load_examples
superset init
# Enable DML on the examples database so Playwright tests can create/drop
# temporary tables via SQL Lab without depending on external data sources.
superset shell <<'PYEOF'
import sys
from superset.extensions import db
from superset.models.core import Database
examples_db = db.session.query(Database).filter_by(database_name='examples').first()
if not examples_db:
sys.exit('ERROR: examples database not found. load_examples may have failed.')
examples_db.allow_dml = True
db.session.commit()
print('Enabled allow_dml on examples database')
PYEOF
say "::endgroup::"
}


@@ -58,7 +58,7 @@ jobs:
- name: Login to Amazon ECR
if: steps.describe-services.outputs.active == 'true'
id: login-ecr
uses: aws-actions/amazon-ecr-login@fa648b43de3d4d023bcb3f89ed6940096949c419 # v2
uses: aws-actions/amazon-ecr-login@19d944daaa35f0fa1d3f7f8af1d3f2e5de25c5b7 # v2
- name: Delete ECR image tag
if: steps.describe-services.outputs.active == 'true'


@@ -199,7 +199,7 @@ jobs:
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@fa648b43de3d4d023bcb3f89ed6940096949c419 # v2
uses: aws-actions/amazon-ecr-login@19d944daaa35f0fa1d3f7f8af1d3f2e5de25c5b7 # v2
- name: Load, tag and push image to ECR
id: push-image
@@ -235,7 +235,7 @@ jobs:
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@fa648b43de3d4d023bcb3f89ed6940096949c419 # v2
uses: aws-actions/amazon-ecr-login@19d944daaa35f0fa1d3f7f8af1d3f2e5de25c5b7 # v2
- name: Check target image exists in ECR
id: check-image
@@ -265,7 +265,7 @@ jobs:
- name: Fill in the new image ID in the Amazon ECS task definition
id: task-def
uses: aws-actions/amazon-ecs-render-task-definition@6853cfae8c3a7d978fbf68b5a55453395541dfbb # v1
uses: aws-actions/amazon-ecs-render-task-definition@77954e213ba1f9f9cb016b86a1d4f6fcdea0d57e # v1
with:
task-definition: .github/workflows/ecs-task-definition.json
container-name: superset-ci
@@ -300,7 +300,7 @@ jobs:
--tags key=pr,value=$PR_NUMBER key=github_user,value=${{ github.actor }}
- name: Deploy Amazon ECS task definition
id: deploy-task
uses: aws-actions/amazon-ecs-deploy-task-definition@a310a830f5c14e583e35d84e4e1ec7dd177c3c9c # v2
uses: aws-actions/amazon-ecs-deploy-task-definition@fc8fc60f3a60ffd500fcb13b209c59d221ac8c8c # v2
with:
task-definition: ${{ steps.task-def.outputs.task-definition }}
service: pr-${{ github.event.inputs.issue_number || github.event.pull_request.number }}-service


@@ -17,16 +17,6 @@ on:
workflow_dispatch: {}
# Serialize deploys: the action pushes to apache/superset-site without
# rebasing, so concurrent runs race on the final push and the loser fails
# with `! [rejected] asf-site -> asf-site (fetch first)`. Cancel any
# in-progress run as soon as a newer one starts — the destination repo
# isn't touched until the final push step, so canceling mid-build is safe,
# and the freshest content always wins.
concurrency:
group: docs-deploy-asf-site
cancel-in-progress: true
jobs:
config:
runs-on: ubuntu-24.04
@@ -80,7 +70,7 @@ jobs:
yarn install --check-cache
- name: Download database diagnostics (if triggered by integration tests)
if: github.event_name == 'workflow_run' && github.event.workflow_run.conclusion == 'success'
uses: dawidd6/action-download-artifact@b6e2e70617bc3265edd6dab6c906732b2f1ae151 # v21
uses: dawidd6/action-download-artifact@8305c0f1062bb0d184d09ef4493ecb9288447732 # v20
continue-on-error: true
with:
workflow: superset-python-integrationtest.yml
@@ -89,7 +79,7 @@ jobs:
path: docs/src/data/
- name: Try to download latest diagnostics (for push/dispatch triggers)
if: github.event_name != 'workflow_run'
uses: dawidd6/action-download-artifact@b6e2e70617bc3265edd6dab6c906732b2f1ae151 # v21
uses: dawidd6/action-download-artifact@8305c0f1062bb0d184d09ef4493ecb9288447732 # v20
continue-on-error: true
with:
workflow: superset-python-integrationtest.yml


@@ -111,7 +111,7 @@ jobs:
run: |
yarn install --check-cache
- name: Download database diagnostics from integration tests
uses: dawidd6/action-download-artifact@b6e2e70617bc3265edd6dab6c906732b2f1ae151 # v21
uses: dawidd6/action-download-artifact@8305c0f1062bb0d184d09ef4493ecb9288447732 # v20
with:
workflow: superset-python-integrationtest.yml
run_id: ${{ github.event.workflow_run.id }}


@@ -169,7 +169,6 @@ jobs:
PYTHONPATH: ${{ github.workspace }}
REDIS_PORT: 16379
GITHUB_TOKEN: ${{ github.token }}
SUPERSET_FEATURE_EMBEDDED_SUPERSET: "true"
services:
postgres:
image: postgres:17-alpine
@@ -240,11 +239,6 @@ jobs:
uses: ./.github/actions/cached-dependencies
with:
run: build-instrumented-assets
- name: Build embedded SDK
if: steps.check.outputs.python || steps.check.outputs.frontend
uses: ./.github/actions/cached-dependencies
with:
run: build-embedded-sdk
- name: Install Playwright
if: steps.check.outputs.python || steps.check.outputs.frontend
uses: ./.github/actions/cached-dependencies


@@ -43,7 +43,6 @@ jobs:
PYTHONPATH: ${{ github.workspace }}
REDIS_PORT: 16379
GITHUB_TOKEN: ${{ github.token }}
SUPERSET_FEATURE_EMBEDDED_SUPERSET: "true"
services:
postgres:
image: postgres:17-alpine
@@ -114,11 +113,6 @@ jobs:
uses: ./.github/actions/cached-dependencies
with:
run: build-instrumented-assets
- name: Build embedded SDK
if: steps.check.outputs.python || steps.check.outputs.frontend
uses: ./.github/actions/cached-dependencies
with:
run: build-embedded-sdk
- name: Install Playwright
if: steps.check.outputs.python || steps.check.outputs.frontend
uses: ./.github/actions/cached-dependencies


@@ -54,7 +54,6 @@ jobs:
SUPERSET_SECRET_KEY: not-a-secret
run: |
pytest --durations-min=0.5 --cov=superset/sql/ ./tests/unit_tests/sql/ --cache-clear --cov-fail-under=100
pytest --durations-min=0.5 --cov=superset/semantic_layers/ ./tests/unit_tests/semantic_layers/ --cache-clear --cov-fail-under=100
- name: Upload code coverage
uses: codecov/codecov-action@57e3a136b779b570ffcdbf80b3bdc90e7fab3de2 # v5
with:


@@ -56,33 +56,8 @@ def verify_sha512(filename: str) -> str:
# Part 2: Verify RSA key - this is the same as running `gpg --verify {release}.asc {release}` and comparing the RSA key and email address against the KEYS file # noqa: E501
KEYS_URL = "https://downloads.apache.org/superset/KEYS"
def ensure_keys_imported() -> None:
"""Import the Apache Superset KEYS file into the local GPG keyring.
Without this, `gpg --verify` returns "No public key" and the signature
cannot actually be verified — only the key ID in the signature metadata
is visible.
"""
try:
keys = requests.get(KEYS_URL, timeout=30)
except requests.RequestException as exc:
print(f"Warning: could not fetch KEYS file for import: {exc}")
return
if keys.status_code != 200:
print(f"Warning: could not fetch KEYS file (HTTP {keys.status_code})")
return
subprocess.run( # noqa: S603
["gpg", "--import"], # noqa: S607
input=keys.content,
capture_output=True,
)
def get_gpg_info(filename: str) -> tuple[Optional[str], Optional[str]]:
"""Run the GPG verify command and extract RSA/EDDSA key and email address."""
"""Run the GPG verify command and extract RSA key and email address."""
asc_filename = filename + ".asc"
result = subprocess.run( # noqa: S603
["gpg", "--verify", asc_filename, filename], # noqa: S607
@@ -90,50 +65,25 @@ def get_gpg_info(filename: str) -> tuple[Optional[str], Optional[str]]:
)
output = result.stderr.decode()
# If no public key was available, import KEYS and retry so that
# `Good signature from "Name <email>"` appears in the output.
if "No public key" in output:
ensure_keys_imported()
result = subprocess.run( # noqa: S603
["gpg", "--verify", asc_filename, filename], # noqa: S607
capture_output=True, # noqa: S607
)
output = result.stderr.decode()
rsa_key = re.search(r"RSA key ([0-9A-F]+)", output)
eddsa_key = re.search(r"EDDSA key ([0-9A-F]+)", output)
# Try multiple patterns — `Good signature from` is the most reliable
# source of the email; `issuer` is a fallback for older gpg output.
email_patterns = (
r'Good signature from ".*?<([^>]+)>"',
r'aka ".*?<([^>]+)>"',
r'issuer "([^"]+)"',
)
email_result: Optional[str] = None
for pattern in email_patterns:
match = re.search(pattern, output)
if match:
email_result = match.group(1)
break
email = re.search(r'issuer "([^"]+)"', output)
rsa_key_result = rsa_key.group(1) if rsa_key else None
eddsa_key_result = eddsa_key.group(1) if eddsa_key else None
email_result = email.group(1) if email else None
key_result = rsa_key_result or eddsa_key_result
# Debugging:
if key_result:
print("RSA or EDDSA Key found")
else:
print("Warning: No RSA or EDDSA key found in GPG verification output.")
if email_result:
print(f"Email found: {email_result}")
print("email found")
else:
print("Warning: No email address found in GPG verification output.")
if "No public key" in output:
print(
"Hint: public key is not in your keyring. Import it with:\n"
f" curl -s {KEYS_URL} | gpg --import"
)
return key_result, email_result


@@ -58,10 +58,6 @@ categories:
url: https://www.ontruck.com/
Financial Services:
- name: Aadhar Housing Finance Limited
url: https://www.aadharhousing.com
contributors: ["@thakerhardiks"]
- name: Aktia Bank plc
url: https://www.aktia.com


@@ -46,13 +46,6 @@ The Deck.gl MapBox chart's **Opacity**, **Default longitude**, **Default latitud
**To restore fit-to-data behavior:** Open the chart in Explore, clear the **Default longitude**, **Default latitude**, and **Zoom** fields in the Viewport section, and re-save the chart.
### Combined datasource list endpoint
Added a new combined datasource list endpoint at `GET /api/v1/datasource/` to serve datasets and semantic views in one response.
- The endpoint is available to users with at least one of `can_read` on `Dataset` or `SemanticView`.
- Semantic views are included only when the `SEMANTIC_LAYERS` feature flag is enabled.
- The endpoint enforces strict `order_column` validation and returns `400` for invalid sort columns.
### ClickHouse minimum driver version bump
The minimum required version of `clickhouse-connect` has been raised to `>=0.13.0`. If you are using the ClickHouse connector, please upgrade your `clickhouse-connect` package. The `_mutate_label` workaround that appended hash suffixes to column aliases has also been removed, as it is no longer needed with modern versions of the driver.


@@ -105,13 +105,7 @@ class CeleryConfig:
CELERY_CONFIG = CeleryConfig
FEATURE_FLAGS = {
"ALERT_REPORTS": True,
"DATASET_FOLDERS": True,
"ENABLE_EXTENSIONS": True,
"SEMANTIC_LAYERS": True,
}
EXTENSIONS_PATH = "/app/docker/extensions"
FEATURE_FLAGS = {"ALERT_REPORTS": True, "DATASET_FOLDERS": True}
ALERT_REPORTS_NOTIFICATION_DRY_RUN = True
WEBDRIVER_BASEURL = f"http://superset_app{os.environ.get('SUPERSET_APP_ROOT', '/')}/" # When using docker compose baseurl should be http://superset_nginx{ENV{BASEPATH}}/ # noqa: E501
# The base URL for the email report hyperlinks.


@@ -18,8 +18,6 @@
# Configuration for docker-compose-light.yml - disables Redis and uses minimal services
# Import all settings from the main config first
import os
from flask_caching.backends.filesystemcache import FileSystemCache
from superset_config import * # noqa: F403
@@ -38,31 +36,3 @@ THUMBNAIL_CACHE_CONFIG = CACHE_CONFIG
# Disable Celery entirely for lightweight mode
CELERY_CONFIG = None # type: ignore[assignment,misc]
# Honor SUPERSET_FEATURE_<NAME> env vars on top of any flags inherited from
# superset_config. Lets local dev/e2e enable features (e.g. EMBEDDED_SUPERSET)
# without editing shipped config files. Only the literal string "true"
# (case-insensitive) is treated as enabled — "1"/"yes"/"on" are not, matching
# the strict-string convention used elsewhere in Superset's env parsing.
FEATURE_FLAGS = {
**FEATURE_FLAGS, # noqa: F405
**{
name[len("SUPERSET_FEATURE_") :]: value.strip().lower() == "true"
for name, value in os.environ.items()
if name.startswith("SUPERSET_FEATURE_")
},
}
# Disable Talisman so /embedded/<uuid> doesn't return X-Frame-Options:SAMEORIGIN.
# Without this, browsers refuse to render Superset inside an iframe from a
# different origin (i.e. the embedded SDK use case). Production/CI configures
# Talisman with explicit `frame-ancestors`; for the lightweight local stack we
# just turn it off.
TALISMAN_ENABLED = False
# Guest tokens (used by the embedded SDK) inherit the "Public" role's perms.
# Out of the box Public has zero perms, so embedded dashboards immediately fail
# their first call (`/api/v1/me/roles/`) with 403. Mirror Public to Gamma —
# the standard read-only viewer role — so the embedded flow can authenticate
# and load dashboard data in local dev.
PUBLIC_ROLE_LIKE = "Gamma"


@@ -81,87 +81,6 @@ SLACK_CACHE_TIMEOUT = int(timedelta(days=2).total_seconds())
SLACK_API_RATE_LIMIT_RETRY_COUNT = 5
```
### Webhook integration
Superset can send alert and report notifications to any HTTP endpoint — useful for chat platforms, incident management tools, or custom automation.
#### Enabling Webhooks
Enable the feature flag in `superset_config.py`:
```python
FEATURE_FLAGS = {
"ALERT_REPORTS": True,
"ALERT_REPORT_WEBHOOK": True,
}
```
#### Configuring a Webhook Recipient
When creating or editing an alert or report, select **Webhook** as the notification method and enter your endpoint URL.
#### Payload Format
Superset sends an HTTP POST with `Content-Type: application/json`:
```json
{
"name": "My Alert",
"header": {
"notification_format": "JSON",
"notification_type": "Alert",
"notification_source": "Alert",
"chart_id": 42,
"dashboard_id": null
},
"text": "Alert condition met: value exceeded threshold",
"description": "Monthly revenue dropped below target",
"url": "https://your-superset-host/superset/dashboard/1/"
}
```
When a report includes file attachments (CSV, PDF, or PNG screenshots), the request is sent as `multipart/form-data` instead. In that case, each top-level payload field (`name`, `text`, `description`, `url`) becomes its own form field, and nested structures like `header` are serialized as a JSON-encoded string in their own field. Every attachment is added as a repeated form field named `files`:
```
POST /webhook HTTP/1.1
Content-Type: multipart/form-data; boundary=...
--...
Content-Disposition: form-data; name="name"
My Alert
--...
Content-Disposition: form-data; name="header"
{"notification_format": "JSON", "notification_type": "Alert", ...}
--...
Content-Disposition: form-data; name="text"
Alert condition met: value exceeded threshold
--...
Content-Disposition: form-data; name="files"; filename="report.csv"
Content-Type: text/csv
<file bytes>
--...
```
Webhook consumers should branch on `Content-Type`: parse the body as JSON when `application/json`, or read the individual form fields (decoding `header` as JSON) when `multipart/form-data`.
#### HTTPS Enforcement
To require HTTPS webhook URLs (recommended for production), set:
```python
ALERT_REPORTS_WEBHOOK_HTTPS_ONLY = True
```
When enabled, Superset rejects webhook configurations that use `http://` URLs.
#### Retry Behavior
Superset automatically retries webhook deliveries on `429 Too Many Requests` and `5xx` server errors using exponential backoff.
### Kubernetes-specific
- You must have a `celery beat` pod running. If you're using the chart included in the GitHub repository under [helm/superset](https://github.com/apache/superset/tree/master/helm/superset), you need to put `supersetCeleryBeat.enabled = true` in your values override.


@@ -138,33 +138,14 @@ THUMBNAIL_CACHE_CONFIG = init_thumbnail_cache
```
Using the above example cache keys for dashboards will be `superset_thumb__dashboard__{ID}`. You can
override the base URL for Selenium using:
override the base URL for selenium using:
```
WEBDRIVER_BASEURL = "https://superset.company.com"
```
To control which user account is used for rendering thumbnails and warming up caches, configure
`THUMBNAIL_EXECUTORS` and `CACHE_WARMUP_EXECUTORS`. Each accepts a list of executor types (which
resolve to an owner, creator, modifier, or the currently-logged-in user) and/or a `FixedExecutor`
pinned to a specific username. By default, thumbnails render as the current user
(`ExecutorType.CURRENT_USER`) and cache warmup runs as the chart/dashboard owner
(`ExecutorType.OWNER`).
To force both to run as a dedicated service account (`admin` in this example):
```python
from superset.tasks.types import ExecutorType, FixedExecutor
THUMBNAIL_EXECUTORS = [FixedExecutor("admin")]
CACHE_WARMUP_EXECUTORS = [FixedExecutor("admin")]
```
Use a dedicated read-only service account here rather than a personal admin account, so that
thumbnail rendering and cache warmup tasks don't fail if a specific user's credentials change.
Additional Selenium WebDriver configuration can be set using `WEBDRIVER_CONFIGURATION`. You can
implement a custom function to authenticate Selenium. The default function uses the `flask-login`
Additional selenium web drive configuration can be set using `WEBDRIVER_CONFIGURATION`. You can
implement a custom function to authenticate selenium. The default function uses the `flask-login`
session cookie. Here's an example of a custom function signature:
```python
@@ -178,20 +159,6 @@ Then on configuration:
WEBDRIVER_AUTH_FUNC = auth_driver
```
## ETag Support for Thumbnails
Thumbnail and screenshot endpoints return `ETag` response headers based on the cached content digest. Clients can use conditional requests to avoid downloading unchanged images:
```
GET /api/v1/chart/42/thumbnail/
If-None-Match: "abc123..."
→ 304 Not Modified (if unchanged)
→ 200 OK (with new image if changed)
```
This is particularly useful for embedded dashboards and external integrations that periodically poll for updated screenshots — unchanged thumbnails return immediately with no payload.
## Distributed Coordination Backend
Superset supports an optional distributed coordination (`DISTRIBUTED_COORDINATION_CONFIG`) for


@@ -372,26 +372,6 @@ CUSTOM_SECURITY_MANAGER = CustomSsoSecurityManager
]
```
### PKCE Support
For public OAuth2 clients that cannot securely store a client secret, enable Proof Key for Code Exchange (PKCE) by adding `code_challenge_method` to the `remote_app` configuration:
```python
OAUTH_PROVIDERS = [
{
'name': 'myProvider',
'remote_app': {
'client_id': 'myClientId',
'client_secret': 'mySecret', # may be empty for pure public clients
'code_challenge_method': 'S256', # enables PKCE
'server_metadata_url': 'https://myAuthorizationServer/.well-known/openid-configuration'
}
}
]
```
PKCE (`S256`) is recommended for all OAuth2 flows, even when a client secret is present, as it protects against authorization code interception attacks.
## LDAP Authentication
FAB supports authenticating user credentials against an LDAP server.
@@ -472,38 +452,6 @@ FEATURE_FLAGS = {
A current list of feature flags can be found in the [Feature Flags](/admin-docs/configuration/feature-flags) documentation.
## Security Configuration
### HASH_ALGORITHM
Controls the hashing algorithm used for internal checksums and cache keys (thumbnails, cache keys, etc.). The default is `sha256`, which satisfies environments with stricter compliance requirements (e.g., FedRAMP). Set it to `md5` to retain the legacy behavior from older Superset deployments:
```python
HASH_ALGORITHM = "sha256" # default; set to "md5" for legacy behavior
```
A companion `HASH_ALGORITHM_FALLBACKS` list (default: `["md5"]`) lets UUID lookups fall back to older algorithms, which enables gradual migration without breaking existing entries. Set it to `[]` for strict mode (use only `HASH_ALGORITHM`).
:::note
This setting affects internal Superset operations only, not user passwords or authentication tokens. Changing it in an existing deployment may invalidate cached values but does not require a database migration.
:::
## SQL Lab Query History Pruning
SQL Lab query history is stored in the metadata database and is **not** pruned by default. To trim older rows, enable the `prune_query` Celery beat task by uncommenting it in `CELERY_BEAT_SCHEDULE` and choosing a retention window:
```python
CELERY_BEAT_SCHEDULE = {
"prune_query": {
"task": "prune_query",
"schedule": crontab(minute=0, hour=0, day_of_month=1),
"kwargs": {"retention_period_days": 180},
},
}
```
Adjust `retention_period_days` to control how long query rows are kept. Companion opt-in tasks (`prune_logs`, `prune_tasks`) exist for pruning the logs and tasks tables; see the commented-out examples in `superset/config.py`. Without enabling these tasks, the metadata database will grow unbounded over time.
:::resources
- [Blog: Feature Flags in Apache Superset](https://preset.io/blog/feature-flags-in-apache-superset-and-preset/)
:::


@@ -30,10 +30,6 @@ Superset's ZIP-based import/export also covers **dashboards**, **charts**, and *
| └── ... (more databases)
```
:::note
When you export a database connection, the `masked_encrypted_extra` field (used for sensitive connection parameters such as service account JSON, OAuth tokens, and other encrypted credentials) is included in the export. When importing on another instance, these values are decrypted and re-encrypted using the destination instance's `SECRET_KEY`. Ensure the receiving instance has a valid `SECRET_KEY` configured before importing.
:::
## Exporting Datasources to YAML
You can print your current datasources to stdout by running:


@@ -501,7 +501,7 @@ All MCP settings go in `superset_config.py`. Defaults are defined in `superset/m
| `MCP_SERVICE_URL` | `None` | Public base URL for MCP-generated links (set this when behind a reverse proxy) |
| `MCP_DEBUG` | `False` | Enable debug logging |
| `MCP_DEV_USERNAME` | -- | Superset username for development mode (no auth) |
| `MCP_RBAC_ENABLED` | `True` | Enforce Superset's role-based access control on MCP tool calls. When `True`, each tool checks that the authenticated user has the required FAB permission before executing. Disable only for testing or trusted-network deployments. |
| `MCP_PARSE_REQUEST_ENABLED` | `True` | Pre-parse MCP tool inputs from JSON strings into objects. Set to `False` for clients (Claude Desktop, LangChain) that do not double-serialize arguments — this produces cleaner tool schemas for those clients |
### Authentication
@@ -517,7 +517,6 @@ All MCP settings go in `superset_config.py`. Defaults are defined in `superset/m
| `MCP_REQUIRED_SCOPES` | `[]` | Required JWT scopes |
| `MCP_JWT_DEBUG_ERRORS` | `False` | Log detailed JWT errors server-side (never exposed in HTTP responses per RFC 6750) |
| `MCP_AUTH_FACTORY` | `None` | Custom auth provider factory `(flask_app) -> auth_provider`. Takes precedence over built-in JWT |
| `MCP_USER_RESOLVER` | `None` | Custom function `(app, access_token) -> username` to extract a Superset username from a validated JWT token. When `None`, the default resolver checks `preferred_username`, `username`, `email`, and `sub` claims in that order. |
### Response Size Guard
@@ -601,43 +600,6 @@ MCP_STORE_CONFIG = {
| `event_store_max_events` | `100` | Maximum events retained per session |
| `event_store_ttl` | `3600` | Event TTL in seconds |
### Tool Search
By default the MCP server exposes a lightweight tool-search interface instead of advertising every tool at once. This reduces the initial context sent to the LLM by ~70%, which lowers cost and latency. The AI client discovers tools on demand by calling `search_tools` and then invokes them via `call_tool`.
```python
MCP_TOOL_SEARCH_CONFIG = {
"enabled": True,
"strategy": "bm25", # "bm25" (natural language) or "regex"
"max_results": 5,
"always_visible": [ # Tools always listed (pinned)
"health_check",
"get_instance_info",
],
"search_tool_name": "search_tools",
"call_tool_name": "call_tool",
"include_schemas": False, # False=summary mode (name + parameters_hint)
"compact_schemas": True, # Strip $defs (only applies when include_schemas=True)
"max_description_length": 300,
}
```
| Key | Default | Description |
|-----|---------|-------------|
| `enabled` | `True` | Enable tool search. When `False`, all tools are listed upfront |
| `strategy` | `"bm25"` | Search ranking algorithm. `"bm25"` supports natural language; `"regex"` supports pattern matching |
| `max_results` | `5` | Maximum tools returned per search query |
| `always_visible` | See above | Tools that always appear in `list_tools`, regardless of search |
| `include_schemas` | `False` | When `False` (default, "summary mode"), search results omit `inputSchema` entirely and include a lightweight `parameters_hint` listing top-level parameter names. Set to `True` to include the full `inputSchema` in search results. Full schemas are always used when a tool is actually invoked via `call_tool`. |
| `compact_schemas` | `True` | Strip `$defs` / `$ref` and replace with `{"type": "object"}` in search results to reduce token cost. Only takes effect when `include_schemas=True` — ignored in summary mode. |
| `max_description_length` | `300` | Truncate tool descriptions in search results (0 = no truncation). Applies in both summary and full-schema modes. |
:::tip
Set `enabled: False` to revert to the traditional "show all tools at once" behavior, which some clients or workflows may prefer.
:::
Tool search reduces the initial token cost from ~1520K tokens (full catalog) down to ~45K tokens (pinned tools + search interface) — roughly 85% savings at the start of each conversation.
### Session & CSRF
These values are flat-merged into the Flask app config used by the MCP server process:
@@ -659,102 +621,6 @@ MCP_CSRF_CONFIG = {
---
## Access Control
### RBAC Enforcement
The MCP server respects Superset's full role-based access control (RBAC). Every authenticated user can only access the data and operations their Superset roles permit — the same rules that apply in the Superset UI apply through MCP.
Each tool declares one or more required FAB permissions. The table below maps tool groups to their permission requirements:
| Tool group | Required FAB permission |
|------------|------------------------|
| `list_charts`, `get_chart_info`, `get_chart_data`, `get_chart_preview`, `generate_chart`, `update_chart` | `can_read` on `Chart` (read), `can_write` on `Chart` (mutate) |
| `list_dashboards`, `get_dashboard_info`, `generate_dashboard`, `add_chart_to_existing_dashboard` | `can_read` on `Dashboard` (read), `can_write` on `Dashboard` (mutate) |
| `list_datasets`, `get_dataset_info`, `create_virtual_dataset` | `can_read` on `Dataset` (read), `can_write` on `Dataset` (mutate) |
| `list_databases`, `get_database_info` | `can_read` on `Database` |
| `execute_sql` | `can_execute_sql_query` on `SQLLab` |
| `open_sql_lab_with_context` | `can_read` on `SQLLab` |
| `save_sql_query` | `can_write` on `SavedQuery` |
| `health_check` | None (public) |
To disable RBAC checking globally (for trusted-network deployments or testing), set:
```python
# superset_config.py
MCP_RBAC_ENABLED = False
```
:::warning
Disabling RBAC removes all permission checks from MCP tool calls. Only do this on isolated, internal deployments where all MCP users are trusted admins.
:::
### Audit Log
All MCP tool calls are recorded in Superset's action log. You can view them at **Settings → Action Log** (admin only). Each log entry records:
- The tool name (e.g., `mcp.generate_chart.db_write`)
- The authenticated user
- A timestamp
This makes MCP activity fully auditable alongside regular Superset activity. The action log uses the same event logger as the rest of Superset, so existing log ingestion pipelines (e.g., sending logs to Elasticsearch or a SIEM) capture MCP events automatically.
### Middleware Pipeline
Every MCP request passes through a middleware stack before reaching the tool function. The default stack (assembled in `build_middleware_list()` in `server.py`) is:
| Middleware | Purpose | Default |
|------------|---------|---------|
| `StructuredContentStripperMiddleware` | Strips `structuredContent` from responses for Claude.ai bridge compatibility | Enabled |
| `LoggingMiddleware` | Logs each tool call with user, parameters, and duration | Enabled |
| `GlobalErrorHandlerMiddleware` | Catches unhandled exceptions and sanitizes sensitive data before it reaches the client | Enabled |
| `ResponseSizeGuardMiddleware` | Estimates token count, warns at 80% of limit, blocks at limit | Enabled (configurable via `MCP_RESPONSE_SIZE_CONFIG`) |
| `ResponseCachingMiddleware` | Caches read-heavy tool responses (in-memory or Redis) | Disabled (enable via `MCP_CACHE_CONFIG`) |
Additional middleware classes (`RateLimitMiddleware`, `FieldPermissionsMiddleware`, `PrivateToolMiddleware`) are implemented in `superset/mcp_service/middleware.py` but are not added to the default pipeline. They are available for operators who want to layer them in via a custom startup path.
### Error Sanitization
The `GlobalErrorHandlerMiddleware` automatically redacts sensitive information from all error messages before they reach the LLM client. The following are replaced with generic messages:
- **Database connection strings** — replaced with a generic connection error message
- **API keys and tokens** — redacted from error traces
- **File system paths** — stripped to prevent information disclosure
- **IP addresses** — removed from error context
This ensures that a misconfigured database connection or an unexpected exception never leaks credentials or internal topology to the LLM or its users. All regex patterns used for redaction are bounded to prevent ReDoS attacks.
---
## Performance
### Connection Pooling
Each MCP server process maintains its own SQLAlchemy connection pool to the database. For multi-worker deployments, total open connections = **workers × pool size**.
```python
# superset_config.py
SQLALCHEMY_POOL_SIZE = 5
SQLALCHEMY_MAX_OVERFLOW = 10
SQLALCHEMY_POOL_TIMEOUT = 30
SQLALCHEMY_POOL_RECYCLE = 3600 # Recycle connections after 1 hour
```
For a 3-pod Kubernetes deployment with the defaults above, expect up to 3 × (5 + 10) = 45 connections. Size your database's `max_connections` accordingly.
### Response Caching
Enable response caching for read-heavy workloads (dashboards/datasets that don't change frequently). With the in-memory backend (default when `MCP_STORE_CONFIG` is disabled), caching is per-process. Use Redis-backed caching for consistent cache hits across multiple pods:
```python
MCP_CACHE_CONFIG = {"enabled": True, "call_tool_ttl": 3600}
MCP_STORE_CONFIG = {"enabled": True, "CACHE_REDIS_URL": "redis://redis:6379/0"}
```
Mutating tools (`generate_chart`, `update_chart`, `execute_sql`, `generate_dashboard`) are always excluded from caching regardless of this setting.
---
## Troubleshooting
### Server won't start


@@ -84,35 +84,6 @@ THEME_DARK = {
# - OS preference detection is automatically enabled
```
### App Branding
The application name shown in the browser title bar and navigation can be
set via the `brandAppName` theme token:
```python
THEME_DEFAULT = {
"token": {
"brandAppName": "Acme Analytics",
# ... other tokens
}
}
```
Or in the theme CRUD UI JSON editor:
```json
{
"token": {
"brandAppName": "Acme Analytics"
}
}
```
The existing `APP_NAME` Python config key continues to work for backward compatibility.
`brandAppName` takes precedence when both are set, and allows different themes to carry different brand names.
Email and alert/report notification subjects are driven by backend settings such as
`EMAIL_REPORTS_SUBJECT_PREFIX` and `APP_NAME`, not by this theme token.
### Migration from Configuration to UI
When `ENABLE_UI_THEME_ADMINISTRATION = True`:
@@ -122,17 +93,6 @@ When `ENABLE_UI_THEME_ADMINISTRATION = True`:
3. Administrators can change system themes without restarting Superset
4. Configuration file themes serve as fallbacks when no UI themes are set
### Theme Validation and Fallback
Superset validates theme JSON when it is saved, either through the UI or via configuration. If a theme contains invalid tokens or an unrecognized structure, Superset logs a warning and falls back to the built-in default theme rather than applying a broken configuration. This prevents a bad theme from rendering the application unusable.
The fallback order is:
1. **UI-configured system theme** (highest priority, if `ENABLE_UI_THEME_ADMINISTRATION = True`)
2. **`THEME_DEFAULT` / `THEME_DARK`** from `superset_config.py`
3. **Built-in Superset default theme** (always present as a safety net)
If you see unexpected styling after a config change, check the Superset server logs for theme validation warnings.
### Copying Themes Between Systems
To export a theme for use in configuration files or another instance:
@@ -154,11 +114,7 @@ Superset supports custom fonts through the theme configuration, allowing you to
### Default Fonts
By default, Superset uses **Inter** for UI text and **IBM Plex Mono** for code (SQL editors, JSON fields, and other monospace contexts). Both fonts are bundled with the application via `@fontsource` packages and work offline without any external network calls.
:::note
IBM Plex Mono replaced Fira Code as the default code font in Superset 6.1. If you have an existing theme that explicitly sets `fontFamilyCode: "Fira Code, ..."`, you may want to update it.
:::
By default, Superset uses Inter and Fira Code fonts which are bundled with the application via `@fontsource` packages. These fonts work offline and require no external network calls.
### Configuring Custom Fonts
@@ -356,25 +312,11 @@ Available chart types for `echartsOptionsOverridesByChartType`:
- `echarts_heatmap` - Heatmaps
- `echarts_mixed_timeseries` - Mixed time series
### Array Property Overrides
Array properties (such as color palettes) are fully supported in overrides. Arrays are **replaced entirely** rather than merged, so specify the complete array:
```python
THEME_DEFAULT = {
"token": { ... },
"echartsOptionsOverrides": {
# Replace the default color palette for all ECharts visualizations
"color": ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd", "#8c564b"]
}
}
```
### Best Practices
1. **Start with global overrides** for consistent styling across all charts
2. **Use chart-specific overrides** for unique requirements per visualization type
3. **Test thoroughly** as overrides use deep merge for objects, but arrays are completely replaced — always specify the full array value
3. **Test thoroughly** as overrides use deep merge - nested objects are combined, but arrays are completely replaced
4. **Document your overrides** to help team members understand custom styling
5. **Consider performance** - complex overrides may impact chart rendering speed


@@ -52,15 +52,6 @@ only see the objects that they have access to.
The **sql_lab** role grants access to SQL Lab. Note that while **Admin** users have access
to all databases by default, both **Alpha** and **Gamma** users need to be given access on a per database basis.
Beyond the base `sql_lab` role, two additional SQL Lab permissions must be explicitly granted for users who need these capabilities:
| Permission | Feature |
|------------|---------|
| `can_estimate_query_cost` on `SQLLab` | Estimate query cost before running |
| `can_format_sql` on `SQLLab` | Format SQL using the database's dialect |
Grant these in **Security → List Roles** by adding the permissions to the relevant role.
### Public
The **Public** role is the most restrictive built-in role, designed specifically for anonymous/unauthenticated
@@ -191,8 +182,6 @@ However, it is crucial to understand the following:
By combining Superset's configurable safeguards with strong database-level security practices, you can achieve a more robust and layered security posture.
**Dataset Sample Access**: The `get_samples()` endpoint now enforces datasource-level access control. Users can only fetch sample rows from datasets they have been explicitly granted access to — the same permission check applied when running chart queries. This closes a prior gap where unauthenticated or under-privileged access could retrieve sample data.
### REST API for user & role management
Flask-AppBuilder supports a REST API for user CRUD,
@@ -205,57 +194,6 @@ FAB_ADD_SECURITY_API = True
Once configured, the documentation for additional "Security" endpoints will be visible in Swagger for you to explore.
### API Key Authentication
Superset supports long-lived API keys for service accounts, CI/CD pipelines, and programmatic integrations (including MCP clients).
#### Enabling API Key Authentication
API key authentication is **disabled by default**. To turn it on, set the Flask-AppBuilder config value in `superset_config.py` and also enable the matching feature flag so the management UI is exposed:
```python
FAB_API_KEY_ENABLED = True
FEATURE_FLAGS = {
"FAB_API_KEY_ENABLED": True,
}
```
The config value registers the `ApiKeyApi` blueprint on the backend; the feature flag controls whether the UI for managing keys appears for the user. See the [Feature Flags](/admin-docs/configuration/feature-flags) documentation for more on feature flag configuration.
#### Creating an API Key
Once enabled, each user manages their own keys from their profile page:
1. Open the user menu (top-right) and click **Info** to navigate to the User Info page
2. Expand the **API Keys** section
3. Click **+ API Key**
4. Enter a name and (optionally) an expiration date
5. Copy the generated token — it is shown only once
Only users with the `can_read` and `can_write` permissions on `ApiKey` (granted by default to Admins) can manage API keys.
#### Using an API Key
Pass the key as a Bearer token in the `Authorization` header:
```
Authorization: Bearer <your-api-key>
```
This works for all REST API endpoints and the MCP server. The request is executed with the permissions of the user who created the key.
#### Use Cases
- **CI/CD pipelines** — automated chart/dashboard exports and imports
- **MCP integrations** — connect AI assistants without interactive login
- **External services** — dashboards embedded in other applications
- **Service accounts** — long-lived credentials that don't expire with session cookies
:::caution
Store API keys securely. Anyone with a valid key can make requests on behalf of the creating user. Revoke keys promptly if they are compromised by deleting them from the **API Keys** section of your User Info page.
:::
### Customizing Permissions
The permissions exposed by FAB are very granular and allow for a great level of


@@ -485,7 +485,7 @@ Frontend assets (TypeScript, JavaScript, CSS, and images) must be compiled in or
First, be sure you are using the following versions of Node.js and npm:
- `Node.js`: Version 22 (LTS)
- `Node.js`: Version 20
- `npm`: Version 10
We recommend using [nvm](https://github.com/nvm-sh/nvm) to manage your node environment:


@@ -224,52 +224,3 @@ async def analysis_guide(ctx: Context) -> str:
```
See [MCP Integration](./mcp) for implementation details.
### Semantic Layers
Extensions can register custom semantic layer implementations that allow Superset to connect to external data modeling frameworks. Each semantic layer defines how to authenticate, discover semantic views (tables/metrics/dimensions), and execute queries against the external system.
```python
from superset_core.semantic_layers.decorators import semantic_layer
from superset_core.semantic_layers.layer import SemanticLayer
from my_extension.config import MyConfig
from my_extension.view import MySemanticView
@semantic_layer(
id="my_platform",
name="My Data Platform",
description="Connect to My Data Platform's semantic layer",
)
class MySemanticLayer(SemanticLayer[MyConfig, MySemanticView]):
configuration_class = MyConfig
@classmethod
def from_configuration(cls, configuration: dict) -> "MySemanticLayer":
config = MyConfig.model_validate(configuration)
return cls(config)
@classmethod
def get_configuration_schema(cls, configuration=None) -> dict:
return MyConfig.model_json_schema()
@classmethod
def get_runtime_schema(cls, configuration=None, runtime_data=None) -> dict:
return {"type": "object", "properties": {}}
def get_semantic_views(self, runtime_configuration: dict) -> set[MySemanticView]:
# Return available views from the external platform
...
def get_semantic_view(self, name: str, additional_configuration: dict) -> MySemanticView:
# Return a specific view by name
...
```
**Note**: The `@semantic_layer` decorator automatically detects context and applies appropriate ID prefixing:
- **Extension context**: ID prefixed as `extensions.{publisher}.{name}.{id}`
- **Host context**: Original ID used as-is
The decorator registers the class in the semantic layers registry, making it available in the UI for users to create connections. The `configuration_class` should be a Pydantic model that defines the fields needed to connect (credentials, project, database, etc.). Superset uses the model's JSON schema to render the configuration form dynamically.


@@ -256,34 +256,6 @@ For example, when running the local development build, the following will disabl
Top Nav and remove the Filter Bar:
`http://localhost:8088/superset/dashboard/my-dashboard/?standalone=1&show_filters=0`
### Table Chart Features
The **Table** chart type has several advanced capabilities worth knowing:
#### Conditional Formatting
Conditional formatting rules highlight cells based on their values. Rules can be applied to:
- **Numeric columns** — color cells above/below a threshold, or use a gradient across a range
- **String columns** — highlight cells matching specific text values or patterns
- **Boolean columns** — color cells that are `true` or `false`, or `null`/`not null`
Each rule has a **"Use gradient"** toggle: enabled applies a varying opacity (lighter = further from threshold), disabled applies a solid fill at full opacity regardless of value.
#### HTML Rendering in Table Cells
Table chart cells can render raw HTML, enabling rich formatting such as hyperlinks, colored badges, and icons directly in the data. Enable this per-column in the chart's **Column Configuration** panel by toggling **Render HTML**.
:::caution
Only enable HTML rendering for columns sourced from data you control. Rendering untrusted HTML can expose users to cross-site scripting (XSS) risks.
:::
#### Column Header Tooltips
Column headers display a tooltip with the column's **Description** from the dataset editor when the user hovers over them. Keep dataset column descriptions up to date to improve chart discoverability.
#### Display Controls
In dashboard view mode (without entering Edit mode), charts with configurable display options expose a **Display Controls** panel accessible from the chart's context menu. This surfaces controls such as Time Grain, Time Column, and layer visibility for applicable chart types — making it easy to adjust a chart's view without going to Explore.
### AG Grid Interactive Table
The **AG Grid Interactive Table** chart type is Superset's fully-featured data grid, suitable for large paginated datasets where the standard Table chart is not enough.
@@ -342,26 +314,6 @@ ECharts option overrides bypass Superset's validation layer. Invalid option keys
When the **Search Box** is visible in a Table chart, the **Download** action exports only the rows currently visible after the search filter is applied — not the full underlying dataset. This matches the visual output and is intentional. To export the full dataset regardless of search state, use the **Download as CSV** option from the chart's three-dot menu in the dashboard or from the Explore chart toolbar before applying a search filter.
### Sharing a Specific Tab
When a dashboard has tabs, each tab gets its own shareable URL. Navigate to the tab you want to share and copy the URL from your browser's address bar — the tab anchor is encoded in the URL so that anyone opening the link lands directly on that tab.
### Auto-Refresh
Dashboards can be configured to refresh automatically at a fixed interval without user interaction. Open a dashboard, click the **⋮** (more options) menu in the top-right, and select **Set auto-refresh interval**. Choose an interval (e.g., every 10 seconds, 1 minute, or 10 minutes). The setting is per-session and resets when you close the tab.
:::note
Auto-refresh triggers a full data reload for all charts on the dashboard. For dashboards with expensive queries, choose longer intervals to avoid overloading your database.
:::
### Last Queried Timestamp
Charts can display a "Last queried at" timestamp showing when the chart data was last fetched. This is useful on auto-refreshing dashboards to confirm data freshness. Enable it in **Dashboard Properties → Styling → Show last queried time**.
### Saving a Chart to a Specific Tab
When saving or adding a chart to a dashboard from Explore, you can select which tab it should land on using the tab tree-select dropdown in the "Add to dashboard" modal.
:::resources
- [Dashboard Customization](https://docs.preset.io/docs/dashboard-customization) - Advanced dashboard styling and layout options
- [Blog: BI Dashboard Best Practices](https://preset.io/blog/bi-dashboard-best-practices/)

View File

@@ -1,8 +1,3 @@
---
title: Embedding Superset
sidebar_position: 6
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
@@ -22,6 +17,10 @@ specific language governing permissions and limitations
under the License.
*/}
---
title: Embedding Superset
sidebar_position: 6
---
# Embedding Superset

View File

@@ -1,143 +0,0 @@
---
title: Handlebars Chart
hide_title: true
sidebar_position: 10
version: 1
---
## Handlebars Chart
The Handlebars chart lets you render query results using a custom [Handlebars](https://handlebarsjs.com/) template. This gives you full control over how your data is displayed — from simple tables to rich HTML layouts.
### Basic Usage
In the chart editor, write a Handlebars template in the **Template** field. Your query results are available as `data`, an array of row objects.
```handlebars
{{#each data}}
<p>{{this.name}}: {{this.value}}</p>
{{/each}}
```
### Built-in Helpers
Superset registers several custom helpers on top of the standard Handlebars built-ins.
#### `dateFormat`
Formats a date value using [Day.js](https://day.js.org/) format strings.
```handlebars
{{dateFormat my_date format="MMMM YYYY"}}
```
| Option | Default | Description |
|--------|---------|-------------|
| `format` | `YYYY-MM-DD` | A Day.js-compatible format string |
---
#### `stringify`
Converts an object to a JSON string, or any other value to its string representation.
```handlebars
{{stringify myObj}}
```
---
#### `formatNumber`
Formats a number using locale-aware formatting.
```handlebars
{{formatNumber myNumber "en-US"}}
```
| Option | Default | Description |
|--------|---------|-------------|
| `locale` | `en-US` | A BCP 47 language tag |
---
#### `parseJson`
Parses a JSON string into an object that can be used in your template.
```handlebars
{{parseJson myJsonString}}
```
---
#### `groupBy`
Groups an array of objects by a key, powered by [handlebars-group-by](https://github.com/nicktindall/handlebars-group-by).
```handlebars
{{#groupBy data "department"}}
<h3>{{value}}</h3>
{{#each items}}
<p>{{this.name}}</p>
{{/each}}
{{/groupBy}}
```
---
### Helpers from just-handlebars-helpers
Superset also registers all helpers from the [just-handlebars-helpers](https://github.com/leapfrogtechnology/just-handlebars-helpers) library. These include a wide range of comparison, math, string, and conditional helpers. Commonly used ones include:
#### Comparison
| Helper | Description | Example |
|--------|-------------|---------|
| `eq` | Strict equality | `{{#if (eq status "active")}}` |
| `eqw` | Weak equality | `{{#if (eqw count "5")}}` |
| `neq` | Strict inequality | `{{#if (neq role "admin")}}` |
| `lt` | Less than | `{{#if (lt score 50)}}` |
| `lte` | Less than or equal | `{{#if (lte score 100)}}` |
| `gt` | Greater than | `{{#if (gt price 0)}}` |
| `gte` | Greater than or equal | `{{#if (gte age 18)}}` |
#### Logical
| Helper | Description | Example |
|--------|-------------|---------|
| `and` | Logical AND | `{{#if (and isActive isVerified)}}` |
| `or` | Logical OR | `{{#if (or isAdmin isMod)}}` |
| `not` | Logical NOT | `{{#if (not isDisabled)}}` |
| `ifx` | Inline conditional | `{{ifx isActive "Yes" "No"}}` |
| `coalesce` | Returns first non-falsy value | `{{coalesce nickname name "Anonymous"}}` |
#### String
| Helper | Description | Example |
|--------|-------------|---------|
| `capitalize` | Capitalizes first letter | `{{capitalize name}}` |
| `uppercase` | Converts to uppercase | `{{uppercase status}}` |
| `lowercase` | Converts to lowercase | `{{lowercase email}}` |
| `truncate` | Truncates a string | `{{truncate description 100}}` |
| `contains` | Checks if string contains substring | `{{#if (contains tag "urgent")}}` |
#### Math
| Helper | Description | Example |
|--------|-------------|---------|
| `add` | Addition | `{{add a b}}` |
| `subtract` | Subtraction | `{{subtract total discount}}` |
| `multiply` | Multiplication | `{{multiply price quantity}}` |
| `divide` | Division | `{{divide total count}}` |
| `ceil` | Ceiling | `{{ceil value}}` |
| `floor` | Floor | `{{floor value}}` |
| `round` | Round | `{{round value}}` |
For the full list of available helpers, see the [just-handlebars-helpers documentation](https://github.com/leapfrogtechnology/just-handlebars-helpers).
### Tips
- Use raw blocks to escape Handlebars syntax if you need to display double curly braces literally.
- Comparison helpers like `eq` must be wrapped in a subexpression when used with `#if`: `{{#if (eq myVal "foo")}}`.
- HTML output is sanitized by default based on your Superset configuration (`HTML_SANITIZATION`).

View File

@@ -33,29 +33,6 @@ SQL templating must be enabled by your administrator via the `ENABLE_TEMPLATE_PR
For advanced configuration options, see the [SQL Templating Configuration Guide](/admin-docs/configuration/sql-templating).
:::
## Using Jinja in Calculated Columns
Jinja template macros are available in calculated column expressions in the dataset editor — not just in SQL Lab queries and virtual datasets. This allows column expressions to reference the current user or dynamic context.
**Example: User-scoped calculated column**
```sql
CASE WHEN sales_rep = '{{ current_username() }}' THEN amount ELSE 0 END
```
**Example: Conditional display based on role**
Because `current_user_roles()` returns a Python list, test role membership with a Jinja
conditional at template time rather than matching against the list's string representation:
```sql
{% if 'Finance' in current_user_roles() %}revenue{% else %}NULL{% endif %} AS finance_revenue
```
:::note
The `ENABLE_TEMPLATE_PROCESSING` feature flag must be enabled by your administrator for Jinja in calculated columns to work.
:::
## Basic Usage
Jinja templates use double curly braces `{{ }}` for expressions and `{% %}` for logic blocks.
@@ -266,7 +243,6 @@ Using `remove_filter=True` applies the filter in the inner query for better perf
- Use `|tojson` to serialize arrays as JSON strings
- Test queries with explicit parameter values before relying on filter context
- For complex templating needs, ask your administrator about custom Jinja macros
- **Format SQL is Jinja-aware**: The "Format SQL" button in SQL Lab correctly preserves `{{ }}` and `{% %}` template syntax and applies your selected database's SQL dialect when formatting.
:::resources
- [Admin Guide: SQL Templating Configuration](/admin-docs/configuration/sql-templating)

View File

@@ -55,10 +55,9 @@ Ask your AI assistant to browse what's available in your Superset instance:
Describe the visualization you want and AI creates it for you:
- **Preview-first workflow** -- by default AI generates an Explore link so you can review the chart before it is saved. Say "save it" to commit permanently
- **Create charts from natural language** -- describe what you want to see and AI picks the right chart type, metrics, and dimensions
- **Preview before saving** -- `generate_chart` defaults to `save_chart=False`, showing the chart in Explore before it's committed. Ask AI to save once you're satisfied.
- **Modify existing charts** -- `update_chart` also supports preview mode so you can review changes before saving (update filters, change chart types, add metrics)
- **Modify existing charts** -- `update_chart` also supports preview mode so you can review changes before saving
- **Get Explore links** -- open any chart in Superset's Explore view for further refinement
**Example prompts:**
@@ -66,16 +65,6 @@ Describe the visualization you want and AI creates it for you:
> "Update chart 42 to use a line chart instead"
> "Give me a link to explore this chart further"
:::tip Preview-first workflow
Charts are **not saved by default**. The workflow is intentionally iterative:
1. **Explore** — AI generates an Explore link so you can see the chart before it exists in Superset
2. **Iterate** — ask the AI to adjust the chart; changes are previewed without touching the database
3. **Save** — when you're happy, say "save it" and the chart is permanently stored
To skip the preview and save immediately, include "and save it" in your prompt.
:::
### Create Dashboards
Build dashboards from a collection of charts:
@@ -87,40 +76,16 @@ Build dashboards from a collection of charts:
> "Create a dashboard called 'Q4 Sales Overview' with charts 10, 15, and 22"
> "Add the revenue trend chart to the executive dashboard"
### Browse Databases
Discover what database connections are configured in your Superset instance:
- **List databases** -- see all database connections you have access to
- **Get database details** -- name, backend type (PostgreSQL, Snowflake, etc.), and connection status
**Example prompts:**
> "What databases are connected to Superset?"
> "Show me details about the data warehouse connection"
### Create Virtual Datasets
Build ad-hoc SQL datasets that can be used as the basis for charts:
- **Create virtual datasets** -- write a SQL query and save it as a reusable dataset
- **Use immediately in charts** -- the returned dataset ID can be passed directly to chart creation
**Example prompts:**
> "Create a dataset from: SELECT region, SUM(revenue) as total_revenue FROM orders GROUP BY region"
> "Make a virtual dataset called 'monthly_signups' from the users table filtered to last 12 months"
### Run SQL Queries
Execute SQL directly through your AI assistant:
- **Run queries** -- execute SQL with full Superset RBAC enforcement (you can only query data your roles allow)
- **Open SQL Lab** -- get a link to SQL Lab pre-populated with a query, ready to run and explore
- **Save queries** -- save a SQL query to SQL Lab's Saved Queries for later reuse
**Example prompts:**
> "Run this query: SELECT region, SUM(revenue) FROM sales GROUP BY region"
> "Open SQL Lab with a query to show the top 10 customers by order count"
> "Save this query as 'Weekly Revenue Report'"
### Analyze Chart Data

View File

@@ -40,13 +40,13 @@
"version:remove:components": "node scripts/manage-versions.mjs remove components"
},
"dependencies": {
"@ant-design/icons": "^6.2.2",
"@docusaurus/core": "^3.10.1",
"@docusaurus/faster": "^3.10.1",
"@docusaurus/plugin-client-redirects": "^3.10.1",
"@docusaurus/preset-classic": "3.10.1",
"@docusaurus/theme-live-codeblock": "^3.10.1",
"@docusaurus/theme-mermaid": "^3.10.1",
"@ant-design/icons": "^6.2.0",
"@docusaurus/core": "^3.10.0",
"@docusaurus/faster": "^3.10.0",
"@docusaurus/plugin-client-redirects": "^3.10.0",
"@docusaurus/preset-classic": "3.10.0",
"@docusaurus/theme-live-codeblock": "^3.10.0",
"@docusaurus/theme-mermaid": "^3.10.0",
"@emotion/core": "^11.0.0",
"@emotion/react": "^11.13.3",
"@emotion/styled": "^11.14.1",
@@ -67,12 +67,12 @@
"@storybook/preview-api": "^8.6.18",
"@storybook/theming": "^8.6.15",
"@superset-ui/core": "^0.20.4",
"@swc/core": "^1.15.33",
"@swc/core": "^1.15.32",
"antd": "^6.3.7",
"baseline-browser-mapping": "^2.10.29",
"caniuse-lite": "^1.0.30001792",
"docusaurus-plugin-openapi-docs": "^5.0.2",
"docusaurus-theme-openapi-docs": "^5.0.2",
"baseline-browser-mapping": "^2.10.23",
"caniuse-lite": "^1.0.30001791",
"docusaurus-plugin-openapi-docs": "^5.0.1",
"docusaurus-theme-openapi-docs": "^5.0.1",
"js-yaml": "^4.1.1",
"js-yaml-loader": "^1.2.2",
"json-bigint": "^1.0.0",
@@ -86,14 +86,14 @@
"remark-import-partial": "^0.0.2",
"reselect": "^5.1.1",
"storybook": "^8.6.18",
"swagger-ui-react": "^5.32.5",
"swagger-ui-react": "^5.32.4",
"swc-loader": "^0.2.7",
"tinycolor2": "^1.4.2",
"unist-util-visit": "^5.1.0"
},
"devDependencies": {
"@docusaurus/module-type-aliases": "^3.10.1",
"@docusaurus/tsconfig": "^3.10.1",
"@docusaurus/module-type-aliases": "^3.10.0",
"@docusaurus/tsconfig": "^3.10.0",
"@eslint/js": "^9.39.2",
"@types/js-yaml": "^4.0.9",
"@types/react": "^19.1.8",
@@ -103,10 +103,10 @@
"eslint-config-prettier": "^10.1.8",
"eslint-plugin-prettier": "^5.5.5",
"eslint-plugin-react": "^7.37.5",
"globals": "^17.6.0",
"globals": "^17.5.0",
"prettier": "^3.8.3",
"typescript": "~6.0.3",
"typescript-eslint": "^8.59.2",
"typescript-eslint": "^8.59.0",
"webpack": "^5.106.2"
},
"browserslist": {
@@ -124,7 +124,8 @@
"resolutions": {
"react-redux": "^9.2.0",
"@reduxjs/toolkit": "^2.5.0",
"baseline-browser-mapping": "^2.9.19"
"baseline-browser-mapping": "^2.9.19",
"webpackbar": "^7.0.0"
},
"packageManager": "yarn@1.22.22+sha1.ac34549e6aa8e7ead463a7407e1c7390f61a6610"
}

View File

@@ -141,47 +141,6 @@ def eval_node(node):
return "<f-string>"
return None
def static_return_bool(func_node):
"""
Statically resolve a method's return value to a bool when possible.
Returns True/False for functions whose body is (effectively) a single
`return True` / `return False` — allowing a leading docstring and
ignoring pure-comment/pass statements. Returns None for anything more
complex (conditional returns, computed values, no return, etc.).
Used by `has_implicit_cancel` handling: `diagnose()` in lib.py calls
the method and checks the return value, so an override that explicitly
returns False must NOT be treated as enabling query cancelation.
"""
returns = []
other_logic = False
docstring_skipped = False
for stmt in func_node.body:
# Skip docstring (only the FIRST expression statement that is a
# string constant — later bare string literals are not docstrings
# and should count as non-trivial logic).
if (not docstring_skipped
and isinstance(stmt, ast.Expr)
and isinstance(stmt.value, ast.Constant)
and isinstance(stmt.value.value, str)):
docstring_skipped = True
continue
if isinstance(stmt, ast.Pass):
continue
if isinstance(stmt, ast.Return):
returns.append(stmt)
continue
# Any other statement (if/for/assign/etc.) means control flow is
# non-trivial; bail out to be conservative.
other_logic = True
break
if other_logic or len(returns) != 1:
return None
val = eval_node(returns[0].value)
return val if isinstance(val, bool) else None
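# Illustrative sketch (not part of this script): static_return_bool resolves a
# docstring-plus-literal-return body but bails out on any other control flow.
#
#   fn = ast.parse('def f(cls):\n    """doc"""\n    return False\n').body[0]
#   static_return_bool(fn)   # -> False
#
#   fn2 = ast.parse('def f(cls):\n    if flag:\n        return True\n    return False\n').body[0]
#   static_return_bool(fn2)  # -> None (conditional return is non-trivial)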
def deep_merge(base, override):
"""Deep merge two dictionaries. Override values take precedence."""
if base is None:
@@ -227,55 +186,8 @@ if not os.path.isdir(specs_dir):
print(json.dumps({"error": f"Directory not found: {specs_dir}", "cwd": os.getcwd()}))
sys.exit(1)
# Capability flag attributes with their defaults from BaseEngineSpec
CAP_ATTR_DEFAULTS = {
'supports_dynamic_schema': False,
'supports_catalog': False,
'supports_dynamic_catalog': False,
'disable_ssh_tunneling': False,
'supports_file_upload': True,
'allows_joins': True,
'allows_subqueries': True,
}
# Maps source capability attribute -> output field name used in databases.json.
# When a cap attr is assigned an unevaluable expression (e.g.
# allows_joins = is_feature_enabled("DRUID_JOINS")), the JS layer uses this
# mapping to preserve the corresponding field from the previously-generated
# JSON rather than silently inheriting an incorrect parent default.
CAP_ATTR_TO_OUTPUT_FIELD = {
'allows_joins': 'joins',
'allows_subqueries': 'subqueries',
'supports_dynamic_schema': 'supports_dynamic_schema',
'supports_catalog': 'supports_catalog',
'supports_dynamic_catalog': 'supports_dynamic_catalog',
'disable_ssh_tunneling': 'ssh_tunneling',
'supports_file_upload': 'supports_file_upload',
}
# Methods that indicate a capability when overridden by a non-BaseEngineSpec class.
# Mirrors the has_custom_method checks in superset/db_engine_specs/lib.py.
# cancel_query / has_implicit_cancel -> query_cancelation
# (diagnose() checks cancel_query override OR has_implicit_cancel() == True;
# base has_implicit_cancel returns False, so overriding it is the static
# equivalent of that method returning True. get_cancel_query_id is NOT
# part of the diagnose() heuristic and is intentionally excluded.)
# estimate_statement_cost / estimate_query_cost -> query_cost_estimation
# impersonate_user / update_impersonation_config / get_url_for_impersonation -> user_impersonation
# validate_sql -> sql_validation (not used yet; validation is engine-based)
CAP_METHODS = {
'cancel_query', 'has_implicit_cancel',
'estimate_statement_cost', 'estimate_query_cost',
'impersonate_user', 'update_impersonation_config', 'get_url_for_impersonation',
'validate_sql',
}
# Only the literal BaseEngineSpec is excluded from method-override tracking.
# Intermediate base classes (e.g. PrestoBaseEngineSpec) do count as overrides.
TRUE_BASE_CLASS = 'BaseEngineSpec'
# First pass: collect all class info (name, bases, metadata, cap_attrs, direct_methods)
class_info = {} # class_name -> {bases: [], metadata: {}, engine_name: str, filename: str, ...}
# First pass: collect all class info (name, bases, metadata)
class_info = {} # class_name -> {bases: [], metadata: {}, engine_name: str, filename: str}
for filename in sorted(os.listdir(specs_dir)):
if not filename.endswith('.py') or filename in ('__init__.py', 'lib.py', 'lint_metadata.py'):
@@ -306,54 +218,30 @@ for filename in sorted(os.listdir(specs_dir)):
# Extract class attributes
engine_name = None
engine_attr = None
metadata = None
cap_attrs = {} # capability flag attributes defined directly in this class
# Cap attrs assigned via expressions we can't statically resolve
# (e.g. is_feature_enabled("FLAG")). Tracked so the JS layer can
# fall back to the previously-generated databases.json value
# rather than inherit a parent default that would be wrong.
unresolved_cap_attrs = set()
direct_methods = set() # capability methods defined directly in this class
for item in node.body:
if isinstance(item, ast.Assign):
for target in item.targets:
if not isinstance(target, ast.Name):
continue
if target.id == 'engine_name':
val = eval_node(item.value)
if isinstance(val, str):
engine_name = val
elif target.id == 'engine':
val = eval_node(item.value)
if isinstance(val, str):
engine_attr = val
elif target.id == 'metadata':
metadata = eval_node(item.value)
elif target.id in CAP_ATTR_DEFAULTS:
val = eval_node(item.value)
if isinstance(val, bool):
cap_attrs[target.id] = val
else:
# Unevaluable expression — defer to JS fallback.
unresolved_cap_attrs.add(target.id)
elif isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
if item.name in CAP_METHODS:
# has_implicit_cancel is special: diagnose() uses the
# method's RETURN VALUE, not just its presence. If the
# override statically returns False, treat it as if
# the method weren't overridden so query_cancelation
# matches diagnose(). Unresolvable / True / anything
# else falls through as an override (conservative).
if item.name == 'has_implicit_cancel':
if static_return_bool(item) is False:
continue
direct_methods.add(item.name)
if isinstance(target, ast.Name):
if target.id == 'engine_name':
val = eval_node(item.value)
if isinstance(val, str):
engine_name = val
elif target.id == 'metadata':
metadata = eval_node(item.value)
# Check for engine attribute with non-empty value to distinguish
# true base classes from product classes like OceanBaseEngineSpec
has_non_empty_engine = engine_attr is not None and bool(engine_attr)
has_non_empty_engine = False
for item in node.body:
if isinstance(item, ast.Assign):
for target in item.targets:
if isinstance(target, ast.Name) and target.id == 'engine':
# Check if engine value is non-empty string
if isinstance(item.value, ast.Constant):
has_non_empty_engine = bool(item.value.value)
break
# True base classes: end with BaseEngineSpec AND don't define engine
# or have empty engine (like PostgresBaseEngineSpec with engine = "")
@@ -366,18 +254,13 @@ for filename in sorted(os.listdir(specs_dir)):
'bases': base_names,
'metadata': metadata,
'engine_name': engine_name,
'engine': engine_attr,
'filename': filename,
'is_base_or_mixin': is_true_base,
'cap_attrs': cap_attrs,
'unresolved_cap_attrs': unresolved_cap_attrs,
'direct_methods': direct_methods,
}
except Exception as e:
errors.append(f"{filename}: {str(e)}")
# Second pass: resolve inheritance and build final metadata + capability flags
# Second pass: resolve inheritance and build final metadata
def get_inherited_metadata(class_name, visited=None):
"""Recursively get metadata from parent classes."""
if visited is None:
@@ -403,64 +286,6 @@ def get_inherited_metadata(class_name, visited=None):
return inherited
def get_resolved_caps(class_name, visited=None):
"""
Resolve capability flags and method overrides with inheritance.
Returns (attr_values, unresolved, methods):
- attr_values: {attr: bool} for attrs where the nearest MRO assignment
was a literal bool. Defaults are applied at the call site.
- unresolved: attrs where the nearest MRO assignment was an unevaluable
expression (e.g. is_feature_enabled("FLAG")). The JS layer falls
back to the previously-generated JSON value for these.
- methods: capability methods defined directly in some non-base ancestor,
matching the has_custom_method() logic in db_engine_specs/lib.py.
attr_values and unresolved are disjoint — an attr is in at most one.
"""
if visited is None:
visited = set()
if class_name in visited:
return {}, set(), set()
visited.add(class_name)
info = class_info.get(class_name)
if not info:
return {}, set(), set()
attr_values = {}
unresolved = set()
resolved_methods = set()
# Collect from parents, iterating right-to-left so leftmost bases win
# (matches Python MRO: for class C(A, B), A's attributes take precedence).
for base_name in reversed(info['bases']):
p_vals, p_unres, p_meth = get_resolved_caps(base_name, visited.copy())
# A parent's literal assignments overwrite whatever we inherited so far.
for attr, val in p_vals.items():
attr_values[attr] = val
unresolved.discard(attr)
# A parent's unresolved assignments likewise take precedence.
for attr in p_unres:
unresolved.add(attr)
attr_values.pop(attr, None)
resolved_methods.update(p_meth)
# Apply this class's own assignments (override parents).
for attr, val in info['cap_attrs'].items():
attr_values[attr] = val
unresolved.discard(attr)
for attr in info['unresolved_cap_attrs']:
unresolved.add(attr)
attr_values.pop(attr, None)
# Accumulate method overrides, but skip the literal BaseEngineSpec
# (its implementations are stubs; only non-base overrides count).
if class_name != TRUE_BASE_CLASS:
resolved_methods.update(info['direct_methods'])
return attr_values, unresolved, resolved_methods
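# Illustrative resolution (hypothetical classes, not in the codebase):
#
#   class Parent(BaseEngineSpec): allows_joins = False
#   class Child(Parent): allows_joins = is_feature_enabled("SOME_FLAG")
#
# get_resolved_caps("Child") returns attr_values={} and unresolved={'allows_joins'}:
# the child's unevaluable assignment shadows the parent's literal False, so the
# JS layer later restores the `joins` output field from the previous databases.json.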
for class_name, info in class_info.items():
# Skip base classes and mixins
if info['is_base_or_mixin']:
@@ -485,14 +310,7 @@ for class_name, info in class_info.items():
if final_metadata and isinstance(final_metadata, dict) and display_name:
debug_info["classes_with_metadata"] += 1
# Resolve capability flags from Python source
attr_values, unresolved_caps, cap_methods = get_resolved_caps(class_name)
cap_attrs = dict(CAP_ATTR_DEFAULTS)
cap_attrs.update(attr_values)
engine_attr = info.get('engine') or ''
entry = {
databases[display_name] = {
'engine': display_name.lower().replace(' ', '_'),
'engine_name': display_name,
'module': info['filename'][:-3], # Remove .py extension
@@ -500,40 +318,19 @@ for class_name, info in class_info.items():
'time_grains': {},
'score': 0,
'max_score': 0,
# Capability flags read from engine spec class attributes/methods
'joins': cap_attrs['allows_joins'],
'subqueries': cap_attrs['allows_subqueries'],
'supports_dynamic_schema': cap_attrs['supports_dynamic_schema'],
'supports_catalog': cap_attrs['supports_catalog'],
'supports_dynamic_catalog': cap_attrs['supports_dynamic_catalog'],
'ssh_tunneling': not cap_attrs['disable_ssh_tunneling'],
'supports_file_upload': cap_attrs['supports_file_upload'],
# Method-based flags: True only when a non-base class overrides them.
# Matches diagnose() in lib.py: cancel_query override OR
# has_implicit_cancel() returning True (which, given the base
# returns False, is equivalent to overriding has_implicit_cancel).
'query_cancelation': bool({'cancel_query', 'has_implicit_cancel'} & cap_methods),
'query_cost_estimation': bool({'estimate_statement_cost', 'estimate_query_cost'} & cap_methods),
# SQL validation is implemented in external validator classes keyed by engine name
'sql_validation': engine_attr in {'presto', 'postgresql'},
'user_impersonation': bool(
{'impersonate_user', 'update_impersonation_config', 'get_url_for_impersonation'} & cap_methods
),
'joins': True,
'subqueries': True,
'supports_dynamic_schema': False,
'supports_catalog': False,
'supports_dynamic_catalog': False,
'ssh_tunneling': False,
'query_cancelation': False,
'supports_file_upload': False,
'user_impersonation': False,
'query_cost_estimation': False,
'sql_validation': False,
}
# Tell the JS layer which output fields were populated from the
# BaseEngineSpec default because the source assignment was an
# unevaluable expression; those get overridden from existing JSON.
unresolved_fields = sorted(
CAP_ATTR_TO_OUTPUT_FIELD[attr]
for attr in unresolved_caps
if attr in CAP_ATTR_TO_OUTPUT_FIELD
)
if unresolved_fields:
entry['_unresolved_cap_fields'] = unresolved_fields
databases[display_name] = entry
if errors and not databases:
print(json.dumps({"error": "Parse errors", "details": errors, "debug": debug_info}), file=sys.stderr)
@@ -1054,52 +851,24 @@ function loadExistingData() {
}
}
/**
* Fall back to the previously-generated databases.json for capability flags
* whose source assignment couldn't be statically resolved (e.g.
* `allows_joins = is_feature_enabled("DRUID_JOINS")`). The Python extractor
* flags these via the internal `_unresolved_cap_fields` marker; without this
* fallback those fields would silently inherit the BaseEngineSpec default
* and disagree with runtime behavior. The marker is stripped before output.
*/
function fallbackUnresolvedCaps(newDatabases, existingData) {
for (const [name, db] of Object.entries(newDatabases)) {
const unresolved = db._unresolved_cap_fields;
if (!unresolved || unresolved.length === 0) {
delete db._unresolved_cap_fields;
continue;
}
const existingDb = existingData?.databases?.[name];
if (existingDb) {
for (const field of unresolved) {
if (existingDb[field] !== undefined) {
db[field] = existingDb[field];
}
}
}
delete db._unresolved_cap_fields;
}
return newDatabases;
}
/**
* Merge new documentation with existing diagnostics
* Preserves score, max_score, and time_grains from existing data (these require
* Flask context to generate and cannot be derived from static source analysis).
* Capability flags (joins, supports_catalog, etc.) are NOT preserved here — they
* are read fresh from the Python engine spec source by extractEngineSpecMetadata(),
* with a separate fallback for expression-based assignments (see fallbackUnresolvedCaps).
* Preserves score, time_grains, and feature flags from existing data
*/
function mergeWithExistingDiagnostics(newDatabases, existingData) {
if (!existingData?.databases) return newDatabases;
// Only preserve fields that require Flask/runtime context to generate
const diagnosticFields = ['score', 'max_score', 'time_grains'];
const diagnosticFields = [
'score', 'max_score', 'time_grains', 'joins', 'subqueries',
'supports_dynamic_schema', 'supports_catalog', 'supports_dynamic_catalog',
'ssh_tunneling', 'query_cancelation', 'supports_file_upload',
'user_impersonation', 'query_cost_estimation', 'sql_validation'
];
for (const [name, db] of Object.entries(newDatabases)) {
const existingDb = existingData.databases[name];
if (existingDb && existingDb.score > 0) {
// Preserve score/time_grain diagnostics from existing data
// Preserve diagnostics from existing data
for (const field of diagnosticFields) {
if (existingDb[field] !== undefined) {
db[field] = existingDb[field];
@@ -1110,7 +879,7 @@ function mergeWithExistingDiagnostics(newDatabases, existingData) {
const preserved = Object.values(newDatabases).filter(d => d.score > 0).length;
if (preserved > 0) {
console.log(`Preserved score/time_grains for ${preserved} databases from existing data`);
console.log(`Preserved diagnostics for ${preserved} databases from existing data`);
}
return newDatabases;
@@ -1158,12 +927,6 @@ async function main() {
databases = mergeWithExistingDiagnostics(databases, existingData);
}
// For cap flags assigned via unevaluable expressions (e.g.
// `is_feature_enabled(...)`), prefer the value from a previously-generated
// JSON. Runs regardless of scores since it addresses static-analysis gaps,
// not missing Flask diagnostics. Always strips the internal marker.
databases = fallbackUnresolvedCaps(databases, existingData);
// Extract and merge custom_errors for troubleshooting documentation
const customErrors = extractCustomErrors();
mergeCustomErrors(databases, customErrors);

File diff suppressed because it is too large.

View File

@@ -81,12 +81,6 @@
"lifecycle": "development",
"description": "Expand nested types in Presto into extra columns/arrays. Experimental, doesn't work with all nested types."
},
{
"name": "SEMANTIC_LAYERS",
"default": false,
"lifecycle": "development",
"description": "Enable semantic layers and show semantic views alongside datasets"
},
{
"name": "TABLE_V2_TIME_COMPARISON_ENABLED",
"default": false,

Binary file not shown. (Before: 132 KiB, After: 134 KiB)

Binary file not shown. (Before: 116 KiB, After: 104 KiB)

Binary file not shown. (Before: 118 KiB, After: 118 KiB)

Binary file not shown. (Before: 97 KiB, After: 99 KiB)

Binary file not shown. (Before: 240 KiB, After: 79 KiB)

Binary file not shown. (Before: 39 KiB, After: 84 KiB)

Binary file not shown. (Before: 131 KiB, After: 51 KiB)

Binary file not shown. (Before: 12 KiB, After: 85 KiB)

Binary file not shown. (Before: 2.1 KiB, After: 14 KiB)

Binary file not shown. (Before: 37 KiB, After: 97 KiB)

Binary file not shown. (Before: 21 KiB, After: 141 KiB)

File diff suppressed because it is too large.

View File

@@ -29,7 +29,7 @@ maintainers:
- name: craig-rueda
email: craig@craigrueda.com
url: https://github.com/craig-rueda
version: 0.15.5 # See [README](https://github.com/apache/superset/blob/master/helm/superset/README.md#versioning) for version details.
version: 0.15.4 # See [README](https://github.com/apache/superset/blob/master/helm/superset/README.md#versioning) for version details.
dependencies:
- name: postgresql
version: 16.7.27

View File

@@ -23,7 +23,7 @@ NOTE: This file is generated by helm-docs: https://github.com/norwoodj/helm-docs
# superset
![Version: 0.15.5](https://img.shields.io/badge/Version-0.15.5-informational?style=flat-square)
![Version: 0.15.4](https://img.shields.io/badge/Version-0.15.4-informational?style=flat-square)
Apache Superset is a modern, enterprise-ready business intelligence web application

View File

@@ -844,8 +844,6 @@ postgresql:
database: superset
image:
registry: docker.io
repository: bitnamilegacy/postgresql
tag: "14.17.0-debian-12-r3"
## PostgreSQL Primary parameters
@@ -920,11 +918,6 @@ redis:
accessModes:
- ReadWriteOnce
image:
registry: docker.io
repository: bitnamilegacy/redis
tag: 7.0.10-debian-11-r4
nodeSelector: {}
tolerations: []

View File

@@ -70,13 +70,13 @@ dependencies = [
# marshmallow>=4 has issues: https://github.com/apache/superset/issues/33162
"marshmallow>=3.0, <4",
"marshmallow-union>=0.1",
"msgpack>=1.0.0, <1.2",
"nh3>=0.2.11, <0.4",
"msgpack>=1.0.0, <1.1",
"nh3>=0.2.11, <0.3",
"numpy>1.23.5, <2.3",
"packaging",
# --------------------------
# pandas and related (wanting pandas[performance] without numba as it's 100+MB and not needed)
"pandas[excel]>=2.1.4, <2.4",
"pandas[excel]>=2.1.4, <2.2",
"bottleneck", # recommended performance dependency for pandas, see https://pandas.pydata.org/docs/getting_started/install.html#performance-dependencies-recommended
# --------------------------
"parsedatetime",
@@ -95,7 +95,7 @@ dependencies = [
"redis>=5.0.0, <6.0",
"rison>=2.0.0, <3.0",
"selenium>=4.14.0, <5.0",
"shillelagh[gsheetsapi]>=1.4.4, <2.0",
"shillelagh[gsheetsapi]>=1.4.3, <2.0",
"sshtunnel>=0.4.0, <0.5",
"simplejson>=3.15.0",
"slack_sdk>=3.19.0, <4",
@@ -109,12 +109,12 @@ dependencies = [
"watchdog>=6.0.0",
"wtforms>=2.3.3, <4",
"wtforms-json",
"xlsxwriter>=3.0.7, <3.3",
"xlsxwriter>=3.0.7, <3.1",
]
[project.optional-dependencies]
athena = ["pyathena[pandas]>=2, <4"]
athena = ["pyathena[pandas]>=2, <3"]
aurora-data-api = ["preset-sqlalchemy-aurora-data-api>=0.2.8,<0.3"]
bigquery = [
"pandas-gbq>=0.19.1",
@@ -131,31 +131,25 @@ d1 = [
]
databend = ["databend-sqlalchemy>=0.3.2, <1.0"]
databricks = [
"databricks-sql-connector==4.2.6",
"databricks-sql-connector==4.1.2",
"databricks-sqlalchemy==1.0.5",
]
db2 = ["ibm-db-sa>0.3.8, <=0.4.4"]
denodo = ["denodo-sqlalchemy>=1.0.6,<2.1.0"]
db2 = ["ibm-db-sa>0.3.8, <=0.4.0"]
denodo = ["denodo-sqlalchemy~=1.0.6"]
dremio = ["sqlalchemy-dremio>=1.2.1, <4"]
drill = ["sqlalchemy-drill>=1.1.4, <2"]
druid = ["pydruid>=0.6.5,<0.7"]
duckdb = ["duckdb>=1.4.2,<2", "duckdb-engine>=0.17.0"]
dynamodb = ["pydynamodb>=0.4.2"]
solr = ["sqlalchemy-solr >= 0.2.0"]
elasticsearch = ["elasticsearch-dbapi>=0.2.13, <0.3.0"]
elasticsearch = ["elasticsearch-dbapi>=0.2.12, <0.3.0"]
exasol = ["sqlalchemy-exasol >= 2.4.0, <3.0"]
excel = ["xlrd>=1.2.0, <1.3"]
fastmcp = [
"fastmcp>=3.2.4,<4.0",
# tiktoken backs the response-size-guard token estimator. Without
# it, the middleware falls back to a coarser character-based
# heuristic that under-counts JSON-heavy MCP responses.
"tiktoken>=0.7.0,<1.0",
]
firebird = ["sqlalchemy-firebird>=0.7.0, <2.2"]
fastmcp = ["fastmcp>=3.1.0,<4.0"]
firebird = ["sqlalchemy-firebird>=0.7.0, <0.8"]
firebolt = ["firebolt-sqlalchemy>=1.0.0, <2"]
gevent = ["gevent>=23.9.1"]
gsheets = ["shillelagh[gsheetsapi]>=1.4.4, <2"]
gsheets = ["shillelagh[gsheetsapi]>=1.4.3, <2"]
hana = ["hdbcli==2.4.162", "sqlalchemy_hana==0.4.0"]
hive = [
"pyhive[hive]>=0.6.5;python_version<'3.11'",
@@ -164,7 +158,7 @@ hive = [
"thrift>=0.14.1, <1.0.0",
"thrift_sasl>=0.4.3, < 1.0.0",
]
impala = ["impyla>0.16.2, <0.23"]
impala = ["impyla>0.16.2, <0.17"]
kusto = ["sqlalchemy-kusto>=3.0.0, <4"]
kylin = ["kylinpy>=2.8.1, <2.9"]
mssql = ["pymssql>=2.2.8, <3"]
@@ -177,17 +171,17 @@ ocient = [
"shapely",
"geojson",
]
oracle = ["cx-Oracle>8.0.0, <8.4"]
oracle = ["cx-Oracle>8.0.0, <8.1"]
parseable = ["sqlalchemy-parseable>=0.1.3,<0.2.0"]
pinot = ["pinotdb>=5.0.0, <10.0.0"]
pinot = ["pinotdb>=5.0.0, <6.0.0"]
playwright = ["playwright>=1.37.0, <2"]
postgres = ["psycopg2-binary==2.9.12"]
postgres = ["psycopg2-binary==2.9.9"]
presto = ["pyhive[presto]>=0.6.5"]
trino = ["trino>=0.328.0"]
prophet = ["prophet>=1.1.6, <2"]
redshift = ["sqlalchemy-redshift>=0.8.1, <0.9"]
risingwave = ["sqlalchemy-risingwave"]
shillelagh = ["shillelagh[all]>=1.4.4, <2"]
shillelagh = ["shillelagh[all]>=1.4.3, <2"]
singlestore = ["sqlalchemy-singlestoredb>=1.1.1, <2"]
snowflake = ["snowflake-sqlalchemy>=1.2.4, <2"]
sqlite = ["syntaqlite>=0.1.0"]
@@ -203,7 +197,7 @@ tdengine = [
]
teradata = ["teradatasql>=16.20.0.23"]
thumbnails = [] # deprecated, will be removed in 7.0
vertica = ["sqlalchemy-vertica-python>= 0.5.9, < 0.7"]
vertica = ["sqlalchemy-vertica-python>=0.5.9, < 0.6"]
netezza = ["nzalchemy>=11.0.2"]
starrocks = ["starrocks>=1.0.0"]
doris = ["pydoris>=1.0.0, <2.0.0"]
@@ -224,7 +218,7 @@ development = [
"progress>=1.5,<2",
"psutil",
"pyfakefs",
"pyinstrument>=4.0.2,<6",
"pyinstrument>=4.0.2,<5",
"pylint",
"pytest<8.0.0", # hairy issue with pytest >=8 where current_app proxies are not set in time
"pytest-asyncio",
@@ -294,7 +288,6 @@ module = [
"superset.tags.filters",
"superset.commands.security.update",
"superset.commands.security.create",
"superset.semantic_layers.api",
]
warn_unused_ignores = false
@@ -383,7 +376,6 @@ dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
[tool.ruff.lint.per-file-ignores]
"superset/mcp_service/app.py" = ["S608", "E501"] # LLM instruction text: SQL examples (S608) and long lines in multiline string (E501)
"superset/mcp_service/*/tool/list_*.py" = ["E501"] # LLM docstring examples show full request shapes which exceed line length
"scripts/*" = ["TID251"]
"setup.py" = ["TID251"]
"superset/config.py" = ["TID251"]

View File

@@ -183,9 +183,7 @@ idna==3.10
# trio
# url-normalize
isodate==0.7.2
# via
# apache-superset (pyproject.toml)
# apache-superset-core
# via apache-superset (pyproject.toml)
itsdangerous==2.2.0
# via
# flask
@@ -298,7 +296,6 @@ pyarrow==20.0.0
# via
# -r requirements/base.in
# apache-superset (pyproject.toml)
# apache-superset-core
pyasn1==0.6.3
# via
# pyasn1-modules
@@ -384,7 +381,7 @@ selenium==4.32.0
# via apache-superset (pyproject.toml)
setuptools==80.9.0
# via -r requirements/base.in
shillelagh==1.4.4
shillelagh==1.4.3
# via apache-superset (pyproject.toml)
simplejson==3.20.1
# via apache-superset (pyproject.toml)

View File

@@ -236,7 +236,7 @@ et-xmlfile==2.0.0
# openpyxl
exceptiongroup==1.3.0
# via fastmcp
fastmcp==3.2.4
fastmcp==3.1.0
# via apache-superset
filelock==3.20.3
# via
@@ -379,8 +379,6 @@ greenlet==3.1.1
# gevent
# shillelagh
# sqlalchemy
griffelib==2.0.2
# via fastmcp
grpcio==1.71.0
# via
# apache-superset
@@ -442,7 +440,6 @@ isodate==0.7.2
# via
# -c requirements/base-constraint.txt
# apache-superset
# apache-superset-core
isort==6.0.1
# via pylint
itsdangerous==2.2.0
@@ -708,7 +705,7 @@ protobuf==4.25.8
# proto-plus
psutil==6.1.0
# via apache-superset
psycopg2-binary==2.9.12
psycopg2-binary==2.9.9
# via apache-superset
py-key-value-aio==0.4.4
# via fastmcp
@@ -716,7 +713,6 @@ pyarrow==20.0.0
# via
# -c requirements/base-constraint.txt
# apache-superset
# apache-superset-core
# db-dtypes
# pandas-gbq
pyasn1==0.6.3
@@ -868,8 +864,6 @@ referencing==0.36.2
# jsonschema
# jsonschema-path
# jsonschema-specifications
regex==2026.4.4
# via tiktoken
requests==2.33.0
# via
# -c requirements/base-constraint.txt
@@ -882,7 +876,6 @@ requests==2.33.0
# requests-cache
# requests-oauthlib
# shillelagh
# tiktoken
# trino
requests-cache==1.2.1
# via
@@ -936,7 +929,7 @@ setuptools==80.9.0
# pydata-google-auth
# zope-event
# zope-interface
shillelagh==1.4.4
shillelagh==1.4.3
# via
# -c requirements/base-constraint.txt
# apache-superset
@@ -1008,8 +1001,6 @@ tabulate==0.9.0
# via
# -c requirements/base-constraint.txt
# apache-superset
tiktoken==0.12.0
# via apache-superset
tomli-w==1.2.0
# via apache-superset-extensions-cli
tomlkit==0.13.3

View File

@@ -45,17 +45,7 @@ if [ ${#js_ts_files[@]} -gt 0 ]; then
# Skip custom OXC build in pre-commit for speed
export SKIP_CUSTOM_OXC=true
# Use quiet mode in pre-commit to reduce noise (only show errors)
# Capture output so we can treat "No files found" (all files ignored by
# ignorePatterns) as success rather than a false-positive failure.
output=$(npx oxlint --config oxlint.json --fix --quiet "${js_ts_files[@]}" 2>&1) || {
if echo "$output" | grep -q "No files found"; then
echo "No files to lint after applying ignore patterns"
exit 0
fi
echo "$output" >&2
exit 1
}
[ -n "$output" ] && echo "$output"
npx oxlint --config oxlint.json --fix --quiet "${js_ts_files[@]}"
else
echo "No JavaScript/TypeScript files to lint"
fi

View File

@@ -18,7 +18,7 @@
[project]
name = "apache-superset-core"
version = "0.1.0rc3"
version = "0.1.0rc2"
description = "Core Python package for building Apache Superset backend extensions and integrations"
readme = "README.md"
authors = [
@@ -43,8 +43,6 @@ classifiers = [
]
dependencies = [
"flask-appbuilder>=5.0.2,<6",
"isodate>=0.7.0",
"pyarrow>=16.0.0",
"pydantic>=2.8.0",
"sqlalchemy>=1.4.0,<2.0",
"sqlalchemy-utils>=0.38.0, <0.43", # expanding lowerbound to work with pydoris

View File

@@ -1,73 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from __future__ import annotations
from typing import Any
from pydantic import BaseModel
def build_configuration_schema(
config_class: type[BaseModel],
configuration: BaseModel | None = None,
) -> dict[str, Any]:
"""
Build a JSON schema from a Pydantic configuration class.
Handles generic boilerplate that any semantic layer with dynamic fields needs:
- Reorders properties to match model field order (Pydantic sorts alphabetically)
- When ``configuration`` is None, sets ``enum: []`` on all ``x-dynamic`` properties
so the frontend renders them as empty dropdowns
Semantic layer implementations call this instead of
``model_json_schema()`` directly,
then only need to add their own dynamic population logic.
"""
schema = config_class.model_json_schema()
# Pydantic sorts properties alphabetically; restore model field order
field_order = [
field.alias or name for name, field in config_class.model_fields.items()
]
schema["properties"] = {
key: schema["properties"][key]
for key in field_order
if key in schema["properties"]
}
if configuration is None:
for prop_schema in schema["properties"].values():
if prop_schema.get("x-dynamic"):
prop_schema["enum"] = []
return schema
def check_dependencies(
prop_schema: dict[str, Any],
configuration: BaseModel,
) -> bool:
"""
Check whether a dynamic property's dependencies are satisfied.
Reads the ``x-dependsOn`` list from the property schema and returns ``True``
when every referenced attribute on ``configuration`` is truthy.
"""
dependencies = prop_schema.get("x-dependsOn", [])
return all(getattr(configuration, dep, None) for dep in dependencies)
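# Illustrative usage (a sketch, not part of this module): a hypothetical config
# model tags dynamic fields via json_schema_extra, which these helpers then
# post-process.
#
#   class ExampleConfig(BaseModel):
#       token: str
#       database: str = Field(
#           default="",
#           json_schema_extra={"x-dynamic": True, "x-dependsOn": ["token"]},
#       )
#
#   schema = build_configuration_schema(ExampleConfig)
#   schema["properties"]["database"]["enum"]        # -> [] (no configuration yet)
#   check_dependencies(schema["properties"]["database"],
#                      ExampleConfig(token="abc"))  # -> True once `token` is truthy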

View File

@@ -1,169 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Semantic layer DAO interfaces for superset-core.
Provides abstract DAO classes for semantic layers and views that define the
interface contract. Host implementations replace these with concrete classes
backed by SQLAlchemy during initialization.
Usage:
from superset_core.semantic_layers.daos import (
AbstractSemanticLayerDAO,
AbstractSemanticViewDAO,
)
"""
from __future__ import annotations
from abc import abstractmethod
from typing import Any, ClassVar
from superset_core.common.daos import BaseDAO
from superset_core.semantic_layers.models import SemanticLayerModel, SemanticViewModel
class AbstractSemanticLayerDAO(BaseDAO[SemanticLayerModel]):
"""
Abstract DAO interface for SemanticLayer.
Host implementations will replace this class during initialization
with a concrete DAO providing actual database access.
"""
model_cls: ClassVar[type[Any] | None] = None
base_filter = None
id_column_name = "uuid"
uuid_column_name = "uuid"
@classmethod
@abstractmethod
def validate_uniqueness(cls, name: str) -> bool:
"""
Validate that a semantic layer name is unique.
:param name: Semantic layer name to validate
:return: True if the name is unique, False otherwise
"""
...
@classmethod
@abstractmethod
def validate_update_uniqueness(cls, layer_uuid: str, name: str) -> bool:
"""
Validate that a semantic layer name is unique for an update operation,
excluding the layer being updated.
:param layer_uuid: UUID of the semantic layer being updated
:param name: New name to validate
:return: True if the name is unique, False otherwise
"""
...
@classmethod
@abstractmethod
def find_by_name(cls, name: str) -> SemanticLayerModel | None:
"""
Find a semantic layer by name.
:param name: Semantic layer name
:return: SemanticLayerModel instance or None
"""
...
@classmethod
@abstractmethod
def get_semantic_views(cls, layer_uuid: str) -> list[SemanticViewModel]:
"""
Get all semantic views associated with a semantic layer.
:param layer_uuid: UUID of the semantic layer
:return: List of SemanticViewModel instances
"""
...
class AbstractSemanticViewDAO(BaseDAO[SemanticViewModel]):
"""
Abstract DAO interface for SemanticView.
Host implementations will replace this class during initialization
with a concrete DAO providing actual database access.
"""
model_cls: ClassVar[type[Any] | None] = None
base_filter = None
id_column_name = "id"
uuid_column_name = "uuid"
@classmethod
@abstractmethod
def validate_uniqueness(
cls,
name: str,
layer_uuid: str,
configuration: dict[str, Any],
) -> bool:
"""
Validate that a semantic view is unique within a semantic layer.
Uniqueness is determined by the combination of name, layer UUID, and
configuration.
:param name: View name
:param layer_uuid: UUID of the parent semantic layer
:param configuration: Configuration dict to compare
:return: True if unique, False otherwise
"""
...
@classmethod
@abstractmethod
def validate_update_uniqueness(
cls,
view_uuid: str,
name: str,
layer_uuid: str,
configuration: dict[str, Any],
) -> bool:
"""
Validate that a semantic view is unique within a semantic layer for an
update operation, excluding the view being updated.
:param view_uuid: UUID of the view being updated
:param name: New name to validate
:param layer_uuid: UUID of the parent semantic layer
:param configuration: Configuration dict to compare
:return: True if unique, False otherwise
"""
...
@classmethod
@abstractmethod
def find_by_name(cls, name: str, layer_uuid: str) -> SemanticViewModel | None:
"""
Find a semantic view by name within a semantic layer.
:param name: View name
:param layer_uuid: UUID of the parent semantic layer
:return: SemanticViewModel instance or None
"""
...
__all__ = ["AbstractSemanticLayerDAO", "AbstractSemanticViewDAO"]

View File

@@ -1,102 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Semantic layer registration decorator for Superset.
This module provides a decorator interface to register semantic layer
implementations with the host application, enabling automatic discovery
by the extensions framework.
Usage:
from superset_core.semantic_layers.decorators import semantic_layer
@semantic_layer(
id="snowflake",
name="Snowflake Cortex",
description="Snowflake semantic layer via Cortex Analyst",
)
class SnowflakeSemanticLayer(SemanticLayer[SnowflakeConfig, SnowflakeView]):
...
# Or with minimal arguments:
@semantic_layer(id="dbt", name="dbt Semantic Layer")
class DbtSemanticLayer(SemanticLayer[DbtConfig, DbtView]):
...
"""
from __future__ import annotations
from typing import Callable, TypeVar
# Type variable for decorated semantic layer classes
T = TypeVar("T")
def semantic_layer(
id: str,
name: str,
description: str | None = None,
) -> Callable[[T], T]:
"""
Decorator to register a semantic layer implementation.
Automatically detects extension context and applies appropriate
namespacing to prevent ID conflicts between host and extension
semantic layers.
Host implementations will replace this function during initialization
with a concrete implementation providing actual functionality.
Args:
id: Unique semantic layer type identifier (e.g., "snowflake",
"dbt"). Used as the key in the semantic layers registry and
stored in the ``type`` column of the ``SemanticLayer`` model.
name: Human-readable display name (e.g., "Snowflake Cortex").
Shown in the UI when listing available semantic layer types.
description: Optional description for documentation and UI
tooltips.
Returns:
Decorated semantic layer class registered with the host
application.
Raises:
NotImplementedError: If called before host implementation is
initialized.
Example:
from superset_core.semantic_layers.decorators import semantic_layer
from superset_core.semantic_layers.layer import SemanticLayer
@semantic_layer(
id="snowflake",
name="Snowflake Cortex",
description="Connect to Snowflake Cortex Analyst",
)
class SnowflakeSemanticLayer(
SemanticLayer[SnowflakeConfig, SnowflakeView]
):
...
"""
raise NotImplementedError(
"Semantic layer decorator not initialized. "
"This decorator should be replaced during Superset startup."
)
__all__ = ["semantic_layer"]

View File

@@ -1,129 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Any, Generic, TypeVar
from pydantic import BaseModel
from superset_core.semantic_layers.view import SemanticView
ConfigT = TypeVar("ConfigT", bound=BaseModel)
SemanticViewT = TypeVar("SemanticViewT", bound="SemanticView")
class SemanticLayer(ABC, Generic[ConfigT, SemanticViewT]):
"""
Abstract base class for semantic layers.
"""
configuration_class: type[BaseModel]
@classmethod
@abstractmethod
def from_configuration(
cls,
configuration: dict[str, Any],
) -> SemanticLayer[ConfigT, SemanticViewT]:
"""
Create a semantic layer from its configuration.
"""
raise NotImplementedError(
"Semantic layers must implement the from_configuration method"
)
@classmethod
@abstractmethod
def get_configuration_schema(
cls,
configuration: ConfigT | None = None,
) -> dict[str, Any]:
"""
Get the JSON schema for the configuration needed to add the semantic layer.
A partial configuration `configuration` can be sent to improve the schema,
allowing for progressive validation and better UX. For example, a semantic
layer might require:
- auth information
- a database
If the user provides the auth information, a client can send the partial
configuration to this method, and the resulting JSON schema would include
the list of databases the user has access to, allowing a dropdown to be
populated.
The Snowflake semantic layer has an example implementation of this method, where
database and schema names are populated based on the provided connection info.
"""
raise NotImplementedError(
"Semantic layers must implement the get_configuration_schema method"
)
@classmethod
@abstractmethod
def get_runtime_schema(
cls,
configuration: ConfigT,
runtime_data: dict[str, Any] | None = None,
) -> dict[str, Any]:
"""
Get the JSON schema for the runtime parameters needed to load semantic views.
This returns the schema needed to connect to a semantic view given the
configuration for the semantic layer. For example, a semantic layer might
be configured by:
- auth information
- an optional database
If the user does not provide a database when creating the semantic layer, the
runtime schema would require the database name to be provided before loading any
semantic views. This allows users to create semantic layers that connect to a
specific database (or project, account, etc.), or that allow users to select it
at query time.
The Snowflake semantic layer has an example implementation of this method, where
database and schema names are required if they were not provided in the initial
configuration.
"""
raise NotImplementedError(
"Semantic layers must implement the get_runtime_schema method"
)
@abstractmethod
def get_semantic_views(
self,
runtime_configuration: dict[str, Any],
) -> set[SemanticViewT]:
"""
Get the semantic views available in the semantic layer.
The runtime configuration can provide information like a given project or
schema, used to restrict the semantic views returned.
"""
@abstractmethod
def get_semantic_view(
self,
name: str,
additional_configuration: dict[str, Any],
) -> SemanticViewT:
"""
Get a specific semantic view by its name and additional configuration.
"""

View File

@@ -1,85 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Semantic layer model interfaces for superset-core.
Provides abstract model classes for semantic layers and views that will be
replaced by the host implementation's concrete SQLAlchemy models during
initialization.
Usage:
from superset_core.semantic_layers.models import (
SemanticLayerModel,
SemanticViewModel,
)
"""
from __future__ import annotations
from datetime import datetime
from uuid import UUID
from superset_core.common.models import CoreModel
class SemanticLayerModel(CoreModel):
"""
Abstract interface for the SemanticLayer database model.
Host implementations will replace this class during initialization
with a concrete SQLAlchemy model providing actual persistence.
"""
__abstract__ = True
# Type hints for expected column attributes
uuid: UUID
name: str
description: str | None
type: str
configuration: str
configuration_version: int
cache_timeout: int | None
created_on: datetime | None
changed_on: datetime | None
class SemanticViewModel(CoreModel):
"""
Abstract interface for the SemanticView database model.
Host implementations will replace this class during initialization
with a concrete SQLAlchemy model providing actual persistence.
"""
__abstract__ = True
# Type hints for expected column attributes
id: int
uuid: UUID
name: str
description: str | None
configuration: str
configuration_version: int
cache_timeout: int | None
semantic_layer_uuid: UUID
created_on: datetime | None
changed_on: datetime | None
__all__ = ["SemanticLayerModel", "SemanticViewModel"]

View File

@@ -1,209 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from __future__ import annotations
import enum
from dataclasses import dataclass
from datetime import date, datetime, time, timedelta
import isodate
import pyarrow as pa
@dataclass(frozen=True)
class Grain:
"""
Represents a time grain (e.g., day, month, year).
Attributes:
name: Human-readable name of the grain (e.g., "Second")
representation: ISO 8601 duration (e.g., "PT1S", "P1D", "P1M")
"""
name: str
representation: str
def __post_init__(self) -> None:
isodate.parse_duration(self.representation)
def __eq__(self, other: object) -> bool:
if isinstance(other, Grain):
return self.representation == other.representation
return NotImplemented
def __hash__(self) -> int:
return hash(self.representation)
class Grains:
"""Pre-defined common grains and factory for custom ones."""
SECOND = Grain("Second", "PT1S")
MINUTE = Grain("Minute", "PT1M")
HOUR = Grain("Hour", "PT1H")
DAY = Grain("Day", "P1D")
WEEK = Grain("Week", "P1W")
MONTH = Grain("Month", "P1M")
QUARTER = Grain("Quarter", "P3M")
YEAR = Grain("Year", "P1Y")
_REGISTRY: dict[str, Grain] = {
"PT1S": SECOND,
"PT1M": MINUTE,
"PT1H": HOUR,
"P1D": DAY,
"P1W": WEEK,
"P1M": MONTH,
"P3M": QUARTER,
"P1Y": YEAR,
}
@classmethod
def get(cls, representation: str, name: str | None = None) -> Grain:
"""Return a pre-defined grain or create a custom one."""
if grain := cls._REGISTRY.get(representation):
return grain
return Grain(name or representation, representation)
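# Illustrative usage (not part of the original module): known representations
# resolve to the shared singletons; anything else builds a custom Grain whose
# ISO 8601 duration is validated by __post_init__.
#
#     assert Grains.get("P1D") is Grains.DAY
#     fortnight = Grains.get("P2W", name="Fortnight")
#     assert fortnight == Grain("Fortnight", "P2W")
#     Grains.get("bogus")  # raises isodate.ISO8601Error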
@dataclass(frozen=True)
class Dimension:
id: str
name: str
type: pa.DataType
definition: str | None = None
description: str | None = None
grain: Grain | None = None
@dataclass(frozen=True)
class Metric:
id: str
name: str
type: pa.DataType
definition: str
description: str | None = None
@dataclass(frozen=True)
class AdhocExpression:
id: str
definition: str
class Operator(str, enum.Enum):
EQUALS = "="
NOT_EQUALS = "!="
GREATER_THAN = ">"
LESS_THAN = "<"
GREATER_THAN_OR_EQUAL = ">="
LESS_THAN_OR_EQUAL = "<="
IN = "IN"
NOT_IN = "NOT IN"
LIKE = "LIKE"
NOT_LIKE = "NOT LIKE"
IS_NULL = "IS NULL"
IS_NOT_NULL = "IS NOT NULL"
ADHOC = "ADHOC"
FilterValues = str | int | float | bool | datetime | date | time | timedelta | None
class PredicateType(enum.Enum):
WHERE = "WHERE"
HAVING = "HAVING"
@dataclass(frozen=True, order=True)
class Filter:
type: PredicateType
column: Dimension | Metric | None
operator: Operator
value: FilterValues | frozenset[FilterValues]
class OrderDirection(enum.Enum):
ASC = "ASC"
DESC = "DESC"
OrderTuple = tuple[Metric | Dimension | AdhocExpression, OrderDirection]
@dataclass(frozen=True)
class GroupLimit:
"""
Limit query to top/bottom N combinations of specified dimensions.
The `filters` parameter allows specifying separate filter constraints for the
group limit subquery. This is useful when you want to determine the top N groups
using different criteria (e.g., a different time range) than the main query.
For example, you might want to find the top 10 products by sales over the last
30 days, but then show daily sales for those products over the last 7 days.
"""
dimensions: list[Dimension]
top: int
metric: Metric | None
direction: OrderDirection = OrderDirection.DESC
group_others: bool = False
filters: set[Filter] | None = None
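# Illustrative example (not from the diff; `product`, `sales`, `order_date`
# and the Filter instances are hypothetical placeholders): rank products by
# sales over the last 30 days, then show daily sales for only those ten over
# the last 7 days.
#
#     top_products = GroupLimit(
#         dimensions=[product],
#         top=10,
#         metric=sales,
#         direction=OrderDirection.DESC,
#         filters={last_30_days},   # ranking subquery uses its own window
#     )
#     query = SemanticQuery(
#         metrics=[sales],
#         dimensions=[order_date, product],
#         filters={last_7_days},    # main query keeps the narrower window
#         group_limit=top_products,
#     )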
@dataclass(frozen=True)
class SemanticRequest:
"""
Represents a request made to obtain semantic results.
This could be a SQL query, an HTTP request, etc.
"""
type: str
definition: str
@dataclass(frozen=True)
class SemanticResult:
"""
Represents the results of a semantic query.
This includes any requests (SQL queries, HTTP requests) that were performed to
obtain the results, which helps with troubleshooting.
"""
requests: list[SemanticRequest]
results: pa.Table
@dataclass(frozen=True)
class SemanticQuery:
"""
Represents a semantic query.
"""
metrics: list[Metric]
dimensions: list[Dimension]
filters: set[Filter] | None = None
order: list[OrderTuple] | None = None
limit: int | None = None
offset: int | None = None
group_limit: GroupLimit | None = None

View File

@@ -1,113 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from __future__ import annotations
import enum
from abc import ABC, abstractmethod
from superset_core.semantic_layers.types import (
Dimension,
Filter,
Metric,
SemanticQuery,
SemanticResult,
)
# TODO (betodealmeida): move to the extension JSON
class SemanticViewFeature(enum.Enum):
"""
Custom features supported by semantic layers.
"""
ADHOC_EXPRESSIONS_IN_ORDERBY = "ADHOC_EXPRESSIONS_IN_ORDERBY"
GROUP_LIMIT = "GROUP_LIMIT"
GROUP_OTHERS = "GROUP_OTHERS"
class SemanticView(ABC):
"""
Abstract base class for semantic views.
"""
features: frozenset[SemanticViewFeature]
# Implementations must expose a display name for the view.
# Declared here as a type annotation (not abstract) so that existing
# implementations are not required to add a formal @abstractmethod.
name: str
@abstractmethod
def uid(self) -> str:
"""
Returns a unique identifier for the semantic view.
"""
@abstractmethod
def get_dimensions(self) -> set[Dimension]:
"""
Get the dimensions defined in the semantic view.
"""
@abstractmethod
def get_metrics(self) -> set[Metric]:
"""
Get the metrics defined in the semantic view.
"""
@abstractmethod
def get_values(
self,
dimension: Dimension,
filters: set[Filter] | None = None,
) -> SemanticResult:
"""
Return distinct values for a dimension.
"""
@abstractmethod
def get_table(self, query: SemanticQuery) -> SemanticResult:
"""
Execute a semantic query and return the results.
"""
@abstractmethod
def get_row_count(self, query: SemanticQuery) -> SemanticResult:
"""
Execute a query and return the number of rows the result would have.
"""
@abstractmethod
def get_compatible_metrics(
self,
selected_metrics: set[Metric],
selected_dimensions: set[Dimension],
) -> set[Metric]:
"""
Return metrics compatible with the selected dimensions.
"""
@abstractmethod
def get_compatible_dimensions(
self,
selected_metrics: set[Metric],
selected_dimensions: set[Dimension],
) -> set[Dimension]:
"""
Return dimensions compatible with the selected metrics.
"""

View File

@@ -66,7 +66,7 @@ export type EmbedDashboardParams = {
iframeTitle?: string;
/** additional iframe sandbox attributes, e.g. (allow-top-navigation, allow-popups-to-escape-sandbox) **/
iframeSandboxExtras?: string[];
/** Additional Permissions Policy features for the iframe's `allow` attribute (e.g., ['camera', 'microphone']). `fullscreen` and `clipboard-write` are granted by default. **/
/** iframe allow attribute for Permissions Policy (e.g., ['clipboard-write', 'fullscreen']) **/
iframeAllowExtras?: string[];
/** force a specific referrerPolicy to be used in the iframe request **/
referrerPolicy?: ReferrerPolicy;
@@ -233,14 +233,9 @@ export async function embedDashboard({
iframe.src = `${supersetDomain}/embedded/${id}${urlParamsString}`;
iframe.title = iframeTitle;
iframe.style.background = 'transparent';
// Permissions Policy features the embedded dashboard relies on. Modern
// browsers gate these APIs on the iframe's `allow` attribute regardless
// of sandbox flags, so we include them by default. Host apps can extend
// the list via `iframeAllowExtras`.
const allowFeatures = Array.from(
new Set(['fullscreen', 'clipboard-write', ...iframeAllowExtras]),
);
iframe.setAttribute('allow', allowFeatures.join('; '));
if (iframeAllowExtras.length > 0) {
iframe.setAttribute('allow', iframeAllowExtras.join('; '));
}
//@ts-ignore
mountPoint.replaceChildren(iframe);
log('placed the iframe');
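// Illustrative host-app call (hypothetical values; `fetchTokenFromBackend` is
// an assumed helper). With the default-merging behavior added above, the
// iframe is rendered with allow="fullscreen; clipboard-write; camera".
//
//   await embedDashboard({
//     id: 'abc123',
//     supersetDomain: 'https://superset.example.com',
//     mountPoint: document.getElementById('dashboard')!,
//     fetchGuestToken: () => fetchTokenFromBackend(),
//     iframeAllowExtras: ['camera'],
//   });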

View File

@@ -17,7 +17,7 @@
[project]
name = "apache-superset-extensions-cli"
version = "0.1.0rc3"
version = "0.1.0rc2"
description = "Official command-line interface for building, bundling, and managing Apache Superset extensions"
readme = "README.md"
authors = [

File diff suppressed because it is too large

View File

@@ -117,14 +117,7 @@
"@luma.gl/gltf": "~9.2.5",
"@luma.gl/shadertools": "~9.2.5",
"@luma.gl/webgl": "~9.2.5",
"@fontsource/fira-code": "^5.2.7",
"@fontsource/inter": "^5.2.8",
"@great-expectations/jsonforms-antd-renderers": "^2.2.10",
"@jsonforms/core": "^3.7.0",
"@jsonforms/react": "^3.7.0",
"@jsonforms/vanilla-renderers": "^3.7.0",
"@reduxjs/toolkit": "^1.9.3",
"@rjsf/antd": "^5.24.13",
"@rjsf/core": "^5.24.13",
"@rjsf/utils": "^5.24.3",
"@rjsf/validator-ajv8": "^5.24.13",
@@ -177,7 +170,7 @@
"fs-extra": "^11.3.4",
"fuse.js": "^7.3.0",
"geolib": "^3.3.14",
"geostyler": "^18.5.1",
"geostyler": "^18.5.0",
"geostyler-data": "^1.1.0",
"geostyler-openlayers-parser": "^5.7.0",
"geostyler-style": "11.0.2",
@@ -190,24 +183,24 @@
"json-bigint": "^1.0.0",
"json-stringify-pretty-compact": "^2.0.0",
"lodash": "^4.18.1",
"mapbox-gl": "^3.23.0",
"mapbox-gl": "^3.22.0",
"markdown-to-jsx": "^9.7.16",
"match-sorter": "^8.3.0",
"memoize-one": "^5.2.1",
"mousetrap": "^1.6.5",
"mustache": "^4.2.0",
"nanoid": "^5.1.11",
"nanoid": "^5.1.9",
"ol": "^10.9.0",
"pretty-ms": "^9.3.0",
"query-string": "9.3.1",
"re-resizable": "^6.11.2",
"react": "^18.2.0",
"react": "^17.0.2",
"react-arborist": "^3.5.0",
"react-checkbox-tree": "^1.8.0",
"react-diff-viewer-continued": "^4.2.2",
"react-dnd": "^11.1.3",
"react-dnd-html5-backend": "^11.1.3",
"react-dom": "^18.2.0",
"react-dom": "^17.0.2",
"react-google-recaptcha": "^3.1.0",
"react-intersection-observer": "^10.0.3",
"react-json-tree": "^0.20.0",
@@ -218,6 +211,7 @@
"react-reverse-portal": "^2.3.0",
"react-router-dom": "^5.3.4",
"react-search-input": "^0.11.3",
"react-sortable-hoc": "^2.0.0",
"react-split": "^2.0.9",
"react-table": "^7.8.0",
"react-transition-group": "^4.4.5",
@@ -250,13 +244,14 @@
"@babel/plugin-transform-export-namespace-from": "^7.27.1",
"@babel/plugin-transform-modules-commonjs": "^7.28.6",
"@babel/plugin-transform-runtime": "^7.29.0",
"@babel/preset-env": "^7.29.3",
"@babel/preset-env": "^7.29.2",
"@babel/preset-react": "^7.28.5",
"@babel/preset-typescript": "^7.28.5",
"@babel/register": "^7.23.7",
"@babel/runtime": "^7.29.2",
"@babel/runtime-corejs3": "^7.29.2",
"@babel/types": "^7.28.6",
"@cypress/react": "^8.0.2",
"@emotion/babel-plugin": "^11.13.5",
"@emotion/jest": "^11.14.2",
"@istanbuljs/nyc-config-typescript": "^1.0.1",
@@ -276,11 +271,12 @@
"@storybook/test-runner": "^0.17.0",
"@svgr/webpack": "^8.1.0",
"@swc/core": "^1.15.32",
"@swc/plugin-emotion": "^14.9.0",
"@swc/plugin-emotion": "^14.8.0",
"@swc/plugin-transform-imports": "^12.5.0",
"@testing-library/dom": "^9.3.4",
"@testing-library/dom": "^8.20.1",
"@testing-library/jest-dom": "^6.9.1",
"@testing-library/react": "^14.0.0",
"@testing-library/react": "^12.1.5",
"@testing-library/react-hooks": "^8.0.1",
"@testing-library/user-event": "^12.8.3",
"@types/content-disposition": "^0.5.9",
"@types/dom-to-image": "^2.6.7",
@@ -290,8 +286,8 @@
"@types/json-bigint": "^1.0.4",
"@types/mousetrap": "^1.6.15",
"@types/node": "^25.6.0",
"@types/react": "^18.2.0",
"@types/react-dom": "^18.2.0",
"@types/react": "^17.0.83",
"@types/react-dom": "^17.0.26",
"@types/react-loadable": "^5.5.11",
"@types/react-redux": "^7.1.10",
"@types/react-resizable": "^3.0.8",
@@ -303,14 +299,14 @@
"@types/rison": "0.1.0",
"@types/tinycolor2": "^1.4.3",
"@types/unzipper": "^0.10.11",
"@typescript-eslint/eslint-plugin": "^8.59.2",
"@typescript-eslint/eslint-plugin": "^8.59.1",
"@typescript-eslint/parser": "^8.58.2",
"babel-jest": "^30.0.2",
"babel-loader": "^10.1.1",
"babel-plugin-dynamic-import-node": "^2.3.3",
"babel-plugin-jsx-remove-data-test-id": "^3.0.0",
"babel-plugin-lodash": "^3.3.4",
"baseline-browser-mapping": "^2.10.24",
"baseline-browser-mapping": "^2.10.21",
"cheerio": "1.2.0",
"concurrently": "^9.2.1",
"copy-webpack-plugin": "^14.0.0",
@@ -327,10 +323,10 @@
"eslint-plugin-import": "^2.32.0",
"eslint-plugin-jest-dom": "^5.5.0",
"eslint-plugin-lodash": "^7.4.0",
"eslint-plugin-no-only-tests": "^3.4.0",
"eslint-plugin-no-only-tests": "^3.3.0",
"eslint-plugin-prettier": "^5.5.5",
"eslint-plugin-react-prefer-function-component": "^5.0.0",
"eslint-plugin-react-you-might-not-need-an-effect": "^0.10.0",
"eslint-plugin-react-you-might-not-need-an-effect": "^0.9.3",
"eslint-plugin-storybook": "^0.8.0",
"eslint-plugin-testing-library": "^7.16.2",
"eslint-plugin-theme-colors": "file:eslint-rules/eslint-plugin-theme-colors",
@@ -345,7 +341,7 @@
"jest-html-reporter": "^4.4.0",
"jest-websocket-mock": "^2.5.0",
"js-yaml-loader": "^1.2.2",
"jsdom": "^29.1.1",
"jsdom": "^29.1.0",
"lerna": "^9.0.4",
"lightningcss": "^1.32.0",
"mini-css-extract-plugin": "^2.10.2",
@@ -365,19 +361,20 @@
"style-loader": "^4.0.0",
"swc-loader": "^0.2.7",
"terser-webpack-plugin": "^5.5.0",
"thread-loader": "^4.0.4",
"ts-jest": "^29.4.9",
"tscw-config": "^1.1.2",
"tsx": "^4.21.0",
"typescript": "5.4.5",
"unzipper": "^0.12.3",
"vm-browserify": "^1.1.2",
"wait-on": "^9.0.6",
"wait-on": "^9.0.5",
"webpack": "^5.106.2",
"webpack-bundle-analyzer": "^5.3.0",
"webpack-cli": "^6.0.1",
"webpack-dev-server": "^5.2.3",
"webpack-manifest-plugin": "^5.0.1",
"webpack-sources": "^3.4.1",
"webpack-sources": "^3.4.0",
"webpack-visualizer-plugin2": "^2.0.0"
},
"peerDependencies": {

View File

@@ -37,7 +37,7 @@
"cross-env": "^10.1.0",
"fs-extra": "^11.3.4",
"jest": "^30.3.0",
"yeoman-test": "^11.4.2"
"yeoman-test": "^11.3.1"
},
"engines": {
"npm": ">= 4.0.0",

View File

@@ -1,6 +1,6 @@
{
"name": "@apache-superset/core",
"version": "0.1.0-rc3",
"version": "0.1.0-rc2",
"description": "This package contains UI elements, APIs, and utility functions used by Superset.",
"sideEffects": false,
"main": "lib/index.js",
@@ -75,15 +75,16 @@
"devDependencies": {
"@babel/cli": "^7.28.6",
"@babel/core": "^7.29.0",
"@babel/preset-env": "^7.29.3",
"@babel/preset-env": "^7.29.2",
"@babel/preset-react": "^7.28.5",
"@babel/preset-typescript": "^7.28.5",
"typescript": "^5.0.0",
"@emotion/styled": "^11.14.1",
"@types/lodash": "^4.17.24",
"@testing-library/dom": "^9.3.4",
"@testing-library/dom": "^8.20.1",
"@testing-library/jest-dom": "*",
"@testing-library/react": "^14.0.0",
"@testing-library/react": "^12.1.5",
"@testing-library/react-hooks": "*",
"@testing-library/user-event": "*",
"@types/react": "*",
"@types/react-loadable": "*",
@@ -97,8 +98,8 @@
"@fontsource/ibm-plex-mono": "^5.2.7",
"@fontsource/inter": "^5.2.6",
"nanoid": "^5.0.9",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"react-loadable": "^5.5.0",
"tinycolor2": "*",
"lodash": "^4.18.1",

View File

@@ -18,7 +18,7 @@
*/
import userEvent from '@testing-library/user-event';
import { ReactElement } from 'react';
import { render, RenderOptions, RenderResult } from '@testing-library/react';
import { render, RenderOptions } from '@testing-library/react';
import '@testing-library/jest-dom';
import { themeObject } from './theme';
@@ -33,7 +33,7 @@ const Providers = ({ children }: { children: React.ReactNode }) => (
const customRender = (
ui: ReactElement,
options?: Omit<RenderOptions, 'wrapper'>,
): RenderResult => render(ui, { wrapper: Providers, ...options });
) => render(ui, { wrapper: Providers, ...options });
export {
createEvent,

View File

@@ -38,17 +38,9 @@ import {
import { normalizeThemeConfig, serializeThemeConfig } from './utils';
export class Theme {
// Forward-compat: TS 6.0 enforces strictPropertyInitialization here;
// both fields are assigned via setConfig() during construction, so we
// use a definite-assignment assertion rather than hoisting the logic
// out of setConfig().
//
// Assigned via setConfig() in the constructor; TypeScript 6.0's
// strictPropertyInitialization can't trace that call chain, so we use
// a definite-assignment assertion.
theme!: SupersetTheme;
theme: SupersetTheme;
private antdConfig!: AntdThemeConfig;
private antdConfig: AntdThemeConfig;
private constructor({ config }: { config?: AnyThemeConfig }) {
this.SupersetThemeProvider = this.SupersetThemeProvider.bind(this);
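// Minimal illustration of the assertion pattern (hypothetical class, not
// Superset code): strictPropertyInitialization cannot trace an assignment
// made through a helper called from the constructor, so `!` declares the
// field definitely assigned.
//
//   class Example {
//     value!: string;                     // assigned in init(), not inline
//     constructor(v: string) { this.init(v); }
//     private init(v: string) { this.value = v; }
//   }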

View File

@@ -17,7 +17,7 @@
* under the License.
*/
import React from 'react';
import { renderHook } from '@testing-library/react';
import { renderHook } from '@testing-library/react-hooks';
import { ThemeProvider } from '@emotion/react';
import { theme as antdTheme } from 'antd';
import {

View File

@@ -20,10 +20,3 @@
* Stub for the untyped jed module.
*/
declare module 'jed';
/**
* CSS side-effect imports from @fontsource packages. These are bundler-only
* artifacts and carry no type information at runtime; declaring them here
* silences TS2882 under TypeScript 6.0's stricter module-resolution rules.
*/
declare module '@fontsource/*';
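// Example of an import this wildcard declaration covers (path is illustrative):
//   import '@fontsource/inter/400.css';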

View File

@@ -33,16 +33,17 @@
"@ant-design/icons": "^5.6.1",
"@emotion/react": "^11.4.1",
"@superset-ui/core": "*",
"@testing-library/dom": "^9.3.4",
"@testing-library/dom": "^8.20.1",
"@testing-library/jest-dom": "*",
"@testing-library/react": "^14.0.0",
"@testing-library/react": "^12.1.5",
"@testing-library/react-hooks": "*",
"@testing-library/user-event": "*",
"ace-builds": "^1.4.14",
"brace": "^0.11.1",
"memoize-one": "^5.1.1",
"react": "^18.2.0",
"react": "^17.0.2",
"react-ace": "^10.1.0",
"react-dom": "^18.2.0"
"react-dom": "^17.0.2"
},
"publishConfig": {
"access": "public"

View File

@@ -18,7 +18,6 @@
*/
import { isMatrixifyVisible } from './matrixifyControls';
import type { ControlStateMapping } from '../types';
/**
* Helper to build a controls object matching the shape used by
@@ -26,7 +25,7 @@ import type { ControlStateMapping } from '../types';
*/
function makeControls(
overrides: Record<string, unknown> = {},
): ControlStateMapping {
): Record<string, { value: unknown }> {
const defaults: Record<string, unknown> = {
matrixify_enable: false,
matrixify_mode_rows: 'disabled',
@@ -37,7 +36,7 @@ function makeControls(
const merged = { ...defaults, ...overrides };
return Object.fromEntries(
Object.entries(merged).map(([k, v]) => [k, { value: v }]),
) as ControlStateMapping;
);
}
// ── matrixify_enable guard ──────────────────────────────────────────

View File

@@ -20,7 +20,7 @@
import { t } from '@apache-superset/core/translation';
import { validateNonEmpty } from '@superset-ui/core';
import { ControlStateMapping, SharedControlConfig } from '../types';
import { SharedControlConfig } from '../types';
import { dndAdhocMetricControl } from './dndControls';
import { defineSavedMetrics } from '../utils';
@@ -29,12 +29,9 @@ import { defineSavedMetrics } from '../utils';
* Controls for transforming charts into matrix/grid layouts
*/
// Utility function to check if matrixify controls should be visible.
// Controls both visibility callbacks and validator injection via mapStateToProps.
// The matrixify_enable guard prevents hidden validators from firing on
// pre-revamp charts with stale matrixify_mode defaults (fix for #38519).
// Utility function to check if matrixify controls should be visible
const isMatrixifyVisible = (
controls: ControlStateMapping | undefined,
controls: any,
axis: 'rows' | 'columns',
mode?: 'metrics' | 'dimensions',
selectionMode?: 'members' | 'topn' | 'all',

View File

@@ -1,238 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
/**
* Tests for the matrixify_enable guard in isMatrixifyVisible() and
* validator injection via mapStateToProps on real matrixify control definitions.
*
* These are TDD tests for the fix to apache/superset#38519 regression:
* isMatrixifyVisible() must check matrixify_enable before evaluating mode,
* otherwise pre-revamp charts with stale matrixify_mode defaults trigger
* hidden validators that block save.
*/
import {
matrixifyControls,
isMatrixifyVisible,
} from '../../src/shared-controls/matrixifyControls';
import type { ControlPanelState, ControlStateMapping } from '../../src/types';
// Helper: build a minimal controls object for ControlPanelState
const buildControls = (
overrides: Record<string, any> = {},
): ControlStateMapping => {
const controls: Record<string, { value: any }> = {};
Object.entries(overrides).forEach(([key, value]) => {
controls[key] = { value };
});
return controls as ControlStateMapping;
};
// Helper: build a minimal ControlPanelState for mapStateToProps.
// Only provides fields that isMatrixifyVisible and mapStateToProps actually read.
const buildState = (
controlValues: Record<string, any> = {},
formData: Record<string, any> = {},
) =>
({
controls: buildControls(controlValues),
datasource: { columns: [], type: 'table' },
form_data: formData,
common: {},
metadata: {},
slice: { slice_id: 0 },
}) as unknown as ControlPanelState;
// ============================================================
// Validator injection tests via real mapStateToProps (rows)
// ============================================================
// --- matrixify_dimension_rows ---
test('matrixify_dimension_rows: validators empty when matrixify_enable is falsy', () => {
const control = matrixifyControls.matrixify_dimension_rows;
const state = buildState(
{
matrixify_enable: undefined,
matrixify_mode_rows: 'dimensions',
matrixify_dimension_selection_mode_rows: 'members',
},
{ matrixify_mode_rows: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators).toEqual([]);
});
test('matrixify_dimension_rows: validators present when matrixify_enable is true', () => {
const control = matrixifyControls.matrixify_dimension_rows;
const state = buildState(
{
matrixify_enable: true,
matrixify_mode_rows: 'dimensions',
matrixify_dimension_selection_mode_rows: 'members',
},
{ matrixify_mode_rows: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators.length).toBeGreaterThan(0);
});
// --- matrixify_topn_value_rows ---
test('matrixify_topn_value_rows: validators empty when matrixify_enable is falsy', () => {
const control = matrixifyControls.matrixify_topn_value_rows;
const state = buildState(
{
matrixify_enable: undefined,
matrixify_mode_rows: 'dimensions',
matrixify_dimension_selection_mode_rows: 'topn',
},
{ matrixify_mode_rows: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators).toEqual([]);
});
test('matrixify_topn_value_rows: validators present when matrixify_enable is true', () => {
const control = matrixifyControls.matrixify_topn_value_rows;
const state = buildState(
{
matrixify_enable: true,
matrixify_mode_rows: 'dimensions',
matrixify_dimension_selection_mode_rows: 'topn',
},
{ matrixify_mode_rows: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators.length).toBeGreaterThan(0);
});
// --- matrixify_topn_metric_rows ---
test('matrixify_topn_metric_rows: validators empty when matrixify_enable is falsy', () => {
const control = matrixifyControls.matrixify_topn_metric_rows;
const state = buildState(
{
matrixify_enable: undefined,
matrixify_mode_rows: 'dimensions',
matrixify_dimension_selection_mode_rows: 'topn',
},
{ matrixify_mode_rows: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators).toEqual([]);
});
test('matrixify_topn_metric_rows: validators present when matrixify_enable is true', () => {
const control = matrixifyControls.matrixify_topn_metric_rows;
const state = buildState(
{
matrixify_enable: true,
matrixify_mode_rows: 'dimensions',
matrixify_dimension_selection_mode_rows: 'topn',
},
{ matrixify_mode_rows: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators.length).toBeGreaterThan(0);
});
// ============================================================
// Validator injection tests via real mapStateToProps (columns)
// ============================================================
test('matrixify_dimension_columns: validators empty when matrixify_enable is falsy', () => {
const control = matrixifyControls.matrixify_dimension_columns;
const state = buildState(
{
matrixify_enable: undefined,
matrixify_mode_columns: 'dimensions',
matrixify_dimension_selection_mode_columns: 'members',
},
{ matrixify_mode_columns: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators).toEqual([]);
});
test('matrixify_dimension_columns: validators present when matrixify_enable is true', () => {
const control = matrixifyControls.matrixify_dimension_columns;
const state = buildState(
{
matrixify_enable: true,
matrixify_mode_columns: 'dimensions',
matrixify_dimension_selection_mode_columns: 'members',
},
{ matrixify_mode_columns: 'dimensions' },
);
const result = control.mapStateToProps!(state, {} as any);
expect(result.validators.length).toBeGreaterThan(0);
});
// ============================================================
// Direct isMatrixifyVisible guard tests
// ============================================================
test.each([
['undefined', undefined],
['null', null],
['false', false],
['0', 0],
])(
'isMatrixifyVisible returns false when matrixify_enable is %s',
(_, value) => {
const controls = buildControls({
matrixify_enable: value,
matrixify_mode_rows: 'dimensions',
});
expect(isMatrixifyVisible(controls, 'rows')).toBe(false);
},
);
test('isMatrixifyVisible returns true when matrixify_enable is true and mode matches', () => {
const controls = buildControls({
matrixify_enable: true,
matrixify_mode_rows: 'dimensions',
});
expect(isMatrixifyVisible(controls, 'rows', 'dimensions')).toBe(true);
});
test('isMatrixifyVisible returns false when matrixify_enable is true but mode is disabled', () => {
const controls = buildControls({
matrixify_enable: true,
matrixify_mode_rows: 'disabled',
});
expect(isMatrixifyVisible(controls, 'rows')).toBe(false);
});
test('isMatrixifyVisible returns true when matrixify_enable is true and any non-disabled mode (no mode filter)', () => {
const controls = buildControls({
matrixify_enable: true,
matrixify_mode_columns: 'metrics',
});
expect(isMatrixifyVisible(controls, 'columns')).toBe(true);
});

View File

@@ -91,9 +91,10 @@
"@emotion/cache": "^11.4.0",
"@emotion/react": "^11.4.1",
"@emotion/styled": "^11.14.1",
"@testing-library/dom": "^9.3.4",
"@testing-library/dom": "^8.20.1",
"@testing-library/jest-dom": "*",
"@testing-library/react": "^14.0.0",
"@testing-library/react": "^12.1.5",
"@testing-library/react-hooks": "*",
"@testing-library/user-event": "*",
"@types/react": "*",
"@types/react-loadable": "*",
@@ -101,8 +102,8 @@
"@types/tinycolor2": "*",
"antd": "^5.26.0",
"nanoid": "^5.0.9",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react": "^17.0.2",
"react-dom": "^17.0.2",
"react-loadable": "^5.5.0",
"tinycolor2": "*"
},

View File

@@ -17,7 +17,8 @@
* under the License.
*/
import { ReactNode, useCallback, useEffect, useMemo, useState } from 'react';
/* eslint react/sort-comp: 'off' */
import { PureComponent, ReactNode } from 'react';
import {
SupersetClientInterface,
RequestConfig,
@@ -66,112 +67,103 @@ export type ChartDataProviderState = {
error?: ProvidedProps['error'];
};
function ChartDataProvider({
children,
client,
formData,
sliceId,
loadDatasource,
onError,
onLoaded,
formDataRequestOptions,
datasourceRequestOptions,
queryRequestOptions,
}: ChartDataProviderProps): JSX.Element | null {
const [state, setState] = useState<ChartDataProviderState>({
status: 'uninitialized',
});
class ChartDataProvider extends PureComponent<
ChartDataProviderProps,
ChartDataProviderState
> {
readonly chartClient: ChartClient;
const chartClient = useMemo(() => new ChartClient({ client }), [client]);
constructor(props: ChartDataProviderProps) {
super(props);
this.state = { status: 'uninitialized' };
this.chartClient = new ChartClient({ client: props.client });
}
const extractSliceIdAndFormData = useCallback(
(): SliceIdAndOrFormData =>
formData ? { formData } : { sliceId: sliceId as number },
[formData, sliceId],
);
componentDidMount() {
this.handleFetchData();
}
const handleReceiveData = useCallback(
(payload?: Payload) => {
if (onLoaded) onLoaded(payload);
setState({ payload, status: 'loaded' });
},
[onLoaded],
);
const handleError = useCallback(
(error: ProvidedProps['error']) => {
if (onError) onError(error);
setState({ error, status: 'error' });
},
[onError],
);
const handleFetchData = useCallback(() => {
setState({ status: 'loading' });
try {
chartClient
.loadFormData(extractSliceIdAndFormData(), formDataRequestOptions)
.then(loadedFormData =>
Promise.all([
loadDatasource
? chartClient.loadDatasource(
loadedFormData.datasource,
datasourceRequestOptions,
)
: Promise.resolve(undefined),
chartClient.loadQueryData(loadedFormData, queryRequestOptions),
]).then(
([datasource, queriesData]) =>
({
datasource,
formData: loadedFormData,
queriesData,
}) as Payload,
),
)
.then(handleReceiveData)
.catch(handleError);
} catch (error) {
handleError(error as Error);
componentDidUpdate(prevProps: ChartDataProviderProps) {
const { formData, sliceId } = this.props;
if (formData !== prevProps.formData || sliceId !== prevProps.sliceId) {
this.handleFetchData();
}
}, [
chartClient,
extractSliceIdAndFormData,
formDataRequestOptions,
loadDatasource,
datasourceRequestOptions,
queryRequestOptions,
handleReceiveData,
handleError,
]);
}
// Fetch on mount and only refetch when formData or sliceId changes.
// This preserves the original class component's componentDidUpdate
// semantics (which compared only formData and sliceId). Other
// fetch-related inputs referenced by handleFetchData (callbacks and
// request option props) are intentionally excluded from the dependency
// array, so the exhaustive-deps rule is suppressed here.
useEffect(() => {
handleFetchData();
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [formData, sliceId]);
private extractSliceIdAndFormData() {
const { formData, sliceId } = this.props;
return formData ? { formData } : { sliceId: sliceId as number };
}
const { status, payload, error } = state;
private handleFetchData = () => {
const {
loadDatasource,
formDataRequestOptions,
datasourceRequestOptions,
queryRequestOptions,
} = this.props;
// Wrap the children result in a Fragment so the component's return type
// stays `JSX.Element | null` (which TypeScript requires for JSX components)
// while still letting consumers return any ReactNode (strings, fragments,
// arrays, null, etc.) from the render prop.
switch (status) {
case 'loading':
return <>{children({ loading: true })}</>;
case 'loaded':
return <>{children({ payload })}</>;
case 'error':
return <>{children({ error })}</>;
case 'uninitialized':
default:
return null;
this.setState({ status: 'loading' }, () => {
try {
this.chartClient
.loadFormData(
this.extractSliceIdAndFormData(),
formDataRequestOptions,
)
.then(formData =>
Promise.all([
loadDatasource
? this.chartClient.loadDatasource(
formData.datasource,
datasourceRequestOptions,
)
: Promise.resolve(undefined),
this.chartClient.loadQueryData(formData, queryRequestOptions),
]).then(
([datasource, queriesData]) =>
// eslint-disable-next-line @typescript-eslint/consistent-type-assertions
({
datasource,
formData,
queriesData,
}) as Payload,
),
)
.then(this.handleReceiveData)
.catch(this.handleError);
} catch (error) {
this.handleError(error as Error);
}
});
};
private handleReceiveData = (payload?: Payload) => {
const { onLoaded } = this.props;
if (onLoaded) onLoaded(payload);
this.setState({ payload, status: 'loaded' });
};
private handleError = (error: ProvidedProps['error']) => {
const { onError } = this.props;
if (onError) onError(error);
this.setState({ error, status: 'error' });
};
render() {
const { children } = this.props;
const { status, payload, error } = this.state;
switch (status) {
case 'loading':
return children({ loading: true });
case 'loaded':
return children({ payload });
case 'error':
return children({ error });
case 'uninitialized':
default:
return null;
}
}
}

View File

@@ -21,11 +21,8 @@ import {
ReactNode,
RefObject,
ComponentType,
PureComponent,
Fragment,
memo,
useCallback,
useMemo,
useRef,
} from 'react';
import {
@@ -35,19 +32,23 @@ import {
} from 'react-error-boundary';
import { ParentSize } from '@visx/responsive';
import { createSelector } from 'reselect';
import { useTheme } from '@emotion/react';
import { withTheme } from '@emotion/react';
import { parseLength, Dimension } from '../../dimension';
import getChartMetadataRegistry from '../registries/ChartMetadataRegistrySingleton';
import SuperChartCore, {
Props as SuperChartCoreProps,
SuperChartCoreRef,
} from './SuperChartCore';
import SuperChartCore, { Props as SuperChartCoreProps } from './SuperChartCore';
import DefaultFallbackComponent from './FallbackComponent';
import ChartProps, { ChartPropsConfig } from '../models/ChartProps';
import NoResultsComponent from './NoResultsComponent';
import { isMatrixifyEnabled } from '../types/matrixify';
import MatrixifyGridRenderer from './Matrixify/MatrixifyGridRenderer';
import { supersetTheme, SupersetTheme } from '@apache-superset/core/theme';
const defaultProps = {
FallbackComponent: DefaultFallbackComponent,
height: 400 as string | number,
width: '100%' as string | number,
enableNoResults: true,
};
export type FallbackPropsWithDimension = FallbackProps & Partial<Dimension>;
export type WrapperProps = Dimension & {
@@ -55,9 +56,7 @@ export type WrapperProps = Dimension & {
};
export type Props = Omit<SuperChartCoreProps, 'chartProps'> &
Omit<ChartPropsConfig, 'width' | 'height' | 'theme'> & {
/** Theme object (optional, falls back to ThemeProvider context) */
theme?: SupersetTheme;
Omit<ChartPropsConfig, 'width' | 'height'> & {
/**
* Set this to true to disable error boundary built-in in SuperChart
* and let the error propagate to upper level
@@ -103,269 +102,215 @@ export type Props = Omit<SuperChartCoreProps, 'chartProps'> &
inContextMenu?: boolean;
};
function SuperChart({
id,
className,
chartType,
preTransformProps,
overrideTransformProps,
postTransformProps,
onRenderSuccess,
onRenderFailure,
disableErrorBoundary,
FallbackComponent = DefaultFallbackComponent,
onErrorBoundary,
Wrapper,
queriesData,
enableNoResults = true,
noResults,
theme: themeProp,
debounceTime,
height = 400,
width = '100%',
...rest
}: Props): JSX.Element {
type PropsWithDefault = Props & Readonly<typeof defaultProps>;
class SuperChart extends PureComponent<Props, {}> {
/**
* SuperChart's core ref
* SuperChart's core
*/
const coreRef = useRef<SuperChartCoreRef | null>(null);
core?: SuperChartCore | null;
// Use theme from prop if provided, otherwise from context.
// When no ThemeProvider is present, useTheme() returns an empty object,
// so we fall back to the default supersetTheme to avoid passing an invalid theme downstream.
const themeFromContext = useTheme() as Partial<SupersetTheme>;
const theme =
themeProp ??
(Object.keys(themeFromContext).length > 0
? (themeFromContext as SupersetTheme)
: supersetTheme);
private createChartProps = ChartProps.createSelector();
const createChartProps = useMemo(() => ChartProps.createSelector(), []);
const parseDimension = useMemo(
() =>
createSelector(
[
({ width: w }: { width: string | number; height: string | number }) =>
w,
({
height: h,
}: {
width: string | number;
height: string | number;
}) => h,
],
(w, h) => {
// Parse them in case they are % or 'auto'
const widthInfo = parseLength(w);
const heightInfo = parseLength(h);
const boxHeight = heightInfo.isDynamic
? `${heightInfo.multiplier * 100}%`
: heightInfo.value;
const boxWidth = widthInfo.isDynamic
? `${widthInfo.multiplier * 100}%`
: widthInfo.value;
const style = {
height: boxHeight,
width: boxWidth,
};
// bounding box will ensure that when one dimension is not dynamic
// e.g. height = 300
// the auto size will be bound to that value instead of being 100% by default
// e.g. height: 300 instead of height: '100%'
const BoundingBox =
widthInfo.isDynamic &&
heightInfo.isDynamic &&
widthInfo.multiplier === 1 &&
heightInfo.multiplier === 1
? Fragment
: ({ children }: { children: ReactNode }) => (
<div style={style}>{children}</div>
);
return { BoundingBox, heightInfo, widthInfo };
},
),
[],
);
const setRef = useCallback((core: SuperChartCoreRef | null) => {
coreRef.current = core;
}, []);
const getQueryCount = useCallback(
() => getChartMetadataRegistry().get(chartType)?.queryObjectCount ?? 1,
[chartType],
);
const renderChart = useCallback(
(chartWidth: number, chartHeight: number) => {
const chartProps = createChartProps({
...rest,
queriesData,
height: chartHeight,
width: chartWidth,
theme,
});
// Check if Matrixify is enabled - use rawFormData (snake_case)
const matrixifyEnabled = isMatrixifyEnabled(chartProps.rawFormData);
if (matrixifyEnabled) {
// When matrixify is enabled, queriesData is expected to be empty
// since each cell fetches its own data via StatefulChart
const matrixifyChart = (
<MatrixifyGridRenderer
formData={chartProps.rawFormData}
datasource={chartProps.datasource}
width={chartWidth}
height={chartHeight}
hooks={chartProps.hooks}
/>
);
// Apply wrapper if provided
const wrappedChart = Wrapper ? (
<Wrapper width={chartWidth} height={chartHeight}>
{matrixifyChart}
</Wrapper>
) : (
matrixifyChart
);
// Include error boundary unless disabled
return disableErrorBoundary === true ? (
wrappedChart
) : (
<ErrorBoundary
FallbackComponent={props => (
<FallbackComponent
width={chartWidth}
height={chartHeight}
{...props}
/>
)}
onError={onErrorBoundary}
>
{wrappedChart}
</ErrorBoundary>
);
}
// Check for no results only for non-matrixified charts
const noResultQueries =
enableNoResults &&
(!queriesData ||
queriesData
.slice(0, getQueryCount())
.every(
({ data }) => !data || (Array.isArray(data) && data.length === 0),
));
let chart: JSX.Element;
if (noResultQueries) {
chart = noResults ? (
<>{noResults}</>
) : (
<NoResultsComponent
id={id}
className={className}
height={chartHeight}
width={chartWidth}
/>
);
} else {
const chartWithoutWrapper = (
<SuperChartCore
ref={setRef}
id={id}
className={className}
chartType={chartType}
chartProps={chartProps}
preTransformProps={preTransformProps}
overrideTransformProps={overrideTransformProps}
postTransformProps={postTransformProps}
onRenderSuccess={onRenderSuccess}
onRenderFailure={onRenderFailure}
/>
);
chart = Wrapper ? (
<Wrapper width={chartWidth} height={chartHeight}>
{chartWithoutWrapper}
</Wrapper>
) : (
chartWithoutWrapper
);
}
// Include the error boundary by default unless it is specifically disabled.
return disableErrorBoundary === true ? (
chart
) : (
<ErrorBoundary
FallbackComponent={props => (
<FallbackComponent
width={chartWidth}
height={chartHeight}
{...props}
/>
)}
onError={onErrorBoundary}
>
{chart}
</ErrorBoundary>
);
},
private parseDimension = createSelector(
[
createChartProps,
rest,
queriesData,
theme,
Wrapper,
disableErrorBoundary,
FallbackComponent,
onErrorBoundary,
enableNoResults,
getQueryCount,
noResults,
({ width }: { width: string | number; height: string | number }) => width,
({ height }) => height,
],
(width, height) => {
// Parse them in case they are % or 'auto'
const widthInfo = parseLength(width);
const heightInfo = parseLength(height);
const boxHeight = heightInfo.isDynamic
? `${heightInfo.multiplier * 100}%`
: heightInfo.value;
const boxWidth = widthInfo.isDynamic
? `${widthInfo.multiplier * 100}%`
: widthInfo.value;
const style = {
height: boxHeight,
width: boxWidth,
};
// bounding box will ensure that when one dimension is not dynamic
// e.g. height = 300
// the auto size will be bound to that value instead of being 100% by default
// e.g. height: 300 instead of height: '100%'
const BoundingBox =
widthInfo.isDynamic &&
heightInfo.isDynamic &&
widthInfo.multiplier === 1 &&
heightInfo.multiplier === 1
? Fragment
: ({ children }: { children: ReactNode }) => (
<div style={style}>{children}</div>
);
return { BoundingBox, heightInfo, widthInfo };
},
);
static defaultProps = defaultProps;
private setRef = (core: SuperChartCore | null) => {
this.core = core;
};
private getQueryCount = () =>
getChartMetadataRegistry().get(this.props.chartType)?.queryObjectCount ?? 1;
renderChart(width: number, height: number) {
const {
id,
className,
setRef,
chartType,
preTransformProps,
overrideTransformProps,
postTransformProps,
onRenderSuccess,
onRenderFailure,
],
);
disableErrorBoundary,
FallbackComponent,
onErrorBoundary,
Wrapper,
queriesData,
enableNoResults,
noResults,
theme,
...rest
} = this.props as PropsWithDefault;
const { heightInfo, widthInfo, BoundingBox } = parseDimension({
width,
height,
});
const chartProps = this.createChartProps({
...rest,
queriesData,
height,
width,
theme,
});
// If any of the dimension is dynamic, get parent's dimension
if (widthInfo.isDynamic || heightInfo.isDynamic) {
return (
<BoundingBox>
<ParentSize debounceTime={debounceTime}>
{({ width: parentWidth, height: parentHeight }) =>
renderChart(
widthInfo.isDynamic ? Math.floor(parentWidth) : widthInfo.value,
heightInfo.isDynamic
? Math.floor(parentHeight)
: heightInfo.value,
)
}
</ParentSize>
</BoundingBox>
// Check if Matrixify is enabled - use rawFormData (snake_case)
const matrixifyEnabled = isMatrixifyEnabled(chartProps.rawFormData);
if (matrixifyEnabled) {
// When matrixify is enabled, queriesData is expected to be empty
// since each cell fetches its own data via StatefulChart
const matrixifyChart = (
<MatrixifyGridRenderer
formData={chartProps.rawFormData}
datasource={chartProps.datasource}
width={width}
height={height}
hooks={chartProps.hooks}
/>
);
// Apply wrapper if provided
const wrappedChart = Wrapper ? (
<Wrapper width={width} height={height}>
{matrixifyChart}
</Wrapper>
) : (
matrixifyChart
);
// Include error boundary unless disabled
return disableErrorBoundary === true ? (
wrappedChart
) : (
<ErrorBoundary
FallbackComponent={props => (
<FallbackComponent width={width} height={height} {...props} />
)}
onError={onErrorBoundary}
>
{wrappedChart}
</ErrorBoundary>
);
}
// Check for no results only for non-matrixified charts
const noResultQueries =
enableNoResults &&
(!queriesData ||
queriesData
.slice(0, this.getQueryCount())
.every(
({ data }) => !data || (Array.isArray(data) && data.length === 0),
));
let chart;
if (noResultQueries) {
chart = noResults || (
<NoResultsComponent
id={id}
className={className}
height={height}
width={width}
/>
);
} else {
const chartWithoutWrapper = (
<SuperChartCore
ref={this.setRef}
id={id}
className={className}
chartType={chartType}
chartProps={chartProps}
preTransformProps={preTransformProps}
overrideTransformProps={overrideTransformProps}
postTransformProps={postTransformProps}
onRenderSuccess={onRenderSuccess}
onRenderFailure={onRenderFailure}
/>
);
chart = Wrapper ? (
<Wrapper width={width} height={height}>
{chartWithoutWrapper}
</Wrapper>
) : (
chartWithoutWrapper
);
}
// Include the error boundary by default unless it is specifically disabled.
return disableErrorBoundary === true ? (
chart
) : (
<ErrorBoundary
FallbackComponent={props => (
<FallbackComponent width={width} height={height} {...props} />
)}
onError={onErrorBoundary}
>
{chart}
</ErrorBoundary>
);
}
return renderChart(widthInfo.value, heightInfo.value);
render() {
const { heightInfo, widthInfo, BoundingBox } = this.parseDimension(
this.props as PropsWithDefault,
);
// If any of the dimension is dynamic, get parent's dimension
if (widthInfo.isDynamic || heightInfo.isDynamic) {
const { debounceTime } = this.props;
return (
<BoundingBox>
<ParentSize debounceTime={debounceTime}>
{({ width, height }) =>
this.renderChart(
widthInfo.isDynamic ? Math.floor(width) : widthInfo.value,
heightInfo.isDynamic ? Math.floor(height) : heightInfo.value,
)
}
</ParentSize>
</BoundingBox>
);
}
return this.renderChart(widthInfo.value, heightInfo.value);
}
}
// Wrap in memo to preserve the shallow-prop-comparison behavior
// of the original PureComponent implementation.
export default memo(SuperChart);
export default withTheme(SuperChart);

View File

@@ -17,13 +17,8 @@
* under the License.
*/
import {
forwardRef,
useCallback,
useImperativeHandle,
useMemo,
useRef,
} from 'react';
/* eslint-disable react/jsx-sort-default-props */
import { PureComponent } from 'react';
import { t } from '@apache-superset/core/translation';
import { createSelector } from 'reselect';
import getChartComponentRegistry from '../registries/ChartComponentRegistrySingleton';
@@ -44,6 +39,16 @@ function IDENTITY<T>(x: T) {
const EMPTY = () => null;
const defaultProps = {
id: '',
className: '',
preTransformProps: IDENTITY,
overrideTransformProps: undefined,
postTransformProps: IDENTITY,
onRenderSuccess() {},
onRenderFailure() {},
};
interface LoadingProps {
error: { toString(): string };
}
@@ -73,231 +78,174 @@ export type Props = {
onRenderFailure?: HandlerFunction;
};
export interface SuperChartCoreRef {
container: HTMLElement | null;
}
export default class SuperChartCore extends PureComponent<Props, {}> {
/**
* The HTML element that wraps all chart content
*/
container?: HTMLElement | null;
const SuperChartCore = forwardRef<SuperChartCoreRef, Props>(
function SuperChartCore(
{
id = '',
className = '',
chartProps = BLANK_CHART_PROPS,
chartType,
preTransformProps = IDENTITY,
overrideTransformProps,
postTransformProps = IDENTITY,
onRenderSuccess = () => {},
onRenderFailure = () => {},
},
ref,
) {
const containerRef = useRef<HTMLElement | null>(null);
// Expose container via ref
useImperativeHandle(
ref,
() => ({
get container() {
return containerRef.current;
},
}),
[],
);
/**
* memoized function so it will not recompute and return previous value
* unless one of
* - preTransformProps
* - chartProps
* is changed.
*/
const preSelector = useMemo(
() =>
createSelector(
[
(input: {
chartProps: ChartProps;
preTransformProps?: PreTransformProps;
}) => input.chartProps,
input => input.preTransformProps,
],
(inputChartProps, pre = IDENTITY) => pre(inputChartProps),
),
[],
);
/**
* memoized function so it will not recompute and return previous value
* unless one of the input arguments has changed.
*/
const transformSelector = useMemo(
() =>
createSelector(
[
(input: {
chartProps: ChartProps;
transformProps?: TransformProps;
}) => input.chartProps,
input => input.transformProps,
],
(preprocessedChartProps, transform = IDENTITY) =>
transform(preprocessedChartProps),
),
[],
);
/**
* memoized function so it will not recompute and return previous value
* unless one of the input arguments has changed.
*/
const postSelector = useMemo(
() =>
createSelector(
[
(input: {
chartProps: ChartProps;
postTransformProps?: PostTransformProps;
}) => input.chartProps,
input => input.postTransformProps,
],
(transformedChartProps, post = IDENTITY) =>
post(transformedChartProps),
),
[],
);
/**
* Using each memoized function to retrieve the computed chartProps
*/
const processChartProps = useCallback(
({
chartProps: inputChartProps,
preTransformProps: pre,
transformProps,
postTransformProps: post,
}: {
/**
* memoized function so it will not recompute and return previous value
* unless one of
* - preTransformProps
* - chartProps
* is changed.
*/
preSelector = createSelector(
[
(input: {
chartProps: ChartProps;
preTransformProps?: PreTransformProps;
transformProps?: TransformProps;
}) => input.chartProps,
input => input.preTransformProps,
],
(chartProps, pre = IDENTITY) => pre(chartProps),
);
/**
* memoized function so it will not recompute and return previous value
* unless one of the input arguments has changed.
*/
transformSelector = createSelector(
[
(input: { chartProps: ChartProps; transformProps?: TransformProps }) =>
input.chartProps,
input => input.transformProps,
],
(preprocessedChartProps, transform = IDENTITY) =>
transform(preprocessedChartProps),
);
/**
* memoized function so it will not recompute and return previous value
* unless one of the input arguments has changed.
*/
postSelector = createSelector(
[
(input: {
chartProps: ChartProps;
postTransformProps?: PostTransformProps;
}) =>
postSelector({
chartProps: transformSelector({
chartProps: preSelector({
chartProps: inputChartProps,
preTransformProps: pre,
}),
transformProps,
}),
postTransformProps: post,
}),
[preSelector, transformSelector, postSelector],
);
}) => input.chartProps,
input => input.postTransformProps,
],
(transformedChartProps, post = IDENTITY) => post(transformedChartProps),
);
const renderLoading = useCallback(
(loadingProps: LoadingProps, loadingChartType: string) => {
const { error } = loadingProps;
/**
* Using each memoized function to retrieve the computed chartProps
*/
processChartProps = ({
chartProps,
preTransformProps,
transformProps,
postTransformProps,
}: {
chartProps: ChartProps;
preTransformProps?: PreTransformProps;
transformProps?: TransformProps;
postTransformProps?: PostTransformProps;
}) =>
this.postSelector({
chartProps: this.transformSelector({
chartProps: this.preSelector({ chartProps, preTransformProps }),
transformProps,
}),
postTransformProps,
});
if (error) {
return (
<div className="alert alert-warning" role="alert">
<strong>{t('ERROR')}</strong>&nbsp;
<code>chartType=&quot;{loadingChartType}&quot;</code> &mdash;
{error.toString()}
</div>
);
}
return null;
},
[],
);
const renderChart = useCallback(
(loaded: LoadedModules, props: RenderProps) => {
const { Chart, transformProps } = loaded;
const {
chartProps: renderChartProps,
preTransformProps: pre,
postTransformProps: post,
} = props;
return (
<Chart
{...processChartProps({
chartProps: renderChartProps,
preTransformProps: pre,
transformProps,
postTransformProps: post,
})}
/>
);
},
[processChartProps],
);
/**
* memoized function so it will not recompute
* and return previous value
* unless one of
* - chartType
* - overrideTransformProps
* is changed.
*/
const createLoadableRendererSelector = useMemo(
() =>
createSelector(
[
(input: {
chartType: string;
overrideTransformProps?: TransformProps;
}) => input.chartType,
input => input.overrideTransformProps,
],
(selectorChartType, selectorOverrideTransformProps) => {
if (selectorChartType) {
const Renderer = createLoadableRenderer({
loader: {
Chart: () =>
getChartComponentRegistry().getAsPromise(selectorChartType),
transformProps: selectorOverrideTransformProps
? () => Promise.resolve(selectorOverrideTransformProps)
: () =>
getChartTransformPropsRegistry().getAsPromise(
selectorChartType,
),
},
loading: (loadingProps: LoadingProps) =>
renderLoading(loadingProps, selectorChartType),
render: renderChart,
});
// Trigger preloading.
Renderer.preload();
return Renderer;
}
return EMPTY;
/**
* memoized function so it will not recompute
* and return previous value
* unless one of
* - chartType
* - overrideTransformProps
* is changed.
*/
private createLoadableRenderer = createSelector(
[
(input: { chartType: string; overrideTransformProps?: TransformProps }) =>
input.chartType,
input => input.overrideTransformProps,
],
(chartType, overrideTransformProps) => {
if (chartType) {
const Renderer = createLoadableRenderer({
loader: {
Chart: () => getChartComponentRegistry().getAsPromise(chartType),
transformProps: overrideTransformProps
? () => Promise.resolve(overrideTransformProps)
: () => getChartTransformPropsRegistry().getAsPromise(chartType),
},
),
[renderLoading, renderChart],
);
loading: (loadingProps: LoadingProps) =>
this.renderLoading(loadingProps, chartType),
render: this.renderChart,
});
const setRef = useCallback((container: HTMLElement | null) => {
containerRef.current = container;
}, []);
// Trigger preloading.
Renderer.preload();
return Renderer;
}
return EMPTY;
},
);
static defaultProps = defaultProps;
private renderChart = (loaded: LoadedModules, props: RenderProps) => {
const { Chart, transformProps } = loaded;
const { chartProps, preTransformProps, postTransformProps } = props;
return (
<Chart
{...this.processChartProps({
chartProps,
preTransformProps,
transformProps,
postTransformProps,
})}
/>
);
};
private renderLoading = (loadingProps: LoadingProps, chartType: string) => {
const { error } = loadingProps;
if (error) {
return (
<div className="alert alert-warning" role="alert">
<strong>{t('ERROR')}</strong>&nbsp;
<code>chartType=&quot;{chartType}&quot;</code> &mdash;
{error.toString()}
</div>
);
}
return null;
};
private setRef = (container: HTMLElement | null) => {
this.container = container;
};
render() {
const {
id,
className,
preTransformProps,
postTransformProps,
chartProps = BLANK_CHART_PROPS,
onRenderSuccess,
onRenderFailure,
} = this.props;
// Create LoadableRenderer and start preloading
// the lazy-loaded Chart components
const Renderer = createLoadableRendererSelector({
chartType,
overrideTransformProps,
});
const Renderer = this.createLoadableRenderer(this.props);
// Do not render if chartProps is set to null.
// but the pre-loading has been started in createLoadableRendererSelector
// but the pre-loading has been started in this.createLoadableRenderer
// to prepare for rendering once chartProps becomes available.
if (chartProps === null) {
return null;
@@ -315,7 +263,7 @@ const SuperChartCore = forwardRef<SuperChartCoreRef, Props>(
}
return (
<div {...containerProps} ref={setRef}>
<div {...containerProps} ref={this.setRef}>
<Renderer
preTransformProps={preTransformProps}
postTransformProps={postTransformProps}
@@ -325,7 +273,5 @@ const SuperChartCore = forwardRef<SuperChartCoreRef, Props>(
/>
</div>
);
},
);
export default SuperChartCore;
}
}

View File

@@ -94,20 +94,11 @@ class CategoricalColorScale extends ExtensibleFunction {
/**
* Increment the color range with analogous colors
*
* @param forceMinimumExpansion When true, expand at least once even if the
* ordinal domain is still shorter than the palette. Shared dashboard labels
* can resolve from the global map without entering the scale domain, so
* domain-based sizing alone would skip expansion while collision resolution
* still needs analogous colors.
*/
incrementColorRange(forceMinimumExpansion = false) {
const domainBasedMultiple = Math.floor(
incrementColorRange() {
const multiple = Math.floor(
this.domain().length / this.originColors.length,
);
const multiple = forceMinimumExpansion
? Math.max(domainBasedMultiple, 1)
: domainBasedMultiple;
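// Worked example with illustrative numbers: 10 origin colors and a 4-label
// domain give domainBasedMultiple = floor(4 / 10) = 0, so a plain call skips
// expansion; incrementColorRange(true) clamps the multiple to at least 1 so
// collision resolution still gets analogous colors.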
// the domain has grown larger than the original range
// increments the range with analogous colors
if (multiple > this.multiple) {
@@ -153,7 +144,6 @@ class CategoricalColorScale extends ExtensibleFunction {
if (isFeatureEnabled(FeatureFlag.UseAnalogousColors)) {
this.incrementColorRange();
}
if (
// feature flag to be deprecated (will become standard behaviour)
isFeatureEnabled(FeatureFlag.AvoidColorsCollision) &&
@@ -164,39 +154,6 @@ class CategoricalColorScale extends ExtensibleFunction {
}
}
if (
isFeatureEnabled(FeatureFlag.AvoidColorsCollision) &&
source === LabelsColorMapSource.Dashboard &&
(forcedColor || isExistingLabel)
) {
const colliding = [...this.chartLabelsColorMap.entries()].filter(
([labelKey, c]) => c === color && labelKey !== cleanedValue,
);
if (
colliding.length > 0 &&
isFeatureEnabled(FeatureFlag.UseAnalogousColors)
) {
this.incrementColorRange(true);
}
for (const [otherLabel] of colliding) {
if (
Object.prototype.hasOwnProperty.call(this.forcedColors, otherLabel)
) {
continue;
}
const newColor = this.getNextAvailableColor(otherLabel, color);
this.chartLabelsColorMap.set(otherLabel, newColor);
if (sliceId) {
this.labelsColorMapInstance.addSlice(
otherLabel,
newColor,
sliceId,
appliedColorScheme,
);
}
}
}
// keep track of values in this slice
this.chartLabelsColorMap.set(cleanedValue, color);
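
The removed forceMinimumExpansion path is interleaved with the surviving code in the hunks above; consolidated, the deleted behavior looked roughly like the sketch below. The class shape and the analogous-color helper are hypothetical stand-ins; only the multiple arithmetic follows the diff:

// Sketch: expand the analogous-color range at least once even when the
// scale's domain is still shorter than the base palette, because shared
// dashboard labels can resolve from the global map without ever
// entering the domain.
class ScaleSketch {
  multiple = 0;

  constructor(
    private originColors: string[],
    private domainSize: () => number,
    private analogous: (colors: string[], multiple: number) => string[],
    public range: string[] = [...originColors],
  ) {}

  incrementColorRange(forceMinimumExpansion = false) {
    const domainBasedMultiple = Math.floor(
      this.domainSize() / this.originColors.length,
    );
    // Collision resolution may need spare colors before the domain has
    // grown, so optionally force at least one round of expansion.
    const multiple = forceMinimumExpansion
      ? Math.max(domainBasedMultiple, 1)
      : domainBasedMultiple;
    if (multiple > this.multiple) {
      this.multiple = multiple;
      this.range = this.analogous(this.originColors, multiple);
    }
  }
}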

View File

@@ -17,15 +17,19 @@
* under the License.
*/
import type { ReactElement, ReactNode } from 'react';
import { Tooltip, type TooltipPlacement } from '@superset-ui/core/components';
import type { ReactElement } from 'react';
import {
Tooltip,
type TooltipPlacement,
type IconType,
} from '@superset-ui/core/components';
import { css, useTheme } from '@apache-superset/core/theme';
export interface ActionProps {
label: string;
tooltip?: string | ReactElement;
placement?: TooltipPlacement;
icon: ReactNode;
icon: IconType;
onClick: () => void;
}
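
The recurring ReactNode → IconType narrowing in this compare tightens icon props so arbitrary renderables no longer type-check. A sketch with hypothetical local types (the real IconType lives in @superset-ui/core/components/Icons/types and is richer):

import type { ReactElement, ReactNode } from 'react';

// Hypothetical stand-ins for the package's icon typing.
type IconProps = { iconSize?: 's' | 'm' | 'l'; iconColor?: string };
type IconElement = ReactElement<IconProps>;

// With ReactNode, any renderable value passes the type check:
const loose: { icon?: ReactNode } = { icon: 'not an icon' }; // compiles

// With a narrowed element type, a bare string is rejected:
// const broken: { icon?: IconElement } = { icon: 'not an icon' }; // error
const strict: { icon?: IconElement } = {};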

View File

@@ -16,7 +16,7 @@
* specific language governing permissions and limitations
* under the License.
*/
import { renderHook } from '@testing-library/react';
import { renderHook } from '@testing-library/react-hooks';
import { useJsonValidation } from './useJsonValidation';
describe('useJsonValidation', () => {

View File

@@ -16,7 +16,16 @@
* specific language governing permissions and limitations
* under the License.
*/
import React, { useEffect, useState, forwardRef, ComponentType } from 'react';
import {
useEffect,
useState,
RefObject,
forwardRef,
ComponentType,
ForwardRefExoticComponent,
PropsWithoutRef,
RefAttributes,
} from 'react';
import { Loading } from '../Loading';
import type { PlaceholderProps } from './types';
@@ -84,16 +93,15 @@ export function AsyncEsmComponent<
return promise;
}
type AsyncComponent = React.ForwardRefExoticComponent<
React.PropsWithoutRef<FullProps> & React.RefAttributes<unknown>
type AsyncComponent = ForwardRefExoticComponent<
PropsWithoutRef<FullProps> & RefAttributes<ComponentType<FullProps>>
> & {
preload?: typeof waitForPromise;
};
// @ts-expect-error -- generic forwardRef has PropsWithoutRef incompatibility with FullProps
const AsyncComponent: AsyncComponent = forwardRef(function AsyncComponent(
props: FullProps,
ref,
ref: RefObject<ComponentType<FullProps>>,
) {
const [loaded, setLoaded] = useState(component !== undefined);
useEffect(() => {
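
The typing churn above is all in service of one pattern: attaching a preload static to a forwardRef component. With concrete (non-generic) type arguments, the same shape compiles without the @ts-expect-error; the names below are hypothetical:

import {
  forwardRef,
  type ForwardRefExoticComponent,
  type PropsWithoutRef,
  type RefAttributes,
} from 'react';

type Props = { label: string };

// The exotic component type does not allow extra statics, so intersect
// it with the statics you intend to attach.
type WithPreload = ForwardRefExoticComponent<
  PropsWithoutRef<Props> & RefAttributes<HTMLDivElement>
> & { preload?: () => Promise<void> };

const Async: WithPreload = forwardRef<HTMLDivElement, Props>(
  function Async({ label }, ref) {
    return <div ref={ref}>{label}</div>;
  },
);

Async.preload = async () => {
  // kick off the dynamic import here
};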

View File

@@ -24,6 +24,7 @@ import type {
ButtonVariantType,
ButtonColorType,
} from 'antd/es/button';
import { IconType } from '@superset-ui/core/components/Icons/types';
import type { TooltipPlacement } from '../Tooltip/types';
export type { AntdButtonProps, ButtonType, ButtonVariantType, ButtonColorType };
@@ -48,5 +49,5 @@ export type ButtonProps = Omit<AntdButtonProps, 'css'> & {
buttonStyle?: ButtonStyle;
cta?: boolean;
showMarginRight?: boolean;
icon?: ReactNode;
icon?: IconType;
};

View File

@@ -73,7 +73,7 @@ export const Component = (props: DropdownContainerProps) => {
const [overflowingState, setOverflowingState] = useState<OverflowingState>();
const containerRef = useRef<DropdownRef>(null);
const onOverflowingStateChange = useCallback(
(value: OverflowingState) => {
value => {
if (!isEqual(overflowingState, value)) {
setItems(generateItems(value));
setOverflowingState(value);

View File

@@ -17,6 +17,7 @@
* under the License.
*/
import type { CSSProperties, ReactElement, RefObject, ReactNode } from 'react';
import { IconType } from '../Icons';
/**
* Container item.
@@ -69,7 +70,7 @@ export interface DropdownContainerProps {
/**
* Icon of the dropdown trigger.
*/
dropdownTriggerIcon?: ReactNode;
dropdownTriggerIcon?: IconType;
/**
* Text of the dropdown trigger.
*/

View File

@@ -1,80 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { fireEvent, render, screen, userEvent } from '@superset-ui/core/spec';
import { useState } from 'react';
import { DynamicEditableTitle } from '.';
const Harness = ({ initialTitle = 'Original' }: { initialTitle?: string }) => {
const [title, setTitle] = useState(initialTitle);
return (
<DynamicEditableTitle
title={title}
placeholder="placeholder"
canEdit
label="Title"
onSave={setTitle}
/>
);
};
test('rapid typing then backspacing keeps every keystroke', async () => {
render(<Harness />);
const input = screen.getByRole('textbox') as HTMLInputElement;
userEvent.click(input);
await userEvent.type(input, 'abc', { delay: 1 });
expect(input.value).toBe('Originalabc');
await userEvent.type(input, '{backspace}{backspace}{backspace}', {
delay: 1,
});
expect(input.value).toBe('Original');
});
test('a change event that arrives before isEditing flips is not dropped', () => {
// Reproduces the regression: the input is focused but `isEditing` is still
// false because no click has been registered yet (e.g. focus arrived via
// tab, autofocus, or programmatic focus). The pre-fix `handleChange`
// bailed out with `!isEditing`, dropping the keystroke. Because the
// input is controlled, antd's internal `useMergedState` then resyncs the
// DOM value back to the (stale) `props.value`, so the user sees their
// typed character disappear. This test fires a raw change event so it
// doesn't go through userEvent's implicit click.
const onSave = jest.fn();
render(
<DynamicEditableTitle
title="Foo"
placeholder="placeholder"
canEdit
label="Title"
onSave={onSave}
/>,
);
const input = screen.getByRole('textbox') as HTMLInputElement;
fireEvent.change(input, { target: { value: 'FooX' } });
expect(input.value).toBe('FooX');
});
test('prop changes mid-edit do not clobber unsaved typing', async () => {
const { rerender } = render(<Harness initialTitle="Foo" />);
const input = screen.getByRole('textbox') as HTMLInputElement;
userEvent.click(input);
await userEvent.type(input, 'X', { delay: 1 });
expect(input.value).toBe('FooX');
rerender(<Harness initialTitle="Foo" />);
expect(input.value).toBe('FooX');
});
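
The deleted tests above pin down a guard subtlety in handleChange. Extracted into a hypothetical hook, the variant they defend looks like the sketch below; the stricter !isEditing bail-out is what the later hunk in this compare restores:

import { useCallback, useState, type ChangeEvent } from 'react';

// With a controlled <input>, a dropped keystroke is resynced away by the
// next re-render, so a guard that bails too eagerly loses user input.
function useTitleEditing(canEdit: boolean) {
  const [isEditing, setIsEditing] = useState(false);
  const [currentTitle, setCurrentTitle] = useState('');

  const handleChange = useCallback(
    (ev: ChangeEvent<HTMLInputElement>) => {
      if (!canEdit) {
        return;
      }
      // A change event implies the user is editing, even if it arrives
      // before the click handler flips the flag (tab focus, autofocus,
      // batched click+type). Bailing on !isEditing here would drop the
      // keystroke, and the controlled input would revert.
      if (!isEditing) {
        setIsEditing(true);
      }
      setCurrentTitle(ev.target.value);
    },
    [canEdit, isEditing],
  );

  return { isEditing, currentTitle, handleChange };
}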

View File

@@ -23,7 +23,6 @@ import {
useCallback,
useEffect,
useLayoutEffect,
useRef,
useState,
} from 'react';
import { t } from '@apache-superset/core/translation';
@@ -31,7 +30,6 @@ import { css, SupersetTheme, useTheme } from '@apache-superset/core/theme';
import { useResizeDetector } from 'react-resize-detector';
import { Tooltip } from '../Tooltip';
import { Input } from '../Input';
import type { InputRef } from '../Input';
import type { DynamicEditableTitleProps } from './types';
const titleStyles = (theme: SupersetTheme) => css`
@@ -77,10 +75,8 @@ export const DynamicEditableTitle = memo(
const [isEditing, setIsEditing] = useState(false);
const [showTooltip, setShowTooltip] = useState(false);
const [currentTitle, setCurrentTitle] = useState(title || '');
const [inputWidth, setInputWidth] = useState<number>(0);
const sizerRef = useRef<HTMLSpanElement>(null);
const inputRef = useRef<InputRef>(null);
const { width: inputWidth, ref: sizerRef } = useResizeDetector();
const { width: containerWidth, ref: containerRef } = useResizeDetector({
refreshMode: 'debounce',
});
@@ -89,33 +85,27 @@ export const DynamicEditableTitle = memo(
setCurrentTitle(title);
}, [title]);
useEffect(() => {
if (isEditing) {
if (isEditing && sizerRef?.current) {
// move cursor and scroll to the end
const inputElement = inputRef.current?.input;
if (inputElement) {
const { length } = inputElement.value;
inputElement.setSelectionRange(length, length);
inputElement.scrollLeft = inputElement.scrollWidth;
if (sizerRef.current.setSelectionRange) {
const { length } = sizerRef.current.value;
sizerRef.current.setSelectionRange(length, length);
sizerRef.current.scrollLeft = sizerRef.current.scrollWidth;
}
}
}, [isEditing]);
// a trick to make the input grow when user types text
// we make an additional span component, place it somewhere out of view and
// mirror the input value, then measure the span synchronously (pre-paint)
// to resize the input element. Reading offsetWidth in a useLayoutEffect
// forces a sync layout, so the input width updates in the same commit as
// the value change — preventing a flicker frame where the input is shown
// with new value but stale width.
// we make additional span component, place it somewhere out of view and copy input
// then we can measure the width of that span to resize the input element
useLayoutEffect(() => {
if (sizerRef.current) {
if (sizerRef?.current) {
sizerRef.current.textContent = currentTitle || placeholder;
setInputWidth(sizerRef.current.offsetWidth);
}
}, [currentTitle, placeholder]);
}, [currentTitle, placeholder, sizerRef]);
useEffect(() => {
const inputElement = inputRef.current?.input;
const inputElement = sizerRef.current?.input;
if (inputElement) {
if (inputElement.scrollWidth > inputElement.clientWidth) {
@@ -147,17 +137,9 @@ export const DynamicEditableTitle = memo(
const handleChange = useCallback(
(ev: ChangeEvent<HTMLInputElement>) => {
if (!canEdit) {
if (!canEdit || !isEditing) {
return;
}
// Any change implies the user is editing. Ensure isEditing is true
// even if the change event arrives before the click handler has
// committed (e.g. focus via tab, autofocus, or batched click+type
// events). Otherwise the keystroke would be dropped and the
// controlled input would revert to the previous value.
if (!isEditing) {
setIsEditing(true);
}
setCurrentTitle(ev.target.value);
},
[canEdit, isEditing],
@@ -186,7 +168,6 @@ export const DynamicEditableTitle = memo(
}
>
<Input
ref={inputRef}
data-test="editable-title-input"
variant="borderless"
aria-label={label ?? t('Title')}
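
A self-contained sketch of the hidden-sizer trick the comments above describe: mirror the value into an off-screen span, measure it in useLayoutEffect (pre-paint), and resize the input in the same commit so there is no flicker frame. All names are hypothetical:

import { useLayoutEffect, useRef, useState } from 'react';

function AutoWidthInput({ placeholder = '' }: { placeholder?: string }) {
  const [value, setValue] = useState('');
  const [width, setWidth] = useState(0);
  const sizerRef = useRef<HTMLSpanElement>(null);

  useLayoutEffect(() => {
    if (sizerRef.current) {
      sizerRef.current.textContent = value || placeholder;
      // Reading offsetWidth forces a synchronous layout before paint,
      // so the width update lands in the same commit as the new value.
      setWidth(sizerRef.current.offsetWidth);
    }
  }, [value, placeholder]);

  return (
    <>
      <input
        style={{ width }}
        value={value}
        placeholder={placeholder}
        onChange={e => setValue(e.target.value)}
      />
      <span
        ref={sizerRef}
        aria-hidden
        style={{ position: 'absolute', visibility: 'hidden', whiteSpace: 'pre' }}
      />
    </>
  );
}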

View File

@@ -17,6 +17,7 @@
* under the License.
*/
import type { ReactNode, SyntheticEvent } from 'react';
import type { IconType } from '@superset-ui/core/components';
export type EmptyStateSize = 'small' | 'medium' | 'large';
@@ -25,7 +26,7 @@ export type EmptyStateProps = {
description?: ReactNode;
image?: ReactNode | string;
buttonText?: ReactNode;
buttonIcon?: ReactNode;
buttonIcon?: IconType;
buttonAction?: (event: SyntheticEvent) => void;
/** Controls image size. Defaults to 'medium'. */
size?: EmptyStateSize;

View File

@@ -20,7 +20,7 @@ import { Form as AntdForm } from 'antd';
import { FormProps } from './types';
function CustomForm(props: FormProps) {
return <AntdForm {...(props as any)} />;
return <AntdForm {...props} />;
}
export const Form = Object.assign(CustomForm, {

View File

@@ -41,6 +41,7 @@ test('renders with monospace prop', () => {
// test stories from the storybook!
test('renders all the storybook gallery variants', () => {
// @ts-expect-error: Suppress TypeScript error for LabelGallery usage
const { container } = render(<LabelGallery />);
const nonInteractiveLabelCount = 4;
const renderedLabelCount = options.length * 2 + nonInteractiveLabelCount;

View File

@@ -23,7 +23,7 @@ import { Label } from '..';
// Define the prop types for DatasetTypeLabel
interface DatasetTypeLabelProps {
datasetType: 'physical' | 'virtual' | 'semantic_view';
datasetType: 'physical' | 'virtual'; // Accepts only 'physical' or 'virtual'
}
const SIZE = 's'; // Define the size as a constant
@@ -32,22 +32,6 @@ export const DatasetTypeLabel: React.FC<DatasetTypeLabelProps> = ({
datasetType,
}) => {
const theme = useTheme();
if (datasetType === 'semantic_view') {
return (
<Label
icon={
<Icons.ApartmentOutlined
iconSize={SIZE}
iconColor={theme.colorInfo}
/>
}
type="info"
style={{ color: theme.colorInfo }}
>
{t('Semantic')}
</Label>
);
}
const isPhysical = datasetType === 'physical';
const label: string = isPhysical ? t('Physical') : t('Virtual');
const labelType = isPhysical ? 'primary' : 'default';

View File

@@ -21,7 +21,6 @@ import type { BackgroundPosition } from './ImageLoader';
export interface LinkProps {
to: string;
children?: ReactNode;
}
export interface ListViewCardProps {

View File

@@ -194,7 +194,7 @@ const MetadataBar = ({ items, tooltipPlacement = 'top' }: MetadataBarProps) => {
}
const onResize = useCallback(
(width: number | undefined) => {
width => {
// Calculates the breakpoint width to collapse the bar.
// The last item does not have a space, so we subtract SPACE_BETWEEN_ITEMS from the total.
const breakpoint =

View File

@@ -54,7 +54,7 @@ export function FormModal({
}, [onSave, resetForm]);
const handleFormSubmit = useCallback(
async (values: object) => {
async values => {
try {
setIsSaving(true);
await formSubmitHandler(values);

View File

@@ -104,9 +104,6 @@ export const StyledModal = styled(BaseModal)<StyledModalProps>`
right: 0;
display: flex;
justify-content: center;
// Keep the close button clickable when modal body content uses
// position: sticky with elevated z-index (e.g. DatabaseModal header).
z-index: ${theme.zIndexPopupBase + 1};
}
.ant-modal-close:hover {

View File

@@ -17,7 +17,7 @@
* under the License.
*/
import type { CSSProperties, ReactNode } from 'react';
import type { FormInstance, ModalFuncProps } from 'antd';
import type { ModalFuncProps } from 'antd';
import type { ResizableProps } from 're-resizable';
import type { DraggableProps } from 'react-draggable';
import { ButtonStyle } from '../Button/types';
@@ -68,8 +68,7 @@ export interface StyledModalProps {
export type { ModalFuncProps };
export interface FormModalProps extends Omit<ModalProps, 'children'> {
children: ReactNode | ((form: FormInstance) => ReactNode);
export interface FormModalProps extends ModalProps {
initialValues?: object;
formSubmitHandler: (values: object) => Promise<void>;
onSave: () => void;
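
The removed children render-prop type above lets callers reach the modal's FormInstance. A hypothetical sketch of how such a contract is typically resolved (FormBody stands in for the real FormModal):

import { Button, Form, Input } from 'antd';
import type { FormInstance } from 'antd';
import type { ReactNode } from 'react';

type ModalChildren = ReactNode | ((form: FormInstance) => ReactNode);

// The wrapper owns the FormInstance and resolves render-prop children.
function FormBody({ children }: { children: ModalChildren }) {
  const [form] = Form.useForm();
  return (
    <Form form={form}>
      {typeof children === 'function' ? children(form) : children}
    </Form>
  );
}

// Usage: function children can reset or validate fields via the form.
export const example = (
  <FormBody>
    {form => (
      <>
        <Form.Item name="name">
          <Input />
        </Form.Item>
        <Button onClick={() => form.resetFields()}>Reset</Button>
      </>
    )}
  </FormBody>
);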

View File

@@ -16,7 +16,7 @@
* specific language governing permissions and limitations
* under the License.
*/
import { ReactNode, ReactElement, memo } from 'react';
import { ReactNode, ReactElement } from 'react';
import { t } from '@apache-superset/core/translation';
import { css, SupersetTheme, useTheme } from '@apache-superset/core/theme';
import { Icons } from '@superset-ui/core/components/Icons';
@@ -118,64 +118,62 @@ export type PageHeaderWithActionsProps = {
};
};
export const PageHeaderWithActions = memo(
({
editableTitleProps,
showTitlePanelItems,
certificatiedBadgeProps,
showFaveStar,
faveStarProps,
titlePanelAdditionalItems,
rightPanelAdditionalItems,
additionalActionsMenu,
menuDropdownProps,
showMenuDropdown = true,
tooltipProps,
}: PageHeaderWithActionsProps) => {
const theme = useTheme();
return (
<div css={headerStyles} className="header-with-actions">
<div className="title-panel">
<DynamicEditableTitle {...editableTitleProps} />
{showTitlePanelItems && (
<div css={buttonsStyles}>
{certificatiedBadgeProps?.certifiedBy && (
<CertifiedBadge {...certificatiedBadgeProps} />
)}
{showFaveStar && <FaveStar {...faveStarProps} />}
{titlePanelAdditionalItems}
</div>
export const PageHeaderWithActions = ({
editableTitleProps,
showTitlePanelItems,
certificatiedBadgeProps,
showFaveStar,
faveStarProps,
titlePanelAdditionalItems,
rightPanelAdditionalItems,
additionalActionsMenu,
menuDropdownProps,
showMenuDropdown = true,
tooltipProps,
}: PageHeaderWithActionsProps) => {
const theme = useTheme();
return (
<div css={headerStyles} className="header-with-actions">
<div className="title-panel">
<DynamicEditableTitle {...editableTitleProps} />
{showTitlePanelItems && (
<div css={buttonsStyles}>
{certificatiedBadgeProps?.certifiedBy && (
<CertifiedBadge {...certificatiedBadgeProps} />
)}
{showFaveStar && <FaveStar {...faveStarProps} />}
{titlePanelAdditionalItems}
</div>
)}
</div>
<div className="right-button-panel">
{rightPanelAdditionalItems}
<div css={additionalActionsContainerStyles}>
{showMenuDropdown && (
<Dropdown
trigger={['click']}
popupRender={() => additionalActionsMenu}
{...menuDropdownProps}
>
<span>
<Button
css={menuTriggerStyles}
buttonStyle="tertiary"
aria-label={t('Menu actions trigger')}
tooltip={tooltipProps?.text}
placement={tooltipProps?.placement}
data-test="actions-trigger"
>
<Icons.EllipsisOutlined
iconColor={theme.colorPrimary}
iconSize="l"
/>
</Button>
</span>
</Dropdown>
)}
</div>
<div className="right-button-panel">
{rightPanelAdditionalItems}
<div css={additionalActionsContainerStyles}>
{showMenuDropdown && (
<Dropdown
trigger={['click']}
popupRender={() => additionalActionsMenu}
{...menuDropdownProps}
>
<span>
<Button
css={menuTriggerStyles}
buttonStyle="tertiary"
aria-label={t('Menu actions trigger')}
tooltip={tooltipProps?.text}
placement={tooltipProps?.placement}
data-test="actions-trigger"
>
<Icons.EllipsisOutlined
iconColor={theme.colorPrimary}
iconSize="l"
/>
</Button>
</span>
</Dropdown>
)}
</div>
</div>
</div>
);
},
);
</div>
);
};

View File

@@ -1160,7 +1160,7 @@ test('does not fire onChange if the same value is selected in single mode', asyn
// Reference for the bug this tests: https://github.com/apache/superset/pull/33043#issuecomment-2809419640
test('typing and deleting the last character for a new option displays correctly', async () => {
jest.useFakeTimers({ advanceTimers: true });
jest.useFakeTimers();
render(<Select {...defaultProps} allowNewOptions />);
await open();

View File

@@ -17,7 +17,7 @@
* under the License.
*/
import { render, screen, fireEvent } from '@superset-ui/core/spec';
import { renderHook } from '@testing-library/react';
import { renderHook } from '@testing-library/react-hooks';
import { TableInstance, useTable } from 'react-table';
import TableCollection from '.';

View File

@@ -91,7 +91,7 @@ export function mapColumns<T extends object>(
return columns.map(column => {
const { isSorted, isSortedDesc } = getSortingInfo(headerGroups, column.id);
return {
title: column.Header as ReactNode,
title: column.Header,
dataIndex: column.id?.includes('.') ? column.id.split('.') : column.id,
hidden: column.hidden,
key: column.id,
@@ -121,7 +121,7 @@ export function mapColumns<T extends object>(
column,
});
}
return val as ReactNode;
return val;
},
className: column.className,
};

Some files were not shown because too many files have changed in this diff.