Compare commits


11 Commits

Author SHA1 Message Date
Joe Li
8b62bf0935 fix(embedded-e2e): strengthen assertion signal-to-noise from review
Address /review-code findings — the previous round's hardening fixed
flake but a few assertions still gave weak signals:

- The chart-rendered selector matched a still-loading chart cell, since
  Superset's `Loading` spinner itself renders an SVG. Exclude the spinner
  via `:not(:has([data-test="loading-indicator"]))` and centralize the
  selector as `EmbeddedPage.RENDERED_CHART_SELECTOR`.
- The "dashboard renders" test only proved iframe/header chrome, not the
  dashboard. Add `waitForChartRendered()` so the test name matches what
  it asserts.
- The `hideTitle` test passed for the wrong reason if the locator
  drifted (`toBeHidden()` succeeds for absent elements). Add an explicit
  `toHaveCount(0)` so the contrast against the baseline visibility check
  in test 1 is load-bearing.
- `tokenCallCount` was a `>=1` check that any rendered dashboard would
  satisfy. Tighten to `=== 1` to actually exercise the SDK's caching
  contract.
- Drop the redundant `appUrl` shadow of `appServer.url`.
- Move `import os` to module top in the docker-light config; document
  the strict `"true"`-only env-var truthiness convention.
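The `toBeHidden()` pitfall called out above can be modeled in a few lines. This is a hedged sketch of the assertion semantics, not the real Playwright API — the function names here are illustrative only:

```typescript
// Model of Playwright's toBeHidden(): it passes when the locator matches
// nothing OR the matched element is not visible, so a drifted selector
// (match count 0) passes for the wrong reason.
function toBeHiddenPasses(matchCount: number, visible: boolean): boolean {
  return matchCount === 0 || !visible;
}

// Model of toHaveCount(0): passes only when the selector truly matches
// nothing. This is load-bearing only because the baseline test proves the
// same selector matches (and is visible) when hideTitle is off.
function toHaveCountZeroPasses(matchCount: number): boolean {
  return matchCount === 0;
}
```

The contrast is the point: `toBeHidden()` alone cannot distinguish "hidden by `hideTitle`" from "selector no longer matches anything".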

Pre-commit clean (type-check, prettier, oxlint, ruff, mypy). Local
re-verification blocked by an unrelated worktree env issue (semantic
layers feature has incomplete state — the docker-compose-light stack
doesn't bind-mount superset-core/, so the image's stale copy lacks the
new submodule); CI on the chromium-embedded project will validate.
Changes are strictly stronger assertions and refactors so they cannot
turn a previously-passing test into a false positive.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 16:11:17 -07:00
Joe Li
dc136e8898 feat(docker-light): enable embedded-dashboard support in local stack
Three additions to the lightweight local config so the embedded-dashboard
flow works against docker-compose-light without manually patching state:

- Read SUPERSET_FEATURE_<NAME> env vars into FEATURE_FLAGS so a docker
  .env-local can toggle features without editing tracked config.
- Disable Talisman so /embedded/<uuid> doesn't serve X-Frame-Options:
  SAMEORIGIN, which otherwise blocks cross-origin iframe embedding.
- Mirror Public to Gamma via PUBLIC_ROLE_LIKE so guest tokens can hit
  /api/v1/me/roles/ (CI does this implicitly via load_test_users; the
  light stack does not).

Required for the chromium-embedded Playwright project to run locally.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 16:11:17 -07:00
Joe Li
fdf8525c5d fix(embedded-e2e): harden Playwright suite against flake
Replaces racy one-shot checks with auto-retrying assertions. Specifically:

- Assert the referrer-block test against the deterministic 403 response,
  not iframe content.
- Use an OS-allocated port for the static test app, with
  connection-tracked teardown.
- Cache the JWT access token across tests.
- Send CSRF on the guest-token call (page.request always carries the
  storageState cookie, so JWT-only doesn't actually skip CSRF).
- Wait for a real viz element inside chart containers rather than a class
  that doesn't exist.

Verified with --repeat-each=5 (25/25 passing).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-11 16:11:17 -07:00
Joe Li
f7b5de25e9 fix(embedded-e2e): use route allowlist in static test server
The test app server only ever serves /, /index.html, and /sdk/index.js,
so replace dynamic path joining with a fixed allowlist. This eliminates
the data flow from req.url to readFileSync that CodeQL flagged as a
path-traversal sink — the previous resolve+startsWith containment check
was correct but not recognized as a sanitizer by the analyzer.
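The allowlist shape described above can be sketched as a pure routing function. This is a simplified model of the test server's dispatch, with hypothetical names, not the actual implementation:

```typescript
// The only paths the test app ever requests. Anything else is a 404 —
// there is no data flow from req.url into a filesystem path, so there is
// no path-traversal sink for the analyzer to flag.
const ROUTES: ReadonlyMap<string, string> = new Map([
  ["/", "index.html"],
  ["/index.html", "index.html"],
  ["/sdk/index.js", "sdk/index.js"],
]);

// Resolve a raw request URL to a known asset key, or null for a 404.
// The query string is stripped before lookup.
function resolveRoute(rawUrl: string): string | null {
  const path = rawUrl.split("?")[0] || "/";
  return ROUTES.get(path) ?? null;
}
```

A fixed-key lookup like this is trivially recognized as safe, whereas a `resolve`-then-`startsWith` containment check requires the analyzer to model the sanitizer.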
2026-05-11 16:11:17 -07:00
Joe Li
918143fe90 ci(embedded-e2e): build SDK and configure test environment
- Add a build-embedded-sdk step to bashlib.sh and wire it into the
  superset-playwright and superset-e2e workflows so the SDK bundle is
  compiled before Playwright runs.
- Set SUPERSET_FEATURE_EMBEDDED_SUPERSET=true via workflow env so the
  feature flag only affects Playwright jobs. Setting it in the shared
  integration test config breaks unrelated Python tests because the
  security manager's guest-user paths access g.user through paths that
  most tests don't mock.
- Add CORS for localhost:9000 and TALISMAN_ENABLED=False to the
  integration test config. Talisman defaults to X-Frame-Options:
  SAMEORIGIN, which blocks the embedded dashboard from rendering
  inside an iframe hosted on a different port.
2026-05-11 16:11:17 -07:00
Joe Li
335a08a81b feat(embedded-e2e): add Playwright E2E tests for embedded dashboards
Adds five tests covering the embedded dashboard flow against the
world_health example: render, hideTitle UI config, chart rendering,
allowed_domains referrer check, and guest-token data access. Includes:

- A chromium-embedded Playwright project, excluded from the main
  project via testIgnore so it can be opted into separately.
- An EmbeddedPage page object and API helpers for embedding/guest
  tokens plus dashboard lookup by slug.
- A static test app (embedded-app/index.html) loaded from a minimal
  Node static server. Playwright bridges the guest-token fetch from
  Node into the browser via page.exposeFunction.
- EMBEDDED timeout/config constants.

Workflow integration and test-environment configuration land in a
follow-up commit.
2026-05-11 16:11:17 -07:00
Evan Rusackas
cfb704dbeb test(sqllab): stabilize SaveDatasetModal overwrite-flow test helper (#40036)
Co-authored-by: Superset Dev <dev@superset.apache.org>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-11 15:48:10 -07:00
Evan Rusackas
e77f6ece92 fix(ci): serialize Docs Deployment runs to avoid push races (#40030)
Co-authored-by: Claude Code <noreply@anthropic.com>
2026-05-11 11:25:31 -07:00
Evan Rusackas
785a08c7d5 chore(frontend): export typed useAppDispatch / useAppSelector hooks (#40027)
Co-authored-by: Claude Code <noreply@anthropic.com>
2026-05-11 11:01:57 -07:00
Maxime Beauchemin
d90d3a2dea fix(importexport): honor overwrite flag on /api/v1/assets/import (#39502)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-11 10:24:42 -07:00
Maxime Beauchemin
6ee4d694bc fix(sqllab): include template_params when overwriting a dataset (#39501)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-05-11 10:24:15 -07:00
24 changed files with 1556 additions and 72 deletions

View File

@@ -59,6 +59,15 @@ build-assets() {
  say "::endgroup::"
}

build-embedded-sdk() {
  cd "$GITHUB_WORKSPACE/superset-embedded-sdk"
  say "::group::Build embedded SDK bundle for E2E tests"
  npm ci
  npm run build
  say "::endgroup::"
}

build-instrumented-assets() {
  cd "$GITHUB_WORKSPACE/superset-frontend"

View File

@@ -17,6 +17,16 @@ on:
  workflow_dispatch: {}

# Serialize deploys: the action pushes to apache/superset-site without
# rebasing, so concurrent runs race on the final push and the loser fails
# with `! [rejected] asf-site -> asf-site (fetch first)`. Cancel any
# in-progress run as soon as a newer one starts — the destination repo
# isn't touched until the final push step, so canceling mid-build is safe,
# and the freshest content always wins.
concurrency:
  group: docs-deploy-asf-site
  cancel-in-progress: true

jobs:
  config:
    runs-on: ubuntu-24.04

View File

@@ -169,6 +169,7 @@ jobs:
      PYTHONPATH: ${{ github.workspace }}
      REDIS_PORT: 16379
      GITHUB_TOKEN: ${{ github.token }}
      SUPERSET_FEATURE_EMBEDDED_SUPERSET: "true"
    services:
      postgres:
        image: postgres:17-alpine
@@ -239,6 +240,11 @@
        uses: ./.github/actions/cached-dependencies
        with:
          run: build-instrumented-assets
      - name: Build embedded SDK
        if: steps.check.outputs.python || steps.check.outputs.frontend
        uses: ./.github/actions/cached-dependencies
        with:
          run: build-embedded-sdk
      - name: Install Playwright
        if: steps.check.outputs.python || steps.check.outputs.frontend
        uses: ./.github/actions/cached-dependencies

View File

@@ -43,6 +43,7 @@ jobs:
      PYTHONPATH: ${{ github.workspace }}
      REDIS_PORT: 16379
      GITHUB_TOKEN: ${{ github.token }}
      SUPERSET_FEATURE_EMBEDDED_SUPERSET: "true"
    services:
      postgres:
        image: postgres:17-alpine
@@ -113,6 +114,11 @@
        uses: ./.github/actions/cached-dependencies
        with:
          run: build-instrumented-assets
      - name: Build embedded SDK
        if: steps.check.outputs.python || steps.check.outputs.frontend
        uses: ./.github/actions/cached-dependencies
        with:
          run: build-embedded-sdk
      - name: Install Playwright
        if: steps.check.outputs.python || steps.check.outputs.frontend
        uses: ./.github/actions/cached-dependencies

View File

@@ -18,6 +18,8 @@
# Configuration for docker-compose-light.yml - disables Redis and uses minimal services
# Import all settings from the main config first
import os
from flask_caching.backends.filesystemcache import FileSystemCache
from superset_config import * # noqa: F403
@@ -36,3 +38,31 @@ THUMBNAIL_CACHE_CONFIG = CACHE_CONFIG
# Disable Celery entirely for lightweight mode
CELERY_CONFIG = None # type: ignore[assignment,misc]
# Honor SUPERSET_FEATURE_<NAME> env vars on top of any flags inherited from
# superset_config. Lets local dev/e2e enable features (e.g. EMBEDDED_SUPERSET)
# without editing shipped config files. Only the literal string "true"
# (case-insensitive) is treated as enabled — "1"/"yes"/"on" are not, matching
# the strict-string convention used elsewhere in Superset's env parsing.
FEATURE_FLAGS = {
    **FEATURE_FLAGS,  # noqa: F405
    **{
        name[len("SUPERSET_FEATURE_") :]: value.strip().lower() == "true"
        for name, value in os.environ.items()
        if name.startswith("SUPERSET_FEATURE_")
    },
}
# Disable Talisman so /embedded/<uuid> doesn't return X-Frame-Options:SAMEORIGIN.
# Without this, browsers refuse to render Superset inside an iframe from a
# different origin (i.e. the embedded SDK use case). Production/CI configures
# Talisman with explicit `frame-ancestors`; for the lightweight local stack we
# just turn it off.
TALISMAN_ENABLED = False
# Guest tokens (used by the embedded SDK) inherit the "Public" role's perms.
# Out of the box Public has zero perms, so embedded dashboards immediately fail
# their first call (`/api/v1/me/roles/`) with 403. Mirror Public to Gamma —
# the standard read-only viewer role — so the embedded flow can authenticate
# and load dashboard data in local dev.
PUBLIC_ROLE_LIKE = "Gamma"

View File

@@ -229,7 +229,7 @@
"babel-plugin-dynamic-import-node": "^2.3.3",
"babel-plugin-jsx-remove-data-test-id": "^3.0.0",
"babel-plugin-lodash": "^3.3.4",
"baseline-browser-mapping": "^2.10.29",
"baseline-browser-mapping": "^2.10.24",
"cheerio": "1.2.0",
"concurrently": "^9.2.1",
"copy-webpack-plugin": "^14.0.0",
@@ -17358,9 +17358,9 @@
"license": "MIT"
},
"node_modules/baseline-browser-mapping": {
"version": "2.10.29",
"resolved": "https://registry.npmjs.org/baseline-browser-mapping/-/baseline-browser-mapping-2.10.29.tgz",
"integrity": "sha512-Asa2krT+XTPZINCS+2QcyS8WTkObE77RwkydwF7h6DmnKqbvlalz93m/dnphUyCa6SWSP51VgtEUf2FN+gelFQ==",
"version": "2.10.24",
"resolved": "https://registry.npmjs.org/baseline-browser-mapping/-/baseline-browser-mapping-2.10.24.tgz",
"integrity": "sha512-I2NkZOOrj2XuguvWCK6OVh9GavsNjZjK908Rq3mIBK25+GD8vPX5w2WdxVqnQ7xx3SrZJiCiZFu+/Oz50oSYSA==",
"dev": true,
"license": "Apache-2.0",
"bin": {

View File

@@ -310,7 +310,7 @@
"babel-plugin-dynamic-import-node": "^2.3.3",
"babel-plugin-jsx-remove-data-test-id": "^3.0.0",
"babel-plugin-lodash": "^3.3.4",
"baseline-browser-mapping": "^2.10.29",
"baseline-browser-mapping": "^2.10.24",
"cheerio": "1.2.0",
"concurrently": "^9.2.1",
"copy-webpack-plugin": "^14.0.0",

View File

@@ -95,6 +95,7 @@ export default defineConfig({
      testIgnore: [
        '**/tests/auth/**/*.spec.ts',
        '**/tests/sqllab/**/*.spec.ts',
        '**/tests/embedded/**/*.spec.ts',
        ...(process.env.INCLUDE_EXPERIMENTAL ? [] : ['**/experimental/**']),
      ],
      use: {
@@ -132,6 +133,22 @@ export default defineConfig({
        // No storageState = clean browser with no cached cookies
      },
    },
    {
      // Embedded dashboard tests - validates the full embedding flow:
      // external app -> SDK -> iframe -> guest token -> dashboard render.
      // Each spec file mutates per-dashboard embedding state (UUID,
      // allowed_domains) on a single shared Superset, so files must not
      // run in parallel even if more are added later.
      name: 'chromium-embedded',
      testMatch: '**/tests/embedded/**/*.spec.ts',
      fullyParallel: false,
      use: {
        browserName: 'chromium',
        testIdAttribute: 'data-test',
        // Uses admin auth for API calls to configure embedding and get guest tokens
        storageState: 'playwright/.auth/user.json',
      },
    },
  ],
  // Web server setup - disabled in CI (Flask started separately in workflow)

View File

@@ -0,0 +1,96 @@
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Embedded Dashboard Test App</title>
<style>
html, body { margin: 0; padding: 0; height: 100%; }
#superset-container { width: 100%; height: 100vh; }
#superset-container iframe { width: 100%; height: 100%; border: none; }
#error { color: red; padding: 20px; display: none; }
#status { padding: 10px; font-family: monospace; font-size: 12px; }
</style>
</head>
<body>
<div id="status">Initializing embedded dashboard...</div>
<div id="error"></div>
<div id="superset-container" data-test="embedded-container"></div>
<script src="/sdk/index.js"></script>
<script>
(async function () {
const params = new URLSearchParams(window.location.search);
const uuid = params.get('uuid');
const supersetDomain = params.get('supersetDomain');
if (!uuid || !supersetDomain) {
document.getElementById('error').style.display = 'block';
document.getElementById('error').textContent =
'Missing required query params: uuid, supersetDomain';
return;
}
const statusEl = document.getElementById('status');
// fetchGuestToken is injected by Playwright via page.exposeFunction()
// Falls back to window.__guestToken for simple/static token injection
async function fetchGuestToken() {
if (typeof window.__fetchGuestToken === 'function') {
statusEl.textContent = 'Fetching guest token...';
const token = await window.__fetchGuestToken();
statusEl.textContent = 'Guest token received, loading dashboard...';
return token;
}
if (window.__guestToken) {
return window.__guestToken;
}
throw new Error('No guest token source available');
}
try {
// Parse optional UI config from query params
const uiConfig = {};
if (params.get('hideTitle') === 'true') uiConfig.hideTitle = true;
if (params.get('hideTab') === 'true') uiConfig.hideTab = true;
if (params.get('hideChartControls') === 'true') uiConfig.hideChartControls = true;
const dashboard = await supersetEmbeddedSdk.embedDashboard({
id: uuid,
supersetDomain: supersetDomain,
mountPoint: document.getElementById('superset-container'),
fetchGuestToken: fetchGuestToken,
dashboardUiConfig: Object.keys(uiConfig).length > 0 ? uiConfig : undefined,
debug: params.get('debug') === 'true',
});
statusEl.textContent = 'Dashboard embedded successfully';
// Expose dashboard API on window for Playwright assertions
window.__embeddedDashboard = dashboard;
} catch (err) {
document.getElementById('error').style.display = 'block';
document.getElementById('error').textContent = 'Embed failed: ' + err.message;
statusEl.textContent = 'Error';
}
})();
</script>
</body>
</html>

View File

@@ -132,26 +132,14 @@ export interface DashboardResult {
published?: boolean;
}
/**
* Get a dashboard by its title
* @param page - Playwright page instance (provides authentication context)
* @param title - The dashboard_title to search for
* @returns Dashboard object if found, null if not found
*/
export async function getDashboardByName(
async function getDashboardByFilter(
page: Page,
title: string,
col: 'dashboard_title' | 'slug',
value: string,
): Promise<DashboardResult | null> {
const filter = {
filters: [
{
col: 'dashboard_title',
opr: 'eq',
value: title,
},
],
};
const queryParam = rison.encode(filter);
const queryParam = rison.encode({
filters: [{ col, opr: 'eq', value }],
});
const response = await apiGet(
page,
`${ENDPOINTS.DASHBOARD}?q=${queryParam}`,
@@ -169,3 +157,29 @@ export async function getDashboardByName(
return null;
}
/**
* Get a dashboard by its title
* @param page - Playwright page instance (provides authentication context)
* @param title - The dashboard_title to search for
* @returns Dashboard object if found, null if not found
*/
export async function getDashboardByName(
page: Page,
title: string,
): Promise<DashboardResult | null> {
return getDashboardByFilter(page, 'dashboard_title', title);
}
/**
* Get a dashboard by its slug
* @param page - Playwright page instance (provides authentication context)
* @param slug - The slug to search for
* @returns Dashboard object if found, null if not found
*/
export async function getDashboardBySlug(
page: Page,
slug: string,
): Promise<DashboardResult | null> {
return getDashboardByFilter(page, 'slug', slug);
}

View File

@@ -0,0 +1,136 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { Page } from '@playwright/test';
import { apiPost, apiPut, getCsrfToken } from './requests';
import { ENDPOINTS as DASHBOARD_ENDPOINTS } from './dashboard';
export const ENDPOINTS = {
SECURITY_LOGIN: 'api/v1/security/login',
GUEST_TOKEN: 'api/v1/security/guest_token/',
} as const;
export interface EmbeddedConfig {
uuid: string;
allowed_domains: string[];
dashboard_id: number;
}
/**
* Enable embedding on a dashboard and return the embedded UUID.
* Uses PUT (upsert) to preserve UUID across repeated calls.
* @param page - Playwright page instance (provides authentication context)
* @param dashboardIdOrSlug - Numeric dashboard id or slug
* @param allowedDomains - Domains allowed to embed; empty array allows all
* @returns Embedded config with UUID, allowed_domains, and dashboard_id
*/
export async function apiEnableEmbedding(
page: Page,
dashboardIdOrSlug: number | string,
allowedDomains: string[] = [],
): Promise<EmbeddedConfig> {
const response = await apiPut(
page,
`${DASHBOARD_ENDPOINTS.DASHBOARD}${dashboardIdOrSlug}/embedded`,
{ allowed_domains: allowedDomains },
);
const body = await response.json();
return body.result as EmbeddedConfig;
}
/**
* Login as admin and return the JWT access token used by the guest_token
* endpoint. Cache the result at suite level (`beforeAll`) and pass it into
* `getGuestToken` to avoid a fresh login on every test.
*
* Defaults match `playwright/global-setup.ts` so credentials come from one
* source (env vars). Overrides via `options` win.
*/
export async function getAccessToken(
page: Page,
options?: { username?: string; password?: string },
): Promise<string> {
const username =
options?.username ?? process.env.PLAYWRIGHT_ADMIN_USERNAME ?? 'admin';
const password =
options?.password ?? process.env.PLAYWRIGHT_ADMIN_PASSWORD ?? 'general';
const loginResponse = await apiPost(
page,
ENDPOINTS.SECURITY_LOGIN,
{
username,
password,
provider: 'db',
refresh: true,
},
{ allowMissingCsrf: true },
);
const loginBody = await loginResponse.json();
return loginBody.access_token;
}
/**
* Get a guest token for an embedded dashboard.
* If `accessToken` is provided, the login round-trip is skipped — preferred
* for tests that fetch many tokens from a single suite.
* @returns Signed guest token string
*/
export async function getGuestToken(
page: Page,
dashboardId: number | string,
options?: {
accessToken?: string;
username?: string;
password?: string;
rls?: Array<{ dataset: number; clause: string }>;
},
): Promise<string> {
const accessToken =
options?.accessToken ??
(await getAccessToken(page, {
username: options?.username,
password: options?.password,
}));
const rls = options?.rls ?? [];
// The guest_token endpoint authenticates via JWT Bearer, but if the
// request also carries a session cookie (which page.request inherits from
// storageState), Flask-WTF still requires a matching X-CSRFToken. Send it
// unconditionally so this works whether the caller is authenticated via
// session, JWT, or both.
const { token: csrfToken } = await getCsrfToken(page);
const guestResponse = await page.request.post(ENDPOINTS.GUEST_TOKEN, {
data: {
user: {
username: 'embedded_test_user',
first_name: 'Embedded',
last_name: 'TestUser',
},
resources: [{ type: 'dashboard', id: String(dashboardId) }],
rls,
},
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`,
...(csrfToken ? { 'X-CSRFToken': csrfToken } : {}),
},
});
const guestBody = await guestResponse.json();
return guestBody.token;
}

View File

@@ -39,7 +39,7 @@ function getBaseUrl(): string {
return url.endsWith('/') ? url : `${url}/`;
}
interface CsrfResult {
export interface CsrfResult {
token: string;
error?: string;
}
@@ -49,7 +49,7 @@ interface CsrfResult {
* Superset provides a CSRF token via api/v1/security/csrf_token/
* The session cookie is automatically included by page.request
*/
async function getCsrfToken(page: Page): Promise<CsrfResult> {
export async function getCsrfToken(page: Page): Promise<CsrfResult> {
try {
const response = await page.request.get('api/v1/security/csrf_token/', {
failOnStatusCode: false,

View File

@@ -0,0 +1,172 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { Page, FrameLocator, Locator, expect } from '@playwright/test';
import { EMBEDDED } from '../utils/constants';
/**
* Page object for the embedded dashboard test app.
*
* The test app runs on a separate origin (its origin is assigned per-suite
* via an OS-allocated port) and uses the @superset-ui/embedded-sdk to render
* a Superset dashboard in an iframe. Playwright's page.exposeFunction()
* bridges the guest token from Node.js into the browser page.
*/
export class EmbeddedPage {
private readonly page: Page;
private static readonly SELECTORS = {
CONTAINER: '[data-test="embedded-container"]',
IFRAME: 'iframe[title="Embedded Dashboard"]',
STATUS: '#status',
ERROR: '#error',
} as const;
constructor(page: Page) {
this.page = page;
}
/**
* Set up the guest token bridge before navigating.
* Must be called BEFORE goto() since embedDashboard() calls fetchGuestToken
* immediately on page load.
*/
async exposeTokenFetcher(tokenFn: () => Promise<string>): Promise<void> {
await this.page.exposeFunction('__fetchGuestToken', tokenFn);
}
/**
* Navigate to the embedded test app with the given parameters.
* `appUrl` is the origin of the static test app (assigned dynamically by
* the spec's beforeAll fixture so workers don't collide on a fixed port).
*/
async goto(params: {
appUrl: string;
uuid: string;
supersetDomain: string;
hideTitle?: boolean;
hideTab?: boolean;
hideChartControls?: boolean;
debug?: boolean;
}): Promise<void> {
const searchParams = new URLSearchParams({
uuid: params.uuid,
supersetDomain: params.supersetDomain,
});
if (params.hideTitle) searchParams.set('hideTitle', 'true');
if (params.hideTab) searchParams.set('hideTab', 'true');
if (params.hideChartControls) searchParams.set('hideChartControls', 'true');
if (params.debug) searchParams.set('debug', 'true');
await this.page.goto(`${params.appUrl}/?${searchParams.toString()}`);
}
/**
* FrameLocator for the embedded dashboard iframe.
*/
get iframe(): FrameLocator {
return this.page.frameLocator(EmbeddedPage.SELECTORS.IFRAME);
}
/**
* Wait for the iframe to appear in the DOM AND have its src set.
* The SDK appends the iframe element before assigning src, so a bare
* `state: 'attached'` wait races the src read.
*/
async waitForIframe(options?: { timeout?: number }): Promise<void> {
const locator = this.page.locator(EmbeddedPage.SELECTORS.IFRAME);
await locator.waitFor({
state: 'attached',
timeout: options?.timeout ?? EMBEDDED.IFRAME_LOAD,
});
await expect(locator).toHaveAttribute('src', /.+/, {
timeout: options?.timeout ?? EMBEDDED.IFRAME_LOAD,
});
}
/**
* Wait for dashboard content to render inside the iframe.
* Looks for the grid-container which indicates charts are loading/loaded.
*/
async waitForDashboardContent(options?: { timeout?: number }): Promise<void> {
const frame = this.iframe;
await frame
.locator('.grid-container, [data-test="grid-container"]')
.first()
.waitFor({
state: 'visible',
timeout: options?.timeout ?? EMBEDDED.DASHBOARD_RENDER,
});
}
/**
* Matches a chart cell that has finished loading: it contains a real viz
* element (svg, canvas, table) AND no longer hosts the `Loading` spinner
* (`data-test="loading-indicator"`). Excluding the spinner matters —
* the spinner itself renders an SVG, so a `:has(svg)`-only check can match
* a still-loading chart for the wrong reason.
*/
static readonly RENDERED_CHART_SELECTOR =
'[data-test="chart-container"]:has(svg, canvas, table):not(:has([data-test="loading-indicator"]))';
/**
* Wait for at least one chart to finish rendering — viz drawn AND no
* loading spinner in that cell.
*/
async waitForChartRendered(options?: { timeout?: number }): Promise<void> {
await this.iframe
.locator(EmbeddedPage.RENDERED_CHART_SELECTOR)
.first()
.waitFor({
state: 'visible',
timeout: options?.timeout ?? EMBEDDED.CHART_RENDER,
});
}
/**
* Locator for the dashboard title input inside the iframe.
* Returned as a `Locator` so callers can use `expect(...).toBeVisible()` /
* `.toBeHidden()` with auto-retry instead of one-shot `.isVisible()`.
*/
get titleLocator(): Locator {
return this.iframe.locator(
'[data-test="dashboard-header-container"] [data-test="editable-title-input"]',
);
}
/**
* Get the status text from the test app.
*/
async getStatus(): Promise<string> {
return (
(await this.page.locator(EmbeddedPage.SELECTORS.STATUS).textContent()) ??
''
);
}
/**
* Get the error text, if any.
*/
async getError(): Promise<string> {
const errorEl = this.page.locator(EmbeddedPage.SELECTORS.ERROR);
const display = await errorEl.evaluate(el => getComputedStyle(el).display);
if (display === 'none') return '';
return (await errorEl.textContent()) ?? '';
}
}

View File

@@ -0,0 +1,356 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { test, expect, Browser, BrowserContext, Page } from '@playwright/test';
import { createServer, IncomingMessage, ServerResponse, Server } from 'http';
import { AddressInfo, Socket } from 'net';
import { readFileSync, existsSync } from 'fs';
import { join } from 'path';
import {
apiEnableEmbedding,
getAccessToken,
getGuestToken,
} from '../../helpers/api/embedded';
import { getDashboardBySlug } from '../../helpers/api/dashboard';
import { EmbeddedPage } from '../../pages/EmbeddedPage';
/**
* Superset domain (Flask server) — set by CI or defaults to local dev
*/
const SUPERSET_DOMAIN = (() => {
const url = process.env.PLAYWRIGHT_BASE_URL || 'http://localhost:8088';
return url.replace(/\/+$/, '');
})();
const SUPERSET_BASE_URL = SUPERSET_DOMAIN.endsWith('/')
? SUPERSET_DOMAIN
: `${SUPERSET_DOMAIN}/`;
/**
* Path to the SDK bundle built from superset-embedded-sdk/
*/
const SDK_BUNDLE_PATH = join(
__dirname,
'../../../../superset-embedded-sdk/bundle/index.js',
);
/**
* Path to the embedded test app static files
*/
const EMBED_APP_DIR = join(__dirname, '../../embedded-app');
/**
* Create a minimal static file server for the embedded test app.
* Serves only a fixed allowlist of routes — the test app references just
* its index.html and the SDK bundle, so anything else is 404.
*/
const INDEX_HTML_PATH = join(EMBED_APP_DIR, 'index.html');
interface EmbedAppServer {
server: Server;
url: string;
close: () => Promise<void>;
}
/**
* Start the static test app on an OS-assigned ephemeral port. Tracks open
* sockets so close() doesn't hang on iframe keep-alive connections, and so
* different workers/retries never collide on a fixed port.
*/
async function startEmbedAppServer(): Promise<EmbedAppServer> {
const sockets = new Set<Socket>();
const server = createServer((req: IncomingMessage, res: ServerResponse) => {
const urlPath = req.url?.split('?')[0] || '/';
if (urlPath === '/sdk/index.js') {
if (!existsSync(SDK_BUNDLE_PATH)) {
res.writeHead(404);
res.end(
'SDK bundle not found. Run: cd superset-embedded-sdk && npm ci && npm run build',
);
return;
}
res.writeHead(200, { 'Content-Type': 'text/javascript' });
res.end(readFileSync(SDK_BUNDLE_PATH));
return;
}
if (urlPath === '/' || urlPath === '/index.html') {
res.writeHead(200, { 'Content-Type': 'text/html' });
res.end(readFileSync(INDEX_HTML_PATH));
return;
}
res.writeHead(404);
res.end('Not found');
});
server.on('connection', socket => {
sockets.add(socket);
socket.once('close', () => sockets.delete(socket));
});
await new Promise<void>((resolve, reject) => {
server.once('error', reject);
server.listen(0, '127.0.0.1', () => {
server.removeListener('error', reject);
resolve();
});
});
const address = server.address() as AddressInfo;
const url = `http://127.0.0.1:${address.port}`;
return {
server,
url,
close: () =>
new Promise<void>(resolve => {
for (const socket of sockets) socket.destroy();
sockets.clear();
server.close(() => resolve());
}),
};
}
/**
* Create a browser context authenticated as admin for API-only work
* (enabling embedding, restoring config). Caller is responsible for closing.
*/
function createAdminContext(browser: Browser): Promise<BrowserContext> {
return browser.newContext({
storageState: 'playwright/.auth/user.json',
baseURL: SUPERSET_BASE_URL,
});
}
// ─── Test Suite ────────────────────────────────────────────────────────────
// Describe wrapper is needed for shared server state and serial execution:
// all tests share a static file server and must not run in parallel.
test.describe('Embedded Dashboard E2E', () => {
test.describe.configure({ mode: 'serial' });
// The full embedded chain (login → guest token → iframe → dashboard render
// → chart render) routinely exceeds the 30s default on cold CI starts.
test.setTimeout(60000);
let appServer: EmbedAppServer;
let accessToken: string;
let embedUuid: string;
let dashboardId: number;
/**
* Set up a page to render the default embedded dashboard.
* Tests that need a different UUID or UI config should not use this helper.
*/
async function setupEmbeddedPage(page: Page): Promise<EmbeddedPage> {
const embeddedPage = new EmbeddedPage(page);
await embeddedPage.exposeTokenFetcher(async () =>
getGuestToken(page, dashboardId, { accessToken }),
);
await embeddedPage.goto({
appUrl: appServer.url,
uuid: embedUuid,
supersetDomain: SUPERSET_DOMAIN,
});
await embeddedPage.waitForIframe();
await embeddedPage.waitForDashboardContent();
return embeddedPage;
}
test.beforeAll(async ({ browser }) => {
// Skip all tests if the SDK bundle hasn't been built
test.skip(
!existsSync(SDK_BUNDLE_PATH),
'Embedded SDK bundle not found. Build it with: cd superset-embedded-sdk && npm ci && npm run build',
);
appServer = await startEmbedAppServer();
// Use a fresh context with auth to set up test data via API
const context = await createAdminContext(browser);
const setupPage = await context.newPage();
try {
const dashboard = await getDashboardBySlug(setupPage, 'world_health');
if (!dashboard) {
throw new Error(
'Dashboard "world_health" not found. Ensure load_examples ran in CI setup.',
);
}
dashboardId = dashboard.id;
// Enable embedding on the dashboard (empty allowed_domains = allow all)
const embedded = await apiEnableEmbedding(setupPage, dashboardId);
embedUuid = embedded.uuid;
// Cache the JWT access token so tests don't re-login per guest token.
accessToken = await getAccessToken(setupPage);
} finally {
await context.close();
}
});
test.afterAll(async ({ browser }) => {
// Defensive restore in case the allowed_domains test failed mid-flight.
if (dashboardId !== undefined) {
const context = await createAdminContext(browser);
try {
const setupPage = await context.newPage();
await apiEnableEmbedding(setupPage, dashboardId, []);
} catch {
// Best-effort cleanup — never fail teardown.
} finally {
await context.close();
}
}
if (appServer) await appServer.close();
});
test('dashboard renders in embedded iframe', async ({ page }) => {
const embeddedPage = await setupEmbeddedPage(page);
// Verify the iframe src points to Superset's /embedded/ endpoint
await expect(
page.locator('iframe[title="Embedded Dashboard"]'),
).toHaveAttribute('src', new RegExp(`/embedded/${embedUuid}`));
// Verify no errors in the test app
expect(await embeddedPage.getError()).toBe('');
// Baseline: title should be visible when hideTitle is not set. This
// doubles as a positive existence check the `hideTitle` test relies on
// for distinguishing "title was hidden" from "selector is wrong".
await expect(embeddedPage.titleLocator).toBeVisible();
// Prove the dashboard actually renders, not just the chrome.
await embeddedPage.waitForChartRendered();
});
test('UI config hideTitle hides dashboard title', async ({ page }) => {
const embeddedPage = new EmbeddedPage(page);
await embeddedPage.exposeTokenFetcher(async () =>
getGuestToken(page, dashboardId, { accessToken }),
);
await embeddedPage.goto({
appUrl: appServer.url,
uuid: embedUuid,
supersetDomain: SUPERSET_DOMAIN,
hideTitle: true,
});
await embeddedPage.waitForIframe();
await embeddedPage.waitForDashboardContent();
// The iframe URL should include uiConfig parameter
await expect(
page.locator('iframe[title="Embedded Dashboard"]'),
).toHaveAttribute('src', /uiConfig=/);
// hideTitle removes the header from the DOM (rather than CSS-hiding it),
// so toBeHidden + toHaveCount(0) together assert: not visible AND
// confirmed-removed (so the test can't pass for the wrong reason if the
// selector ever drifts — the baseline test asserts the selector matches
// when hideTitle is off).
await expect(embeddedPage.titleLocator).toBeHidden();
await expect(embeddedPage.titleLocator).toHaveCount(0);
});
test('charts render inside embedded iframe', async ({ page }) => {
const embeddedPage = await setupEmbeddedPage(page);
await embeddedPage.waitForChartRendered();
const renderedCharts = embeddedPage.iframe.locator(
EmbeddedPage.RENDERED_CHART_SELECTOR,
);
expect(await renderedCharts.count()).toBeGreaterThan(0);
});
test('allowed_domains blocks unauthorized referrer', async ({
page,
browser,
}) => {
const context = await createAdminContext(browser);
const setupPage = await context.newPage();
try {
// Restrict to a domain that is NOT the test app's origin
const restrictedEmbed = await apiEnableEmbedding(setupPage, dashboardId, [
'https://allowed.example.com',
]);
const embeddedPage = new EmbeddedPage(page);
await embeddedPage.exposeTokenFetcher(async () =>
getGuestToken(page, dashboardId, { accessToken }),
);
// The deterministic signal that the referrer check fired is the HTTP
// status of the /embedded/<uuid> response — assert that directly rather
// than racing against cross-origin iframe rendering.
const embeddedResponsePromise = page.waitForResponse(
resp =>
resp.url().includes(`/embedded/${restrictedEmbed.uuid}`) &&
resp.request().resourceType() === 'document',
);
await embeddedPage.goto({
appUrl: appServer.url,
uuid: restrictedEmbed.uuid,
supersetDomain: SUPERSET_DOMAIN,
});
const response = await embeddedResponsePromise;
expect(response.status()).toBe(403);
} finally {
// Restore the open embedding config for other tests in this file.
await apiEnableEmbedding(setupPage, dashboardId, []);
await context.close();
}
});
test('guest token enables dashboard data access', async ({ page }) => {
const embeddedPage = new EmbeddedPage(page);
let tokenCallCount = 0;
await embeddedPage.exposeTokenFetcher(async () => {
tokenCallCount += 1;
return getGuestToken(page, dashboardId, { accessToken });
});
await embeddedPage.goto({
appUrl: appServer.url,
uuid: embedUuid,
supersetDomain: SUPERSET_DOMAIN,
});
await embeddedPage.waitForIframe();
await embeddedPage.waitForDashboardContent();
await embeddedPage.waitForChartRendered();
// The SDK fetches the token exactly once per embed (caching is the
// SDK's responsibility, not ours) — assert the stronger invariant.
expect(tokenCallCount).toBe(1);
// Confirm at least one chart actually rendered with data, not just its shell
const renderedCharts = embeddedPage.iframe.locator(
EmbeddedPage.RENDERED_CHART_SELECTOR,
);
expect(await renderedCharts.count()).toBeGreaterThan(0);
});
});

View File

@@ -75,3 +75,16 @@ export const TIMEOUT = {
*/
SLOW_TEST: 60000, // 60s for tests that chain multiple slow operations
} as const;
/**
* Embedded dashboard test app configuration.
* The test app is served by a Node.js http server started in the test fixture.
*/
export const EMBEDDED = {
/** Timeout for iframe to appear in the DOM */
IFRAME_LOAD: 15000, // 15s
/** Timeout for dashboard content to render inside the iframe */
DASHBOARD_RENDER: 30000, // 30s
/** Timeout for individual chart cells to finish rendering */
CHART_RENDER: 30000, // 30s
} as const;
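A sketch of how a page-object helper might thread these per-phase timeouts into a Playwright-style wait. The `Waitable` type and the `waitForChartRendered` helper below are stand-ins for illustration, not this PR's actual `EmbeddedPage` implementation:

```typescript
// Values mirrored from the EMBEDDED constants above.
const EMBEDDED = {
  IFRAME_LOAD: 15000,
  DASHBOARD_RENDER: 30000,
  CHART_RENDER: 30000,
} as const;

// Stand-in for a Playwright Locator, so the sketch stays self-contained.
type Waitable = { waitFor: (opts: { timeout: number }) => Promise<void> };

// Hypothetical helper: waits on a chart cell with the phase-specific timeout
// rather than relying on the global 60s test timeout.
async function waitForChartRendered(chart: Waitable): Promise<void> {
  await chart.waitFor({ timeout: EMBEDDED.CHART_RENDER });
}

// Smoke-check with a stub that records the timeout it was given.
const received: number[] = [];
const stub: Waitable = {
  waitFor: async ({ timeout }) => {
    received.push(timeout);
  },
};
waitForChartRendered(stub).then(() => console.log(received[0])); // 30000
```

Keeping the phase timeouts narrower than the overall test timeout is what lets a hung chart fail fast with a specific assertion message instead of a generic test-timeout error.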

View File

@@ -30,7 +30,7 @@ import fetchMock from 'fetch-mock';
import { SaveDatasetModal } from 'src/SqlLab/components/SaveDatasetModal';
import { createDatasource } from 'src/SqlLab/actions/sqlLab';
import { user, testQuery, mockdatasets } from 'src/SqlLab/fixtures';
import { FeatureFlag } from '@superset-ui/core';
import { FeatureFlag, SupersetClient } from '@superset-ui/core';
const mockedProps = {
visible: true,
@@ -354,6 +354,131 @@ describe('SaveDatasetModal', () => {
});
});
const setupOverwriteFlow = async () => {
// Select the "Overwrite existing" radio
await userEvent.click(
screen.getByRole('radio', { name: /overwrite existing/i }),
);
// Open the select to load existing-dataset options
await userEvent.click(
screen.getByRole('combobox', { name: /existing dataset/i }),
);
// Advance timers to flush debounced fetches in AsyncSelect
await act(async () => {
jest.runAllTimers();
});
// Wait for the loading indicator to clear
await waitFor(() => {
const loading = screen.queryByText('Loading...');
expect(loading === null || !loading.checkVisibility()).toBe(true);
});
// Pick an existing dataset (use the listbox item, not the input mirror)
const options = await screen.findAllByText('coolest table 0');
await userEvent.click(options[1]);
// First overwrite click → confirmation screen
await userEvent.click(screen.getByRole('button', { name: /overwrite/i }));
// Wait for the confirmation screen to render
await screen.findByText(/are you sure you want to overwrite this dataset/i);
// Second overwrite click → triggers the PUT
await userEvent.click(screen.getByRole('button', { name: /overwrite/i }));
};
test('sends template_params when overwriting a dataset with include template parameters checked', async () => {
// @ts-expect-error
global.featureFlags = {
[FeatureFlag.EnableTemplateProcessing]: true,
};
const putSpy = jest
.spyOn(SupersetClient, 'put')
.mockResolvedValue({ json: { result: { id: 0 } } } as any);
const dummyDispatch = jest.fn().mockResolvedValue({});
useDispatchMock.mockReturnValue(dummyDispatch);
useSelectorMock.mockReturnValue({ ...user });
const propsWithTemplateParam = {
...mockedProps,
datasource: {
...testQuery,
templateParams: JSON.stringify({ my_param: 12, _filters: 'foo' }),
},
};
render(<SaveDatasetModal {...propsWithTemplateParam} />, {
useRedux: true,
});
// Check the "Include Template Parameters" checkbox
await userEvent.click(screen.getByRole('checkbox'));
await setupOverwriteFlow();
await waitFor(() => {
expect(
putSpy.mock.calls.some(([req]) =>
req.endpoint?.includes('api/v1/dataset/'),
),
).toBe(true);
});
const datasetPutCall = putSpy.mock.calls.find(([req]) =>
req.endpoint?.includes('api/v1/dataset/'),
)!;
const [req] = datasetPutCall;
expect(req.endpoint).toContain('override_columns=true');
const body = JSON.parse(req.body as string);
// _filters should be stripped, but my_param should be preserved
expect(body.template_params).toEqual(JSON.stringify({ my_param: 12 }));
putSpy.mockRestore();
});
test('does not send template_params when overwriting a dataset with include template parameters unchecked', async () => {
// @ts-expect-error
global.featureFlags = {
[FeatureFlag.EnableTemplateProcessing]: true,
};
const putSpy = jest
.spyOn(SupersetClient, 'put')
.mockResolvedValue({ json: { result: { id: 0 } } } as any);
const dummyDispatch = jest.fn().mockResolvedValue({});
useDispatchMock.mockReturnValue(dummyDispatch);
useSelectorMock.mockReturnValue({ ...user });
const propsWithTemplateParam = {
...mockedProps,
datasource: {
...testQuery,
templateParams: JSON.stringify({ my_param: 12 }),
},
};
render(<SaveDatasetModal {...propsWithTemplateParam} />, {
useRedux: true,
});
// Do NOT check the "Include Template Parameters" checkbox
await setupOverwriteFlow();
await waitFor(() => {
expect(
putSpy.mock.calls.some(([req]) =>
req.endpoint?.includes('api/v1/dataset/'),
),
).toBe(true);
});
const datasetPutCall = putSpy.mock.calls.find(([req]) =>
req.endpoint?.includes('api/v1/dataset/'),
)!;
const [req] = datasetPutCall;
const body = JSON.parse(req.body as string);
expect(body.template_params).toBeUndefined();
putSpy.mockRestore();
});
test('clears dataset cache when creating new dataset', async () => {
const clearDatasetCache = jest.spyOn(
require('src/utils/cachedSupersetGet'),

View File

@@ -149,14 +149,25 @@ const Styles = styled.div`
}
`}
`;
const updateDataset = async (
dbId: number,
datasetId: number,
sql: string,
columns: Array<Record<string, any>>,
owners: [number],
overrideColumns: boolean,
) => {
type UpdateDatasetPayload = {
dbId: number;
datasetId: number;
sql: string;
columns: Array<Record<string, any>>;
owners: number[];
overrideColumns: boolean;
templateParams?: string;
};
const updateDataset = async ({
dbId,
datasetId,
sql,
columns,
owners,
overrideColumns,
templateParams,
}: UpdateDatasetPayload) => {
const endpoint = `api/v1/dataset/${datasetId}?override_columns=${overrideColumns}`;
const headers = { 'Content-Type': 'application/json' };
const body = JSON.stringify({
@@ -164,6 +175,7 @@ const updateDataset = async (
columns,
owners,
database_id: dbId,
...(templateParams !== undefined && { template_params: templateParams }),
});
const data: JsonResponse = await SupersetClient.put({
@@ -179,6 +191,26 @@ const updateDataset = async (
const UNTITLED = t('Untitled Dataset');
// The filters param is only used to test jinja templates.
// Remove the special filters entry from the templateParams
// before saving the dataset.
const sanitizeTemplateParams = (
templateParams: string | object | null | undefined,
): string | undefined => {
if (typeof templateParams !== 'string') {
return undefined;
}
try {
const parsed = JSON.parse(templateParams) as Record<string, unknown>;
// Remove the special _filters entry — it is only used to test jinja templates.
const { _filters: _ignored, ...clean } = parsed;
return JSON.stringify(clean);
} catch (e) {
// malformed templateParams, do not include it
return undefined;
}
};
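A standalone run of `sanitizeTemplateParams` (logic copied verbatim from the helper above) showing the three cases the type guard and try/catch cover:

```typescript
const sanitizeTemplateParams = (
  templateParams: string | object | null | undefined,
): string | undefined => {
  if (typeof templateParams !== 'string') {
    return undefined;
  }
  try {
    const parsed = JSON.parse(templateParams) as Record<string, unknown>;
    // Remove the special _filters entry — it is only used to test jinja templates.
    const { _filters: _ignored, ...clean } = parsed;
    return JSON.stringify(clean);
  } catch (e) {
    // malformed templateParams, do not include it
    return undefined;
  }
};

console.log(sanitizeTemplateParams('{"my_param":12,"_filters":"foo"}')); // {"my_param":12}
console.log(sanitizeTemplateParams('not valid json')); // undefined
console.log(sanitizeTemplateParams({ my_param: 12 })); // undefined (only strings are accepted)
```

Returning `undefined` for every non-happy path is what makes the `...(templateParams !== undefined && { template_params: templateParams })` spread in `updateDataset` omit the key entirely instead of sending a null or malformed value.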
export const SaveDatasetModal = ({
visible,
onHide,
@@ -232,22 +264,27 @@ export const SaveDatasetModal = ({
}
setLoading(true);
const templateParams = includeTemplateParameters
? sanitizeTemplateParams(datasource?.templateParams)
: undefined;
try {
const [, key] = await Promise.all([
updateDataset(
datasource?.dbId,
datasetToOverwrite?.datasetid,
datasource?.sql,
datasource?.columns?.map(
updateDataset({
dbId: datasource?.dbId,
datasetId: datasetToOverwrite?.datasetid,
sql: datasource?.sql,
columns: datasource?.columns?.map(
(d: { column_name: string; type: string; is_dttm: boolean }) => ({
column_name: d.column_name,
type: d.type,
is_dttm: d.is_dttm,
}),
),
datasetToOverwrite?.owners?.map((o: DatasetOwner) => o.id),
true,
),
owners: datasetToOverwrite?.owners?.map((o: DatasetOwner) => o.id),
overrideColumns: true,
templateParams,
}),
postFormData(datasetToOverwrite.datasetid, 'table', {
...formDataWithDefaults,
datasource: `${datasetToOverwrite.datasetid}__table`,
@@ -319,27 +356,9 @@ export const SaveDatasetModal = ({
setLoading(true);
const selectedColumns = datasource?.columns ?? [];
// The filters param is only used to test jinja templates.
// Remove the special filters entry from the templateParams
// before saving the dataset.
let templateParams;
if (
typeof datasource?.templateParams === 'string' &&
includeTemplateParameters
) {
try {
const p = JSON.parse(datasource.templateParams);
/* eslint-disable-next-line no-underscore-dangle */
if (p._filters) {
/* eslint-disable-next-line no-underscore-dangle */
delete p._filters;
}
templateParams = JSON.stringify(p);
} catch (e) {
// malformed templateParams, do not include it
templateParams = undefined;
}
}
const templateParams = includeTemplateParameters
? sanitizeTemplateParams(datasource?.templateParams)
: undefined;
dispatch(
createDatasource({

View File

@@ -251,7 +251,7 @@ describe('useTables hook', () => {
fetchMock.get(`glob:*/api/v1/database/${expectDbId}/schemas/*`, {
result: fakeSchemaApiResult,
});
const { result, waitFor } = renderHook(
const { result } = renderHook(
() =>
useTables({
dbId: expectDbId,

View File

@@ -22,6 +22,11 @@ import {
createListenerMiddleware,
StoreEnhancer,
} from '@reduxjs/toolkit';
import {
useDispatch,
useSelector,
type TypedUseSelectorHook,
} from 'react-redux';
import thunk from 'redux-thunk';
import { api } from 'src/hooks/apiResources/queryApi';
import messageToastReducer from 'src/components/MessageToasts/reducers';
@@ -177,3 +182,12 @@ export function setupStore({
export const store = setupStore();
export type RootState = ReturnType<typeof store.getState>;
// Typed Redux hooks. Prefer these over the raw `useDispatch` / `useSelector`
// from react-redux: `useAppDispatch` understands the store's middleware (so
// thunks resolve correctly), and `useAppSelector` infers `RootState` without
// callers having to annotate every selector. Required ahead of the
// react-redux v8+ bump, which tightens dispatch typing — see #39927.
export type AppDispatch = typeof store.dispatch;
export const useAppDispatch: () => AppDispatch = useDispatch;
export const useAppSelector: TypedUseSelectorHook<RootState> = useSelector;

View File

@@ -49,8 +49,9 @@ from superset.datasets.schemas import ImportV1DatasetSchema
from superset.extensions import feature_flag_manager
from superset.migrations.shared.native_filters import migrate_dashboard
from superset.models.core import Database
from superset.models.dashboard import dashboard_slices
from superset.models.dashboard import Dashboard, dashboard_slices
from superset.models.slice import Slice
from superset.models.sql_lab import SavedQuery
from superset.queries.saved_queries.schemas import ImportV1SavedQuerySchema
from superset.utils.decorators import on_error, transaction
@@ -89,6 +90,9 @@ class ImportAssetsCommand(BaseCommand):
)
self._configs: dict[str, Any] = {}
self.sparse = kwargs.get("sparse", False)
# Defaults to ``True`` for backwards compatibility: historically this
# command always overwrote existing assets.
self.overwrite: bool = kwargs.get("overwrite", True)
# pylint: disable=too-many-locals
@staticmethod
@@ -96,6 +100,7 @@ class ImportAssetsCommand(BaseCommand):
configs: dict[str, Any],
sparse: bool = False,
contents: Optional[dict[str, Any]] = None,
overwrite: bool = True,
) -> None:
contents = {} if contents is None else contents
# import databases first
@@ -116,20 +121,20 @@ class ImportAssetsCommand(BaseCommand):
for file_name, config in configs.items():
if file_name.startswith("databases/"):
database = import_database(config, overwrite=True)
database = import_database(config, overwrite=overwrite)
database_ids[str(database.uuid)] = database.id
# import saved queries
for file_name, config in configs.items():
if file_name.startswith("queries/"):
config["db_id"] = database_ids[config["database_uuid"]]
import_saved_query(config, overwrite=True)
import_saved_query(config, overwrite=overwrite)
# import datasets
for file_name, config in configs.items():
if file_name.startswith("datasets/"):
config["database_id"] = database_ids[config["database_uuid"]]
dataset = import_dataset(config, overwrite=True)
dataset = import_dataset(config, overwrite=overwrite)
dataset_info[str(dataset.uuid)] = {
"datasource_id": dataset.id,
"datasource_type": dataset.datasource_type,
@@ -142,7 +147,7 @@ class ImportAssetsCommand(BaseCommand):
if file_name.startswith("charts/"):
dataset_dict = dataset_info[config["dataset_uuid"]]
config = update_chart_config_dataset(config, dataset_dict)
chart = import_chart(config, overwrite=True)
chart = import_chart(config, overwrite=overwrite)
charts.append(chart)
chart_ids[str(chart.uuid)] = chart.id
@@ -157,7 +162,7 @@ class ImportAssetsCommand(BaseCommand):
for file_name, config in configs.items():
if file_name.startswith("dashboards/"):
config = update_id_refs(config, chart_ids, dataset_info)
dashboard = import_dashboard(config, overwrite=True)
dashboard = import_dashboard(config, overwrite=overwrite)
# set ref in the dashboard_slices table
dashboard_chart_ids: list[dict[str, int]] = []
@@ -206,7 +211,73 @@ class ImportAssetsCommand(BaseCommand):
)
def run(self) -> None:
self.validate()
self._import(self._configs, self.sparse, self.contents)
self._import(self._configs, self.sparse, self.contents, self.overwrite)
# Maps asset file prefixes to the model class used to look up UUIDs for
# the "already exists" validation check when ``overwrite`` is ``False``.
_MODEL_BY_PREFIX: dict[str, Any] = {
"databases/": Database,
"datasets/": SqlaTable,
"charts/": Slice,
"dashboards/": Dashboard,
"queries/": SavedQuery,
}
def _bundle_entries_by_prefix(self) -> dict[str, list[tuple[str, str]]]:
"""Group ``(file_name, uuid)`` pairs from the bundle by asset prefix."""
bundle_by_prefix: dict[str, list[tuple[str, str]]] = {
prefix: [] for prefix in self._MODEL_BY_PREFIX
}
for file_name, config in self._configs.items():
uuid = config.get("uuid")
if not uuid:
continue
for prefix in bundle_by_prefix:
if file_name.startswith(prefix):
bundle_by_prefix[prefix].append((file_name, str(uuid)))
break
return bundle_by_prefix
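A plain-dict illustration of the grouping step above (file names and UUIDs are made up). Prefixes with no matching file stay as empty lists, which is what lets the validation below skip their tables entirely:

```python
PREFIXES = ["databases/", "datasets/", "charts/", "dashboards/", "queries/"]


def bundle_entries_by_prefix(configs):
    """Group (file_name, uuid) pairs by asset prefix, skipping uuid-less files."""
    grouped = {prefix: [] for prefix in PREFIXES}
    for file_name, config in configs.items():
        uuid = config.get("uuid")
        if not uuid:
            continue
        for prefix in grouped:
            if file_name.startswith(prefix):
                grouped[prefix].append((file_name, str(uuid)))
                break
    return grouped


configs = {
    "databases/examples.yaml": {"uuid": "db-1"},
    "charts/sales.yaml": {"uuid": "chart-1"},
    "metadata.yaml": {"version": "1.0.0"},  # no uuid: skipped
}
grouped = bundle_entries_by_prefix(configs)
print(grouped["databases/"])  # [('databases/examples.yaml', 'db-1')]
print(grouped["queries/"])    # []
```

Each non-empty group then feeds one `WHERE uuid IN (...)` query against its model's table, so the lookup cost tracks the bundle size rather than the number of stored assets.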
def _prevent_overwrite_existing_assets(
self, exceptions: list[ValidationError]
) -> None:
"""
When ``overwrite`` is ``False``, raise a clear validation error for any
asset in the bundle whose UUID already exists in the database.
Only the UUIDs present in the import bundle are queried (per prefix),
so the cost scales with the bundle size rather than with the total
number of stored assets.
"""
if self.overwrite:
return
for prefix, entries in self._bundle_entries_by_prefix().items():
if not entries:
continue
model_cls = self._MODEL_BY_PREFIX[prefix]
incoming_uuids = [uuid for _, uuid in entries]
existing_uuids = {
str(uuid)
for (uuid,) in db.session.query(model_cls.uuid)
.filter(model_cls.uuid.in_(incoming_uuids))
.all()
}
if not existing_uuids:
continue
model_name = model_cls.__name__
for file_name, uuid in entries:
if uuid in existing_uuids:
exceptions.append(
ValidationError(
{
file_name: (
f"{model_name} already exists "
"and `overwrite=true` was not passed"
),
}
)
)
def validate(self) -> None:
exceptions: list[ValidationError] = []
@@ -229,6 +300,7 @@ class ImportAssetsCommand(BaseCommand):
self.ssh_tunnel_priv_key_passwords,
self.encrypted_extra_secrets,
)
self._prevent_overwrite_existing_assets(exceptions)
if exceptions:
raise CommandInvalidError(

View File

@@ -30,6 +30,7 @@ from superset.commands.importers.v1.assets import ImportAssetsCommand
from superset.commands.importers.v1.utils import get_contents_from_bundle
from superset.extensions import event_logger
from superset.utils import json
from superset.utils.core import parse_boolean_string
from superset.views.base_api import BaseSupersetApi, requires_form_data, statsd_metrics
@@ -157,6 +158,12 @@ class ImportExportRestApi(BaseSupersetApi):
sparse:
description: allow sparse update of resources
type: boolean
overwrite:
description: >-
Whether to overwrite existing assets. Defaults to ``true`` for
backwards compatibility; when ``false``, the import fails if
any of the assets already exist.
type: boolean
responses:
200:
description: Assets import result
@@ -188,6 +195,9 @@ class ImportExportRestApi(BaseSupersetApi):
if not contents:
raise NoValidFilesFoundError()
sparse = request.form.get("sparse") == "true"
# Defaults to True for backwards compatibility: historically this
# endpoint always overwrote existing assets.
overwrite = parse_boolean_string(request.form.get("overwrite", "true"))
passwords = (
json.loads(request.form["passwords"])
@@ -218,6 +228,7 @@ class ImportExportRestApi(BaseSupersetApi):
command = ImportAssetsCommand(
contents,
sparse=sparse,
overwrite=overwrite,
passwords=passwords,
ssh_tunnel_passwords=ssh_tunnel_passwords,
ssh_tunnel_private_keys=ssh_tunnel_private_keys,

View File

@@ -78,6 +78,15 @@ FEATURE_FLAGS = {
WEBDRIVER_BASEURL = "http://0.0.0.0:8081/"
# Enable CORS for embedded dashboard E2E tests (test app on port 9000)
ENABLE_CORS = True
CORS_OPTIONS: dict = {
"origins": [
"http://localhost:9000",
],
"supports_credentials": True,
}
def GET_FEATURE_FLAGS_FUNC(ff): # noqa: N802
ff_copy = copy(ff)
@@ -86,6 +95,7 @@ def GET_FEATURE_FLAGS_FUNC(ff): # noqa: N802
TESTING = True
TALISMAN_ENABLED = False
WTF_CSRF_ENABLED = False
FAB_ROLES = {"TestRole": [["Security", "menu_access"], ["List Users", "menu_access"]]}

View File

@@ -19,6 +19,7 @@ import copy
from typing import Any, cast
import yaml
from marshmallow.exceptions import ValidationError
from pytest_mock import MockerFixture
from sqlalchemy.orm.session import Session
from sqlalchemy.sql import select
@@ -32,6 +33,18 @@ from tests.unit_tests.fixtures.assets_configs import (
datasets_config,
)
saved_queries_config: dict[str, Any] = {
"queries/examples/my_query.yaml": {
"schema": "main",
"label": "My saved query",
"description": None,
"sql": "SELECT 1",
"uuid": "e3e4f1f0-5c9d-4a4c-a4e4-0000000000aa",
"version": "1.0.0",
"database_uuid": "a2dc77af-e654-49bb-b321-40f6b559a1ee",
},
}
def test_import_new_assets(mocker: MockerFixture, session: Session) -> None:
"""
@@ -227,6 +240,309 @@ def test_import_assets_skips_tags_when_feature_disabled(
assert db.session.query(TaggedObject).count() == 0
def test_import_overwrite_defaults_to_true(session: Session) -> None:
"""
``ImportAssetsCommand.overwrite`` defaults to ``True`` for backwards
compatibility — historically the command always overwrote existing assets.
"""
from superset.commands.importers.v1.assets import ImportAssetsCommand
command = ImportAssetsCommand({})
assert command.overwrite is True
explicit_false = ImportAssetsCommand({}, overwrite=False)
assert explicit_false.overwrite is False
def test_import_threads_overwrite_flag(mocker: MockerFixture, session: Session) -> None:
"""
``overwrite`` must be threaded through to ``import_database``,
``import_saved_query``, ``import_dataset``, ``import_chart`` and
``import_dashboard``. Previously these were hard-coded to ``overwrite=True``
which caused the API flag to be ignored.
"""
from superset import security_manager
from superset.commands.importers.v1 import assets as assets_module
from superset.commands.importers.v1.assets import ImportAssetsCommand
mocker.patch.object(security_manager, "can_access", return_value=True)
mocked_db = mocker.patch.object(assets_module, "import_database")
mocked_db.return_value.uuid = "a2dc77af-e654-49bb-b321-40f6b559a1ee"
mocked_db.return_value.id = 1
mocked_ds = mocker.patch.object(assets_module, "import_dataset")
mocked_ds.return_value.uuid = "53d47c0c-c03d-47f0-b9ac-81225f808283"
mocked_ds.return_value.id = 1
mocked_ds.return_value.datasource_type = "table"
mocked_ds.return_value.table_name = "video_game_sales"
mocked_chart = mocker.patch.object(assets_module, "import_chart")
mocked_chart.return_value.viz_type = "table"
mocked_dash = mocker.patch.object(assets_module, "import_dashboard")
mocker.patch.object(assets_module, "find_chart_uuids", return_value=[])
mocker.patch.object(assets_module, "update_id_refs", side_effect=lambda c, *_: c)
mocker.patch.object(assets_module, "migrate_dashboard")
mocker.patch("superset.db.session.execute")
configs = {
**copy.deepcopy(databases_config),
**copy.deepcopy(datasets_config),
**copy.deepcopy(charts_config_1),
**copy.deepcopy(dashboards_config_1),
}
ImportAssetsCommand._import(configs, overwrite=False)
assert mocked_db.called
for call in mocked_db.call_args_list:
assert call.kwargs["overwrite"] is False
for call in mocked_ds.call_args_list:
assert call.kwargs["overwrite"] is False
for call in mocked_chart.call_args_list:
assert call.kwargs["overwrite"] is False
for call in mocked_dash.call_args_list:
assert call.kwargs["overwrite"] is False
def test_prevent_overwrite_flags_existing_assets(
mocker: MockerFixture, session: Session
) -> None:
"""
With ``overwrite=False``, ``_prevent_overwrite_existing_assets`` must
surface a clear ``ValidationError`` for each asset whose UUID already
exists in the database.
"""
from superset import db, security_manager
from superset.commands.importers.v1.assets import ImportAssetsCommand
from superset.models.slice import Slice
mocker.patch.object(security_manager, "can_access", return_value=True)
engine = db.session.get_bind()
Slice.metadata.create_all(engine) # pylint: disable=no-member
# seed the database with the fixture assets
seed_configs = {
**copy.deepcopy(databases_config),
**copy.deepcopy(datasets_config),
**copy.deepcopy(charts_config_1),
**copy.deepcopy(dashboards_config_1),
}
ImportAssetsCommand._import(seed_configs)
command = ImportAssetsCommand({}, overwrite=False)
command._configs = {
**copy.deepcopy(databases_config),
**copy.deepcopy(datasets_config),
**copy.deepcopy(charts_config_1),
**copy.deepcopy(dashboards_config_1),
}
exceptions: list[ValidationError] = []
command._prevent_overwrite_existing_assets(exceptions)
# one exception for each of the seeded assets (db + datasets + charts + dashboards)
expected_count = (
len(databases_config)
+ len(datasets_config)
+ len(charts_config_1)
+ len(dashboards_config_1)
)
assert len(exceptions) == expected_count
for exc in exceptions:
assert isinstance(exc, ValidationError)
[(_, message)] = exc.messages.items()
assert "already exists" in message
assert "`overwrite=true` was not passed" in message
def test_prevent_overwrite_allows_new_assets(
mocker: MockerFixture, session: Session
) -> None:
"""
With ``overwrite=False`` and no conflicting UUIDs in the database, the
validation step must not raise.
"""
from superset import db, security_manager
from superset.commands.importers.v1.assets import ImportAssetsCommand
from superset.models.slice import Slice
mocker.patch.object(security_manager, "can_access", return_value=True)
engine = db.session.get_bind()
Slice.metadata.create_all(engine) # pylint: disable=no-member
command = ImportAssetsCommand({}, overwrite=False)
command._configs = {
**copy.deepcopy(databases_config),
**copy.deepcopy(datasets_config),
**copy.deepcopy(charts_config_1),
**copy.deepcopy(dashboards_config_1),
}
exceptions: list[ValidationError] = []
command._prevent_overwrite_existing_assets(exceptions)
assert exceptions == []
def test_prevent_overwrite_noop_when_overwrite_true(
mocker: MockerFixture, session: Session
) -> None:
"""
With ``overwrite=True`` (the default) the "already exists" validation must
be a no-op even when assets exist in the database — this preserves the
historical behavior.
"""
from superset import db, security_manager
from superset.commands.importers.v1.assets import ImportAssetsCommand
from superset.models.slice import Slice
mocker.patch.object(security_manager, "can_access", return_value=True)
engine = db.session.get_bind()
Slice.metadata.create_all(engine) # pylint: disable=no-member
seed_configs = {
**copy.deepcopy(databases_config),
**copy.deepcopy(datasets_config),
**copy.deepcopy(charts_config_1),
**copy.deepcopy(dashboards_config_1),
}
ImportAssetsCommand._import(seed_configs)
command = ImportAssetsCommand({}) # overwrite defaults to True
command._configs = copy.deepcopy(seed_configs)
exceptions: list[ValidationError] = []
command._prevent_overwrite_existing_assets(exceptions)
assert exceptions == []
def test_prevent_overwrite_flags_existing_saved_queries(
mocker: MockerFixture, session: Session
) -> None:
"""
Saved queries (``queries/`` prefix) must also be covered by the
"already exists" validation when ``overwrite=False`` — otherwise
``import_saved_query`` silently returns existing rows and the endpoint
would appear to succeed despite the conflict.
"""
from superset import db, security_manager
from superset.commands.importers.v1.assets import ImportAssetsCommand
from superset.models.slice import Slice
from superset.models.sql_lab import SavedQuery
mocker.patch.object(security_manager, "can_access", return_value=True)
engine = db.session.get_bind()
Slice.metadata.create_all(engine) # pylint: disable=no-member
SavedQuery.metadata.create_all(engine) # pylint: disable=no-member
# seed a saved query with a UUID that matches the fixture below
saved_query_uuid = next(iter(saved_queries_config.values()))["uuid"]
db.session.add(SavedQuery(uuid=saved_query_uuid, label="seeded"))
db.session.flush()
command = ImportAssetsCommand({}, overwrite=False)
command._configs = copy.deepcopy(saved_queries_config)
exceptions: list[ValidationError] = []
command._prevent_overwrite_existing_assets(exceptions)
assert len(exceptions) == 1
[(file_name, message)] = exceptions[0].messages.items()
assert file_name.startswith("queries/")
assert "SavedQuery already exists" in message


def test_prevent_overwrite_partial_conflict(
    mocker: MockerFixture, session: Session
) -> None:
    """
    When only some of the incoming assets already exist, validation must flag
    exactly the conflicting ones and leave brand-new assets untouched.
    """
    from superset import db, security_manager
    from superset.commands.importers.v1.assets import ImportAssetsCommand
    from superset.models.slice import Slice

    mocker.patch.object(security_manager, "can_access", return_value=True)
    engine = db.session.get_bind()
    Slice.metadata.create_all(engine)  # pylint: disable=no-member

    # seed only databases + datasets; charts and dashboards stay new
    ImportAssetsCommand._import(
        {
            **copy.deepcopy(databases_config),
            **copy.deepcopy(datasets_config),
        }
    )

    command = ImportAssetsCommand({}, overwrite=False)
    command._configs = {
        **copy.deepcopy(databases_config),
        **copy.deepcopy(datasets_config),
        **copy.deepcopy(charts_config_1),
        **copy.deepcopy(dashboards_config_1),
    }
    exceptions: list[ValidationError] = []
    command._prevent_overwrite_existing_assets(exceptions)

    flagged_files = {next(iter(exc.messages)) for exc in exceptions}
    assert flagged_files == set(databases_config) | set(datasets_config)


def test_prevent_overwrite_queries_only_bundle_uuids(
    mocker: MockerFixture, session: Session
) -> None:
    """
    The validation must scope its UUID lookup to the UUIDs present in the
    import bundle (one ``WHERE uuid IN (...)`` query per prefix that has
    incoming entries) and skip prefixes with no entries entirely. Otherwise
    every import with ``overwrite=false`` would scan all asset tables in
    full, regardless of bundle size.
    """
    from superset import db, security_manager
    from superset.commands.importers.v1.assets import ImportAssetsCommand
    from superset.connectors.sqla.models import SqlaTable
    from superset.models.core import Database
    from superset.models.dashboard import Dashboard
    from superset.models.slice import Slice
    from superset.models.sql_lab import SavedQuery

    mocker.patch.object(security_manager, "can_access", return_value=True)
    engine = db.session.get_bind()
    Slice.metadata.create_all(engine)  # pylint: disable=no-member
    SavedQuery.metadata.create_all(engine)  # pylint: disable=no-member

    # bundle only contains a database — no datasets/charts/dashboards/queries
    bundle = copy.deepcopy(databases_config)
    spy = mocker.spy(db.session, "query")

    command = ImportAssetsCommand({}, overwrite=False)
    command._configs = bundle
    exceptions: list[ValidationError] = []
    command._prevent_overwrite_existing_assets(exceptions)

    # exactly one UUID query — for the only prefix with bundle entries — and
    # it targets the Database UUID column. Empty-bundle prefixes (datasets/
    # charts/dashboards/queries) must not be queried at all, otherwise this
    # validation degrades to a full-table scan per asset type.
    queried_columns = [
        call.args[0]
        for call in spy.call_args_list
        if call.args and getattr(call.args[0], "key", None) == "uuid"
    ]
    assert len(queried_columns) == 1
    assert queried_columns[0].class_ is Database
    queried_models = {col.class_ for col in queried_columns}
    for model_cls in (SqlaTable, Slice, Dashboard, SavedQuery):
        assert model_cls not in queried_models

    # no row matches in an empty table, so no validation errors are raised
    assert exceptions == []


def test_import_removes_dashboard_charts(
    mocker: MockerFixture, session: Session
) -> None:


@@ -48,7 +48,9 @@ def test_export_assets(
     mocked_export_result = [
         (
             "metadata.yaml",
-            lambda: "version: 1.0.0\ntype: assets\ntimestamp: '2022-01-01T00:00:00+00:00'\n",  # noqa: E501
+            lambda: (
+                "version: 1.0.0\ntype: assets\ntimestamp: '2022-01-01T00:00:00+00:00'\n"
+            ),  # noqa: E501
         ),
         ("databases/example.yaml", lambda: "<DATABASE CONTENTS>"),
     ]
@@ -109,6 +111,7 @@ def test_import_assets(
     ImportAssetsCommand.assert_called_with(
         mocked_contents,
         sparse=False,
+        overwrite=True,
         passwords=passwords,
         ssh_tunnel_passwords=None,
         ssh_tunnel_private_keys=None,
@@ -160,6 +163,7 @@ def test_import_assets_with_encrypted_extra_secrets(
     ImportAssetsCommand.assert_called_with(
         mocked_contents,
         sparse=False,
+        overwrite=True,
         passwords=None,
         ssh_tunnel_passwords=None,
         ssh_tunnel_private_keys=None,
@@ -168,6 +172,54 @@ def test_import_assets_with_encrypted_extra_secrets(
     )


+def test_import_assets_overwrite_false(
+    mocker: MockerFixture,
+    client: Any,
+    full_api_access: None,
+) -> None:
+    """
+    Passing ``overwrite=false`` on the form must be forwarded to
+    ``ImportAssetsCommand``. Previously the flag was ignored and assets were
+    always overwritten.
+    """
+    mocked_contents = {
+        "metadata.yaml": (
+            "version: 1.0.0\ntype: assets\ntimestamp: '2022-01-01T00:00:00+00:00'\n"
+        ),
+        "databases/example.yaml": "<DATABASE CONTENTS>",
+    }
+
+    ImportAssetsCommand = mocker.patch("superset.importexport.api.ImportAssetsCommand")  # noqa: N806
+
+    root = Path("assets_export")
+    buf = BytesIO()
+    with ZipFile(buf, "w") as bundle:
+        for path, contents in mocked_contents.items():
+            with bundle.open(str(root / path), "w") as fp:
+                fp.write(contents.encode())
+    buf.seek(0)
+
+    form_data = {
+        "bundle": (buf, "assets_export.zip"),
+        "overwrite": "false",
+    }
+    response = client.post(
+        "/api/v1/assets/import/", data=form_data, content_type="multipart/form-data"
+    )
+    assert response.status_code == 200
+
+    ImportAssetsCommand.assert_called_with(
+        mocked_contents,
+        sparse=False,
+        overwrite=False,
+        passwords=None,
+        ssh_tunnel_passwords=None,
+        ssh_tunnel_private_keys=None,
+        ssh_tunnel_priv_key_passwords=None,
+        encrypted_extra_secrets=None,
+    )
+
+
 def test_import_assets_not_zip(
     mocker: MockerFixture,
     client: Any,