Compare commits

...

9 Commits

Author SHA1 Message Date
Joe Li
d6d1f3588a docs: document APP_ICON deprecation in favor of theme-based brandLogoUrl
The APP_ICON configuration variable was deprecated in PR #31590 (Ant Design v5 theming overhaul), but this breaking change was not documented in UPDATING.md. Users were confused when their custom logos stopped working in Superset 6.0.0rc1.

Added clear migration instructions showing how to replace APP_ICON with the new THEME_DEFAULT.token.brandLogoUrl configuration.

Resolves misconceptions about APP_ICON functionality mentioned in issue #34824.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2026-02-12 14:04:08 -08:00
Pat Buxton
58d245c6b0 chore(deps): Update sqlalchemy-utils to 0.42.0 (#36240) 2026-02-12 12:39:06 -08:00
Jean Massucatto
dbf5e1f131 feat(theme): use IBM Plex Mono for code and numerical displays (#37366)
Co-authored-by: Mehmet Salih Yavuz <salih.yavuz@proton.me>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 09:32:41 -08:00
Jonathan Alberth Quispe Fuentes
88ce1425e2 fix(roles): optimize user fetching and resolve N+1 query issue (#37235) 2026-02-12 09:32:19 -08:00
Amin Ghadersohi
4dfece9ee5 feat(mcp): add event_logger instrumentation to MCP tools (#37859) 2026-02-12 16:50:20 +01:00
Amin Ghadersohi
3f64c25712 fix(mcp): Add database_name as valid filter column for list_datasets (#37865)
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-12 16:47:46 +01:00
dependabot[bot]
afacca350f chore(deps-dev): bump oxlint from 1.42.0 to 1.46.0 in /superset-frontend (#37917)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: hainenber <dotronghai96@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: hainenber <dotronghai96@gmail.com>
2026-02-12 21:45:26 +07:00
dependabot[bot]
30ccbb2e05 chore(deps): update @types/geojson requirement from ^7946.0.10 to ^7946.0.16 in /superset-frontend/plugins/plugin-chart-cartodiagram (#37908)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2026-02-12 20:59:28 +07:00
Michael S. Molina
19ec7b48a0 fix: Conditional formatting painting empty cells (#37894) 2026-02-12 10:22:00 -03:00
59 changed files with 1737 additions and 970 deletions

View File

@@ -24,6 +24,42 @@ assists people when migrating to a new version.
## Next
### MCP Tool Observability
MCP (Model Context Protocol) tools now include enhanced observability instrumentation for monitoring and debugging:
**Two-layer instrumentation:**
1. **Middleware layer** (`LoggingMiddleware`): Automatically logs all MCP tool calls with `duration_ms` and `success` status in the audit log (Action Log UI, logs table)
2. **Sub-operation tracking**: All 19 MCP tools include granular `event_logger.log_context()` blocks for tracking individual operations like validation, database writes, and query execution
**Action naming convention:**
- Tool-level logs: `mcp_tool_call` (via middleware)
- Sub-operation logs: `mcp.{tool_name}.{operation}` (e.g., `mcp.generate_chart.validation`, `mcp.execute_sql.query_execution`)
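For tool authors, the sub-operation pattern is a plain `event_logger.log_context` block, as in this minimal sketch (mirroring the `generate_chart` changes later in this compare; `my_tool`, `validate_request`, and `write_chart` are hypothetical placeholders):
```python
from superset.extensions import event_logger

async def my_tool(request):
    # Action names follow the mcp.{tool_name}.{operation} convention above
    with event_logger.log_context(action="mcp.my_tool.validation"):
        validate_request(request)  # hypothetical validation helper
    with event_logger.log_context(action="mcp.my_tool.db_write"):
        write_chart(request)  # hypothetical persistence helper
```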
**Querying MCP logs:**
```sql
-- Top slowest MCP operations
SELECT action, COUNT(*) as calls, AVG(duration_ms) as avg_ms
FROM logs
WHERE action LIKE 'mcp.%'
GROUP BY action
ORDER BY avg_ms DESC
LIMIT 20;
-- MCP tool success rate
SELECT
json_extract(curated_payload, '$.tool') as tool,
COUNT(*) as total_calls,
SUM(CASE WHEN json_extract(curated_payload, '$.success') = 'true' THEN 1 ELSE 0 END) as successful,
ROUND(100.0 * SUM(CASE WHEN json_extract(curated_payload, '$.success') = 'true' THEN 1 ELSE 0 END) / COUNT(*), 2) as success_rate
FROM logs
WHERE action = 'mcp_tool_call'
GROUP BY tool
ORDER BY total_calls DESC;
```
**Security note:** Sensitive parameters (passwords, API keys, tokens) are automatically redacted in logs as `[REDACTED]`.
### Signal Cache Backend
A new `SIGNAL_CACHE_CONFIG` configuration provides a unified Redis-based backend for real-time coordination features in Superset. This backend enables:
@@ -197,6 +233,18 @@ See `superset/mcp_service/PRODUCTION.md` for deployment guides.
---
- [35621](https://github.com/apache/superset/pull/35621): The default hash algorithm has changed from MD5 to SHA-256 for improved security and FedRAMP compliance. This affects cache keys for thumbnails, dashboard digests, chart digests, and filter option names. Existing cached data will be invalidated upon upgrade. To opt out of this change and maintain backward compatibility, set `HASH_ALGORITHM = "md5"` in your `superset_config.py`.
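For example, in `superset_config.py`:
```python
# Opt out of SHA-256 and keep MD5-based cache keys (see PR #35621 above)
HASH_ALGORITHM = "md5"
```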
- [31590](https://github.com/apache/superset/pull/31590): The `APP_ICON` configuration variable is now deprecated and ignored by the frontend. Custom logos should now be configured using the theme system with `brandLogoUrl`. To migrate, replace:
```python
APP_ICON = "/static/assets/images/custom_logo.png"
```
with:
```python
THEME_DEFAULT = {
"token": {
"brandLogoUrl": "/static/assets/images/custom_logo.png"
}
}
```
- [35062](https://github.com/apache/superset/pull/35062): Changed the function signature of `setupExtensions` to `setupCodeOverrides` with options as arguments.
### Breaking Changes

View File

@@ -48,6 +48,7 @@
"@emotion/react": "^11.13.3",
"@emotion/styled": "^11.14.1",
"@fontsource/fira-code": "^5.2.7",
"@fontsource/ibm-plex-mono": "^5.2.7",
"@fontsource/inter": "^5.2.8",
"@mdx-js/react": "^3.1.1",
"@saucelabs/theme-github-codeblock": "^0.3.0",

View File

@@ -2446,6 +2446,11 @@
resolved "https://registry.npmjs.org/@fontsource/fira-code/-/fira-code-5.2.7.tgz"
integrity sha512-tnB9NNund9TwIym8/7DMJe573nlPEQb+fKUV5GL8TBYXjIhDvL0D7mgmNVNQUPhXp+R7RylQeiBdkA4EbOHPGQ==
"@fontsource/ibm-plex-mono@^5.2.7":
version "5.2.7"
resolved "https://registry.yarnpkg.com/@fontsource/ibm-plex-mono/-/ibm-plex-mono-5.2.7.tgz#ef5b6f052115fdf6666208a5f8a0f13fcd7ba1fd"
integrity sha512-MKAb8qV+CaiMQn2B0dIi1OV3565NYzp3WN5b4oT6LTkk+F0jR6j0ZN+5BKJiIhffDC3rtBULsYZE65+0018z9w==
"@fontsource/inter@^5.2.8":
version "5.2.8"
resolved "https://registry.npmjs.org/@fontsource/inter/-/inter-5.2.8.tgz"

View File

@@ -99,7 +99,7 @@ dependencies = [
"simplejson>=3.15.0",
"slack_sdk>=3.19.0, <4",
"sqlalchemy>=1.4, <2",
"sqlalchemy-utils>=0.38.3, <0.39",
"sqlalchemy-utils>=0.42.0, <0.43",
"sqlglot>=28.10.0, <29",
# newer pandas needs 0.9+
"tabulate>=0.9.0, <1.0",

View File

@@ -399,7 +399,7 @@ sqlalchemy==1.4.54
# marshmallow-sqlalchemy
# shillelagh
# sqlalchemy-utils
sqlalchemy-utils==0.38.3
sqlalchemy-utils==0.42.0
# via
# apache-superset (pyproject.toml)
# apache-superset-core

View File

@@ -990,7 +990,7 @@ sqlalchemy==1.4.54
# sqlalchemy-utils
sqlalchemy-bigquery==1.15.0
# via apache-superset
sqlalchemy-utils==0.38.3
sqlalchemy-utils==0.42.0
# via
# -c requirements/base-constraint.txt
# apache-superset

View File

@@ -45,7 +45,7 @@ dependencies = [
"flask-appbuilder>=5.0.2,<6",
"pydantic>=2.8.0",
"sqlalchemy>=1.4.0,<2.0",
"sqlalchemy-utils>=0.38.0",
"sqlalchemy-utils>=0.42.0",
"sqlglot>=28.10.0, <29",
"typing-extensions>=4.0.0",
]

View File

@@ -136,7 +136,7 @@
"react/jsx-no-bind": "off",
"react/jsx-props-no-spreading": "off",
"react/jsx-boolean-value": ["error", "never", { "always": [] }],
"react/jsx-no-duplicate-props": ["error", { "ignoreCase": true }],
"react/jsx-no-duplicate-props": "error",
"react/jsx-no-undef": "error",
"react/jsx-pascal-case": ["error", { "allowAllCaps": true, "ignore": [] }],
"react/jsx-uses-vars": "error",

View File

@@ -28,6 +28,7 @@
"@emotion/cache": "^11.4.0",
"@emotion/react": "^11.14.0",
"@emotion/styled": "^11.14.1",
"@fontsource/ibm-plex-mono": "^5.2.7",
"@luma.gl/constants": "~9.2.5",
"@luma.gl/core": "~9.2.5",
"@luma.gl/engine": "~9.2.5",
@@ -265,7 +266,7 @@
"lightningcss": "^1.31.1",
"mini-css-extract-plugin": "^2.10.0",
"open-cli": "^8.0.0",
"oxlint": "^1.42.0",
"oxlint": "^1.46.0",
"po2json": "^0.4.5",
"prettier": "3.8.1",
"prettier-plugin-packagejson": "^3.0.0",
@@ -4032,12 +4033,11 @@
}
}
},
"node_modules/@fontsource/fira-code": {
"node_modules/@fontsource/ibm-plex-mono": {
"version": "5.2.7",
"resolved": "https://registry.npmjs.org/@fontsource/fira-code/-/fira-code-5.2.7.tgz",
"integrity": "sha512-tnB9NNund9TwIym8/7DMJe573nlPEQb+fKUV5GL8TBYXjIhDvL0D7mgmNVNQUPhXp+R7RylQeiBdkA4EbOHPGQ==",
"resolved": "https://registry.npmjs.org/@fontsource/ibm-plex-mono/-/ibm-plex-mono-5.2.7.tgz",
"integrity": "sha512-MKAb8qV+CaiMQn2B0dIi1OV3565NYzp3WN5b4oT6LTkk+F0jR6j0ZN+5BKJiIhffDC3rtBULsYZE65+0018z9w==",
"license": "OFL-1.1",
"peer": true,
"funding": {
"url": "https://github.com/sponsors/ayuhito"
}
@@ -9058,10 +9058,44 @@
"@octokit/openapi-types": "^25.1.0"
}
},
"node_modules/@oxlint/darwin-arm64": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/darwin-arm64/-/darwin-arm64-1.42.0.tgz",
"integrity": "sha512-ui5CdAcDsXPQwZQEXOOSWsilJWhgj9jqHCvYBm2tDE8zfwZZuF9q58+hGKH1x5y0SV4sRlyobB2Quq6uU6EgeA==",
"node_modules/@oxlint/binding-android-arm-eabi": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm-eabi/-/binding-android-arm-eabi-1.46.0.tgz",
"integrity": "sha512-vLPcE+HcZ/W/0cVA1KLuAnoUSejGougDH/fDjBFf0Q+rbBIyBNLevOhgx3AnBNAt3hcIGY7U05ISbJCKZeVa3w==",
"cpu": [
"arm"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"android"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-android-arm64": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm64/-/binding-android-arm64-1.46.0.tgz",
"integrity": "sha512-b8IqCczUsirdtJ3R/be4cRm64I5pMPafMO/9xyTAZvc+R/FxZHMQuhw0iNT9hQwRn+Uo5rNAoA8QS7QurG2QeA==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"android"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-darwin-arm64": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-arm64/-/binding-darwin-arm64-1.46.0.tgz",
"integrity": "sha512-CfC/KGnNMhI01dkfCMjquKnW4zby3kqD5o/9XA7+pgo9I4b+Nipm+JVFyZPWMNwKqLXNmi35GTLWjs9svPxlew==",
"cpu": [
"arm64"
],
@@ -9070,12 +9104,15 @@
"optional": true,
"os": [
"darwin"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/darwin-x64": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/darwin-x64/-/darwin-x64-1.42.0.tgz",
"integrity": "sha512-wo0M/hcpHRv7vFje99zHHqheOhVEwUOKjOgBKyi0M99xcLizv04kcSm1rTd6HSCeZgOtiJYZRVAlKhQOQw2byQ==",
"node_modules/@oxlint/binding-darwin-x64": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-x64/-/binding-darwin-x64-1.46.0.tgz",
"integrity": "sha512-m38mKPsV3rBdWOJ4TAGZiUjWU8RGrBxsmdSeMQ0bPr/8O6CUOm/RJkPBf0GAfPms2WRVcbkfEXvIiPshAeFkeA==",
"cpu": [
"x64"
],
@@ -9084,12 +9121,66 @@
"optional": true,
"os": [
"darwin"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/linux-arm64-gnu": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/linux-arm64-gnu/-/linux-arm64-gnu-1.42.0.tgz",
"integrity": "sha512-j4QzfCM8ks+OyM+KKYWDiBEQsm5RCW50H1Wz16wUyoFsobJ+X5qqcJxq6HvkE07m8euYmZelyB0WqsiDoz1v8g==",
"node_modules/@oxlint/binding-freebsd-x64": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-freebsd-x64/-/binding-freebsd-x64-1.46.0.tgz",
"integrity": "sha512-YaFRKslSAfuMwn7ejS1/wo9jENqQigpGBjjThX+mrpmEROLYGky+zIC5xSVGRng28U92VEDVbSNJ/sguz3dUAA==",
"cpu": [
"x64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"freebsd"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-linux-arm-gnueabihf": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-gnueabihf/-/binding-linux-arm-gnueabihf-1.46.0.tgz",
"integrity": "sha512-Nlw+5mSZQtkg1Oj0N8ulxzG8ATpmSDz5V2DNaGhaYAVlcdR8NYSm/xTOnweOXc/UOOv3LwkPPYzqcfPhu2lEkA==",
"cpu": [
"arm"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-linux-arm-musleabihf": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-musleabihf/-/binding-linux-arm-musleabihf-1.46.0.tgz",
"integrity": "sha512-d3Y5y4ukMqAGnWLMKpwqj8ftNUaac7pA0NrId4AZ77JvHzezmxEcm2gswaBw2HW8y1pnq6KDB0vEPPvpTfDLrA==",
"cpu": [
"arm"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-linux-arm64-gnu": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-1.46.0.tgz",
"integrity": "sha512-jkjx+XSOPuFR+C458prQmehO+v0VK19/3Hj2mOYDF4hHUf3CzmtA4fTmQUtkITZiGHnky7Oao6JeJX24mrX7WQ==",
"cpu": [
"arm64"
],
@@ -9098,12 +9189,15 @@
"optional": true,
"os": [
"linux"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/linux-arm64-musl": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/linux-arm64-musl/-/linux-arm64-musl-1.42.0.tgz",
"integrity": "sha512-g5b1Uw7zo6yw4Ymzyd1etKzAY7xAaGA3scwB8tAp3QzuY7CYdfTwlhiLKSAKbd7T/JBgxOXAGNcLDorJyVTXcg==",
"node_modules/@oxlint/binding-linux-arm64-musl": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-musl/-/binding-linux-arm64-musl-1.46.0.tgz",
"integrity": "sha512-X/aPB1rpJUdykjWSeeGIbjk6qbD8VDulgLuTSMWgr/t6m1ljcAjqHb1g49pVG9bZl55zjECgzvlpPLWnfb4FMQ==",
"cpu": [
"arm64"
],
@@ -9112,12 +9206,83 @@
"optional": true,
"os": [
"linux"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/linux-x64-gnu": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/linux-x64-gnu/-/linux-x64-gnu-1.42.0.tgz",
"integrity": "sha512-HnD99GD9qAbpV4q9iQil7mXZUJFpoBdDavfcC2CgGLPlawfcV5COzQPNwOgvPVkr7C0cBx6uNCq3S6r9IIiEIg==",
"node_modules/@oxlint/binding-linux-ppc64-gnu": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-ppc64-gnu/-/binding-linux-ppc64-gnu-1.46.0.tgz",
"integrity": "sha512-AymyOxGWwKY2KJa8b+h8iLrYJZbWKYCjqctSc2q6uIAkYPrCsxcWlge1JP6XZ14Sa80DVMwI/QvktbytSV+xVw==",
"cpu": [
"ppc64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-linux-riscv64-gnu": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-gnu/-/binding-linux-riscv64-gnu-1.46.0.tgz",
"integrity": "sha512-PkeVdPKCDA59rlMuucsel2LjlNEpslQN5AhkMMorIJZItbbqi/0JSuACCzaiIcXYv0oNfbeQ8rbOBikv+aT6cg==",
"cpu": [
"riscv64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-linux-riscv64-musl": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-musl/-/binding-linux-riscv64-musl-1.46.0.tgz",
"integrity": "sha512-snQaRLO/X+Ry/CxX1px1g8GUbmXzymdRs+/RkP2bySHWZFhFDtbLm2hA1ujX/jKlTLMJDZn4hYzFGLDwG/Rh2w==",
"cpu": [
"riscv64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-linux-s390x-gnu": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-s390x-gnu/-/binding-linux-s390x-gnu-1.46.0.tgz",
"integrity": "sha512-kZhDMwUe/sgDTluGao9c0Dqc1JzV6wPzfGo0l/FLQdh5Zmp39Yg1FbBsCgsJfVKmKl1fNqsHyFLTShWMOlOEhA==",
"cpu": [
"s390x"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"linux"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-linux-x64-gnu": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-gnu/-/binding-linux-x64-gnu-1.46.0.tgz",
"integrity": "sha512-n5a7VtQTxHZ13cNAKQc3ziARv5bE1Fx868v/tnhZNVUjaRNYe5uiUrRJ/LZghdAzOxVuQGarjjq/q4QM2+9OPA==",
"cpu": [
"x64"
],
@@ -9126,12 +9291,15 @@
"optional": true,
"os": [
"linux"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/linux-x64-musl": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/linux-x64-musl/-/linux-x64-musl-1.42.0.tgz",
"integrity": "sha512-8NTe8A78HHFn+nBi+8qMwIjgv9oIBh+9zqCPNLH56ah4vKOPvbePLI6NIv9qSkmzrBuu8SB+FJ2TH/G05UzbNA==",
"node_modules/@oxlint/binding-linux-x64-musl": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-musl/-/binding-linux-x64-musl-1.46.0.tgz",
"integrity": "sha512-KpsDU/BhdVn3iKCLxMXAOZIpO8fS0jEA5iluRoK1rhHPwKtpzEm/OCwERsu/vboMSZm66qnoTUVXRPJ8M+iKVQ==",
"cpu": [
"x64"
],
@@ -9140,12 +9308,32 @@
"optional": true,
"os": [
"linux"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/win32-arm64": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/win32-arm64/-/win32-arm64-1.42.0.tgz",
"integrity": "sha512-lAPS2YAuu+qFqoTNPFcNsxXjwSV0M+dOgAzzVTAN7Yo2ifj+oLOx0GsntWoM78PvQWI7Q827ZxqtU2ImBmDapA==",
"node_modules/@oxlint/binding-openharmony-arm64": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-openharmony-arm64/-/binding-openharmony-arm64-1.46.0.tgz",
"integrity": "sha512-jtbqUyEXlsDlRmMtTZqNbw49+1V/WxqNAR5l0S3OEkdat9diI5I+eqq9IT+jb5cSDdszTGcXpn7S3+gUYSydxQ==",
"cpu": [
"arm64"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"openharmony"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-win32-arm64-msvc": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-1.46.0.tgz",
"integrity": "sha512-EE8NjpqEZPwHQVigNvdyJ11dZwWIfsfn4VeBAuiJeAdrnY4HFX27mIjJINJgP5ZdBYEFV1OWH/eb9fURCYel8w==",
"cpu": [
"arm64"
],
@@ -9154,12 +9342,32 @@
"optional": true,
"os": [
"win32"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/win32-x64": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/@oxlint/win32-x64/-/win32-x64-1.42.0.tgz",
"integrity": "sha512-3/KmyUOHNriL6rLpaFfm9RJxdhpXY2/Ehx9UuorJr2pUA+lrZL15FAEx/DOszYm5r10hfzj40+efAHcCilNvSQ==",
"node_modules/@oxlint/binding-win32-ia32-msvc": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-win32-ia32-msvc/-/binding-win32-ia32-msvc-1.46.0.tgz",
"integrity": "sha512-BHyk3H/HRdXs+uImGZ/2+qCET+B8lwGHOm7m54JiJEEUWf3zYCFX/Df1SPqtozWWmnBvioxoTG1J3mPRAr8KUA==",
"cpu": [
"ia32"
],
"dev": true,
"license": "MIT",
"optional": true,
"os": [
"win32"
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@oxlint/binding-win32-x64-msvc": {
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/@oxlint/binding-win32-x64-msvc/-/binding-win32-x64-msvc-1.46.0.tgz",
"integrity": "sha512-DJbQsSJUr4KSi9uU0QqOgI7PX2C+fKGZX+YDprt3vM2sC0dWZsgVTLoN2vtkNyEWJSY2mnvRFUshWXT3bmo0Ug==",
"cpu": [
"x64"
],
@@ -9168,7 +9376,10 @@
"optional": true,
"os": [
"win32"
]
],
"engines": {
"node": "^20.19.0 || >=22.12.0"
}
},
"node_modules/@peculiar/asn1-cms": {
"version": "2.6.0",
@@ -14241,18 +14452,6 @@
"@types/node": "*"
}
},
"node_modules/@types/cheerio": {
"version": "0.22.35",
"resolved": "https://registry.npmjs.org/@types/cheerio/-/cheerio-0.22.35.tgz",
"integrity": "sha512-yD57BchKRvTV+JD53UZ6PD8KWY5g5rvvMLRnZR3EQBCZXiDT/HR+pKpMzFGlWNhFrXlo7VPZXtKvIEwZkAWOIA==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"@types/node": "*"
}
},
"node_modules/@types/chroma-js": {
"version": "2.4.5",
"resolved": "https://registry.npmjs.org/@types/chroma-js/-/chroma-js-2.4.5.tgz",
@@ -15494,20 +15693,6 @@
"@types/node": "*"
}
},
"node_modules/@types/webpack": {
"version": "5.28.5",
"resolved": "https://registry.npmjs.org/@types/webpack/-/webpack-5.28.5.tgz",
"integrity": "sha512-wR87cgvxj3p6D0Crt1r5avwqffqPXUkNlnQ1mjU93G7gCuFjufZR4I6j8cz5g1F1tTYpfOOFvly+cmIQwL9wvw==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"@types/node": "*",
"tapable": "^2.2.0",
"webpack": "^5"
}
},
"node_modules/@types/webpack-sources": {
"version": "3.2.3",
"resolved": "https://registry.npmjs.org/@types/webpack-sources/-/webpack-sources-3.2.3.tgz",
@@ -17604,8 +17789,7 @@
"version": "1.43.6",
"resolved": "https://registry.npmjs.org/ace-builds/-/ace-builds-1.43.6.tgz",
"integrity": "sha512-L1ddibQ7F3vyXR2k2fg+I8TQTPWVA6CKeDQr/h2+8CeyTp3W6EQL8xNFZRTztuP8xNOAqL3IYPqdzs31GCjDvg==",
"license": "BSD-3-Clause",
"peer": true
"license": "BSD-3-Clause"
},
"node_modules/acorn": {
"version": "7.4.1",
@@ -18056,29 +18240,6 @@
"node": ">=8"
}
},
"node_modules/array.prototype.filter": {
"version": "1.0.4",
"resolved": "https://registry.npmjs.org/array.prototype.filter/-/array.prototype.filter-1.0.4.tgz",
"integrity": "sha512-r+mCJ7zXgXElgR4IRC+fkvNCeoaavWBs6EdCso5Tbcf+iEMKzBU/His60lt34WEZ9vlb8wDkZvQGcVI5GwkfoQ==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"call-bind": "^1.0.7",
"define-properties": "^1.2.1",
"es-abstract": "^1.23.2",
"es-array-method-boxes-properly": "^1.0.0",
"es-object-atoms": "^1.0.0",
"is-string": "^1.0.7"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/array.prototype.findlast": {
"version": "1.2.5",
"resolved": "https://registry.npmjs.org/array.prototype.findlast/-/array.prototype.findlast-1.2.5.tgz",
@@ -23287,15 +23448,6 @@
"node": ">=8"
}
},
"node_modules/discontinuous-range": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/discontinuous-range/-/discontinuous-range-1.0.0.tgz",
"integrity": "sha512-c68LpLbO+7kP/b1Hr1qs8/BJ09F5khZGTxqxZuhzxpmwJKOgRFHJWIb9/KmqnqHhLdO55aOxFH/EGBvUQbL/RQ==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true
},
"node_modules/distributions": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/distributions/-/distributions-2.2.0.tgz",
@@ -23892,87 +24044,6 @@
"node": ">=4"
}
},
"node_modules/enzyme": {
"version": "3.11.0",
"resolved": "https://registry.npmjs.org/enzyme/-/enzyme-3.11.0.tgz",
"integrity": "sha512-Dw8/Gs4vRjxY6/6i9wU0V+utmQO9kvh9XLnz3LIudviOnVYDEe2ec+0k+NQoMamn1VrjKgCUOWj5jG/5M5M0Qw==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"array.prototype.flat": "^1.2.3",
"cheerio": "^1.0.0-rc.3",
"enzyme-shallow-equal": "^1.0.1",
"function.prototype.name": "^1.1.2",
"has": "^1.0.3",
"html-element-map": "^1.2.0",
"is-boolean-object": "^1.0.1",
"is-callable": "^1.1.5",
"is-number-object": "^1.0.4",
"is-regex": "^1.0.5",
"is-string": "^1.0.5",
"is-subset": "^0.1.1",
"lodash.escape": "^4.0.1",
"lodash.isequal": "^4.5.0",
"object-inspect": "^1.7.0",
"object-is": "^1.0.2",
"object.assign": "^4.1.0",
"object.entries": "^1.1.1",
"object.values": "^1.1.1",
"raf": "^3.4.1",
"rst-selector-parser": "^2.2.3",
"string.prototype.trim": "^1.2.1"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/enzyme-shallow-equal": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/enzyme-shallow-equal/-/enzyme-shallow-equal-1.0.7.tgz",
"integrity": "sha512-/um0GFqUXnpM9SvKtje+9Tjoz3f1fpBC3eXRFrNs8kpYn69JljciYP7KZTqM/YQbUY9KUjvKB4jo/q+L6WGGvg==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"hasown": "^2.0.0",
"object-is": "^1.1.5"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/enzyme-to-json": {
"version": "3.6.2",
"resolved": "https://registry.npmjs.org/enzyme-to-json/-/enzyme-to-json-3.6.2.tgz",
"integrity": "sha512-Ynm6Z6R6iwQ0g2g1YToz6DWhxVnt8Dy1ijR2zynRKxTyBGA8rCDXU3rs2Qc4OKvUvc2Qoe1bcFK6bnPs20TrTg==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"@types/cheerio": "^0.22.22",
"lodash": "^4.17.21",
"react-is": "^16.12.0"
},
"engines": {
"node": ">=6.0.0"
},
"peerDependencies": {
"enzyme": "^3.4.0"
}
},
"node_modules/enzyme-to-json/node_modules/react-is": {
"version": "16.13.1",
"resolved": "https://registry.npmjs.org/react-is/-/react-is-16.13.1.tgz",
"integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true
},
"node_modules/err-code": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/err-code/-/err-code-2.0.3.tgz",
@@ -28868,22 +28939,6 @@
"safe-buffer": "~5.1.0"
}
},
"node_modules/html-element-map": {
"version": "1.3.1",
"resolved": "https://registry.npmjs.org/html-element-map/-/html-element-map-1.3.1.tgz",
"integrity": "sha512-6XMlxrAFX4UEEGxctfFnmrFaaZFNf9i5fNuV5wZ3WWQ4FVaNP1aX1LkX9j2mfEx1NpjeE/rL3nmgEn23GdFmrg==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"array.prototype.filter": "^1.0.0",
"call-bind": "^1.0.2"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/html-encoding-sniffer": {
"version": "6.0.0",
"resolved": "https://registry.npmjs.org/html-encoding-sniffer/-/html-encoding-sniffer-6.0.0.tgz",
@@ -30419,15 +30474,6 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/is-subset": {
"version": "0.1.1",
"resolved": "https://registry.npmjs.org/is-subset/-/is-subset-0.1.1.tgz",
"integrity": "sha512-6Ybun0IkarhmEqxXCNw/C0bna6Zb/TkfUX9UbwJtK6ObwAVCxmAP308WWTHviM/zAqXk05cdhYsUsZeGQh99iw==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true
},
"node_modules/is-symbol": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/is-symbol/-/is-symbol-1.1.1.tgz",
@@ -34484,15 +34530,15 @@
}
},
"node_modules/jsdom/node_modules/@noble/hashes": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-2.0.1.tgz",
"integrity": "sha512-XlOlEbQcE9fmuXxrVTXCTlG2nlRXa9Rj3rr5Ue/+tX+nmkgbX720YHh0VR3hBF9xDvwnb8D2shVGOwNx+ulArw==",
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.8.0.tgz",
"integrity": "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"engines": {
"node": ">= 20.19.0"
"node": "^14.21.3 || >=16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
@@ -36239,15 +36285,6 @@
"dev": true,
"license": "MIT"
},
"node_modules/lodash.escape": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/lodash.escape/-/lodash.escape-4.0.1.tgz",
"integrity": "sha512-nXEOnb/jK9g0DYMr1/Xvq6l5xMD7GDG55+GSYIYmS0G4tBk/hURD4JR9WCavs04t33WmJx9kCyp9vJ+mr4BOUw==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true
},
"node_modules/lodash.flattendeep": {
"version": "4.4.0",
"resolved": "https://registry.npmjs.org/lodash.flattendeep/-/lodash.flattendeep-4.4.0.tgz",
@@ -36451,17 +36488,6 @@
"yallist": "^3.0.2"
}
},
"node_modules/luxon": {
"version": "3.7.1",
"resolved": "https://registry.npmjs.org/luxon/-/luxon-3.7.1.tgz",
"integrity": "sha512-RkRWjA926cTvz5rAb1BqyWkKbbjzCGchDUIKMCUvNi17j6f6j8uHGDV82Aqcqtzd+icoYpELmG3ksgGiFNNcNg==",
"license": "MIT",
"optional": true,
"peer": true,
"engines": {
"node": ">=12"
}
},
"node_modules/lz-string": {
"version": "1.5.0",
"resolved": "https://registry.npmjs.org/lz-string/-/lz-string-1.5.0.tgz",
@@ -37841,15 +37867,6 @@
"integrity": "sha512-2I8/T3X/hLxB2oPHgqcNYUVdA/ZEFShT7IAujifIPMfKkNbLOqY8XCoyHCXrsdjb36dW9MwoTwBCFpXKMwNwaQ==",
"license": "MIT"
},
"node_modules/moo": {
"version": "0.5.2",
"resolved": "https://registry.npmjs.org/moo/-/moo-0.5.2.tgz",
"integrity": "sha512-iSAJLHYKnX41mKcJKjqvnAN9sf0LMDTXDEvFv+ffuRR9a1MIuXLjMNL6EsnDHSkKLTWNqQQ5uo61P4EbU4NU+Q==",
"dev": true,
"license": "BSD-3-Clause",
"optional": true,
"peer": true
},
"node_modules/mousetrap": {
"version": "1.6.5",
"resolved": "https://registry.npmjs.org/mousetrap/-/mousetrap-1.6.5.tgz",
@@ -37999,40 +38016,6 @@
"integrity": "sha512-OWND8ei3VtNC9h7V60qff3SVobHr996CTwgxubgyQYEpg290h9J0buyECNNJexkFm5sOajh5G116RYA1c8ZMSw==",
"license": "MIT"
},
"node_modules/nearley": {
"version": "2.20.1",
"resolved": "https://registry.npmjs.org/nearley/-/nearley-2.20.1.tgz",
"integrity": "sha512-+Mc8UaAebFzgV+KpI5n7DasuuQCHA89dmwm7JXw3TV43ukfNQ9DnBH3Mdb2g/I4Fdxc26pwimBWvjIw0UAILSQ==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"commander": "^2.19.0",
"moo": "^0.5.0",
"railroad-diagrams": "^1.0.0",
"randexp": "0.4.6"
},
"bin": {
"nearley-railroad": "bin/nearley-railroad.js",
"nearley-test": "bin/nearley-test.js",
"nearley-unparse": "bin/nearley-unparse.js",
"nearleyc": "bin/nearleyc.js"
},
"funding": {
"type": "individual",
"url": "https://nearley.js.org/#give-to-nearley"
}
},
"node_modules/nearley/node_modules/commander": {
"version": "2.20.3",
"resolved": "https://registry.npmjs.org/commander/-/commander-2.20.3.tgz",
"integrity": "sha512-GpVkmM8vF2vQUkj2LvZmD35JxeJOLCwJ9cUkugyk2nuhbv3+mJvpLYYt+0+USMxE+oj+ey/lJEnhZw75x/OMcQ==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true
},
"node_modules/negotiator": {
"version": "0.6.3",
"resolved": "https://registry.npmjs.org/negotiator/-/negotiator-0.6.3.tgz",
@@ -39411,9 +39394,9 @@
}
},
"node_modules/oxlint": {
"version": "1.42.0",
"resolved": "https://registry.npmjs.org/oxlint/-/oxlint-1.42.0.tgz",
"integrity": "sha512-qnspC/lrp8FgKNaONLLn14dm+W5t0SSlus6V5NJpgI2YNT1tkFYZt4fBf14ESxf9AAh98WBASnW5f0gtw462Lg==",
"version": "1.46.0",
"resolved": "https://registry.npmjs.org/oxlint/-/oxlint-1.46.0.tgz",
"integrity": "sha512-I9h42QDtAVsRwoueJ4PL/7qN5jFzIUXvbO4Z5ddtII92ZCiD7uiS/JW2V4viBSfGLsbZkQp3YEs6Ls4I8q+8tA==",
"dev": true,
"license": "MIT",
"bin": {
@@ -39426,14 +39409,25 @@
"url": "https://github.com/sponsors/Boshen"
},
"optionalDependencies": {
"@oxlint/darwin-arm64": "1.42.0",
"@oxlint/darwin-x64": "1.42.0",
"@oxlint/linux-arm64-gnu": "1.42.0",
"@oxlint/linux-arm64-musl": "1.42.0",
"@oxlint/linux-x64-gnu": "1.42.0",
"@oxlint/linux-x64-musl": "1.42.0",
"@oxlint/win32-arm64": "1.42.0",
"@oxlint/win32-x64": "1.42.0"
"@oxlint/binding-android-arm-eabi": "1.46.0",
"@oxlint/binding-android-arm64": "1.46.0",
"@oxlint/binding-darwin-arm64": "1.46.0",
"@oxlint/binding-darwin-x64": "1.46.0",
"@oxlint/binding-freebsd-x64": "1.46.0",
"@oxlint/binding-linux-arm-gnueabihf": "1.46.0",
"@oxlint/binding-linux-arm-musleabihf": "1.46.0",
"@oxlint/binding-linux-arm64-gnu": "1.46.0",
"@oxlint/binding-linux-arm64-musl": "1.46.0",
"@oxlint/binding-linux-ppc64-gnu": "1.46.0",
"@oxlint/binding-linux-riscv64-gnu": "1.46.0",
"@oxlint/binding-linux-riscv64-musl": "1.46.0",
"@oxlint/binding-linux-s390x-gnu": "1.46.0",
"@oxlint/binding-linux-x64-gnu": "1.46.0",
"@oxlint/binding-linux-x64-musl": "1.46.0",
"@oxlint/binding-openharmony-arm64": "1.46.0",
"@oxlint/binding-win32-arm64-msvc": "1.46.0",
"@oxlint/binding-win32-ia32-msvc": "1.46.0",
"@oxlint/binding-win32-x64-msvc": "1.46.0"
},
"peerDependencies": {
"oxlint-tsgolint": ">=0.11.2"
@@ -41435,31 +41429,6 @@
"performance-now": "^2.1.0"
}
},
"node_modules/railroad-diagrams": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/railroad-diagrams/-/railroad-diagrams-1.0.0.tgz",
"integrity": "sha512-cz93DjNeLY0idrCNOH6PviZGRN9GJhsdm9hpn1YCS879fj4W+x5IFJhhkRZcwVgMmFF7R82UA/7Oh+R8lLZg6A==",
"dev": true,
"license": "CC0-1.0",
"optional": true,
"peer": true
},
"node_modules/randexp": {
"version": "0.4.6",
"resolved": "https://registry.npmjs.org/randexp/-/randexp-0.4.6.tgz",
"integrity": "sha512-80WNmd9DA0tmZrw9qQa62GPPWfuXJknrmVmLcxvq4uZBdYqb1wYoKTmnlGUchvVWe0XiLupYkBoXVOxz3C8DYQ==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"discontinuous-range": "1.0.0",
"ret": "~0.1.10"
},
"engines": {
"node": ">=0.12"
}
},
"node_modules/randombytes": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/randombytes/-/randombytes-2.1.0.tgz",
@@ -42886,21 +42855,6 @@
"node": ">=6"
}
},
"node_modules/react-shallow-renderer": {
"version": "16.15.0",
"resolved": "https://registry.npmjs.org/react-shallow-renderer/-/react-shallow-renderer-16.15.0.tgz",
"integrity": "sha512-oScf2FqQ9LFVQgA73vr86xl2NaOIX73rh+YFqcOp68CWj56tSfgtGKrEbyhCj0rSijyG9M1CYprTh39fBi5hzA==",
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"object-assign": "^4.1.1",
"react-is": "^16.12.0 || ^17.0.0 || ^18.0.0"
},
"peerDependencies": {
"react": "^16.0.0 || ^17.0.0 || ^18.0.0"
}
},
"node_modules/react-sortable-hoc": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/react-sortable-hoc/-/react-sortable-hoc-2.0.0.tgz",
@@ -42943,31 +42897,6 @@
"react": "^16.8.3 || ^17.0.0-0 || ^18.0.0"
}
},
"node_modules/react-test-renderer": {
"version": "17.0.2",
"resolved": "https://registry.npmjs.org/react-test-renderer/-/react-test-renderer-17.0.2.tgz",
"integrity": "sha512-yaQ9cB89c17PUb0x6UfWRs7kQCorVdHlutU1boVPEsB8IDZH6n9tHxMacc3y0JoXOJUsZb/t/Mb8FUWMKaM7iQ==",
"license": "MIT",
"optional": true,
"peer": true,
"dependencies": {
"object-assign": "^4.1.1",
"react-is": "^17.0.2",
"react-shallow-renderer": "^16.13.1",
"scheduler": "^0.20.2"
},
"peerDependencies": {
"react": "17.0.2"
}
},
"node_modules/react-test-renderer/node_modules/react-is": {
"version": "17.0.2",
"resolved": "https://registry.npmjs.org/react-is/-/react-is-17.0.2.tgz",
"integrity": "sha512-w2GsyukL62IJnlaff/nRegPQR94C/XXamvMWmSHRJ4y7Ts/4ocGRmTHvOs8PSE6pB3dWOrD/nueuU5sduBsQ4w==",
"license": "MIT",
"optional": true,
"peer": true
},
"node_modules/react-transition-group": {
"version": "4.4.5",
"resolved": "https://registry.npmjs.org/react-transition-group/-/react-transition-group-4.4.5.tgz",
@@ -44755,18 +44684,6 @@
"node": ">=8"
}
},
"node_modules/ret": {
"version": "0.1.15",
"resolved": "https://registry.npmjs.org/ret/-/ret-0.1.15.tgz",
"integrity": "sha512-TTlYpa+OL+vMMNG24xSlQGEJ3B/RzEfUlLct7b5G/ytav+wPrplCpVMFuwzXbkecJrb6IYo1iFb0S9v37754mg==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"engines": {
"node": ">=0.12"
}
},
"node_modules/retry": {
"version": "0.13.1",
"resolved": "https://registry.npmjs.org/retry/-/retry-0.13.1.tgz",
@@ -44892,19 +44809,6 @@
"integrity": "sha512-IXgzBWvWQwE6PrDI05OvmXUIruQTcoMDzRsOd5CDvHCVLcLHMTSYvOK5Cm46kWqlV3yAbuSpBZdJ5oP5OUoStg==",
"license": "Unlicense"
},
"node_modules/rst-selector-parser": {
"version": "2.2.3",
"resolved": "https://registry.npmjs.org/rst-selector-parser/-/rst-selector-parser-2.2.3.tgz",
"integrity": "sha512-nDG1rZeP6oFTLN6yNDV/uiAvs1+FS/KlrEwh7+y7dpuApDBy6bI2HTBcc0/V8lv9OTqfyD34eF7au2pm8aBbhA==",
"dev": true,
"license": "BSD-3-Clause",
"optional": true,
"peer": true,
"dependencies": {
"lodash.flattendeep": "^4.4.0",
"nearley": "^2.7.10"
}
},
"node_modules/run-applescript": {
"version": "7.0.0",
"resolved": "https://registry.npmjs.org/run-applescript/-/run-applescript-7.0.0.tgz",
@@ -50952,15 +50856,15 @@
}
},
"node_modules/whatwg-url/node_modules/@noble/hashes": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-2.0.1.tgz",
"integrity": "sha512-XlOlEbQcE9fmuXxrVTXCTlG2nlRXa9Rj3rr5Ue/+tX+nmkgbX720YHh0VR3hBF9xDvwnb8D2shVGOwNx+ulArw==",
"version": "1.8.0",
"resolved": "https://registry.npmjs.org/@noble/hashes/-/hashes-1.8.0.tgz",
"integrity": "sha512-jCs9ldd7NwzpgXDIf6P3+NrHh9/sD6CQdxHyjQI+h/6rDNo88ypBxxz45UDuZHz9r3tNz7N/VInSVoVdtXEI4A==",
"dev": true,
"license": "MIT",
"optional": true,
"peer": true,
"engines": {
"node": ">= 20.19.0"
"node": "^14.21.3 || >=16"
},
"funding": {
"url": "https://paulmillr.com/funding/"
@@ -52057,7 +51961,7 @@
"@emotion/cache": "^11.4.0",
"@emotion/react": "^11.4.1",
"@emotion/styled": "^11.14.1",
"@fontsource/fira-code": "^5.2.6",
"@fontsource/ibm-plex-mono": "^5.2.7",
"@fontsource/inter": "^5.2.6",
"antd": "^5.26.0",
"jed": "^1.1.1",
@@ -52069,13 +51973,6 @@
"tinycolor2": "*"
}
},
"packages/superset-core/node_modules/@types/lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-FOvQ0YPD5NOfPgMzJihoT+Za5pdkDJWcbpuj1DjaKZIr/gxodQjY/uWEFlTNqW2ugXHUiL8lRQgw63dzKHZdeQ==",
"dev": true,
"license": "MIT"
},
"packages/superset-ui-chart-controls": {
"name": "@superset-ui/chart-controls",
"version": "0.20.3",
@@ -52201,13 +52098,6 @@
"@types/unist": "*"
}
},
"packages/superset-ui-core/node_modules/@types/lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-FOvQ0YPD5NOfPgMzJihoT+Za5pdkDJWcbpuj1DjaKZIr/gxodQjY/uWEFlTNqW2ugXHUiL8lRQgw63dzKHZdeQ==",
"dev": true,
"license": "MIT"
},
"packages/superset-ui-core/node_modules/@types/mdast": {
"version": "3.0.15",
"resolved": "https://registry.npmjs.org/@types/mdast/-/mdast-3.0.15.tgz",
@@ -52224,12 +52114,6 @@
"dev": true,
"license": "MIT"
},
"packages/superset-ui-core/node_modules/ace-builds": {
"version": "1.43.5",
"resolved": "https://registry.npmjs.org/ace-builds/-/ace-builds-1.43.5.tgz",
"integrity": "sha512-iH5FLBKdB7SVn9GR37UgA/tpQS8OTWIxWAuq3Ofaw+Qbc69FfPXsXd9jeW7KRG2xKpKMqBDnu0tHBrCWY5QI7A==",
"license": "BSD-3-Clause"
},
"packages/superset-ui-core/node_modules/character-entities-legacy": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/character-entities-legacy/-/character-entities-legacy-3.0.0.tgz",
@@ -53937,7 +53821,7 @@
"version": "0.0.1",
"license": "Apache-2.0",
"dependencies": {
"@types/geojson": "^7946.0.10",
"@types/geojson": "^7946.0.16",
"geojson": "^0.5.0",
"lodash": "^4.17.23"
},
@@ -54017,13 +53901,6 @@
"react-dom": "^17.0.2"
}
},
"plugins/plugin-chart-handlebars/node_modules/@types/lodash": {
"version": "4.17.21",
"resolved": "https://registry.npmjs.org/@types/lodash/-/lodash-4.17.21.tgz",
"integrity": "sha512-FOvQ0YPD5NOfPgMzJihoT+Za5pdkDJWcbpuj1DjaKZIr/gxodQjY/uWEFlTNqW2ugXHUiL8lRQgw63dzKHZdeQ==",
"dev": true,
"license": "MIT"
},
"plugins/plugin-chart-handlebars/node_modules/just-handlebars-helpers": {
"version": "1.0.19",
"resolved": "https://registry.npmjs.org/just-handlebars-helpers/-/just-handlebars-helpers-1.0.19.tgz",

View File

@@ -109,6 +109,7 @@
"@emotion/cache": "^11.4.0",
"@emotion/react": "^11.14.0",
"@emotion/styled": "^11.14.1",
"@fontsource/ibm-plex-mono": "^5.2.7",
"@luma.gl/constants": "~9.2.5",
"@luma.gl/core": "~9.2.5",
"@luma.gl/engine": "~9.2.5",
@@ -346,7 +347,7 @@
"lightningcss": "^1.31.1",
"mini-css-extract-plugin": "^2.10.0",
"open-cli": "^8.0.0",
"oxlint": "^1.42.0",
"oxlint": "^1.46.0",
"po2json": "^0.4.5",
"prettier": "3.8.1",
"prettier-plugin-packagejson": "^3.0.0",

View File

@@ -33,7 +33,7 @@
"@emotion/cache": "^11.4.0",
"@emotion/react": "^11.4.1",
"@emotion/styled": "^11.14.1",
"@fontsource/fira-code": "^5.2.6",
"@fontsource/ibm-plex-mono": "^5.2.7",
"@fontsource/inter": "^5.2.6",
"nanoid": "^5.0.9",
"react": "^17.0.2",

View File

@@ -23,9 +23,9 @@ import '@fontsource/inter/200.css';
import '@fontsource/inter/400.css';
import '@fontsource/inter/500.css';
import '@fontsource/inter/600.css';
import '@fontsource/fira-code/400.css';
import '@fontsource/fira-code/500.css';
import '@fontsource/fira-code/600.css';
import '@fontsource/ibm-plex-mono/400.css';
import '@fontsource/ibm-plex-mono/500.css';
import '@fontsource/ibm-plex-mono/600.css';
/* eslint-enable import/extensions */
import { css, useTheme, Global } from '@emotion/react';

View File

@@ -502,7 +502,7 @@ test('Theme base theme integration works with real-world Superset base theme con
colorSuccess: '#5ac189',
colorInfo: '#66bcfe',
fontFamily: "'Inter', Helvetica, Arial",
fontFamilyCode: "'Fira Code', 'Courier New', monospace",
fontFamilyCode: "'IBM Plex Mono', 'Courier New', monospace",
},
};

View File

@@ -17,4 +17,5 @@
* under the License.
*/
export { default as isBlank } from './isBlank';
export { default as logging } from './logging';

View File

@@ -0,0 +1,59 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import isBlank from './isBlank';
test('returns true for null', () => {
expect(isBlank(null)).toBe(true);
});
test('returns true for undefined', () => {
expect(isBlank(undefined)).toBe(true);
});
test('returns true for empty string', () => {
expect(isBlank('')).toBe(true);
});
test('returns true for whitespace-only strings', () => {
expect(isBlank(' ')).toBe(true);
expect(isBlank(' ')).toBe(true);
expect(isBlank('\t')).toBe(true);
expect(isBlank('\n')).toBe(true);
expect(isBlank(' \t\n ')).toBe(true);
});
test('returns false for non-empty strings', () => {
expect(isBlank('hello')).toBe(false);
expect(isBlank(' hello ')).toBe(false);
});
test('returns true for NaN', () => {
expect(isBlank(NaN)).toBe(true);
});
test('returns false for numbers', () => {
expect(isBlank(0)).toBe(false);
expect(isBlank(50)).toBe(false);
expect(isBlank(-1)).toBe(false);
});
test('returns false for booleans', () => {
expect(isBlank(true)).toBe(false);
expect(isBlank(false)).toBe(false);
});

View File

@@ -0,0 +1,28 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { isEmpty, isNaN, isNil, isString, trim } from 'lodash';
/**
* Checks if a value is null, undefined, NaN, or a whitespace-only string.
*/
export default function isBlank(value: unknown): boolean {
return (
isNil(value) || isNaN(value) || (isString(value) && isEmpty(trim(value)))
);
}

View File

@@ -17,6 +17,8 @@
* under the License.
*/
import memoizeOne from 'memoize-one';
import { isString, isBoolean } from 'lodash';
import { isBlank } from '@apache-superset/core';
import { addAlpha, DataRecord } from '@superset-ui/core';
import {
ColorFormatters,
@@ -254,6 +256,9 @@ export const getColorFunction = (
}
return (value: number | string | boolean | null) => {
if (isBlank(value) && operator !== Comparator.IsNull) {
return undefined;
}
const compareResult = comparatorFunction(value, columnValues);
if (compareResult === false) return undefined;
const { cutoffValue, extremeValue } = compareResult;
@@ -318,11 +323,3 @@ export const getColorFormatters = memoizeOne(
[],
) ?? [],
);
function isString(value: unknown) {
return typeof value === 'string';
}
function isBoolean(value: unknown) {
return typeof value === 'boolean';
}

View File

@@ -506,6 +506,117 @@ test('getColorFunction IsNotNull', () => {
expect(colorFunction(null)).toBeUndefined();
});
test('getColorFunction returns undefined for null values on numeric comparators', () => {
const operators = [
{ operator: Comparator.LessThan, targetValue: 50 },
{ operator: Comparator.LessOrEqual, targetValue: 50 },
{ operator: Comparator.GreaterThan, targetValue: 50 },
{ operator: Comparator.GreaterOrEqual, targetValue: 50 },
{ operator: Comparator.Equal, targetValue: 50 },
{ operator: Comparator.NotEqual, targetValue: 50 },
];
operators.forEach(({ operator, targetValue }) => {
const colorFunction = getColorFunction(
{
operator,
targetValue,
colorScheme: '#FF0000',
column: 'count',
},
countValues,
);
expect(colorFunction(null)).toBeUndefined();
expect(colorFunction(undefined as unknown as null)).toBeUndefined();
});
});
test('getColorFunction returns undefined for null values on Between comparators', () => {
const operators = [
Comparator.Between,
Comparator.BetweenOrEqual,
Comparator.BetweenOrLeftEqual,
Comparator.BetweenOrRightEqual,
];
operators.forEach(operator => {
const colorFunction = getColorFunction(
{
operator,
targetValueLeft: -10,
targetValueRight: 50,
colorScheme: '#FF0000',
column: 'count',
},
countValues,
);
expect(colorFunction(null)).toBeUndefined();
expect(colorFunction(undefined as unknown as null)).toBeUndefined();
});
});
test('getColorFunction returns undefined for null values on None operator', () => {
const colorFunction = getColorFunction(
{
operator: Comparator.None,
colorScheme: '#FF0000',
column: 'count',
},
countValues,
);
expect(colorFunction(null)).toBeUndefined();
expect(colorFunction(undefined as unknown as null)).toBeUndefined();
});
test('getColorFunction returns undefined for null values on string comparators', () => {
const operators = [
Comparator.BeginsWith,
Comparator.EndsWith,
Comparator.Containing,
Comparator.NotContaining,
];
operators.forEach(operator => {
const colorFunction = getColorFunction(
{
operator,
targetValue: 'test',
colorScheme: '#FF0000',
column: 'name',
},
strValues,
);
expect(colorFunction(null)).toBeUndefined();
expect(colorFunction(undefined as unknown as null)).toBeUndefined();
});
});
test('getColorFunction returns undefined for empty and whitespace string values', () => {
const colorFunction = getColorFunction(
{
operator: Comparator.LessThan,
targetValue: 50,
colorScheme: '#FF0000',
column: 'count',
},
countValues,
);
expect(colorFunction('' as unknown as number)).toBeUndefined();
expect(colorFunction(' ' as unknown as number)).toBeUndefined();
expect(colorFunction('\t' as unknown as number)).toBeUndefined();
});
test('getColorFunction IsNull still matches null values', () => {
const colorFunction = getColorFunction(
{
operator: Comparator.IsNull,
targetValue: '',
colorScheme: '#FF0000',
column: 'isMember',
},
boolValues,
);
expect(colorFunction(null)).toEqual('#FF0000FF');
expect(colorFunction(true)).toBeUndefined();
});
test('correct column config', () => {
const columnConfig = [
{

View File

@@ -42,6 +42,21 @@ test('renders SQLEditor', async () => {
});
});
test('SQLEditor uses fontFamilyCode from theme', async () => {
const ref = createRef<AceEditor>();
const { container } = render(<SQLEditor ref={ref as React.Ref<never>} />);
await waitFor(() => {
expect(container.querySelector(selector)).toBeInTheDocument();
});
const editorInstance = ref.current?.editor;
const fontFamily = editorInstance?.getOption('fontFamily');
// Verify font family is set (not undefined) and contains a monospace font
expect(fontFamily).toBeDefined();
expect(fontFamily).toMatch(/mono|courier|consolas/i);
});
test('renders FullSQLEditor', async () => {
const { container } = render(<FullSQLEditor />);

View File

@@ -114,7 +114,7 @@ export function AsyncAceEditor(
defaultMode,
defaultTheme,
defaultTabSize = 2,
fontFamily = 'Menlo, Consolas, Courier New, Ubuntu Mono, source-code-pro, Lucida Console, monospace',
fontFamily,
placeholder,
}: AsyncAceEditorOptions = {},
) {
@@ -171,6 +171,7 @@ export function AsyncAceEditor(
ref,
) {
const token = useTheme();
const editorFontFamily = fontFamily || token.fontFamilyCode;
const langTools = acequire('ace/ext/language_tools');
const setCompleters = useCallback(
@@ -436,7 +437,7 @@ export function AsyncAceEditor(
theme={theme}
tabSize={tabSize}
defaultValue={defaultValue}
setOptions={{ fontFamily }}
setOptions={{ fontFamily: editorFontFamily }}
{...props}
/>
</>

View File

@@ -34,6 +34,11 @@ test('works with an onClick handler', () => {
expect(mockAction).toHaveBeenCalled();
});
test('renders with monospace prop', () => {
const { getByText } = render(<Label monospace>monospace text</Label>);
expect(getByText('monospace text')).toBeInTheDocument();
});
// test stories from the storybook!
test('renders all the storybook gallery variants', () => {
// @ts-expect-error: Suppress TypeScript error for LabelGallery usage

View File

@@ -233,7 +233,7 @@ class ScatterPlotGlowOverlay extends PureComponent<ScatterPlotGlowOverlayProps>
) {
ctx.beginPath();
if (location.properties.cluster) {
let clusterLabel = clusterLabelMap[i];
const clusterLabel = clusterLabelMap[i];
// Validate clusterLabel is a finite number before using it for radius calculation
const numericLabel = Number(clusterLabel);
const safeNumericLabel = Number.isFinite(numericLabel)

View File

@@ -29,7 +29,7 @@
"access": "public"
},
"dependencies": {
"@types/geojson": "^7946.0.10",
"@types/geojson": "^7946.0.16",
"geojson": "^0.5.0",
"lodash": "^4.17.23"
},

View File

@@ -142,8 +142,8 @@ const naturalSort: SortFunction = (as, bs) => {
}
// finally, "smart" string sorting per http://stackoverflow.com/a/4373421/112871
let a = String(as);
let b = String(bs);
const a = String(as);
const b = String(bs);
if (a === b) {
return 0;
}

View File

@@ -614,9 +614,7 @@ describe('plugin-chart-table', () => {
expect(getComputedStyle(screen.getByTitle('2467063')).background).toBe(
'',
);
expect(getComputedStyle(screen.getByText('N/A')).background).toBe(
'rgba(172, 225, 196, 1)',
);
expect(getComputedStyle(screen.getByText('N/A')).background).toBe('');
});
test('should display original label in grouped headers', () => {
const props = transformProps(testData.comparison);

View File

@@ -531,6 +531,7 @@ const ResultSet = ({
placement="left"
>
<Label
monospace
css={css`
line-height: ${theme.fontSizeLG}px;
`}

View File

@@ -39,7 +39,7 @@ export default function RowCountLabel(props: RowCountLabelProps) {
limitReached || (rowcount === 0 && !loading) ? 'error' : 'default';
const formattedRowCount = getNumberFormatter()(rowcount);
const labelText = (
<Label type={type}>
<Label type={type} monospace>
{loading ? (
t('Loading...')
) : (

View File

@@ -221,7 +221,7 @@ test('should render a DeleteComponentButton in editMode', () => {
/* oxlint-disable-next-line jest/no-disabled-tests */
test.skip('should render a BackgroundStyleDropdown when focused', () => {
let { rerender } = setup({ component: rowWithoutChildren });
const { rerender } = setup({ component: rowWithoutChildren });
expect(screen.queryByTestId('background-style-dropdown')).toBeFalsy();
// we cannot set props on the Row because of the WithDragDropContext wrapper

View File

@@ -24,6 +24,7 @@ import {
waitFor,
} from 'spec/helpers/testing-library';
import { SupersetClient } from '@superset-ui/core';
import rison from 'rison';
import RoleListEditModal from './RoleListEditModal';
import {
updateRoleName,
@@ -208,4 +209,40 @@ describe('RoleListEditModal', () => {
expect(screen.getByTitle('User Name')).toBeInTheDocument();
expect(screen.getByTitle('Email')).toBeInTheDocument();
});
test('fetches users with correct role relationship filter', async () => {
const mockGet = SupersetClient.get as jest.Mock;
mockGet.mockResolvedValue({
json: {
count: 0,
result: [],
},
});
render(<RoleListEditModal {...mockProps} />);
await waitFor(() => {
expect(mockGet).toHaveBeenCalled();
});
// verify the endpoint and query parameters
const callArgs = mockGet.mock.calls[0][0];
expect(callArgs.endpoint).toContain('/api/v1/security/users/');
const urlMatch = callArgs.endpoint.match(/\?q=(.+)/);
expect(urlMatch).toBeTruthy();
const decodedQuery = rison.decode(urlMatch[1]);
expect(decodedQuery).toEqual({
page_size: 100,
page: 0,
filters: [
{
col: 'roles',
opr: 'rel_m_m',
value: mockRole.id,
},
],
});
});
});

View File

@@ -104,7 +104,7 @@ function RoleListEditModal({
permissions,
groups,
}: RoleListEditModalProps) {
const { id, name, permission_ids, user_ids, group_ids } = role;
const { id, name, permission_ids, group_ids } = role;
const [activeTabKey, setActiveTabKey] = useState(roleTabs.edit.key);
const { addDangerToast, addSuccessToast } = useToasts();
const [roleUsers, setRoleUsers] = useState<UserObject[]>([]);
@@ -112,13 +112,7 @@ function RoleListEditModal({
const formRef = useRef<FormInstance | null>(null);
useEffect(() => {
if (!user_ids.length) {
setRoleUsers([]);
setLoadingRoleUsers(false);
return;
}
const filters = [{ col: 'id', opr: 'in', value: user_ids }];
const filters = [{ col: 'roles', opr: 'rel_m_m', value: id }];
fetchPaginatedData({
endpoint: `/api/v1/security/users/`,
@@ -137,7 +131,7 @@ function RoleListEditModal({
email: user.email,
}),
});
}, [user_ids]);
}, [id]);
useEffect(() => {
if (!loadingRoleUsers && formRef.current && roleUsers.length >= 0) {

View File

@@ -25,7 +25,7 @@ import {
} from '@superset-ui/core/utils/dates';
import { useEffect, useMemo } from 'react';
import { Link, useParams } from 'react-router-dom';
import { Tooltip } from '@superset-ui/core/components';
import { Label, Tooltip } from '@superset-ui/core/components';
import { ListView } from 'src/components';
import SubMenu from 'src/features/home/SubMenu';
import withToasts from 'src/components/MessageToasts/withToasts';
@@ -149,8 +149,14 @@ function ExecutionLog({
},
}: {
row: { original: AnnotationObject };
}) =>
fDuration(new Date(startDttm).getTime(), new Date(endDttm).getTime()),
}) => (
<Label monospace>
{fDuration(
new Date(startDttm).getTime(),
new Date(endDttm).getTime(),
)}
</Label>
),
Header: t('Duration'),
disableSortBy: true,
id: 'duration',

View File

@@ -936,7 +936,7 @@ THEME_DEFAULT: Theme = {
# Fonts
"fontUrls": [],
"fontFamily": "Inter, Helvetica, Arial, sans-serif",
"fontFamilyCode": "'Fira Code', 'Courier New', monospace",
"fontFamilyCode": "'IBM Plex Mono', 'Courier New', monospace",
# Extra tokens
"transitionTiming": 0.3,
"brandIconMaxWidth": 37,

View File

@@ -21,10 +21,12 @@ from datetime import datetime
from typing import Any, Dict, List
import dateutil.parser
from sqlalchemy import select
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.orm import Query
from superset.connectors.sqla.models import SqlaTable, SqlMetric, TableColumn
from superset.daos.base import BaseDAO
from superset.daos.base import BaseDAO, ColumnOperator, ColumnOperatorEnum
from superset.extensions import db
from superset.models.core import Database
from superset.models.dashboard import Dashboard
@@ -35,8 +37,10 @@ from superset.views.base import DatasourceFilter
logger = logging.getLogger(__name__)
# Custom filterable fields for datasets
DATASET_CUSTOM_FIELDS: dict[str, list[str]] = {}
# Custom filterable fields for datasets (not direct model columns)
DATASET_CUSTOM_FIELDS: dict[str, list[str]] = {
"database_name": ["eq", "like", "ilike"],
}
class DatasetDAO(BaseDAO[SqlaTable]):
@@ -49,6 +53,37 @@ class DatasetDAO(BaseDAO[SqlaTable]):
base_filter = DatasourceFilter
@classmethod
def apply_column_operators(
cls,
query: Query,
column_operators: list[ColumnOperator] | None = None,
) -> Query:
"""Override to handle database_name filter via subquery on Database.
database_name lives on Database, not SqlaTable, so we intercept it
here and use a subquery to avoid duplicate joins with DatasourceFilter.
"""
if not column_operators:
return query
remaining_operators: list[ColumnOperator] = []
for c in column_operators:
if not isinstance(c, ColumnOperator):
c = ColumnOperator.model_validate(c)
if c.col == "database_name":
operator_enum = ColumnOperatorEnum(c.opr)
subq = select(Database.id).where(
operator_enum.apply(Database.database_name, c.value)
)
query = query.filter(SqlaTable.database_id.in_(subq))
else:
remaining_operators.append(c)
if remaining_operators:
query = super().apply_column_operators(query, remaining_operators)
return query
@staticmethod
def get_database_by_id(database_id: int) -> Database | None:
try:

View File

@@ -25,6 +25,7 @@ from urllib.parse import parse_qs, urlparse
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.auth import has_dataset_access
from superset.mcp_service.chart.chart_utils import (
analyze_chart_capabilities,
@@ -132,23 +133,24 @@ async def generate_chart( # noqa: C901
await ctx.debug(
"Validating chart request: dataset_id=%s" % (request.dataset_id,)
)
from superset.mcp_service.chart.validation import ValidationPipeline
with event_logger.log_context(action="mcp.generate_chart.validation"):
from superset.mcp_service.chart.validation import ValidationPipeline
validation_result = ValidationPipeline.validate_request_with_warnings(
request.model_dump()
)
validation_result = ValidationPipeline.validate_request_with_warnings(
request.model_dump()
)
if validation_result.is_valid and validation_result.request is not None:
# Use the validated request going forward
request = validation_result.request
if validation_result.is_valid and validation_result.request is not None:
# Use the validated request going forward
request = validation_result.request
# Capture runtime warnings (informational, not blocking)
if validation_result.warnings:
runtime_warnings = validation_result.warnings.get("warnings", [])
if runtime_warnings:
await ctx.info(
"Runtime suggestions: %s" % ("; ".join(runtime_warnings[:3]),)
)
# Capture runtime warnings (informational, not blocking)
if validation_result.warnings:
runtime_warnings = validation_result.warnings.get("warnings", [])
if runtime_warnings:
await ctx.info(
"Runtime suggestions: %s" % ("; ".join(runtime_warnings[:3]),)
)
if not validation_result.is_valid:
execution_time = int((time.time() - start_time) * 1000)
@@ -197,35 +199,38 @@ async def generate_chart( # noqa: C901
from superset.daos.dataset import DatasetDAO
await ctx.debug("Looking up dataset: dataset_id=%s" % (request.dataset_id,))
dataset = None
if isinstance(request.dataset_id, int) or (
isinstance(request.dataset_id, str) and request.dataset_id.isdigit()
):
dataset_id = (
int(request.dataset_id)
if isinstance(request.dataset_id, str)
else request.dataset_id
)
dataset = DatasetDAO.find_by_id(dataset_id)
# SECURITY FIX: Also validate permissions for numeric ID access
if dataset and not has_dataset_access(dataset):
logger.warning(
"User %s attempted to access dataset %s without permission",
ctx.user.username if hasattr(ctx, "user") else "unknown",
dataset_id,
with event_logger.log_context(action="mcp.generate_chart.dataset_lookup"):
dataset = None
if isinstance(request.dataset_id, int) or (
isinstance(request.dataset_id, str) and request.dataset_id.isdigit()
):
dataset_id = (
int(request.dataset_id)
if isinstance(request.dataset_id, str)
else request.dataset_id
)
dataset = None # Treat as not found
else:
# SECURITY FIX: Try UUID lookup with permission validation
dataset = DatasetDAO.find_by_id(request.dataset_id, id_column="uuid")
# Validate permissions for UUID-based access
if dataset and not has_dataset_access(dataset):
logger.warning(
"User %s attempted access dataset %s via UUID",
ctx.user.username if hasattr(ctx, "user") else "unknown",
request.dataset_id,
dataset = DatasetDAO.find_by_id(dataset_id)
# SECURITY FIX: Also validate permissions for numeric ID access
if dataset and not has_dataset_access(dataset):
logger.warning(
"User %s attempted to access dataset %s without permission",
ctx.user.username if hasattr(ctx, "user") else "unknown",
dataset_id,
)
dataset = None # Treat as not found
else:
# SECURITY FIX: Try UUID lookup with permission validation
dataset = DatasetDAO.find_by_id(
request.dataset_id, id_column="uuid"
)
dataset = None # Treat as not found
# Validate permissions for UUID-based access
if dataset and not has_dataset_access(dataset):
logger.warning(
"User %s attempted access dataset %s via UUID",
ctx.user.username if hasattr(ctx, "user") else "unknown",
request.dataset_id,
)
dataset = None # Treat as not found
if not dataset:
await ctx.error(
@@ -267,22 +272,25 @@ async def generate_chart( # noqa: C901
)
try:
command = CreateChartCommand(
{
"slice_name": chart_name,
"viz_type": form_data["viz_type"],
"datasource_id": dataset.id,
"datasource_type": "table",
"params": json.dumps(form_data),
}
)
with event_logger.log_context(action="mcp.generate_chart.db_write"):
command = CreateChartCommand(
{
"slice_name": chart_name,
"viz_type": form_data["viz_type"],
"datasource_id": dataset.id,
"datasource_type": "table",
"params": json.dumps(form_data),
}
)
chart = command.run()
chart_id = chart.id
chart = command.run()
chart_id = chart.id
# Ensure chart was created successfully before committing
if not chart or not chart.id:
raise Exception("Chart creation failed - no chart ID returned")
# Ensure chart was created successfully before committing
if not chart or not chart.id:
raise RuntimeError(
"Chart creation failed - no chart ID returned"
)
await ctx.info(
"Chart created successfully: chart_id=%s, chart_name=%s"
@@ -301,35 +309,39 @@ async def generate_chart( # noqa: C901
# Generate form_data_key for saved charts (needed for chatbot rendering)
try:
from superset.commands.explore.form_data.parameters import (
CommandParameters,
)
from superset.mcp_service.commands.create_form_data import (
MCPCreateFormDataCommand,
)
from superset.utils.core import DatasourceType
with event_logger.log_context(
action="mcp.generate_chart.form_data_cache"
):
from superset.commands.explore.form_data.parameters import (
CommandParameters,
)
from superset.mcp_service.commands.create_form_data import (
MCPCreateFormDataCommand,
)
from superset.utils.core import DatasourceType
# Add datasource to form_data for the cache
form_data_with_datasource = {
**form_data,
"datasource": f"{dataset.id}__table",
}
# Add datasource to form_data for the cache
form_data_with_datasource = {
**form_data,
"datasource": f"{dataset.id}__table",
}
cmd_params = CommandParameters(
datasource_type=DatasourceType.TABLE,
datasource_id=dataset.id,
chart_id=chart.id,
tab_id=None,
form_data=json.dumps(form_data_with_datasource),
)
form_data_key = MCPCreateFormDataCommand(cmd_params).run()
await ctx.debug(
"Generated form_data_key for saved chart: form_data_key=%s"
% (form_data_key,)
)
cmd_params = CommandParameters(
datasource_type=DatasourceType.TABLE,
datasource_id=dataset.id,
chart_id=chart.id,
tab_id=None,
form_data=json.dumps(form_data_with_datasource),
)
form_data_key = MCPCreateFormDataCommand(cmd_params).run()
await ctx.debug(
"Generated form_data_key for saved chart: "
"form_data_key=%s" % (form_data_key,)
)
except Exception as fdk_error:
logger.warning(
"Failed to generate form_data_key for saved chart: %s", fdk_error
"Failed to generate form_data_key for saved chart: %s",
fdk_error,
)
await ctx.warning(
"Failed to generate form_data_key: error=%s" % (str(fdk_error),)
@@ -383,60 +395,66 @@ async def generate_chart( # noqa: C901
"Generating previews: formats=%s" % (str(request.preview_formats),)
)
try:
for format_type in request.preview_formats:
await ctx.debug(
"Processing preview format: format=%s" % (format_type,)
)
if chart_id:
# For saved charts, use the existing preview generation
from superset.mcp_service.chart.tool.get_chart_preview import (
_get_chart_preview_internal,
GetChartPreviewRequest,
with event_logger.log_context(action="mcp.generate_chart.preview"):
for format_type in request.preview_formats:
await ctx.debug(
"Processing preview format: format=%s" % (format_type,)
)
preview_request = GetChartPreviewRequest(
identifier=str(chart_id), format=format_type
)
preview_result = await _get_chart_preview_internal(
preview_request, ctx
)
if hasattr(preview_result, "content"):
previews[format_type] = preview_result.content
else:
# For preview-only mode (save_chart=false)
# Note: Screenshot-based URL previews are not supported.
# Use the explore_url to view the chart interactively.
if format_type in ["ascii", "table", "vega_lite"]:
# Generate preview from form data without saved chart
from superset.mcp_service.chart.preview_utils import (
generate_preview_from_form_data,
if chart_id:
# For saved charts, use the existing preview
from superset.mcp_service.chart.tool.get_chart_preview import ( # noqa: E501
_get_chart_preview_internal,
GetChartPreviewRequest,
)
# Convert dataset_id to int only if it's numeric
if (
isinstance(request.dataset_id, str)
and request.dataset_id.isdigit()
):
dataset_id_for_preview = int(request.dataset_id)
elif isinstance(request.dataset_id, int):
dataset_id_for_preview = request.dataset_id
else:
# Skip preview generation for non-numeric dataset IDs
logger.warning(
"Cannot generate preview for non-numeric "
preview_request = GetChartPreviewRequest(
identifier=str(chart_id), format=format_type
)
preview_result = await _get_chart_preview_internal(
preview_request, ctx
)
if hasattr(preview_result, "content"):
previews[format_type] = preview_result.content
else:
# For preview-only mode (save_chart=false)
# Note: Screenshot-based URL previews are not
# supported. Use explore_url to view interactively.
if format_type in [
"ascii",
"table",
"vega_lite",
]:
# Generate preview from form data
from superset.mcp_service.chart.preview_utils import (
generate_preview_from_form_data,
)
continue
preview_result = generate_preview_from_form_data(
form_data=form_data,
dataset_id=dataset_id_for_preview,
preview_format=format_type,
)
# Convert dataset_id to int only if numeric
if (
isinstance(request.dataset_id, str)
and request.dataset_id.isdigit()
):
dataset_id_for_preview = int(request.dataset_id)
elif isinstance(request.dataset_id, int):
dataset_id_for_preview = request.dataset_id
else:
# Skip for non-numeric dataset IDs
logger.warning(
"Cannot generate preview for"
" non-numeric dataset IDs"
)
continue
if not hasattr(preview_result, "error"):
previews[format_type] = preview_result
preview_result = generate_preview_from_form_data(
form_data=form_data,
dataset_id=dataset_id_for_preview,
preview_format=format_type,
)
if not hasattr(preview_result, "error"):
previews[format_type] = preview_result
except Exception as e:
# Log warning but don't fail the entire request

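All of the sub-operation blocks above share one shape. A minimal sketch of the pattern, relying on `event_logger.log_context` being a timing context manager that emits one log row (action plus duration) per block, as the hunks in this file use it:

```python
from superset.extensions import event_logger

# Each named block becomes its own row in the logs table, nested under the
# middleware-level "mcp_tool_call" entry for the overall tool invocation.
with event_logger.log_context(action="mcp.generate_chart.validation"):
    ...  # wrapped work; this block's wall-clock time is recorded as duration_ms
```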
View File

@@ -29,6 +29,7 @@ from superset_core.mcp import tool
if TYPE_CHECKING:
from superset.models.slice import Slice
from superset.extensions import event_logger
from superset.mcp_service.chart.schemas import (
ChartData,
ChartError,
@@ -82,25 +83,27 @@ async def get_chart_data( # noqa: C901
from superset.utils import json as utils_json
# Find the chart
chart = None
if isinstance(request.identifier, int) or (
isinstance(request.identifier, str) and request.identifier.isdigit()
):
chart_id = (
int(request.identifier)
if isinstance(request.identifier, str)
else request.identifier
)
await ctx.debug(
"Performing ID-based chart lookup: chart_id=%s" % (chart_id,)
)
chart = ChartDAO.find_by_id(chart_id)
else:
await ctx.debug(
"Performing UUID-based chart lookup: uuid=%s" % (request.identifier,)
)
# Try UUID lookup using DAO flexible method
chart = ChartDAO.find_by_id(request.identifier, id_column="uuid")
with event_logger.log_context(action="mcp.get_chart_data.chart_lookup"):
chart = None
if isinstance(request.identifier, int) or (
isinstance(request.identifier, str) and request.identifier.isdigit()
):
chart_id = (
int(request.identifier)
if isinstance(request.identifier, str)
else request.identifier
)
await ctx.debug(
"Performing ID-based chart lookup: chart_id=%s" % (chart_id,)
)
chart = ChartDAO.find_by_id(chart_id)
else:
await ctx.debug(
"Performing UUID-based chart lookup: uuid=%s"
% (request.identifier,)
)
# Try UUID lookup using DAO flexible method
chart = ChartDAO.find_by_id(request.identifier, id_column="uuid")
if not chart:
await ctx.error("Chart not found: identifier=%s" % (request.identifier,))
@@ -232,8 +235,9 @@ async def get_chart_data( # noqa: C901
)
# Execute the query
command = ChartDataCommand(query_context)
result = command.run()
with event_logger.log_context(action="mcp.get_chart_data.query_execution"):
command = ChartDataCommand(query_context)
result = command.run()
# Handle empty query results for certain chart types
if not result or ("queries" not in result) or len(result["queries"]) == 0:
@@ -385,21 +389,27 @@ async def get_chart_data( # noqa: C901
# Handle different export formats
if request.format == "csv":
return _export_data_as_csv(
chart,
data[: request.limit] if request.limit else data,
raw_columns,
cache_status,
performance,
)
with event_logger.log_context(
action="mcp.get_chart_data.format_conversion"
):
return _export_data_as_csv(
chart,
data[: request.limit] if request.limit else data,
raw_columns,
cache_status,
performance,
)
elif request.format == "excel":
return _export_data_as_excel(
chart,
data[: request.limit] if request.limit else data,
raw_columns,
cache_status,
performance,
)
with event_logger.log_context(
action="mcp.get_chart_data.format_conversion"
):
return _export_data_as_excel(
chart,
data[: request.limit] if request.limit else data,
raw_columns,
cache_status,
performance,
)
await ctx.report_progress(4, 4, "Building response")

View File

@@ -24,6 +24,7 @@ import logging
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.chart.schemas import (
ChartError,
ChartInfo,
@@ -71,16 +72,17 @@ async def get_chart_info(
"Retrieving chart information: identifier=%s" % (request.identifier,)
)
tool = ModelGetInfoCore(
dao_class=ChartDAO,
output_schema=ChartInfo,
error_schema=ChartError,
serializer=serialize_chart_object,
supports_slug=False, # Charts don't have slugs
logger=logger,
)
with event_logger.log_context(action="mcp.get_chart_info.lookup"):
tool = ModelGetInfoCore(
dao_class=ChartDAO,
output_schema=ChartInfo,
error_schema=ChartError,
serializer=serialize_chart_object,
supports_slug=False, # Charts don't have slugs
logger=logger,
)
result = tool.run_tool(request.identifier)
result = tool.run_tool(request.identifier)
if isinstance(result, ChartInfo):
await ctx.info(

View File

@@ -25,6 +25,7 @@ from typing import Any, Dict, List, Protocol
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.chart.schemas import (
AccessibilityMetadata,
ASCIIPreview,
@@ -1807,65 +1808,71 @@ async def _get_chart_preview_internal( # noqa: C901
from superset.daos.chart import ChartDAO
# Find the chart
chart: Any = None
if isinstance(request.identifier, int) or (
isinstance(request.identifier, str) and request.identifier.isdigit()
):
chart_id = (
int(request.identifier)
if isinstance(request.identifier, str)
else request.identifier
)
await ctx.debug(
"Performing ID-based chart lookup: chart_id=%s" % (chart_id,)
)
chart = ChartDAO.find_by_id(chart_id)
else:
await ctx.debug(
"Performing UUID-based chart lookup: uuid=%s" % (request.identifier,)
)
# Try UUID lookup using DAO flexible method
chart = ChartDAO.find_by_id(request.identifier, id_column="uuid")
# If not found and looks like a form_data_key, try to create transient chart
if (
not chart
and isinstance(request.identifier, str)
and len(request.identifier) > 8
with event_logger.log_context(action="mcp.get_chart_preview.chart_lookup"):
chart: Any = None
if isinstance(request.identifier, int) or (
isinstance(request.identifier, str) and request.identifier.isdigit()
):
# This might be a form_data_key, try to get form data from cache
from superset.commands.explore.form_data.get import GetFormDataCommand
from superset.commands.explore.form_data.parameters import (
CommandParameters,
chart_id = (
int(request.identifier)
if isinstance(request.identifier, str)
else request.identifier
)
await ctx.debug(
"Performing ID-based chart lookup: chart_id=%s" % (chart_id,)
)
chart = ChartDAO.find_by_id(chart_id)
else:
await ctx.debug(
"Performing UUID-based chart lookup: uuid=%s"
% (request.identifier,)
)
# Try UUID lookup using DAO flexible method
chart = ChartDAO.find_by_id(request.identifier, id_column="uuid")
try:
cmd_params = CommandParameters(key=request.identifier)
cmd = GetFormDataCommand(cmd_params)
form_data_json = cmd.run()
if form_data_json:
from superset.utils import json as utils_json
form_data = utils_json.loads(form_data_json)
# Create a transient chart object from form data
class TransientChart:
def __init__(self, form_data: Dict[str, Any]):
self.id = None
self.slice_name = "Unsaved Chart Preview"
self.viz_type = form_data.get("viz_type", "table")
self.datasource_id = None
self.datasource_type = "table"
self.params = utils_json.dumps(form_data)
self.form_data = form_data
self.uuid = None
chart = TransientChart(form_data)
except Exception as e:
# Form data key not found or invalid
logger.debug(
"Failed to get form data for key %s: %s", request.identifier, e
# If not found and looks like a form_data_key, try transient
if (
not chart
and isinstance(request.identifier, str)
and len(request.identifier) > 8
):
# This might be a form_data_key
from superset.commands.explore.form_data.get import (
GetFormDataCommand,
)
from superset.commands.explore.form_data.parameters import (
CommandParameters,
)
try:
cmd_params = CommandParameters(key=request.identifier)
cmd = GetFormDataCommand(cmd_params)
form_data_json = cmd.run()
if form_data_json:
from superset.utils import json as utils_json
form_data = utils_json.loads(form_data_json)
# Create a transient chart object from form data
class TransientChart:
def __init__(self, form_data: Dict[str, Any]):
self.id = None
self.slice_name = "Unsaved Chart Preview"
self.viz_type = form_data.get("viz_type", "table")
self.datasource_id = None
self.datasource_type = "table"
self.params = utils_json.dumps(form_data)
self.form_data = form_data
self.uuid = None
chart = TransientChart(form_data)
except (ValueError, KeyError, AttributeError, TypeError) as e:
# Form data key not found or invalid
logger.debug(
"Failed to get form data for key %s: %s",
request.identifier,
e,
)
if not chart:
await ctx.error("Chart not found: identifier=%s" % (request.identifier,))
@@ -1911,8 +1918,11 @@ async def _get_chart_preview_internal( # noqa: C901
)
# Handle different preview formats using strategy pattern
preview_generator = PreviewFormatGenerator(chart, request)
content = preview_generator.generate()
with event_logger.log_context(
action="mcp.get_chart_preview.preview_generation"
):
preview_generator = PreviewFormatGenerator(chart, request)
content = preview_generator.generate()
if isinstance(content, ChartError):
await ctx.error(
@@ -1930,18 +1940,19 @@ async def _get_chart_preview_internal( # noqa: C901
await ctx.report_progress(3, 3, "Building response")
# Create performance and accessibility metadata
execution_time = int((time.time() - start_time) * 1000)
performance = PerformanceMetadata(
query_duration_ms=execution_time,
cache_status="miss",
optimization_suggestions=[],
)
with event_logger.log_context(action="mcp.get_chart_preview.metadata"):
execution_time = int((time.time() - start_time) * 1000)
performance = PerformanceMetadata(
query_duration_ms=execution_time,
cache_status="miss",
optimization_suggestions=[],
)
accessibility = AccessibilityMetadata(
color_blind_safe=True,
alt_text=f"Preview of {chart.slice_name or f'Chart {chart.id}'}",
high_contrast_available=False,
)
accessibility = AccessibilityMetadata(
color_blind_safe=True,
alt_text=f"Preview of {chart.slice_name or f'Chart {chart.id}'}",
high_contrast_available=False,
)
await ctx.debug(
"Preview generation completed: execution_time_ms=%s, content_type=%s"

View File

@@ -28,6 +28,7 @@ from superset_core.mcp import tool
if TYPE_CHECKING:
from superset.models.slice import Slice
from superset.extensions import event_logger
from superset.mcp_service.chart.schemas import (
ChartFilter,
ChartInfo,
@@ -121,15 +122,16 @@ async def list_charts(request: ListChartsRequest, ctx: Context) -> ChartList:
)
try:
result = tool.run_tool(
filters=request.filters,
search=request.search,
select_columns=request.select_columns,
order_column=request.order_column,
order_direction=request.order_direction,
page=max(request.page - 1, 0),
page_size=request.page_size,
)
with event_logger.log_context(action="mcp.list_charts.query"):
result = tool.run_tool(
filters=request.filters,
search=request.search,
select_columns=request.select_columns,
order_column=request.order_column,
order_direction=request.order_direction,
page=max(request.page - 1, 0),
page_size=request.page_size,
)
count = len(result.charts) if hasattr(result, "charts") else 0
total_pages = getattr(result, "total_pages", None)
await ctx.info(
@@ -145,9 +147,10 @@ async def list_charts(request: ListChartsRequest, ctx: Context) -> ChartList:
"Applying field filtering via serialization context: columns=%s"
% (columns_to_filter,)
)
return result.model_dump(
mode="json", context={"select_columns": columns_to_filter}
)
with event_logger.log_context(action="mcp.list_charts.serialization"):
return result.model_dump(
mode="json", context={"select_columns": columns_to_filter}
)
except Exception as e:
await ctx.error("Failed to list charts: %s" % (str(e),))
raise

View File

@@ -25,6 +25,7 @@ import time
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.chart.chart_utils import (
analyze_chart_capabilities,
analyze_chart_semantics,
@@ -99,19 +100,20 @@ async def update_chart(
# Find the existing chart
from superset.daos.chart import ChartDAO
chart = None
if isinstance(request.identifier, int) or (
isinstance(request.identifier, str) and request.identifier.isdigit()
):
chart_id = (
int(request.identifier)
if isinstance(request.identifier, str)
else request.identifier
)
chart = ChartDAO.find_by_id(chart_id)
else:
# Try UUID lookup using DAO flexible method
chart = ChartDAO.find_by_id(request.identifier, id_column="uuid")
with event_logger.log_context(action="mcp.update_chart.chart_lookup"):
chart = None
if isinstance(request.identifier, int) or (
isinstance(request.identifier, str) and request.identifier.isdigit()
):
chart_id = (
int(request.identifier)
if isinstance(request.identifier, str)
else request.identifier
)
chart = ChartDAO.find_by_id(chart_id)
else:
# Try UUID lookup using DAO flexible method
chart = ChartDAO.find_by_id(request.identifier, id_column="uuid")
if not chart:
return GenerateChartResponse.model_validate(
@@ -132,21 +134,22 @@ async def update_chart(
# Update chart using Superset's command
from superset.commands.chart.update import UpdateChartCommand
# Generate new chart name if provided, otherwise keep existing
chart_name = (
request.chart_name
if request.chart_name
else chart.slice_name or generate_chart_name(request.config)
)
with event_logger.log_context(action="mcp.update_chart.db_write"):
# Generate new chart name if provided, otherwise keep existing
chart_name = (
request.chart_name
if request.chart_name
else chart.slice_name or generate_chart_name(request.config)
)
update_payload = {
"slice_name": chart_name,
"viz_type": new_form_data["viz_type"],
"params": json.dumps(new_form_data),
}
update_payload = {
"slice_name": chart_name,
"viz_type": new_form_data["viz_type"],
"params": json.dumps(new_form_data),
}
command = UpdateChartCommand(chart.id, update_payload)
updated_chart = command.run()
command = UpdateChartCommand(chart.id, update_payload)
updated_chart = command.run()
# Generate semantic analysis
capabilities = analyze_chart_capabilities(updated_chart, request.config)
@@ -176,21 +179,23 @@ async def update_chart(
previews = {}
if request.generate_preview:
try:
from superset.mcp_service.chart.tool.get_chart_preview import (
_get_chart_preview_internal,
GetChartPreviewRequest,
)
for format_type in request.preview_formats:
preview_request = GetChartPreviewRequest(
identifier=str(updated_chart.id), format=format_type
)
preview_result = await _get_chart_preview_internal(
preview_request, ctx
with event_logger.log_context(action="mcp.update_chart.preview"):
from superset.mcp_service.chart.tool.get_chart_preview import (
_get_chart_preview_internal,
GetChartPreviewRequest,
)
if hasattr(preview_result, "content"):
previews[format_type] = preview_result.content
for format_type in request.preview_formats:
preview_request = GetChartPreviewRequest(
identifier=str(updated_chart.id),
format=format_type,
)
preview_result = await _get_chart_preview_internal(
preview_request, ctx
)
if hasattr(preview_result, "content"):
previews[format_type] = preview_result.content
except Exception as e:
# Log warning but don't fail the entire request

View File

@@ -26,6 +26,7 @@ from typing import Any, Dict
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.chart.chart_utils import (
analyze_chart_capabilities,
analyze_chart_semantics,
@@ -65,23 +66,25 @@ def update_chart_preview(
start_time = time.time()
try:
# Map the new config to form_data format
# Pass dataset_id to enable column type checking for proper viz_type selection
new_form_data = map_config_to_form_data(
request.config, dataset_id=request.dataset_id
)
with event_logger.log_context(action="mcp.update_chart_preview.form_data"):
# Map the new config to form_data format
# Pass dataset_id to enable column type checking
new_form_data = map_config_to_form_data(
request.config, dataset_id=request.dataset_id
)
# Generate new explore link with updated form_data
explore_url = generate_explore_link(request.dataset_id, new_form_data)
# Generate new explore link with updated form_data
explore_url = generate_explore_link(request.dataset_id, new_form_data)
# Extract new form_data_key from the explore URL
new_form_data_key = None
if "form_data_key=" in explore_url:
new_form_data_key = explore_url.split("form_data_key=")[1].split("&")[0]
# Generate semantic analysis
capabilities = analyze_chart_capabilities(None, request.config)
semantics = analyze_chart_semantics(None, request.config)
with event_logger.log_context(action="mcp.update_chart_preview.metadata"):
# Generate semantic analysis
capabilities = analyze_chart_capabilities(None, request.config)
semantics = analyze_chart_semantics(None, request.config)
# Create performance metadata
execution_time = int((time.time() - start_time) * 1000)

View File

@@ -27,6 +27,7 @@ from typing import Any, Dict
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.dashboard.schemas import (
AddChartToDashboardRequest,
AddChartToDashboardResponse,
@@ -147,75 +148,79 @@ def add_chart_to_existing_dashboard(
from superset.commands.dashboard.update import UpdateDashboardCommand
from superset.daos.dashboard import DashboardDAO
# Validate dashboard exists
dashboard = DashboardDAO.find_by_id(request.dashboard_id)
if not dashboard:
return AddChartToDashboardResponse(
dashboard=None,
dashboard_url=None,
position=None,
error=f"Dashboard with ID {request.dashboard_id} not found",
# Validate dashboard and chart exist
with event_logger.log_context(action="mcp.add_chart_to_dashboard.validation"):
dashboard = DashboardDAO.find_by_id(request.dashboard_id)
if not dashboard:
return AddChartToDashboardResponse(
dashboard=None,
dashboard_url=None,
position=None,
error=(f"Dashboard with ID {request.dashboard_id} not found"),
)
# Get chart object for SQLAlchemy relationships and validation
from superset import db
from superset.models.slice import Slice
new_chart = db.session.get(Slice, request.chart_id)
if not new_chart:
return AddChartToDashboardResponse(
dashboard=None,
dashboard_url=None,
position=None,
error=f"Chart with ID {request.chart_id} not found",
)
# Check if chart is already in dashboard
current_chart_ids = [slice.id for slice in dashboard.slices]
if request.chart_id in current_chart_ids:
return AddChartToDashboardResponse(
dashboard=None,
dashboard_url=None,
position=None,
error=(
f"Chart {request.chart_id} is already in dashboard "
f"{request.dashboard_id}"
),
)
# Calculate layout position
with event_logger.log_context(action="mcp.add_chart_to_dashboard.layout"):
# Parse current layout
try:
current_layout = json.loads(dashboard.position_json or "{}")
except (json.JSONDecodeError, TypeError):
current_layout = {}
# Find position for new chart
row_index = _find_next_row_position(current_layout)
# Add chart and row to layout
chart_key, row_key = _add_chart_to_layout(
current_layout, new_chart, request.chart_id, row_index
)
# Get chart object for SQLAlchemy relationships and validation
from superset import db
from superset.models.slice import Slice
new_chart = db.session.get(Slice, request.chart_id)
if not new_chart:
return AddChartToDashboardResponse(
dashboard=None,
dashboard_url=None,
position=None,
error=f"Chart with ID {request.chart_id} not found",
)
# Check if chart is already in dashboard
current_chart_ids = [slice.id for slice in dashboard.slices]
if request.chart_id in current_chart_ids:
return AddChartToDashboardResponse(
dashboard=None,
dashboard_url=None,
position=None,
error=(
f"Chart {request.chart_id} is already in dashboard "
f"{request.dashboard_id}"
),
)
# Parse current layout
try:
current_layout = json.loads(dashboard.position_json or "{}")
except (json.JSONDecodeError, TypeError):
current_layout = {}
# Find position for new chart
row_index = _find_next_row_position(current_layout)
# Add chart and row to layout
chart_key, row_key = _add_chart_to_layout(
current_layout, new_chart, request.chart_id, row_index
)
# Ensure proper layout structure
_ensure_layout_structure(current_layout, row_key)
# Get chart objects for SQLAlchemy relationships
# Get existing chart objects
existing_chart_objects = dashboard.slices
# Combine existing and new chart objects (new_chart was retrieved above)
all_chart_objects = list(existing_chart_objects) + [new_chart]
# Prepare update data
update_data = {
"position_json": json.dumps(current_layout),
"slices": all_chart_objects, # Pass ORM objects, not IDs
}
# Ensure proper layout structure
_ensure_layout_structure(current_layout, row_key)
# Update the dashboard
command = UpdateDashboardCommand(request.dashboard_id, update_data)
updated_dashboard = command.run()
with event_logger.log_context(action="mcp.add_chart_to_dashboard.db_write"):
# Get existing chart objects
existing_chart_objects = dashboard.slices
# Combine existing and new chart objects
all_chart_objects = list(existing_chart_objects) + [new_chart]
# Prepare update data
update_data = {
"position_json": json.dumps(current_layout),
"slices": all_chart_objects, # Pass ORM objects, not IDs
}
# Update the dashboard
command = UpdateDashboardCommand(request.dashboard_id, update_data)
updated_dashboard = command.run()
# Convert to response format
from superset.mcp_service.dashboard.schemas import (

View File

@@ -27,6 +27,7 @@ from typing import Any, Dict, List
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.dashboard.schemas import (
DashboardInfo,
GenerateDashboardRequest,
@@ -137,57 +138,63 @@ def generate_dashboard(
from superset.commands.dashboard.create import CreateDashboardCommand
from superset.models.slice import Slice
chart_objects = (
db.session.query(Slice).filter(Slice.id.in_(request.chart_ids)).all()
)
found_chart_ids = [chart.id for chart in chart_objects]
# Check if all requested charts were found
missing_chart_ids = set(request.chart_ids) - set(found_chart_ids)
if missing_chart_ids:
return GenerateDashboardResponse(
dashboard=None,
dashboard_url=None,
error=f"Charts not found: {list(missing_chart_ids)}",
with event_logger.log_context(action="mcp.generate_dashboard.chart_validation"):
chart_objects = (
db.session.query(Slice).filter(Slice.id.in_(request.chart_ids)).all()
)
found_chart_ids = [chart.id for chart in chart_objects]
# Check if all requested charts were found
missing_chart_ids = set(request.chart_ids) - set(found_chart_ids)
if missing_chart_ids:
return GenerateDashboardResponse(
dashboard=None,
dashboard_url=None,
error=f"Charts not found: {list(missing_chart_ids)}",
)
# Create dashboard layout with chart objects
layout = _create_dashboard_layout(chart_objects)
with event_logger.log_context(action="mcp.generate_dashboard.layout"):
layout = _create_dashboard_layout(chart_objects)
# Prepare dashboard data
dashboard_data = {
"dashboard_title": request.dashboard_title,
"slug": None, # Let Superset auto-generate slug
"css": "",
"json_metadata": json.dumps(
{
"filter_scopes": {},
"expanded_slices": {},
"refresh_frequency": 0,
"timed_refresh_immune_slices": [],
"color_scheme": None,
"label_colors": {},
"shared_label_colors": {},
"color_scheme_domain": [],
"cross_filters_enabled": False,
"native_filter_configuration": [],
"global_chart_configuration": {
"scope": {"rootPath": ["ROOT_ID"], "excluded": []}
},
"chart_configuration": {},
}
),
"position_json": json.dumps(layout),
"published": request.published,
"slices": chart_objects, # Pass ORM objects, not IDs
}
# Prepare dashboard data and create dashboard
with event_logger.log_context(action="mcp.generate_dashboard.db_write"):
dashboard_data = {
"dashboard_title": request.dashboard_title,
"slug": None, # Let Superset auto-generate slug
"css": "",
"json_metadata": json.dumps(
{
"filter_scopes": {},
"expanded_slices": {},
"refresh_frequency": 0,
"timed_refresh_immune_slices": [],
"color_scheme": None,
"label_colors": {},
"shared_label_colors": {},
"color_scheme_domain": [],
"cross_filters_enabled": False,
"native_filter_configuration": [],
"global_chart_configuration": {
"scope": {
"rootPath": ["ROOT_ID"],
"excluded": [],
}
},
"chart_configuration": {},
}
),
"position_json": json.dumps(layout),
"published": request.published,
"slices": chart_objects, # Pass ORM objects, not IDs
}
if request.description:
dashboard_data["description"] = request.description
if request.description:
dashboard_data["description"] = request.description
# Create the dashboard using Superset's command pattern
command = CreateDashboardCommand(dashboard_data)
dashboard = command.run()
# Create the dashboard using Superset's command pattern
command = CreateDashboardCommand(dashboard_data)
dashboard = command.run()
# Convert to our response format
from superset.mcp_service.dashboard.schemas import (

View File

@@ -28,6 +28,7 @@ from datetime import datetime, timezone
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.dashboard.schemas import (
dashboard_serializer,
DashboardError,
@@ -59,16 +60,17 @@ async def get_dashboard_info(
try:
from superset.daos.dashboard import DashboardDAO
tool = ModelGetInfoCore(
dao_class=DashboardDAO,
output_schema=DashboardInfo,
error_schema=DashboardError,
serializer=dashboard_serializer,
supports_slug=True, # Dashboards support slugs
logger=logger,
)
with event_logger.log_context(action="mcp.get_dashboard_info.lookup"):
tool = ModelGetInfoCore(
dao_class=DashboardDAO,
output_schema=DashboardInfo,
error_schema=DashboardError,
serializer=dashboard_serializer,
supports_slug=True, # Dashboards support slugs
logger=logger,
)
result = tool.run_tool(request.identifier)
result = tool.run_tool(request.identifier)
if isinstance(result, DashboardInfo):
await ctx.info(

View File

@@ -31,6 +31,7 @@ from superset_core.mcp import tool
if TYPE_CHECKING:
from superset.models.dashboard import Dashboard
from superset.extensions import event_logger
from superset.mcp_service.dashboard.schemas import (
DashboardFilter,
DashboardInfo,
@@ -123,15 +124,16 @@ async def list_dashboards(
logger=logger,
)
result = tool.run_tool(
filters=request.filters,
search=request.search,
select_columns=request.select_columns,
order_column=request.order_column,
order_direction=request.order_direction,
page=max(request.page - 1, 0),
page_size=request.page_size,
)
with event_logger.log_context(action="mcp.list_dashboards.query"):
result = tool.run_tool(
filters=request.filters,
search=request.search,
select_columns=request.select_columns,
order_column=request.order_column,
order_direction=request.order_direction,
page=max(request.page - 1, 0),
page_size=request.page_size,
)
count = len(result.dashboards) if hasattr(result, "dashboards") else 0
total_pages = getattr(result, "total_pages", None)
await ctx.info(
@@ -147,4 +149,7 @@ async def list_dashboards(
"Applying field filtering via serialization context: columns=%s"
% (columns_to_filter,)
)
return result.model_dump(mode="json", context={"select_columns": columns_to_filter})
with event_logger.log_context(action="mcp.list_dashboards.serialization"):
return result.model_dump(
mode="json", context={"select_columns": columns_to_filter}
)

View File

@@ -54,6 +54,7 @@ class DatasetFilter(ColumnOperator):
col: Literal[
"table_name",
"schema",
"database_name",
"owner",
"favorite",
] = Field(

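With `database_name` added to the allowed filter columns, requests like the one exercised in the regression tests further down now validate cleanly. For example (import path assumed from the surrounding modules):

```python
from superset.mcp_service.dataset.schemas import ListDatasetsRequest  # assumed path

request = ListDatasetsRequest(
    filters=[{"col": "database_name", "opr": "ilike", "value": "%dynamo%"}],
    select_columns=["id", "database_name", "table_name"],
)
```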
View File

@@ -28,6 +28,7 @@ from datetime import datetime, timezone
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.dataset.schemas import (
DatasetError,
DatasetInfo,
@@ -83,16 +84,17 @@ async def get_dataset_info(
try:
from superset.daos.dataset import DatasetDAO
tool = ModelGetInfoCore(
dao_class=DatasetDAO,
output_schema=DatasetInfo,
error_schema=DatasetError,
serializer=serialize_dataset_object,
supports_slug=False, # Datasets don't have slugs
logger=logger,
)
with event_logger.log_context(action="mcp.get_dataset_info.lookup"):
tool = ModelGetInfoCore(
dao_class=DatasetDAO,
output_schema=DatasetInfo,
error_schema=DatasetError,
serializer=serialize_dataset_object,
supports_slug=False, # Datasets don't have slugs
logger=logger,
)
result = tool.run_tool(request.identifier)
result = tool.run_tool(request.identifier)
if isinstance(result, DatasetInfo):
await ctx.info(

View File

@@ -31,6 +31,7 @@ from superset_core.mcp import tool
if TYPE_CHECKING:
from superset.connectors.sqla.models import SqlaTable
from superset.extensions import event_logger
from superset.mcp_service.dataset.schemas import (
DatasetFilter,
DatasetInfo,
@@ -129,15 +130,16 @@ async def list_datasets(request: ListDatasetsRequest, ctx: Context) -> DatasetLi
logger=logger,
)
result = tool.run_tool(
filters=request.filters,
search=request.search,
select_columns=request.select_columns,
order_column=request.order_column,
order_direction=request.order_direction,
page=max(request.page - 1, 0),
page_size=request.page_size,
)
with event_logger.log_context(action="mcp.list_datasets.query"):
result = tool.run_tool(
filters=request.filters,
search=request.search,
select_columns=request.select_columns,
order_column=request.order_column,
order_direction=request.order_direction,
page=max(request.page - 1, 0),
page_size=request.page_size,
)
await ctx.info(
"Datasets listed successfully: count=%s, total_count=%s, total_pages=%s"
@@ -156,9 +158,11 @@ async def list_datasets(request: ListDatasetsRequest, ctx: Context) -> DatasetLi
"Applying field filtering via serialization context: columns=%s"
% (columns_to_filter,)
)
return result.model_dump(
mode="json", context={"select_columns": columns_to_filter}
)
with event_logger.log_context(action="mcp.list_datasets.serialization"):
return result.model_dump(
mode="json",
context={"select_columns": columns_to_filter},
)
except Exception as e:
await ctx.error(

View File

@@ -28,6 +28,7 @@ from urllib.parse import parse_qs, urlparse
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.chart.chart_utils import (
generate_explore_link as generate_url,
map_config_to_form_data,
@@ -91,11 +92,11 @@ async def generate_explore_link(
try:
await ctx.report_progress(1, 3, "Converting configuration to form data")
# Map config to form_data using shared utilities
# Pass dataset_id to enable column type checking for proper viz_type selection
form_data = map_config_to_form_data(
request.config, dataset_id=request.dataset_id
)
with event_logger.log_context(action="mcp.generate_explore_link.form_data"):
# Map config to form_data using shared utilities
form_data = map_config_to_form_data(
request.config, dataset_id=request.dataset_id
)
# Add datasource to form_data for consistency with generate_chart
# Only set if not already present to avoid overwriting
@@ -112,8 +113,13 @@ async def generate_explore_link(
)
await ctx.report_progress(2, 3, "Generating explore URL")
# Generate explore link using shared utilities
explore_url = generate_url(dataset_id=request.dataset_id, form_data=form_data)
with event_logger.log_context(
action="mcp.generate_explore_link.url_generation"
):
# Generate explore link using shared utilities
explore_url = generate_url(
dataset_id=request.dataset_id, form_data=form_data
)
# Extract form_data_key from the explore URL using proper URL parsing
form_data_key = None

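The extraction step is plain standard-library URL parsing. A minimal sketch using the `parse_qs`/`urlparse` imports at the top of this module (URL illustrative):

```python
from urllib.parse import parse_qs, urlparse

explore_url = "https://superset.example.com/explore/?form_data_key=abc123"
params = parse_qs(urlparse(explore_url).query)
form_data_key = params.get("form_data_key", [None])[0]  # -> "abc123"
```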
View File

@@ -87,20 +87,47 @@ def _sanitize_error_for_logging(error: Exception) -> str:
return error_str
_SENSITIVE_PARAM_KEYS = frozenset(
{
"password",
"token",
"api_key",
"secret",
"credentials",
"authorization",
"cookie",
}
)
def _sanitize_params(params: dict[str, Any]) -> dict[str, Any]:
"""Remove sensitive fields from params before logging."""
if not isinstance(params, dict):
return params
return {
k: "[REDACTED]" if k.lower() in _SENSITIVE_PARAM_KEYS else v
for k, v in params.items()
}
class LoggingMiddleware(Middleware):
"""
Middleware that logs every MCP message (request and response) using the
event logger, so entries land in the same audit log system as core Superset
actions (Action Log UI, logs table, custom loggers). Also logs dashboard_id,
chart_id (slice_id), and dataset_id when present in tool params.
Tool calls are handled in on_call_tool() which wraps execution to capture
duration_ms. Non-tool messages (resource reads, prompts, etc.) are handled
in on_message().
"""
async def on_message(
self,
context: MiddlewareContext,
call_next: Callable[[MiddlewareContext], Awaitable[Any]],
) -> Any:
# Extract agent_id and user_id
def _extract_context_info(
self, context: MiddlewareContext
) -> tuple[
str | None, int | None, int | None, int | None, int | None, dict[str, Any]
]:
"""Extract agent_id, user_id, and entity IDs from context."""
agent_id = None
user_id = None
dashboard_id = None
@@ -113,18 +140,78 @@ class LoggingMiddleware(Middleware):
agent_id = getattr(context.session, "agent_id", None)
try:
user_id = get_user_id()
except Exception:
except (RuntimeError, AttributeError):
user_id = None
# Try to extract IDs from params
if isinstance(params, dict):
dashboard_id = params.get("dashboard_id")
# Chart ID may be under 'chart_id' or 'slice_id'
slice_id = params.get("chart_id") or params.get("slice_id")
dataset_id = params.get("dataset_id")
# Log to Superset's event logger (DB, Action Log UI, or custom)
return agent_id, user_id, dashboard_id, slice_id, dataset_id, params
async def on_call_tool(
self,
context: MiddlewareContext,
call_next: Callable[[MiddlewareContext], Awaitable[Any]],
) -> Any:
"""Log tool calls with duration tracking."""
agent_id, user_id, dashboard_id, slice_id, dataset_id, params = (
self._extract_context_info(context)
)
tool_name = getattr(context.message, "name", None)
start_time = time.time()
success = False
try:
result = await call_next(context)
success = True
return result
finally:
duration_ms = int((time.time() - start_time) * 1000)
event_logger.log(
user_id=user_id,
action="mcp_tool_call",
dashboard_id=dashboard_id,
duration_ms=duration_ms,
slice_id=slice_id,
referrer=None,
curated_payload={
"tool": tool_name,
"agent_id": agent_id,
"params": _sanitize_params(params),
"method": context.method,
"dashboard_id": dashboard_id,
"slice_id": slice_id,
"dataset_id": dataset_id,
"success": success,
},
)
logger.info(
"MCP tool call: tool=%s, agent_id=%s, user_id=%s, method=%s, "
"dashboard_id=%s, slice_id=%s, dataset_id=%s, duration_ms=%s, "
"success=%s",
tool_name,
agent_id,
user_id,
context.method,
dashboard_id,
slice_id,
dataset_id,
duration_ms,
success,
)
async def on_message(
self,
context: MiddlewareContext,
call_next: Callable[[MiddlewareContext], Awaitable[Any]],
) -> Any:
"""Log non-tool messages (resource reads, prompts, etc.)."""
agent_id, user_id, dashboard_id, slice_id, dataset_id, params = (
self._extract_context_info(context)
)
event_logger.log(
user_id=user_id,
action="mcp_tool_call",
action="mcp_message",
dashboard_id=dashboard_id,
duration_ms=None,
slice_id=slice_id,
@@ -132,24 +219,19 @@ class LoggingMiddleware(Middleware):
curated_payload={
"tool": getattr(context.message, "name", None),
"agent_id": agent_id,
"params": params,
"params": _sanitize_params(params),
"method": context.method,
"dashboard_id": dashboard_id,
"slice_id": slice_id,
"dataset_id": dataset_id,
},
)
# (Optional) also log to standard logger for debugging
logger.info(
"MCP tool call: tool=%s, agent_id=%s, user_id=%s, method=%s, "
"dashboard_id=%s, slice_id=%s, dataset_id=%s",
"MCP message: tool=%s, agent_id=%s, user_id=%s, method=%s",
getattr(context.message, "name", None),
agent_id,
user_id,
context.method,
dashboard_id,
slice_id,
dataset_id,
)
return await call_next(context)

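For reference, the sanitization shared by both log paths keeps keys intact and redacts only values whose lower-cased key is in the sensitive set. Given the `_sanitize_params` defined above:

```python
_sanitize_params({"database_id": 1, "Password": "hunter2", "sql": "SELECT 1"})
# -> {"database_id": 1, "Password": "[REDACTED]", "sql": "SELECT 1"}
```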
View File

@@ -33,6 +33,7 @@ from superset_core.mcp import tool
from superset.errors import ErrorLevel, SupersetError, SupersetErrorType
from superset.exceptions import SupersetErrorException, SupersetSecurityException
from superset.extensions import event_logger
from superset.mcp_service.sql_lab.schemas import (
ColumnInfo,
ExecuteSqlRequest,
@@ -72,28 +73,29 @@ async def execute_sql(request: ExecuteSqlRequest, ctx: Context) -> ExecuteSqlRes
from superset.models.core import Database
# 1. Get database and check access
database = db.session.query(Database).filter_by(id=request.database_id).first()
if not database:
raise SupersetErrorException(
SupersetError(
message=f"Database with ID {request.database_id} not found",
error_type=SupersetErrorType.DATABASE_NOT_FOUND_ERROR,
level=ErrorLevel.ERROR,
)
with event_logger.log_context(action="mcp.execute_sql.db_validation"):
database = (
db.session.query(Database).filter_by(id=request.database_id).first()
)
if not security_manager.can_access_database(database):
raise SupersetSecurityException(
SupersetError(
message=f"Access denied to database {database.database_name}",
error_type=SupersetErrorType.DATABASE_SECURITY_ACCESS_ERROR,
level=ErrorLevel.ERROR,
if not database:
raise SupersetErrorException(
SupersetError(
message=f"Database with ID {request.database_id} not found",
error_type=SupersetErrorType.DATABASE_NOT_FOUND_ERROR,
level=ErrorLevel.ERROR,
)
)
)
# 2. Build QueryOptions
# Caching is enabled by default to reduce database load.
# force_refresh bypasses cache when user explicitly requests fresh data.
if not security_manager.can_access_database(database):
raise SupersetSecurityException(
SupersetError(
message=(f"Access denied to database {database.database_name}"),
error_type=SupersetErrorType.DATABASE_SECURITY_ACCESS_ERROR,
level=ErrorLevel.ERROR,
)
)
# 2. Build QueryOptions and execute query
cache_opts = CacheOptions(force_refresh=True) if request.force_refresh else None
options = QueryOptions(
catalog=request.catalog,
@@ -106,10 +108,12 @@ async def execute_sql(request: ExecuteSqlRequest, ctx: Context) -> ExecuteSqlRes
)
# 3. Execute query
result = database.execute(request.sql, options)
with event_logger.log_context(action="mcp.execute_sql.query_execution"):
result = database.execute(request.sql, options)
# 4. Convert to MCP response format
response = _convert_to_response(result)
with event_logger.log_context(action="mcp.execute_sql.response_conversion"):
response = _convert_to_response(result)
# Log successful execution
if response.success:

View File

@@ -27,6 +27,7 @@ from urllib.parse import urlencode
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.sql_lab.schemas import (
OpenSqlLabRequest,
SqlLabResponse,
@@ -48,8 +49,9 @@ def open_sql_lab_with_context(
try:
from superset.daos.database import DatabaseDAO
# Validate database exists and is accessible
database = DatabaseDAO.find_by_id(request.database_connection_id)
with event_logger.log_context(action="mcp.open_sql_lab.db_validation"):
# Validate database exists and is accessible
database = DatabaseDAO.find_by_id(request.database_connection_id)
if not database:
return SqlLabResponse(
url="",

View File

@@ -25,6 +25,7 @@ import logging
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.mcp_core import InstanceInfoCore
from superset.mcp_service.system.schemas import (
GetSupersetInstanceInfoRequest,
@@ -98,7 +99,8 @@ def get_instance_info(
}
# Run the configurable core
return _instance_info_core.run_tool()
with event_logger.log_context(action="mcp.get_instance_info.metrics"):
return _instance_info_core.run_tool()
except Exception as e:
error_msg = f"Unexpected error in instance info: {str(e)}"

View File

@@ -29,6 +29,7 @@ from typing import Callable, Literal
from fastmcp import Context
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.common.schema_discovery import (
CHART_DEFAULT_COLUMNS,
CHART_SEARCH_COLUMNS,
@@ -154,8 +155,9 @@ async def get_schema(request: GetSchemaRequest, ctx: Context) -> GetSchemaRespon
)
# Create core instance and run (columns extracted dynamically)
core = factory()
schema_info = core.run_tool()
with event_logger.log_context(action="mcp.get_schema.discovery"):
core = factory()
schema_info = core.run_tool()
await ctx.debug(
f"Schema for {request.model_type}: "

View File

@@ -24,6 +24,7 @@ import platform
from flask import current_app
from superset_core.mcp import tool
from superset.extensions import event_logger
from superset.mcp_service.system.schemas import HealthCheckResponse
from superset.utils.version import get_version_metadata
@@ -64,9 +65,10 @@ async def health_check() -> HealthCheckResponse:
service_name = f"{app_name} MCP Service"
try:
# Get version from Superset version metadata
version_metadata = get_version_metadata()
version = version_metadata.get("version_string", "unknown")
with event_logger.log_context(action="mcp.health_check.status"):
# Get version from Superset version metadata
version_metadata = get_version_metadata()
version = version_metadata.get("version_string", "unknown")
response = HealthCheckResponse(
status="healthy",

View File

@@ -26,7 +26,7 @@ from flask_appbuilder.security.sqla.models import RegisterUser, Role
from flask_wtf.csrf import generate_csrf
from marshmallow import EXCLUDE, fields, post_load, Schema, ValidationError
from sqlalchemy import asc, desc
from sqlalchemy.orm import joinedload
from sqlalchemy.orm import selectinload
from superset.commands.dashboard.embedded.exceptions import (
EmbeddedDashboardNotFoundError,
@@ -298,7 +298,9 @@ class RoleRestAPI(BaseSupersetApi):
page_size = args.get("page_size", 10)
query = db.session.query(Role).options(
joinedload(Role.permissions), joinedload(Role.user)
selectinload(Role.permissions),
selectinload(Role.user),
selectinload(Role.groups),
)
filters = args.get("filters", [])
@@ -318,6 +320,8 @@ class RoleRestAPI(BaseSupersetApi):
if "name" in filter_dict:
query = query.filter(Role.name.ilike(f"%{filter_dict['name']}%"))
total_count = query.count()
roles = (
query.order_by(order_by).offset(page * page_size).limit(page_size).all()
)
@@ -334,7 +338,7 @@ class RoleRestAPI(BaseSupersetApi):
}
for role in roles
],
count=query.count(),
count=total_count,
ids=[role.id for role in roles],
)
except ForbiddenError as e:

View File

@@ -20,8 +20,9 @@
import jwt
import pytest
from flask.ctx import AppContext
from flask_wtf.csrf import generate_csrf
from superset import db
from superset import db, security_manager
from superset.daos.dashboard import EmbeddedDashboardDAO
from superset.models.dashboard import Dashboard
from superset.utils.urls import get_url_host
@@ -35,6 +36,76 @@ from tests.integration_tests.fixtures.birth_names_dashboard import (
)
@pytest.fixture
def create_test_roles_with_users(app_context: AppContext):
"""
Fixture that creates two test roles with specific users, permissions, and groups.
"""
user1, user2, user3 = [
security_manager.add_user(
username=f"test_user_{i}",
first_name="Test",
last_name=f"User{i}",
email=f"test_user_{i}@test.com",
role=[],
password="password", # noqa: S106
)
for i in range(3)
]
test_group = security_manager.add_group(
name="test_group_1",
label="Test Group 1",
description="Test group for role testing",
roles=[],
)
pvm1 = security_manager.add_permission_view_menu("can_read", "Dashboard")
pvm2 = security_manager.add_permission_view_menu("can_write", "Dashboard")
pvm3 = security_manager.add_permission_view_menu("can_read", "Chart")
test_role_1 = security_manager.add_role("test_role_1", [pvm1, pvm2])
test_role_1.user.append(user1)
test_role_1.user.append(user2)
test_role_1.groups.append(test_group)
test_role_2 = security_manager.add_role("test_role_2", [pvm3])
test_role_2.user.append(user3)
db.session.commit()
test_data = {
"test_role_1": {
"role": test_role_1,
"user_ids": sorted([user1.id, user2.id]),
"permission_ids": sorted([pvm1.id, pvm2.id]),
"group_ids": [test_group.id],
},
"test_role_2": {
"role": test_role_2,
"user_ids": [user3.id],
"permission_ids": [pvm3.id],
"group_ids": [],
},
}
yield test_data
# Cleanup
db.session.delete(test_role_1)
db.session.delete(test_role_2)
db.session.delete(user1)
db.session.delete(user2)
db.session.delete(user3)
db.session.delete(test_group)
db.session.commit()
@pytest.fixture
def inject_test_roles_data(request, create_test_roles_with_users):
request.instance.test_roles_data = create_test_roles_with_users
class TestSecurityCsrfApi(SupersetTestCase):
resource_name = "security"
@@ -293,3 +364,41 @@ class TestSecurityRolesApi(SupersetTestCase):
self.login(GAMMA_USERNAME)
response = self.client.get(self.show_uri)
self.assert403(response)
@pytest.mark.usefixtures("inject_test_roles_data")
def test_get_roles_with_specific_test_data(self):
"""
Security API: Test roles endpoint with specific test data
"""
self.login(ADMIN_USERNAME)
response = self.client.get(f"{self.show_uri}?q=(page_size:100)")
self.assert200(response)
data = json.loads(response.data.decode("utf-8"))
# Create a mapping of role names to API response
api_roles_by_name = {role["name"]: role for role in data["result"]}
# Verify test_role_1
assert "test_role_1" in api_roles_by_name, (
f"test_role_1 not found in API response. "
f"Available roles: {list(api_roles_by_name.keys())}"
)
role1_api = api_roles_by_name["test_role_1"]
role1_expected = self.test_roles_data["test_role_1"]
assert sorted(role1_api["user_ids"]) == role1_expected["user_ids"]
assert sorted(role1_api["permission_ids"]) == role1_expected["permission_ids"]
assert sorted(role1_api["group_ids"]) == role1_expected["group_ids"]
# Verify test_role_2
assert "test_role_2" in api_roles_by_name, (
f"test_role_2 not found in API response. "
f"Available roles: {list(api_roles_by_name.keys())}"
)
role2_api = api_roles_by_name["test_role_2"]
role2_expected = self.test_roles_data["test_role_2"]
assert sorted(role2_api["user_ids"]) == role2_expected["user_ids"]
assert sorted(role2_api["permission_ids"]) == role2_expected["permission_ids"]
assert role2_api["group_ids"] == role2_expected["group_ids"]

View File

@@ -67,10 +67,13 @@ def test_extension_config_full():
"views": {
"sqllab": {
"panels": [
{"id": "query_insights.main", "name": "Query Insights"}
],
},
},
{
"id": "query_insights.main",
"name": "Query Insights",
}
]
}
}
},
"moduleFederation": {"exposes": ["./index"]},
},

View File

@@ -934,6 +934,52 @@ async def test_invalid_filter_column_raises(mcp_server):
)
def test_database_name_filter_accepted():
"""Test that database_name is accepted as a valid filter column.
Regression test for TypeError 'encoding without a string argument' when
filtering datasets by database_name.
"""
request = ListDatasetsRequest(
filters=[{"col": "database_name", "opr": "ilike", "value": "%dynamo%"}],
select_columns=["id", "database_name", "table_name"],
)
assert len(request.filters) == 1
assert request.filters[0].col == "database_name"
assert request.filters[0].opr.value == "ilike"
assert request.filters[0].value == "%dynamo%"
@patch("superset.daos.dataset.DatasetDAO.list")
@pytest.mark.asyncio
async def test_list_datasets_with_database_name_filter(mock_list, mcp_server):
"""Test list_datasets with database_name filter via MCP client.
Regression test: previously database_name was not in the allowed filter
columns, causing a Pydantic ValidationError that downstream code could
not serialize properly (TypeError: encoding without a string argument).
"""
dataset = create_mock_dataset(
dataset_id=5,
table_name="dynamo_table",
database_name="dynamodb",
)
mock_list.return_value = ([dataset], 1)
async with Client(mcp_server) as client:
request = ListDatasetsRequest(
filters=[{"col": "database_name", "opr": "ilike", "value": "%dynamo%"}],
select_columns=["id", "database_name", "table_name"],
)
result = await client.call_tool(
"list_datasets", {"request": request.model_dump()}
)
assert result.content is not None
data = json.loads(result.content[0].text)
assert data["datasets"] is not None
assert len(data["datasets"]) == 1
assert data["datasets"][0]["database_name"] == "dynamodb"
@patch("superset.daos.dataset.DatasetDAO.find_by_id")
@pytest.mark.asyncio
async def test_get_dataset_info_includes_columns_and_metrics(mock_info, mcp_server):

View File

@@ -0,0 +1,207 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Unit tests for LoggingMiddleware on_call_tool() and on_message() methods.
Tests verify that:
- on_call_tool() captures duration_ms and success status
- on_message() logs non-tool messages without duration
- _extract_context_info() extracts entity IDs from params
"""

from typing import Any
from unittest.mock import AsyncMock, MagicMock, patch
import pytest
from superset.mcp_service.middleware import LoggingMiddleware

def _make_context(
method: str = "tools/call",
name: str = "list_charts",
params: dict[str, Any] | None = None,
metadata: dict[str, Any] | None = None,
):
"""Create a mock MiddlewareContext."""
ctx = MagicMock()
ctx.method = method
message = MagicMock()
message.name = name
message.params = params or {}
ctx.message = message
ctx.metadata = metadata  # None when no metadata was supplied
ctx.session = None
return ctx
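
# Example usage of the helper above (illustrative, not part of the suite):
# build a fake tools/call context and confirm the fields the middleware reads.
ctx = _make_context(name="list_dashboards", params={"dashboard_id": 7})
assert ctx.method == "tools/call"
assert ctx.message.name == "list_dashboards"
assert ctx.message.params["dashboard_id"] == 7
assert ctx.metadata is None  # metadata defaults to None, not {}
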
class TestLoggingMiddlewareOnCallTool:
"""Tests for LoggingMiddleware.on_call_tool()."""
@patch("superset.mcp_service.middleware.event_logger")
@patch("superset.mcp_service.middleware.get_user_id", return_value=42)
@pytest.mark.asyncio
async def test_on_call_tool_logs_duration_and_success(
self, mock_get_user_id, mock_event_logger
):
"""on_call_tool records duration_ms and success=True on normal return."""
middleware = LoggingMiddleware()
ctx = _make_context(name="list_charts")
call_next = AsyncMock(return_value="tool_result")
result = await middleware.on_call_tool(ctx, call_next)
assert result == "tool_result"
call_next.assert_awaited_once_with(ctx)
# Verify event_logger.log was called with duration_ms and success
mock_event_logger.log.assert_called_once()
call_kwargs = mock_event_logger.log.call_args[1]
assert call_kwargs["action"] == "mcp_tool_call"
assert call_kwargs["user_id"] == 42
assert isinstance(call_kwargs["duration_ms"], int)
assert call_kwargs["duration_ms"] >= 0
assert call_kwargs["curated_payload"]["success"] is True
assert call_kwargs["curated_payload"]["tool"] == "list_charts"
@patch("superset.mcp_service.middleware.event_logger")
@patch("superset.mcp_service.middleware.get_user_id", return_value=42)
@pytest.mark.asyncio
async def test_on_call_tool_logs_failure_on_exception(
self, mock_get_user_id, mock_event_logger
):
"""on_call_tool records success=False when tool raises."""
middleware = LoggingMiddleware()
ctx = _make_context(name="execute_sql")
call_next = AsyncMock(side_effect=ValueError("boom"))
with pytest.raises(ValueError, match="boom"):
await middleware.on_call_tool(ctx, call_next)
# Verify event_logger.log was still called (in the finally block)
mock_event_logger.log.assert_called_once()
call_kwargs = mock_event_logger.log.call_args[1]
assert call_kwargs["curated_payload"]["success"] is False
assert call_kwargs["duration_ms"] >= 0
@patch("superset.mcp_service.middleware.event_logger")
@patch("superset.mcp_service.middleware.get_user_id", return_value=42)
@pytest.mark.asyncio
async def test_on_call_tool_extracts_entity_ids(
self, mock_get_user_id, mock_event_logger
):
"""on_call_tool extracts dashboard_id, chart_id, dataset_id from params."""
middleware = LoggingMiddleware()
ctx = _make_context(
name="get_chart_info",
params={
"dashboard_id": 10,
"chart_id": 20,
"dataset_id": 30,
},
)
call_next = AsyncMock(return_value="ok")
await middleware.on_call_tool(ctx, call_next)
call_kwargs = mock_event_logger.log.call_args[1]
assert call_kwargs["dashboard_id"] == 10
assert call_kwargs["slice_id"] == 20
assert call_kwargs["curated_payload"]["dataset_id"] == 30
class TestLoggingMiddlewareOnMessage:
"""Tests for LoggingMiddleware.on_message()."""
@patch("superset.mcp_service.middleware.event_logger")
@patch("superset.mcp_service.middleware.get_user_id", return_value=1)
@pytest.mark.asyncio
async def test_on_message_logs_without_duration(
self, mock_get_user_id, mock_event_logger
):
"""on_message logs with action=mcp_message and duration_ms=None."""
middleware = LoggingMiddleware()
ctx = _make_context(method="resources/read", name="instance/metadata")
call_next = AsyncMock(return_value="resource_data")
result = await middleware.on_message(ctx, call_next)
assert result == "resource_data"
call_next.assert_awaited_once_with(ctx)
mock_event_logger.log.assert_called_once()
call_kwargs = mock_event_logger.log.call_args[1]
assert call_kwargs["action"] == "mcp_message"
assert call_kwargs["duration_ms"] is None
# on_message should NOT have success field
assert "success" not in call_kwargs["curated_payload"]
class TestExtractContextInfo:
"""Tests for LoggingMiddleware._extract_context_info()."""
@patch("superset.mcp_service.middleware.get_user_id", return_value=99)
def test_extract_with_metadata_agent_id(self, mock_get_user_id):
"""Extracts agent_id from context.metadata."""
middleware = LoggingMiddleware()
ctx = _make_context(metadata={"agent_id": "agent-123"})
agent_id, user_id, dashboard_id, slice_id, dataset_id, params = (
middleware._extract_context_info(ctx)
)
assert agent_id == "agent-123"
assert user_id == 99

@patch(
"superset.mcp_service.middleware.get_user_id",
side_effect=RuntimeError("no Flask request context"),
)
def test_extract_handles_missing_user(self, mock_get_user_id):
"""Gracefully handles missing user context."""
middleware = LoggingMiddleware()
ctx = _make_context()
agent_id, user_id, dashboard_id, slice_id, dataset_id, params = (
middleware._extract_context_info(ctx)
)
assert user_id is None
@patch("superset.mcp_service.middleware.get_user_id", return_value=1)
def test_extract_slice_id_from_chart_id(self, mock_get_user_id):
"""Extracts slice_id from chart_id param (alias)."""
middleware = LoggingMiddleware()
ctx = _make_context(params={"chart_id": 55})
_, _, _, slice_id, _, _ = middleware._extract_context_info(ctx)
assert slice_id == 55
@patch("superset.mcp_service.middleware.get_user_id", return_value=1)
def test_extract_slice_id_from_slice_id(self, mock_get_user_id):
"""Extracts slice_id from slice_id param (fallback)."""
middleware = LoggingMiddleware()
ctx = _make_context(params={"slice_id": 66})
_, _, _, slice_id, _, _ = middleware._extract_context_info(ctx)
assert slice_id == 66
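
# A sketch of _extract_context_info() consistent with the assertions above
# (an assumption for illustration; the real method may differ in detail):
def _sketch_extract_context_info(ctx):
    params = getattr(ctx.message, "params", None) or {}
    metadata = getattr(ctx, "metadata", None) or {}
    agent_id = metadata.get("agent_id")
    try:
        user_id = get_user_id()  # patched in the tests above
    except Exception:  # e.g. RuntimeError: no Flask request context
        user_id = None
    dashboard_id = params.get("dashboard_id")
    slice_id = params.get("chart_id") or params.get("slice_id")  # chart_id wins
    dataset_id = params.get("dataset_id")
    return agent_id, user_id, dashboard_id, slice_id, dataset_id, params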