Compare commits


119 Commits

Author SHA1 Message Date
Maxime Beauchemin
6d7467bafd feat: add devcontainer profiles for MCP development
- Split devcontainer config into base + profile structure
  - default: Standard Superset development (port 9001)
  - with-mcp: Superset + MCP service (ports 9001, 5008)
- Add MCP service to docker-compose-light.yml as optional profile
- Update start-superset.sh to conditionally start MCP when ENABLE_MCP=true
- Add MCP case to docker-bootstrap.sh for starting MCP service
- Preserve original devcontainer.json as .old for migration reference

This allows developers to choose their development environment:
- Use default profile for standard Superset development
- Use with-mcp profile for MCP/LLM agent development
- MCP service runs on port 5008 when enabled via profile

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-30 18:59:27 -07:00
Amin Ghadersohi
71f294b7d3 docs: update MCP Chart Test Plan with accurate implementation details
- Add authentication setup and dataset discovery prerequisites
- Remove references to unsupported preview formats (base64, interactive, vega_lite)
- Fix schema examples: operator -> op, xlsx -> excel
- Add performance limits for preview formats
- Update error expectations for empty query results
- Enhance cache testing with cache_timeout parameter
- Add Important Schema Notes section for common gotchas
- Update response patterns to match actual implementation
2025-07-30 20:15:41 -04:00
Amin Ghadersohi
e839d0989a refactor: remove redundant docs folder from mcp_service 2025-07-30 19:41:05 -04:00
Amin Ghadersohi
714e21b3ec feat: enhance MCP documentation with Superset best practices
- Add Docusaurus admonitions (:::note, :::tip, :::warning) following Superset patterns
- Improve code block formatting and language tags
- Add important notes about authentication and installation
- Ensure consistency with Superset documentation style
- Maintain accuracy with FastMCP specifications
2025-07-30 19:26:08 -04:00
Amin Ghadersohi
bf78eb69ed fix: resolve Mermaid diagram syntax errors in development guide
Replace @ symbols in node names that cause Mermaid parsing errors
2025-07-30 19:21:45 -04:00
Amin Ghadersohi
a13c1ba8c2 fix: update broken README.md links in MCP documentation
Replace ./README.md references with proper Docusaurus links to intro page
2025-07-30 19:20:38 -04:00
Amin Ghadersohi
422c34a6ee feat: integrate MCP service documentation into Superset Docusaurus site
- Add comprehensive MCP service documentation to docs/docs/mcp-service/
- Create 7 new documentation files covering all aspects of MCP service
- Add MCP Service section to Docusaurus sidebar configuration
- Update README files to reference official documentation
- Remove outdated standalone architecture documentation
- Install missing docs dependencies to fix eslint-docs pre-commit hook

Documentation includes:
- Introduction and overview with getting started guide
- Complete API reference with all 16 tools and examples
- Authentication setup with JWT and identity provider integration
- Development guide with internal architecture and patterns
- System architecture overview with deployment considerations
- Preset integration guide for enterprise features
2025-07-30 19:17:18 -04:00
Amin Ghadersohi
cd213fc57d update: enhance MCP service README with comprehensive setup instructions
Improve README with official Superset installation instructions, detailed Claude Desktop
connection steps using FastMCP proxy, and troubleshooting guide. Adds verification steps
and clear instructions for HTTP-based MCP service deployment.
2025-07-30 18:24:54 -04:00
Amin Ghadersohi
0f20a88598 feat(mcp): major enhancements to chart generation with validation, pooled screenshots, and simplified tests
This commit introduces significant improvements to the MCP chart generation system:

## Core Features
- Enhanced chart generation with comprehensive validation and error handling
- Implemented WebDriver pooling for 10x faster screenshot generation
- Added detailed validation for chart configurations with helpful error messages
- Support for both table and XY (line/bar/area/scatter) chart types

## Validation & Error Handling
- Column existence validation with similarity-based suggestions
- Data type compatibility checks for aggregations (SUM/AVG for numbers only)
- Filter operator validation
- Detailed error responses with actionable suggestions and available columns/metrics

## Performance Improvements
- WebDriver connection pooling eliminates browser startup overhead
- Reuses browser instances across multiple screenshot requests
- Automatic health checking and recovery of WebDriver instances
- Configurable pool size and timeout settings

## Screenshot Enhancements
- New PooledChartScreenshot and PooledExploreScreenshot classes
- FastAPI endpoints for serving chart and explore screenshots as PNG images
- Proper caching with TTL support
- Clean chart-only screenshots for explore views (hides UI elements)

## Test Improvements
- Simplified test structure focusing on schema validation over complex mocking
- Removed brittle integration-style tests in favor of unit tests
- Fixed all test failures with cleaner, more maintainable approach
- Added comprehensive test plan for manual testing with Claude Desktop

## API Enhancements
- Centralized URL generation for screenshots and explore links
- Consistent error response format across all endpoints
- Support for UUID-based chart lookups alongside numeric IDs

## Code Quality
- Fixed all pre-commit issues (mypy, ruff, long lines, complexity)
- Extracted helper functions to reduce cyclomatic complexity
- Proper type annotations throughout
- Better separation of concerns

## Documentation
- Added comprehensive test plan for chart tools
- Detailed guides for table chart configuration
- WebDriver pooling architecture documentation
- Demo scripts for testing functionality

The changes maintain backward compatibility while significantly improving
performance, reliability, and developer experience for chart generation in the
MCP service.
2025-07-30 18:09:23 -04:00
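The WebDriver pooling idea described in that commit — pre-warmed, reusable browser instances with health checking — can be sketched generically. This is a hypothetical illustration of the pattern, not the MCP service's actual `PooledChartScreenshot` implementation; all names here are made up:

```python
import queue
from contextlib import contextmanager

class DriverPool:
    """Minimal reusable-instance pool (hypothetical sketch of the pooling idea)."""

    def __init__(self, factory, size=2, timeout=30):
        self._factory = factory          # callable that creates a new driver
        self._timeout = timeout          # configurable wait for a free instance
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):            # pre-warm to eliminate browser startup cost
            self._pool.put(factory())

    @contextmanager
    def acquire(self):
        driver = self._pool.get(timeout=self._timeout)
        try:
            if not self._is_healthy(driver):   # health check + recovery
                driver = self._factory()
            yield driver
        finally:
            self._pool.put(driver)             # return the instance for reuse

    @staticmethod
    def _is_healthy(driver):
        # Stand-in for a real liveness probe (e.g. pinging the browser session)
        return getattr(driver, "alive", True)

class FakeDriver:                        # dummy stand-in for a Selenium driver
    alive = True

pool = DriverPool(FakeDriver, size=2)
with pool.acquire() as d:
    assert isinstance(d, FakeDriver)     # instance comes from the warm pool
```

Reusing instances this way is what removes the per-screenshot browser startup overhead the commit credits for the speedup.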
Amin Ghadersohi
33d16eaca1 refactor: rename pydantic_schemas to schemas for MCP service
Simplified the directory structure by renaming `pydantic_schemas` to `schemas` throughout the MCP service codebase. This follows standard Python conventions and makes imports more concise.

Changes:
- Renamed directory: superset/mcp_service/pydantic_schemas/ → superset/mcp_service/schemas/
- Updated all import statements across 42 files
- Updated documentation references in README files
- All tests pass, pre-commit hooks clean
2025-07-30 14:20:42 -04:00
Amin Ghadersohi
5b56bc622b feat: major MCP service enhancements with cache control, testing infrastructure, and bug fixes
This comprehensive update adds significant functionality and improvements to the MCP service:

  New Features:
  - Cache control utilities for metadata operations with configurable TTL
  - Entity testing plan documentation for systematic API validation
  - Chart data retrieval tool with caching support
  - Comprehensive request schema validation framework
  - Sortable columns testing infrastructure

  Documentation Updates:
  - Expanded README with architecture details and usage examples
  - Phase 1 status tracking and implementation notes
  - Enhanced schema documentation with cache control patterns
  - LLMS.md reference added to CLAUDE.md

  Search Functionality Fixes:
  - Dashboard search: removed invalid columns (owners, roles, tags, published)
  - Dataset search: removed non-existent 'tags' and problematic 'catalog' columns
  - Chart search: maintained correct text-only columns (slice_name, description)
  - Updated corresponding unit tests to match implementations

  Infrastructure Improvements:
  - Added comprehensive cache control test coverage
  - Request schema validation testing
  - Sortable columns validation across all entity types
  - Enhanced error handling and logging

  Technical Debt:
  - Standardized import patterns across MCP tools
  - Improved type safety in Pydantic schemas
  - Better separation of concerns for cache operations
2025-07-30 14:20:42 -04:00
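The "cache control utilities for metadata operations with configurable TTL" mentioned above can be sketched as a tiny time-bounded cache. This is an illustrative stand-in, not the MCP service's implementation; the key format and defaults are assumptions:

```python
import time

class TTLCache:
    """Tiny TTL cache sketch (illustrative; not the service's actual utility)."""

    def __init__(self, default_ttl=60):
        self.default_ttl = default_ttl
        self._store = {}                 # key -> (expires_at, value)

    def set(self, key, value, ttl=None):
        ttl = self.default_ttl if ttl is None else ttl   # per-entry override
        self._store[key] = (time.monotonic() + ttl, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]         # lazy eviction on read
            return default
        return value

cache = TTLCache(default_ttl=300)
cache.set("dataset:42:metadata", {"columns": ["name", "count"]})
assert cache.get("dataset:42:metadata")["columns"] == ["name", "count"]
```

A per-call `ttl` override is what a `cache_timeout`-style parameter (as in the later test-plan commit) maps onto.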
Amin Ghadersohi
8cbfe027b5 update: add chart update tools and centralized URL configuration
- Add update_chart and update_chart_preview tools for modifying saved and cached charts
- Implement centralized URL configuration using SUPERSET_WEBSERVER_ADDRESS
- Add comprehensive MCP audit logging with impersonation tracking and payload sanitization
- Refactor duplicate chart analysis functions to chart_utils.py
- Update all README documentation with new tools and features
- Add 194+ unit tests covering URL utils, audit logging, and chart updates
- Fix all pre-commit issues and ensure test coverage
2025-07-30 14:20:42 -04:00
Amin Ghadersohi
c3b3edc6ba refactor: rename create_chart to generate_chart and standardize MCP test naming
- Rename create_chart function to generate_chart throughout MCP service
- Update CreateChartRequest to GenerateChartRequest schema
- Rename test files to follow test_*.py convention consistently
- Fix test class and method names (TestCreateChart → TestGenerateChart)
- Update all documentation and README files with new naming
- Standardize auth scope mappings and tool references
- Maintain backward compatibility for Superset CreateChartCommand references
2025-07-30 14:20:42 -04:00
Amin Ghadersohi
aa06bb9fda revert unnecessary config change 2025-07-30 14:20:41 -04:00
Amin Ghadersohi
0f222b9034 update: revert comment change 2025-07-30 14:20:41 -04:00
Amin Ghadersohi
7044153ca4 revert: switch webdriver back to Firefox for chart screenshots

Changed WEBDRIVER_TYPE from "chrome" to "firefox" and removed the Chrome-specific binary path from WEBDRIVER_CONFIGURATION. Firefox provides more reliable screenshot generation for the MCP service chart preview functionality.

- Set WEBDRIVER_TYPE = "firefox"
- Cleared binary_location in WEBDRIVER_CONFIGURATION to use the default Firefox
- Chart preview tools now use geckodriver instead of Chrome headless mode

This resolves webdriver timeout issues in chart screenshot generation.
Amin Ghadersohi
ce82c35bb6 docs: clarify MCP service capabilities and available operations
Updated all MCP service READMEs to accurately reflect available operations:

- Removed overuse of 'comprehensive' and 'complete' where it overstated capabilities
- Added clear 'Available Operations' section showing:
  - Read operations available for all entities
  - Create operations only for charts and dashboards
  - No update or delete operations available
- Updated tool descriptions to be more specific about capabilities
- Changed 'complete workflow automation' to 'dashboard creation and chart addition'
- Clarified that we have 5 chart types, not 'comprehensive' support
- Updated test count references to be consistent (185+ tests)

This ensures documentation accurately represents Phase 1 capabilities without
overstating what operations are available.
2025-07-30 14:20:41 -04:00
Amin Ghadersohi
9b25bd973f docs: update MCP service documentation to reflect Phase 1 completion
Updated all MCP service README files to reflect current status:

- README_ARCHITECTURE.md: Updated to 95% completion status, documented new
  dashboard generation and SQL Lab tools, corrected file references
- README_PHASE1_STATUS.md: Marked all core epics as complete, updated tool
  count to 16, reflected recent technical completions
- README_SCHEMAS.md: Updated test coverage numbers, documented new schema
  enhancements, marked Phase 1 features as complete

Changes reflect the successful completion of core MCP service functionality
including BaseDAO enhancements, comprehensive testing coverage, and all
planned Phase 1 tools and features.
2025-07-30 14:20:41 -04:00
Amin Ghadersohi
5f7502b85c feat: comprehensive MCP service enhancements with type safety and new tools
Core Improvements:
- Enhanced BaseDAO with type-safe UUID handling and flexible column support
- Added comprehensive test coverage (185+ passing unit tests)
- Eliminated hardcoded string checks with SQLAlchemy type inspection
- Removed unnecessary async patterns for better performance consistency

New MCP Tools & Features:
- Dashboard generation: create_dashboard and add_chart_to_existing_dashboard
- Chart data/preview: get_chart_data and get_chart_preview with multiple formats
- SQL Lab integration: open_sql_lab_with_context with proper parameter handling
- Enhanced chart creation with comprehensive type support

Technical Enhancements:
- Fixed SQL Lab parameter naming (dbid) for frontend compatibility
- Improved error handling with graceful UUID conversion fallbacks
- Enhanced multi-identifier support (ID, UUID, slug) across all tools
- Comprehensive request schema pattern eliminating LLM validation issues

Testing & Quality:
- Added BaseDAO unit tests with edge cases and error scenarios
- Enhanced MCP tool testing with proper mocking patterns
- Fixed trailing whitespace and pre-commit compliance
- Professional error handling and type validation

Documentation:
- Updated all MCP service READMEs with current status (95% complete)
- Documented all 16 MCP tools with examples and usage patterns
- Enhanced architecture documentation with recent improvements
- Updated Phase 1 status reflecting completion of core features

This commit consolidates Phase 1 MCP service development with production-ready
authentication, comprehensive testing, and complete tool functionality.
2025-07-30 14:20:41 -04:00
Amin Ghadersohi
1d5372210f update: read files 2025-07-30 14:20:41 -04:00
Amin Ghadersohi
64af53f6f6 update: enhance MCP service auth with factory pattern and code cleanup
- Implement configurable auth factory pattern as suggested by @dpgaspar
- Add unit test coverage
- Refactor auth.py for better pythonic patterns and error handling
- Add proper type annotations and logging throughout
- Split complex functions into focused helper functions
- Add detailed architecture documentation with Mermaid diagrams
- Improve JWT token extraction with robust fallback logic
2025-07-30 14:20:41 -04:00
Amin Ghadersohi
56b308340e update: use real Superset user for MCP authentication instead of MCPUser
Replace MCPUser wrapper with actual Flask-AppBuilder User from database to enable proper RBAC permission filtering. Fixes MCP service returning 0 counts due to empty permission queries.
2025-07-30 14:20:41 -04:00
Amin Ghadersohi
3b2b7609a8 update: add optional JWT Bearer authentication to MCP service
Implement comprehensive JWT authentication system for production deployments:

  • Add BearerAuthProvider integration with FastMCP for RS256 token validation
  • Support both static public keys and JWKS endpoints for enterprise key rotation
  • Implement scope-based authorization (dashboard:read, chart:write, etc.)
  • Extract user identity from JWT sub claim for proper audit trails
  • Maintain backward compatibility with admin fallback when auth disabled
  • Add comprehensive test coverage for authentication flows
  • Update documentation with human-friendly security setup guide

  Authentication is disabled by default for development convenience.
  Configure via environment variables for production use.
2025-07-30 14:20:40 -04:00
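The scope-based authorization this commit describes (`dashboard:read`, `chart:write`, etc.) could look roughly like the following. This sketch covers only the authorization step on an already-verified JWT payload — token signature validation is handled separately (per the commit, by FastMCP's BearerAuthProvider with RS256) — and the function name is hypothetical:

```python
class AuthorizationError(Exception):
    pass

def require_scope(claims: dict, required: str) -> str:
    """Check a decoded JWT payload for a required scope.

    `claims` is the payload AFTER signature verification; this sketch
    only illustrates scope-based authorization, not token validation.
    """
    # JWTs conventionally carry scopes as a space-delimited string
    granted = set(claims.get("scope", "").split())
    if required not in granted:
        raise AuthorizationError(f"missing scope: {required}")
    return claims["sub"]   # user identity from the sub claim, for audit trails

claims = {"sub": "amin", "scope": "dashboard:read chart:write"}
assert require_scope(claims, "dashboard:read") == "amin"
```

Returning the `sub` claim mirrors the commit's point that user identity is extracted from the token for audit trails.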
Amin Ghadersohi
61a91f80fe update deps 2025-07-30 14:20:40 -04:00
Amin Ghadersohi
ed3c5ecbc2 enhance MCP service with multi-identifier support and comprehensive UUID/slug functionality
- Add UUID field to ChartInfo and DatasetInfo Pydantic schemas for complete serialization
- Include UUID in chart and dataset serialization functions (serialize_chart_object, serialize_dataset_object)
- UUID and slug are now included in default response columns for better discoverability:
  * Dashboards: UUID and slug in DEFAULT_DASHBOARD_COLUMNS and returned by default
  * Charts: UUID in DEFAULT_CHART_COLUMNS and returned by default
  * Datasets: UUID in DEFAULT_DATASET_COLUMNS and returned by default
- Search functionality enhanced to include UUID/slug fields across all relevant tools
- Add comprehensive test coverage for UUID/slug functionality:
  * Default column verification tests ensuring UUID/slug are in default responses
  * Response data verification tests confirming UUID/slug values are returned
  * Custom column selection tests for explicit UUID/slug requests
  * Metadata accuracy tests verifying columns_requested/columns_loaded tracking
- Update documentation to reflect enhanced multi-identifier capabilities
- All 132 tests pass with comprehensive verification of UUID/slug support
2025-07-30 14:20:40 -04:00
Amin Ghadersohi
c8fbd4233c add multi-identifier support and fix columns metadata accuracy
**Multi-Identifier Support:**
- Enhance ModelGetInfoTool to support ID, UUID, and slug lookup
- Add intelligent identifier detection for get_*_info tools
- Dashboards: support ID, UUID, and slug via id_or_slug_filter
- Datasets/Charts: support ID and UUID via direct database queries
- Add GetDashboardInfoRequest, GetDatasetInfoRequest, GetChartInfoRequest schemas

**Enhanced Default Columns & Metadata:**
- Add uuid to default columns for all list tools (dashboards, datasets, charts)
- Include uuid/slug in search columns for better discoverability
- Fix columns_requested to accurately reflect user input vs defaults
- Fix columns_loaded to show actual DAO columns requested vs serialized fields

**Testing & Code Quality:**
- Add multi-identifier tests for all get_*_info tools (ID/UUID/slug scenarios)
- Remove unused serialized_columns variable in ModelListTool
- Fix linting issues (line length, docstrings)

This provides flexible identifier support while ensuring accurate metadata tracking for better LLM compatibility and debugging.
2025-07-30 14:20:40 -04:00
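The "intelligent identifier detection" mentioned above — telling a numeric ID from a UUID from a slug — can be sketched in a few lines. This is a hypothetical illustration of the idea; the tool's actual logic and function name may differ:

```python
import uuid

def classify_identifier(value):
    """Guess whether an identifier is a numeric ID, a UUID, or a slug.

    Hypothetical sketch; the actual get_*_info detection may differ.
    """
    if isinstance(value, int) or (isinstance(value, str) and value.isdigit()):
        return "id"
    try:
        uuid.UUID(str(value))            # raises ValueError if not a UUID
        return "uuid"
    except ValueError:
        return "slug"                    # dashboards accept slugs via id_or_slug_filter

assert classify_identifier(42) == "id"
assert classify_identifier("6d7467ba-0000-4000-8000-000000000000") == "uuid"
assert classify_identifier("sales-dashboard") == "slug"
```

Checking digits before attempting UUID parsing keeps plain integer IDs (even passed as strings) out of the UUID path.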
Amin Ghadersohi
fc85f68585 update: implement FastMCP complex inputs pattern for MCP tools
- Replace individual parameters with structured request schemas for list_datasets, list_charts, and list_dashboards
- Fix validation issues where LLMs passed arrays/objects as strings
- Add ListDatasetsRequest, ListChartsRequest, ListDashboardsRequest schemas
- Fix object serialization (charts, dashboards, datasets)
- Add validation to prevent search+filters conflicts
- Update all tests and fix linting issues

Resolves string/array validation ambiguity that caused tool failures when LLMs sent complex parameters incorrectly.
2025-07-30 14:20:40 -04:00
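The "complex inputs" pattern above — one structured Pydantic request object per tool instead of many loose parameters — might look roughly like this. Field names and defaults here are illustrative assumptions, not the exact MCP schema:

```python
from typing import Optional
from pydantic import BaseModel, Field, model_validator

class ListDashboardsRequest(BaseModel):
    """Hypothetical shape of a structured request schema (names are illustrative)."""
    search: Optional[str] = Field(default=None, description="Free-text search")
    filters: Optional[list[dict]] = Field(default=None, description="Column filters")
    limit: int = Field(default=20, ge=1, le=100)

    @model_validator(mode="after")
    def no_search_and_filters(self):
        # Mirrors the "prevent search+filters conflicts" validation above
        if self.search and self.filters:
            raise ValueError("use either search or filters, not both")
        return self

# A single structured argument removes the string/array ambiguity that
# broke tool calls when LLMs sent complex parameters as strings:
req = ListDashboardsRequest.model_validate({"search": "sales", "limit": 5})
assert req.limit == 5
```

Because the whole payload is parsed and coerced by one model, a stringified array fails loudly at validation time instead of silently mis-typing a parameter.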
Amin Ghadersohi
e825bbe1f4 update: generate_explore_link and create_chart working well 2025-07-30 14:20:40 -04:00
Amin Ghadersohi
a80b637a2a update: add a save_chart boolean parameter to create chart so that generate link to explore can use its code and just pass false 2025-07-30 14:20:40 -04:00
Amin Ghadersohi
541b3bd727 update: add docs to proxy 2025-07-30 14:20:40 -04:00
Amin Ghadersohi
a909799e5c update: simplify create chart a lot: support area, line, bar, scatter and table charts.
example prompt:
- can you use superset dataset 2 to plot popular baby names in 2024

- plot airline delay by day of week and group by airline use dataset 6
2025-07-30 14:20:39 -04:00
Amin Ghadersohi
01329f1c62 update: tests and deps 2025-07-30 14:20:39 -04:00
Amin Ghadersohi
67f621d360 update: remove create_chart_simple tool 2025-07-30 14:20:39 -04:00
Amin Ghadersohi
b552dbf4a1 update: fix all lint issues 2025-07-30 14:20:39 -04:00
Amin Ghadersohi
364af98c04 update: create_chart: Remove x_axis from groupby if present, update docs and tests
- Updates create_chart logic to automatically remove x_axis from groupby for ECharts timeseries charts, preventing duplicate dimension usage.
- Updates and expands unit test to verify x_axis is excluded from groupby, using improved test mocks for accurate backend simulation.
- Updates documentation (README.md, README_ARCHITECTURE.md, README_PHASE1_STATUS.md, README_SCHEMAS.md) to clarify create_chart tool behavior and schema, including new groupby/x_axis handling.
- No breaking changes to tool signatures; behavior is now more robust and LLM-friendly.
2025-07-30 14:20:39 -04:00
Amin Ghadersohi
afdb8b38a6 update: Add columns and metrics to DatasetInfo/list_datasets, improve test coverage
- Updated DatasetInfo schema to include columns and metrics fields, with new TableColumnInfo and SqlMetricInfo models.
- Updated serialize_dataset_object to serialize columns and metrics for each dataset.
- Modified list_datasets tool to use serialize_dataset_object and include columns/metrics by default.
- Improved and fixed all related unit tests to use proper MagicMock objects for columns/metrics and to parse JSON responses.
- Ensured LLM/OpenAPI compatibility for dataset listing and info tools.
2025-07-30 14:20:39 -04:00
Amin Ghadersohi
fc7ea804bc update: chart create working with union of types 2025-07-30 14:20:39 -04:00
Amin Ghadersohi
7c256ae9aa update: docs 2025-07-30 14:20:39 -04:00
Amin Ghadersohi
1b190abc3b update: add new ModelGetAvailableFiltersTool 2025-07-30 14:20:39 -04:00
Amin Ghadersohi
c6c71bf835 update: update tool registration to use decorators, and update all tests to use mcp client directly in the tests as recommended 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
cd5ead7f11 update: update tool registration 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
e5eebe28f9 update: refactor auth and model tools 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
9eac6ef433 update: add tests for mcp list and info tools 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
d523d523e5 update: refactor tools directories 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
91a3214ed4 update: wip 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
95b787f024 update: wip 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
39121791e8 update: wip 2025-07-30 14:20:38 -04:00
Amin Ghadersohi
9d40fe913f update: wip 2025-07-30 14:20:37 -04:00
Amin Ghadersohi
748ae49c8c update: wip 2025-07-30 14:20:37 -04:00
Amin Ghadersohi
a9d543b6f4 update: unify FastMCP server, modularize tools, and document new DAO-based architecture
- Replace dual Flask/FastAPI setup with a single, unified FastMCP server (`server.py`)
- Introduce `MCPDAOWrapper` for secure, context-aware DAO access (`dao_wrapper.py`)
- Refactor all MCP tools to be modular and domain-organized (`tools/dashboard/`, `tools/chart/`, `tools/dataset/`, `tools/system/`)
- Strongly type all tool contracts with Pydantic v2 models, including full field documentation for LLM/OpenAPI compatibility
- Refactor and extend `BaseDAO` for robust, generic CRUD/list operations
- Add and update documentation:
  - Architecture and flow diagrams (`README_ARCHITECTURE.md`)
  - Tool schema reference and usage instructions (`README.md`, `README_SCHEMAS.md`)
  - Phase 1 status and roadmap (`README_PHASE1_STATUS.md`)
- Implement and test all core list/info tools for dashboards, datasets, and charts, with full search and filter support
- Add chart creation tool (`create_chart_simple`)
- Provide extension points for Preset-specific auth, RBAC, and logging (stubbed in Phase 1)
- Prepare for LLM/agent workflows and future command-based mutations (create/update/delete)
- Expand and update unit/integration test coverage for all tools
2025-07-30 14:20:37 -04:00
Amin Ghadersohi
55d6130fc4 update: docs and deps 2025-07-30 14:20:37 -04:00
Amin Ghadersohi
b98e3eb309 Superset MCP Service - Adding get_instance_info tool 2025-07-30 14:20:37 -04:00
Amin Ghadersohi
b469077e0e update: add list function to BaseDAO and use it to get and filter the dashboards 2025-07-30 14:20:37 -04:00
Amin Ghadersohi
397b4e450b Superset MCP Service - Initial Commit DRAFT 2025-07-30 14:20:37 -04:00
Amin Ghadersohi
0f97002520 update: Refactor BaseDAO: enhance column operator logic and expand test coverage
- Improved the BaseDAO class to robustly handle column operator logic, ensuring all supported operators (eq, ne, sw, ew, in, nin, gt, gte, lt, lte, like, ilike, is_null, is_not_null) are consistently applied via ColumnOperatorEnum.
- Refactored the apply_column_operators and list methods for clarity and reliability, including better handling of columns, relationships, and search.
- Removed 1-indexed base page handling from the list method
2025-07-30 14:20:37 -04:00
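The operator set this commit names (eq, ne, sw, ew, in, gt, ...) can be sketched as an enum mapped to predicates. In the real BaseDAO these map onto SQLAlchemy column expressions; this stand-in applies them to plain Python values purely for illustration:

```python
import operator
from enum import Enum

class ColumnOperatorEnum(str, Enum):
    """Subset of the supported operators (illustrative sketch)."""
    EQ = "eq"
    NE = "ne"
    GT = "gt"
    GTE = "gte"
    LT = "lt"
    LTE = "lte"
    SW = "sw"    # starts with
    EW = "ew"    # ends with
    IN = "in"

_PREDICATES = {
    ColumnOperatorEnum.EQ: operator.eq,
    ColumnOperatorEnum.NE: operator.ne,
    ColumnOperatorEnum.GT: operator.gt,
    ColumnOperatorEnum.GTE: operator.ge,
    ColumnOperatorEnum.LT: operator.lt,
    ColumnOperatorEnum.LTE: operator.le,
    ColumnOperatorEnum.SW: lambda v, arg: str(v).startswith(arg),
    ColumnOperatorEnum.EW: lambda v, arg: str(v).endswith(arg),
    ColumnOperatorEnum.IN: lambda v, arg: v in arg,
}

def apply_operator(op, value, arg):
    """Evaluate one operator against a value (in-memory stand-in for SQL)."""
    return _PREDICATES[op](value, arg)

assert apply_operator(ColumnOperatorEnum.SW, "dashboard", "dash")
assert apply_operator(ColumnOperatorEnum.IN, 3, [1, 2, 3])
```

Centralizing the mapping in one enum is what lets apply_column_operators treat every filter uniformly.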
Amin Ghadersohi
2312250127 update: add idea of ColumnOperator 2025-07-30 14:20:36 -04:00
Amin Ghadersohi
cd52193869 update: add helper for applying base filter 2025-07-30 14:20:36 -04:00
Amin Ghadersohi
9ffe680aaa feat(dao): add robust generic list and count methods to BaseDAO with full test coverage
- Introduced generic `list` and `count` methods to BaseDAO for consistent querying and counting across all DAOs.
- Both methods support filtering (including IN queries), ordering, pagination, search across columns, custom FAB-style filters, and always-on base filters.
- Added comprehensive unit tests for `list` and `count` in `tests/unit_tests/dao/base_test.py`, covering:
  - Filtering (including boolean, None, and IN queries)
  - Ordering (asc/desc, multiple columns)
  - Pagination (including out-of-range)
  - Search across columns
  - Custom filter logic
  - Always-on base filter logic
  - Edge cases and skip_base_filter
- Moved common test fixtures to `conftest.py` for reuse.
2025-07-30 14:20:36 -04:00
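The generic `list` contract described above — filters (including IN), ordering, pagination, and search across columns — can be sketched in memory. The real BaseDAO builds the equivalent SQLAlchemy query; names and signature here are illustrative assumptions:

```python
def list_rows(rows, filters=None, order_by=None, desc=False,
              page=0, page_size=10, search=None, search_columns=()):
    """In-memory sketch of a generic list() with filters, ordering,
    pagination, and cross-column search (illustrative only)."""
    result = rows
    for col, value in (filters or {}).items():
        if isinstance(value, (list, set)):          # IN query
            result = [r for r in result if r.get(col) in value]
        else:                                        # equality filter
            result = [r for r in result if r.get(col) == value]
    if search:
        needle = search.lower()
        result = [r for r in result
                  if any(needle in str(r.get(c, "")).lower()
                         for c in search_columns)]
    if order_by:
        result = sorted(result, key=lambda r: r.get(order_by), reverse=desc)
    start = page * page_size                         # 0-indexed pages
    return result[start:start + page_size]

rows = [{"id": 1, "name": "Sales"}, {"id": 2, "name": "Ops"},
        {"id": 3, "name": "Sales EU"}]
assert [r["id"] for r in list_rows(rows, search="sales",
                                   search_columns=("name",))] == [1, 3]
```

Applying filters before search before ordering before pagination matches the order the commit's test list implies (out-of-range pages simply return an empty slice).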
dependabot[bot]
5c2eb0a68c build(deps): bump reselect from 4.1.7 to 5.1.1 in /superset-frontend (#30119)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: hainenber <dotronghai96@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: hainenber <dotronghai96@gmail.com>
2025-07-30 08:54:58 -07:00
dependabot[bot]
0cbf4d5d4d chore(deps): bump d3-scale from 3.3.0 to 4.0.2 in /superset-frontend/packages/superset-ui-core (#31534)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: hainenber <dotronghai96@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: hainenber <dotronghai96@gmail.com>
Co-authored-by: Đỗ Trọng Hải <41283691+hainenber@users.noreply.github.com>
2025-07-30 08:52:30 -07:00
Hari Kiran
6006a21378 docs(development): fix comment in the dockerfile (#34391) 2025-07-29 21:53:46 -07:00
Maxime Beauchemin
bf967d6ba4 fix(charts): Fix unquoted 'Others' literal in series limit GROUP BY clause (#34390)
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-29 17:36:10 -07:00
Hari Kiran
131ae5aa9d docs(development): fix typo in the dockerfile (#34387) 2025-07-29 14:24:18 -07:00
Cesc Bausà
eca28582b6 feat(i18n): update Spanish translations (messages.po) (#34206) 2025-07-29 13:49:40 -07:00
Maxime Beauchemin
14e90a0f52 feat: Add GitHub Codespaces support with docker-compose-light (#34376)
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-29 13:10:17 -07:00
Maxime Beauchemin
a1c39d4906 feat(charts): Enable async buildQuery support for complex chart logic (#34383)
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-29 13:08:55 -07:00
Maxime Beauchemin
0964a8bb7a fix(big number with trendline): running 2 identical queries for no good reason (#34296) 2025-07-29 13:07:28 -07:00
Beto Dealmeida
8de8f95a3c feat: allow creating dataset without exploring (#34380) 2025-07-29 15:43:47 -04:00
Maxime Beauchemin
16db999067 fix: rate limiting issues with example data hosted on github.com (#34381) 2025-07-29 11:19:29 -07:00
Beto Dealmeida
972be15dda feat: focus on text input when modal opens (#34379) 2025-07-29 14:01:10 -04:00
Maxime Beauchemin
c9e06714f8 fix: prevent theme initialization errors during fresh installs (#34339)
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-29 09:32:53 -07:00
Beto Dealmeida
32626ab707 fix: use catalog name on generated queries (#34360) 2025-07-29 12:30:46 -04:00
dependabot[bot]
a9cd58508b chore(deps): bump cookie and @types/cookie in /superset-websocket (#34335)
Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: hainenber <dotronghai96@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: hainenber <dotronghai96@gmail.com>
2025-07-29 20:19:31 +07:00
Beto Dealmeida
122bb68e5a fix: subquery alias in RLS (#34374) 2025-07-28 22:58:15 -04:00
Beto Dealmeida
914ce9aa4f feat: read column metadata (#34359) 2025-07-28 22:57:57 -04:00
Gabriel Torres Ruiz
bb572983cd feat(theming): Align embedded sdk with theme configs (#34273) 2025-07-28 19:26:17 -07:00
Đỗ Trọng Hải
ff76ab647f build(deps): update ag-grid to non-breaking major v34 (#34326) 2025-07-29 07:46:55 +07:00
Mehmet Salih Yavuz
f554848c9f fix(PivotTable): Render html in cells if allowRenderHtml is true (#34351) 2025-07-29 01:12:37 +03:00
Hari Kiran
dc0c389488 docs(development): fix 2 typos in the dockerfile (#34341) 2025-07-28 15:06:21 -07:00
Beto Dealmeida
22b3cc0480 chore: bump BigQuery dialect to 1.15.0 (#34371) 2025-07-28 16:39:18 -04:00
Maxime Beauchemin
604d72cc98 feat: introducing a docker-compose-light.yml for lighter development (#34324) 2025-07-28 09:27:07 -07:00
Enzo Martellucci
913e068113 style(FastVizSwitcher): Adjust padding for FastVizSwitcher selector (#34317) 2025-07-28 14:39:10 +03:00
Geido
1a4e2173f5 fix(NavBar): Add brand text back (#34318) 2025-07-28 12:19:14 +03:00
Ian McEwen
c49789167b style(chart): restyle table pagination (#34311) 2025-07-27 19:39:10 -07:00
Maxime Beauchemin
1be2287b3a feat(timeseries): enhance 'Series Limit' to support grouping the long tail (#34308) 2025-07-25 16:26:32 -07:00
Maxime Beauchemin
e741a3167f feat: add a theme CRUD page to manage themes (#34182)
Co-authored-by: Mehmet Salih Yavuz <salih.yavuz@proton.me>
2025-07-25 13:26:41 -07:00
Michael S. Molina
5f11f9097a fix: Charts list is displaying empty dataset names when there's no schema (#34315) 2025-07-25 14:07:50 -03:00
Jan Suleiman
8783579aa8 fix(cartodiagram): add missing locales for rendering echarts (#34268) 2025-07-25 09:59:28 -07:00
Evan Rusackas
c25b4221f8 fix(npm): more reliable execution of npm run update-maps (#34305) 2025-07-25 13:48:05 -03:00
Pius Iniobong
9c771fb2ba fix: preserve correct column order when table layout is changed with time comparison enabled (#34300) 2025-07-25 15:31:33 +03:00
sha174n
7f44992c4b fix: enhance disallowed SQL functions list for improved security (#33084) 2025-07-24 16:36:32 -07:00
Beto Dealmeida
8df5860826 chore: bump sqlglot to latest version (27.3.0) (#34302) 2025-07-24 15:38:29 -07:00
Beto Dealmeida
b794b192d1 fix: return 422 on invalid SQL (#34303) 2025-07-24 16:40:56 -04:00
Maxime Beauchemin
3177131d52 feat: re-order CRUD list view action buttons (#34294) 2025-07-24 12:46:34 -07:00
Enzo Martellucci
89bf77b5c9 fix(theming): Fix visual regressions from theming P7 (#34237) 2025-07-24 19:57:50 +02:00
Maxime Beauchemin
30e5684006 fix: address numerous long-standing console errors (python & web) (#34299) 2025-07-24 09:50:26 -07:00
Maxime Beauchemin
3f8472ca7b chore: move some rules from ruff -> pylint (#34292) 2025-07-24 09:40:49 -07:00
Beto Dealmeida
efa8cb6fa4 chore: improve sqlglot parsing (#34270) 2025-07-24 10:50:59 -04:00
Beto Dealmeida
ab59b7e9b0 feat: make SupersetClient retry on 502-504 (#34290)
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-24 10:46:50 -04:00
Vitor Avila
c99843b13a fix: Hide View in SQL Lab for users without access (#34293) 2025-07-24 10:45:31 -03:00
Fardin Mustaque
da55a6c94a fix(chart-download): ensure full table or handlebar chart is captured in image export (#34233) 2025-07-24 15:47:44 +03:00
LisaHusband
7a1c056374 fix(charting): correctly categorize numeric columns with NULL values (#34213) 2025-07-24 15:46:58 +03:00
Michael S. Molina
1e5a4e9bdc fix: Saved queries list break if one query can't be parsed (#34289) 2025-07-24 08:30:04 -03:00
Đỗ Trọng Hải
9b88527883 chore: remove supposedly dev dep html-webpack-plugin from lockfile (#34288) 2025-07-24 15:53:16 +07:00
dependabot[bot]
800c1639ec chore(deps-dev): bump prettier from 3.5.3 to 3.6.2 in /superset-frontend (#33997) 2025-07-24 09:38:00 +07:00
Ahmed Habeeb
43775e9373 fix(sqllab_export): manually encode CSV output to support utf-8-sig (#34235) 2025-07-23 18:44:56 -07:00
Maxime Beauchemin
9099b0f00d fix: fix the pre-commit hook for tsc (#34275)
Co-authored-by: Mehmet Salih Yavuz <salih.yavuz@proton.me>
2025-07-23 13:21:54 -07:00
dependabot[bot]
77ffe65773 chore(deps): bump axios from 1.10.0 to 1.11.0 in /docs (#34285)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-24 00:06:53 +07:00
Damian Pendrak
32f8f33a4f fix(deckgl): fix deck.gl color breakpoints Control (#34244) 2025-07-23 19:25:29 +03:00
Enzo Martellucci
710c277681 style(Button): Vertically align icons across all buttons (#34067) 2025-07-23 19:24:55 +03:00
Michael S. Molina
11324607d0 fix: Bulk select is not respecting the TAGGING_SYSTEM feature flag (#34282) 2025-07-23 11:33:06 -03:00
Mehmet Salih Yavuz
9c6271136d fix(theming): Visual regressions p2 (#34279) 2025-07-23 16:14:06 +02:00
Maxime Beauchemin
c444eed63e chore(docker): use editable mode in docker images (#34146) 2025-07-22 16:14:31 -07:00
Mehmet Salih Yavuz
1df5e59fdf fix(theming): Theming visual fixes (#34253) 2025-07-23 01:55:32 +03:00
Maxime Beauchemin
77f66e7434 fix: build issues on master with 'npm run dev' (#34272) 2025-07-22 15:55:18 -07:00
Maxime Beauchemin
2c81eb6c39 feat(docker): do not include chromium (headless browser) by default in Dockerfile (#34258) 2025-07-22 13:12:55 -07:00
Maxime Beauchemin
09c4afc894 feat: introduce comprehensive LLM context guides for AI-powered development (#34194)
Co-authored-by: Claude <noreply@anthropic.com>
2025-07-22 12:22:13 -07:00
JUST.in DO IT
229d92590a fix: Matching errorType on superset api error with SupersetError (#34261) 2025-07-22 11:51:42 -07:00
Kamil Gabryjelski
f4f516c64c fix: Missing ownState and isCached props in Chart.jsx (#34259) 2025-07-22 18:58:28 +02:00
452 changed files with 43151 additions and 7940 deletions


@@ -0,0 +1,125 @@
---
description: Apache Superset development standards and guidelines for Cursor IDE
globs: ["**/*.py", "**/*.ts", "**/*.tsx", "**/*.js", "**/*.jsx", "**/*.sql", "**/*.md"]
alwaysApply: true
---
# Apache Superset Development Standards for Cursor IDE
Apache Superset is a data visualization platform with Flask/Python backend and React/TypeScript frontend.
## ⚠️ CRITICAL: Ongoing Refactors (What NOT to Do)
**These migrations are actively happening - avoid deprecated patterns:**
### Frontend Modernization
- **NO `any` types** - Use proper TypeScript types
- **NO JavaScript files** - Convert to TypeScript (.ts/.tsx)
- **NO Enzyme** - Use React Testing Library/Jest (Enzyme fully removed)
- **Use @superset-ui/core** - Don't import Ant Design directly
### Testing Strategy Migration
- **Prefer unit tests** over integration tests
- **Prefer integration tests** over Cypress end-to-end tests
- **Cypress is last resort** - Actively moving away from Cypress
- **Use Jest + React Testing Library** for component testing
### Backend Type Safety
- **Add type hints** - All new Python code needs proper typing
- **MyPy compliance** - Run `pre-commit run mypy` to validate
- **SQLAlchemy typing** - Use proper model annotations
## Code Standards
### TypeScript Frontend
- **NO `any` types** - Use proper TypeScript
- **Functional components** with hooks
- **@superset-ui/core** for UI components (not direct antd)
- **Jest** for testing (NO Enzyme)
- **Redux** for global state, hooks for local
### Python Backend
- **Type hints required** for all new code
- **MyPy compliant** - run `pre-commit run mypy`
- **SQLAlchemy models** with proper typing
- **pytest** for testing
### Apache License Headers
- **New files require ASF license headers** - When creating new code files, include the standard Apache Software Foundation license header
- **LLM instruction files are excluded** - Files like LLMS.md, CLAUDE.md, etc. are in `.rat-excludes` to avoid header token overhead
## Key Directory Structure
```
superset/
├── superset/ # Python backend (Flask, SQLAlchemy)
│ ├── views/api/ # REST API endpoints
│ ├── models/ # Database models
│ └── connectors/ # Database connections
├── superset-frontend/src/ # React TypeScript frontend
│ ├── components/ # Reusable components
│ ├── explore/ # Chart builder
│ ├── dashboard/ # Dashboard interface
│ └── SqlLab/ # SQL editor
├── superset-frontend/packages/
│ └── superset-ui-core/ # UI component library (USE THIS)
├── tests/ # Python/integration tests
├── docs/ # Documentation (UPDATE FOR CHANGES)
└── UPDATING.md # Breaking changes log
```
## Architecture Patterns
### Dataset-Centric Approach
Charts built from enriched datasets containing:
- Dimension columns with labels/descriptions
- Predefined metrics as SQL expressions
- Self-service analytics within defined contexts
### Security & Features
- **RBAC**: Role-based access via Flask-AppBuilder
- **Feature flags**: Control feature rollouts
- **Row-level security**: SQL-based data access control
## Test Utilities
### Python Test Helpers
- **`SupersetTestCase`** - Base class in `tests/integration_tests/base_tests.py`
- **`@with_config`** - Config mocking decorator
- **`@with_feature_flags`** - Feature flag testing
- **`login_as()`, `login_as_admin()`** - Authentication helpers
- **`create_dashboard()`, `create_slice()`** - Data setup utilities
### TypeScript Test Helpers
- **`superset-frontend/spec/helpers/testing-library.tsx`** - Custom render() with providers
- **`createWrapper()`** - Redux/Router/Theme wrapper
- **`selectOption()`** - Select component helper
- **React Testing Library** - NO Enzyme (removed)
## Pre-commit Validation
**Use pre-commit hooks for quality validation:**
```bash
# Install hooks
pre-commit install
# Quick validation (faster than --all-files)
pre-commit run # Staged files only
pre-commit run mypy # Python type checking
pre-commit run prettier # Code formatting
pre-commit run eslint # Frontend linting
```
## Development Guidelines
- **Documentation**: Update docs/ for any user-facing changes
- **Breaking Changes**: Add to UPDATING.md
- **Docstrings**: Required for new functions/classes
- **Follow existing patterns**: Mimic code style, use existing libraries and utilities
- **Type Safety**: This codebase is actively modernizing toward full TypeScript and type safety
- **Always run `pre-commit run`** to validate changes before committing
---
**Note**: This codebase is actively modernizing toward full TypeScript and type safety. Always run `pre-commit run` to validate changes. Follow the ongoing refactors section to avoid deprecated patterns.

5
.devcontainer/README.md Normal file

@@ -0,0 +1,5 @@
# Superset Development with GitHub Codespaces
For complete documentation on using GitHub Codespaces with Apache Superset, please see:
**[Setting up a Development Environment - GitHub Codespaces](https://superset.apache.org/docs/contributing/development#github-codespaces-cloud-development)**


@@ -0,0 +1,19 @@
{
// Extend the base configuration
"extends": "../devcontainer-base.json",
"name": "Apache Superset Development (Default)",
// Forward ports for development
"forwardPorts": [9001],
"portsAttributes": {
"9001": {
"label": "Superset (via Webpack Dev Server)",
"onAutoForward": "notify",
"visibility": "public"
}
},
// Auto-start Superset on Codespace resume
"postStartCommand": ".devcontainer/start-superset.sh"
}


@@ -0,0 +1,39 @@
{
"name": "Apache Superset Development",
// Keep this in sync with the base image in Dockerfile (ARG PY_VER)
// Using the same base as Dockerfile, but non-slim for dev tools
"image": "python:3.11.13-bookworm",
"features": {
"ghcr.io/devcontainers/features/docker-in-docker:2": {
"moby": true,
"dockerDashComposeVersion": "v2"
},
"ghcr.io/devcontainers/features/node:1": {
"version": "20"
},
"ghcr.io/devcontainers/features/git:1": {},
"ghcr.io/devcontainers/features/common-utils:2": {
"configureZshAsDefaultShell": true
},
"ghcr.io/devcontainers/features/sshd:1": {
"version": "latest"
}
},
// Run commands after container is created
"postCreateCommand": "chmod +x .devcontainer/setup-dev.sh && .devcontainer/setup-dev.sh",
// VS Code customizations
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance",
"charliermarsh.ruff",
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode"
]
}
}
}


@@ -0,0 +1,52 @@
{
"name": "Apache Superset Development",
// Keep this in sync with the base image in Dockerfile (ARG PY_VER)
// Using the same base as Dockerfile, but non-slim for dev tools
"image": "python:3.11.13-bookworm",
"features": {
"ghcr.io/devcontainers/features/docker-in-docker:2": {
"moby": true,
"dockerDashComposeVersion": "v2"
},
"ghcr.io/devcontainers/features/node:1": {
"version": "20"
},
"ghcr.io/devcontainers/features/git:1": {},
"ghcr.io/devcontainers/features/common-utils:2": {
"configureZshAsDefaultShell": true
},
"ghcr.io/devcontainers/features/sshd:1": {
"version": "latest"
}
},
// Forward ports for development
"forwardPorts": [9001],
"portsAttributes": {
"9001": {
"label": "Superset (via Webpack Dev Server)",
"onAutoForward": "notify",
"visibility": "public"
}
},
// Run commands after container is created
"postCreateCommand": "chmod +x .devcontainer/setup-dev.sh && .devcontainer/setup-dev.sh",
// Auto-start Superset on Codespace resume
"postStartCommand": ".devcontainer/start-superset.sh",
// VS Code customizations
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance",
"charliermarsh.ruff",
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode"
]
}
}
}

32
.devcontainer/setup-dev.sh Executable file

@@ -0,0 +1,32 @@
#!/bin/bash
# Setup script for Superset Codespaces development environment
echo "🔧 Setting up Superset development environment..."
# The universal image has most tools, just need Superset-specific libs
echo "📦 Installing Superset-specific dependencies..."
sudo apt-get update
sudo apt-get install -y \
libsasl2-dev \
libldap2-dev \
libpq-dev \
tmux \
gh
# Install uv for fast Python package management
echo "📦 Installing uv..."
curl -LsSf https://astral.sh/uv/install.sh | sh
# Add cargo/bin to PATH for uv
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.bashrc
echo 'export PATH="$HOME/.cargo/bin:$PATH"' >> ~/.zshrc
# Install Claude Code CLI via npm
echo "🤖 Installing Claude Code..."
npm install -g @anthropic-ai/claude-code
# Make the start script executable
chmod +x .devcontainer/start-superset.sh
echo "✅ Development environment setup complete!"
echo "🚀 Run '.devcontainer/start-superset.sh' to start Superset"

69
.devcontainer/start-superset.sh Executable file

@@ -0,0 +1,69 @@
#!/bin/bash
# Startup script for Superset in Codespaces
echo "🚀 Starting Superset in Codespaces..."
echo "🌐 Frontend will be available at port 9001"
# Check if MCP is enabled
if [ "$ENABLE_MCP" = "true" ]; then
echo "🤖 MCP Service will be available at port 5008"
fi
# Find the workspace directory (Codespaces clones as 'superset', not 'superset-2')
WORKSPACE_DIR=$(find /workspaces -maxdepth 1 -name "superset*" -type d | head -1)
if [ -n "$WORKSPACE_DIR" ]; then
cd "$WORKSPACE_DIR"
echo "📁 Working in: $WORKSPACE_DIR"
else
echo "📁 Using current directory: $(pwd)"
fi
# Check if docker is running
if ! docker info > /dev/null 2>&1; then
echo "⏳ Waiting for Docker to start..."
sleep 5
fi
# Clean up any existing containers
echo "🧹 Cleaning up existing containers..."
docker-compose -f docker-compose-light.yml --profile mcp down
# Start services
echo "🏗️ Building and starting services..."
echo ""
echo "📝 Once started, login with:"
echo " Username: admin"
echo " Password: admin"
echo ""
echo "📋 Running in foreground with live logs (Ctrl+C to stop)..."
# Run docker-compose and capture exit code
if [ "$ENABLE_MCP" = "true" ]; then
echo "🤖 Starting with MCP Service enabled..."
docker-compose -f docker-compose-light.yml --profile mcp up
else
docker-compose -f docker-compose-light.yml up
fi
EXIT_CODE=$?
# If it failed, provide helpful instructions
if [ $EXIT_CODE -ne 0 ] && [ $EXIT_CODE -ne 130 ]; then # 130 is Ctrl+C
echo ""
echo "❌ Superset startup failed (exit code: $EXIT_CODE)"
echo ""
echo "🔄 To restart Superset, run:"
echo " .devcontainer/start-superset.sh"
echo ""
echo "🔧 For troubleshooting:"
echo " # View logs:"
echo " docker-compose -f docker-compose-light.yml logs"
echo ""
echo " # Clean restart (removes volumes):"
echo " docker-compose -f docker-compose-light.yml down -v"
echo " .devcontainer/start-superset.sh"
echo ""
echo " # Common issues:"
echo " - Network timeouts: Just retry, often transient"
echo " - Port conflicts: Check 'docker ps'"
echo " - Database issues: Try clean restart with -v"
fi


@@ -0,0 +1,29 @@
{
// Extend the base configuration
"extends": "../devcontainer-base.json",
"name": "Apache Superset Development with MCP",
// Forward ports for development
"forwardPorts": [9001, 5008],
"portsAttributes": {
"9001": {
"label": "Superset (via Webpack Dev Server)",
"onAutoForward": "notify",
"visibility": "public"
},
"5008": {
"label": "MCP Service (Model Context Protocol)",
"onAutoForward": "notify",
"visibility": "private"
}
},
// Auto-start Superset with MCP on Codespace resume
"postStartCommand": "ENABLE_MCP=true .devcontainer/start-superset.sh",
// Environment variables
"containerEnv": {
"ENABLE_MCP": "true"
}
}

1
.github/copilot-instructions.md vendored Symbolic link

@@ -0,0 +1 @@
../LLMS.md


@@ -51,7 +51,7 @@ jobs:
SUPERSET_TESTENV: true
SUPERSET_SECRET_KEY: not-a-secret
run: |
pytest --durations-min=0.5 --cov-report= --cov=superset/sql/ ./tests/unit_tests/sql/ --cache-clear --cov-fail-under=100
pytest --durations-min=0.5 --cov=superset/sql/ ./tests/unit_tests/sql/ --cache-clear --cov-fail-under=100
- name: Upload code coverage
uses: codecov/codecov-action@v5
with:

4
.gitignore vendored

@@ -127,5 +127,7 @@ docker/*local*
# Jest test report
test-report.html
superset/static/stats/statistics.html
.aider*
# LLM-related
CLAUDE.local.md
.aider*


@@ -52,14 +52,6 @@ repos:
- id: trailing-whitespace
exclude: ^.*\.(snap)
args: ["--markdown-linebreak-ext=md"]
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v4.0.0-alpha.8 # Use the sha or tag you want to point at
hooks:
- id: prettier
additional_dependencies:
- prettier@3.5.3
args: ["--ignore-path=./superset-frontend/.prettierignore", "--exclude", "site-packages"]
files: "superset-frontend"
- repo: local
hooks:
- id: eslint-frontend
@@ -76,7 +68,7 @@ repos:
files: ^docs/.*\.(js|jsx|ts|tsx)$
- id: type-checking-frontend
name: Type-Checking (Frontend)
entry: bash -c './scripts/check-type.js package=superset-frontend excludeDeclarationDir=cypress-base'
entry: ./scripts/check-type.js package=superset-frontend excludeDeclarationDir=cypress-base
language: system
files: ^superset-frontend\/.*\.(js|jsx|ts|tsx)$
exclude: ^superset-frontend/cypress-base\/
@@ -97,9 +89,9 @@ repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.9.7
hooks:
- id: ruff-format
- id: ruff
args: [--fix]
- id: ruff-format
- repo: local
hooks:
- id: pylint
@@ -113,9 +105,10 @@ repos:
- |
TARGET_BRANCH=${GITHUB_BASE_REF:-master}
git fetch origin "$TARGET_BRANCH"
files=$(git diff --name-only --diff-filter=ACM origin/"$TARGET_BRANCH"..HEAD | grep '^superset/.*\.py$' || true)
BASE=$(git merge-base origin/"$TARGET_BRANCH" HEAD)
files=$(git diff --name-only --diff-filter=ACM "$BASE"..HEAD | grep '^superset/.*\.py$' || true)
if [ -n "$files" ]; then
pylint --rcfile=.pylintrc --load-plugins=superset.extensions.pylint $files
pylint --rcfile=.pylintrc --load-plugins=superset.extensions.pylint --reports=no $files
else
echo "No Python files to lint."
fi


@@ -76,3 +76,11 @@ ydb.svg
erd.puml
erd.svg
intro_header.txt
# for LLMs
llm-context.md
LLMS.md
CLAUDE.md
CURSOR.md
GEMINI.md
GPT.md

215
CHART_METADATA_API.md Normal file

@@ -0,0 +1,215 @@
# Chart Metadata API Reference
The Superset MCP service provides rich metadata alongside chart generation to enable better UI integration and user experiences.
## Background & Design Philosophy
Modern chart systems need to provide more than just visual output. Inspired by contemporary web standards and LLM integration patterns, this metadata system addresses several key needs:
**Accessibility-First Design**: Following WCAG guidelines and `aria-*` attribute patterns, charts include semantic descriptions and accessibility metadata to ensure inclusive experiences.
**Rich Context for AI Systems**: Similar to how platforms like social media generate rich previews (OpenGraph, Twitter Cards), charts provide semantic understanding beyond just visual representation - enabling AI agents to reason about and describe visualizations meaningfully.
**Performance-Aware Integration**: Modern web APIs emphasize performance transparency (Core Web Vitals, etc.). Charts include execution metrics and optimization suggestions to help UIs make informed decisions about rendering and user feedback.
**Capability-Driven UX**: Rather than requiring UIs to hardcode chart type behaviors, the system exposes what each chart can actually do - enabling dynamic, contextual interfaces that adapt to chart capabilities.
## Overview
When generating charts via `generate_chart`, the response includes structured metadata that helps UIs:
- Present appropriate controls and interactions
- Generate accessible descriptions
- Optimize rendering performance
- Guide user workflows
## Metadata Types
### ChartCapabilities
Describes what interactions and features the chart supports.
```python
{
"supports_interaction": bool, # User can interact (zoom, pan, hover)
"supports_real_time": bool, # Chart can update with live data
"supports_drill_down": bool, # Can navigate to more detailed views
"supports_export": bool, # Can be exported to other formats
"optimal_formats": [ # Recommended preview formats
"url", # Static image URL
"interactive", # HTML with JavaScript controls
"ascii", # Text-based representation
"vega_lite" # Vega-Lite specification
],
"data_types": [ # Types of data visualized
"time_series", # Time-based data
"categorical", # Discrete categories
"metric" # Numeric measurements
]
}
```
**UI Integration:**
- Show/hide interaction controls based on `supports_interaction`
- Enable real-time updates if `supports_real_time`
- Display drill-down options for `supports_drill_down`
- Choose optimal preview format from `optimal_formats`
### ChartSemantics
Provides semantic understanding of what the chart represents and reveals.
```python
{
"primary_insight": "Shows trends and changes over time",
"data_story": "This line chart analyzes sales, revenue over Q1-Q4",
"recommended_actions": [
"Review data patterns and trends",
"Consider filtering for more detail",
"Export chart for reporting"
],
"anomalies": [], # Notable outliers (future enhancement)
"statistical_summary": {} # Key statistics (future enhancement)
}
```
**UI Integration:**
- Display `primary_insight` as chart description
- Use `data_story` for accessibility and tooltips
- Show `recommended_actions` as suggested next steps
- Highlight `anomalies` in the visualization
### AccessibilityMetadata
Information for creating inclusive, accessible chart experiences.
```python
{
"color_blind_safe": bool, # Uses colorblind-friendly palette
"alt_text": "Chart showing Sales Data over time",
"high_contrast_available": bool # High contrast version available
}
```
**UI Integration:**
- Use `alt_text` for screen readers
- Show accessibility indicators if `color_blind_safe`
- Offer high contrast mode if available
### PerformanceMetadata
Performance information for optimization and user feedback.
```python
{
"query_duration_ms": 1250, # Time to generate chart data
"cache_status": "hit|miss|error", # Whether data came from cache
"optimization_suggestions": [ # Performance improvement tips
"Consider adding date filters to reduce data volume",
"Chart complexity may impact load time"
]
}
```
**UI Integration:**
- Show loading indicators based on `query_duration_ms`
- Display cache status for debugging
- Present `optimization_suggestions` to users
- Warn about slow queries
## Example Response
```json
{
"chart": {
"id": 123,
"slice_name": "Sales Trends Q1-Q4",
"viz_type": "echarts_timeseries_line",
"url": "/explore/?slice_id=123"
},
"capabilities": {
"supports_interaction": true,
"supports_real_time": false,
"supports_drill_down": false,
"supports_export": true,
"optimal_formats": ["url", "interactive", "ascii"],
"data_types": ["time_series", "metric"]
},
"semantics": {
"primary_insight": "Shows trends and changes over time",
"data_story": "This line chart analyzes sales over Q1-Q4",
"recommended_actions": [
"Review seasonal patterns",
"Export for quarterly report"
]
},
"accessibility": {
"color_blind_safe": true,
"alt_text": "Line chart showing sales trends from Q1 to Q4",
"high_contrast_available": false
},
"performance": {
"query_duration_ms": 450,
"cache_status": "miss",
"optimization_suggestions": []
}
}
```
## Usage Examples
### React Component Integration
```jsx
function ChartComponent({ chartData }) {
const { capabilities, semantics, accessibility, performance } = chartData;
return (
<div>
{/* Accessibility */}
<img
src={chartData.chart.url}
alt={accessibility.alt_text}
aria-describedby="chart-description"
/>
{/* Semantic description */}
<p id="chart-description">{semantics.primary_insight}</p>
{/* Conditional controls based on capabilities */}
{capabilities.supports_interaction && (
<InteractiveControls />
)}
{capabilities.supports_export && (
<ExportButton />
)}
{/* Performance feedback */}
{performance.query_duration_ms > 2000 && (
<SlowQueryWarning suggestions={performance.optimization_suggestions} />
)}
{/* Recommended actions */}
<ActionSuggestions actions={semantics.recommended_actions} />
</div>
);
}
```
## Chart Type Mapping
Different chart types provide different capabilities:
| Chart Type | Interaction | Real-time | Drill-down | Optimal Formats |
|------------|------------|-----------|------------|-----------------|
| `echarts_timeseries_line` | ✅ | ✅ | ❌ | url, interactive, ascii |
| `echarts_timeseries_bar` | ✅ | ✅ | ❌ | url, interactive, ascii |
| `table` | ❌ | ❌ | ✅ | url, table, ascii |
| `pie` | ✅ | ❌ | ❌ | url, interactive |
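One way a client might combine `optimal_formats` with this mapping is to pick the richest preview format the chart actually supports. The helper below is illustrative only (the preference order is an assumption, not part of the MCP API):

```python
def pick_preview_format(
    capabilities: dict,
    preferred: tuple = ("interactive", "url", "ascii"),
) -> str:
    """Choose the first preferred preview format the chart supports."""
    optimal = capabilities.get("optimal_formats", [])
    for fmt in preferred:
        if fmt in optimal:
            return fmt
    # Fall back to whatever the chart offers, defaulting to a static URL
    return optimal[0] if optimal else "url"


line_chart = {"optimal_formats": ["url", "interactive", "ascii"]}
table_chart = {"optimal_formats": ["url", "table", "ascii"]}
```

Here `pick_preview_format(line_chart)` resolves to `"interactive"`, while `pick_preview_format(table_chart)` falls through to `"url"`, so the UI never hardcodes per-chart-type behavior.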
## Future Enhancements
- **Statistical Summary**: Automatic calculation of mean, median, trends
- **Anomaly Detection**: Identification of outliers and unusual patterns
- **Smart Recommendations**: ML-powered suggestions for chart improvements
- **Accessibility Scoring**: Automated accessibility compliance checking

1
CLAUDE.md Symbolic link

@@ -0,0 +1 @@
LLMS.md


@@ -59,7 +59,7 @@ RUN mkdir -p /app/superset/static/assets \
# NOTE: we mount packages and plugins as they are referenced in package.json as workspaces
# ideally we'd COPY only their package.json. Here npm ci will be cached as long
# as the full content of these folders don't change, yielding a decent cache reuse rate.
# Note that's it's not possible selectively COPY of mount using blobs.
# Note that it's not possible to selectively COPY or mount using blobs.
RUN --mount=type=bind,source=./superset-frontend/package.json,target=./package.json \
--mount=type=bind,source=./superset-frontend/package-lock.json,target=./package-lock.json \
--mount=type=cache,target=/root/.cache \
@@ -74,7 +74,7 @@ RUN --mount=type=bind,source=./superset-frontend/package.json,target=./package.j
COPY superset-frontend /app/superset-frontend
######################################################################
# superset-node used for compile frontend assets
# superset-node is used for compiling frontend assets
######################################################################
FROM superset-node-ci AS superset-node
@@ -90,7 +90,7 @@ RUN --mount=type=cache,target=/root/.npm \
# Copy translation files
COPY superset/translations /app/superset/translations
# Build the frontend if not in dev mode
# Build translations if enabled, then cleanup localization files
RUN if [ "$BUILD_TRANSLATIONS" = "true" ]; then \
npm run build-translation; \
fi; \
@@ -167,7 +167,7 @@ RUN mkdir -p \
&& touch superset/static/version_info.json
# Install Playwright and optionally setup headless browsers
ARG INCLUDE_CHROMIUM="true"
ARG INCLUDE_CHROMIUM="false"
ARG INCLUDE_FIREFOX="false"
RUN --mount=type=cache,target=${SUPERSET_HOME}/.cache/uv \
if [ "$INCLUDE_CHROMIUM" = "true" ] || [ "$INCLUDE_FIREFOX" = "true" ]; then \
@@ -223,7 +223,7 @@ RUN --mount=type=cache,target=${SUPERSET_HOME}/.cache/uv \
/app/docker/pip-install.sh --requires-build-essential -r requirements/base.txt
# Install the superset package
RUN --mount=type=cache,target=${SUPERSET_HOME}/.cache/uv \
uv pip install .
uv pip install -e .
RUN python -m compileall /app/superset
USER superset
@@ -246,7 +246,7 @@ RUN --mount=type=cache,target=${SUPERSET_HOME}/.cache/uv \
/app/docker/pip-install.sh --requires-build-essential -r requirements/development.txt
# Install the superset package
RUN --mount=type=cache,target=${SUPERSET_HOME}/.cache/uv \
uv pip install .
uv pip install -e .
RUN uv pip install .[postgres]
RUN python -m compileall /app/superset

1
GEMINI.md Symbolic link

@@ -0,0 +1 @@
LLMS.md

1
GPT.md Symbolic link

@@ -0,0 +1 @@
LLMS.md

192
LLMS.md Normal file

@@ -0,0 +1,192 @@
# LLM Context Guide for Apache Superset
Apache Superset is a data visualization platform with Flask/Python backend and React/TypeScript frontend.
## ⚠️ CRITICAL: Ongoing Refactors (What NOT to Do)
**These migrations are actively happening - avoid deprecated patterns:**
### Frontend Modernization
- **NO `any` types** - Use proper TypeScript types
- **NO JavaScript files** - Convert to TypeScript (.ts/.tsx)
- **Use @superset-ui/core** - Don't import Ant Design directly
### Testing Strategy Migration
- **Prefer unit tests** over integration tests
- **Prefer integration tests** over Cypress end-to-end tests
- **Cypress is last resort** - Actively moving away from Cypress
- **Use Jest + React Testing Library** for component testing
### Backend Type Safety
- **Add type hints** - All new Python code needs proper typing
- **MyPy compliance** - Run `pre-commit run mypy` to validate
- **SQLAlchemy typing** - Use proper model annotations
### UUID Migration
- **Prefer UUIDs over auto-incrementing IDs** - New models should use UUID primary keys
- **External API exposure** - Use UUIDs in public APIs instead of internal integer IDs
- **Existing models** - Add UUID fields alongside integer IDs for gradual migration
## Key Directories
```
superset/
├── superset/ # Python backend (Flask, SQLAlchemy)
│ ├── views/api/ # REST API endpoints
│ ├── models/ # Database models
│ └── connectors/ # Database connections
├── superset-frontend/src/ # React TypeScript frontend
│ ├── components/ # Reusable components
│ ├── explore/ # Chart builder
│ ├── dashboard/ # Dashboard interface
│ └── SqlLab/ # SQL editor
├── superset-frontend/packages/
│ └── superset-ui-core/ # UI component library (USE THIS)
├── tests/ # Python/integration tests
├── docs/ # Documentation (UPDATE FOR CHANGES)
└── UPDATING.md # Breaking changes log
```
## Code Standards
### TypeScript Frontend
- **Avoid `any` types** - Use proper TypeScript, reuse existing types
- **Functional components** with hooks
- **@superset-ui/core** for UI components (not direct antd)
- **Jest** for testing (NO Enzyme)
- **Redux** for global state where it exists, hooks for local
### Python Backend
- **Type hints required** for all new code
- **MyPy compliant** - run `pre-commit run mypy`
- **SQLAlchemy models** with proper typing
- **pytest** for testing
### Apache License Headers
- **New files require ASF license headers** - When creating new code files, include the standard Apache Software Foundation license header
- **LLM instruction files are excluded** - Files like LLMS.md, CLAUDE.md, etc. are in `.rat-excludes` to avoid header token overhead
## Documentation Requirements
- **docs/**: Update for any user-facing changes
- **UPDATING.md**: Add breaking changes here
- **Docstrings**: Required for new functions/classes
## Architecture Patterns
### Security & Features
- **RBAC**: Role-based access via Flask-AppBuilder
- **Feature flags**: Control feature rollouts
- **Row-level security**: SQL-based data access control
## Test Utilities
### Python Test Helpers
- **`SupersetTestCase`** - Base class in `tests/integration_tests/base_tests.py`
- **`@with_config`** - Config mocking decorator
- **`@with_feature_flags`** - Feature flag testing
- **`login_as()`, `login_as_admin()`** - Authentication helpers
- **`create_dashboard()`, `create_slice()`** - Data setup utilities
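The mechanism behind a decorator like `@with_feature_flags` can be sketched with the standard library alone (this is an illustrative stand-in, not Superset's actual implementation, which patches the app config):

```python
from functools import wraps
from unittest import mock

# Stand-in for the feature-flag store the real decorator would patch
FEATURE_FLAGS = {"TAGGING_SYSTEM": False}


def with_feature_flags(**flags):
    """Rough sketch of how a feature-flag test decorator can work."""

    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            # Flags are overridden only for the duration of the call
            with mock.patch.dict(FEATURE_FLAGS, flags):
                return fn(*args, **kwargs)

        return wrapper

    return decorator


@with_feature_flags(TAGGING_SYSTEM=True)
def tagging_enabled() -> bool:
    return FEATURE_FLAGS["TAGGING_SYSTEM"]
```

Because `mock.patch.dict` restores the original values on exit, each test sees its declared flags without leaking state into other tests.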
### TypeScript Test Helpers
- **`superset-frontend/spec/helpers/testing-library.tsx`** - Custom render() with providers
- **`createWrapper()`** - Redux/Router/Theme wrapper
- **`selectOption()`** - Select component helper
- **React Testing Library** - NO Enzyme (removed)
### Test Database Patterns
- **Mock patterns**: Use `MagicMock()` for config objects, avoid `AsyncMock` for synchronous code
- **API tests**: Update expected columns when adding new model fields
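A short sketch of the `MagicMock` pattern for a config object (the function under test is hypothetical):

```python
from unittest import mock


def read_db_uri(config) -> str:
    """Hypothetical synchronous code under test that reads a config value."""
    return config.get("SQLALCHEMY_DATABASE_URI")


# MagicMock works for synchronous call sites; AsyncMock would return
# coroutines here and silently break equality assertions
config = mock.MagicMock()
config.get.return_value = "sqlite:///:memory:"

uri = read_db_uri(config)
```

Using `AsyncMock` for a synchronous call like `config.get(...)` would make `uri` a coroutine object instead of the configured string, which is exactly the failure mode the guideline warns about.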
### Running Tests
```bash
# Frontend
npm run test # All tests
npm run test -- filename.test.tsx # Single file
# Backend
pytest # All tests
pytest tests/unit_tests/specific_test.py # Single file
pytest tests/unit_tests/ # Directory
# If pytest fails with database/setup issues, ask the user to run test environment setup
```
## Environment Validation
**Quick Setup Check (run this first):**
```bash
# Verify Superset is running
curl -f http://localhost:8088/health || echo "❌ Setup required - see https://superset.apache.org/docs/contributing/development#working-with-llms"
```
**If health checks fail:**
"It appears you aren't set up properly. Please refer to the [Working with LLMs](https://superset.apache.org/docs/contributing/development#working-with-llms) section in the development docs for setup instructions."
**Key Project Files:**
- `superset-frontend/package.json` - Frontend build scripts (`npm run dev` on port 9000, `npm run test`, `npm run lint`)
- `pyproject.toml` - Python tooling (ruff, mypy configs)
- `requirements/` folder - Python dependencies (base.txt, development.txt)
## SQLAlchemy Query Best Practices
- **Use negation operator**: `~Model.field` instead of `== False` to avoid ruff E712 errors
- **Example**: `~Model.is_active` instead of `Model.is_active == False`
## Pre-commit Validation
**Use pre-commit hooks for quality validation:**
```bash
# Install hooks
pre-commit install
# IMPORTANT: Stage your changes first!
git add . # Pre-commit only checks staged files
# Quick validation (faster than --all-files)
pre-commit run # Staged files only
pre-commit run mypy # Python type checking
pre-commit run prettier # Code formatting
pre-commit run eslint # Frontend linting
```
**Important pre-commit usage notes:**
- **Stage files first**: Run `git add .` before `pre-commit run` to check only changed files (much faster)
- **Virtual environment**: Activate your Python virtual environment before running pre-commit
```bash
# Common virtual environment locations (yours may differ):
source .venv/bin/activate # if using .venv
source venv/bin/activate # if using venv
source ~/venvs/superset/bin/activate # if using a central location
```
If you get a "command not found" error, ask the user which virtual environment to activate
- **Auto-fixes**: Some hooks auto-fix issues (e.g., trailing whitespace). Re-run after fixes are applied
## Common File Patterns
### API Structure
- **`/api.py`** - REST endpoints with decorators and OpenAPI docstrings
- **`/schemas.py`** - Marshmallow validation schemas for OpenAPI spec
- **`/commands/`** - Business logic classes with @transaction() decorators
- **`/models/`** - SQLAlchemy database models
- **OpenAPI docs**: Auto-generated at `/swagger/v1` from docstrings and schemas
### Migration Files
- **Location**: `superset/migrations/versions/`
- **Naming**: `YYYY-MM-DD_HH-MM_hash_description.py`
- **Utilities**: Use helpers from `superset.migrations.shared.utils` for database compatibility
- **Pattern**: Import utilities instead of raw SQLAlchemy operations
## Platform-Specific Instructions
- **[LLMS.md](LLMS.md)** - General LLM development guide (READ THIS FIRST)
- **[CLAUDE.md](CLAUDE.md)** - For Claude/Anthropic tools
- **[.github/copilot-instructions.md](.github/copilot-instructions.md)** - For GitHub Copilot
- **[GEMINI.md](GEMINI.md)** - For Google Gemini tools
- **[GPT.md](GPT.md)** - For OpenAI/ChatGPT tools
- **[.cursor/rules/dev-standard.mdc](.cursor/rules/dev-standard.mdc)** - For Cursor editor
---
**LLM Note**: This codebase is actively modernizing toward full TypeScript and type safety. Always run `pre-commit run` to validate changes. Follow the ongoing refactors section to avoid deprecated patterns.

View File

@@ -23,6 +23,9 @@ This file documents any backwards-incompatible changes in Superset and
assists people when migrating to a new version.
## Next
- [33084](https://github.com/apache/superset/pull/33084) The DISALLOWED_SQL_FUNCTIONS configuration now includes additional potentially sensitive database functions across PostgreSQL, MySQL, SQLite, MS SQL Server, and ClickHouse. Existing queries using these functions may now be blocked. Review your SQL Lab queries and dashboards if you encounter "disallowed function" errors after upgrading
- [34235](https://github.com/apache/superset/pull/34235) CSV exports now use `utf-8-sig` encoding by default to include a UTF-8 BOM, improving compatibility with Excel.
- [34258](https://github.com/apache/superset/pull/34258) The default build arg in the Dockerfile changed to `INCLUDE_CHROMIUM="false"` (previously `"true"`). This keeps the `lean` layer lean by default; opt in to the `chromium` layer by setting the build arg `INCLUDE_CHROMIUM=true`. This is a breaking change for anyone using the `lean` layer, as it no longer includes Chromium by default.
- [34204](https://github.com/apache/superset/pull/33603) OpenStreetMap has been promoted to the new default for Deck.gl visualizations, since it can be enabled by default without requiring an API key. If you have Mapbox set up and want to disable OpenStreetMap in your environment, follow the steps documented in the [map tiles configuration](https://superset.apache.org/docs/configuration/map-tiles).
- [33116](https://github.com/apache/superset/pull/33116) In ECharts series charts (e.g. Line, Area, Bar), the `x_axis_sort_series` and `x_axis_sort_series_ascending` form data items have been renamed to `x_axis_sort` and `x_axis_sort_asc`.
A migration is included that can potentially affect a significant number of existing charts.

View File

@@ -20,6 +20,9 @@
# If you choose to use this type of deployment make sure to
# create your own docker environment file (docker/.env) with your own
# unique random secure passwords and SECRET_KEY.
#
# For verbose logging during development:
# - Set SUPERSET_LOG_LEVEL=debug in docker/.env-local for detailed Superset logs
# -----------------------------------------------------------------------
x-superset-image: &superset-image apachesuperset.docker.scarf.sh/apache/superset:${TAG:-latest-dev}
x-superset-volumes:

docker-compose-light.yml (new file, 194 lines)
View File

@@ -0,0 +1,194 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# -----------------------------------------------------------------------
# Lightweight docker-compose for running multiple Superset instances
# This includes only essential services: database, Redis, and Superset app
#
# IMPORTANT: To run multiple instances in parallel:
# - Use different project names: docker-compose -p project1 -f docker-compose-light.yml up
# - Use different NODE_PORT values: NODE_PORT=9002 docker-compose -p project2 -f docker-compose-light.yml up
# - Volumes are isolated by project name (e.g., project1_db_home_light, project2_db_home_light)
# - Database name is intentionally different (superset_light) to prevent accidental cross-connections
#
# MCP Service (Model Context Protocol):
# - Optional service for LLM agent integration, available under 'mcp' profile
# - To include MCP: docker-compose -f docker-compose-light.yml --profile mcp up
# - MCP runs on port 5008 by default (customize with MCP_PORT=5009)
# - Enable SQL debugging with MCP_SQL_DEBUG=true
#
# For verbose logging during development:
# - Set SUPERSET_LOG_LEVEL=debug in docker/.env-local for detailed Superset logs
# -----------------------------------------------------------------------
x-superset-user: &superset-user root
x-superset-volumes: &superset-volumes
# /app/pythonpath_docker will be appended to the PYTHONPATH in the final container
- ./docker:/app/docker
- ./superset:/app/superset
- ./superset-frontend:/app/superset-frontend
- superset_home_light:/app/superset_home
- ./tests:/app/tests
x-common-build: &common-build
context: .
target: ${SUPERSET_BUILD_TARGET:-dev} # can use `dev` (default) or `lean`
cache_from:
- apache/superset-cache:3.10-slim-bookworm
args:
DEV_MODE: "true"
INCLUDE_CHROMIUM: ${INCLUDE_CHROMIUM:-false}
INCLUDE_FIREFOX: ${INCLUDE_FIREFOX:-false}
BUILD_TRANSLATIONS: ${BUILD_TRANSLATIONS:-false}
services:
db-light:
env_file:
- path: docker/.env # default
required: true
- path: docker/.env-local # optional override
required: false
image: postgres:16
restart: unless-stopped
# No host port mapping - only accessible within Docker network
volumes:
- db_home_light:/var/lib/postgresql/data
- ./docker/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
environment:
# Override database name to avoid conflicts
POSTGRES_DB: superset_light
superset-light:
env_file:
- path: docker/.env # default
required: true
- path: docker/.env-local # optional override
required: false
build:
<<: *common-build
command: ["/app/docker/docker-bootstrap.sh", "app"]
restart: unless-stopped
# No host port mapping - accessed via webpack dev server proxy
extra_hosts:
- "host.docker.internal:host-gateway"
user: *superset-user
depends_on:
superset-init-light:
condition: service_completed_successfully
volumes: *superset-volumes
environment:
# Override DB connection for light service
DATABASE_HOST: db-light
DATABASE_DB: superset_light
POSTGRES_DB: superset_light
EXAMPLES_HOST: db-light
EXAMPLES_DB: superset_light
EXAMPLES_USER: superset
EXAMPLES_PASSWORD: superset
# Use light-specific config that disables Redis
SUPERSET_CONFIG_PATH: /app/docker/pythonpath_dev/superset_config_docker_light.py
superset-init-light:
build:
<<: *common-build
command: ["/app/docker/docker-init.sh"]
env_file:
- path: docker/.env # default
required: true
- path: docker/.env-local # optional override
required: false
depends_on:
db-light:
condition: service_started
user: *superset-user
volumes: *superset-volumes
environment:
# Override DB connection for light service
DATABASE_HOST: db-light
DATABASE_DB: superset_light
POSTGRES_DB: superset_light
EXAMPLES_HOST: db-light
EXAMPLES_DB: superset_light
EXAMPLES_USER: superset
EXAMPLES_PASSWORD: superset
# Use light-specific config that disables Redis
SUPERSET_CONFIG_PATH: /app/docker/pythonpath_dev/superset_config_docker_light.py
healthcheck:
disable: true
superset-node-light:
build:
context: .
target: superset-node
args:
# This prevents building the frontend bundle since we'll mount local folder
# and build it on startup while firing docker-frontend.sh in dev mode, where
# it'll mount and watch local files and rebuild as you update them
DEV_MODE: "true"
BUILD_TRANSLATIONS: ${BUILD_TRANSLATIONS:-false}
environment:
# set this to false if you have perf issues running the npm i; npm run dev in-docker
# if you do so, you have to run this manually on the host, which should perform better!
BUILD_SUPERSET_FRONTEND_IN_DOCKER: true
NPM_RUN_PRUNE: false
SCARF_ANALYTICS: "${SCARF_ANALYTICS:-}"
# configuring the dev-server to use the host.docker.internal to connect to the backend
superset: "http://superset-light:8088"
ports:
- "127.0.0.1:${NODE_PORT:-9001}:9000" # Parameterized port
command: ["/app/docker/docker-frontend.sh"]
env_file:
- path: docker/.env # default
required: true
- path: docker/.env-local # optional override
required: false
volumes: *superset-volumes
superset-mcp-light:
profiles:
- mcp
build:
<<: *common-build
command: ["/app/docker/docker-bootstrap.sh", "mcp"]
restart: unless-stopped
ports:
- "127.0.0.1:${MCP_PORT:-5008}:5008" # Parameterized port
extra_hosts:
- "host.docker.internal:host-gateway"
user: *superset-user
depends_on:
superset-init-light:
condition: service_completed_successfully
volumes: *superset-volumes
env_file:
- path: docker/.env # default
required: true
- path: docker/.env-local # optional override
required: false
environment:
# Override DB connection for light service
DATABASE_HOST: db-light
DATABASE_DB: superset_light
POSTGRES_DB: superset_light
# Use light-specific config that disables Redis
SUPERSET_CONFIG_PATH: /app/docker/pythonpath_dev/superset_config_docker_light.py
# Enable SQL debugging for MCP if needed
SQLALCHEMY_DEBUG: ${MCP_SQL_DEBUG:-false}
volumes:
superset_home_light:
external: false
db_home_light:
external: false

View File

@@ -20,6 +20,9 @@
# If you choose to use this type of deployment make sure to
# create your own docker environment file (docker/.env) with your own
# unique random secure passwords and SECRET_KEY.
#
# For verbose logging during development:
# - Set SUPERSET_LOG_LEVEL=debug in docker/.env-local for detailed Superset logs
# -----------------------------------------------------------------------
x-superset-volumes:
&superset-volumes # /app/pythonpath_docker will be appended to the PYTHONPATH in the final container

View File

@@ -20,6 +20,9 @@
# If you choose to use this type of deployment make sure to
# create your own docker environment file (docker/.env) with your own
# unique random secure passwords and SECRET_KEY.
#
# For verbose logging during development:
# - Set SUPERSET_LOG_LEVEL=debug in docker/.env-local for detailed Superset logs
# -----------------------------------------------------------------------
x-superset-user: &superset-user root
x-superset-volumes: &superset-volumes

View File

@@ -53,7 +53,12 @@ PYTHONPATH=/app/pythonpath:/app/docker/pythonpath_dev
REDIS_HOST=redis
REDIS_PORT=6379
# Development and logging configuration
# FLASK_DEBUG: Enables Flask dev features (auto-reload, better error pages) - keep 'true' for development
FLASK_DEBUG=true
# SUPERSET_LOG_LEVEL: Controls Superset application logging verbosity (debug, info, warning, error, critical)
SUPERSET_LOG_LEVEL=info
SUPERSET_APP_ROOT="/"
SUPERSET_ENV=development
SUPERSET_LOAD_EXAMPLES=yes
@@ -66,4 +71,3 @@ SUPERSET_SECRET_KEY=TEST_NON_DEV_SECRET
ENABLE_PLAYWRIGHT=false
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
BUILD_SUPERSET_FRONTEND_IN_DOCKER=true
SUPERSET_LOG_LEVEL=info

View File

@@ -78,6 +78,10 @@ case "${1}" in
echo "Starting web app..."
/usr/bin/run-server.sh
;;
mcp)
echo "Starting MCP service..."
superset mcp run --host 0.0.0.0 --port ${MCP_PORT:-5008} --debug
;;
*)
echo "Unknown Operation!!!"
;;

View File

@@ -20,4 +20,5 @@
# DON'T ignore the .gitignore
!.gitignore
!superset_config.py
!superset_config_docker_light.py
!superset_config_local.example

View File

@@ -129,7 +129,7 @@ if os.getenv("CYPRESS_CONFIG") == "true":
#
try:
import superset_config_docker
from superset_config_docker import * # noqa
from superset_config_docker import * # noqa: F403
logger.info(
f"Loaded your Docker configuration at [{superset_config_docker.__file__}]"

View File

@@ -0,0 +1,37 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Configuration for docker-compose-light.yml - disables Redis and uses minimal services
# Import all settings from the main config first
from flask_caching.backends.filesystemcache import FileSystemCache
from superset_config import * # noqa: F403
# Override caching to use simple in-memory cache instead of Redis
RESULTS_BACKEND = FileSystemCache("/app/superset_home/sqllab")
CACHE_CONFIG = {
"CACHE_TYPE": "SimpleCache",
"CACHE_DEFAULT_TIMEOUT": 300,
"CACHE_KEY_PREFIX": "superset_light_",
}
DATA_CACHE_CONFIG = CACHE_CONFIG
THUMBNAIL_CACHE_CONFIG = CACHE_CONFIG
# Disable Celery entirely for lightweight mode
CELERY_CONFIG = None # type: ignore[assignment,misc]

View File

@@ -10,44 +10,85 @@ version: 1
apache-superset>=6.0
:::
Superset now rides on **Ant Design v5's token-based theming**.
Every Antd token works, plus a handful of Superset-specific ones for charts and dashboard chrome.
## Managing Themes via CRUD Interface
Superset now includes a built-in **Theme Management** interface accessible from the admin menu under **Settings > Themes**.
### Creating a New Theme
1. Navigate to **Settings > Themes** in the Superset interface
2. Click **+ Theme** to create a new theme
3. Use the [Ant Design Theme Editor](https://ant.design/theme-editor) to design your theme:
- Design your palette, typography, and component overrides
- Open the `CONFIG` modal and copy the JSON configuration
4. Paste the JSON into the theme definition field in Superset
5. Give your theme a descriptive name and save
You can also extend with Superset-specific tokens (documented in the default theme object) before you import.
### Applying Themes to Dashboards
Once created, themes can be applied to individual dashboards:
- Edit any dashboard and select your custom theme from the theme dropdown
- Each dashboard can have its own theme, allowing for branded or context-specific styling
## Alternative: Instance-wide Configuration
For system-wide theming, you can configure default themes via Python configuration:
### Setting Default Themes
```python
# superset_config.py
# Default theme (light mode)
THEME_DEFAULT = {
"token": {
"colorPrimary": "#2893B3",
"colorSuccess": "#5ac189",
# ... your theme JSON configuration
}
}
# Dark theme configuration
THEME_DARK = {
"algorithm": "dark",
"token": {
"colorPrimary": "#2893B3",
# ... your dark theme overrides
}
}
# Theme behavior settings
THEME_SETTINGS = {
"enforced": False, # If True, forces default theme always
"allowSwitching": True, # Allow users to switch between themes
"allowOSPreference": True, # Auto-detect system theme preference
}
```
Restart Superset to apply changes.
### Copying Themes from CRUD Interface
To use a theme created via the CRUD interface as your system default:
1. Navigate to **Settings > Themes** and edit your desired theme
2. Copy the complete JSON configuration from the theme definition field
3. Paste it directly into your `superset_config.py` as shown above
Restart Superset to apply changes.
## Theme Development Workflow
1. **Design**: Use the [Ant Design Theme Editor](https://ant.design/theme-editor) to iterate on your design
2. **Test**: Create themes in Superset's CRUD interface for testing
3. **Apply**: Assign themes to specific dashboards or configure instance-wide
4. **Iterate**: Modify theme JSON directly in the CRUD interface or re-import from the theme editor
## Advanced Features
- **System Themes**: Superset includes built-in light and dark themes
- **Per-Dashboard Theming**: Each dashboard can have its own visual identity
- **JSON Editor**: Edit theme configurations directly within Superset's interface

View File

@@ -120,6 +120,78 @@ docker volume rm superset_db_home
docker-compose up
```
## GitHub Codespaces (Cloud Development)
GitHub Codespaces provides a complete, pre-configured development environment in the cloud. This is ideal for:
- Quick contributions without local setup
- Consistent development environments across team members
- Working from devices that can't run Docker locally
- Safe experimentation in isolated environments
:::info
We're grateful to GitHub for providing this excellent cloud development service that makes
contributing to Apache Superset more accessible to developers worldwide.
:::
### Getting Started with Codespaces
1. **Create a Codespace**: Use this pre-configured link that sets up everything you need:
[**Launch Superset Codespace →**](https://github.com/codespaces/new?skip_quickstart=true&machine=standardLinux32gb&repo=39464018&ref=codespaces&geo=UsWest&devcontainer_path=.devcontainer%2Fdevcontainer.json)
:::caution
**Important**: You must select at least the **4 CPU / 16GB RAM** machine type (pre-selected in the link above).
Smaller instances will not have sufficient resources to run Superset effectively.
:::
2. **Wait for Setup**: The initial setup takes several minutes. The Codespace will:
- Build the development container
- Install all dependencies
- Start all required services (PostgreSQL, Redis, etc.)
- Initialize the database with example data
3. **Access Superset**: Once ready, check the **PORTS** tab in VS Code for port `9001`.
Click the globe icon to open Superset in your browser.
- Default credentials: `admin` / `admin`
### Key Features
- **Auto-reload**: Both Python and TypeScript files auto-refresh on save
- **Pre-installed Extensions**: VS Code extensions for Python, TypeScript, and database tools
- **Multiple Instances**: Run multiple Codespaces for different branches/features
- **SSH Access**: Connect via terminal using `gh cs ssh` or through the GitHub web UI
- **VS Code Integration**: Works seamlessly with VS Code desktop app
### Managing Codespaces
- **List active Codespaces**: `gh cs list`
- **SSH into a Codespace**: `gh cs ssh`
- **Stop a Codespace**: Via GitHub UI or `gh cs stop`
- **Delete a Codespace**: Via GitHub UI or `gh cs delete`
### Debugging and Logs
Since Codespaces uses `docker-compose-light.yml`, you can monitor all services:
```bash
# Stream logs from all services
docker compose -f docker-compose-light.yml logs -f
# Stream logs from a specific service
docker compose -f docker-compose-light.yml logs -f superset-light
# View last 100 lines and follow
docker compose -f docker-compose-light.yml logs --tail=100 -f
# List all running services
docker compose -f docker-compose-light.yml ps
```
:::tip
Codespaces automatically stop after 30 minutes of inactivity to save resources.
Your work is preserved and you can restart anytime.
:::
## Installing Development Tools
:::note
@@ -194,6 +266,48 @@ You can also run the pre-commit checks manually in various ways:
Replace `<hook_id>` with the ID of the specific hook you want to run. You can find the list
of available hooks in the `.pre-commit-config.yaml` file.
## Working with LLMs
### Environment Setup
Ensure Docker Compose is running before starting LLM sessions:
```bash
docker compose up
```
Validate your environment:
```bash
curl -f http://localhost:8088/health && echo "✅ Superset ready"
```
### LLM Session Best Practices
- Always validate environment setup first using the health checks above
- Use focused validation commands: `pre-commit run` (not `--all-files`)
- **Read [LLMS.md](https://github.com/apache/superset/blob/master/LLMS.md) first** - Contains comprehensive development guidelines, coding standards, and critical refactor information
- **Check platform-specific files** when available:
- `CLAUDE.md` - For Claude/Anthropic tools
- `CURSOR.md` - For Cursor editor
- `GEMINI.md` - For Google Gemini tools
- `GPT.md` - For OpenAI/ChatGPT tools
- Follow the TypeScript migration guidelines and avoid deprecated patterns listed in LLMS.md
### Key Development Commands
```bash
# Frontend development
cd superset-frontend
npm run dev # Development server on http://localhost:9000
npm run test # Run all tests
npm run test -- filename.test.tsx # Run single test file
npm run lint # Linting and type checking
# Backend validation
pre-commit run mypy # Type checking
pytest # Run all tests
pytest tests/unit_tests/specific_test.py # Run single test file
pytest tests/unit_tests/ # Run all tests in directory
```
For detailed development context, environment setup, and coding guidelines, see [LLMS.md](https://github.com/apache/superset/blob/master/LLMS.md).
## Alternatives to `docker compose`
:::caution

View File

@@ -26,11 +26,14 @@ Superset locally is using Docker Compose on a Linux or Mac OSX
computer. Superset does not have official support for Windows. It's also the easiest
way to launch a fully functioning **development environment** quickly.
Note that there are 4 major ways we support to run `docker compose`:
1. **docker-compose.yml:** for interactive development, where we mount your local folder with the
frontend/backend files that you can edit and experience the changes you
make in the app in real time
1. **docker-compose-light.yml:** a lightweight configuration with minimal services (database,
Superset app, and frontend dev server) for development. Uses in-memory caching instead of Redis
and is designed for running multiple instances simultaneously
1. **docker-compose-non-dev.yml** where we just build a more immutable image based on the
local branch and get all the required images running. Changes in the local branch
at the time you fire this up will be reflected, but changes to the code
@@ -44,7 +47,7 @@ Note that there are 3 major ways we support to run `docker compose`:
The `dev` builds include the `psycopg2-binary` required to connect
to the Postgres database launched as part of the `docker compose` builds.
More on these approaches after setting up the requirements for either.
## Requirements
@@ -103,13 +106,36 @@ and help you start fresh. In the context of `docker compose` setting
from within docker. This will slow down the startup, but will fix various npm-related issues.
:::
### Option #2 - lightweight development with multiple instances
For a lighter development setup that uses fewer resources and supports running multiple instances:
```bash
# Single lightweight instance (default port 9001)
docker compose -f docker-compose-light.yml up
# Multiple instances with different ports
NODE_PORT=9001 docker compose -p superset-1 -f docker-compose-light.yml up
NODE_PORT=9002 docker compose -p superset-2 -f docker-compose-light.yml up
NODE_PORT=9003 docker compose -p superset-3 -f docker-compose-light.yml up
```
This configuration includes:
- PostgreSQL database (internal network only)
- Superset application server
- Frontend development server with webpack hot reloading
- In-memory caching (no Redis)
- Isolated volumes and networks per instance
Access each instance at `http://localhost:{NODE_PORT}` (e.g., `http://localhost:9001`).
### Option #3 - build a set of immutable images from the local branch
```bash
docker compose -f docker-compose-non-dev.yml up
```
### Option #4 - boot up an official release
```bash
# Set the version you want to run

View File

@@ -0,0 +1,741 @@
---
title: API Reference
sidebar_position: 3
version: 1
---
# MCP Tools API Reference
Complete reference for all 16 MCP tools with request/response examples.
> 🚀 **First time here?** Start with [Dashboard Tools](#dashboard-tools) or [Chart Tools](#chart-tools) to see the most commonly used features.
>
> 🔐 **Need authentication?** See the [Authentication Guide](./authentication) for JWT setup.
>
> 🔧 **Want to add tools?** Check the [Development Guide](./development#adding-new-tools) for step-by-step instructions.
## Dashboard Tools
### list_dashboards
List dashboards with search, filtering, and pagination support.
**Request Schema:**
```json
{
"search": "sales", // Optional: Search term
"filters": [ // Optional: Advanced filters
{
"col": "published",
"opr": "eq",
"value": true
}
],
"page": 1, // Optional: Page number (default: 1)
"page_size": 20, // Optional: Items per page (default: 20)
"select_columns": [ // Optional: Specific columns
"id", "dashboard_title", "uuid"
],
"use_cache": true // Optional: Use cached data (default: true)
}
```
**Response Example:**
```json
{
"dashboards": [
{
"id": 1,
"uuid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"dashboard_title": "Sales Performance",
"url": "/superset/dashboard/1/",
"published": true,
"owners": ["admin"],
"created_on": "2024-01-15T10:30:00Z",
"changed_on": "2024-01-20T14:15:00Z"
}
],
"total_count": 45,
"page": 1,
"page_size": 20,
"cache_status": {
"cache_hit": true,
"cache_age_seconds": 300
}
}
```
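Since every field except the paging defaults is optional, a small client-side helper can keep request payloads consistent. This is a hypothetical convenience function (not part of the MCP service), sketched against the schema documented above:

```python
# Hypothetical helper that builds a list_dashboards request payload
# matching the documented schema (all keys optional except paging defaults).
def build_list_dashboards_request(search=None, filters=None, page=1,
                                  page_size=20, select_columns=None,
                                  use_cache=True):
    payload = {"page": page, "page_size": page_size, "use_cache": use_cache}
    if search is not None:
        payload["search"] = search
    if filters is not None:
        # Each filter uses col/opr/value, e.g. {"col": "published", "opr": "eq", "value": True}
        payload["filters"] = filters
    if select_columns is not None:
        payload["select_columns"] = select_columns
    return payload

req = build_list_dashboards_request(
    search="sales",
    filters=[{"col": "published", "opr": "eq", "value": True}],
)
print(req)
```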
### get_dashboard_info
Get detailed information about a specific dashboard.
**Request Schema:**
```json
{
"identifier": "a1b2c3d4-e5f6-7890-abcd-ef1234567890", // ID, UUID, or slug
"use_cache": true
}
```
**Response Example:**
```json
{
"dashboard_id": 1,
"uuid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
"dashboard_title": "Sales Performance Dashboard",
"slug": "sales-performance",
"url": "/superset/dashboard/1/",
"published": true,
"owners": ["admin", "analyst"],
"roles": ["Sales Team"],
"charts": [
{
"id": 10,
"slice_name": "Monthly Revenue",
"viz_type": "line"
},
{
"id": 11,
"slice_name": "Regional Sales",
"viz_type": "bar"
}
],
"filters": [
{
"column": "region",
"type": "select"
}
],
"created_on": "2024-01-15T10:30:00Z",
"changed_on": "2024-01-20T14:15:00Z"
}
```
### generate_dashboard
Create a new dashboard with multiple charts.
**Request Schema:**
```json
{
"chart_ids": [10, 11, 12, 13],
"dashboard_title": "Q4 Performance Dashboard",
"description": "Quarterly performance metrics and KPIs",
"published": true,
"layout_type": "grid" // Optional: "grid" or "tabs"
}
```
**Response Example:**
```json
{
"dashboard_id": 25,
"uuid": "new-dash-uuid-here",
"dashboard_title": "Q4 Performance Dashboard",
"url": "/superset/dashboard/25/",
"charts_added": 4,
"layout": {
"type": "grid",
"columns": 2,
"rows": 2
},
"created_on": "2024-01-25T16:45:00Z"
}
```
## Chart Tools
### list_charts
List charts with advanced filtering and search capabilities.
**Request Schema:**
```json
{
"search": "revenue",
"filters": [
{
"col": "viz_type",
"opr": "in",
"value": ["line", "bar", "area"]
}
],
"page": 1,
"page_size": 25,
"select_columns": ["id", "slice_name", "viz_type", "uuid"],
"use_cache": true
}
```
**Response Example:**
```json
{
"charts": [
{
"id": 10,
"uuid": "chart-uuid-1",
"slice_name": "Monthly Revenue Trend",
"viz_type": "line",
"datasource_name": "sales_data",
"owners": ["admin"],
"created_on": "2024-01-10T09:15:00Z"
}
],
"total_count": 125,
"page": 1,
"page_size": 25
}
```
### get_chart_info
Get comprehensive chart information including configuration.
**Request Schema:**
```json
{
"identifier": 10, // ID or UUID
"include_form_data": true, // Include chart configuration
"use_cache": true
}
```
**Response Example:**
```json
{
"chart_id": 10,
"uuid": "chart-uuid-1",
"slice_name": "Monthly Revenue Trend",
"viz_type": "line",
"datasource_id": 5,
"datasource_name": "sales_data",
"datasource_type": "table",
"form_data": {
"viz_type": "line",
"x_axis": "month",
"metrics": ["sum__revenue"],
"time_range": "Last 12 months"
},
"query_context": {
"datasource": {"id": 5, "type": "table"},
"queries": [{"columns": [], "metrics": ["sum__revenue"]}]
},
"explore_url": "/superset/explore/?form_data=%7B%22slice_id%22%3A10%7D",
"owners": ["admin"],
"created_on": "2024-01-10T09:15:00Z",
"changed_on": "2024-01-15T11:30:00Z"
}
```
### generate_chart
Create a new chart with specified configuration.
**Request Schema:**
```json
{
"dataset_id": "5",
"config": {
"chart_type": "xy",
"x": {"name": "month", "label": "Month"},
"y": [
{
"name": "revenue",
"aggregate": "SUM",
"label": "Total Revenue"
},
{
"name": "orders",
"aggregate": "COUNT",
"label": "Order Count"
}
],
"kind": "line",
"x_axis": {
"title": "Month",
"format": "smart_date"
},
"y_axis": {
"title": "Revenue ($)",
"format": "$,.0f"
},
"legend": {
"show": true,
"position": "top"
}
},
"slice_name": "Revenue and Orders Trend",
"description": "Monthly revenue and order count comparison",
"save_chart": true,
"generate_preview": true,
"preview_formats": ["url", "ascii"]
}
```
**Response Example:**
```json
{
"chart_id": 45,
"uuid": "new-chart-uuid",
"slice_name": "Revenue and Orders Trend",
"viz_type": "echarts_timeseries_line",
"datasource_id": 5,
"explore_url": "/superset/explore/?form_data=%7B%22slice_id%22%3A45%7D",
"query_executed": true,
"query_result": {
"status": "success",
"row_count": 12,
"execution_time": 0.145
},
"preview": {
"url": {
"preview_url": "http://localhost:5008/screenshot/chart/45.png",
"width": 800,
"height": 600
},
"ascii": {
"ascii_content": "Revenue Trend\n==============\nJan |████████████████ $125K\nFeb |██████████████████ $140K\n...",
"width": 80,
"height": 20
}
},
"created_on": "2024-01-25T14:20:00Z"
}
```
### get_chart_data
Export chart data in multiple formats.
**Request Schema:**
```json
{
"identifier": 10,
"format": "json", // "json", "csv", "excel"
"limit": 1000, // Optional: Row limit
"offset": 0, // Optional: Row offset
"filters": [ // Optional: Additional filters
{
"column": "region",
"op": "=",
"value": "US"
}
],
"use_cache": true,
"force_refresh": false
}
```
**Response Example:**
```json
{
"data": [
{
"month": "2024-01",
"revenue": 125000,
"orders": 450
},
{
"month": "2024-02",
"revenue": 140000,
"orders": 520
}
],
"total_rows": 12,
"columns": [
{"name": "month", "type": "DATE"},
{"name": "revenue", "type": "BIGINT"},
{"name": "orders", "type": "BIGINT"}
],
"query": {
"sql": "SELECT month, SUM(revenue) as revenue, COUNT(*) as orders FROM sales_data GROUP BY month ORDER BY month",
"execution_time": 0.089
},
"cache_status": {
"cache_hit": false,
"cache_type": "query",
"refreshed": true
}
}
```
### get_chart_preview
Generate chart previews in multiple formats.
**Request Schema:**
```json
{
"identifier": 10,
"format": "url", // "url", "base64", "ascii", "table"
"width": 800, // For image formats
"height": 600, // For image formats
"ascii_width": 80, // For ASCII format
"ascii_height": 20, // For ASCII format
"use_cache": true
}
```
**Response Examples:**
**URL Format:**
```json
{
  "format": "url",
  "preview_url": "http://localhost:5008/screenshot/chart/10.png",
  "width": 800,
  "height": 600,
  "supports_interaction": false,
  "expires_at": "2024-01-26T14:20:00Z"
}
```
**ASCII Format:**
```json
{
  "format": "ascii",
  "ascii_content": "Monthly Revenue Trend\n=====================\n\nJan |████████████████████ $125K\nFeb |██████████████████████ $140K\nMar |███████████████████ $135K\nApr |█████████████████████████ $155K\n\nRange: $125K to $155K\n▁▃▂▅▇▆▄▃▂▄▅▆▇▅▃▂",
  "width": 80,
  "height": 20,
  "supports_color": false
}
```
**Table Format:**
```json
{
  "format": "table",
  "table_data": "Monthly Revenue Data\n====================\n\nMonth | Revenue | Orders\n---------|----------|--------\nJan 2024 | $125,000 | 450\nFeb 2024 | $140,000 | 520\nMar 2024 | $135,000 | 495\n\nTotal: 12 rows × 3 columns",
  "row_count": 12,
  "supports_sorting": true
}
```
## Dataset Tools
### list_datasets
List available datasets with columns and metrics.
**Request Schema:**
```json
{
  "search": "sales",
  "filters": [
    {
      "col": "is_active",
      "opr": "eq",
      "value": true
    }
  ],
  "include_columns": true,   // Include column metadata
  "include_metrics": true,   // Include metric metadata
  "page": 1,
  "page_size": 15
}
```
**Response Example:**
```json
{
  "datasets": [
    {
      "id": 1,
      "uuid": "dataset-uuid-1",
      "table_name": "sales_data",
      "database_name": "main_warehouse",
      "schema": "public",
      "owners": ["admin"],
      "columns": [
        {
          "column_name": "region",
          "type": "VARCHAR",
          "is_active": true,
          "is_dttm": false
        },
        {
          "column_name": "revenue",
          "type": "DECIMAL",
          "is_active": true,
          "is_dttm": false
        }
      ],
      "metrics": [
        {
          "metric_name": "sum__revenue",
          "expression": "SUM(revenue)",
          "metric_type": "sum"
        }
      ],
      "created_on": "2024-01-05T08:00:00Z"
    }
  ],
  "total_count": 23,
  "page": 1,
  "page_size": 15
}
```
### get_dataset_info
Get detailed dataset information with full column/metric metadata.
**Request Schema:**
```json
{
  "identifier": "dataset-uuid-1",   // ID or UUID
  "include_columns": true,
  "include_metrics": true,
  "use_cache": true
}
```
**Response Example:**
```json
{
  "dataset_id": 1,
  "uuid": "dataset-uuid-1",
  "table_name": "sales_data",
  "database_name": "main_warehouse",
  "database_id": 1,
  "schema": "public",
  "sql": null,
  "is_active": true,
  "owners": ["admin", "data_team"],
  "columns": [
    {
      "id": 101,
      "column_name": "region",
      "type": "VARCHAR",
      "is_active": true,
      "is_dttm": false,
      "groupby": true,
      "filterable": true,
      "description": "Geographic region"
    },
    {
      "id": 102,
      "column_name": "order_date",
      "type": "DATE",
      "is_active": true,
      "is_dttm": true,
      "groupby": true,
      "filterable": true
    }
  ],
  "metrics": [
    {
      "id": 201,
      "metric_name": "sum__revenue",
      "expression": "SUM(revenue)",
      "metric_type": "sum",
      "is_active": true,
      "description": "Total revenue"
    }
  ],
  "created_on": "2024-01-05T08:00:00Z",
  "changed_on": "2024-01-18T12:30:00Z"
}
```
## System Tools
### get_superset_instance_info
Get Superset instance information and statistics.
**Request Schema:**
```json
{
  "include_statistics": true,   // Include usage statistics
  "include_tools": true,        // Include available MCP tools
  "use_cache": true
}
```
**Response Example:**
```json
{
  "version": "4.1.0",
  "build": "apache-superset-4.1.0",
  "mcp_service_version": "1.0.0",
  "authentication": {
    "enabled": true,
    "type": "jwt_bearer",
    "required_scopes": ["dashboard:read", "chart:read"]
  },
  "statistics": {
    "dashboards": {
      "total": 45,
      "published": 32
    },
    "charts": {
      "total": 125,
      "by_viz_type": {
        "line": 35,
        "bar": 28,
        "table": 42,
        "pie": 20
      }
    },
    "datasets": {
      "total": 23,
      "active": 18
    },
    "users": {
      "total": 15,
      "active": 12
    }
  },
  "mcp_tools": [
    {
      "name": "list_dashboards",
      "description": "List dashboards with search and filtering",
      "category": "dashboard"
    },
    {
      "name": "generate_chart",
      "description": "Create new charts programmatically",
      "category": "chart"
    }
  ],
  "database_connections": [
    {
      "id": 1,
      "database_name": "main_warehouse",
      "backend": "postgresql",
      "status": "healthy"
    }
  ],
  "cache_status": {
    "enabled": true,
    "backend": "redis",
    "hit_rate": 0.85
  }
}
```
### generate_explore_link
Generate Superset explore URLs with pre-configured chart settings.
**Request Schema:**
```json
{
  "dataset_id": "1",
  "chart_config": {
    "viz_type": "line",
    "x_axis": "month",
    "metrics": ["sum__revenue"],
    "time_range": "Last 6 months"
  },
  "title": "Revenue Analysis",
  "cache_form_data": true
}
```
**Response Example:**
```json
{
  "explore_url": "/superset/explore/?form_data_key=abc123def456",
  "full_url": "http://localhost:8088/superset/explore/?form_data_key=abc123def456",
  "form_data_key": "abc123def456",
  "expires_at": "2024-01-26T16:45:00Z",
  "chart_config": {
    "viz_type": "line",
    "datasource": "1__table",
    "x_axis": "month",
    "metrics": ["sum__revenue"],
    "time_range": "Last 6 months"
  }
}
```
## SQL Lab Tools
### open_sql_lab_with_context
Open SQL Lab with pre-configured database, schema, and SQL.
**Request Schema:**
```json
{
  "database_connection_id": 1,
  "schema": "public",
  "dataset_in_context": "sales_data",
  "sql": "SELECT region, SUM(revenue) as total_revenue\nFROM sales_data \nWHERE order_date >= '2024-01-01'\nGROUP BY region\nORDER BY total_revenue DESC",
  "title": "Regional Sales Analysis"
}
```
**Response Example:**
```json
{
  "sql_lab_url": "/superset/sqllab/?dbid=1&schema=public&sql_template=encoded_sql_here",
  "full_url": "http://localhost:8088/superset/sqllab/?dbid=1&schema=public&sql_template=encoded_sql_here",
  "database_connection": {
    "id": 1,
    "database_name": "main_warehouse",
    "backend": "postgresql"
  },
  "schema": "public",
  "sql_template": "SELECT region, SUM(revenue) as total_revenue...",
  "context": {
    "dataset": "sales_data",
    "title": "Regional Sales Analysis"
  }
}
```
## Error Responses
All tools can return error responses with this structure:
```json
{
  "error": "Chart not found with identifier: 999",
  "error_type": "NotFound",
  "suggestions": [
    "Verify the chart ID exists",
    "Check if you have permission to access this chart",
    "Try using the chart UUID instead of ID"
  ],
  "details": {
    "identifier": 999,
    "identifier_type": "id"
  }
}
```
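Because every tool shares this error shape, clients can handle failures uniformly by checking for the `error` key. A minimal client-side sketch (`unwrap_tool_response` is a hypothetical helper, not part of the service):

```python
def unwrap_tool_response(response: dict) -> dict:
    """Return the tool payload, or raise when the tool returned an error."""
    if "error" in response:
        hints = "; ".join(response.get("suggestions", []))
        message = f"{response.get('error_type', 'Error')}: {response['error']}"
        if hints:
            message += f" (try: {hints})"
        raise RuntimeError(message)
    return response
```

The `suggestions` list is surfaced in the raised message so an LLM agent can act on the recovery hints directly.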
## Cache Status
Many responses include cache status information:
```json
{
  "cache_status": {
    "cache_hit": true,          // Data served from cache
    "cache_type": "query",      // Type: query, metadata, form_data
    "cache_age_seconds": 300,   // Age of cached data
    "refreshed": false          // Whether cache was refreshed
  }
}
```
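A client that needs reasonably fresh data can inspect this block and re-issue the request with `force_refresh` when the cached copy is stale. A hypothetical helper (the 600-second threshold is an arbitrary example):

```python
def should_force_refresh(cache_status: dict, max_age_seconds: int = 600) -> bool:
    """Decide whether to re-issue a request with force_refresh=True."""
    if not cache_status.get("cache_hit"):
        return False  # data was just computed, nothing to refresh
    return cache_status.get("cache_age_seconds", 0) > max_age_seconds
```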
This API reference provides complete documentation for integrating with the Superset MCP service, including all request schemas, response formats, and error handling patterns.
## What's Next?
### 🔐 **Ready for Production?**
Set up authentication and security with the [Authentication Guide](./authentication).
### 🔧 **Want to Add More Tools?**
Learn how to extend the MCP service in the [Development Guide](./development).
### 🏗️ **Need Architecture Details?**
Understand the system design in the [Architecture Overview](./architecture).
### 🏢 **Enterprise Features?**
Explore advanced capabilities in the [Preset Integration Guide](./preset-integration).
> 📖 **Back to Documentation Index**: [MCP Service](./intro)

---
title: Architecture Overview
sidebar_position: 5
version: 1
---
# Architecture Overview
The Superset Model Context Protocol (MCP) service provides a modular, schema-driven interface for programmatic access to Superset dashboards, charts, datasets, and instance metadata. It is built on FastMCP and designed for LLM agents and automation tools.
**Status:** Phase 1 Complete. Core functionality stable, authentication production-ready. See [SIP-171](https://github.com/apache/superset/issues/33870) for roadmap.
## Core Architecture
### Tool Structure
- **19 MCP tools** organized by domain: `dashboard/`, `chart/`, `dataset/`, `system/`
- All tools decorated with `@mcp.tool` and `@mcp_auth_hook`
- **Import inside functions**: All Superset DAOs/commands imported in function body to ensure proper app context
- Pydantic v2 schemas with LLM/OpenAPI-compatible field descriptions
### Request Schema Pattern
Eliminates LLM parameter validation issues using structured request objects:
```python
# New approach - single request object
get_dataset_info(request={"identifier": 123}) # ID
get_dataset_info(request={"identifier": "uuid-string"}) # UUID
# Old approach - replaced
get_dataset_info(dataset_id=123)
```
### Multi-Identifier Support
- **Charts/Datasets**: ID (numeric) or UUID (string)
- **Dashboards**: ID (numeric), UUID (string), or slug (string)
- Validation prevents conflicting parameters (search + filters)
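Dispatch between these identifier types can be sketched in a few lines (illustrative only; the actual parsing lives in the service's generic tool abstractions):

```python
import uuid

def classify_identifier(identifier) -> str:
    """Classify a dashboard identifier as 'id', 'uuid', or 'slug'."""
    if isinstance(identifier, int):
        return "id"
    try:
        uuid.UUID(str(identifier))  # raises ValueError for non-UUID strings
        return "uuid"
    except ValueError:
        return "slug"
```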
## Available Tools
### Dashboard Tools (5)
- `list_dashboards` - List with search/filters/pagination
- `get_dashboard_info` - Get by ID/UUID/slug
- `get_dashboard_available_filters` - Discover filterable columns
- `generate_dashboard` - Create dashboards with multiple charts
- `add_chart_to_existing_dashboard` - Add charts to existing dashboards
### Chart Tools (8)
- `list_charts` - List with search/filters/pagination
- `get_chart_info` - Get by ID/UUID
- `get_chart_available_filters` - Discover filterable columns
- `generate_chart` - Create charts (table, line, bar, area, scatter)
- `update_chart` - Update saved charts
- `update_chart_preview` - Update cached previews
- `get_chart_data` - Export data (JSON/CSV/Excel)
- `get_chart_preview` - Screenshots, ASCII art, table previews
### Dataset Tools (3)
- `list_datasets` - List with columns/metrics
- `get_dataset_info` - Get by ID/UUID with metadata
- `get_dataset_available_filters` - Discover filterable columns
### System Tools (2)
- `get_superset_instance_info` - Instance statistics and version
- `generate_explore_link` - Generate chart exploration URLs
### SQL Lab Tools (1)
- `open_sql_lab_with_context` - Pre-configured SQL Lab sessions
## Authentication & Security
### JWT Bearer Authentication
Production-ready authentication with configurable factory pattern:
```python
# In superset_config.py
MCP_AUTH_ENABLED = True
MCP_JWKS_URI = "https://auth.company.com/.well-known/jwks.json"
MCP_JWT_ISSUER = "https://auth.company.com/"
MCP_JWT_AUDIENCE = "superset-mcp-api"
```
### Scope-Based Authorization
| Tool Category | Required Scope |
|---------------|----------------|
| Dashboard ops | `dashboard:read` |
| Chart ops | `chart:read` / `chart:write` |
| Dataset ops | `dataset:read` |
| System ops | `instance:read` |
### Audit Logging
All operations logged with MCP context:
- User impersonation tracking
- Tool execution details
- Sanitized payloads (sensitive data redacted)
## Cache Control
Leverages Superset's existing cache layers with comprehensive control:
### Cache Types
1. **Query Result Cache** - Database query results
2. **Metadata Cache** - Table schemas, columns, metrics
3. **Form Data Cache** - Chart configurations
4. **Dashboard Cache** - Rendered components
### Cache Parameters
Tools support cache control through request schemas:
- `use_cache`: Enable/disable caching (default: true)
- `force_refresh`: Force cache refresh (default: false)
- `cache_timeout`: Override timeout in seconds
- `refresh_metadata`: Force metadata refresh
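For example, a `get_chart_data` request that bypasses the cache and shortens the timeout on the refreshed entry combines these parameters:

```json
{
  "identifier": 10,
  "format": "json",
  "use_cache": false,
  "force_refresh": true,
  "cache_timeout": 120
}
```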
### Cache Status Reporting
```json
{
  "cache_status": {
    "cache_hit": true,
    "cache_type": "query",
    "cache_age_seconds": 300,
    "refreshed": false
  }
}
```
## Tool Abstractions
### Generic Base Classes
- **ModelListTool**: Handles list/search/filter operations with pagination
- **ModelGetInfoTool**: Single object retrieval by multiple identifier types
- **ModelGetAvailableFiltersTool**: Returns filterable columns/operators
### Implementation Pattern
```python
@mcp.tool
@mcp_auth_hook
def my_tool(request: MyRequest) -> MyResponse:
    # Import Superset modules inside function
    from superset.daos.dashboard import DashboardDAO
    from superset.commands.chart.create import CreateChartCommand

    # Tool implementation
    return response
```
## Configuration
### URL Configuration
Centralized URL management for consistent link generation:
```python
# In superset_config.py
SUPERSET_WEBSERVER_ADDRESS = "http://localhost:8088" # Development
SUPERSET_WEBSERVER_ADDRESS = "https://superset.company.com" # Production
```
### Schema Design Principles
- **Minimal columns** in list responses
- **Optional fields** in info schemas for missing data handling
- **Null exclusion** for cleaner JSON responses
- **Type safety** with clear Pydantic validation
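A toy schema illustrates these principles (names are illustrative; in Pydantic v2, `model_dump(exclude_none=True)` provides the null-exclusion behavior):

```python
from typing import Optional

from pydantic import BaseModel, Field

class ChartSummary(BaseModel):
    """Minimal list-response row: few columns, optional fields allowed."""
    chart_id: int = Field(description="Chart numeric ID")
    slice_name: str = Field(description="Chart display name")
    description: Optional[str] = Field(default=None, description="Optional description")

# Null exclusion keeps list responses compact
row = ChartSummary(chart_id=10, slice_name="Revenue Trend")
compact = row.model_dump(exclude_none=True)  # unset optional fields dropped
```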
## Adding New Tools
1. **Choose domain folder**: `dashboard/`, `chart/`, `dataset/`, or `system/`
2. **Define schemas**: Use Pydantic with field descriptions
3. **Implement tool**:
- Decorate with `@mcp.tool` and `@mcp_auth_hook`
- Import Superset modules inside function body
- Use generic abstractions where applicable
4. **Register**: Add to appropriate `__init__.py`
5. **Test**: Add unit tests in `tests/unit_tests/mcp_service/`
## Current Status
### ✅ Phase 1 Complete
- FastMCP server with CLI
- JWT authentication with RBAC
- All 19 core tools implemented
- Request schema pattern
- Cache control system
- Audit logging
- 194+ unit tests
### 🎯 Future Enhancements
- Demo notebooks and video examples
- OAuth integration for user impersonation
- Enhanced chart rendering formats
- Advanced security features
**Production Ready**: Core functionality stable with comprehensive testing and authentication.
---
For setup and usage, see the [MCP Service overview](./intro).

---
title: Authentication & Security
sidebar_position: 4
version: 1
---
# Authentication & Security
The MCP service provides enterprise-grade JWT Bearer authentication with flexible configuration options and comprehensive security controls.
## Quick Start
### Development Mode (Default)
:::tip
Authentication is **disabled by default** for local development - no configuration needed.
:::
```bash
# No configuration needed - service runs without authentication
superset mcp run --port 5008 --debug
```
### Production Mode
:::warning
Always enable authentication for production deployments to secure your Superset instance.
:::
Enable JWT authentication in your Superset configuration:
```python
# In superset_config.py
MCP_AUTH_ENABLED = True
MCP_JWKS_URI = "https://auth.company.com/.well-known/jwks.json"
MCP_JWT_ISSUER = "https://auth.company.com/"
MCP_JWT_AUDIENCE = "superset-mcp-api"
MCP_REQUIRED_SCOPES = ["dashboard:read", "chart:read", "dataset:read"]
```
## Configuration Options
### Option 1: Simple Configuration
Add to your `superset_config.py`:
```python
# Enable authentication
MCP_AUTH_ENABLED = True
# JWT settings
MCP_JWKS_URI = "https://auth.company.com/.well-known/jwks.json"
MCP_JWT_ISSUER = "https://auth.company.com/"
MCP_JWT_AUDIENCE = "superset-mcp-api"
MCP_REQUIRED_SCOPES = ["dashboard:read", "chart:read"]
# Optional: User resolution
MCP_JWT_USER_CLAIM = "sub" # JWT claim for username (default: "sub")
MCP_JWT_EMAIL_CLAIM = "email" # JWT claim for email (default: "email")
MCP_FALLBACK_USER = "admin" # Fallback user if JWT user not found
```
### Option 2: Custom Factory
For advanced authentication requirements:
```python
def create_custom_mcp_auth(app):
    """Custom auth factory for enterprise environments."""
    from fastmcp.server.auth.providers.bearer import BearerAuthProvider

    return BearerAuthProvider(
        jwks_uri=app.config["MCP_JWKS_URI"],
        issuer=app.config["MCP_JWT_ISSUER"],
        audience=app.config["MCP_JWT_AUDIENCE"],
        required_scopes=app.config.get("MCP_REQUIRED_SCOPES", []),
        user_resolver=custom_user_resolver,
        cache_ttl=300,  # Cache JWKS for 5 minutes
    )

MCP_AUTH_FACTORY = create_custom_mcp_auth
```
### Option 3: Environment Variables
For containerized deployments:
```bash
# Environment variables
export MCP_AUTH_ENABLED=true
export MCP_JWKS_URI=https://auth.company.com/.well-known/jwks.json
export MCP_JWT_ISSUER=https://auth.company.com/
export MCP_JWT_AUDIENCE=superset-mcp-api
export MCP_REQUIRED_SCOPES=dashboard:read,chart:read,dataset:read
```
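Because `superset_config.py` is plain Python, one way to map these variables into configuration is a small parsing shim (a sketch; how you wire env vars into config is deployment-specific):

```python
import os

# Booleans arrive as strings, so normalize explicitly
MCP_AUTH_ENABLED = os.environ.get("MCP_AUTH_ENABLED", "false").lower() == "true"

MCP_JWKS_URI = os.environ.get("MCP_JWKS_URI", "")
MCP_JWT_ISSUER = os.environ.get("MCP_JWT_ISSUER", "")
MCP_JWT_AUDIENCE = os.environ.get("MCP_JWT_AUDIENCE", "")

# Comma-separated list -> Python list, dropping empty entries
MCP_REQUIRED_SCOPES = [
    scope.strip()
    for scope in os.environ.get("MCP_REQUIRED_SCOPES", "").split(",")
    if scope.strip()
]
```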
## Identity Provider Integration
### Auth0
```python
# Auth0 configuration
MCP_JWKS_URI = "https://your-tenant.auth0.com/.well-known/jwks.json"
MCP_JWT_ISSUER = "https://your-tenant.auth0.com/"
MCP_JWT_AUDIENCE = "superset-mcp-api"
```
### Okta
```python
# Okta configuration
MCP_JWKS_URI = "https://your-org.okta.com/oauth2/default/v1/keys"
MCP_JWT_ISSUER = "https://your-org.okta.com/oauth2/default"
MCP_JWT_AUDIENCE = "api://superset-mcp"
```
### AWS Cognito
```python
# Cognito configuration
MCP_JWKS_URI = "https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/jwks.json"
MCP_JWT_ISSUER = "https://cognito-idp.{region}.amazonaws.com/{userPoolId}"
MCP_JWT_AUDIENCE = "your-app-client-id"
```
### Azure AD
```python
# Azure AD configuration
MCP_JWKS_URI = "https://login.microsoftonline.com/{tenant}/discovery/v2.0/keys"
MCP_JWT_ISSUER = "https://login.microsoftonline.com/{tenant}/v2.0"
MCP_JWT_AUDIENCE = "api://superset-mcp"
```
## Scope-Based Authorization
### Standard Scopes
The MCP service defines these standard scopes:
| Scope | Description | Required For |
|-------|-------------|--------------|
| `dashboard:read` | Read dashboard information | `list_dashboards`, `get_dashboard_info` |
| `dashboard:write` | Create/modify dashboards | `generate_dashboard`, `add_chart_to_existing_dashboard` |
| `chart:read` | Read chart information | `list_charts`, `get_chart_info`, `get_chart_data` |
| `chart:write` | Create/modify charts | `generate_chart`, `update_chart` |
| `dataset:read` | Read dataset information | `list_datasets`, `get_dataset_info` |
| `instance:read` | Read instance information | `get_superset_instance_info` |
### Custom Scopes
Define custom scopes for specific use cases:
```python
# Custom scope definitions
from typing import List

CUSTOM_MCP_SCOPES = {
    "analytics:export": "Export analytical data",
    "reports:generate": "Generate automated reports",
    "admin:config": "Access administrative configuration",
}

# Map tools to custom scopes
def get_custom_required_scopes(tool_name: str) -> List[str]:
    scope_map = {
        "get_chart_data": ["chart:read", "analytics:export"],
        "generate_dashboard": ["dashboard:write", "reports:generate"],
        "get_superset_instance_info": ["instance:read", "admin:config"],
    }
    return scope_map.get(tool_name, [])

MCP_SCOPE_RESOLVER = get_custom_required_scopes
```
## JWT Token Format
### Required Claims
Your JWT tokens must include these standard claims:
```json
{
  "iss": "https://auth.company.com/",     // Issuer
  "aud": "superset-mcp-api",              // Audience
  "sub": "user@company.com",              // Subject (username)
  "exp": 1704118800,                      // Expiration timestamp
  "iat": 1704115200,                      // Issued at timestamp
  "scope": "dashboard:read chart:read"    // Space-separated scopes
}
```
### Optional Claims
Additional claims for enhanced functionality:
```json
{
  "email": "user@company.com",            // User email
  "name": "John Doe",                     // Full name
  "groups": ["analysts", "sales_team"],   // User groups
  "tenant_id": "company_123",             // Multi-tenant ID
  "role": "analyst"                       // User role
}
```
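The claim checks can be sketched in pure Python (illustrative only; real validation, including signature verification against the JWKS, is handled by the bearer auth provider):

```python
import time

REQUIRED_CLAIMS = {"iss", "aud", "sub", "exp", "iat", "scope"}

def claims_acceptable(claims: dict, required_scopes: set) -> bool:
    """Check presence of required claims, expiry, and scope coverage."""
    if not REQUIRED_CLAIMS <= claims.keys():
        return False
    if claims["exp"] <= time.time():
        return False  # token expired
    granted = set(claims["scope"].split())  # scopes are space-separated
    return required_scopes <= granted
```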
## Client Integration
### API Client Usage
```python
import requests

# Get JWT token from your identity provider
token = get_jwt_token()

# Call MCP service with Bearer authentication
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json",
}

response = requests.post(
    "http://localhost:5008/call_tool",
    headers=headers,
    json={
        "tool": "list_dashboards",
        "arguments": {"search": "sales"},
    },
)
data = response.json()
```
### Claude Desktop with Authentication
For Claude Desktop, the proxy script handles authentication:
```bash
#!/bin/bash
# run_proxy_with_auth.sh
# Get token from environment or file
if [ -f ~/.superset_mcp_token ]; then
    TOKEN=$(cat ~/.superset_mcp_token)
else
    TOKEN=${SUPERSET_MCP_TOKEN}
fi
# Export token for proxy
export MCP_AUTH_TOKEN="$TOKEN"
cd /path/to/superset
source venv/bin/activate
exec fastmcp proxy http://localhost:5008 --auth-header "Authorization: Bearer $TOKEN"
```
## User Resolution
### Default User Resolution
The service maps JWT claims to Superset users:
```python
def default_user_resolver(claims: Dict[str, Any]) -> User:
    """Default user resolution from JWT claims."""
    # Extract username from configurable claim
    username = claims.get(app.config.get("MCP_JWT_USER_CLAIM", "sub"))

    # Find Superset user
    user = security_manager.find_user(username=username)
    if not user:
        # Try email lookup
        email = claims.get(app.config.get("MCP_JWT_EMAIL_CLAIM", "email"))
        if email:
            user = security_manager.find_user(email=email)

    if not user and app.config.get("MCP_FALLBACK_USER"):
        # Use fallback user for development
        user = security_manager.find_user(username=app.config["MCP_FALLBACK_USER"])

    return user
```
### Custom User Resolution
Implement custom user resolution logic:
```python
def custom_user_resolver(claims: Dict[str, Any]) -> User:
    """Custom user resolution for enterprise environments."""
    # Extract custom claims
    employee_id = claims.get("employee_id")
    tenant_id = claims.get("tenant_id")

    # Multi-tenant user lookup
    user = find_user_by_employee_id(employee_id, tenant_id)
    if user:
        # Set additional context
        user.mcp_tenant_id = tenant_id
        user.mcp_groups = claims.get("groups", [])
    return user

# Use custom resolver
MCP_USER_RESOLVER = custom_user_resolver
```
## Security Features
### Token Validation
Comprehensive JWT validation:
- **Signature verification**: RS256 with JWKS key rotation support
- **Expiration checking**: Automatic token expiry validation
- **Audience validation**: Prevents token reuse across services
- **Issuer validation**: Ensures tokens from trusted sources only
- **Scope validation**: Enforces tool-level permissions
### Request Security
- **HTTPS enforcement**: Production deployments should use HTTPS
- **Rate limiting**: Configurable per-user rate limits
- **Request logging**: All authenticated requests logged with user context
- **Input validation**: Comprehensive request schema validation
### Audit Logging
Every tool call is logged with security context:
```json
{
  "timestamp": "2024-01-25T14:30:00Z",
  "user_id": "user@company.com",
  "tool_name": "get_chart_data",
  "source": "mcp",
  "jwt_subject": "user@company.com",
  "jwt_scopes": ["chart:read", "analytics:export"],
  "tenant_id": "company_123",
  "request_id": "req_12345",
  "execution_time": 0.145,
  "status": "success"
}
```
## Testing Authentication
### Generate Test Tokens
For development and testing:
```python
from fastmcp.server.auth.providers.bearer import RSAKeyPair

# Generate test keypair
keypair = RSAKeyPair.generate()
print("Public key:", keypair.public_key)

# Create test token
token = keypair.create_token(
    subject="test@example.com",
    issuer="https://test.example.com",
    audience="superset-mcp-api",
    scopes=["dashboard:read", "chart:read", "dataset:read"],
    expires_in=3600,  # 1 hour
)
print("Test token:", token)
```
### Test Configuration
```python
# Test configuration with generated keypair
MCP_AUTH_ENABLED = True
MCP_JWT_PUBLIC_KEY = """-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA...
-----END PUBLIC KEY-----"""
MCP_JWT_ISSUER = "https://test.example.com"
MCP_JWT_AUDIENCE = "superset-mcp-api"
MCP_FALLBACK_USER = "admin"
```
### Manual Testing
```bash
# Test with curl
curl -X POST http://localhost:5008/call_tool \
  -H "Authorization: Bearer $TEST_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "get_superset_instance_info",
    "arguments": {"include_statistics": true}
  }'
```
## Troubleshooting
### Common Issues
**Token Validation Errors:**
```
Error: Invalid JWT signature
Solution: Verify JWKS_URI is accessible and contains correct keys
```
**User Not Found:**
```
Error: User not found for JWT subject
Solution: Check MCP_JWT_USER_CLAIM configuration and user exists in Superset
```
**Insufficient Scopes:**
```
Error: Missing required scope 'chart:read'
Solution: Update JWT token to include required scopes
```
### Debug Configuration
Enable debug logging for authentication issues:
```python
# Enhanced logging for auth debugging
import logging
logging.getLogger('superset.mcp_service.auth').setLevel(logging.DEBUG)
# Log all JWT validation steps
MCP_AUTH_DEBUG = True
```
This authentication guide provides comprehensive coverage for securing the MCP service in production environments while maintaining development flexibility.

---
title: Development Guide
sidebar_position: 2
version: 1
---
# MCP Service Development Guide
This guide covers the internal architecture, development workflows, and patterns for extending the Superset MCP service.
> 🚀 **New to MCP?** Start with the [Overview](./overview) to understand what the service does before diving into development.
>
> 📚 **Need API examples?** Check the [API Reference](./api-reference) to see how existing tools work.
>
> 🔐 **Planning production use?** Review [Authentication](./authentication) for security considerations.
## Internal Architecture
### Component Overview
The MCP service follows a layered architecture with clear separation of concerns:
```mermaid
graph TB
subgraph "Transport Layer"
HTTP[HTTP Server :5008]
FastMCP[FastMCP Protocol Handler]
end
subgraph "Auth & Middleware Layer"
AuthHook[Auth Hook Decorator]
JWT[JWT Validator]
RBAC[RBAC Engine]
Audit[Audit Logger]
end
subgraph "Tool Layer"
Tools[19 MCP Tools<br/>Tool Decorated]
Schemas[Pydantic Schemas]
Validation[Request Validation]
end
subgraph "Business Logic Layer"
Generic[Generic Tool Abstractions]
ModelList[ModelListTool]
ModelGet[ModelGetInfoTool]
ModelFilter[ModelGetAvailableFiltersTool]
end
subgraph "Data Access Layer"
DAOs[Superset DAOs]
Commands[Superset Commands]
Cache[Cache Manager]
end
subgraph "Storage Layer"
MetaDB[(Metadata DB)]
DataWH[(Data Warehouse)]
Redis[(Redis Cache)]
end
HTTP --> FastMCP
FastMCP --> AuthHook
AuthHook --> JWT
JWT --> RBAC
RBAC --> Audit
Audit --> Tools
Tools --> Schemas
Schemas --> Validation
Validation --> Generic
Generic --> ModelList
Generic --> ModelGet
Generic --> ModelFilter
ModelList --> DAOs
ModelGet --> DAOs
ModelFilter --> DAOs
Tools --> Commands
Commands --> Cache
DAOs --> MetaDB
Commands --> MetaDB
Commands --> DataWH
Cache --> Redis
```
### Request Flow
Every MCP tool call follows this execution pattern:
```mermaid
sequenceDiagram
participant Client as LLM Client
participant MCP as FastMCP Server
participant Auth as Auth Hook
participant Tool as MCP Tool
participant Generic as Generic Abstraction
participant DAO as Superset DAO
participant DB as Database
Client->>+MCP: tool_call(request)
MCP->>+Auth: validate_and_authorize()
Auth->>Auth: Validate JWT token
Auth->>Auth: Check required scopes
Auth->>Auth: Set Flask g.user context
Auth->>Auth: Log audit event
Auth->>+Tool: execute_tool(validated_request)
Tool->>Tool: Parse Pydantic request schema
Tool->>+Generic: Use generic abstraction
Generic->>+DAO: Query Superset data
DAO->>+DB: Execute SQL
DB-->>-DAO: Return results
DAO-->>-Generic: Return objects
Generic->>Generic: Apply pagination/filtering
Generic-->>-Tool: Return formatted data
Tool->>Tool: Build Pydantic response schema
Tool-->>-Auth: Return response
Auth->>Auth: Log success audit event
Auth-->>-MCP: Return validated response
MCP-->>-Client: JSON response
```
### Tool Registration System
Tools are automatically discovered and registered through the decorator pattern:
```python
# superset/mcp_service/mcp_app.py
from fastmcp import FastMCP

# Global MCP instance
mcp = FastMCP("Superset MCP Service")

# Tools register themselves via decorators
@mcp.tool
@mcp_auth_hook(['chart:read'])
def get_chart_info(request: GetChartInfoRequest) -> GetChartInfoResponse:
    # Tool implementation
    pass

# All tool modules imported to trigger registration
from superset.mcp_service.chart.tool import *
from superset.mcp_service.dashboard.tool import *
from superset.mcp_service.dataset.tool import *
from superset.mcp_service.system.tool import *
```
## Development Patterns
### Tool Implementation Pattern
All tools follow this standardized pattern:
```python
# Example: superset/mcp_service/chart/tool/get_chart_info.py
from superset.mcp_service.auth import mcp_auth_hook
from superset.mcp_service.mcp_app import mcp
from superset.mcp_service.schemas.chart_schemas import (
    GetChartInfoRequest,
    GetChartInfoResponse,
    ChartError,
)

@mcp.tool
@mcp_auth_hook(['chart:read'])
def get_chart_info(request: GetChartInfoRequest) -> GetChartInfoResponse:
    """
    Get detailed information about a specific chart.

    Supports lookup by ID or UUID with comprehensive metadata.
    """
    try:
        # CRITICAL: Import Superset modules inside function
        from superset.daos.chart import ChartDAO
        from superset.models.slice import Slice

        # Use generic abstraction for common operations
        from superset.mcp_service.generic_tools import ModelGetInfoTool

        tool = ModelGetInfoTool(
            dao=ChartDAO,
            model=Slice,
            response_schema=GetChartInfoResponse,
            identifier_field_map={
                'id': 'id',
                'uuid': 'uuid',
            },
        )
        return tool.execute(request)
    except Exception as e:
        return ChartError(
            error=f"Failed to get chart info: {str(e)}",
            error_type="ChartInfoError",
        )
```
### Schema Design Patterns
Pydantic schemas follow these conventions:
```python
# Request Schema Pattern
class GetChartInfoRequest(BaseModel):
    """Request to get detailed chart information."""

    identifier: Union[int, str] = Field(
        ...,
        description="Chart ID (numeric) or UUID (string)"
    )
    include_form_data: bool = Field(
        default=True,
        description="Whether to include chart configuration"
    )
    use_cache: bool = Field(
        default=True,
        description="Whether to use cached data"
    )

# Response Schema Pattern
class GetChartInfoResponse(BaseModel):
    """Detailed chart information response."""

    chart_id: int = Field(description="Chart numeric ID")
    uuid: Optional[str] = Field(description="Chart UUID")
    slice_name: str = Field(description="Chart display name")
    viz_type: str = Field(description="Visualization type")
    datasource_id: Optional[int] = Field(description="Dataset ID")
    form_data: Optional[Dict[str, Any]] = Field(description="Chart configuration")
    explore_url: Optional[str] = Field(description="Explore URL for editing")
    # Cache status for transparency
    cache_status: Optional[CacheStatus] = Field(description="Cache hit information")

# Error Schema Pattern
class ChartError(BaseModel):
    """Chart operation error response."""

    error: str = Field(description="Error message")
    error_type: str = Field(description="Error type identifier")
    suggestions: Optional[List[str]] = Field(description="Suggested fixes")
```
### Generic Tool Abstractions
Common operations are abstracted into reusable classes:
```python
# superset/mcp_service/generic_tools.py
from typing import Type, Dict, Any, List, Optional
from pydantic import BaseModel

class ModelListTool:
    """Generic tool for list operations with pagination and filtering."""

    def __init__(self,
                 dao: Type,
                 model: Type,
                 response_schema: Type[BaseModel],
                 default_columns: List[str] = None,
                 searchable_columns: List[str] = None):
        self.dao = dao
        self.model = model
        self.response_schema = response_schema
        self.default_columns = default_columns or []
        self.searchable_columns = searchable_columns or []

    def execute(self, request: BaseModel) -> BaseModel:
        """Execute list operation with pagination and filtering."""
        # Build query with filters
        query = self.dao.find_all()

        # Apply search if provided
        if hasattr(request, 'search') and request.search:
            query = self._apply_search(query, request.search)

        # Apply filters if provided
        if hasattr(request, 'filters') and request.filters:
            query = self._apply_filters(query, request.filters)

        # Apply pagination
        total = query.count()
        if hasattr(request, 'page') and hasattr(request, 'page_size'):
            offset = (request.page - 1) * request.page_size
            query = query.offset(offset).limit(request.page_size)

        # Execute query and serialize
        results = query.all()
        serialized = [self._serialize_model(obj) for obj in results]

        return self.response_schema(
            results=serialized,
            total_count=total,
            page=getattr(request, 'page', 1),
            page_size=getattr(request, 'page_size', len(serialized))
        )

class ModelGetInfoTool:
    """Generic tool for getting single object by multiple identifier types."""

    def __init__(self,
                 dao: Type,
                 model: Type,
                 response_schema: Type[BaseModel],
                 identifier_field_map: Dict[str, str]):
        self.dao = dao
        self.model = model
        self.response_schema = response_schema
        self.identifier_field_map = identifier_field_map

    def execute(self, request: BaseModel) -> BaseModel:
        """Execute get operation with multi-identifier support."""
        identifier = request.identifier

        # Dispatch on identifier type to the matching DAO lookup
        if isinstance(identifier, int):
            obj = self.dao.find_by_id(identifier)
        elif len(identifier) == 36 and '-' in identifier:  # UUID format
            obj = self.dao.find_by_uuid(identifier)
        else:  # Assume slug
            obj = getattr(self.dao, 'find_by_slug', lambda x: None)(identifier)

        if not obj:
            raise ValueError(f"Object not found with identifier: {identifier}")

        # Serialize and return
        serialized = self._serialize_model(obj)
        return self.response_schema(**serialized)
```
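The UUID/ID/slug dispatch above can be pulled into a small, independently testable helper. A sketch under the same assumptions (the `classify_identifier` name is ours, not part of the service):

```python
from uuid import UUID


def classify_identifier(identifier):
    """Classify a lookup identifier as 'id', 'uuid', or 'slug'.

    Mirrors ModelGetInfoTool's dispatch: ints are primary keys,
    well-formed UUID strings are UUIDs, other strings are slugs.
    """
    if isinstance(identifier, int):
        return "id"
    if isinstance(identifier, str):
        try:
            UUID(identifier)
            return "uuid"
        except ValueError:
            return "slug"
    raise TypeError(f"Unsupported identifier type: {type(identifier)}")
```

Parsing with `uuid.UUID` is stricter than the length-36 heuristic shown above; either is adequate for dispatch.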
## Adding New Tools
### Step-by-Step Process
1. **Define the Domain**
Choose the appropriate domain folder:
- `dashboard/` - Dashboard operations
- `chart/` - Chart operations
- `dataset/` - Dataset operations
- `system/` - System-level operations
2. **Create Schemas**
```bash
# Create schema file
touch superset/mcp_service/schemas/my_domain_schemas.py
```
```python
# Define request/response schemas
from typing import Any, Dict, Optional

from pydantic import BaseModel, Field


class MyToolRequest(BaseModel):
    param1: str = Field(description="Parameter description")
    param2: Optional[int] = Field(default=None, description="Optional parameter")


class MyToolResponse(BaseModel):
    result: str = Field(description="Result description")
    metadata: Dict[str, Any] = Field(description="Additional metadata")
```
3. **Implement the Tool**
```bash
# Create tool file
touch superset/mcp_service/my_domain/tool/my_tool.py
```
```python
from datetime import datetime


@mcp.tool
@mcp_auth_hook(['required:scope'])
def my_tool(request: MyToolRequest) -> MyToolResponse:
    """Tool description for LLM."""
    # Import Superset modules inside the function
    from superset.daos.my_dao import MyDAO

    # Implement business logic
    result = MyDAO.do_something(request.param1)

    return MyToolResponse(
        result=result,
        metadata={"processed_at": datetime.utcnow()},
    )
```
4. **Register the Tool**
```python
# Add to superset/mcp_service/my_domain/tool/__init__.py
from .my_tool import my_tool
__all__ = ['my_tool']
```
```python
# Import in superset/mcp_service/mcp_app.py
from superset.mcp_service.my_domain.tool import *
```
5. **Add Tests**
```bash
# Create test file
touch tests/unit_tests/mcp_service/test_my_tool.py
```
```python
import pytest

from superset.mcp_service.my_domain.tool.my_tool import my_tool
from superset.mcp_service.schemas.my_domain_schemas import MyToolRequest


class TestMyTool:
    def test_my_tool_success(self):
        request = MyToolRequest(param1="test")
        response = my_tool(request)
        assert response.result == "expected_result"
```
### Tool Best Practices
1. **Import Inside Functions**
```python
# ❌ DON'T: Import at module level
from superset.daos.chart import ChartDAO

@mcp.tool
def my_tool():
    # Tool implementation
    pass


# ✅ DO: Import inside the function
@mcp.tool
def my_tool():
    from superset.daos.chart import ChartDAO
    # Tool implementation
    pass
```
2. **Use Generic Abstractions**
```python
# ✅ Leverage existing patterns
@mcp.tool
def list_my_objects(request):
    from superset.mcp_service.generic_tools import ModelListTool

    tool = ModelListTool(
        dao=MyDAO,
        model=MyModel,
        response_schema=ListMyObjectsResponse,
    )
    return tool.execute(request)
```
3. **Comprehensive Error Handling**
```python
@mcp.tool
def my_tool(request):
    try:
        # Tool implementation
        return success_response
    except PermissionError:
        return MyToolError(
            error="Permission denied",
            error_type="PermissionError",
            suggestions=["Check user permissions"],
        )
    except Exception as e:
        return MyToolError(
            error=f"Unexpected error: {str(e)}",
            error_type="InternalError",
        )
```
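This try/except shape repeats in every tool, so it could be factored into a decorator. A sketch only — `tool_error_boundary` and the error-class keyword arguments are assumptions, not existing service API:

```python
import functools


def tool_error_boundary(error_cls):
    """Turn exceptions raised by a tool into structured error responses.

    error_cls is any callable accepting error/error_type/suggestions
    keyword arguments (e.g. a Pydantic error schema, or dict in tests).
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except PermissionError:
                return error_cls(
                    error="Permission denied",
                    error_type="PermissionError",
                    suggestions=["Check user permissions"],
                )
            except Exception as exc:
                return error_cls(
                    error=f"Unexpected error: {exc}",
                    error_type="InternalError",
                    suggestions=[],
                )
        return wrapper
    return decorator
```

A tool would then wrap its body once with `@tool_error_boundary(MyToolError)` instead of repeating the except clauses.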
## Testing Patterns
### Unit Test Structure
```python
# tests/unit_tests/mcp_service/test_chart_tools.py
import pytest
from unittest.mock import Mock, patch

from superset.mcp_service.chart.tool.get_chart_info import get_chart_info
from superset.mcp_service.schemas.chart_schemas import GetChartInfoRequest


class TestGetChartInfo:
    """Test suite for the get_chart_info tool."""

    @patch('superset.mcp_service.chart.tool.get_chart_info.ChartDAO')
    def test_get_chart_info_by_id_success(self, mock_dao):
        """Test successful chart lookup by ID."""
        # Set up the mock
        mock_chart = Mock()
        mock_chart.id = 1
        mock_chart.slice_name = "Test Chart"
        mock_chart.viz_type = "line"
        mock_dao.find_by_id.return_value = mock_chart

        # Execute
        request = GetChartInfoRequest(identifier=1)
        response = get_chart_info(request)

        # Verify
        assert response.chart_id == 1
        assert response.slice_name == "Test Chart"
        mock_dao.find_by_id.assert_called_once_with(1)

    @patch('superset.mcp_service.chart.tool.get_chart_info.ChartDAO')
    def test_get_chart_info_not_found(self, mock_dao):
        """Test the chart-not-found scenario."""
        # Set up the mock
        mock_dao.find_by_id.return_value = None

        # Execute
        request = GetChartInfoRequest(identifier=999)
        response = get_chart_info(request)

        # Verify the error response
        assert hasattr(response, 'error')
        assert "not found" in response.error.lower()
```
### Integration Test Patterns
```python
# tests/integration_tests/mcp_service/test_chart_integration.py
import pytest

from superset.app import create_app
from superset.mcp_service.mcp_app import mcp
from tests.integration_tests.base_tests import SupersetTestCase


class TestChartIntegration(SupersetTestCase):
    """Integration tests for chart tools."""

    def setUp(self):
        super().setUp()
        self.app = create_app()
        self.app_context = self.app.app_context()
        self.app_context.push()

    def tearDown(self):
        self.app_context.pop()
        super().tearDown()

    def test_chart_workflow_integration(self):
        """Test the complete chart workflow."""
        # Create a chart
        create_request = {
            "dataset_id": "1",
            "config": {
                "chart_type": "table",
                "columns": [{"name": "region"}]
            }
        }
        create_response = mcp.call_tool("generate_chart", create_request)
        chart_id = create_response["chart_id"]

        # Get chart info
        info_request = {"identifier": chart_id}
        info_response = mcp.call_tool("get_chart_info", info_request)
        assert info_response["chart_id"] == chart_id
        assert info_response["viz_type"] == "table"

        # Get chart data
        data_request = {"identifier": chart_id, "limit": 10}
        data_response = mcp.call_tool("get_chart_data", data_request)
        assert "data" in data_response
        assert len(data_response["data"]) <= 10
```
## Performance Considerations
### Caching Strategy
The MCP service leverages Superset's existing cache layers:
```python
# Cache control in tools
@mcp.tool
def get_chart_data(request: GetChartDataRequest):
    """Tool with cache control."""
    cache_config = {
        'use_cache': request.use_cache,
        'force_refresh': request.force_refresh,
        'cache_timeout': request.cache_timeout
    }

    # Use Superset's cache infrastructure
    result = execute_with_cache(query, cache_config)

    return ChartDataResponse(
        data=result.data,
        cache_status=result.cache_status
    )
```
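A cache layer like this hinges on deriving a deterministic key from the query plus cache settings. A stdlib sketch of one way to do it — the `cache_key` helper is illustrative only; the real service defers to Superset's cache infrastructure:

```python
import hashlib
import json


def cache_key(query: dict, cache_config: dict) -> str:
    """Derive a stable cache key from a query and its cache settings.

    sort_keys=True makes the JSON serialization deterministic, so
    logically identical queries always hash to the same key.
    """
    payload = json.dumps(
        {"query": query, "timeout": cache_config.get("cache_timeout")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()
```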
### Query Optimization
```python
# Efficient pagination
from sqlalchemy.orm import load_only


def list_objects(query, page, page_size):
    """Optimized pagination pattern."""
    # Count before narrowing columns
    total = query.count()

    # Limit columns for list operations
    query = query.options(load_only('id', 'name', 'created_on'))

    # Apply pagination
    offset = (page - 1) * page_size
    results = query.offset(offset).limit(page_size).all()

    return results, total
```
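The offset arithmetic is a classic off-by-one spot; isolating it makes it trivially testable. A sketch assuming 1-indexed pages (the helper name is ours):

```python
def page_bounds(page: int, page_size: int):
    """Return (offset, limit) for a 1-indexed page number."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    return (page - 1) * page_size, page_size
```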
## Security Considerations
### Authentication Flow
```python
# JWT validation and user context
from flask import g


@mcp_auth_hook(['chart:read'])
def secure_tool(request):
    """Tool with proper security context."""
    # g.user is set by the auth hook
    user_id = g.user.id

    # Apply user-specific filtering
    query = ChartDAO.find_all().filter(
        Chart.owners.contains(g.user)
    )
    return execute_query(query)
```
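Behind `mcp_auth_hook`, scope enforcement reduces to a subset check between the scopes a token grants and the scopes a tool requires. A simplified sketch, not the hook's actual implementation:

```python
def has_required_scopes(granted, required):
    """True if every required scope is granted.

    granted may be a space-delimited string (as in a JWT 'scope'
    claim) or any iterable of scope names.
    """
    granted_set = set(granted.split()) if isinstance(granted, str) else set(granted)
    return set(required).issubset(granted_set)
```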
### Input Validation
```python
# Comprehensive request validation
from typing import Union

from pydantic import BaseModel, Field, validator


class CreateChartRequest(BaseModel):
    """Validated chart creation request."""

    dataset_id: Union[int, str] = Field(
        ...,
        description="Dataset ID or UUID"
    )
    config: ChartConfig = Field(
        ...,
        description="Chart configuration"
    )

    @validator('dataset_id')
    def validate_dataset_id(cls, v):
        """Validate that the dataset exists and the user has access."""
        # Validation logic
        return v

    @validator('config')
    def validate_chart_config(cls, v):
        """Validate the chart configuration."""
        # Configuration validation
        return v
```
This guide has covered the MCP service's internal architecture and development patterns so that contributors can extend and maintain the system.
## Related Documentation
### 📚 **Ready to Use Your New Tools?**
Test your implementations with examples from the [API Reference](./api-reference).
### 🔐 **Securing Your Extensions?**
Add authentication to your tools using the [Authentication Guide](./authentication).
### 🏗️ **Understanding the Big Picture?**
See the complete system design in the [Architecture Overview](./architecture).
### 🏢 **Building Enterprise Features?**
Explore advanced patterns in the [Preset Integration Guide](./preset-integration).
> 📖 **Back to Documentation Index**: [MCP Service](./intro)

View File

@@ -0,0 +1,124 @@
---
title: MCP Service
sidebar_position: 1
version: 1
---
# Superset MCP Service
The Superset Model Context Protocol (MCP) service provides programmatic access to Superset dashboards, charts, datasets, and instance metadata. Built for LLM agents and automation tools.
## What is MCP?
The Model Context Protocol (MCP) is an open standard that allows AI assistants to securely connect to data sources and tools. Superset's MCP service exposes **16 production-ready tools** that enable:
- 📊 **Data Exploration**: List and query dashboards, charts, and datasets
- 🔧 **Chart Creation**: Generate visualizations programmatically
- 📈 **Data Export**: Extract data in multiple formats (JSON, CSV, Excel)
- 🔗 **Navigation**: Generate explore links and SQL Lab sessions
## Quick Start
### Installation
:::note
The MCP service is included with Superset development setup. FastMCP dependencies are installed automatically with `make install`.
:::
```bash
# MCP service is included with Superset development setup
git clone https://github.com/apache/superset.git
cd superset
make venv && source venv/bin/activate
make install
# Start Superset
superset run -p 8088 --with-threads --reload --debugger
# Start MCP service (separate terminal)
source venv/bin/activate
superset mcp run --port 5008 --debug
```
### Claude Desktop Integration
```json
{
  "mcpServers": {
    "Superset MCP": {
      "command": "/path/to/superset/superset/mcp_service/run_proxy.sh",
      "args": [],
      "env": {}
    }
  }
}
```
## Key Features
### 🔧 **16 Production Tools**
| Category | Tools | Purpose |
|----------|-------|---------|
| **Dashboard** (5) | List, get info, create, add charts | Dashboard management |
| **Chart** (8) | Full CRUD, data export, previews | Chart operations |
| **Dataset** (3) | List, get info, discover filters | Dataset exploration |
| **System** (2) | Instance info, explore links | System integration |
| **SQL Lab** (1) | Pre-configured sessions | SQL development |
### 🔐 **Enterprise Security**
- **JWT Bearer Authentication**: Production-ready with configurable factory pattern
- **RBAC Integration**: Scope-based permissions with Superset's security model
- **Audit Logging**: Comprehensive MCP context tracking
### 📊 **Advanced Capabilities**
- **Multi-format Export**: JSON, CSV, Excel data export
- **Chart Previews**: Screenshots, ASCII art, table representations
- **Cache Control**: Leverage Superset's existing cache infrastructure
- **Request Schemas**: Eliminates LLM parameter validation issues
## Example Usage
```python
# List dashboards
dashboards = client.call_tool("list_dashboards", {
    "search": "sales",
    "page_size": 10
})

# Create a chart
chart = client.call_tool("generate_chart", {
    "dataset_id": "1",
    "config": {
        "chart_type": "line",
        "x": {"name": "date"},
        "y": [{"name": "revenue", "aggregate": "SUM"}]
    }
})

# Export chart data
data = client.call_tool("get_chart_data", {
    "identifier": chart["chart_id"],
    "format": "json",
    "limit": 1000
})
```
## Status
✅ **Phase 1 Complete** - Core functionality stable, authentication production-ready, comprehensive testing coverage.
## Documentation Structure
### Getting Started
- **[Overview](./overview)** - Features, use cases, and examples
- **[API Reference](./api-reference)** - Complete tool documentation
### Development
- **[Development Guide](./development)** - Internal architecture and adding tools
- **[Architecture](./architecture)** - System design and patterns
### Production
- **[Authentication](./authentication)** - JWT setup and security
- **[Preset Integration](./preset-integration)** - Enterprise features
> 🚀 **Ready to start?** Continue with the [Overview](./overview) for detailed examples and use cases.

View File

@@ -0,0 +1,196 @@
---
title: MCP Service Overview
sidebar_position: 1
version: 1
---
# Superset MCP Service
The Superset Model Context Protocol (MCP) service provides a modular, schema-driven interface for programmatic access to Superset dashboards, charts, datasets, and instance metadata. Built on FastMCP, it's designed for LLM agents and automation tools.
**Status:** ✅ Phase 1 Complete. Core functionality stable, authentication production-ready, comprehensive testing coverage.
## What is MCP?
The Model Context Protocol (MCP) is an open standard for connecting AI assistants to data sources and tools. Superset's MCP service exposes 16 tools that allow LLM agents to:
- **Explore data**: List and query dashboards, charts, and datasets
- **Create visualizations**: Generate charts and dashboards programmatically
- **Export data**: Extract chart data in multiple formats
- **Navigate interfaces**: Generate explore links and SQL Lab sessions
## Key Features
### 🔧 **16 Production-Ready Tools**
- **Dashboard Tools (5)**: List, get info, create dashboards, add charts
- **Chart Tools (8)**: Full CRUD operations, data export, screenshot previews
- **Dataset Tools (3)**: List, get info, discover filterable columns
- **System Tools (2)**: Instance info, explore link generation
- **SQL Lab Tools (1)**: Pre-configured SQL sessions
### 🔐 **Enterprise Authentication**
- **JWT Bearer Authentication**: Production-ready with configurable factory pattern
- **RBAC Integration**: Scope-based permissions with Superset's security model
- **Audit Logging**: Comprehensive MCP context tracking with impersonation support
### 📊 **Advanced Capabilities**
- **Multi-format Export**: JSON, CSV, Excel data export
- **Chart Previews**: Screenshots, ASCII art, and table representations
- **Cache Control**: Comprehensive control over Superset's cache layers
- **Request Schema Pattern**: Eliminates LLM parameter validation issues
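The "request schema pattern" means each tool takes one validated object rather than loose keyword arguments, so a malformed call fails fast with a single clear error the LLM can act on. A stdlib sketch of the idea (the service itself uses Pydantic models; the field names here are illustrative):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ListDashboardsRequest:
    """Single validated object carrying all tool parameters."""
    search: Optional[str] = None
    page: int = 1
    page_size: int = 20

    def __post_init__(self):
        # Reject out-of-range values before the tool runs, so the
        # caller gets one clear validation error instead of a
        # downstream failure.
        if self.page < 1:
            raise ValueError("page must be >= 1")
        if not 1 <= self.page_size <= 100:
            raise ValueError("page_size must be between 1 and 100")
```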
## Architecture Overview
```mermaid
graph TB
subgraph "Client Layer"
LLM[LLM/Agent Client]
Claude[Claude Desktop]
SDK[Custom SDK]
end
subgraph "MCP Service Layer"
FastMCP[FastMCP Server<br/>Port 5008]
Auth[JWT Auth Hook]
Tools[16 MCP Tools]
end
subgraph "Superset Integration"
DAOs[Superset DAOs]
Commands[Superset Commands]
Cache[Cache Layer]
end
subgraph "Data Layer"
DB[(Superset Database)]
DataWarehouse[(Data Warehouse)]
end
LLM --> FastMCP
Claude --> FastMCP
SDK --> FastMCP
FastMCP --> Auth
Auth --> Tools
Tools --> DAOs
Tools --> Commands
Tools --> Cache
DAOs --> DB
Commands --> DB
Commands --> DataWarehouse
```
## Getting Started
### Quick Setup
```bash
# Clone and install Superset
git clone https://github.com/apache/superset.git
cd superset
make venv && source venv/bin/activate
make install
# Start Superset
superset run -p 8088 --with-threads --reload --debugger
# Start MCP service (in separate terminal)
source venv/bin/activate
superset mcp run --port 5008 --debug
```
### Connect to Claude Desktop
:::note
The MCP service runs on HTTP and requires a proxy for Claude Desktop integration.
:::
```bash
# Install FastMCP proxy
pip install fastmcp
```
Configure Claude Desktop (`~/.config/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "Superset MCP": {
      "command": "/path/to/superset/superset/mcp_service/run_proxy.sh",
      "args": [],
      "env": {}
    }
  }
}
```
## Use Cases
### Data Exploration
- "List all dashboards related to sales"
- "Show me the charts in the Q4 Performance dashboard"
- "What datasets are available for customer analysis?"
### Chart Creation
- "Create a line chart showing revenue trends by month"
- "Generate a table showing top 10 products by sales"
- "Build a bar chart comparing regional performance"
### Data Export
- "Export the sales data from this chart as CSV"
- "Get the underlying data for this dashboard as JSON"
- "Show me a preview of this chart as ASCII art"
### Dashboard Management
- "Create a new dashboard with these 4 charts"
- "Add this revenue chart to the executive dashboard"
- "Generate an explore link for this chart configuration"
## Example Workflow
```python
# List available dashboards
dashboards = client.call_tool("list_dashboards", {
    "search": "sales",
    "page_size": 10
})

# Get detailed dashboard info
dashboard = client.call_tool("get_dashboard_info", {
    "identifier": dashboards["dashboards"][0]["id"]
})

# Create a new chart
chart = client.call_tool("generate_chart", {
    "dataset_id": "1",
    "config": {
        "chart_type": "line",
        "x": {"name": "date"},
        "y": [{"name": "revenue", "aggregate": "SUM"}]
    }
})

# Export chart data
data = client.call_tool("get_chart_data", {
    "identifier": chart["chart_id"],
    "format": "json",
    "limit": 1000
})
```
## Next Steps
### Ready to Use MCP?
- **[📚 API Reference](./api-reference)** - Try all 16 tools with request/response examples
- **[🔐 Authentication](./authentication)** - Set up JWT security for production use
### Want to Extend MCP?
- **[🔧 Development Guide](./development)** - Learn internal architecture and add new tools
- **[🏗️ Architecture](./architecture)** - Understand system design and deployment patterns
### Enterprise Deployment?
- **[🏢 Preset Integration](./preset-integration)** - RBAC extensions and OIDC integration for enterprise
> 💡 **Getting started?** Return to the [MCP Service intro](./intro) for a complete overview.

View File

@@ -0,0 +1,483 @@
---
title: Preset.io Integration
sidebar_position: 6
version: 1
---
# Preset.io Integration Guide
This document outlines integration points for the Preset.io team to extend the Superset MCP service with enterprise features, RBAC customizations, and OIDC integration.
## RBAC Extension Points
### Custom Authorization Factory
The MCP service supports custom authorization logic through the factory pattern:
```python
# In preset_config.py or superset_config.py
def create_preset_mcp_auth(app):
    """Custom auth factory for Preset.io environments."""
    from preset.auth.mcp import PresetMCPAuthProvider

    return PresetMCPAuthProvider(
        jwks_uri=app.config["PRESET_JWKS_URI"],
        issuer=app.config["PRESET_JWT_ISSUER"],
        audience=app.config["PRESET_JWT_AUDIENCE"],
        tenant_resolver=preset_tenant_resolver,
        rbac_manager=app.security_manager,
    )

MCP_AUTH_FACTORY = create_preset_mcp_auth
```
### Multi-Tenant RBAC
Extend the base auth hook for tenant-aware permissions:
```python
# preset/mcp/auth.py
from functools import wraps

from flask import g

from superset.mcp_service.auth import mcp_auth_hook


def preset_tenant_auth_hook(required_permissions=None):
    """Preset-specific auth hook with tenant isolation."""
    def decorator(func):
        @wraps(func)
        @mcp_auth_hook(required_permissions)
        def wrapper(*args, **kwargs):
            # Extract the tenant from JWT claims
            tenant_id = getattr(g.user, 'tenant_id', None)

            # Inject tenant context
            g.mcp_tenant_id = tenant_id
            g.mcp_tenant_context = get_tenant_context(tenant_id)

            return func(*args, **kwargs)
        return wrapper
    return decorator
```
### Custom Permission Scopes
Define Preset-specific permission scopes:
```python
# preset/mcp/permissions.py
from typing import List

PRESET_MCP_SCOPES = {
    # Tenant-level permissions
    "tenant:admin": "Full tenant administration",
    "tenant:read": "Read tenant resources",

    # Workspace-level permissions
    "workspace:admin": "Full workspace administration",
    "workspace:read": "Read workspace resources",

    # Enhanced dashboard permissions
    "dashboard:publish": "Publish dashboards to marketplace",
    "dashboard:embed": "Generate embed tokens",

    # Enhanced chart permissions
    "chart:export": "Export chart data and configs",
    "chart:alerts": "Manage chart alerts and notifications",

    # Dataset permissions with row-level security
    "dataset:rls": "Apply row-level security filters",
    "dataset:pii": "Access PII-flagged columns",
}


def get_preset_required_scopes(tool_name: str, context: dict = None) -> List[str]:
    """Map tool calls to Preset-specific permission requirements."""
    base_scopes = get_base_required_scopes(tool_name)

    # Add tenant-aware scopes
    if context and context.get('tenant_id'):
        base_scopes.append(f"tenant:{context['tenant_id']}")

    # Add workspace-aware scopes
    if context and context.get('workspace_id'):
        base_scopes.append(f"workspace:{context['workspace_id']}")

    return base_scopes
```
### Row-Level Security Integration
Extend data access tools with RLS:
```python
# preset/mcp/rls.py
def apply_preset_rls_filters(query_context: dict, user_context: dict) -> dict:
    """Apply Preset row-level security filters to a query context."""
    # Get the user's RLS rules from Preset metadata
    rls_rules = get_user_rls_rules(
        user_id=user_context['user_id'],
        tenant_id=user_context['tenant_id'],
        workspace_id=user_context.get('workspace_id')
    )

    # Apply RLS filters to the query
    for rule in rls_rules:
        if rule.applies_to_dataset(query_context['datasource']['id']):
            query_context = rule.apply_filter(query_context)

    return query_context


# Usage in custom tools
@mcp.tool
@preset_tenant_auth_hook(['dataset:read', 'dataset:rls'])
def preset_get_chart_data(request: GetChartDataRequest) -> ChartDataResponse:
    """Get chart data with Preset RLS applied."""
    # Apply RLS before executing the query
    query_context = build_query_context(request)
    query_context = apply_preset_rls_filters(
        query_context,
        {'user_id': g.user.id, 'tenant_id': g.mcp_tenant_id}
    )
    return execute_chart_data_query(query_context)
```
## OIDC Integration Points
### Preset OIDC Provider
Custom OIDC integration for Preset environments:
```python
# preset/mcp/oidc.py
from typing import Any, Dict

import requests

from superset.mcp_service.auth.providers.bearer import BearerAuthProvider


class PresetOIDCAuthProvider(BearerAuthProvider):
    """OIDC-specific auth provider for Preset.io."""

    def __init__(
        self,
        oidc_discovery_url: str,
        client_id: str,
        client_secret: str = None,
        **kwargs,
    ):
        # Discover OIDC endpoints
        self.discovery_doc = self._fetch_discovery_document(oidc_discovery_url)
        super().__init__(
            jwks_uri=self.discovery_doc['jwks_uri'],
            issuer=self.discovery_doc['issuer'],
            **kwargs,
        )
        self.client_id = client_id
        self.client_secret = client_secret

    def _fetch_discovery_document(self, discovery_url: str) -> Dict[str, Any]:
        """Fetch the OIDC discovery document."""
        response = requests.get(discovery_url, timeout=10)
        response.raise_for_status()
        return response.json()

    def validate_token(self, token: str) -> Dict[str, Any]:
        """Validate a JWT with OIDC-specific claims."""
        claims = super().validate_token(token)

        # Validate OIDC-specific claims
        if claims.get('aud') != self.client_id:
            raise ValueError("Invalid audience claim")

        # Extract Preset-specific claims
        claims['preset_tenant_id'] = claims.get('tenant_id')
        claims['preset_workspace_id'] = claims.get('workspace_id')
        claims['preset_roles'] = claims.get('roles', [])

        return claims

    def resolve_user(self, claims: Dict[str, Any]) -> Any:
        """Resolve a Superset user from OIDC claims."""
        from preset.auth.user_resolver import resolve_preset_user

        return resolve_preset_user(
            subject=claims['sub'],
            email=claims.get('email'),
            tenant_id=claims.get('preset_tenant_id'),
            roles=claims.get('preset_roles', [])
        )
```
### Configuration for OIDC
```python
# In preset_config.py
import os


def create_preset_oidc_auth(app):
    """Factory for Preset OIDC authentication."""
    from preset.mcp.oidc import PresetOIDCAuthProvider

    return PresetOIDCAuthProvider(
        oidc_discovery_url=app.config["PRESET_OIDC_DISCOVERY_URL"],
        client_id=app.config["PRESET_OIDC_CLIENT_ID"],
        client_secret=app.config["PRESET_OIDC_CLIENT_SECRET"],
        audience=app.config["PRESET_MCP_AUDIENCE"],
        required_scopes=app.config.get("PRESET_MCP_REQUIRED_SCOPES", [])
    )

# MCP Configuration
MCP_AUTH_ENABLED = True
MCP_AUTH_FACTORY = create_preset_oidc_auth

# OIDC Configuration
PRESET_OIDC_DISCOVERY_URL = "https://auth.preset.io/.well-known/openid-configuration"
PRESET_OIDC_CLIENT_ID = "preset-mcp-service"
PRESET_OIDC_CLIENT_SECRET = os.environ.get("PRESET_OIDC_CLIENT_SECRET")
PRESET_MCP_AUDIENCE = "preset-superset-mcp"
PRESET_MCP_REQUIRED_SCOPES = [
    "openid", "profile", "email",
    "superset:read", "superset:write",
]
```
## Preset-Specific Tools
### Tenant Management Tools
```python
# preset/mcp/tools/tenant.py
@mcp.tool
@preset_tenant_auth_hook(['tenant:read'])
def get_tenant_info(request: GetTenantInfoRequest) -> TenantInfoResponse:
    """Get Preset tenant information and quotas."""
    tenant_id = g.mcp_tenant_id
    tenant = get_tenant_by_id(tenant_id)

    return TenantInfoResponse(
        tenant_id=tenant.id,
        name=tenant.name,
        plan=tenant.plan,
        quotas=tenant.quotas,
        usage=get_tenant_usage(tenant_id),
        workspaces=list_tenant_workspaces(tenant_id)
    )


@mcp.tool
@preset_tenant_auth_hook(['workspace:read'])
def list_workspace_assets(request: ListWorkspaceAssetsRequest) -> ListWorkspaceAssetsResponse:
    """List all assets in a Preset workspace."""
    workspace_id = request.workspace_id
    tenant_id = g.mcp_tenant_id

    # Validate that the workspace belongs to the tenant
    validate_workspace_access(workspace_id, tenant_id)

    assets = {
        'dashboards': list_workspace_dashboards(workspace_id),
        'charts': list_workspace_charts(workspace_id),
        'datasets': list_workspace_datasets(workspace_id)
    }

    return ListWorkspaceAssetsResponse(
        workspace_id=workspace_id,
        assets=assets,
        total_count=sum(len(v) for v in assets.values())
    )
```
### Embed Token Generation
```python
# preset/mcp/tools/embed.py
@mcp.tool
@preset_tenant_auth_hook(['dashboard:embed'])
def generate_embed_token(request: GenerateEmbedTokenRequest) -> EmbedTokenResponse:
    """Generate a secure embed token for a dashboard or chart."""
    # Validate resource access
    resource = validate_embed_resource_access(
        resource_type=request.resource_type,
        resource_id=request.resource_id,
        tenant_id=g.mcp_tenant_id
    )

    # Generate a signed embed token
    embed_token = create_embed_token(
        resource=resource,
        user_id=g.user.id,
        tenant_id=g.mcp_tenant_id,
        permissions=request.permissions,
        expiry=request.expiry_hours
    )

    return EmbedTokenResponse(
        embed_token=embed_token.token,
        embed_url=f"{get_preset_base_url()}/embed/{embed_token.token}",
        expires_at=embed_token.expires_at
    )
```
## Audit and Compliance Extensions
### Enhanced Audit Logging
```python
# preset/mcp/audit.py
from superset.mcp_service.auth import get_audit_context


def create_preset_audit_context(user_context: dict, tool_name: str,
                                request_data: dict) -> dict:
    """Create a Preset-specific audit context."""
    base_context = get_audit_context(user_context, tool_name, request_data)

    # Add Preset-specific fields
    preset_context = {
        **base_context,
        'tenant_id': user_context.get('tenant_id'),
        'workspace_id': user_context.get('workspace_id'),
        'preset_user_role': user_context.get('preset_role'),
        'data_classification': classify_request_data(request_data),
        'compliance_flags': get_compliance_flags(tool_name, request_data)
    }
    return preset_context


def log_preset_mcp_access(audit_context: dict):
    """Log MCP access to Preset audit systems."""
    # Log to Superset's audit system
    log_superset_audit_event(audit_context)

    # Log to Preset's compliance system
    log_preset_compliance_event(audit_context)

    # Log to an external SIEM if configured
    if app.config.get('PRESET_SIEM_ENABLED'):
        log_to_siem(audit_context)
```
### Data Classification
```python
# preset/mcp/classification.py
def classify_request_data(request_data: dict) -> dict:
    """Classify data sensitivity in MCP requests."""
    classification = {
        'contains_pii': False,
        'data_level': 'public',
        'retention_policy': 'standard'
    }

    # Check for PII in the request
    if contains_pii_fields(request_data):
        classification['contains_pii'] = True
        classification['data_level'] = 'restricted'
        classification['retention_policy'] = 'pii_compliant'

    # Check for sensitive datasets
    if references_sensitive_datasets(request_data):
        classification['data_level'] = 'confidential'

    return classification
```
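`contains_pii_fields` is left abstract above; a naive key-name scan conveys the idea. A sketch only — production classification would consult dataset and column metadata rather than key names:

```python
PII_FIELD_NAMES = {"email", "ssn", "phone", "date_of_birth", "full_name"}


def contains_pii_fields(request_data: dict) -> bool:
    """Recursively scan a request payload for PII-looking keys."""
    for key, value in request_data.items():
        if key.lower() in PII_FIELD_NAMES:
            return True
        if isinstance(value, dict) and contains_pii_fields(value):
            return True
    return False
```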
## Deployment Considerations
### Multi-Region Deployment
```python
# preset/mcp/deployment.py
import os


def get_region_specific_config():
    """Get region-specific MCP configuration."""
    region = os.environ.get('PRESET_REGION', 'us-east-1')

    config_map = {
        'us-east-1': {
            'jwks_uri': 'https://auth-us.preset.io/.well-known/jwks.json',
            'base_url': 'https://app.preset.io',
            'data_residency': 'US'
        },
        'eu-west-1': {
            'jwks_uri': 'https://auth-eu.preset.io/.well-known/jwks.json',
            'base_url': 'https://eu.preset.io',
            'data_residency': 'EU'
        }
    }
    return config_map.get(region, config_map['us-east-1'])


# Usage in config
region_config = get_region_specific_config()
PRESET_JWKS_URI = region_config['jwks_uri']
SUPERSET_WEBSERVER_ADDRESS = region_config['base_url']
```
### Health Check Extensions
```python
# preset/mcp/health.py
import os


@mcp.tool
def preset_health_check() -> HealthCheckResponse:
    """Preset-specific health check for the MCP service."""
    checks = {
        'mcp_service': check_mcp_service_health(),
        'database': check_database_health(),
        'auth_provider': check_auth_provider_health(),
        'tenant_isolation': check_tenant_isolation(),
        'rls_engine': check_rls_engine_health()
    }

    overall_status = 'healthy' if all(
        check['status'] == 'healthy' for check in checks.values()
    ) else 'degraded'

    return HealthCheckResponse(
        status=overall_status,
        checks=checks,
        region=os.environ.get('PRESET_REGION'),
        version=get_preset_mcp_version()
    )
```
## Configuration Templates
### Production Configuration
```python
# preset_production_config.py
from preset.mcp.auth import create_preset_oidc_auth
from preset.mcp.audit import create_preset_audit_context

# MCP Service Configuration
MCP_AUTH_ENABLED = True
MCP_AUTH_FACTORY = create_preset_oidc_auth
MCP_AUDIT_CONTEXT_FACTORY = create_preset_audit_context

# Preset OIDC Configuration
PRESET_OIDC_DISCOVERY_URL = "https://auth.preset.io/.well-known/openid-configuration"
PRESET_OIDC_CLIENT_ID = "preset-mcp-production"
PRESET_MCP_AUDIENCE = "preset-superset-mcp"

# Security Configuration
PRESET_MCP_REQUIRED_SCOPES = [
    "openid", "profile", "email",
    "tenant:read", "workspace:read",
    "dashboard:read", "chart:read", "dataset:read",
]

# Audit Configuration
PRESET_AUDIT_ENABLED = True
PRESET_SIEM_ENABLED = True
PRESET_COMPLIANCE_MODE = "SOC2"

# Performance Configuration
PRESET_MCP_CACHE_ENABLED = True
PRESET_MCP_RATE_LIMIT = "1000/hour"
PRESET_MCP_TIMEOUT = 30
```
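`PRESET_MCP_RATE_LIMIT` uses a `count/period` string. Parsing it into a numeric window is straightforward; a sketch (the parser and period table are ours, not Preset API):

```python
_PERIOD_SECONDS = {"second": 1, "minute": 60, "hour": 3600, "day": 86400}


def parse_rate_limit(spec: str):
    """Parse a 'count/period' rate-limit string into (count, window_seconds)."""
    count_str, _, period = spec.partition("/")
    count = int(count_str)
    if period not in _PERIOD_SECONDS:
        raise ValueError(f"Unknown rate-limit period: {period!r}")
    return count, _PERIOD_SECONDS[period]
```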
This integration guide provides the Preset.io team with concrete extension points for implementing enterprise features while maintaining compatibility with the base MCP service architecture.

View File

@@ -65,5 +65,6 @@
"last 1 firefox version",
"last 1 safari version"
]
}
},
"packageManager": "yarn@1.22.22+sha1.ac34549e6aa8e7ead463a7407e1c7390f61a6610"
}

View File

@@ -87,6 +87,16 @@ const sidebars = {
},
],
},
{
type: 'category',
label: 'MCP Service',
items: [
{
type: 'autogenerated',
dirName: 'mcp-service',
},
],
},
{
type: 'doc',
label: 'FAQ',

View File

@@ -4320,12 +4320,12 @@ available-typed-arrays@^1.0.7:
possible-typed-array-names "^1.0.0"
axios@^1.9.0:
version "1.10.0"
resolved "https://registry.yarnpkg.com/axios/-/axios-1.10.0.tgz#af320aee8632eaf2a400b6a1979fa75856f38d54"
integrity sha512-/1xYAC4MP/HEG+3duIhFr4ZQXR4sQXOIe+o6sdqzeykGLx6Upp/1p8MHqhINOvGeP7xyNHe7tsiJByc4SSVUxw==
version "1.11.0"
resolved "https://registry.yarnpkg.com/axios/-/axios-1.11.0.tgz#c2ec219e35e414c025b2095e8b8280278478fdb6"
integrity sha512-1Lx3WLFQWm3ooKDYZD1eXmoGO9fxYQjrycfHFC8P0sCfQVXyROp0p9PFWBehewBOdCwHc+f/b8I0fMto5eSfwA==
dependencies:
follow-redirects "^1.15.6"
form-data "^4.0.0"
form-data "^4.0.4"
proxy-from-env "^1.1.0"
babel-loader@^9.2.1:
@@ -6653,7 +6653,7 @@ form-data-encoder@^2.1.2:
resolved "https://registry.yarnpkg.com/form-data-encoder/-/form-data-encoder-2.1.4.tgz#261ea35d2a70d48d30ec7a9603130fa5515e9cd5"
integrity sha512-yDYSgNMraqvnxiEXO4hi88+YZxaHC6QKzb5N84iRCTDeRO7ZALpir/lVmf/uXUhnwUr2O4HU8s/n6x+yNjQkHw==
form-data@^4.0.0:
form-data@^4.0.4:
version "4.0.4"
resolved "https://registry.yarnpkg.com/form-data/-/form-data-4.0.4.tgz#784cdcce0669a9d68e94d11ac4eea98088edd2c4"
integrity sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==

View File

@@ -95,7 +95,7 @@ dependencies = [
"slack_sdk>=3.19.0, <4",
"sqlalchemy>=1.4, <2",
"sqlalchemy-utils>=0.38.3, <0.39",
"sqlglot>=26.1.3, <27",
"sqlglot>=27.3.0, <28",
# newer pandas needs 0.9+
"tabulate>=0.9.0, <1.0",
"typing-extensions>=4, <5",
@@ -111,7 +111,7 @@ athena = ["pyathena[pandas]>=2, <3"]
aurora-data-api = ["preset-sqlalchemy-aurora-data-api>=0.2.8,<0.3"]
bigquery = [
"pandas-gbq>=0.19.1",
"sqlalchemy-bigquery>=1.6.1",
"sqlalchemy-bigquery>=1.15.0",
"google-cloud-bigquery>=3.10.0",
]
clickhouse = ["clickhouse-connect>=0.5.14, <1.0"]
@@ -133,6 +133,7 @@ solr = ["sqlalchemy-solr >= 0.2.0"]
elasticsearch = ["elasticsearch-dbapi>=0.2.9, <0.3.0"]
exasol = ["sqlalchemy-exasol >= 2.4.0, <3.0"]
excel = ["xlrd>=1.2.0, <1.3"]
fastmcp = ["fastmcp>=2.8.1"]
firebird = ["sqlalchemy-firebird>=0.7.0, <0.8"]
firebolt = ["firebolt-sqlalchemy>=1.0.0, <2"]
gevent = ["gevent>=23.9.1"]
@@ -202,6 +203,7 @@ development = [
"pyinstrument>=4.0.2,<5",
"pylint",
"pytest<8.0.0", # hairy issue with pytest >=8 where current_app proxies are not set in time
"pytest-asyncio", # need this due to not using latest pytest
"pytest-cov",
"pytest-mock",
"python-ldap>=3.4.4",
@@ -311,15 +313,16 @@ select = [
"Q",
"S",
"T",
"TID",
"W",
]
ignore = [
"S101",
"PT006",
"T201",
"N999",
]
extend-select = ["I"]
# Allow fix for all enabled rules (when `--fix`) is provided.
@@ -329,6 +332,16 @@ unfixable = []
# Allow unused variables when underscore-prefixed.
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
[tool.ruff.lint.per-file-ignores]
"scripts/*" = ["TID251"]
"setup.py" = ["TID251"]
"superset/config.py" = ["TID251"]
"superset/cli/update.py" = ["TID251"]
"superset/key_value/types.py" = ["TID251"]
"superset/translations/utils.py" = ["TID251"]
"superset/extensions/__init__.py" = ["TID251"]
"superset/utils/json.py" = ["TID251"]
[tool.ruff.lint.isort]
case-sensitive = false
combine-as-imports = true
@@ -345,6 +358,9 @@ section-order = [
"local-folder"
]
[tool.ruff.lint.flake8-tidy-imports]
banned-api = { json = { msg = "Use superset.utils.json instead" }, simplejson = { msg = "Use superset.utils.json instead" } }
[tool.ruff.format]
# Like Black, use double quotes for strings.
quote-style = "double"

View File

@@ -378,7 +378,7 @@ sqlalchemy-utils==0.38.3
# via
# apache-superset (pyproject.toml)
# flask-appbuilder
sqlglot==26.28.1
sqlglot==27.3.0
# via apache-superset (pyproject.toml)
sshtunnel==0.4.0
# via apache-superset (pyproject.toml)

View File

@@ -16,4 +16,4 @@
# specific language governing permissions and limitations
# under the License.
#
-e .[development,bigquery,druid,gevent,gsheets,mysql,postgres,presto,prophet,trino,thumbnails]
-e .[development,bigquery,druid,fastmcp,gevent,gsheets,mysql,postgres,presto,prophet,trino,thumbnails]

View File

@@ -10,6 +10,14 @@ amqp==5.3.1
# via
# -c requirements/base.txt
# kombu
annotated-types==0.7.0
# via pydantic
anyio==4.9.0
# via
# httpx
# mcp
# sse-starlette
# starlette
apispec==6.6.1
# via
# -c requirements/base.txt
@@ -24,11 +32,14 @@ attrs==25.3.0
# via
# -c requirements/base.txt
# cattrs
# cyclopts
# jsonschema
# outcome
# referencing
# requests-cache
# trio
authlib==1.6.1
# via fastmcp
babel==2.17.0
# via
# -c requirements/base.txt
@@ -77,6 +88,8 @@ celery==5.5.2
certifi==2025.6.15
# via
# -c requirements/base.txt
# httpcore
# httpx
# requests
# selenium
cffi==1.17.1
@@ -101,6 +114,7 @@ click==8.2.1
# click-repl
# flask
# flask-appbuilder
# uvicorn
click-didyoumean==0.3.1
# via
# -c requirements/base.txt
@@ -140,10 +154,13 @@ cryptography==44.0.3
# via
# -c requirements/base.txt
# apache-superset
# authlib
# paramiko
# pyopenssl
cycler==0.12.1
# via matplotlib
cyclopts==3.22.2
# via fastmcp
db-dtypes==1.3.1
# via pandas-gbq
defusedxml==0.7.1
@@ -168,14 +185,23 @@ dnspython==2.7.0
# email-validator
docker==7.0.0
# via apache-superset
docstring-parser==0.17.0
# via cyclopts
docutils==0.21.2
# via rich-rst
email-validator==2.2.0
# via
# -c requirements/base.txt
# flask-appbuilder
# pydantic
et-xmlfile==2.0.0
# via
# -c requirements/base.txt
# openpyxl
exceptiongroup==1.3.0
# via fastmcp
fastmcp==2.10.6
# via apache-superset
filelock==3.12.2
# via virtualenv
flask==2.3.3
@@ -327,6 +353,8 @@ gunicorn==23.0.0
h11==0.16.0
# via
# -c requirements/base.txt
# httpcore
# uvicorn
# wsproto
hashids==1.3.1
# via
@@ -337,6 +365,14 @@ holidays==0.25
# -c requirements/base.txt
# apache-superset
# prophet
httpcore==1.0.9
# via httpx
httpx==0.28.1
# via
# fastmcp
# mcp
httpx-sse==0.4.1
# via mcp
humanize==4.12.3
# via
# -c requirements/base.txt
@@ -346,7 +382,9 @@ identify==2.5.36
idna==3.10
# via
# -c requirements/base.txt
# anyio
# email-validator
# httpx
# requests
# trio
# url-normalize
@@ -378,6 +416,7 @@ jsonschema==4.23.0
# via
# -c requirements/base.txt
# flask-appbuilder
# mcp
# openapi-schema-validator
# openapi-spec-validator
jsonschema-path==0.3.4
@@ -437,6 +476,8 @@ matplotlib==3.9.0
# via prophet
mccabe==0.7.0
# via pylint
mcp==1.12.0
# via fastmcp
mdurl==0.1.2
# via
# -c requirements/base.txt
@@ -475,6 +516,8 @@ odfpy==1.4.1
# via
# -c requirements/base.txt
# pandas
openapi-pydantic==0.5.1
# via fastmcp
openapi-schema-validator==0.6.3
# via
# -c requirements/base.txt
@@ -607,6 +650,16 @@ pycparser==2.22
# via
# -c requirements/base.txt
# cffi
pydantic==2.11.7
# via
# fastmcp
# mcp
# openapi-pydantic
# pydantic-settings
pydantic-core==2.33.2
# via pydantic
pydantic-settings==2.10.1
# via mcp
pydata-google-auth==1.9.0
# via pandas-gbq
pydruid==0.6.9
@@ -642,6 +695,8 @@ pyparsing==3.2.3
# -c requirements/base.txt
# apache-superset
# matplotlib
pyperclip==1.9.0
# via fastmcp
pysocks==1.7.1
# via
# -c requirements/base.txt
@@ -649,8 +704,11 @@ pysocks==1.7.1
pytest==7.4.4
# via
# apache-superset
# pytest-asyncio
# pytest-cov
# pytest-mock
pytest-asyncio==0.23.8
# via apache-superset
pytest-cov==6.0.0
# via apache-superset
pytest-mock==3.10.0
@@ -674,12 +732,16 @@ python-dotenv==1.1.0
# via
# -c requirements/base.txt
# apache-superset
# fastmcp
# pydantic-settings
python-geohash==0.8.5
# via
# -c requirements/base.txt
# apache-superset
python-ldap==3.4.4
# via apache-superset
python-multipart==0.0.20
# via mcp
pytz==2025.2
# via
# -c requirements/base.txt
@@ -734,7 +796,12 @@ rfc3339-validator==0.1.4
rich==13.9.4
# via
# -c requirements/base.txt
# cyclopts
# fastmcp
# flask-limiter
# rich-rst
rich-rst==1.3.1
# via cyclopts
rpds-py==0.25.0
# via
# -c requirements/base.txt
@@ -779,6 +846,7 @@ slack-sdk==3.35.0
sniffio==1.3.1
# via
# -c requirements/base.txt
# anyio
# trio
sortedcontainers==2.4.0
# via
@@ -795,23 +863,27 @@ sqlalchemy==1.4.54
# shillelagh
# sqlalchemy-bigquery
# sqlalchemy-utils
sqlalchemy-bigquery==1.12.0
sqlalchemy-bigquery==1.15.0
# via apache-superset
sqlalchemy-utils==0.38.3
# via
# -c requirements/base.txt
# apache-superset
# flask-appbuilder
sqlglot==26.28.1
sqlglot==27.3.0
# via
# -c requirements/base.txt
# apache-superset
sqloxide==0.1.51
# via apache-superset
sse-starlette==2.4.1
# via mcp
sshtunnel==0.4.0
# via
# -c requirements/base.txt
# apache-superset
starlette==0.47.2
# via mcp
statsd==4.0.1
# via apache-superset
tabulate==0.9.0
@@ -839,13 +911,23 @@ typing-extensions==4.14.0
# via
# -c requirements/base.txt
# alembic
# anyio
# apache-superset
# cattrs
# exceptiongroup
# limits
# pydantic
# pydantic-core
# pyopenssl
# referencing
# selenium
# shillelagh
# starlette
# typing-inspection
typing-inspection==0.4.1
# via
# pydantic
# pydantic-settings
tzdata==2025.2
# via
# -c requirements/base.txt
@@ -864,6 +946,8 @@ urllib3==2.5.0
# requests
# requests-cache
# selenium
uvicorn==0.35.0
# via mcp
vine==5.1.0
# via
# -c requirements/base.txt

View File

@@ -32,6 +32,10 @@ const PACKAGE_ARG_REGEX = /^package=/;
const EXCLUDE_DECLARATION_DIR_REGEX = /^excludeDeclarationDir=/;
const DECLARATION_FILE_REGEX = /\.d\.ts$/;
// Configuration for batching and fallback
const MAX_FILES_FOR_TARGETED_CHECK = 20; // Fallback to full check if more files
const BATCH_SIZE = 10; // Process files in batches of this size
void (async () => {
const args = process.argv.slice(2);
const {
@@ -45,27 +49,94 @@ void (async () => {
}
const packageRootDir = await getPackage(packageArg);
const updatedArgs = removePackageSegment(remainingArgs, packageRootDir);
const argsStr = updatedArgs.join(" ");
const changedFiles = removePackageSegment(remainingArgs, packageRootDir);
const excludedDeclarationDirs = getExcludedDeclarationDirs(
excludeDeclarationDirArg
// Filter to only TypeScript files
const tsFiles = changedFiles.filter(file =>
/\.(ts|tsx)$/.test(file) && !DECLARATION_FILE_REGEX.test(file)
);
console.log(`Type checking ${tsFiles.length} changed TypeScript files...`);
if (tsFiles.length === 0) {
console.log("No TypeScript files to check.");
exit(0);
}
// Decide strategy based on number of files
if (tsFiles.length > MAX_FILES_FOR_TARGETED_CHECK) {
console.log(`Too many files (${tsFiles.length} > ${MAX_FILES_FOR_TARGETED_CHECK}), running full type check...`);
await runFullTypeCheck(packageRootDir, excludeDeclarationDirArg);
} else {
console.log(`Running targeted type check on ${tsFiles.length} files...`);
await runTargetedTypeCheck(packageRootDir, tsFiles, excludeDeclarationDirArg);
}
})();
/**
* Run full type check on the entire project
*/
async function runFullTypeCheck(packageRootDir, excludeDeclarationDirArg) {
const packageRootDirAbsolute = join(SUPERSET_ROOT, packageRootDir);
const tsConfig = getTsConfig(packageRootDirAbsolute);
// Use incremental compilation for better caching
const command = `--noEmit --allowJs --incremental --project ${tsConfig}`;
await executeTypeCheck(packageRootDirAbsolute, command);
}
/**
* Run targeted type check on specific files, with batching
*/
async function runTargetedTypeCheck(packageRootDir, tsFiles, excludeDeclarationDirArg) {
const excludedDeclarationDirs = getExcludedDeclarationDirs(excludeDeclarationDirArg);
let declarationFiles = await getFilesRecursively(
packageRootDir,
join(SUPERSET_ROOT, packageRootDir),
DECLARATION_FILE_REGEX,
excludedDeclarationDirs
);
declarationFiles = removePackageSegment(declarationFiles, packageRootDir);
const declarationFilesStr = declarationFiles.join(" ");
const packageRootDirAbsolute = join(SUPERSET_ROOT, packageRootDir);
const tsConfig = getTsConfig(packageRootDirAbsolute);
const command = `--noEmit --allowJs --composite false --project ${tsConfig} ${argsStr} ${declarationFilesStr}`;
// Process files in batches to avoid command line length limits
const batches = [];
for (let i = 0; i < tsFiles.length; i += BATCH_SIZE) {
batches.push(tsFiles.slice(i, i + BATCH_SIZE));
}
let hasErrors = false;
for (const [batchIndex, batch] of batches.entries()) {
if (batches.length > 1) {
console.log(`\nProcessing batch ${batchIndex + 1}/${batches.length} (${batch.length} files)...`);
}
const argsStr = batch.join(" ");
const declarationFilesStr = declarationFiles.join(" ");
// For targeted checks, keep composite false since we're passing specific files
const command = `--noEmit --allowJs --composite false --project ${tsConfig} ${argsStr} ${declarationFilesStr}`;
try {
await executeTypeCheck(packageRootDirAbsolute, command);
} catch (error) {
hasErrors = true;
// Continue processing other batches to show all errors
}
}
if (hasErrors) {
exit(1);
}
}
/**
* Execute the TypeScript type check command
*/
async function executeTypeCheck(packageRootDirAbsolute, command) {
try {
chdir(packageRootDirAbsolute);
// Please ensure that tscw-config is installed in the package being type-checked.
const tscw = packageRequire("tscw-config");
const child = await tscw`${command}`;
@@ -77,14 +148,16 @@ void (async () => {
console.error(child.stderr);
}
exit(child.exitCode);
if (child.exitCode !== 0) {
throw new Error(`Type check failed with exit code ${child.exitCode}`);
}
} catch (e) {
console.error("Failed to execute type checking:", e);
console.error("Package:", packageRootDir);
console.error("Failed to execute type checking:", e.message);
console.error("Command:", `tscw ${command}`);
exit(1);
throw e;
}
})();
}
/**
*
@@ -112,7 +185,6 @@ function shouldExcludeDir(fullPath, excludedDirs) {
*
* @returns {Promise<string[]>}
*/
async function getFilesRecursively(dir, regex, excludedDirs) {
try {
const files = await readdir(dir, { withFileTypes: true });
@@ -186,7 +258,6 @@ function getExcludedDeclarationDirs(excludeDeclarationDirArg) {
* @param {RegExp[]} regexes
* @returns {{ matchedArgs: (string | undefined)[], remainingArgs: string[] }}
*/
function extractArgs(args, regexes) {
/**
* @type {(string | undefined)[]}

View File

@@ -54,7 +54,7 @@ const drillBy = (targetDrillByColumn: string, isLegacy = false) => {
interceptV1ChartData();
}
cy.get('.ant-dropdown:not(.ant-dropdown-hidden)')
cy.get('.ant-dropdown:not(.ant-dropdown-hidden)', { timeout: 15000 })
.should('be.visible')
.find("[role='menu'] [role='menuitem']")
.contains(/^Drill by$/)
@@ -529,7 +529,7 @@ describe('Drill by modal', () => {
]);
});
it('Bar Chart', () => {
it.skip('Bar Chart', () => {
testEchart('echarts_timeseries_bar', 'Bar Chart', [
[85, 94],
[490, 68],
@@ -612,7 +612,7 @@ describe('Drill by modal', () => {
]);
});
it('Mixed Chart', () => {
it.skip('Mixed Chart', () => {
cy.get('[data-test-viz-type="mixed_timeseries"] canvas').then($canvas => {
// click 'boy'
cy.wrap($canvas).scrollIntoView();

View File

@@ -154,6 +154,7 @@ describe('Horizontal FilterBar', () => {
{ name: 'test_12', column: 'year', datasetId: 2 },
]);
setFilterBarOrientation('horizontal');
cy.get('.filter-item-wrapper').should('have.length', 3);
openMoreFilters();
cy.getBySel('form-item-value').should('have.length', 12);

View File

@@ -53,8 +53,8 @@
"@visx/scale": "^3.5.0",
"@visx/tooltip": "^3.0.0",
"@visx/xychart": "^3.5.1",
"ag-grid-community": "33.1.1",
"ag-grid-react": "33.1.1",
"ag-grid-community": "^34.0.2",
"ag-grid-react": "34.0.2",
"antd": "^5.24.6",
"chrono-node": "^2.7.8",
"classnames": "^2.2.5",
@@ -77,7 +77,6 @@
"geostyler-style": "7.5.0",
"geostyler-wfs-parser": "^2.0.3",
"googleapis": "^130.0.0",
"html-webpack-plugin": "^5.6.3",
"immer": "^10.1.1",
"interweave": "^13.1.0",
"jquery": "^3.7.1",
@@ -249,7 +248,7 @@
"mini-css-extract-plugin": "^2.9.0",
"open-cli": "^8.0.0",
"po2json": "^0.4.5",
"prettier": "3.5.3",
"prettier": "3.6.2",
"prettier-plugin-packagejson": "^2.5.3",
"process": "^0.11.10",
"react-resizable": "^3.0.5",
@@ -276,7 +275,7 @@
"webpack-visualizer-plugin2": "^1.2.0"
},
"engines": {
"node": "^20.16.0",
"node": "^20.18.1",
"npm": "^10.8.1"
},
"peerDependencies": {
@@ -10878,6 +10877,12 @@
"url": "https://opencollective.com/immer"
}
},
"node_modules/@reduxjs/toolkit/node_modules/reselect": {
"version": "4.1.8",
"resolved": "https://registry.npmjs.org/reselect/-/reselect-4.1.8.tgz",
"integrity": "sha512-ab9EmR80F/zQTMNeneUr4cv+jSwPJgIlvEmVwLerwrWVbpLlBuls9XHzIeTFy4cegU2NHBp3va0LKOzU5qFEYQ==",
"license": "MIT"
},
"node_modules/@rjsf/core": {
"version": "5.24.1",
"resolved": "https://registry.npmjs.org/@rjsf/core/-/core-5.24.1.tgz",
@@ -18748,27 +18753,27 @@
}
},
"node_modules/ag-charts-types": {
"version": "11.1.1",
"resolved": "https://registry.npmjs.org/ag-charts-types/-/ag-charts-types-11.1.1.tgz",
"integrity": "sha512-bRmUcf5VVhEEekhX8Vk0NSwa8Te8YM/zchjyYKR2CX4vDYiwoohM1Jg9RFvbIhVbLC1S6QrPEbx5v2C6RDfpSA==",
"version": "12.0.2",
"resolved": "https://registry.npmjs.org/ag-charts-types/-/ag-charts-types-12.0.2.tgz",
"integrity": "sha512-AWM1Y+XW+9VMmV3AbzdVEnreh/I2C9Pmqpc2iLmtId3Xbvmv7O56DqnuDb9EXjK5uPxmyUerTP+utL13UGcztw==",
"license": "MIT"
},
"node_modules/ag-grid-community": {
"version": "33.1.1",
"resolved": "https://registry.npmjs.org/ag-grid-community/-/ag-grid-community-33.1.1.tgz",
"integrity": "sha512-CNubIro0ipj4nfQ5WJPG9Isp7UI6MMDvNzrPdHNf3W+IoM8Uv3RUhjEn7xQqpQHuu6o/tMjrqpacipMUkhzqnw==",
"version": "34.0.2",
"resolved": "https://registry.npmjs.org/ag-grid-community/-/ag-grid-community-34.0.2.tgz",
"integrity": "sha512-hVJp5vrmwHRB10YjfSOVni5YJkO/v+asLjT72S4YnIFSx8lAgyPmByNJgtojk1aJ5h6Up93jTEmGDJeuKiWWLA==",
"license": "MIT",
"dependencies": {
"ag-charts-types": "11.1.1"
"ag-charts-types": "12.0.2"
}
},
"node_modules/ag-grid-react": {
"version": "33.1.1",
"resolved": "https://registry.npmjs.org/ag-grid-react/-/ag-grid-react-33.1.1.tgz",
"integrity": "sha512-xJ+t2gpqUUwpFqAeDvKz/GLVR4unkOghfQBr8iIY9RAdGFarYFClJavsOa8XPVVUqEB9OIuPVFnOdtocbX0jeA==",
"version": "34.0.2",
"resolved": "https://registry.npmjs.org/ag-grid-react/-/ag-grid-react-34.0.2.tgz",
"integrity": "sha512-1KBXkTvwtZiYVlSuDzBkiqfHjZgsATOmpLZdAtdmsCSOOOEWai0F9zHHgBuHfyciAE4nrbQWfojkx8IdnwsKFw==",
"license": "MIT",
"dependencies": {
"ag-grid-community": "33.1.1",
"ag-grid-community": "34.0.2",
"prop-types": "^15.8.1"
},
"peerDependencies": {
@@ -24606,6 +24611,12 @@
"d3-time": "1 - 2"
}
},
"node_modules/encodable/node_modules/reselect": {
"version": "4.1.8",
"resolved": "https://registry.npmjs.org/reselect/-/reselect-4.1.8.tgz",
"integrity": "sha512-ab9EmR80F/zQTMNeneUr4cv+jSwPJgIlvEmVwLerwrWVbpLlBuls9XHzIeTFy4cegU2NHBp3va0LKOzU5qFEYQ==",
"license": "MIT"
},
"node_modules/encodeurl": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/encodeurl/-/encodeurl-2.0.0.tgz",
@@ -46219,9 +46230,9 @@
}
},
"node_modules/prettier": {
"version": "3.5.3",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-3.5.3.tgz",
"integrity": "sha512-QQtaxnoDJeAkDvDKWCLiwIXkTgRhwYDEQCghU9Z6q03iyek/rxRh/2lC3HB7P8sWT2xC/y5JDctPLBIGzHKbhw==",
"version": "3.6.2",
"resolved": "https://registry.npmjs.org/prettier/-/prettier-3.6.2.tgz",
"integrity": "sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ==",
"devOptional": true,
"license": "MIT",
"bin": {
@@ -50630,9 +50641,9 @@
"license": "MIT"
},
"node_modules/reselect": {
"version": "4.1.8",
"resolved": "https://registry.npmjs.org/reselect/-/reselect-4.1.8.tgz",
"integrity": "sha512-ab9EmR80F/zQTMNeneUr4cv+jSwPJgIlvEmVwLerwrWVbpLlBuls9XHzIeTFy4cegU2NHBp3va0LKOzU5qFEYQ==",
"version": "5.1.1",
"resolved": "https://registry.npmjs.org/reselect/-/reselect-5.1.1.tgz",
"integrity": "sha512-K/BG6eIky/SBpzfHZv/dd+9JBFiS4SWV7FIujVyJRux6e45+73RaUHXLmIR1f7WOMaQ0U1km6qwklRQxpJJY0w==",
"license": "MIT"
},
"node_modules/resize-observer-polyfill": {
@@ -59086,7 +59097,7 @@
"csstype": "^3.1.3",
"d3-format": "^1.3.2",
"d3-interpolate": "^3.0.1",
"d3-scale": "^3.0.0",
"d3-scale": "^4.0.2",
"d3-time": "^3.1.0",
"d3-time-format": "^4.1.0",
"dayjs": "^1.11.13",
@@ -59109,7 +59120,7 @@
"rehype-raw": "^7.0.0",
"rehype-sanitize": "^6.0.0",
"remark-gfm": "^4.0.1",
"reselect": "^4.0.0",
"reselect": "^5.1.1",
"rison": "^0.1.1",
"seedrandom": "^3.0.5",
"xss": "^1.0.14"
@@ -59188,16 +59199,19 @@
"license": "BSD-3-Clause"
},
"packages/superset-ui-core/node_modules/d3-scale": {
"version": "3.3.0",
"resolved": "https://registry.npmjs.org/d3-scale/-/d3-scale-3.3.0.tgz",
"integrity": "sha512-1JGp44NQCt5d1g+Yy+GeOnZP7xHo0ii8zsQp6PGzd+C1/dl0KGsp9A7Mxwp+1D1o4unbTTxVdU/ZOIEBoeZPbQ==",
"license": "BSD-3-Clause",
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/d3-scale/-/d3-scale-4.0.2.tgz",
"integrity": "sha512-GZW464g1SH7ag3Y7hXjf8RoUuAFIqklOAq3MRl4OaWabTFJY9PN/E1YklhXLh+OQ3fM9yS2nOkCoS+WLZ6kvxQ==",
"license": "ISC",
"dependencies": {
"d3-array": "^2.3.0",
"d3-format": "1 - 2",
"d3-interpolate": "1.2.0 - 2",
"d3-time": "^2.1.1",
"d3-time-format": "2 - 3"
"d3-array": "2.10.0 - 3",
"d3-format": "1 - 3",
"d3-interpolate": "1.2.0 - 3",
"d3-time": "2.1.1 - 3",
"d3-time-format": "2 - 4"
},
"engines": {
"node": ">=12"
}
},
"packages/superset-ui-core/node_modules/d3-scale/node_modules/d3-interpolate": {
@@ -61169,8 +61183,8 @@
"@react-icons/all-files": "^4.1.0",
"@types/d3-array": "^2.9.0",
"@types/react-table": "^7.7.20",
"ag-grid-community": "^33.1.1",
"ag-grid-react": "^33.1.1",
"ag-grid-community": "^34.0.2",
"ag-grid-react": "^34.0.2",
"classnames": "^2.5.1",
"d3-array": "^2.4.0",
"lodash": "^4.17.21",
@@ -61206,8 +61220,10 @@
},
"peerDependencies": {
"@ant-design/icons": "^5.2.6",
"@reduxjs/toolkit": "*",
"@superset-ui/chart-controls": "*",
"@superset-ui/core": "*",
"@types/react-redux": "*",
"geostyler": "^14.1.3",
"geostyler-data": "^1.0.0",
"geostyler-openlayers-parser": "^4.0.0",

View File

@@ -72,7 +72,7 @@
"test": "cross-env NODE_ENV=test NODE_OPTIONS=\"--max-old-space-size=8192\" jest --max-workers=80% --silent",
"test-loud": "cross-env NODE_ENV=test NODE_OPTIONS=\"--max-old-space-size=8192\" jest --max-workers=80%",
"type": "tsc --noEmit",
"update-maps": "jupyter nbconvert --to notebook --execute --inplace 'plugins/legacy-plugin-chart-country-map/scripts/Country Map GeoJSON Generator.ipynb' -Xfrozen_modules=off",
"update-maps": "cd plugins/legacy-plugin-chart-country-map/scripts && jupyter nbconvert --to notebook --execute --inplace --allow-errors --ExecutePreprocessor.timeout=1200 'Country Map GeoJSON Generator.ipynb'",
"validate-release": "../RELEASING/validate_this_release.sh"
},
"browserslist": [
@@ -121,8 +121,8 @@
"@visx/scale": "^3.5.0",
"@visx/tooltip": "^3.0.0",
"@visx/xychart": "^3.5.1",
"ag-grid-community": "33.1.1",
"ag-grid-react": "33.1.1",
"ag-grid-community": "^34.0.2",
"ag-grid-react": "34.0.2",
"antd": "^5.24.6",
"chrono-node": "^2.7.8",
"classnames": "^2.2.5",
@@ -145,7 +145,6 @@
"geostyler-style": "7.5.0",
"geostyler-wfs-parser": "^2.0.3",
"googleapis": "^130.0.0",
"html-webpack-plugin": "^5.6.3",
"immer": "^10.1.1",
"interweave": "^13.1.0",
"jquery": "^3.7.1",
@@ -317,7 +316,7 @@
"mini-css-extract-plugin": "^2.9.0",
"open-cli": "^8.0.0",
"po2json": "^0.4.5",
"prettier": "3.5.3",
"prettier": "3.6.2",
"prettier-plugin-packagejson": "^2.5.3",
"process": "^0.11.10",
"react-resizable": "^3.0.5",
@@ -350,7 +349,7 @@
"regenerator-runtime": "^0.14.1"
},
"engines": {
"node": "^20.16.0",
"node": "^20.18.1",
"npm": "^10.8.1"
},
"overrides": {

View File

@@ -18,8 +18,7 @@
*/
import { ReactNode } from 'react';
import { t, css } from '@superset-ui/core';
import { InfoCircleOutlined } from '@ant-design/icons';
import { InfoTooltip, Tooltip } from '@superset-ui/core/components';
import { InfoTooltip, Tooltip, Icons } from '@superset-ui/core/components';
type ValidationError = string;
@@ -93,7 +92,7 @@ export function ControlHeader({
<div className="ControlHeader" data-test={`${name}-header`}>
<div className="pull-left">
<label className="control-label" htmlFor={name}>
{leftNode && <span>{leftNode}</span>}
{leftNode && <>{leftNode}</>}
<span
role={onClick ? 'button' : undefined}
{...(onClick ? { onClick, tabIndex: 0 } : {})}
@@ -104,9 +103,9 @@ export function ControlHeader({
{warning && (
<span>
<Tooltip id="error-tooltip" placement="top" title={warning}>
<InfoCircleOutlined
<Icons.InfoCircleOutlined
iconSize="m"
css={theme => css`
font-size: ${theme.sizeUnit * 3}px;
color: ${theme.colorError};
`}
/>
@@ -116,9 +115,9 @@ export function ControlHeader({
{danger && (
<span>
<Tooltip id="error-tooltip" placement="top" title={danger}>
<InfoCircleOutlined
<Icons.InfoCircleOutlined
iconSize="m"
css={theme => css`
font-size: ${theme.sizeUnit * 3}px;
color: ${theme.colorError};
`}
/>{' '}
@@ -132,9 +131,9 @@ export function ControlHeader({
placement="top"
title={validationErrors.join(' ')}
>
<InfoCircleOutlined
<Icons.InfoCircleOutlined
iconSize="m"
css={theme => css`
font-size: ${theme.sizeUnit * 3}px;
color: ${theme.colorError};
`}
/>{' '}

View File

@@ -86,7 +86,9 @@ export default function Select<VT extends string | number>({
minWidth,
}}
>
{options?.map(([val, label]) => <Option value={val}>{label}</Option>)}
{options?.map(([val, label]) => (
<Option value={val}>{label}</Option>
))}
{children}
{value && !optionsHasValue && (
<Option key={value} value={value}>

View File

@@ -30,7 +30,7 @@ const controlsWithoutXAxis: ControlSetRow[] = [
['groupby'],
[contributionModeControl],
['adhoc_filters'],
['limit'],
['limit', 'group_others_when_limit_reached'],
['timeseries_limit_metric'],
['order_desc'],
['row_limit'],

View File

@@ -41,6 +41,53 @@ import {
import { checkColumnType } from '../utils/checkColumnType';
import { isSortable } from '../utils/isSortable';
// Aggregation choices with computation methods for plugins and controls
export const aggregationChoices = {
raw: {
label: 'Overall value',
compute: (data: number[]) => {
if (!data.length) return null;
return data[0];
},
},
LAST_VALUE: {
label: 'Last Value',
compute: (data: number[]) => {
if (!data.length) return null;
return data[0];
},
},
sum: {
label: 'Total (Sum)',
compute: (data: number[]) =>
data.length ? data.reduce((a, b) => a + b, 0) : null,
},
mean: {
label: 'Average (Mean)',
compute: (data: number[]) =>
data.length ? data.reduce((a, b) => a + b, 0) / data.length : null,
},
min: {
label: 'Minimum',
compute: (data: number[]) => (data.length ? Math.min(...data) : null),
},
max: {
label: 'Maximum',
compute: (data: number[]) => (data.length ? Math.max(...data) : null),
},
median: {
label: 'Median',
compute: (data: number[]) => {
if (!data.length) return null;
const sorted = [...data].sort((a, b) => a - b);
const mid = Math.floor(sorted.length / 2);
return sorted.length % 2 === 0
? (sorted[mid - 1] + sorted[mid]) / 2
: sorted[mid];
},
},
} as const;
export const contributionModeControl = {
name: 'contributionMode',
config: {
@@ -69,17 +116,12 @@ export const aggregationControl = {
default: 'LAST_VALUE',
clearable: false,
renderTrigger: false,
choices: [
['raw', t('None')],
['LAST_VALUE', t('Last Value')],
['sum', t('Total (Sum)')],
['mean', t('Average (Mean)')],
['min', t('Minimum')],
['max', t('Maximum')],
['median', t('Median')],
],
choices: Object.entries(aggregationChoices).map(([value, { label }]) => [
value,
t(label),
]),
description: t(
'Aggregation method used to compute the Big Number from the Trendline.For non-additive metrics like ratios, averages, distinct counts, etc use NONE.',
'Method to compute the displayed value. "Overall value" calculates a single metric across the entire filtered time period, ideal for non-additive metrics like ratios, averages, or distinct counts. Other methods operate over the time series data points.',
),
provideFormDataToProps: true,
mapStateToProps: ({ form_data }: ControlPanelState) => ({

View File

@@ -283,6 +283,19 @@ const series_limit: SharedControlConfig<'SelectControl'> = {
),
};
const group_others_when_limit_reached: SharedControlConfig<'CheckboxControl'> =
{
type: 'CheckboxControl',
label: t('Group remaining as "Others"'),
default: false,
description: t(
'Groups remaining series into an "Others" category when series limit is reached. ' +
'This prevents incomplete time series data from being displayed.',
),
visibility: ({ form_data }: { form_data: any }) =>
Boolean(form_data?.limit || form_data?.series_limit),
};
const y_axis_format: SharedControlConfig<'SelectControl', SelectDefaultOption> =
{
type: 'SelectControl',
@@ -446,6 +459,7 @@ export default {
time_shift_color,
series_columns: dndColumnsControl,
series_limit,
group_others_when_limit_reached,
series_limit_metric: dndSortByControl,
legacy_order_by: dndSortByControl,
truncate_metric,

View File

@@ -37,7 +37,7 @@
"d3-format": "^1.3.2",
"dayjs": "^1.11.13",
"d3-interpolate": "^3.0.1",
"d3-scale": "^3.0.0",
"d3-scale": "^4.0.2",
"d3-time": "^3.1.0",
"d3-time-format": "^4.1.0",
"dompurify": "^3.2.4",
@@ -59,7 +59,7 @@
"rehype-raw": "^7.0.0",
"rehype-sanitize": "^6.0.0",
"remark-gfm": "^4.0.1",
"reselect": "^4.0.0",
"reselect": "^5.1.1",
"rison": "^0.1.1",
"seedrandom": "^3.0.5",
"@visx/responsive": "^3.12.0",

View File

@@ -17,11 +17,8 @@
* under the License.
*/
/** Type checking is disabled for this file due to reselect only supporting
* TS declarations for selectors with up to 12 arguments. */
// @ts-nocheck
import { RefObject } from 'react';
import { createSelector } from 'reselect';
import { createSelector, lruMemoize } from 'reselect';
import {
AppSection,
Behavior,
@@ -37,7 +34,7 @@ import {
SetDataMaskHook,
} from '../types/Base';
import { QueryData, DataRecordFilters } from '..';
import { SupersetTheme } from '../../theme';
import { supersetTheme, SupersetTheme } from '../../theme';
// TODO: more specific typing for these fields of ChartProps
type AnnotationData = PlainObject;
@@ -109,6 +106,8 @@ export interface ChartPropsConfig {
theme: SupersetTheme;
/* legend index */
legendIndex?: number;
inContextMenu?: boolean;
emitCrossFilters?: boolean;
}
const DEFAULT_WIDTH = 800;
@@ -161,7 +160,11 @@ export default class ChartProps<FormData extends RawFormData = RawFormData> {
theme: SupersetTheme;
constructor(config: ChartPropsConfig & { formData?: FormData } = {}) {
constructor(
config: ChartPropsConfig & { formData?: FormData } = {
theme: supersetTheme,
},
) {
const {
annotationData = {},
datasource = {},
@@ -276,5 +279,16 @@ ChartProps.createSelector = function create(): ChartPropsSelector {
emitCrossFilters,
theme,
}),
// Below config is to retain usage of 1-sized `lruMemoize` object in Reselect v4
// Reselect v5 introduces `weakMapMemoize` which is more performant but potentially memory-leaky
// due to infinite cache size.
// Source: https://github.com/reduxjs/reselect/releases/tag/v5.0.1
{
memoize: lruMemoize,
argsMemoize: lruMemoize,
memoizeOptions: {
maxSize: 10,
},
},
);
};

View File

@@ -35,6 +35,11 @@ import { useTheme, css } from '@superset-ui/core';
import { Global } from '@emotion/react';
export { getTooltipHTML } from './Tooltip';
export { useJsonValidation } from './useJsonValidation';
export type {
JsonValidationAnnotation,
UseJsonValidationOptions,
} from './useJsonValidation';
export interface AceCompleterKeywordData {
name: string;
@@ -202,7 +207,7 @@ export function AsyncAceEditor(
/* Basic editor styles with dark mode support */
.ace_editor.ace-github,
.ace_editor.ace-textmate {
.ace_editor.ace-tm {
background-color: ${token.colorBgContainer} !important;
color: ${token.colorText} !important;
}
@@ -265,7 +270,7 @@ export function AsyncAceEditor(
/* Adjust tooltip styles */
.ace_tooltip {
margin-left: ${token.margin}px;
padding: 0px;
padding: ${token.sizeUnit * 2}px;
background-color: ${token.colorBgElevated} !important;
color: ${token.colorText} !important;
border: 1px solid ${token.colorBorderSecondary};

View File

@@ -0,0 +1,75 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { renderHook } from '@testing-library/react-hooks';
import { useJsonValidation } from './useJsonValidation';
describe('useJsonValidation', () => {
it('returns empty array for valid JSON', () => {
const { result } = renderHook(() => useJsonValidation('{"key": "value"}'));
expect(result.current).toEqual([]);
});
it('returns empty array when disabled', () => {
const { result } = renderHook(() =>
useJsonValidation('invalid json', { enabled: false }),
);
expect(result.current).toEqual([]);
});
it('returns empty array for empty input', () => {
const { result } = renderHook(() => useJsonValidation(''));
expect(result.current).toEqual([]);
});
it('extracts line and column from error message with parentheses', () => {
// Since we can't control the exact error message from JSON.parse,
// let's test with a mock that demonstrates the pattern matching
const mockError = {
message:
"Expected ',' or '}' after property value in JSON at position 19 (line 3 column 2)",
};
// Test the regex pattern directly
const match = mockError.message.match(/\(line (\d+) column (\d+)\)/);
expect(match).toBeTruthy();
expect(match![1]).toBe('3');
expect(match![2]).toBe('2');
});
it('returns error on first line when no line/column info in message', () => {
const invalidJson = '{invalid}';
const { result } = renderHook(() => useJsonValidation(invalidJson));
expect(result.current).toHaveLength(1);
expect(result.current[0]).toMatchObject({
type: 'error',
row: 0,
column: 0,
text: expect.stringContaining('Invalid JSON'),
});
});
it('uses custom error prefix', () => {
const { result } = renderHook(() =>
useJsonValidation('{invalid}', { errorPrefix: 'Custom error' }),
);
expect(result.current[0].text).toContain('Custom error');
});
});

View File

@@ -0,0 +1,82 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { useMemo } from 'react';
export interface JsonValidationAnnotation {
type: 'error' | 'warning' | 'info';
row: number;
column: number;
text: string;
}
export interface UseJsonValidationOptions {
/** Whether to enable JSON validation. Default: true */
enabled?: boolean;
/** Custom error message prefix. Default: 'Invalid JSON' */
errorPrefix?: string;
}
/**
* Hook for JSON validation that returns AceEditor-compatible annotations.
* Based on the SQL Lab validation pattern.
*
* @param jsonValue - The JSON string to validate
* @param options - Validation options
* @returns Array of annotation objects for AceEditor
*/
export function useJsonValidation(
jsonValue?: string,
options: UseJsonValidationOptions = {},
): JsonValidationAnnotation[] {
const { enabled = true, errorPrefix = 'Invalid JSON' } = options;
return useMemo(() => {
// Skip validation if disabled or empty value
if (!enabled || !jsonValue?.trim()) {
return [];
}
try {
JSON.parse(jsonValue);
return []; // Valid JSON - no annotations
} catch (error: any) {
const errorMessage = error.message || 'syntax error';
// Try to extract line/column from error message
// Look for pattern: (line X column Y) - often at the end of error messages
let row = 0;
let column = 0;
const match = errorMessage.match(/\(line (\d+) column (\d+)\)/);
if (match) {
row = parseInt(match[1], 10) - 1; // Convert to 0-based
column = parseInt(match[2], 10) - 1;
}
return [
{
type: 'error' as const,
row,
column,
text: `${errorPrefix}: ${errorMessage}`,
},
];
}
}, [enabled, jsonValue, errorPrefix]);
}
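Outside of React, the position-extraction logic that the hook above wraps in `useMemo` can be exercised as a standalone helper. This is a minimal sketch (the function name `validateJson` is mine, not from the diff); it mirrors the hook's catch branch, which relies on V8-style error messages that embed `(line X column Y)`:

```typescript
interface Annotation {
  type: 'error';
  row: number;
  column: number;
  text: string;
}

// Mirrors the hook's logic: parse, and on failure try to pull
// "(line X column Y)" out of the engine's error message, falling
// back to row 0 / column 0 when the pattern is absent.
function validateJson(
  jsonValue: string,
  errorPrefix = 'Invalid JSON',
): Annotation[] {
  if (!jsonValue.trim()) return [];
  try {
    JSON.parse(jsonValue);
    return []; // valid JSON - no annotations
  } catch (error: any) {
    const errorMessage = error.message || 'syntax error';
    let row = 0;
    let column = 0;
    const match = errorMessage.match(/\(line (\d+) column (\d+)\)/);
    if (match) {
      row = parseInt(match[1], 10) - 1; // AceEditor annotations are 0-based
      column = parseInt(match[2], 10) - 1;
    }
    return [{ type: 'error', row, column, text: `${errorPrefix}: ${errorMessage}` }];
  }
}
```

Note that the exact `SyntaxError` message (and whether it includes line/column info) varies by JavaScript engine, which is why the unit tests above assert on the regex rather than on specific positions.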

View File

@@ -17,7 +17,6 @@
* under the License.
*/
import { Children, ReactElement, Fragment } from 'react';
import cx from 'classnames';
import { Button as AntdButton } from 'antd';
import { useTheme } from '@superset-ui/core';
@@ -70,7 +69,6 @@ export function Button(props: ButtonProps) {
if (!buttonStyle || buttonStyle === 'primary') {
variant = 'solid';
antdType = 'primary';
color = 'primary';
} else if (buttonStyle === 'secondary') {
variant = 'filled';
color = 'primary';
@@ -78,7 +76,6 @@ export function Button(props: ButtonProps) {
variant = 'outlined';
color = 'default';
} else if (buttonStyle === 'dashed') {
color = 'primary';
variant = 'dashed';
antdType = 'dashed';
} else if (buttonStyle === 'danger') {
@@ -90,6 +87,7 @@ export function Button(props: ButtonProps) {
const element = children as ReactElement;
let renderedChildren = [];
if (element && element.type === Fragment) {
renderedChildren = Children.toArray(element.props.children);
} else {
@@ -120,7 +118,7 @@ export function Button(props: ButtonProps) {
display: 'inline-flex',
alignItems: 'center',
justifyContent: 'center',
lineHeight: 1.5715,
lineHeight: 1,
fontSize: fontSizeSM,
fontWeight: fontWeightStrong,
height,
@@ -134,6 +132,11 @@ export function Button(props: ButtonProps) {
'& > span > :first-of-type': {
marginRight: firstChildMargin,
},
':not(:hover)': effectiveButtonStyle === 'secondary' && {
// NOTE: This is the best we can do contrast wise for the secondary button using antd tokens
// and abusing the semantics. Should be revisited when possible. https://github.com/apache/superset/pull/34253#issuecomment-3104834692
color: `${theme.colorPrimaryTextHover} !important`,
},
}}
icon={icon}
{...restProps}

View File

@@ -20,7 +20,7 @@ import { useEffect, useState } from 'react';
import SyntaxHighlighterBase from 'react-syntax-highlighter/dist/cjs/light';
import github from 'react-syntax-highlighter/dist/cjs/styles/hljs/github';
import tomorrow from 'react-syntax-highlighter/dist/cjs/styles/hljs/tomorrow-night';
import { themeObject } from '@superset-ui/core';
import { useTheme, isThemeDark } from '@superset-ui/core';
export type SupportedLanguage = 'sql' | 'htmlbars' | 'markdown' | 'json';
@@ -77,6 +77,7 @@ export const CodeSyntaxHighlighter: React.FC<CodeSyntaxHighlighterProps> = ({
wrapLines = true,
style: overrideStyle,
}) => {
const theme = useTheme();
const [isLanguageReady, setIsLanguageReady] = useState(
registeredLanguages.has(language),
);
@@ -92,14 +93,14 @@ export const CodeSyntaxHighlighter: React.FC<CodeSyntaxHighlighterProps> = ({
loadLanguage();
}, [language]);
const isDark = themeObject.isThemeDark();
const isDark = isThemeDark(theme);
const themeStyle = overrideStyle || (isDark ? tomorrow : github);
const defaultCustomStyle: React.CSSProperties = {
background: themeObject.theme.colorBgElevated,
padding: themeObject.theme.sizeUnit * 4,
background: theme.colorBgElevated,
padding: theme.sizeUnit * 4,
border: 0,
borderRadius: themeObject.theme.borderRadius,
borderRadius: theme.borderRadius,
...customStyle,
};

View File

@@ -15,7 +15,7 @@
* specific language governing permissions and limitations
* under the License.
*/
import { useTheme } from '@superset-ui/core';
import { useTheme, css } from '@superset-ui/core';
import { Typography } from '@superset-ui/core/components/Typography';
import { Icons } from '@superset-ui/core/components';
@@ -29,7 +29,7 @@ interface CollapseLabelInModalProps {
export const CollapseLabelInModal: React.FC<CollapseLabelInModalProps> = ({
title,
subtitle,
validateCheckStatus = false,
validateCheckStatus,
testId,
}) => {
const theme = useTheme();
@@ -37,36 +37,38 @@ export const CollapseLabelInModal: React.FC<CollapseLabelInModalProps> = ({
return (
<div data-test={testId}>
<Typography.Title
style={{
marginTop: 0,
marginBottom: theme.sizeUnit / 2,
fontSize: theme.fontSizeLG,
}}
css={css`
&& {
margin-top: 0;
margin-bottom: ${theme.sizeUnit / 2}px;
font-size: ${theme.fontSizeLG}px;
}
`}
>
{title}{' '}
{validateCheckStatus ? (
<Icons.CheckCircleOutlined
style={{ color: theme.colorSuccess }}
aria-label="check-circle"
role="img"
/>
) : (
<span
style={{
color: theme.colorErrorText,
fontSize: theme.fontSizeLG,
}}
>
*
</span>
)}
{validateCheckStatus !== undefined &&
(validateCheckStatus ? (
<Icons.CheckCircleOutlined
iconColor={theme.colorSuccess}
aria-label="check-circle"
/>
) : (
<span
css={css`
color: ${theme.colorErrorText};
font-size: ${theme.fontSizeLG}px;
`}
>
*
</span>
))}
</Typography.Title>
<Typography.Paragraph
style={{
margin: 0,
fontSize: theme.fontSizeSM,
color: theme.colorTextDescription,
}}
css={css`
margin: 0;
font-size: ${theme.fontSizeSM}px;
color: ${theme.colorTextDescription};
`}
>
{subtitle}
</Typography.Paragraph>

View File

@@ -68,6 +68,7 @@ export function ConfirmStatusChange({
onConfirm={confirm}
onHide={hide}
open={open}
name="please confirm"
title={title}
/>
</>

View File

@@ -31,17 +31,13 @@ const StyledDiv = styled.div`
}
`;
const DescriptionContainer = styled.div`
line-height: ${({ theme }) => theme.sizeUnit * 4}px;
padding-top: 16px;
`;
export function DeleteModal({
description,
onConfirm,
onHide,
open,
title,
name,
}: DeleteModalProps) {
const [disableChange, setDisableChange] = useState(true);
const [confirmation, setConfirmation] = useState<string>('');
@@ -81,12 +77,13 @@ export function DeleteModal({
onHide={hide}
onHandledPrimaryAction={confirm}
primaryButtonName={t('Delete')}
primaryButtonType="danger"
primaryButtonStyle="danger"
show={open}
name={name}
title={title}
centered
>
<DescriptionContainer>{description}</DescriptionContainer>
{description}
<StyledDiv>
<FormLabel htmlFor="delete">
{t('Type "%s" to confirm', t('DELETE'))}

View File

@@ -25,4 +25,5 @@ export interface DeleteModalProps {
onHide: () => void;
open: boolean;
title: ReactNode;
name?: string;
}

View File

@@ -128,7 +128,7 @@ const ImageContainer = ({
<Empty
description={false}
image={mappedImage}
imageStyle={getImageHeight(size)}
styles={{ image: getImageHeight(size) }}
/>
</div>
);
@@ -144,6 +144,7 @@ export const EmptyState: React.FC<EmptyStateProps> = ({
description = t('There is currently no information to display.'),
image = 'empty.svg',
buttonText,
buttonIcon,
buttonAction,
size = 'medium',
children,
@@ -165,6 +166,7 @@ export const EmptyState: React.FC<EmptyStateProps> = ({
)}
{buttonText && buttonAction && (
<Button
icon={buttonIcon}
buttonStyle="primary"
onClick={buttonAction}
onMouseDown={handleMouseDown}

View File

@@ -17,6 +17,7 @@
* under the License.
*/
import type { ReactNode, SyntheticEvent } from 'react';
import type { IconType } from '@superset-ui/core/components';
export type EmptyStateSize = 'small' | 'medium' | 'large';
@@ -25,6 +26,7 @@ export type EmptyStateProps = {
description?: ReactNode;
image?: ReactNode | string;
buttonText?: ReactNode;
buttonIcon?: IconType;
buttonAction?: (event: SyntheticEvent) => void;
size?: EmptyStateSize;
children?: ReactNode;

View File

@@ -17,7 +17,7 @@
* under the License.
*/
import { css, useTheme, themeObject } from '../..';
import { css, useTheme, getFontSize } from '../..';
import { AntdIconType, BaseIconProps, CustomIconType, IconType } from './types';
const genAriaLabel = (fileName: string) => {
@@ -52,7 +52,7 @@ export const BaseIconComponent: React.FC<
const style = {
color: iconColor,
fontSize: iconSize
? `${themeObject.getFontSize(iconSize)}px`
? `${getFontSize(theme, iconSize)}px`
: `${theme.fontSize}px`,
cursor: rest?.onClick ? 'pointer' : undefined,
};
@@ -76,12 +76,12 @@ export const BaseIconComponent: React.FC<
style={style}
width={
iconSize
? `${themeObject.getFontSize(iconSize) || theme.fontSize}px`
? `${getFontSize(theme, iconSize) || theme.fontSize}px`
: `${theme.fontSize}px`
}
height={
iconSize
? `${themeObject.getFontSize(iconSize) || theme.fontSize}px`
? `${getFontSize(theme, iconSize) || theme.fontSize}px`
: `${theme.fontSize}px`
}
{...(rest as CustomIconType)}
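The refactor in this file (and the ones that follow) replaces calls on the module-level singleton, `themeObject.getFontSize(size)`, with a standalone `getFontSize(theme, size)` that takes the theme from `useTheme()`, keeping components reactive to theme changes. A hypothetical sketch of the pattern; the token names and size keys below are assumptions, not the actual `@superset-ui/core` implementation:

```typescript
type FontSizeKey = 'xs' | 's' | 'm' | 'l' | 'xl';

interface ThemeTokens {
  fontSizeXS: number;
  fontSizeSM: number;
  fontSize: number;
  fontSizeLG: number;
  fontSizeXL: number;
}

// Pure function over an explicit theme argument instead of a method on a
// shared singleton: no hidden module state, and trivially testable.
function getFontSize(theme: ThemeTokens, size?: FontSizeKey): number {
  const map: Record<FontSizeKey, number> = {
    xs: theme.fontSizeXS,
    s: theme.fontSizeSM,
    m: theme.fontSize,
    l: theme.fontSizeLG,
    xl: theme.fontSizeXL,
  };
  return size ? map[size] : theme.fontSize;
}
```

Passing the theme explicitly also explains the `?? theme.fontSize` fallbacks in the diff: the call site, not the helper, owns the default.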

View File

@@ -19,7 +19,7 @@
import { KeyboardEvent, useMemo } from 'react';
import { SerializedStyles, CSSObject } from '@emotion/react';
import { kebabCase } from 'lodash';
import { css, t, useTheme, themeObject } from '@superset-ui/core';
import { css, t, useTheme, getFontSize } from '@superset-ui/core';
import {
CloseCircleOutlined,
InfoCircleOutlined,
@@ -68,7 +68,7 @@ export const InfoTooltip = ({
const iconCss = css`
color: ${variant?.color ?? theme.colorIcon};
font-size: ${themeObject.getFontSize(iconSize)}px;
font-size: ${getFontSize(theme, iconSize)}px;
`;
const handleKeyDown = (event: KeyboardEvent<HTMLSpanElement>) => {

View File

@@ -18,7 +18,7 @@
*/
import { Tag } from '@superset-ui/core/components/Tag';
import { css } from '@emotion/react';
import { useTheme, themeObject } from '@superset-ui/core';
import { useTheme, getColorVariants } from '@superset-ui/core';
import { DatasetTypeLabel } from './reusable/DatasetTypeLabel';
import { PublishedLabel } from './reusable/PublishedLabel';
import type { LabelProps } from './types';
@@ -37,7 +37,7 @@ export function Label(props: LabelProps) {
...rest
} = props;
const baseColor = themeObject.getColorVariants(type);
const baseColor = getColorVariants(theme, type);
const color = baseColor.active;
const borderColor = baseColor.border;
const backgroundColor = baseColor.bg;

View File

@@ -18,7 +18,7 @@
*/
import { useEffect, useState, FunctionComponent } from 'react';
import { t, styled, css } from '@superset-ui/core';
import { t, styled, css, useTheme } from '@superset-ui/core';
import dayjs from 'dayjs';
import { extendedDayjs } from '../../utils/dates';
import { Icons } from '../Icons';
@@ -38,24 +38,14 @@ extendedDayjs.updateLocale('en', {
});
const TextStyles = styled.span`
color: ${({ theme }) => theme.colors.grayscale.base};
`;
const RefreshIcon = styled(Icons.SyncOutlined)`
${({ theme }) => `
width: auto;
height: ${theme.sizeUnit * 5}px;
position: relative;
top: ${theme.sizeUnit}px;
left: ${theme.sizeUnit}px;
cursor: pointer;
`};
color: ${({ theme }) => theme.colorText};
`;
export const LastUpdated: FunctionComponent<LastUpdatedProps> = ({
updatedAt,
update,
}) => {
const theme = useTheme();
const [timeSince, setTimeSince] = useState<dayjs.Dayjs>(
extendedDayjs(updatedAt),
);
@@ -75,10 +65,9 @@ export const LastUpdated: FunctionComponent<LastUpdatedProps> = ({
<TextStyles>
{t('Last Updated %s', timeSince.isValid() ? timeSince.calendar() : '--')}
{update && (
<RefreshIcon
iconSize="l"
<Icons.SyncOutlined
css={css`
vertical-align: text-bottom;
margin-left: ${theme.sizeUnit * 2}px;
`}
onClick={update}
/>

View File

@@ -33,6 +33,7 @@ export function FormModal({
formSubmitHandler,
bodyStyle = {},
requiredFields = [],
name,
}: FormModalProps) {
const [form] = Form.useForm();
const [isSaving, setIsSaving] = useState(false);
@@ -78,6 +79,7 @@ export function FormModal({
return (
<Modal
name={name}
show={show}
title={title}
onHide={handleClose}

View File

@@ -33,7 +33,7 @@ export const InteractiveModal = (props: ModalProps) => (
InteractiveModal.args = {
disablePrimaryButton: false,
primaryButtonName: 'Danger',
primaryButtonType: 'danger',
primaryButtonStyle: 'danger',
show: true,
title: "I'm a modal!",
resizable: false,

View File

@@ -77,7 +77,7 @@ export const StyledModal = styled(BaseModal)<StyledModalProps>`
.ant-modal-header {
flex: 0 0 auto;
border-radius: ${theme.borderRadius}px ${theme.borderRadius}px 0 0;
padding: ${theme.sizeUnit * 4}px ${theme.sizeUnit * 6}px;
padding: ${theme.sizeUnit * 4}px ${theme.sizeUnit * 4}px;
.ant-modal-title {
font-weight: ${theme.fontWeightStrong};
@@ -121,7 +121,8 @@ export const StyledModal = styled(BaseModal)<StyledModalProps>`
.ant-modal-body {
flex: 0 1 auto;
padding: ${theme.sizeUnit * 4}px;
padding: ${theme.sizeUnit * 4}px ${theme.sizeUnit * 6}px;
overflow: auto;
${!resizable && height && `height: ${height};`}
}
@@ -208,7 +209,7 @@ const CustomModal = ({
onHide,
onHandledPrimaryAction,
primaryButtonName = t('OK'),
primaryButtonType = 'primary',
primaryButtonStyle = 'primary',
show,
name,
title,
@@ -261,7 +262,7 @@ const CustomModal = ({
</Button>,
<Button
key="submit"
buttonStyle={primaryButtonType}
buttonStyle={primaryButtonStyle}
disabled={disablePrimaryButton}
tooltip={primaryTooltipMessage}
loading={primaryButtonLoading}
@@ -332,7 +333,7 @@ const CustomModal = ({
}
footer={!hideFooter ? modalFooter : null}
hideFooter={hideFooter}
wrapProps={{ 'data-test': `${name || title}-modal`, ...wrapProps }}
wrapProps={{ 'data-test': `${name || 'antd'}-modal`, ...wrapProps }}
modalRender={modal =>
resizable || draggable ? (
<Draggable
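The `wrapProps` change above fixes the `data-test` fallback: `title` is a `ReactNode` and can no longer be safely interpolated into a string, so the diff falls back to the literal `'antd'` instead. The derivation, extracted as a sketch:

```typescript
// New fallback chain from the diff: prefer the string `name` prop,
// otherwise use the literal 'antd' (never the ReactNode title).
function modalTestId(name?: string): string {
  return `${name || 'antd'}-modal`;
}
```

This is also why several call sites in this PR (`ConfirmStatusChange`, `DeleteModal`, `FormModal`, `ModalTrigger`) now thread a `name` prop through: without it, every modal would get the same generic `antd-modal` test id.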

View File

@@ -20,6 +20,7 @@ import type { CSSProperties, ReactNode } from 'react';
import type { ModalFuncProps } from 'antd';
import type { ResizableProps } from 're-resizable';
import type { DraggableProps } from 'react-draggable';
import { ButtonStyle } from '../Button/types';
export interface ModalProps {
className?: string;
@@ -30,7 +31,7 @@ export interface ModalProps {
onHide: () => void;
onHandledPrimaryAction?: () => void;
primaryButtonName?: string;
primaryButtonType?: 'primary' | 'danger';
primaryButtonStyle?: ButtonStyle;
show: boolean;
name?: string;
title: ReactNode;
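The `primaryButtonType` to `primaryButtonStyle` rename widens the prop from the two legacy values to the full `ButtonStyle` union, so existing callers stay value-compatible. A hypothetical migration shim illustrating that (the union members listed are the styles that appear elsewhere in this PR; the exact `ButtonStyle` definition is not shown in these hunks):

```typescript
type ButtonStyle =
  | 'primary'
  | 'secondary'
  | 'tertiary'
  | 'dashed'
  | 'danger'
  | 'link';

// Both legacy values are members of ButtonStyle, so a rename at the call
// site is enough; 'primary' remains the default, as in the diff.
function migratePrimaryButtonProp(legacy?: 'primary' | 'danger'): ButtonStyle {
  return legacy ?? 'primary';
}
```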

View File

@@ -110,6 +110,7 @@ export const ModalTrigger = forwardRef(
className={className}
show={showModal}
onHide={close}
name={modalTitle}
title={modalTitle}
footer={modalFooter}
hideFooter={!modalFooter}

View File

@@ -17,7 +17,7 @@
* under the License.
*/
import { render } from '@superset-ui/core/spec';
import TelemetryPixel from '.';
import { TelemetryPixel } from '.';
const OLD_ENV = process.env;

View File

@@ -39,7 +39,7 @@ interface TelemetryPixelProps {
const PIXEL_ID = '0d3461e1-abb1-4691-a0aa-5ed50de66af0';
const TelemetryPixel = ({
export const TelemetryPixel = ({
version = 'unknownVersion',
sha = 'unknownSHA',
build = 'unknownBuild',
@@ -56,4 +56,3 @@ const TelemetryPixel = ({
/>
);
};
export default TelemetryPixel;

View File

@@ -1,150 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { Modal, Tooltip, Flex, Select } from 'antd';
import { Button, JsonEditor } from '@superset-ui/core/components';
import {
themeObject,
exampleThemes,
SerializableThemeConfig,
Theme,
AnyThemeConfig,
} from '@superset-ui/core';
import { useState } from 'react';
import { Icons } from '@superset-ui/core/components/Icons';
interface ThemeEditorProps {
tooltipTitle?: string;
modalTitle?: string;
theme?: Theme;
setTheme?: (config: AnyThemeConfig) => void;
}
const ThemeEditor: React.FC<ThemeEditorProps> = ({
tooltipTitle = 'Edit Theme',
modalTitle = 'Theme Editor',
theme,
setTheme,
}) => {
const [isModalOpen, setIsModalOpen] = useState<boolean>(false);
const jsonTheme =
JSON.stringify(theme?.toSerializedConfig(), null, 2) || '{}';
const [jsonMetadata, setJsonMetadata] = useState<string>(jsonTheme);
const [selectedTheme, setSelectedTheme] = useState<string | null>(null);
// Get theme names for the Select options
const themeOptions: { value: string; label: string }[] = Object.keys(
exampleThemes,
).map(key => ({
value: key,
label: key,
}));
const handleOpenModal = (): void => {
setIsModalOpen(true);
setJsonMetadata(JSON.stringify(theme?.toSerializedConfig(), null, 2));
};
const handleCancel = (): void => {
setIsModalOpen(false);
};
const handleSave = (): void => {
try {
const parsedTheme = JSON.parse(jsonMetadata);
setTheme?.(parsedTheme);
setIsModalOpen(false);
} catch (error) {
console.error('Invalid JSON in theme editor:', error);
alert('Error parsing JSON. Please check your input.');
}
};
const handleThemeChange = (value: string): void => {
setSelectedTheme(value);
// When a theme is selected, update the JSON editor with the theme definition
const themeData = exampleThemes[value] || ({} as SerializableThemeConfig);
setJsonMetadata(JSON.stringify(themeData, null, 2));
};
return (
<>
<Tooltip title={tooltipTitle} placement="bottom">
<Button
buttonStyle="link"
icon={
<Icons.BgColorsOutlined
iconSize="l"
iconColor={themeObject.theme.colorPrimary}
/>
}
onClick={handleOpenModal}
aria-label="Edit theme"
size="large"
/>
</Tooltip>
<Modal
title={modalTitle}
open={isModalOpen}
onCancel={handleCancel}
width={800}
centered
styles={{
body: {
padding: '24px',
},
}}
footer={
<Flex justify="end" gap="small">
<Button onClick={handleCancel} buttonStyle="secondary">
Cancel
</Button>
<Button type="primary" onClick={handleSave}>
Apply Theme
</Button>
</Flex>
}
>
<Flex vertical gap="middle">
<div>
Select a theme template:
<Select
placeholder="Choose a theme"
style={{ width: '100%', marginTop: '8px' }}
options={themeOptions}
onChange={handleThemeChange}
value={selectedTheme}
/>
</div>
<JsonEditor
showLoadingForImport
name="json_metadata"
value={jsonMetadata}
onChange={setJsonMetadata}
tabSize={2}
width="100%"
height="200px"
wrapEnabled
/>
</Flex>
</Modal>
</>
);
};
export default ThemeEditor;

View File

@@ -1,79 +0,0 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { Tooltip } from 'antd';
import { Dropdown, Icons } from '@superset-ui/core/components';
import { t } from '@superset-ui/core';
import { ThemeAlgorithm, ThemeMode } from '../../theme/types';
export interface ThemeSelectProps {
setThemeMode: (newMode: ThemeMode) => void;
tooltipTitle?: string;
themeMode: ThemeMode;
}
const ThemeSelect: React.FC<ThemeSelectProps> = ({
setThemeMode,
tooltipTitle = 'Select theme',
themeMode,
}) => {
const handleSelect = (mode: ThemeMode) => {
setThemeMode(mode);
};
const themeIconMap: Record<ThemeAlgorithm | ThemeMode, React.ReactNode> = {
[ThemeAlgorithm.DEFAULT]: <Icons.SunOutlined />,
[ThemeAlgorithm.DARK]: <Icons.MoonOutlined />,
[ThemeMode.SYSTEM]: <Icons.FormatPainterOutlined />,
[ThemeAlgorithm.COMPACT]: <Icons.CompressOutlined />,
};
return (
<Tooltip title={tooltipTitle} placement="bottom">
<Dropdown
menu={{
items: [
{
key: ThemeMode.DEFAULT,
label: t('Light'),
onClick: () => handleSelect(ThemeMode.DEFAULT),
icon: <Icons.SunOutlined />,
},
{
key: ThemeMode.DARK,
label: t('Dark'),
onClick: () => handleSelect(ThemeMode.DARK),
icon: <Icons.MoonOutlined />,
},
{
key: ThemeMode.SYSTEM,
label: t('Match system'),
onClick: () => handleSelect(ThemeMode.SYSTEM),
icon: <Icons.FormatPainterOutlined />,
},
],
}}
trigger={['click']}
>
{themeIconMap[themeMode] || <Icons.FormatPainterOutlined />}
</Dropdown>
</Tooltip>
);
};
export default ThemeSelect;

View File

@@ -0,0 +1,273 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import {
render,
screen,
userEvent,
waitFor,
within,
} from '@superset-ui/core/spec';
import { ThemeMode } from '@superset-ui/core';
import { Menu } from '@superset-ui/core/components';
import { ThemeSubMenu } from '.';
// Mock the translation function
jest.mock('@superset-ui/core', () => ({
...jest.requireActual('@superset-ui/core'),
t: (key: string) => key,
}));
describe('ThemeSubMenu', () => {
const defaultProps = {
allowOSPreference: true,
setThemeMode: jest.fn(),
themeMode: ThemeMode.DEFAULT,
hasLocalOverride: false,
onClearLocalSettings: jest.fn(),
};
const renderThemeSubMenu = (props = defaultProps) =>
render(
<Menu>
<ThemeSubMenu {...props} />
</Menu>,
);
const findMenuWithText = async (text: string) => {
await waitFor(() => {
const found = screen
.getAllByRole('menu')
.some(m => within(m).queryByText(text));
if (!found) throw new Error(`Menu with text "${text}" not yet rendered`);
});
return screen.getAllByRole('menu').find(m => within(m).queryByText(text))!;
};
beforeEach(() => {
jest.clearAllMocks();
});
it('renders Light and Dark theme options by default', async () => {
renderThemeSubMenu();
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Light');
expect(within(menu!).getByText('Light')).toBeInTheDocument();
expect(within(menu!).getByText('Dark')).toBeInTheDocument();
});
it('does not render Match system option when allowOSPreference is false', async () => {
renderThemeSubMenu({ ...defaultProps, allowOSPreference: false });
userEvent.hover(await screen.findByRole('menuitem'));
await waitFor(() => {
expect(screen.queryByText('Match system')).not.toBeInTheDocument();
});
});
it('renders with allowOSPreference as true by default', async () => {
renderThemeSubMenu();
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Match system');
expect(within(menu).getByText('Match system')).toBeInTheDocument();
});
it('renders clear option when both hasLocalOverride and onClearLocalSettings are provided', async () => {
const mockClear = jest.fn();
renderThemeSubMenu({
...defaultProps,
hasLocalOverride: true,
onClearLocalSettings: mockClear,
});
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Clear local theme');
expect(within(menu).getByText('Clear local theme')).toBeInTheDocument();
});
it('does not render clear option when hasLocalOverride is false', async () => {
const mockClear = jest.fn();
renderThemeSubMenu({
...defaultProps,
hasLocalOverride: false,
onClearLocalSettings: mockClear,
});
userEvent.hover(await screen.findByRole('menuitem'));
await waitFor(() => {
expect(screen.queryByText('Clear local theme')).not.toBeInTheDocument();
});
});
it('calls setThemeMode with DEFAULT when Light is clicked', async () => {
const mockSet = jest.fn();
renderThemeSubMenu({ ...defaultProps, setThemeMode: mockSet });
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Light');
userEvent.click(within(menu).getByText('Light'));
expect(mockSet).toHaveBeenCalledWith(ThemeMode.DEFAULT);
});
it('calls setThemeMode with DARK when Dark is clicked', async () => {
const mockSet = jest.fn();
renderThemeSubMenu({ ...defaultProps, setThemeMode: mockSet });
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Dark');
userEvent.click(within(menu).getByText('Dark'));
expect(mockSet).toHaveBeenCalledWith(ThemeMode.DARK);
});
it('calls setThemeMode with SYSTEM when Match system is clicked', async () => {
const mockSet = jest.fn();
renderThemeSubMenu({ ...defaultProps, setThemeMode: mockSet });
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Match system');
userEvent.click(within(menu).getByText('Match system'));
expect(mockSet).toHaveBeenCalledWith(ThemeMode.SYSTEM);
});
it('calls onClearLocalSettings when Clear local theme is clicked', async () => {
const mockClear = jest.fn();
renderThemeSubMenu({
...defaultProps,
hasLocalOverride: true,
onClearLocalSettings: mockClear,
});
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Clear local theme');
userEvent.click(within(menu).getByText('Clear local theme'));
expect(mockClear).toHaveBeenCalledTimes(1);
});
it('displays sun icon for DEFAULT theme', () => {
renderThemeSubMenu({ ...defaultProps, themeMode: ThemeMode.DEFAULT });
expect(screen.getByTestId('sun')).toBeInTheDocument();
});
it('displays moon icon for DARK theme', () => {
renderThemeSubMenu({ ...defaultProps, themeMode: ThemeMode.DARK });
expect(screen.getByTestId('moon')).toBeInTheDocument();
});
it('displays format-painter icon for SYSTEM theme', () => {
renderThemeSubMenu({ ...defaultProps, themeMode: ThemeMode.SYSTEM });
expect(screen.getByTestId('format-painter')).toBeInTheDocument();
});
it('displays override icon when hasLocalOverride is true', () => {
renderThemeSubMenu({ ...defaultProps, hasLocalOverride: true });
expect(screen.getByTestId('format-painter')).toBeInTheDocument();
});
it('renders Theme group header', async () => {
renderThemeSubMenu();
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Theme');
expect(within(menu).getByText('Theme')).toBeInTheDocument();
});
it('renders sun icon for Light theme option', async () => {
renderThemeSubMenu();
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Light');
const lightOption = within(menu).getByText('Light').closest('li');
expect(within(lightOption!).getByTestId('sun')).toBeInTheDocument();
});
it('renders moon icon for Dark theme option', async () => {
renderThemeSubMenu();
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Dark');
const darkOption = within(menu).getByText('Dark').closest('li');
expect(within(darkOption!).getByTestId('moon')).toBeInTheDocument();
});
it('renders format-painter icon for Match system option', async () => {
renderThemeSubMenu({ ...defaultProps, allowOSPreference: true });
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Match system');
const matchOption = within(menu).getByText('Match system').closest('li');
expect(
within(matchOption!).getByTestId('format-painter'),
).toBeInTheDocument();
});
it('renders clear icon for Clear local theme option', async () => {
renderThemeSubMenu({
...defaultProps,
hasLocalOverride: true,
onClearLocalSettings: jest.fn(),
});
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Clear local theme');
const clearOption = within(menu)
.getByText('Clear local theme')
.closest('li');
expect(within(clearOption!).getByTestId('clear')).toBeInTheDocument();
});
it('renders divider before clear option when clear option is present', async () => {
renderThemeSubMenu({
...defaultProps,
hasLocalOverride: true,
onClearLocalSettings: jest.fn(),
});
userEvent.hover(await screen.findByRole('menuitem'));
const menu = await findMenuWithText('Clear local theme');
const divider = within(menu).queryByRole('separator');
expect(divider).toBeInTheDocument();
});
it('does not render divider when clear option is not present', async () => {
renderThemeSubMenu({ ...defaultProps });
userEvent.hover(await screen.findByRole('menuitem'));
const divider = document.querySelector('.ant-menu-item-divider');
expect(divider).toBeNull();
});
});

View File

@@ -0,0 +1,170 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { useMemo } from 'react';
import { Icons, Menu } from '@superset-ui/core/components';
import {
css,
styled,
t,
ThemeMode,
useTheme,
ThemeAlgorithm,
} from '@superset-ui/core';
const StyledThemeSubMenu = styled(Menu.SubMenu)`
${({ theme }) => css`
[data-icon='caret-down'] {
color: ${theme.colorIcon};
font-size: ${theme.fontSizeXS}px;
margin-left: ${theme.sizeUnit}px;
}
&.ant-menu-submenu-active {
.ant-menu-title-content {
color: ${theme.colorPrimary};
}
}
`}
`;
const StyledThemeSubMenuItem = styled(Menu.Item)<{ selected: boolean }>`
${({ theme, selected }) => css`
&:hover {
color: ${theme.colorPrimary} !important;
cursor: pointer !important;
}
${selected &&
css`
background-color: ${theme.colors.primary.light4} !important;
color: ${theme.colors.primary.dark1} !important;
`}
`}
`;
export interface ThemeSubMenuOption {
key: ThemeMode;
label: string;
icon: React.ReactNode;
onClick: () => void;
}
export interface ThemeSubMenuProps {
setThemeMode: (newMode: ThemeMode) => void;
themeMode: ThemeMode;
hasLocalOverride?: boolean;
onClearLocalSettings?: () => void;
allowOSPreference?: boolean;
}
export const ThemeSubMenu: React.FC<ThemeSubMenuProps> = ({
setThemeMode,
themeMode,
hasLocalOverride = false,
onClearLocalSettings,
allowOSPreference = true,
}: ThemeSubMenuProps) => {
const theme = useTheme();
const handleSelect = (mode: ThemeMode) => {
setThemeMode(mode);
};
const themeIconMap: Record<ThemeAlgorithm | ThemeMode, React.ReactNode> =
useMemo(
() => ({
[ThemeAlgorithm.DEFAULT]: <Icons.SunOutlined />,
[ThemeAlgorithm.DARK]: <Icons.MoonOutlined />,
[ThemeMode.SYSTEM]: <Icons.FormatPainterOutlined />,
[ThemeAlgorithm.COMPACT]: <Icons.CompressOutlined />,
}),
[],
);
const selectedThemeModeIcon = useMemo(
() =>
hasLocalOverride ? (
<Icons.FormatPainterOutlined
style={{ color: theme.colors.error.base }}
/>
) : (
themeIconMap[themeMode]
),
[hasLocalOverride, theme.colors.error.base, themeIconMap, themeMode],
);
const themeOptions: ThemeSubMenuOption[] = [
{
key: ThemeMode.DEFAULT,
label: t('Light'),
icon: <Icons.SunOutlined />,
onClick: () => handleSelect(ThemeMode.DEFAULT),
},
{
key: ThemeMode.DARK,
label: t('Dark'),
icon: <Icons.MoonOutlined />,
onClick: () => handleSelect(ThemeMode.DARK),
},
...(allowOSPreference
? [
{
key: ThemeMode.SYSTEM,
label: t('Match system'),
icon: <Icons.FormatPainterOutlined />,
onClick: () => handleSelect(ThemeMode.SYSTEM),
},
]
: []),
];
// Add clear settings option only when there's a local theme active
const clearOption =
onClearLocalSettings && hasLocalOverride
? {
key: 'clear-local',
label: t('Clear local theme'),
icon: <Icons.ClearOutlined />,
onClick: onClearLocalSettings,
}
: null;
return (
<StyledThemeSubMenu
key="theme-sub-menu"
title={selectedThemeModeIcon}
icon={<Icons.CaretDownOutlined iconSize="xs" />}
>
<Menu.ItemGroup title={t('Theme')} />
{themeOptions.map(option => (
<StyledThemeSubMenuItem
key={option.key}
onClick={option.onClick}
selected={option.key === themeMode}
>
{option.icon} {option.label}
</StyledThemeSubMenuItem>
))}
{clearOption && [
<Menu.Divider key="theme-divider" />,
<Menu.Item key={clearOption.key} onClick={clearOption.onClick}>
{clearOption.icon} {clearOption.label}
</Menu.Item>,
]}
</StyledThemeSubMenu>
);
};
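The `themeOptions` array above includes the "Match system" entry only when `allowOSPreference` is true, using a conditional array spread. A minimal standalone sketch of that pattern (names illustrative, detached from the component):

```typescript
interface Option {
  key: string;
  label: string;
}

// Conditionally include an item in an array literal via spread:
// spreading an empty array contributes nothing.
function buildThemeOptions(allowOSPreference: boolean): Option[] {
  return [
    { key: 'default', label: 'Light' },
    { key: 'dark', label: 'Dark' },
    ...(allowOSPreference
      ? [{ key: 'system', label: 'Match system' }]
      : []),
  ];
}

console.log(buildThemeOptions(true).map(o => o.key)); // ['default', 'dark', 'system']
console.log(buildThemeOptions(false).map(o => o.key)); // ['default', 'dark']
```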

View File

@@ -19,6 +19,7 @@
import { styled, css } from '@superset-ui/core';
import { Typography as AntdTypography } from 'antd';
export type { TitleProps } from 'antd/es/typography/Title';
+ export type { ParagraphProps } from 'antd/es/typography/Paragraph';
const StyledLink = styled(AntdTypography.Link)`

View File

@@ -16,52 +16,10 @@
* specific language governing permissions and limitations
* under the License.
*/
- import { t, styled, css } from '@superset-ui/core';
- import { Icons, Modal, Typography } from '@superset-ui/core/components';
- import { Button } from '@superset-ui/core/components/Button';
+ import { t } from '@superset-ui/core';
+ import { Icons, Modal, Typography, Button } from '@superset-ui/core/components';
import type { FC, ReactElement } from 'react';
- const StyledModalTitle = styled(Typography.Title)`
- && {
- font-weight: 600;
- margin: 0;
- }
- `;
- const StyledModalBody = styled(Typography.Text)`
- ${({ theme }) => css`
- padding: 0 ${theme.sizeUnit * 2}px;
- && {
- margin: 0;
- }
- `}
- `;
- const StyledDiscardBtn = styled(Button)`
- ${({ theme }) => css`
- min-width: ${theme.sizeUnit * 22}px;
- height: ${theme.sizeUnit * 8}px;
- `}
- `;
- const StyledSaveBtn = styled(Button)`
- ${({ theme }) => css`
- min-width: ${theme.sizeUnit * 17}px;
- height: ${theme.sizeUnit * 8}px;
- span > :first-of-type {
- margin-right: 0;
- }
- `}
- `;
- const StyledWarningIcon = styled(Icons.WarningOutlined)`
- ${({ theme }) => css`
- color: ${theme.colorWarning};
- margin-right: ${theme.sizeUnit * 4}px;
- `}
- `;
export type UnsavedChangesModalProps = {
showModal: boolean;
onHide: () => void;
@@ -86,44 +44,22 @@ export const UnsavedChangesModal: FC<UnsavedChangesModalProps> = ({
show={showModal}
width="444px"
title={
- <div
- css={css`
- align-items: center;
- display: flex;
- `}
- >
- <StyledWarningIcon iconSize="xl" />
- <StyledModalTitle type="secondary" level={5}>
- {title}
- </StyledModalTitle>
- </div>
+ <>
+ <Icons.WarningOutlined iconSize="m" style={{ marginRight: 8 }} />
+ {title}
+ </>
}
footer={
- <div
- css={css`
- display: flex;
- justify-content: flex-end;
- width: 100%;
- `}
- >
- <StyledDiscardBtn
- htmlType="button"
- buttonSize="small"
- onClick={onConfirmNavigation}
- >
+ <>
+ <Button buttonStyle="secondary" onClick={onConfirmNavigation}>
{t('Discard')}
- </StyledDiscardBtn>
- <StyledSaveBtn
- htmlType="button"
- buttonSize="small"
- buttonStyle="primary"
- onClick={handleSave}
- >
+ </Button>
+ <Button buttonStyle="primary" onClick={handleSave}>
{t('Save')}
- </StyledSaveBtn>
- </div>
+ </Button>
+ </>
}
>
- <StyledModalBody type="secondary">{body}</StyledModalBody>
+ <Typography.Text>{body}</Typography.Text>
</Modal>
);

View File

@@ -148,6 +148,7 @@ export {
Typography,
type TypographyProps,
type ParagraphProps,
type TitleProps,
} from './Typography';
export { Image, type ImageProps } from './Image';
@@ -163,6 +164,8 @@ export * from './Steps';
export * from './Table';
export * from './TableView';
export * from './Tag';
export * from './TelemetryPixel';
export * from './ThemeSubMenu';
export * from './UnsavedChangesModal';
export * from './constants';
export * from './Result';

View File

@@ -32,7 +32,7 @@ export const CACHE_KEY = '@SUPERSET-UI/CONNECTION';
export const DEFAULT_FETCH_RETRY_OPTIONS: FetchRetryOptions = {
retries: 3,
retryDelay: 1000,
- retryOn: [503],
+ retryOn: [502, 503, 504],
};
export const COMMON_ERR_MESSAGES = {
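Adding 502 and 504 to `retryOn` means transient gateway failures are now retried alongside 503. A sketch of how a fetch-retry style helper consults these options (`shouldRetry` is a hypothetical helper for illustration, not the actual SupersetClient code):

```typescript
interface FetchRetryOptions {
  retries: number;
  retryDelay: number;
  retryOn: number[];
}

const DEFAULT_FETCH_RETRY_OPTIONS: FetchRetryOptions = {
  retries: 3,
  retryDelay: 1000,
  retryOn: [502, 503, 504],
};

// A response status triggers another attempt only while attempts remain
// and the status is in the retryOn list.
function shouldRetry(
  status: number,
  attempt: number,
  options: FetchRetryOptions = DEFAULT_FETCH_RETRY_OPTIONS,
): boolean {
  return attempt < options.retries && options.retryOn.indexOf(status) !== -1;
}

console.log(shouldRetry(502, 0)); // true — retried after this change
console.log(shouldRetry(500, 0)); // false — 500 is not in retryOn
console.log(shouldRetry(503, 3)); // false — retries exhausted
```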

View File

@@ -63,6 +63,7 @@ export default function buildQueryObject<T extends QueryFormData>(
series_columns,
series_limit,
series_limit_metric,
+ group_others_when_limit_reached,
...residualFormData
} = formData;
const {
@@ -128,6 +129,7 @@ export default function buildQueryObject<T extends QueryFormData>(
normalizeSeriesLimitMetric(series_limit_metric) ??
timeseries_limit_metric ??
undefined,
+ group_others_when_limit_reached: group_others_when_limit_reached ?? false,
order_desc: typeof order_desc === 'undefined' ? true : order_desc,
url_params: url_params || undefined,
custom_params,
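The hunk above destructures `group_others_when_limit_reached` out of `formData` (so it is excluded from `residualFormData`) and re-adds it to the query object with an explicit `false` default. The pattern in isolation (a simplified sketch, not the full `buildQueryObject`):

```typescript
interface FormDataSketch {
  group_others_when_limit_reached?: boolean;
  [key: string]: unknown;
}

// Destructure the known key out so it doesn't leak into residualFormData,
// then apply an explicit `?? false` default, as in the diff above.
function extractLimitFlag(formData: FormDataSketch) {
  const { group_others_when_limit_reached, ...residualFormData } = formData;
  return {
    group_others_when_limit_reached: group_others_when_limit_reached ?? false,
    residualKeys: Object.keys(residualFormData),
  };
}

const flags = extractLimitFlag({ viz_type: 'table' });
console.log(flags.group_others_when_limit_reached); // false
console.log(flags.residualKeys); // ['viz_type']
```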

View File

@@ -189,24 +189,6 @@ describe('Theme', () => {
});
});
- describe('getFontSize', () => {
- it('returns correct font size for given key', () => {
- const theme = Theme.fromConfig();
- // Test different font size keys
- expect(theme.getFontSize('xs')).toBe('8');
- expect(theme.getFontSize('m')).toBeTruthy();
- expect(theme.getFontSize('xxl')).toBe('28');
- });
- it('defaults to medium font size when no key is provided', () => {
- const theme = Theme.fromConfig();
- const mediumSize = theme.getFontSize('m');
- expect(theme.getFontSize()).toBe(mediumSize);
- });
- });
describe('toSerializedConfig', () => {
it('serializes theme config correctly', () => {
const theme = Theme.fromConfig({

View File

@@ -20,7 +20,6 @@
// eslint-disable-next-line no-restricted-syntax
import React from 'react';
import { theme as antdThemeImport, ConfigProvider } from 'antd';
- import tinycolor from 'tinycolor2';
// @fontsource/* v5.1+ doesn't play nice with eslint-import plugin v2.31+
/* eslint-disable import/extensions */
@@ -44,6 +43,7 @@ import {
} from '@emotion/react';
import createCache from '@emotion/cache';
import { noop } from 'lodash';
+ import { isThemeDark } from './utils/themeUtils';
import { GlobalStyles } from './GlobalStyles';
import {
@@ -53,9 +53,7 @@ import {
SupersetTheme,
allowedAntdTokens,
SharedAntdTokens,
- ColorVariants,
DeprecatedThemeColors,
- FontSizeKey,
} from './types';
import {
@@ -102,15 +100,6 @@ export class Theme {
private antdConfig: AntdThemeConfig;
- private static readonly sizeMap: Record<FontSizeKey, string> = {
- xs: 'fontSizeXS',
- s: 'fontSizeSM',
- m: 'fontSize',
- l: 'fontSizeLG',
- xl: 'fontSizeXL',
- xxl: 'fontSizeXXL',
- };
private constructor({ config }: { config?: AnyThemeConfig }) {
this.SupersetThemeProvider = this.SupersetThemeProvider.bind(this);
@@ -183,7 +172,7 @@ export class Theme {
// Second phase: Now that theme is initialized, we can determine if it's dark
// and generate the legacy colors correctly
const systemColors = getSystemColors(tokens);
- const isDark = this.isThemeDark(); // Now we can safely call this
+ const isDark = isThemeDark(this.theme); // Use utility function with theme
this.theme.colors = getDeprecatedColors(systemColors, isDark);
// Update the providers with the fully formed theme
@@ -201,22 +190,6 @@ export class Theme {
return serializeThemeConfig(this.antdConfig);
}
- private getToken(token: string): any {
- return (this.theme as Record<string, any>)[token];
- }
- public getFontSize(size?: FontSizeKey): string {
- const fontSizeKey = Theme.sizeMap[size || 'm'];
- return this.getToken(fontSizeKey) || this.getToken('fontSize');
- }
- /**
- * Check if the current theme is dark based on background color
- */
- isThemeDark(): boolean {
- return tinycolor(this.theme.colorBgContainer).isDark();
- }
toggleDarkMode(isDark: boolean): void {
// Create a new config based on the current one
const newConfig = { ...this.antdConfig };
@@ -250,45 +223,6 @@ export class Theme {
return JSON.stringify(serializeThemeConfig(this.antdConfig), null, 2);
}
- getColorVariants(color: string): ColorVariants {
- const firstLetterCapped = color.charAt(0).toUpperCase() + color.slice(1);
- if (color === 'default' || color === 'grayscale') {
- const isDark = this.isThemeDark();
- const flipBrightness = (baseColor: string): string => {
- if (!isDark) return baseColor;
- const { r, g, b } = tinycolor(baseColor).toRgb();
- const invertedColor = tinycolor({ r: 255 - r, g: 255 - g, b: 255 - b });
- return invertedColor.toHexString();
- };
- return {
- active: flipBrightness('#222'),
- textActive: flipBrightness('#444'),
- text: flipBrightness('#555'),
- textHover: flipBrightness('#666'),
- hover: flipBrightness('#888'),
- borderHover: flipBrightness('#AAA'),
- border: flipBrightness('#CCC'),
- bgHover: flipBrightness('#DDD'),
- bg: flipBrightness('#F4F4F4'),
- };
- }
- const theme = this.getToken.bind(this);
- return {
- active: theme(`color${firstLetterCapped}Active`),
- textActive: theme(`color${firstLetterCapped}TextActive`),
- text: theme(`color${firstLetterCapped}Text`),
- textHover: theme(`color${firstLetterCapped}TextHover`),
- hover: theme(`color${firstLetterCapped}Hover`),
- borderHover: theme(`color${firstLetterCapped}BorderHover`),
- border: theme(`color${firstLetterCapped}Border`),
- bgHover: theme(`color${firstLetterCapped}BgHover`),
- bg: theme(`color${firstLetterCapped}Bg`),
- };
- }
private updateProviders(
theme: SupersetTheme,
antdConfig: AntdThemeConfig,

View File

@@ -26,7 +26,9 @@ import {
type ThemeStorage,
type ThemeControllerOptions,
type ThemeContextType,
type SupersetThemeConfig,
ThemeAlgorithm,
ThemeMode,
} from './types';
export {
@@ -66,7 +68,16 @@ const themeObject: Theme = Theme.fromConfig({
const { theme } = themeObject;
const supersetTheme = theme;
- export { Theme, themeObject, styled, theme, supersetTheme };
+ export {
+ Theme,
+ ThemeAlgorithm,
+ ThemeMode,
+ themeObject,
+ styled,
+ theme,
+ supersetTheme,
+ };
export type {
SupersetTheme,
SerializableThemeConfig,
@@ -74,4 +85,8 @@ export type {
ThemeStorage,
ThemeControllerOptions,
ThemeContextType,
SupersetThemeConfig,
};
// Export theme utility functions
export * from './utils/themeUtils';

View File

@@ -411,6 +411,7 @@ export interface ThemeControllerOptions {
onChange?: (theme: Theme) => void;
canUpdateTheme?: () => boolean;
canUpdateMode?: () => boolean;
+ isGlobalContext?: boolean;
}
export interface ThemeContextType {
@@ -419,4 +420,25 @@ export interface ThemeContextType {
setTheme: (config: AnyThemeConfig) => void;
setThemeMode: (newMode: ThemeMode) => void;
resetTheme: () => void;
setTemporaryTheme: (config: AnyThemeConfig) => void;
clearLocalOverrides: () => void;
getCurrentCrudThemeId: () => string | null;
hasDevOverride: () => boolean;
canSetMode: () => boolean;
canSetTheme: () => boolean;
canDetectOSPreference: () => boolean;
createDashboardThemeProvider: (themeId: string) => Promise<Theme | null>;
}
/**
* Configuration object for complete theme setup including default, dark themes and settings
*/
export interface SupersetThemeConfig {
theme_default: AnyThemeConfig;
theme_dark?: AnyThemeConfig;
theme_settings?: {
enforced?: boolean;
allowSwitching?: boolean;
allowOSPreference?: boolean;
};
}
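A hypothetical value satisfying the `SupersetThemeConfig` shape above (field names from the diff; the token and algorithm values are purely illustrative):

```typescript
// Illustrative config literal; per the interface, theme_dark and
// theme_settings are optional.
const themeConfig = {
  theme_default: { token: { colorPrimary: '#1890ff' } },
  theme_dark: { algorithm: 'dark' },
  theme_settings: {
    enforced: false,
    allowSwitching: true,
    allowOSPreference: true,
  },
};

console.log(themeConfig.theme_settings.allowSwitching); // true
```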

View File

@@ -0,0 +1,134 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import { getFontSize, getColorVariants, isThemeDark } from './themeUtils';
import { Theme } from '../Theme';
import { ThemeAlgorithm } from '../types';
// Mock emotion's cache to avoid actual DOM operations
jest.mock('@emotion/cache', () => ({
__esModule: true,
default: jest.fn().mockReturnValue({}),
}));
describe('themeUtils', () => {
let lightTheme: Theme;
let darkTheme: Theme;
beforeEach(() => {
jest.clearAllMocks();
// Create actual theme instances for testing
lightTheme = Theme.fromConfig({
token: {
colorPrimary: '#1890ff',
fontSizeXS: '8',
fontSize: '14',
fontSizeLG: '16',
},
});
darkTheme = Theme.fromConfig({
algorithm: ThemeAlgorithm.DARK,
token: {
colorPrimary: '#1890ff',
fontSizeXS: '8',
fontSize: '14',
fontSizeLG: '16',
},
});
});
describe('getFontSize', () => {
it('returns correct font size for given key', () => {
expect(getFontSize(lightTheme.theme, 'xs')).toBe('8');
expect(getFontSize(lightTheme.theme, 'm')).toBe('14');
expect(getFontSize(lightTheme.theme, 'l')).toBe('16');
});
it('defaults to medium font size when no key is provided', () => {
expect(getFontSize(lightTheme.theme)).toBe('14');
});
it('uses antd default when specific size not overridden', () => {
// Create theme with minimal config - antd will provide defaults
const minimalTheme = Theme.fromConfig({
token: { fontSize: '14' },
});
// Ant Design provides fontSizeXS: '8' by default
expect(getFontSize(minimalTheme.theme, 'xs')).toBe('8');
expect(getFontSize(minimalTheme.theme, 'm')).toBe('14');
});
});
describe('isThemeDark', () => {
it('returns false for light theme', () => {
expect(isThemeDark(lightTheme.theme)).toBe(false);
});
it('returns true for dark theme', () => {
expect(isThemeDark(darkTheme.theme)).toBe(true);
});
});
describe('getColorVariants', () => {
it('returns correct variants for primary color', () => {
const variants = getColorVariants(lightTheme.theme, 'primary');
expect(variants.text).toBeDefined();
expect(variants.bg).toBeDefined();
expect(variants.border).toBeDefined();
expect(variants.active).toBeDefined();
});
it('returns grayscale variants for default color in light theme', () => {
const variants = getColorVariants(lightTheme.theme, 'default');
expect(variants.active).toBe('#222');
expect(variants.textActive).toBe('#444');
expect(variants.text).toBe('#555');
expect(variants.bg).toBe('#F4F4F4');
});
it('returns inverted grayscale variants for default color in dark theme', () => {
const variants = getColorVariants(darkTheme.theme, 'default');
// In dark theme, colors should be inverted
expect(variants.active).toBe('#dddddd'); // Inverted #222
expect(variants.textActive).toBe('#bbbbbb'); // Inverted #444
expect(variants.text).toBe('#aaaaaa'); // Inverted #555
});
it('returns same variants for grayscale color as default', () => {
const defaultVariants = getColorVariants(lightTheme.theme, 'default');
const grayscaleVariants = getColorVariants(lightTheme.theme, 'grayscale');
expect(defaultVariants).toEqual(grayscaleVariants);
});
it('handles missing color tokens gracefully', () => {
const variants = getColorVariants(lightTheme.theme, 'nonexistent');
// Should return undefined for missing tokens
expect(variants.active).toBeUndefined();
expect(variants.text).toBeUndefined();
expect(variants.bg).toBeUndefined();
});
});
});

View File

@@ -0,0 +1,113 @@
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
import tinycolor from 'tinycolor2';
import type { SupersetTheme, FontSizeKey, ColorVariants } from '../types';
const fontSizeMap: Record<FontSizeKey, keyof SupersetTheme> = {
xs: 'fontSizeXS',
s: 'fontSizeSM',
m: 'fontSize',
l: 'fontSizeLG',
xl: 'fontSizeXL',
xxl: 'fontSizeXXL',
};
/**
* Get font size from theme tokens based on size key
* @param theme - Theme tokens from useTheme()
* @param size - Font size key
* @returns Font size as string
*/
export function getFontSize(theme: SupersetTheme, size?: FontSizeKey): string {
const key = fontSizeMap[size || 'm'];
return String(theme[key] || theme.fontSize);
}
/**
* Get color variants for a given color type from theme tokens
* @param theme - Theme tokens from useTheme()
* @param color - Color type (e.g., 'primary', 'error', 'success')
* @returns ColorVariants object with bg, border, text colors etc.
*/
export function getColorVariants(
theme: SupersetTheme,
color: string,
): ColorVariants {
const firstLetterCapped = color.charAt(0).toUpperCase() + color.slice(1);
if (color === 'default' || color === 'grayscale') {
const isDark = isThemeDark(theme);
const flipBrightness = (baseColor: string): string => {
if (!isDark) return baseColor;
const { r, g, b } = tinycolor(baseColor).toRgb();
const invertedColor = tinycolor({ r: 255 - r, g: 255 - g, b: 255 - b });
return invertedColor.toHexString();
};
return {
active: flipBrightness('#222'),
textActive: flipBrightness('#444'),
text: flipBrightness('#555'),
textHover: flipBrightness('#666'),
hover: flipBrightness('#888'),
borderHover: flipBrightness('#AAA'),
border: flipBrightness('#CCC'),
bgHover: flipBrightness('#DDD'),
bg: flipBrightness('#F4F4F4'),
};
}
return {
active: theme[
`color${firstLetterCapped}Active` as keyof SupersetTheme
] as string,
textActive: theme[
`color${firstLetterCapped}TextActive` as keyof SupersetTheme
] as string,
text: theme[
`color${firstLetterCapped}Text` as keyof SupersetTheme
] as string,
textHover: theme[
`color${firstLetterCapped}TextHover` as keyof SupersetTheme
] as string,
hover: theme[
`color${firstLetterCapped}Hover` as keyof SupersetTheme
] as string,
borderHover: theme[
`color${firstLetterCapped}BorderHover` as keyof SupersetTheme
] as string,
border: theme[
`color${firstLetterCapped}Border` as keyof SupersetTheme
] as string,
bgHover: theme[
`color${firstLetterCapped}BgHover` as keyof SupersetTheme
] as string,
bg: theme[`color${firstLetterCapped}Bg` as keyof SupersetTheme] as string,
};
}
/**
* Check if the current theme is dark mode based on background color
* @param theme - Theme tokens from useTheme()
* @returns true if theme is dark, false if light
*/
export function isThemeDark(theme: SupersetTheme): boolean {
return tinycolor(theme.colorBgContainer).isDark();
}
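`flipBrightness` above inverts each RGB channel through tinycolor; the arithmetic behind the values asserted in the tests (#222 → #dddddd, #444 → #bbbbbb) can be sketched without the dependency (`invertHex` is a hypothetical helper name):

```typescript
// Per-channel RGB inversion, as used by flipBrightness above
// (tinycolor-free sketch).
function invertHex(hex: string): string {
  // Expand shorthand like '#222' to '#222222'.
  const full =
    hex.length === 4
      ? `#${hex.slice(1).split('').map(c => c + c).join('')}`
      : hex;
  const n = parseInt(full.slice(1), 16);
  const r = 255 - ((n >> 16) & 0xff);
  const g = 255 - ((n >> 8) & 0xff);
  const b = 255 - (n & 0xff);
  // Pad each inverted channel back to two hex digits.
  return `#${[r, g, b].map(v => ('0' + v.toString(16)).slice(-2)).join('')}`;
}

console.log(invertHex('#222')); // '#dddddd'
console.log(invertHex('#444')); // '#bbbbbb'
```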

View File

@@ -30,6 +30,7 @@ export enum FeatureFlag {
AvoidColorsCollision = 'AVOID_COLORS_COLLISION',
ChartPluginsExperimental = 'CHART_PLUGINS_EXPERIMENTAL',
ConfirmDashboardDiff = 'CONFIRM_DASHBOARD_DIFF',
+ CssTemplates = 'CSS_TEMPLATES',
DashboardVirtualization = 'DASHBOARD_VIRTUALIZATION',
DashboardRbac = 'DASHBOARD_RBAC',
DatapanelClosedByDefault = 'DATAPANEL_CLOSED_BY_DEFAULT',
@@ -53,8 +54,6 @@ export enum FeatureFlag {
SqlValidatorsByEngine = 'SQL_VALIDATORS_BY_ENGINE',
SshTunneling = 'SSH_TUNNELING',
TaggingSystem = 'TAGGING_SYSTEM',
- ThemeEnableDarkThemeSwitch = 'THEME_ENABLE_DARK_THEME_SWITCH',
- ThemeAllowThemeEditorBeta = 'THEME_ALLOW_THEME_EDITOR_BETA',
Thumbnails = 'THUMBNAILS',
UseAnalogousColors = 'USE_ANALOGOUS_COLORS',
ForceSqlLabRunAsync = 'SQLLAB_FORCE_RUN_ASYNC',

View File

@@ -119,7 +119,7 @@ describe('ChartProps', () => {
});
expect(props1).not.toBe(props2);
});
- it('selector returns a new chartProps if some input fields change', () => {
+ it('selector returns a new chartProps if some input fields change and returns memoized chart props', () => {
const props1 = selector({
width: 800,
height: 600,
@@ -145,7 +145,7 @@ describe('ChartProps', () => {
theme: supersetTheme,
});
expect(props1).not.toBe(props2);
- expect(props1).not.toBe(props3);
+ expect(props1).toBe(props3);
});
});
});
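The corrected assertion expects `props1` and `props3` to be the identical object even though a call with different inputs happened in between, which implies the selector caches per input rather than only remembering the last call. A sketch of keyed memoization that yields that behavior (an illustration of the concept, not ChartProps' actual implementation):

```typescript
// Cache one result object per key; repeating an earlier key returns
// the identical object, so strict-equality (toBe) checks pass.
function memoizeByKey<T extends object>(build: (key: string) => T) {
  const cache = new Map<string, T>();
  return (key: string): T => {
    if (!cache.has(key)) cache.set(key, build(key));
    return cache.get(key)!;
  };
}

const makeProps = memoizeByKey(key => ({ key }));
const p1 = makeProps('800x600');
const p2 = makeProps('300x600');
const p3 = makeProps('800x600');
console.log(p1 === p2); // false — different input
console.log(p1 === p3); // true — memoized
```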

View File

@@ -1385,7 +1385,7 @@ export default function (config) {
p[0] = p[0] - __.margin.left;
p[1] = p[1] - __.margin.top;
- (dims = dimensionsForPoint(p)),
+ ((dims = dimensionsForPoint(p)),
(strum = {
p1: p,
dims: dims,
@@ -1393,7 +1393,7 @@ export default function (config) {
maxX: xscale(dims.right),
minY: 0,
maxY: h(),
- });
+ }));
strums[dims.i] = strum;
strums.active = dims.i;
@@ -1942,7 +1942,7 @@ export default function (config) {
p[0] = p[0] - __.margin.left;
p[1] = p[1] - __.margin.top;
- (dims = dimensionsForPoint(p)),
+ ((dims = dimensionsForPoint(p)),
(arc = {
p1: p,
dims: dims,
@@ -1953,7 +1953,7 @@ export default function (config) {
startAngle: undefined,
endAngle: undefined,
arc: d3.svg.arc().innerRadius(0),
- });
+ }));
arcs[dims.i] = arc;
arcs.active = dims.i;
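The change above wraps a multi-assignment comma expression in an outer pair of parentheses. The comma operator evaluates its operands left to right and yields the last value, so the wrapped form is a single expression with identical behavior (a formatting-level change):

```typescript
// The comma operator: each operand runs in order, and the whole
// parenthesized expression evaluates to the last operand's value.
let dims = 0;
let strum = { p1: -1 };
const last = ((dims = 2), (strum = { p1: dims }));

console.log(last === strum); // true — the expression yields the last assignment
console.log(last.p1); // 2
```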

View File

@@ -24,7 +24,7 @@ import {
getSequentialSchemeRegistry,
CategoricalColorNamespace,
} from '@superset-ui/core';
- import Datamap from 'datamaps/dist/datamaps.world.min';
+ import Datamap from 'datamaps/dist/datamaps.all.min';
import { ColorBy } from './utils';
const propTypes = {

Some files were not shown because too many files have changed in this diff.