Compare commits

...

5 Commits

Author SHA1 Message Date
Maxime Beauchemin
0796aa1c6d Update UPDATING.md
Co-authored-by: Evan Rusackas <evan@preset.io>
2025-07-30 14:46:17 -07:00
Maxime Beauchemin
14e6ec7d9f fix(examples): Load all YAML examples with --load-test-data flag
The integration tests depend on core examples like birth_names being loaded
even when using the --load-test-data flag. This fix ensures that all YAML
files are loaded when load_test_data is True, not just .test. files.

This resolves CI failures where tests couldn't find expected slices because
the birth_names examples weren't being loaded.
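The selection logic described in this commit message can be sketched as follows (the helper name and flat file layout are illustrative assumptions, not Superset's actual code):

```python
from pathlib import Path

def select_example_configs(config_dir: str, load_test_data: bool) -> list[Path]:
    # Hypothetical helper illustrating the fix: with --load-test-data,
    # load ALL YAML files (core examples like birth_names plus the
    # ".test." fixtures), instead of only the ".test." files.
    paths = sorted(Path(config_dir).glob("*.yaml"))
    if load_test_data:
        return paths  # the fix: include core examples too
    return [p for p in paths if ".test." not in p.name]
```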

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-07-30 00:11:59 -07:00
Maxime Beauchemin
14ffa69e0b fix(tests): Align test slice names with YAML examples 2025-07-29 23:13:46 -07:00
Maxime Beauchemin
ef4cf2b430 remove --maxfail=1 on integration tests to iterate faster 2025-07-29 22:36:12 -07:00
Maxime Beauchemin
48d8c91b19 feat: migrate examples from Python to YAML format with enhanced CLI
Migrates Superset's example data system from Python-based scripts to YAML configuration files, providing a cleaner, more maintainable approach to managing example datasets, charts, and dashboards.

- Converted 9 Python example modules to YAML configurations
- Exported existing examples from database and added as YAML files:
  - 11 dashboards (USA Births Names, World Bank's Data, etc.)
  - 115 charts
  - 25 datasets
- Moved test-specific fixtures to `tests/fixtures/examples/`
- Removed theme_id from dashboard exports for compatibility

- **New command group**: `superset examples` with subcommands:
  - `load` - Load example data (replaces `load-examples`)
  - `clear-old` - Remove old Python-based examples
  - `clear` - Placeholder for future YAML clearing
  - `reload` - Clear and reload in one command
- **Backwards compatibility**: `superset load-examples` still works with deprecation warning
- **Safety mechanism**: Detects old examples and preserves them to avoid data loss

- Fixed JSON data loading - examples can now load `.json.gz` files from CDN
- Fixed Docker compose configuration for isolated development
- Fixed webpack WebSocket configuration for different ports
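The JSON fix can be sketched like this (a hypothetical standalone helper mirroring the `load_data` change in this PR: decompress gzip streams, then pick the pandas reader from the URI instead of always assuming CSV):

```python
import gzip
import io

import pandas as pd

def load_example_df(data_uri: str, data) -> pd.DataFrame:
    # Mirrors the dataset-import fix: gunzip if the URI ends in .gz,
    # then dispatch on format so .json and .json.gz files load correctly.
    if data_uri.endswith(".gz"):
        data = gzip.open(data)
    if ".json" in data_uri:
        return pd.read_json(data)
    return pd.read_csv(data)
```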

- Import operations now log what's being created vs updated:
  - "Creating new dashboard: Sales Dashboard"
  - "Updating existing chart: World's Population"
- Provides clear visibility into the import process
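The create-vs-update logging pattern amounts to roughly this (the helper and its signature are illustrative, not the actual import functions):

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)

def log_import_action(existing: Optional[object], kind: str, name: str) -> str:
    # One log line per imported object, stating whether the import
    # creates a new object or updates an existing one.
    action = "Updating existing" if existing else "Creating new"
    message = f"{action} {kind}: {name}"
    logger.info(message)
    return message
```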

- Moved import logging to individual import functions (DRY principle)
- Non-destructive migration approach - no user data is deleted
- Deterministic UUID generation for consistent example data
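Deterministic UUIDs can be derived from stable names via `uuid5` (the namespace seed and key scheme below are assumptions, not the actual implementation):

```python
import uuid

# Hypothetical namespace seed; the real implementation may differ.
_EXAMPLES_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "examples.superset.apache.org")

def deterministic_uuid(object_type: str, name: str) -> uuid.UUID:
    # The same (type, name) pair always yields the same UUID, so
    # repeated example loads update objects in place instead of
    # creating duplicates.
    return uuid.uuid5(_EXAMPLES_NS, f"{object_type}:{name}")
```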

- Tested migration from old Python examples to new YAML format
- Verified safety mechanism prevents accidental data overwrites
- Confirmed backwards compatibility with deprecated command
- All pre-commit checks pass

- Updated installation docs to use new CLI commands
- Added deprecation notice to UPDATING.md
- Updated development documentation

No breaking changes - the old `load-examples` command continues to work with a deprecation warning.

For users with existing Python-based examples:
1. Run `superset examples clear-old --confirm` to remove old examples
2. Run `superset examples load` to load new YAML-based examples
2025-07-29 22:23:52 -07:00
99 changed files with 8332 additions and 2853 deletions

View File

@@ -55,6 +55,7 @@ esm/*
tsconfig.tsbuildinfo
.*ipynb
.*yml
.*yaml
.*iml
.esprintrc
.prettierignore

View File

@@ -23,6 +23,8 @@ This file documents any backwards-incompatible changes in Superset and
assists people when migrating to a new version.
## Next
- [34346](https://github.com/apache/superset/pull/34346) The examples system has been migrated from Python-based scripts to YAML configuration files. The CLI command `superset load-examples` has been deprecated in favor of `superset examples load`. The old command still works but will show a deprecation warning. Additional example management commands are available under `superset examples` including `clear-old` and `reload`. If you have old Python-based examples loaded, the new YAML-based examples will not load automatically to preserve your existing data. To migrate to the new examples, run `superset examples clear-old --confirm` followed by `superset examples load`.
**Note**: This change affects Cypress tests that rely on specific chart names from the old examples (e.g., "Num Births Trend", "Daily Totals"). These charts may not exist in the new YAML examples, causing test failures. Consider updating your Cypress tests or creating test-specific fixtures.
- [33084](https://github.com/apache/superset/pull/33084) The DISALLOWED_SQL_FUNCTIONS configuration now includes additional potentially sensitive database functions across PostgreSQL, MySQL, SQLite, MS SQL Server, and ClickHouse. Existing queries using these functions may now be blocked. Review your SQL Lab queries and dashboards if you encounter "disallowed function" errors after upgrading.
- [34235](https://github.com/apache/superset/pull/34235) CSV exports now use `utf-8-sig` encoding by default to include a UTF-8 BOM, improving compatibility with Excel.
- [34258](https://github.com/apache/superset/pull/34258) Changed the default build arg in the Dockerfile to INCLUDE_CHROMIUM="false" (previously "true"). This ensures the `lean` layer is lean by default, and people can opt in to the `chromium` layer by setting the build arg `INCLUDE_CHROMIUM=true`. This is a breaking change for anyone using the `lean` layer, as it no longer includes Chromium by default.

View File

@@ -20,9 +20,6 @@
# If you choose to use this type of deployment make sure to
# create your own docker environment file (docker/.env) with your own
# unique random secure passwords and SECRET_KEY.
#
# For verbose logging during development:
# - Set SUPERSET_LOG_LEVEL=debug in docker/.env-local for detailed Superset logs
# -----------------------------------------------------------------------
x-superset-image: &superset-image apachesuperset.docker.scarf.sh/apache/superset:${TAG:-latest-dev}
x-superset-volumes:

View File

@@ -20,9 +20,6 @@
# If you choose to use this type of deployment make sure to
# create your own docker environment file (docker/.env) with your own
# unique random secure passwords and SECRET_KEY.
#
# For verbose logging during development:
# - Set SUPERSET_LOG_LEVEL=debug in docker/.env-local for detailed Superset logs
# -----------------------------------------------------------------------
x-superset-volumes:
&superset-volumes # /app/pythonpath_docker will be appended to the PYTHONPATH in the final container

View File

@@ -20,9 +20,6 @@
# If you choose to use this type of deployment make sure to
# create your own docker environment file (docker/.env) with your own
# unique random secure passwords and SECRET_KEY.
#
# For verbose logging during development:
# - Set SUPERSET_LOG_LEVEL=debug in docker/.env-local for detailed Superset logs
# -----------------------------------------------------------------------
x-superset-user: &superset-user root
x-superset-volumes: &superset-volumes

View File

@@ -53,12 +53,7 @@ PYTHONPATH=/app/pythonpath:/app/docker/pythonpath_dev
REDIS_HOST=redis
REDIS_PORT=6379
# Development and logging configuration
# FLASK_DEBUG: Enables Flask dev features (auto-reload, better error pages) - keep 'true' for development
FLASK_DEBUG=true
# SUPERSET_LOG_LEVEL: Controls Superset application logging verbosity (debug, info, warning, error, critical)
SUPERSET_LOG_LEVEL=info
SUPERSET_APP_ROOT="/"
SUPERSET_ENV=development
SUPERSET_LOAD_EXAMPLES=yes
@@ -71,3 +66,4 @@ SUPERSET_SECRET_KEY=TEST_NON_DEV_SECRET
ENABLE_PLAYWRIGHT=false
PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
BUILD_SUPERSET_FRONTEND_IN_DOCKER=true
SUPERSET_LOG_LEVEL=info

View File

@@ -20,5 +20,4 @@
# DON'T ignore the .gitignore
!.gitignore
!superset_config.py
!superset_config_docker_light.py
!superset_config_local.example

View File

@@ -14,6 +14,7 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# mypy: disable-error-code="assignment,misc"
#
# This file is included in the final Docker image and SHOULD be overridden when
# deploying the image to prod. Settings configured here are intended for use in local

View File

@@ -348,7 +348,7 @@ superset init
# Load some data to play with.
# Note: you MUST have previously created an admin user with the username `admin` for this command to work.
superset load-examples
superset examples load
# Start the Flask dev web server from inside your virtualenv.
# Note that your page may not have CSS at this point.

View File

@@ -26,14 +26,11 @@ Superset locally is using Docker Compose on a Linux or Mac OSX
computer. Superset does not have official support for Windows. It's also the easiest
way to launch a fully functioning **development environment** quickly.
Note that there are 4 major ways we support to run `docker compose`:
Note that there are 3 major ways we support to run `docker compose`:
1. **docker-compose.yml:** for interactive development, where we mount your local folder with the
frontend/backend files that you can edit and experience the changes you
make in the app in real time
1. **docker-compose-light.yml:** a lightweight configuration with minimal services (database,
Superset app, and frontend dev server) for development. Uses in-memory caching instead of Redis
and is designed for running multiple instances simultaneously
1. **docker-compose-non-dev.yml** where we just build a more immutable image based on the
local branch and get all the required images running. Changes in the local branch
at the time you fire this up will be reflected, but changes to the code
@@ -47,7 +44,7 @@ Note that there are 4 major ways we support to run `docker compose`:
The `dev` builds include the `psycopg2-binary` required to connect
to the Postgres database launched as part of the `docker compose` builds.
More on these approaches after setting up the requirements for either.
More on these two approaches after setting up the requirements for either.
## Requirements
@@ -106,36 +103,13 @@ and help you start fresh. In the context of `docker compose` setting
from within docker. This will slow down the startup, but will fix various npm-related issues.
:::
### Option #2 - lightweight development with multiple instances
For a lighter development setup that uses fewer resources and supports running multiple instances:
```bash
# Single lightweight instance (default port 9001)
docker compose -f docker-compose-light.yml up
# Multiple instances with different ports
NODE_PORT=9001 docker compose -p superset-1 -f docker-compose-light.yml up
NODE_PORT=9002 docker compose -p superset-2 -f docker-compose-light.yml up
NODE_PORT=9003 docker compose -p superset-3 -f docker-compose-light.yml up
```
This configuration includes:
- PostgreSQL database (internal network only)
- Superset application server
- Frontend development server with webpack hot reloading
- In-memory caching (no Redis)
- Isolated volumes and networks per instance
Access each instance at `http://localhost:{NODE_PORT}` (e.g., `http://localhost:9001`).
### Option #3 - build a set of immutable images from the local branch
### Option #2 - build a set of immutable images from the local branch
```bash
docker compose -f docker-compose-non-dev.yml up
```
### Option #4 - boot up an official release
### Option #3 - boot up an official release
```bash
# Set the version you want to run

View File

@@ -151,7 +151,7 @@ Finish installing by running through the following commands:
superset fab create-admin
# Load some data to play with
superset load_examples
superset examples load
# Create default roles and permissions
superset init

View File

@@ -33,4 +33,4 @@ superset load-test-users
echo "Running tests"
pytest --durations-min=2 --maxfail=1 --cov-report= --cov=superset ./tests/integration_tests "$@"
pytest --durations-min=2 --cov-report= --cov=superset ./tests/integration_tests "$@"

View File

@@ -30,6 +30,7 @@ def load_examples_run(
load_big_data: bool = False,
only_metadata: bool = False,
force: bool = False,
cleanup: bool = False,
) -> None:
if only_metadata:
logger.info("Loading examples metadata")
@@ -40,51 +41,41 @@ def load_examples_run(
# pylint: disable=import-outside-toplevel
import superset.examples.data_loading as examples
# Clear old examples if requested
if cleanup:
clear_old_examples()
examples.load_css_templates()
if load_test_data:
# Import test fixtures from tests directory
from tests.fixtures.examples.energy import load_energy
from tests.fixtures.examples.supported_charts_dashboard import (
load_supported_charts_dashboard,
)
from tests.fixtures.examples.tabbed_dashboard import load_tabbed_dashboard
logger.info("Loading energy related dataset")
examples.load_energy(only_metadata, force)
load_energy(only_metadata, force)
logger.info("Loading [World Bank's Health Nutrition and Population Stats]")
examples.load_world_bank_health_n_pop(only_metadata, force)
logger.info("Loading [Birth names]")
examples.load_birth_names(only_metadata, force)
if load_test_data:
logger.info("Loading [Tabbed dashboard]")
examples.load_tabbed_dashboard(only_metadata)
load_tabbed_dashboard(only_metadata)
logger.info("Loading [Supported Charts Dashboard]")
examples.load_supported_charts_dashboard()
load_supported_charts_dashboard()
else:
logger.info("Loading [Random long/lat data]")
examples.load_long_lat_data(only_metadata, force)
logger.info("Loading [Country Map data]")
examples.load_country_map_data(only_metadata, force)
logger.info("Loading [San Francisco population polygons]")
examples.load_sf_population_polygons(only_metadata, force)
logger.info("Loading [Flights data]")
examples.load_flights(only_metadata, force)
logger.info("Loading [BART lines]")
examples.load_bart_lines(only_metadata, force)
logger.info("Loading [Misc Charts] dashboard")
examples.load_misc_dashboard()
logger.info("Loading DECK.gl demo")
examples.load_deck_dash()
if load_big_data:
# Import test fixture from tests directory
from tests.fixtures.examples.big_data import load_big_data as load_big_data_func
logger.info("Loading big synthetic data for tests")
examples.load_big_data()
load_big_data_func()
# load examples that are stored as YAML config files
logger.info("Loading examples from YAML configuration files")
examples.load_examples_from_configs(force, load_test_data)
@@ -112,4 +103,222 @@ def load_examples(
force: bool = False,
) -> None:
"""Loads a set of Slices and Dashboards and a supporting dataset"""
# Show deprecation warning
click.echo(
click.style(
"\nWARNING: 'superset load-examples' is deprecated. "
"Please use 'superset examples load' instead.\n",
fg="yellow",
),
err=True,
)
load_examples_run(load_test_data, load_big_data, only_metadata, force)
# New CLI structure
@click.group(name="examples", help="Manage example data")
def examples_cli() -> None:
"""Group for example-related commands."""
pass
@examples_cli.command(name="load", help="Load example data into the database")
@with_appcontext
@transaction()
@click.option("--load-test-data", "-t", is_flag=True, help="Load additional test data")
@click.option("--load-big-data", "-b", is_flag=True, help="Load additional big data")
@click.option(
"--only-metadata",
"-m",
is_flag=True,
help="Only load metadata, skip actual data",
)
@click.option(
"--force",
"-f",
is_flag=True,
help="Force load data even if table already exists",
)
def load(
load_test_data: bool = False,
load_big_data: bool = False,
only_metadata: bool = False,
force: bool = False,
) -> None:
"""Load example datasets, charts, and dashboards."""
load_examples_run(
load_test_data, load_big_data, only_metadata, force, cleanup=False
)
def clear_old_examples() -> bool:
"""
Clear old Python-generated examples.
Returns True if clear was performed, False otherwise.
"""
from superset import db
from superset.connectors.sqla.models import SqlaTable
from superset.examples.utils import _has_old_examples
from superset.models.core import Database
from superset.models.dashboard import Dashboard, dashboard_slices
from superset.models.slice import Slice
# Check if old examples exist
if not _has_old_examples():
logger.info("No old examples found to clear")
return False
# Find the examples database
examples_db = db.session.query(Database).filter_by(database_name="examples").first()
if not examples_db:
return False
logger.info("Found examples database (id=%s)", examples_db.id)
logger.info("Clearing old examples...")
# 1. Get all datasets from examples database
example_datasets = (
db.session.query(SqlaTable).filter_by(database_id=examples_db.id).all()
)
dataset_ids = [ds.id for ds in example_datasets]
logger.info("Found %d example datasets", len(example_datasets))
# 2. Find all charts using these datasets
example_charts = []
if dataset_ids:
example_charts = (
db.session.query(Slice)
.filter(
Slice.datasource_id.in_(dataset_ids),
Slice.datasource_type == "table",
)
.all()
)
logger.info("Found %d example charts", len(example_charts))
chart_ids = [chart.id for chart in example_charts]
# 3. Find dashboards that contain these charts
example_dashboards = []
if chart_ids:
# Get dashboards that have relationships with our example charts
example_dashboards = (
db.session.query(Dashboard)
.join(dashboard_slices)
.filter(dashboard_slices.c.slice_id.in_(chart_ids))
.distinct()
.all()
)
logger.info("Found %d example dashboards", len(example_dashboards))
# Remove dashboard-slice relationships first
db.session.execute(
dashboard_slices.delete().where(dashboard_slices.c.slice_id.in_(chart_ids))
)
logger.info(
"Removed dashboard-slice relationships for %d charts",
len(chart_ids),
)
# 4. Delete dashboards that are now empty (contain only example charts)
for dashboard in example_dashboards:
# Since we already deleted the relationships, check if dashboard is empty
remaining_charts = (
db.session.query(dashboard_slices.c.slice_id)
.filter(dashboard_slices.c.dashboard_id == dashboard.id)
.count()
)
if remaining_charts == 0:
db.session.delete(dashboard)
logger.info(
"Deleted dashboard: %s (slug: %s)",
dashboard.dashboard_title,
dashboard.slug,
)
else:
logger.info(
"Keeping dashboard %s as it contains non-example charts",
dashboard.dashboard_title,
)
# 5. Delete charts
for chart in example_charts:
db.session.delete(chart)
logger.info("Deleted %d example charts", len(example_charts))
# 6. Delete the database - this will cascade delete all datasets,
# columns, and metrics thanks to the cascade="all, delete-orphan"
db.session.delete(examples_db)
logger.info("Examples database and all related objects removed successfully")
return True
@examples_cli.command(name="clear-old", help="Clear old Python-based example data")
@with_appcontext
@transaction()
@click.option(
"--confirm",
is_flag=True,
help="Skip confirmation prompt",
)
def clear_old(confirm: bool = False) -> None:
"""Clear old Python-generated example datasets, charts, and dashboards."""
if not confirm:
click.confirm(
"This will delete old Python-based example data. Are you sure?",
abort=True,
)
try:
if clear_old_examples():
logger.info("Old examples cleared successfully")
else:
logger.info("No old examples found to clear")
except Exception as e:
logger.error(f"Failed to clear old examples: {e}")
raise
@examples_cli.command(name="clear", help="Clear all example data (NOT YET IMPLEMENTED)")
@with_appcontext
def clear() -> None:
"""Clear all example data including YAML-based examples."""
click.echo(
click.style(
"Clearing YAML-based examples is NOT YET IMPLEMENTED.\n"
"Use 'superset examples clear-old' to remove old Python-based examples.",
fg="yellow",
)
)
@examples_cli.command(name="reload", help="Clear and reload example data")
@with_appcontext
@transaction()
@click.option("--load-test-data", "-t", is_flag=True, help="Load additional test data")
@click.option("--load-big-data", "-b", is_flag=True, help="Load additional big data")
@click.option(
"--only-metadata",
"-m",
is_flag=True,
help="Only load metadata, skip actual data",
)
@click.option(
"--force",
"-f",
is_flag=True,
help="Force load data even if table already exists",
)
def reload(
load_test_data: bool = False,
load_big_data: bool = False,
only_metadata: bool = False,
force: bool = False,
) -> None:
"""Clear existing examples and load fresh ones."""
# This is essentially the old --cleanup behavior
load_examples_run(load_test_data, load_big_data, only_metadata, force, cleanup=True)

View File

@@ -16,6 +16,7 @@
# under the License.
import copy
import logging
from inspect import isclass
from typing import Any
@@ -27,6 +28,8 @@ from superset.models.slice import Slice
from superset.utils import json
from superset.utils.core import AnnotationType, get_user
logger = logging.getLogger(__name__)
def filter_chart_annotations(chart_config: dict[str, Any]) -> None:
"""
@@ -63,10 +66,13 @@ def import_chart(
if not overwrite or not can_write:
return existing
config["id"] = existing.id
logger.info(f"Updating existing chart: {config.get('slice_name')}")
elif not can_write:
raise ImportFailedError(
"Chart doesn't exist and user doesn't have permission to create charts"
)
else:
logger.info(f"Creating new chart: {config.get('slice_name')}")
filter_chart_annotations(config)

View File

@@ -123,6 +123,9 @@ class ExportDashboardsCommand(ExportModelsCommand):
include_defaults=True,
export_uuids=True,
)
# Remove theme_id from export to make dashboards theme-free
payload.pop("theme_id", None)
# TODO (betodealmeida): move this logic to export_to_dict once this
# becomes the default export endpoint
for key, new_name in JSON_KEYS.items():

View File

@@ -166,11 +166,14 @@ def import_dashboard( # noqa: C901
elif not overwrite or not can_write:
return existing
config["id"] = existing.id
logger.info(f"Updating existing dashboard: {config.get('dashboard_title')}")
elif not can_write:
raise ImportFailedError(
"Dashboard doesn't exist and user doesn't "
"have permission to create dashboards"
)
else:
logger.info(f"Creating new dashboard: {config.get('dashboard_title')}")
# TODO (betodealmeida): move this logic to import_from_dict
config = config.copy()

View File

@@ -46,10 +46,13 @@ def import_database(
if not overwrite or not can_write:
return existing
config["id"] = existing.id
logger.info(f"Updating existing database: {config.get('database_name')}")
elif not can_write:
raise ImportFailedError(
"Database doesn't exist and user doesn't have permission to create databases" # noqa: E501
)
else:
logger.info(f"Creating new database: {config.get('database_name')}")
# Check if this URI is allowed
if app.config["PREVENT_UNSAFE_DB_CONNECTIONS"]:
try:

View File

@@ -124,11 +124,13 @@ def import_dataset( # noqa: C901
if not overwrite or not can_write:
return existing
config["id"] = existing.id
logger.info(f"Updating existing dataset: {config.get('table_name')}")
elif not can_write:
raise ImportFailedError(
"Dataset doesn't exist and user doesn't have permission to create datasets"
)
else:
logger.info(f"Creating new dataset: {config.get('table_name')}")
# TODO (betodealmeida): move this logic to import_from_dict
config = config.copy()
@@ -209,7 +211,12 @@ def load_data(data_uri: str, dataset: SqlaTable, database: Database) -> None:
data = request.urlopen(data_uri) # pylint: disable=consider-using-with # noqa: S310
if data_uri.endswith(".gz"):
data = gzip.open(data)
df = pd.read_csv(data, encoding="utf-8")
# Determine file format based on URI
if ".json" in data_uri:
df = pd.read_json(data, encoding="utf-8")
else:
df = pd.read_csv(data, encoding="utf-8")
dtype = get_dtype(df, dataset)
# convert temporal columns

View File

@@ -195,4 +195,5 @@ class ImportExamplesCommand(ImportModelsCommand):
{"dashboard_id": dashboard_id, "slice_id": chart_id}
for (dashboard_id, chart_id) in dashboard_chart_ids
]
db.session.execute(dashboard_slices.insert(), values)
if values:
db.session.execute(dashboard_slices.insert(), values)

View File

@@ -1155,7 +1155,7 @@ class CeleryConfig: # pylint: disable=too-few-public-methods
}
CELERY_CONFIG: type[CeleryConfig] | None = CeleryConfig
CELERY_CONFIG: type[CeleryConfig] = CeleryConfig
# Set celery config to None to disable all the above configuration
# CELERY_CONFIG = None

View File

@@ -1,71 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
import polyline
from sqlalchemy import inspect, String, Text
from superset import db
from superset.sql.parse import Table
from superset.utils import json
from ..utils.database import get_example_database # noqa: TID252
from .helpers import get_table_connector_registry, read_example_data
logger = logging.getLogger(__name__)
def load_bart_lines(only_metadata: bool = False, force: bool = False) -> None:
tbl_name = "bart_lines"
database = get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
df = read_example_data(
"examples://bart-lines.json.gz", encoding="latin-1", compression="gzip"
)
df["path_json"] = df.path.map(json.dumps)
df["polyline"] = df.path.map(polyline.encode)
del df["path"]
df.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"color": String(255),
"name": String(255),
"polyline": Text,
"path_json": Text,
},
index=False,
)
logger.debug(f"Creating table {tbl_name} reference")
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name=tbl_name).first()
if not tbl:
tbl = table(table_name=tbl_name, schema=schema)
db.session.add(tbl)
tbl.description = "BART lines"
tbl.database = database
tbl.filter_select_enabled = True
tbl.fetch_metadata()

View File

@@ -1,869 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
import textwrap
from typing import Union
import pandas as pd
from sqlalchemy import DateTime, inspect, String
from sqlalchemy.sql import column
from superset import app, db, security_manager
from superset.connectors.sqla.models import SqlaTable, SqlMetric, TableColumn
from superset.models.core import Database
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.sql.parse import Table
from superset.utils import json
from superset.utils.core import DatasourceType
from ..utils.database import get_example_database # noqa: TID252
from .helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
read_example_data,
update_slice_ids,
)
logger = logging.getLogger(__name__)
def gen_filter(
subject: str, comparator: str, operator: str = "=="
) -> dict[str, Union[bool, str]]:
return {
"clause": "WHERE",
"comparator": comparator,
"expressionType": "SIMPLE",
"operator": operator,
"subject": subject,
}
def load_data(tbl_name: str, database: Database, sample: bool = False) -> None:
pdf = read_example_data("examples://birth_names2.json.gz", compression="gzip")
# TODO(bkyryliuk): move load examples data into the pytest fixture
if database.backend == "presto":
pdf.ds = pd.to_datetime(pdf.ds, unit="ms")
pdf.ds = pdf.ds.dt.strftime("%Y-%m-%d %H:%M%:%S")
else:
pdf.ds = pd.to_datetime(pdf.ds, unit="ms")
pdf = pdf.head(100) if sample else pdf
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
# TODO(bkyryliuk): use TIMESTAMP type for presto
"ds": DateTime if database.backend != "presto" else String(255),
"gender": String(16),
"state": String(10),
"name": String(255),
},
method="multi",
index=False,
)
logger.debug("Done loading table!")
logger.debug("-" * 80)
def load_birth_names(
only_metadata: bool = False, force: bool = False, sample: bool = False
) -> None:
"""Loading birth name dataset from a zip file in the repo"""
database = get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
tbl_name = "birth_names"
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
load_data(tbl_name, database, sample=sample)
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name, schema=schema).first()
if not obj:
logger.debug(f"Creating table [{tbl_name}] reference")
obj = table(table_name=tbl_name, schema=schema)
db.session.add(obj)
_set_table_metadata(obj, database)
_add_table_metrics(obj)
slices, _ = create_slices(obj)
create_dashboard(slices)
def _set_table_metadata(datasource: SqlaTable, database: "Database") -> None:
datasource.main_dttm_col = "ds"
datasource.database = database
datasource.filter_select_enabled = True
datasource.fetch_metadata()
def _add_table_metrics(datasource: SqlaTable) -> None:
# By accessing the attribute first, we make sure `datasource.columns` and
# `datasource.metrics` are already loaded. Otherwise accessing them later
# may trigger an unnecessary and unexpected `after_update` event.
columns, metrics = datasource.columns, datasource.metrics
if not any(col.column_name == "num_california" for col in columns):
col_state = str(column("state").compile(db.engine))
col_num = str(column("num").compile(db.engine))
columns.append(
TableColumn(
column_name="num_california",
expression=f"CASE WHEN {col_state} = 'CA' THEN {col_num} ELSE 0 END",
)
)
if not any(col.metric_name == "sum__num" for col in metrics):
col = str(column("num").compile(db.engine))
metrics.append(SqlMetric(metric_name="sum__num", expression=f"SUM({col})"))
for col in columns:
if col.column_name == "ds": # type: ignore
col.is_dttm = True # type: ignore
break
datasource.columns = columns
datasource.metrics = metrics
def create_slices(tbl: SqlaTable) -> tuple[list[Slice], list[Slice]]:
owner = security_manager.get_user_by_id(1)
metrics = [
{
"expressionType": "SIMPLE",
"column": {"column_name": "num", "type": "BIGINT"},
"aggregate": "SUM",
"label": "Births",
"optionName": "metric_11",
}
]
metric = "sum__num"
defaults = {
"compare_lag": "10",
"compare_suffix": "o10Y",
"limit": "25",
"granularity_sqla": "ds",
"groupby": [],
"row_limit": app.config["ROW_LIMIT"],
"time_range": "100 years ago : now",
"viz_type": "table",
"markup_type": "markdown",
}
default_query_context = {
"result_format": "json",
"result_type": "full",
"datasource": {
"id": tbl.id,
"type": "table",
},
"queries": [
{
"columns": [],
"metrics": [],
},
],
}
slice_kwargs = {
"datasource_id": tbl.id,
"datasource_type": DatasourceType.TABLE,
}
logger.debug("Creating some slices")
slices = [
Slice(
**slice_kwargs,
slice_name="Participants",
viz_type="big_number",
params=get_slice_json(
defaults,
viz_type="big_number",
granularity_sqla="ds",
compare_lag="5",
compare_suffix="over 5Y",
metric=metric,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Genders",
viz_type="pie",
params=get_slice_json(
defaults, viz_type="pie", groupby=["gender"], metric=metric
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Trends",
viz_type="echarts_timeseries_line",
params=get_slice_json(
defaults,
viz_type="echarts_timeseries_line",
groupby=["name"],
granularity_sqla="ds",
rich_tooltip=True,
show_legend=True,
metrics=metrics,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Genders by State",
viz_type="echarts_timeseries_bar",
params=get_slice_json(
defaults,
adhoc_filters=[
{
"clause": "WHERE",
"expressionType": "SIMPLE",
"filterOptionName": "2745eae5",
"comparator": ["other"],
"operator": "NOT IN",
"subject": "state",
}
],
viz_type="echarts_timeseries_bar",
metrics=[
{
"expressionType": "SIMPLE",
"column": {"column_name": "num_boys", "type": "BIGINT(20)"},
"aggregate": "SUM",
"label": "Boys",
"optionName": "metric_11",
},
{
"expressionType": "SIMPLE",
"column": {"column_name": "num_girls", "type": "BIGINT(20)"},
"aggregate": "SUM",
"label": "Girls",
"optionName": "metric_12",
},
],
groupby=["state"],
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Girls",
viz_type="table",
params=get_slice_json(
defaults,
groupby=["name"],
adhoc_filters=[gen_filter("gender", "girl")],
row_limit=50,
timeseries_limit_metric=metric,
metrics=[metric],
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Girl Name Cloud",
viz_type="word_cloud",
params=get_slice_json(
defaults,
viz_type="word_cloud",
size_from="10",
series="name",
size_to="70",
rotation="square",
limit="100",
adhoc_filters=[gen_filter("gender", "girl")],
metric=metric,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Boys",
viz_type="table",
params=get_slice_json(
defaults,
groupby=["name"],
adhoc_filters=[gen_filter("gender", "boy")],
row_limit=50,
timeseries_limit_metric=metric,
metrics=[metric],
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Boy Name Cloud",
viz_type="word_cloud",
params=get_slice_json(
defaults,
viz_type="word_cloud",
size_from="10",
series="name",
size_to="70",
rotation="square",
limit="100",
adhoc_filters=[gen_filter("gender", "boy")],
metric=metric,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Top 10 Girl Name Share",
viz_type="echarts_area",
params=get_slice_json(
defaults,
adhoc_filters=[gen_filter("gender", "girl")],
comparison_type="values",
groupby=["name"],
limit=10,
stacked_style="expand",
time_grain_sqla="P1D",
viz_type="echarts_area",
x_axis_format="smart_date",
metrics=metrics,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Top 10 Boy Name Share",
viz_type="echarts_area",
params=get_slice_json(
defaults,
adhoc_filters=[gen_filter("gender", "boy")],
comparison_type="values",
groupby=["name"],
limit=10,
stacked_style="expand",
time_grain_sqla="P1D",
viz_type="echarts_area",
x_axis_format="smart_date",
metrics=metrics,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Pivot Table v2",
viz_type="pivot_table_v2",
params=get_slice_json(
defaults,
viz_type="pivot_table_v2",
groupbyRows=["name"],
groupbyColumns=["state"],
metrics=[metric],
),
query_context=get_slice_json(
default_query_context,
queries=[
{
"columns": ["name", "state"],
"metrics": [metric],
}
],
),
owners=[],
),
]
misc_slices = [
Slice(
**slice_kwargs,
slice_name="Average and Sum Trends",
viz_type="mixed_timeseries",
params=get_slice_json(
defaults,
viz_type="mixed_timeseries",
metrics=[
{
"expressionType": "SIMPLE",
"column": {"column_name": "num", "type": "BIGINT(20)"},
"aggregate": "AVG",
"label": "AVG(num)",
"optionName": "metric_vgops097wej_g8uff99zhk7",
}
],
metrics_b=["sum__num"],
granularity_sqla="ds",
yAxisIndex=0,
yAxisIndexB=1,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Num Births Trend",
viz_type="echarts_timeseries_line",
params=get_slice_json(
defaults, viz_type="echarts_timeseries_line", metrics=metrics
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Daily Totals",
viz_type="table",
params=get_slice_json(
defaults,
groupby=["ds"],
time_range="1983 : 2023",
viz_type="table",
metrics=metrics,
),
query_context=get_slice_json(
default_query_context,
queries=[
{
"columns": ["ds"],
"metrics": metrics,
"time_range": "1983 : 2023",
}
],
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Number of California Births",
viz_type="big_number_total",
params=get_slice_json(
defaults,
metric={
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
},
viz_type="big_number_total",
granularity_sqla="ds",
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Top 10 California Names Timeseries",
viz_type="echarts_timeseries_line",
params=get_slice_json(
defaults,
metrics=[
{
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
}
],
viz_type="echarts_timeseries_line",
granularity_sqla="ds",
groupby=["name"],
timeseries_limit_metric={
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
},
limit="10",
),
owners=[owner] if owner else [],
),
Slice(
**slice_kwargs,
slice_name="Names Sorted by Num in California",
viz_type="table",
params=get_slice_json(
defaults,
metrics=metrics,
groupby=["name"],
row_limit=50,
timeseries_limit_metric={
"expressionType": "SIMPLE",
"column": {
"column_name": "num_california",
"expression": "CASE WHEN state = 'CA' THEN num ELSE 0 END",
},
"aggregate": "SUM",
"label": "SUM(num_california)",
},
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Number of Girls",
viz_type="big_number_total",
params=get_slice_json(
defaults,
metric=metric,
viz_type="big_number_total",
granularity_sqla="ds",
adhoc_filters=[gen_filter("gender", "girl")],
subheader="total female participants",
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Pivot Table",
viz_type="pivot_table_v2",
params=get_slice_json(
defaults,
viz_type="pivot_table_v2",
groupbyRows=["name"],
groupbyColumns=["state"],
metrics=metrics,
),
owners=[],
),
]
for slc in slices:
merge_slice(slc)
for slc in misc_slices:
merge_slice(slc)
misc_dash_slices.add(slc.slice_name)
return slices, misc_slices
def create_dashboard(slices: list[Slice]) -> Dashboard:
logger.debug("Creating a dashboard")
dash = db.session.query(Dashboard).filter_by(slug="births").first()
if not dash:
dash = Dashboard()
db.session.add(dash)
dash.published = True
dash.json_metadata = textwrap.dedent(
"""\
{
"label_colors": {
"Girls": "#FF69B4",
"Boys": "#ADD8E6",
"girl": "#FF69B4",
"boy": "#ADD8E6"
}
}"""
)
# pylint: disable=line-too-long
pos = json.loads( # noqa: TID251
textwrap.dedent(
"""\
{
"CHART-6GdlekVise": {
"children": [],
"id": "CHART-6GdlekVise",
"meta": {
"chartId": 5547,
"height": 50,
"sliceName": "Top 10 Girl Name Share",
"width": 5
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-eh0w37bWbR"
],
"type": "CHART"
},
"CHART-6n9jxb30JG": {
"children": [],
"id": "CHART-6n9jxb30JG",
"meta": {
"chartId": 5540,
"height": 36,
"sliceName": "Genders by State",
"width": 5
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW--EyBZQlDi"
],
"type": "CHART"
},
"CHART-Jj9qh1ol-N": {
"children": [],
"id": "CHART-Jj9qh1ol-N",
"meta": {
"chartId": 5545,
"height": 50,
"sliceName": "Boy Name Cloud",
"width": 4
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-kzWtcvo8R1"
],
"type": "CHART"
},
"CHART-ODvantb_bF": {
"children": [],
"id": "CHART-ODvantb_bF",
"meta": {
"chartId": 5548,
"height": 50,
"sliceName": "Top 10 Boy Name Share",
"width": 5
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-kzWtcvo8R1"
],
"type": "CHART"
},
"CHART-PAXUUqwmX9": {
"children": [],
"id": "CHART-PAXUUqwmX9",
"meta": {
"chartId": 5538,
"height": 34,
"sliceName": "Genders",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-2n0XgiHDgs"
],
"type": "CHART"
},
"CHART-_T6n_K9iQN": {
"children": [],
"id": "CHART-_T6n_K9iQN",
"meta": {
"chartId": 5539,
"height": 36,
"sliceName": "Trends",
"width": 7
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW--EyBZQlDi"
],
"type": "CHART"
},
"CHART-eNY0tcE_ic": {
"children": [],
"id": "CHART-eNY0tcE_ic",
"meta": {
"chartId": 5537,
"height": 34,
"sliceName": "Participants",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-2n0XgiHDgs"
],
"type": "CHART"
},
"CHART-g075mMgyYb": {
"children": [],
"id": "CHART-g075mMgyYb",
"meta": {
"chartId": 5541,
"height": 50,
"sliceName": "Girls",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-eh0w37bWbR"
],
"type": "CHART"
},
"CHART-n-zGGE6S1y": {
"children": [],
"id": "CHART-n-zGGE6S1y",
"meta": {
"chartId": 5542,
"height": 50,
"sliceName": "Girl Name Cloud",
"width": 4
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-eh0w37bWbR"
],
"type": "CHART"
},
"CHART-vJIPjmcbD3": {
"children": [],
"id": "CHART-vJIPjmcbD3",
"meta": {
"chartId": 5543,
"height": 50,
"sliceName": "Boys",
"width": 3
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-kzWtcvo8R1"
],
"type": "CHART"
},
"DASHBOARD_VERSION_KEY": "v2",
"GRID_ID": {
"children": [
"ROW-2n0XgiHDgs",
"ROW--EyBZQlDi",
"ROW-eh0w37bWbR",
"ROW-kzWtcvo8R1"
],
"id": "GRID_ID",
"parents": [
"ROOT_ID"
],
"type": "GRID"
},
"HEADER_ID": {
"id": "HEADER_ID",
"meta": {
"text": "Births"
},
"type": "HEADER"
},
"MARKDOWN-zaflB60tbC": {
"children": [],
"id": "MARKDOWN-zaflB60tbC",
"meta": {
"code": "<div style=\\"text-align:center\\"> <h1>Birth Names Dashboard</h1> <img src=\\"/static/assets/images/babies.png\\" style=\\"width:50%;\\"></div>",
"height": 34,
"width": 6
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-2n0XgiHDgs"
],
"type": "MARKDOWN"
},
"ROOT_ID": {
"children": [
"GRID_ID"
],
"id": "ROOT_ID",
"type": "ROOT"
},
"ROW--EyBZQlDi": {
"children": [
"CHART-_T6n_K9iQN",
"CHART-6n9jxb30JG"
],
"id": "ROW--EyBZQlDi",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
},
"ROW-2n0XgiHDgs": {
"children": [
"CHART-eNY0tcE_ic",
"MARKDOWN-zaflB60tbC",
"CHART-PAXUUqwmX9"
],
"id": "ROW-2n0XgiHDgs",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
},
"ROW-eh0w37bWbR": {
"children": [
"CHART-g075mMgyYb",
"CHART-n-zGGE6S1y",
"CHART-6GdlekVise"
],
"id": "ROW-eh0w37bWbR",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
},
"ROW-kzWtcvo8R1": {
"children": [
"CHART-vJIPjmcbD3",
"CHART-Jj9qh1ol-N",
"CHART-ODvantb_bF"
],
"id": "ROW-kzWtcvo8R1",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
}
}
""" # noqa: E501
)
)
# pylint: enable=line-too-long
# Dashboard v2 doesn't allow adding markup slices
dash.slices = [slc for slc in slices if slc.viz_type != "markup"]
update_slice_ids(pos)
dash.dashboard_title = "USA Births Names"
dash.position_json = json.dumps(pos, indent=4) # noqa: TID251
dash.slug = "births"
return dash
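Every slice definition above builds its `params` by calling `get_slice_json(defaults, **overrides)`. A minimal sketch of what such a helper presumably does, inferred from how it is called here (the real helper ships with Superset's examples package and may differ in detail): merge the per-chart overrides into the shared `defaults` dict and serialize the result to the JSON string stored on `Slice.params`.

```python
import json


def get_slice_json(defaults: dict, **kwargs) -> str:
    """Sketch: merge chart-specific overrides into the shared defaults
    and serialize to the JSON string stored on Slice.params."""
    params = dict(defaults)
    params.update(kwargs)
    return json.dumps(params, indent=4, sort_keys=True)


# e.g. roughly what the "Genders" slice above produces:
params = get_slice_json(
    {"viz_type": "table", "limit": "25"}, viz_type="pie", groupby=["gender"]
)
```

Later overrides win, which is why each slice can repeat `viz_type` to replace the `"table"` default.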


@@ -0,0 +1,26 @@
slice_name: Birth in France by department in 2016
description: null
certified_by: null
certification_details: null
viz_type: country_map
params:
entity: DEPT_ID
granularity_sqla: ''
metric:
aggregate: AVG
column:
column_name: '2004'
type: INT
expressionType: SIMPLE
label: Boys
optionName: metric_112342
row_limit: 500000
select_country: france
since: ''
until: ''
viz_type: country_map
query_context: null
cache_timeout: null
uuid: 6bd584f1-0ef5-44fc-8a05-61400f83bb62
version: 1.0.0
dataset_uuid: c21dd48d-9a4b-4a08-a926-47c3601c2a8d


@@ -0,0 +1,21 @@
slice_name: OSM Long/Lat
description: null
certified_by: null
certification_details: null
viz_type: osm
params:
all_columns:
- occupancy
all_columns_x: LON
all_columns_y: LAT
granularity_sqla: day
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
row_limit: 500000
since: '2014-01-01'
until: now
viz_type: osm
query_context: null
cache_timeout: null
uuid: a4e90860-c8f5-4c50-8c04-06b2e144809c
version: 1.0.0
dataset_uuid: 605eaec7-ebf1-4fea-ac4b-07652fcb46e7


@@ -0,0 +1,31 @@
slice_name: Parallel Coordinates
description: null
certified_by: null
certification_details: null
viz_type: para
params:
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby: []
limit: 100
markup_type: markdown
metrics:
- sum__SP_POP_TOTL
- sum__SP_RUR_TOTL_ZS
- sum__SH_DYN_AIDS
row_limit: 50000
secondary_metric: sum__SP_POP_TOTL
series: country_name
show_bubbles: true
since: '2011-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: '2012-01-01'
viz_type: para
query_context: null
cache_timeout: null
uuid: 041377c4-0ca9-4a40-8abd-befcd137c0dc
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e


@@ -0,0 +1,30 @@
slice_name: Pivot Table v2
description: null
certified_by: null
certification_details: null
viz_type: pivot_table_v2
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby: []
groupbyColumns:
- state
groupbyRows:
- name
limit: '25'
markup_type: markdown
metrics:
- sum__num
row_limit: 50000
time_range: '100 years ago : now'
viz_type: pivot_table_v2
query_context: "{\n \"datasource\": {\n \"id\": 2,\n \"type\": \"\
table\"\n },\n \"queries\": [\n {\n \"columns\": [\n \
\ \"name\",\n \"state\"\n ],\n \
\ \"metrics\": [\n \"sum__num\"\n ]\n }\n ],\n\
\ \"result_format\": \"json\",\n \"result_type\": \"full\"\n}"
cache_timeout: null
uuid: 86778b63-19d8-4278-a79f-c90a1b31e162
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76
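Note that `query_context` in these chart YAMLs is an embedded JSON string (here wrapped with YAML line folding), not nested YAML, so a consumer has to `json.loads()` it after the YAML parse. A small sketch, assuming `chart` is the dict a YAML parser produced for the file above (only the relevant field is reproduced):

```python
import json

# Hypothetical parsed chart config; the string mirrors the folded
# query_context value in the export above.
chart = {
    "query_context": (
        '{"datasource": {"id": 2, "type": "table"}, '
        '"queries": [{"columns": ["name", "state"], "metrics": ["sum__num"]}], '
        '"result_format": "json", "result_type": "full"}'
    )
}

# Second parse step: the JSON string inside the YAML field.
ctx = json.loads(chart["query_context"])
```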


@@ -0,0 +1,32 @@
slice_name: Average and Sum Trends
description: null
certified_by: null
certification_details: null
viz_type: mixed_timeseries
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby: []
limit: '25'
markup_type: markdown
metrics:
- aggregate: AVG
column:
column_name: num
type: BIGINT(20)
expressionType: SIMPLE
label: AVG(num)
optionName: metric_vgops097wej_g8uff99zhk7
metrics_b:
- sum__num
row_limit: 50000
time_range: '100 years ago : now'
viz_type: mixed_timeseries
yAxisIndex: 0
yAxisIndexB: 1
query_context: null
cache_timeout: null
uuid: 9c690f97-9196-5e01-bec9-8f4975ea5108
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,31 @@
slice_name: Boy Name Cloud
description: null
certified_by: null
certification_details: null
viz_type: word_cloud
params:
adhoc_filters:
- clause: WHERE
comparator: boy
expressionType: SIMPLE
operator: ==
subject: gender
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby: []
limit: '100'
markup_type: markdown
metric: sum__num
rotation: square
row_limit: 50000
series: name
size_from: '10'
size_to: '70'
time_range: '100 years ago : now'
viz_type: word_cloud
query_context: null
cache_timeout: null
uuid: 6994ec83-0cf2-4a26-97e2-1e30b0002aa0
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,30 @@
slice_name: Boys
description: null
certified_by: null
certification_details: null
viz_type: table
params:
adhoc_filters:
- clause: WHERE
comparator: boy
expressionType: SIMPLE
operator: ==
subject: gender
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- name
limit: '25'
markup_type: markdown
metrics:
- sum__num
row_limit: 50
time_range: '100 years ago : now'
timeseries_limit_metric: sum__num
viz_type: table
query_context: null
cache_timeout: null
uuid: 0af97164-82f0-42bb-a611-7093e5c56596
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,28 @@
slice_name: Daily Totals
description: null
certified_by: null
certification_details: null
viz_type: table
params:
granularity_sqla: ds
groupby:
- ds
limit: '25'
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num
type: BIGINT
expressionType: SIMPLE
label: Births
optionName: metric_11
row_limit: 50
time_range: '1983 : 2023'
timeseries_limit_metric: sum__num
viz_type: table
query_context: null
cache_timeout: null
uuid: a3d4f2e1-8c9b-4d2a-9e7f-1b6c8d5e2f4a
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,22 @@
slice_name: Genders
description: null
certified_by: null
certification_details: null
viz_type: pie
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- gender
limit: '25'
markup_type: markdown
metric: sum__num
row_limit: 50000
time_range: '100 years ago : now'
viz_type: pie
query_context: null
cache_timeout: null
uuid: fb05dca0-bd3e-4953-a0a5-94b51de3a653
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,44 @@
slice_name: Genders by State
description: null
certified_by: null
certification_details: null
viz_type: echarts_timeseries_bar
params:
adhoc_filters:
- clause: WHERE
comparator:
- other
expressionType: SIMPLE
filterOptionName: 2745eae5
operator: NOT IN
subject: state
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- state
limit: '25'
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num_boys
type: BIGINT(20)
expressionType: SIMPLE
label: Boys
optionName: metric_11
- aggregate: SUM
column:
column_name: num_girls
type: BIGINT(20)
expressionType: SIMPLE
label: Girls
optionName: metric_12
row_limit: 50000
time_range: '100 years ago : now'
viz_type: echarts_timeseries_bar
query_context: null
cache_timeout: null
uuid: 2cc25185-3d8c-494c-aa3c-14f081ac7e54
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,31 @@
slice_name: Girl Name Cloud
description: null
certified_by: null
certification_details: null
viz_type: word_cloud
params:
adhoc_filters:
- clause: WHERE
comparator: girl
expressionType: SIMPLE
operator: ==
subject: gender
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby: []
limit: '100'
markup_type: markdown
metric: sum__num
rotation: square
row_limit: 50000
series: name
size_from: '10'
size_to: '70'
time_range: '100 years ago : now'
viz_type: word_cloud
query_context: null
cache_timeout: null
uuid: ba6574fe-a6c0-41ef-9499-1ea6ff36bd2d
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,30 @@
slice_name: Girls
description: null
certified_by: null
certification_details: null
viz_type: table
params:
adhoc_filters:
- clause: WHERE
comparator: girl
expressionType: SIMPLE
operator: ==
subject: gender
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- name
limit: '25'
markup_type: markdown
metrics:
- sum__num
row_limit: 50
time_range: '100 years ago : now'
timeseries_limit_metric: sum__num
viz_type: table
query_context: null
cache_timeout: null
uuid: 44cfa30e-af8e-4176-8612-4df0c0609516
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76
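The `adhoc_filters` entries in these exports match what the Python code above produces via `gen_filter("gender", "girl")`. A minimal sketch of such a helper, inferred from the exported structure (the real implementation may add fields such as `filterOptionName`):

```python
def gen_filter(subject: str, comparator: str, operator: str = "==") -> dict:
    # Mirrors the adhoc_filters entries in the chart YAML exports above.
    return {
        "clause": "WHERE",
        "expressionType": "SIMPLE",
        "subject": subject,
        "operator": operator,
        "comparator": comparator,
    }


girl_filter = gen_filter("gender", "girl")
```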


@@ -0,0 +1,36 @@
slice_name: Names Sorted by Num in California
description: null
certified_by: null
certification_details: null
viz_type: table
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- name
limit: '25'
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num
type: BIGINT
expressionType: SIMPLE
label: Births
optionName: metric_11
row_limit: 50
time_range: '100 years ago : now'
timeseries_limit_metric:
aggregate: SUM
column:
column_name: num_california
expression: CASE WHEN state = 'CA' THEN num ELSE 0 END
expressionType: SIMPLE
label: SUM(num_california)
viz_type: table
query_context: null
cache_timeout: null
uuid: e49ed2c4-b8a3-5736-bafe-4658790b113a
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,31 @@
slice_name: Num Births Trend
description: null
certified_by: null
certification_details: null
viz_type: echarts_timeseries_line
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- name
limit: '25'
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num
type: BIGINT
expressionType: SIMPLE
label: Births
optionName: metric_11
rich_tooltip: true
row_limit: 50000
show_legend: true
time_range: '100 years ago : now'
viz_type: echarts_timeseries_line
query_context: null
cache_timeout: null
uuid: 5b8c76e5-0e5e-45c1-b07e-3b2cb9b9c7e8
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,27 @@
slice_name: Number of California Births
description: null
certified_by: null
certification_details: null
viz_type: big_number_total
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby: []
limit: '25'
markup_type: markdown
metric:
aggregate: SUM
column:
column_name: num_california
expression: CASE WHEN state = 'CA' THEN num ELSE 0 END
expressionType: SIMPLE
label: SUM(num_california)
row_limit: 50000
time_range: '100 years ago : now'
viz_type: big_number_total
query_context: null
cache_timeout: null
uuid: 400ee69f-eda4-5fe8-bc30-299184e08048
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,28 @@
slice_name: Number of Girls
description: null
certified_by: null
certification_details: null
viz_type: big_number_total
params:
adhoc_filters:
- clause: WHERE
comparator: girl
expressionType: SIMPLE
operator: ==
subject: gender
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby: []
limit: '25'
markup_type: markdown
metric: sum__num
row_limit: 50000
subheader: total female participants
time_range: '100 years ago : now'
viz_type: big_number_total
query_context: null
cache_timeout: null
uuid: 2f1a8720-7ea6-5b0f-b419-b75163f6bf17
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,21 @@
slice_name: Participants
description: null
certified_by: null
certification_details: null
viz_type: big_number
params:
compare_lag: '5'
compare_suffix: over 5Y
granularity_sqla: ds
groupby: []
limit: '25'
markup_type: markdown
metric: sum__num
row_limit: 50000
time_range: '100 years ago : now'
viz_type: big_number
query_context: null
cache_timeout: null
uuid: 89ae3c32-eafa-4466-82cf-8c4328420782
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,32 @@
slice_name: Pivot Table
description: null
certified_by: null
certification_details: null
viz_type: pivot_table_v2
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby: []
groupbyColumns:
- state
groupbyRows:
- name
limit: '25'
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num
type: BIGINT
expressionType: SIMPLE
label: Births
optionName: metric_11
row_limit: 50000
time_range: '100 years ago : now'
viz_type: pivot_table_v2
query_context: null
cache_timeout: null
uuid: b9038f33-aea3-52de-840b-0a32f4c0eb41
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,39 @@
slice_name: Top 10 Boy Name Share
description: null
certified_by: null
certification_details: null
viz_type: echarts_area
params:
adhoc_filters:
- clause: WHERE
comparator: boy
expressionType: SIMPLE
operator: ==
subject: gender
compare_lag: '10'
compare_suffix: o10Y
comparison_type: values
granularity_sqla: ds
groupby:
- name
limit: 10
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num
type: BIGINT
expressionType: SIMPLE
label: Births
optionName: metric_11
row_limit: 50000
stacked_style: expand
time_grain_sqla: P1D
time_range: '100 years ago : now'
viz_type: echarts_area
x_axis_format: smart_date
query_context: null
cache_timeout: null
uuid: f35cca46-bb11-440e-8ba1-7f021bfe52a7
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,35 @@
slice_name: Top 10 California Names Timeseries
description: null
certified_by: null
certification_details: null
viz_type: echarts_timeseries_line
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- name
limit: '10'
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num_california
expression: CASE WHEN state = 'CA' THEN num ELSE 0 END
expressionType: SIMPLE
label: SUM(num_california)
row_limit: 50000
time_range: '100 years ago : now'
timeseries_limit_metric:
aggregate: SUM
column:
column_name: num_california
expression: CASE WHEN state = 'CA' THEN num ELSE 0 END
expressionType: SIMPLE
label: SUM(num_california)
viz_type: echarts_timeseries_line
query_context: null
cache_timeout: null
uuid: 6a587b9e-e28b-5c2a-abb9-c6c1f4fd56b5
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,39 @@
slice_name: Top 10 Girl Name Share
description: null
certified_by: null
certification_details: null
viz_type: echarts_area
params:
adhoc_filters:
- clause: WHERE
comparator: girl
expressionType: SIMPLE
operator: ==
subject: gender
compare_lag: '10'
compare_suffix: o10Y
comparison_type: values
granularity_sqla: ds
groupby:
- name
limit: 10
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num
type: BIGINT
expressionType: SIMPLE
label: Births
optionName: metric_11
row_limit: 50000
stacked_style: expand
time_grain_sqla: P1D
time_range: '100 years ago : now'
viz_type: echarts_area
x_axis_format: smart_date
query_context: null
cache_timeout: null
uuid: da76899a-d75c-467b-b0ce-cfa4819ed1b1
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,31 @@
slice_name: Trends
description: null
certified_by: null
certification_details: null
viz_type: echarts_timeseries_line
params:
compare_lag: '10'
compare_suffix: o10Y
granularity_sqla: ds
groupby:
- name
limit: '25'
markup_type: markdown
metrics:
- aggregate: SUM
column:
column_name: num
type: BIGINT
expressionType: SIMPLE
label: Births
optionName: metric_11
rich_tooltip: true
row_limit: 50000
show_legend: true
time_range: '100 years ago : now'
viz_type: echarts_timeseries_line
query_context: null
cache_timeout: null
uuid: c6024db9-1695-4aa6-b846-42d9c96bfcbf
version: 1.0.0
dataset_uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76


@@ -0,0 +1,120 @@
slice_name: Rise & Fall of Video Game Consoles
description: null
certified_by: null
certification_details: null
viz_type: echarts_area
params:
adhoc_filters: []
annotation_layers: []
bottom_margin: auto
color_scheme: supersetColors
comparison_type: values
contribution: false
datasource: 21__table
granularity_sqla: year
groupby:
- platform
label_colors:
'0': '#1FA8C9'
'1': '#454E7C'
'2600': '#666666'
3DO: '#B2B2B2'
3DS: '#D1C6BC'
Action: '#1FA8C9'
Adventure: '#454E7C'
DC: '#A38F79'
DS: '#8FD3E4'
Europe: '#5AC189'
Fighting: '#5AC189'
GB: '#FDE380'
GBA: '#ACE1C4'
GC: '#5AC189'
GEN: '#3CCCCB'
GG: '#EFA1AA'
Japan: '#FF7F44'
Microsoft Game Studios: '#D1C6BC'
Misc: '#FF7F44'
N64: '#1FA8C9'
NES: '#9EE5E5'
NG: '#A1A6BD'
Nintendo: '#D3B3DA'
North America: '#666666'
Other: '#E04355'
PC: '#EFA1AA'
PCFX: '#FDE380'
PS: '#A1A6BD'
PS2: '#FCC700'
PS3: '#3CCCCB'
PS4: '#B2B2B2'
PSP: '#FEC0A1'
PSV: '#FCC700'
Platform: '#666666'
Puzzle: '#E04355'
Racing: '#FCC700'
Role-Playing: '#A868B7'
SAT: '#A868B7'
SCD: '#8FD3E4'
SNES: '#454E7C'
Shooter: '#3CCCCB'
Simulation: '#A38F79'
Sports: '#8FD3E4'
Strategy: '#A1A6BD'
TG16: '#FEC0A1'
Take-Two Interactive: '#9EE5E5'
WS: '#ACE1C4'
Wii: '#A38F79'
WiiU: '#E04355'
X360: '#A868B7'
XB: '#D3B3DA'
XOne: '#FF7F44'
line_interpolation: linear
metrics:
- aggregate: SUM
column:
column_name: global_sales
description: null
expression: null
filterable: true
groupby: true
id: 887
is_dttm: false
optionName: _col_Global_Sales
python_date_format: null
type: DOUBLE PRECISION
verbose_name: null
expressionType: SIMPLE
hasCustomLabel: false
isNew: false
label: SUM(Global_Sales)
optionName: metric_ufl75addr8c_oqqhdumirpn
sqlExpression: null
order_desc: true
queryFields:
groupby: groupby
metrics: metrics
rich_tooltip: true
rolling_type: None
row_limit: null
show_brush: auto
show_legend: false
slice_id: 659
stacked_style: stream
time_grain_sqla: null
time_range: No filter
url_params:
preselect_filters: '{"1389": {"platform": ["PS", "PS2", "PS3", "PS4"], "genre":
null, "__time_range": "No filter"}}'
viz_type: echarts_area
x_axis_format: smart_date
x_axis_label: Year Published
x_axis_showminmax: true
x_ticks_layout: auto
y_axis_bounds:
- null
- null
y_axis_format: SMART_NUMBER
query_context: null
cache_timeout: null
uuid: 3d926244-6e32-5e42-8ade-7302b83a65d7
version: 1.0.0
dataset_uuid: 53d47c0c-c03d-47f0-b9ac-81225f808283


@@ -0,0 +1,30 @@
slice_name: Box plot
description: null
certified_by: null
certification_details: null
viz_type: box_plot
params:
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby:
- region
limit: '25'
markup_type: markdown
metrics:
- sum__SP_POP_TOTL
row_limit: 50000
show_bubbles: true
since: '1960-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: now
viz_type: box_plot
whisker_options: Min/max (no outliers)
x_ticks_layout: staggered
query_context: null
cache_timeout: null
uuid: d31ba9c7-798b-4f84-87ef-ab31721680a8
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e


@@ -0,0 +1,29 @@
slice_name: Growth Rate
description: null
certified_by: null
certification_details: null
viz_type: echarts_timeseries_line
params:
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby:
- country_name
limit: '25'
markup_type: markdown
metrics:
- sum__SP_POP_TOTL
num_period_compare: '10'
row_limit: 50000
show_bubbles: true
since: '1960-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: '2014-01-02'
viz_type: echarts_timeseries_line
query_context: null
cache_timeout: null
uuid: cfcd7c5e-4759-4b28-bb7c-e2200508e978
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e


@@ -0,0 +1,51 @@
slice_name: Life Expectancy VS Rural %
description: null
certified_by: null
certification_details: null
viz_type: bubble
params:
adhoc_filters:
- clause: WHERE
comparator:
- TCA
- MNP
- DMA
- MHL
- MCO
- SXM
- CYM
- TUV
- IMY
- KNA
- ASM
- ADO
- AMA
- PLW
expressionType: SIMPLE
filterOptionName: 2745eae5
operator: NOT IN
subject: country_code
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_name
granularity_sqla: year
groupby: []
limit: 0
markup_type: markdown
max_bubble_size: '50'
row_limit: 50000
series: region
show_bubbles: true
since: '2011-01-01'
size: sum__SP_POP_TOTL
time_range: '2014-01-01 : 2014-01-02'
until: '2011-01-02'
viz_type: bubble
x: sum__SP_RUR_TOTL_ZS
y: sum__SP_DYN_LE00_IN
query_context: null
cache_timeout: null
uuid: fa927236-7b66-4d03-ae6c-463d2d394123
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e


@@ -0,0 +1,28 @@
slice_name: Most Populated Countries
description: null
certified_by: null
certification_details: null
viz_type: table
params:
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby:
- country_name
limit: '25'
markup_type: markdown
metrics:
- sum__SP_POP_TOTL
row_limit: 50000
show_bubbles: true
since: '2014-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: '2014-01-02'
viz_type: table
query_context: null
cache_timeout: null
uuid: 4183745e-1cc4-4f88-9ae6-973c69845ce4
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e


@@ -0,0 +1,36 @@
slice_name: '% Rural'
description: null
certified_by: null
certification_details: null
viz_type: world_map
params:
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby: []
limit: '25'
markup_type: markdown
metric: sum__SP_RUR_TOTL_ZS
num_period_compare: '10'
row_limit: 50000
secondary_metric:
aggregate: SUM
column:
column_name: SP_RUR_TOTL
optionName: _col_SP_RUR_TOTL
type: DOUBLE
expressionType: SIMPLE
hasCustomLabel: true
label: Rural Population
show_bubbles: true
since: '2014-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: '2014-01-02'
viz_type: world_map
query_context: null
cache_timeout: null
uuid: 8d889488-edb5-40cb-a69c-e2c14f009e2b
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e


@@ -0,0 +1,38 @@
slice_name: Rural Breakdown
description: null
certified_by: null
certification_details: null
viz_type: sunburst_v2
params:
columns:
- region
- country_name
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby: []
limit: '25'
markup_type: markdown
metric: sum__SP_POP_TOTL
row_limit: 50000
secondary_metric:
aggregate: SUM
column:
column_name: SP_RUR_TOTL
optionName: _col_SP_RUR_TOTL
type: DOUBLE
expressionType: SIMPLE
hasCustomLabel: true
label: Rural Population
show_bubbles: true
since: '2011-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: '2011-01-02'
viz_type: sunburst_v2
query_context: null
cache_timeout: null
uuid: 70a2e07b-0f45-4532-96ae-0c6db52d2e7c
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e

@@ -0,0 +1,28 @@
slice_name: Treemap
description: null
certified_by: null
certification_details: null
viz_type: treemap_v2
params:
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby:
- region
- country_code
limit: '25'
markup_type: markdown
metric: sum__SP_POP_TOTL
row_limit: 50000
show_bubbles: true
since: '1960-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: now
viz_type: treemap_v2
query_context: null
cache_timeout: null
uuid: fc941a12-88a0-42e7-ac48-c1ec4ed84640
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e

@@ -0,0 +1,28 @@
slice_name: World's Pop Growth
description: null
certified_by: null
certification_details: null
viz_type: echarts_area
params:
compare_lag: '10'
compare_suffix: o10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby:
- region
limit: '25'
markup_type: markdown
metrics:
- sum__SP_POP_TOTL
row_limit: 50000
show_bubbles: true
since: '1960-01-01'
time_range: '2014-01-01 : 2014-01-02'
until: now
viz_type: echarts_area
query_context: null
cache_timeout: null
uuid: e18b5d28-3a3d-43ea-8a20-b198b44b08e3
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e

@@ -0,0 +1,26 @@
slice_name: World's Population
description: null
certified_by: null
certification_details: null
viz_type: big_number
params:
compare_lag: '10'
compare_suffix: over 10Y
country_fieldtype: cca3
entity: country_code
granularity_sqla: year
groupby: []
limit: '25'
markup_type: markdown
metric: sum__SP_POP_TOTL
row_limit: 50000
show_bubbles: true
since: '2000'
time_range: '2014-01-01 : 2014-01-02'
until: '2014-01-02'
viz_type: big_number
query_context: null
cache_timeout: null
uuid: c50fc6e3-96fc-4e72-877b-2ea1a5e25c7a
version: 1.0.0
dataset_uuid: 3b851597-e0e9-42a1-83e4-55547811742e

@@ -0,0 +1,48 @@
slice_name: Deck.gl Arcs
description: null
certified_by: null
certification_details: null
viz_type: deck_arc
params:
color_picker:
a: 1
b: 135
g: 122
r: 0
datasource: 10__table
end_spatial:
latCol: LATITUDE_DEST
lonCol: LONGITUDE_DEST
type: latlong
granularity_sqla: null
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
row_limit: 5000
slice_id: 42
start_spatial:
latCol: LATITUDE
lonCol: LONGITUDE
type: latlong
stroke_width: 1
time_grain_sqla: null
time_range: ' : '
viewport:
altitude: 1.5
bearing: 8.546256357301871
height: 642
latitude: 44.596651438714254
longitude: -91.84340711201104
maxLatitude: 85.05113
maxPitch: 60
maxZoom: 20
minLatitude: -85.05113
minPitch: 0
minZoom: 0
pitch: 60
width: 997
zoom: 2.929837070560775
viz_type: deck_arc
query_context: null
cache_timeout: null
uuid: 51a68f80-d538-4094-bb9e-346aad49b306
version: 1.0.0
dataset_uuid: 92980b06-cbec-4f34-9c2e-7308edc8c2b9

@@ -0,0 +1,43 @@
slice_name: Deck.gl Grid
description: null
certified_by: null
certification_details: null
viz_type: deck_grid
params:
autozoom: false
color_picker:
a: 1
b: 0
g: 255
r: 14
datasource: 5__table
extruded: true
granularity_sqla: null
grid_size: 120
groupby: []
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
point_radius: Auto
point_radius_fixed:
type: fix
value: 2000
point_radius_unit: Pixels
row_limit: 5000
size: count
spatial:
latCol: LAT
lonCol: LON
type: latlong
time_grain_sqla: null
time_range: No filter
viewport:
bearing: 155.80099696026355
latitude: 37.7942314882596
longitude: -122.42066918995666
pitch: 53.470800300695146
zoom: 12.699690845482069
viz_type: deck_grid
query_context: null
cache_timeout: null
uuid: a1b96ab6-3c0b-4cbc-b13a-a70749e84068
version: 1.0.0
dataset_uuid: 605eaec7-ebf1-4fea-ac4b-07652fcb46e7

@@ -0,0 +1,42 @@
slice_name: Deck.gl Hexagons
description: null
certified_by: null
certification_details: null
viz_type: deck_hex
params:
color_picker:
a: 1
b: 0
g: 255
r: 14
datasource: 5__table
extruded: true
granularity_sqla: null
grid_size: 40
groupby: []
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
point_radius: Auto
point_radius_fixed:
type: fix
value: 2000
point_radius_unit: Pixels
row_limit: 5000
size: count
spatial:
latCol: LAT
lonCol: LON
type: latlong
time_grain_sqla: null
time_range: No filter
viewport:
bearing: -2.3984797349335167
latitude: 37.789795085160335
longitude: -122.40632230075536
pitch: 54.08961642447763
zoom: 13.835465702403654
viz_type: deck_hex
query_context: null
cache_timeout: null
uuid: bdfdce5d-c44d-4c63-8a45-0b2a1a29715b
version: 1.0.0
dataset_uuid: 605eaec7-ebf1-4fea-ac4b-07652fcb46e7

@@ -0,0 +1,48 @@
slice_name: Deck.gl Path
description: null
certified_by: null
certification_details: null
viz_type: deck_path
params:
color_picker:
a: 1
b: 135
g: 122
r: 0
datasource: 12__table
js_columns:
- color
js_data_mutator: "data => data.map(d => ({\n ...d,\n color: colors.hexToRGB(d.extraProps.color)\n\
}));"
js_onclick_href: ''
js_tooltip: ''
line_column: path_json
line_type: json
line_width: 150
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
reverse_long_lat: false
row_limit: 5000
slice_id: 43
time_grain_sqla: null
time_range: ' : '
viewport:
altitude: 1.5
bearing: 0
height: 1094
latitude: 37.73671752604488
longitude: -122.18885402582598
maxLatitude: 85.05113
maxPitch: 60
maxZoom: 20
minLatitude: -85.05113
minPitch: 0
minZoom: 0
pitch: 0
width: 669
zoom: 9.51847667620428
viz_type: deck_path
query_context: null
cache_timeout: null
uuid: 6332daf6-e442-469d-b66c-a6a38423d4c7
version: 1.0.0
dataset_uuid: 151c283f-c076-437a-8e2f-1cf65fe6db0d

@@ -0,0 +1,88 @@
slice_name: Deck.gl Polygons
description: null
certified_by: null
certification_details: null
viz_type: deck_polygon
params:
datasource: 11__table
extruded: true
fill_color_picker:
a: 1
b: 73
g: 65
r: 3
filled: true
granularity_sqla: null
js_columns: []
js_data_mutator: ''
js_onclick_href: ''
js_tooltip: ''
legend_format: .1s
legend_position: tr
line_column: contour
line_type: json
line_width: 10
line_width_unit: meters
linear_color_scheme: oranges
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
metric:
aggregate: SUM
column:
column_name: population
description: null
expression: null
filterable: true
groupby: true
id: 1332
is_dttm: false
optionName: _col_population
python_date_format: null
type: BIGINT
verbose_name: null
expressionType: SIMPLE
hasCustomLabel: true
label: Population
optionName: metric_t2v4qbfiz1_w6qgpx4h2p
sqlExpression: null
multiplier: 0.1
point_radius_fixed:
type: metric
value:
aggregate: null
column: null
expressionType: SQL
hasCustomLabel: null
label: Density
optionName: metric_c5rvwrzoo86_293h6yrv2ic
sqlExpression: SUM(population)/SUM(area)
reverse_long_lat: false
slice_id: 41
stroke_color_picker:
a: 1
b: 135
g: 122
r: 0
stroked: false
time_grain_sqla: null
time_range: ' : '
viewport:
altitude: 1.5
bearing: 37.89506450385642
height: 906
latitude: 37.752020331384834
longitude: -122.43388541747726
maxLatitude: 85.05113
maxPitch: 60
maxZoom: 20
minLatitude: -85.05113
minPitch: 0
minZoom: 0
pitch: 60
width: 667
zoom: 11.133995608594631
viz_type: deck_polygon
query_context: null
cache_timeout: null
uuid: f3236785-149e-4cab-9408-f2cc69afd977
version: 1.0.0
dataset_uuid: a480e881-e90d-4dc8-818e-f9338c3ca839

@@ -0,0 +1,42 @@
slice_name: Deck.gl Scatterplot
description: null
certified_by: null
certification_details: null
viz_type: deck_scatter
params:
color_picker:
a: 0.82
b: 3
g: 0
r: 205
datasource: 5__table
granularity_sqla: null
groupby: []
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
max_radius: 250
min_radius: 1
multiplier: 10
point_radius_fixed:
type: metric
value: count
point_unit: square_m
row_limit: 5000
size: count
spatial:
latCol: LAT
lonCol: LON
type: latlong
time_grain_sqla: null
time_range: ' : '
viewport:
bearing: -4.952916738791771
latitude: 37.78926922909199
longitude: -122.42613341901688
pitch: 4.750411100577438
zoom: 12.729132798697304
viz_type: deck_scatter
query_context: null
cache_timeout: null
uuid: cc75c4d5-8f79-4ffd-8e75-06162d4a867f
version: 1.0.0
dataset_uuid: 605eaec7-ebf1-4fea-ac4b-07652fcb46e7

@@ -0,0 +1,41 @@
slice_name: Deck.gl Screen grid
description: null
certified_by: null
certification_details: null
viz_type: deck_screengrid
params:
color_picker:
a: 1
b: 0
g: 255
r: 14
datasource: 5__table
granularity_sqla: null
grid_size: 20
groupby: []
mapbox_style: https://tile.openstreetmap.org/{z}/{x}/{y}.png
point_radius: Auto
point_radius_fixed:
type: fix
value: 2000
point_unit: square_m
row_limit: 5000
size: count
spatial:
latCol: LAT
lonCol: LON
type: latlong
time_grain_sqla: null
time_range: No filter
viewport:
bearing: -4.952916738791771
latitude: 37.76024135844065
longitude: -122.41827069521386
pitch: 4.750411100577438
zoom: 14.161641703941438
viz_type: deck_screengrid
query_context: null
cache_timeout: null
uuid: 966c802c-4733-489f-b65b-385083c85d90
version: 1.0.0
dataset_uuid: 605eaec7-ebf1-4fea-ac4b-07652fcb46e7

@@ -0,0 +1,83 @@
dashboard_title: Misc Charts
description: null
css: null
slug: misc_charts
certified_by: null
certification_details: null
published: false
uuid: 55a4fe9f-2682-4b0d-84c7-49ded4be11db
position:
CHART-HJOYVMV0E7:
children: []
id: CHART-HJOYVMV0E7
meta:
chartId: 30
height: 69
sliceName: OSM Long/Lat
uuid: a4e90860-c8f5-4c50-8c04-06b2e144809c
width: 4
parents:
- ROOT_ID
- GRID_ID
- ROW-S1MK4M4A4X
- COLUMN-ByUFVf40EQ
type: CHART
CHART-S1WYNz4AVX:
children: []
id: CHART-S1WYNz4AVX
meta:
chartId: 10
height: 69
sliceName: Parallel Coordinates
uuid: 041377c4-0ca9-4a40-8abd-befcd137c0dc
width: 4
parents:
- ROOT_ID
- GRID_ID
- ROW-SytNzNA4X
type: CHART
CHART-rkgF4G4A4X:
children: []
id: CHART-rkgF4G4A4X
meta:
chartId: 31
height: 69
sliceName: Birth in France by department in 2016
uuid: 6bd584f1-0ef5-44fc-8a05-61400f83bb62
width: 4
parents:
- ROOT_ID
- GRID_ID
- ROW-SytNzNA4X
type: CHART
DASHBOARD_VERSION_KEY: v2
GRID_ID:
children:
- ROW-SytNzNA4X
id: GRID_ID
parents:
- ROOT_ID
type: GRID
HEADER_ID:
id: HEADER_ID
meta:
text: Misc Charts
type: HEADER
ROOT_ID:
children:
- GRID_ID
id: ROOT_ID
type: ROOT
ROW-SytNzNA4X:
children:
- CHART-rkgF4G4A4X
- CHART-S1WYNz4AVX
- CHART-HJOYVMV0E7
id: ROW-SytNzNA4X
meta:
background: BACKGROUND_TRANSPARENT
parents:
- ROOT_ID
- GRID_ID
type: ROW
version: 1.0.0

@@ -0,0 +1,263 @@
dashboard_title: USA Births Names
description: null
css: null
slug: births
certified_by: null
certification_details: null
published: true
uuid: fb7d30bc-b160-4371-861c-235d19bf6e25
position:
CHART-6GdlekVise:
children: []
id: CHART-6GdlekVise
meta:
chartId: 19
height: 50
sliceName: Top 10 Girl Name Share
width: 5
uuid: da76899a-d75c-467b-b0ce-cfa4819ed1b1
parents:
- ROOT_ID
- GRID_ID
- ROW-eh0w37bWbR
type: CHART
CHART-6n9jxb30JG:
children: []
id: CHART-6n9jxb30JG
meta:
chartId: 14
height: 36
sliceName: Genders by State
width: 5
uuid: 2cc25185-3d8c-494c-aa3c-14f081ac7e54
parents:
- ROOT_ID
- GRID_ID
- ROW--EyBZQlDi
type: CHART
CHART-Jj9qh1ol-N:
children: []
id: CHART-Jj9qh1ol-N
meta:
chartId: 18
height: 50
sliceName: Boy Name Cloud
width: 4
uuid: 6994ec83-0cf2-4a26-97e2-1e30b0002aa0
parents:
- ROOT_ID
- GRID_ID
- ROW-kzWtcvo8R1
type: CHART
CHART-ODvantb_bF:
children: []
id: CHART-ODvantb_bF
meta:
chartId: 20
height: 50
sliceName: Top 10 Boy Name Share
width: 5
uuid: f35cca46-bb11-440e-8ba1-7f021bfe52a7
parents:
- ROOT_ID
- GRID_ID
- ROW-kzWtcvo8R1
type: CHART
CHART-PAXUUqwmX9:
children: []
id: CHART-PAXUUqwmX9
meta:
chartId: 12
height: 34
sliceName: Genders
width: 3
uuid: fb05dca0-bd3e-4953-a0a5-94b51de3a653
parents:
- ROOT_ID
- GRID_ID
- ROW-2n0XgiHDgs
type: CHART
CHART-_T6n_K9iQN:
children: []
id: CHART-_T6n_K9iQN
meta:
chartId: 13
height: 36
sliceName: Trends
width: 7
uuid: c6024db9-1695-4aa6-b846-42d9c96bfcbf
parents:
- ROOT_ID
- GRID_ID
- ROW--EyBZQlDi
type: CHART
CHART-eNY0tcE_ic:
children: []
id: CHART-eNY0tcE_ic
meta:
chartId: 11
height: 34
sliceName: Participants
width: 3
uuid: 89ae3c32-eafa-4466-82cf-8c4328420782
parents:
- ROOT_ID
- GRID_ID
- ROW-2n0XgiHDgs
type: CHART
CHART-g075mMgyYb:
children: []
id: CHART-g075mMgyYb
meta:
chartId: 15
height: 50
sliceName: Girls
width: 3
uuid: 44cfa30e-af8e-4176-8612-4df0c0609516
parents:
- ROOT_ID
- GRID_ID
- ROW-eh0w37bWbR
type: CHART
CHART-n-zGGE6S1y:
children: []
id: CHART-n-zGGE6S1y
meta:
chartId: 16
height: 50
sliceName: Girl Name Cloud
width: 4
uuid: ba6574fe-a6c0-41ef-9499-1ea6ff36bd2d
parents:
- ROOT_ID
- GRID_ID
- ROW-eh0w37bWbR
type: CHART
CHART-vJIPjmcbD3:
children: []
id: CHART-vJIPjmcbD3
meta:
chartId: 17
height: 50
sliceName: Boys
width: 3
uuid: 0af97164-82f0-42bb-a611-7093e5c56596
parents:
- ROOT_ID
- GRID_ID
- ROW-kzWtcvo8R1
type: CHART
DASHBOARD_VERSION_KEY: v2
GRID_ID:
children:
- ROW-2n0XgiHDgs
- ROW--EyBZQlDi
- ROW-eh0w37bWbR
- ROW-kzWtcvo8R1
- ROW-N-0P6H2KVI
id: GRID_ID
parents:
- ROOT_ID
type: GRID
HEADER_ID:
id: HEADER_ID
meta:
text: Births
type: HEADER
MARKDOWN-zaflB60tbC:
children: []
id: MARKDOWN-zaflB60tbC
meta:
code: <div style="text-align:center"> <h1>Birth Names Dashboard</h1> <img
src="/static/assets/images/babies.png" style="width:50%;"></div>
height: 34
width: 6
parents:
- ROOT_ID
- GRID_ID
- ROW-2n0XgiHDgs
type: MARKDOWN
ROOT_ID:
children:
- GRID_ID
id: ROOT_ID
type: ROOT
ROW--EyBZQlDi:
children:
- CHART-_T6n_K9iQN
- CHART-6n9jxb30JG
id: ROW--EyBZQlDi
meta:
background: BACKGROUND_TRANSPARENT
parents:
- ROOT_ID
- GRID_ID
type: ROW
ROW-2n0XgiHDgs:
children:
- CHART-eNY0tcE_ic
- MARKDOWN-zaflB60tbC
- CHART-PAXUUqwmX9
id: ROW-2n0XgiHDgs
meta:
background: BACKGROUND_TRANSPARENT
parents:
- ROOT_ID
- GRID_ID
type: ROW
ROW-eh0w37bWbR:
children:
- CHART-g075mMgyYb
- CHART-n-zGGE6S1y
- CHART-6GdlekVise
id: ROW-eh0w37bWbR
meta:
background: BACKGROUND_TRANSPARENT
parents:
- ROOT_ID
- GRID_ID
type: ROW
ROW-kzWtcvo8R1:
children:
- CHART-vJIPjmcbD3
- CHART-Jj9qh1ol-N
- CHART-ODvantb_bF
id: ROW-kzWtcvo8R1
meta:
background: BACKGROUND_TRANSPARENT
parents:
- ROOT_ID
- GRID_ID
type: ROW
ROW-N-0P6H2KVI:
children:
- CHART-A62J4Z7R
id: ROW-N-0P6H2KVI
meta:
'0': ROOT_ID
background: BACKGROUND_TRANSPARENT
type: ROW
parents:
- ROOT_ID
- GRID_ID
CHART-A62J4Z7R:
children: []
id: CHART-A62J4Z7R
meta:
chartId: 21
height: 50
sliceName: Pivot Table v2
uuid: 86778b63-19d8-4278-a79f-c90a1b31e162
width: 4
type: CHART
parents:
- ROOT_ID
- GRID_ID
- ROW-N-0P6H2KVI
metadata:
label_colors:
Girls: '#FF69B4'
Boys: '#ADD8E6'
girl: '#FF69B4'
boy: '#ADD8E6'
version: 1.0.0

@@ -0,0 +1,175 @@
dashboard_title: World Bank's Data
description: null
css: null
slug: world_health
certified_by: null
certification_details: null
published: true
uuid: d37232b3-43b9-486a-a132-e387dc0ff8de
position:
CHART-37982887:
children: []
id: CHART-37982887
meta:
chartId: 1
height: 52
sliceName: World's Population
width: 2
uuid: c50fc6e3-96fc-4e72-877b-2ea1a5e25c7a
type: CHART
CHART-17e0f8d8:
children: []
id: CHART-17e0f8d8
meta:
chartId: 2
height: 92
sliceName: Most Populated Countries
width: 3
uuid: 4183745e-1cc4-4f88-9ae6-973c69845ce4
type: CHART
CHART-2ee52f30:
children: []
id: CHART-2ee52f30
meta:
chartId: 3
height: 38
sliceName: Growth Rate
width: 6
uuid: cfcd7c5e-4759-4b28-bb7c-e2200508e978
type: CHART
CHART-2d5b6871:
children: []
id: CHART-2d5b6871
meta:
chartId: 4
height: 52
sliceName: '% Rural'
width: 7
uuid: 8d889488-edb5-40cb-a69c-e2c14f009e2b
type: CHART
CHART-0fd0d252:
children: []
id: CHART-0fd0d252
meta:
chartId: 5
height: 50
sliceName: Life Expectancy VS Rural %
width: 8
uuid: fa927236-7b66-4d03-ae6c-463d2d394123
type: CHART
CHART-97f4cb48:
children: []
id: CHART-97f4cb48
meta:
chartId: 6
height: 38
sliceName: Rural Breakdown
width: 3
uuid: 70a2e07b-0f45-4532-96ae-0c6db52d2e7c
type: CHART
CHART-b5e05d6f:
children: []
id: CHART-b5e05d6f
meta:
chartId: 7
height: 50
sliceName: World's Pop Growth
width: 4
uuid: e18b5d28-3a3d-43ea-8a20-b198b44b08e3
type: CHART
CHART-e76e9f5f:
children: []
id: CHART-e76e9f5f
meta:
chartId: 8
height: 50
sliceName: Box plot
width: 4
uuid: d31ba9c7-798b-4f84-87ef-ab31721680a8
type: CHART
CHART-a4808bba:
children: []
id: CHART-a4808bba
meta:
chartId: 9
height: 50
sliceName: Treemap
width: 8
uuid: fc941a12-88a0-42e7-ac48-c1ec4ed84640
type: CHART
COLUMN-071bbbad:
children:
- ROW-1e064e3c
- ROW-afdefba9
id: COLUMN-071bbbad
meta:
background: BACKGROUND_TRANSPARENT
width: 9
type: COLUMN
COLUMN-fe3914b8:
children:
- CHART-37982887
id: COLUMN-fe3914b8
meta:
background: BACKGROUND_TRANSPARENT
width: 2
type: COLUMN
GRID_ID:
children:
- ROW-46632bc2
- ROW-3fa26c5d
- ROW-812b3f13
id: GRID_ID
type: GRID
HEADER_ID:
id: HEADER_ID
meta:
text: World's Bank Data
type: HEADER
ROOT_ID:
children:
- GRID_ID
id: ROOT_ID
type: ROOT
ROW-1e064e3c:
children:
- COLUMN-fe3914b8
- CHART-2d5b6871
id: ROW-1e064e3c
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
ROW-3fa26c5d:
children:
- CHART-b5e05d6f
- CHART-0fd0d252
id: ROW-3fa26c5d
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
ROW-46632bc2:
children:
- COLUMN-071bbbad
- CHART-17e0f8d8
id: ROW-46632bc2
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
ROW-812b3f13:
children:
- CHART-a4808bba
- CHART-e76e9f5f
id: ROW-812b3f13
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
ROW-afdefba9:
children:
- CHART-2ee52f30
- CHART-97f4cb48
id: ROW-afdefba9
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
DASHBOARD_VERSION_KEY: v2
version: 1.0.0

@@ -0,0 +1,130 @@
dashboard_title: deck.gl Demo
description: null
css: null
slug: deck
certified_by: null
certification_details: null
published: true
uuid: b78795f1-0b33-41a9-a6c7-186f38a526ad
position:
CHART-3afd9d70:
meta:
chartId: 32
sliceName: Deck.gl Scatterplot
width: 6
height: 50
uuid: cc75c4d5-8f79-4ffd-8e75-06162d4a867f
type: CHART
id: CHART-3afd9d70
children: []
CHART-2ee7fa5e:
meta:
chartId: 33
sliceName: Deck.gl Screen grid
width: 6
height: 50
uuid: 966c802c-4733-489f-b65b-385083c85d90
type: CHART
id: CHART-2ee7fa5e
children: []
CHART-201f7715:
meta:
chartId: 34
sliceName: Deck.gl Hexagons
width: 6
height: 50
uuid: bdfdce5d-c44d-4c63-8a45-0b2a1a29715b
type: CHART
id: CHART-201f7715
children: []
CHART-d02f6c40:
meta:
chartId: 35
sliceName: Deck.gl Grid
width: 6
height: 50
uuid: a1b96ab6-3c0b-4cbc-b13a-a70749e84068
type: CHART
id: CHART-d02f6c40
children: []
CHART-2673431d:
meta:
chartId: 36
sliceName: Deck.gl Polygons
width: 6
height: 50
uuid: f3236785-149e-4cab-9408-f2cc69afd977
type: CHART
id: CHART-2673431d
children: []
CHART-85265a60:
meta:
chartId: 37
sliceName: Deck.gl Arcs
width: 6
height: 50
uuid: 51a68f80-d538-4094-bb9e-346aad49b306
type: CHART
id: CHART-85265a60
children: []
CHART-2b87513c:
meta:
chartId: 38
sliceName: Deck.gl Path
width: 6
height: 50
uuid: 6332daf6-e442-469d-b66c-a6a38423d4c7
type: CHART
id: CHART-2b87513c
children: []
GRID_ID:
type: GRID
id: GRID_ID
children:
- ROW-a7b16cb5
- ROW-72c218a5
- ROW-957ba55b
- ROW-af041bdd
HEADER_ID:
meta:
text: deck.gl Demo
type: HEADER
id: HEADER_ID
ROOT_ID:
type: ROOT
id: ROOT_ID
children:
- GRID_ID
ROW-72c218a5:
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
id: ROW-72c218a5
children:
- CHART-d02f6c40
- CHART-201f7715
ROW-957ba55b:
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
id: ROW-957ba55b
children:
- CHART-2673431d
- CHART-85265a60
ROW-a7b16cb5:
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
id: ROW-a7b16cb5
children:
- CHART-3afd9d70
- CHART-2ee7fa5e
ROW-af041bdd:
meta:
background: BACKGROUND_TRANSPARENT
type: ROW
id: ROW-af041bdd
children:
- CHART-2b87513c
DASHBOARD_VERSION_KEY: v2
version: 1.0.0

@@ -0,0 +1,80 @@
table_name: bart_lines
main_dttm_col: null
description: BART lines
default_endpoint: null
offset: 0
cache_timeout: null
catalog: null
schema: public
sql: null
params: null
template_params: null
filter_select_enabled: true
fetch_values_predicate: null
extra: null
normalize_columns: false
always_filter_main_dttm: false
folders: null
uuid: 151c283f-c076-437a-8e2f-1cf65fe6db0d
metrics:
- metric_name: count
verbose_name: COUNT(*)
metric_type: count
expression: COUNT(*)
description: null
d3format: null
currency: null
extra: null
warning_text: null
columns:
- column_name: name
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(255)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: color
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(255)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: path_json
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: polyline
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
version: 1.0.0
database_uuid: a2dc77af-e654-49bb-b321-40f6b559a1ee
data: https://cdn.jsdelivr.net/gh/apache-superset/examples-data@master/bart-lines.json.gz

@@ -0,0 +1,209 @@
table_name: birth_france_by_region
main_dttm_col: dttm
description: null
default_endpoint: null
offset: 0
cache_timeout: null
catalog: null
schema: public
sql: null
params: null
template_params: null
filter_select_enabled: true
fetch_values_predicate: null
extra: null
normalize_columns: false
always_filter_main_dttm: false
folders: null
uuid: c21dd48d-9a4b-4a08-a926-47c3601c2a8d
metrics:
- metric_name: avg__2004
verbose_name: null
metric_type: null
expression: AVG("2004")
description: null
d3format: null
currency: null
extra: null
warning_text: null
- metric_name: count
verbose_name: COUNT(*)
metric_type: count
expression: COUNT(*)
description: null
d3format: null
currency: null
extra: null
warning_text: null
columns:
- column_name: DEPT_ID
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(10)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2010'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2003'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2004'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2005'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2006'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2007'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2008'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2009'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2011'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2012'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2013'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: '2014'
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: dttm
verbose_name: null
is_dttm: true
is_active: true
type: DATE
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
version: 1.0.0
database_uuid: a2dc77af-e654-49bb-b321-40f6b559a1ee
data: https://cdn.jsdelivr.net/gh/apache-superset/examples-data@master/paris_iris.json.gz

@@ -0,0 +1,137 @@
table_name: birth_names
main_dttm_col: ds
description: null
default_endpoint: null
offset: 0
cache_timeout: null
catalog: null
schema: public
sql: null
params: null
template_params: null
filter_select_enabled: true
fetch_values_predicate: null
extra: null
normalize_columns: false
always_filter_main_dttm: false
folders: null
uuid: 4ec507ac-bece-4d2b-8dc3-cfb7c3515e76
metrics:
- metric_name: count
verbose_name: COUNT(*)
metric_type: count
expression: COUNT(*)
description: null
d3format: null
currency: null
extra: null
warning_text: null
- metric_name: sum__num
verbose_name: null
metric_type: null
expression: SUM(num)
description: null
d3format: null
currency: null
extra: null
warning_text: null
columns:
- column_name: num_california
verbose_name: null
is_dttm: false
is_active: true
type: null
advanced_data_type: null
groupby: true
filterable: true
expression: CASE WHEN state = 'CA' THEN num ELSE 0 END
description: null
python_date_format: null
extra: null
- column_name: ds
verbose_name: null
is_dttm: true
is_active: true
type: TIMESTAMP WITHOUT TIME ZONE
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: state
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(10)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: gender
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(16)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: name
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(255)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: num_boys
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: num_girls
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: num
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
version: 1.0.0
database_uuid: a2dc77af-e654-49bb-b321-40f6b559a1ee
data: https://cdn.jsdelivr.net/gh/apache-superset/examples-data@master/birth_names2.json.gz

@@ -0,0 +1,560 @@
table_name: flights
main_dttm_col: ds
description: Random set of flights in the US
default_endpoint: null
offset: 0
cache_timeout: null
catalog: null
schema: public
sql: null
params: null
template_params: null
filter_select_enabled: true
fetch_values_predicate: null
extra: null
normalize_columns: false
always_filter_main_dttm: false
folders: null
uuid: 92980b06-cbec-4f34-9c2e-7308edc8c2b9
metrics:
- metric_name: count
verbose_name: COUNT(*)
metric_type: count
expression: COUNT(*)
description: null
d3format: null
currency: null
extra: null
warning_text: null
columns:
- column_name: ds
verbose_name: null
is_dttm: true
is_active: true
type: TIMESTAMP WITHOUT TIME ZONE
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: LATE_AIRCRAFT_DELAY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: ARRIVAL_DELAY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DEPARTURE_DELAY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: WEATHER_DELAY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: AIRLINE_DELAY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: AIR_SYSTEM_DELAY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: ARRIVAL_TIME
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: SECURITY_DELAY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: LATITUDE_DEST
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: ELAPSED_TIME
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DEPARTURE_TIME
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: LATITUDE
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: AIR_TIME
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: TAXI_IN
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: TAXI_OUT
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: LONGITUDE_DEST
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: LONGITUDE
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: WHEELS_OFF
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: WHEELS_ON
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: CANCELLATION_REASON
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: SCHEDULED_ARRIVAL
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DESTINATION_AIRPORT
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: CANCELLED
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: SCHEDULED_DEPARTURE
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DISTANCE
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DAY_OF_WEEK
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DAY
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: TAIL_NUMBER
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: YEAR
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: STATE_DEST
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: AIRPORT_DEST
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: AIRLINE
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: STATE
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: ORIGIN_AIRPORT
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: AIRPORT
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: FLIGHT_NUMBER
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: SCHEDULED_TIME
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DIVERTED
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: MONTH
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: CITY_DEST
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: COUNTRY_DEST
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: CITY
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: COUNTRY
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
version: 1.0.0
database_uuid: a2dc77af-e654-49bb-b321-40f6b559a1ee
data: https://cdn.jsdelivr.net/gh/apache-superset/examples-data@master/flight_data.csv.gz
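The dataset export above follows a consistent shape: top-level table metadata, a `metrics` list, a `columns` list, and a trailing `version`/`database_uuid`/`data` block. As a hedged illustration (not Superset's actual import validator), a minimal check of a parsed config — the required-key set is an assumption inferred from the fields shown in these exports:

```python
def validate_dataset_config(config: dict) -> None:
    """Check a parsed dataset YAML (e.g. the result of yaml.safe_load) for the
    top-level keys present in the exports above; raises ValueError on problems.
    The key set is inferred from these examples, not from Superset's importer."""
    required = {"table_name", "uuid", "columns", "metrics", "version", "database_uuid"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    for col in config["columns"]:
        # every column entry in these exports carries at least a column_name
        if "column_name" not in col:
            raise ValueError("column entry without column_name")
```

In practice `yaml.safe_load(path.read_text())` would produce the dict passed in here.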


@@ -0,0 +1,212 @@
table_name: long_lat
main_dttm_col: datetime
description: null
default_endpoint: null
offset: 0
cache_timeout: null
catalog: null
schema: public
sql: null
params: null
template_params: null
filter_select_enabled: true
fetch_values_predicate: null
extra: null
normalize_columns: false
always_filter_main_dttm: false
folders: null
uuid: 605eaec7-ebf1-4fea-ac4b-07652fcb46e7
metrics:
- metric_name: count
verbose_name: COUNT(*)
metric_type: count
expression: COUNT(*)
description: null
d3format: null
currency: null
extra: null
warning_text: null
columns:
- column_name: datetime
verbose_name: null
is_dttm: true
is_active: true
type: TIMESTAMP WITHOUT TIME ZONE
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: LAT
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: DISTRICT
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: CITY
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: ID
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: REGION
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: LON
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: radius_miles
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: occupancy
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: delimited
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(60)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: geohash
verbose_name: null
is_dttm: false
is_active: true
type: VARCHAR(12)
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: POSTCODE
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: NUMBER
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: STREET
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: UNIT
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
version: 1.0.0
database_uuid: a2dc77af-e654-49bb-b321-40f6b559a1ee
data: https://cdn.jsdelivr.net/gh/apache-superset/examples-data@master/san_francisco.csv.gz


@@ -0,0 +1,80 @@
table_name: sf_population_polygons
main_dttm_col: null
description: Population density of San Francisco
default_endpoint: null
offset: 0
cache_timeout: null
catalog: null
schema: public
sql: null
params: null
template_params: null
filter_select_enabled: true
fetch_values_predicate: null
extra: null
normalize_columns: false
always_filter_main_dttm: false
folders: null
uuid: a480e881-e90d-4dc8-818e-f9338c3ca839
metrics:
- metric_name: count
verbose_name: COUNT(*)
metric_type: count
expression: COUNT(*)
description: null
d3format: null
currency: null
extra: null
warning_text: null
columns:
- column_name: area
verbose_name: null
is_dttm: false
is_active: true
type: DOUBLE PRECISION
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: population
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: zipcode
verbose_name: null
is_dttm: false
is_active: true
type: BIGINT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
- column_name: contour
verbose_name: null
is_dttm: false
is_active: true
type: TEXT
advanced_data_type: null
groupby: true
filterable: true
expression: null
description: null
python_date_format: null
extra: null
version: 1.0.0
database_uuid: a2dc77af-e654-49bb-b321-40f6b559a1ee
data: https://cdn.jsdelivr.net/gh/apache-superset/examples-data@master/sf_population.json.gz

File diff suppressed because it is too large.


@@ -14,42 +14,13 @@
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from .bart_lines import load_bart_lines
from .big_data import load_big_data
from .birth_names import load_birth_names
from .country_map import load_country_map_data
from .css_templates import load_css_templates
from .deck import load_deck_dash
from .energy import load_energy
from .flights import load_flights
from .long_lat import load_long_lat_data
from .misc_dashboard import load_misc_dashboard
from .multiformat_time_series import load_multiformat_time_series
from .paris import load_paris_iris_geojson
from .random_time_series import load_random_time_series_data
from .sf_population_polygons import load_sf_population_polygons
from .supported_charts_dashboard import load_supported_charts_dashboard
from .tabbed_dashboard import load_tabbed_dashboard
from .utils import load_examples_from_configs
from .world_bank import load_world_bank_health_n_pop
from .utils import cleanup_old_examples, load_examples_from_configs
__all__ = [
"load_bart_lines",
"load_big_data",
"load_birth_names",
"cleanup_old_examples",
"load_country_map_data",
"load_css_templates",
"load_deck_dash",
"load_energy",
"load_flights",
"load_long_lat_data",
"load_misc_dashboard",
"load_multiformat_time_series",
"load_paris_iris_geojson",
"load_random_time_series_data",
"load_sf_population_polygons",
"load_supported_charts_dashboard",
"load_tabbed_dashboard",
"load_examples_from_configs",
"load_world_bank_health_n_pop",
]


@@ -1,547 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
from superset import db
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.utils import json
from superset.utils.core import DatasourceType
from .helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
update_slice_ids,
)
logger = logging.getLogger(__name__)
COLOR_RED = {"r": 205, "g": 0, "b": 3, "a": 0.82}
POSITION_JSON = """\
{
"CHART-3afd9d70": {
"meta": {
"chartId": 66,
"sliceName": "Deck.gl Scatterplot",
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-3afd9d70",
"children": []
},
"CHART-2ee7fa5e": {
"meta": {
"chartId": 67,
"sliceName": "Deck.gl Screen grid",
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-2ee7fa5e",
"children": []
},
"CHART-201f7715": {
"meta": {
"chartId": 68,
"sliceName": "Deck.gl Hexagons",
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-201f7715",
"children": []
},
"CHART-d02f6c40": {
"meta": {
"chartId": 69,
"sliceName": "Deck.gl Grid",
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-d02f6c40",
"children": []
},
"CHART-2673431d": {
"meta": {
"chartId": 70,
"sliceName": "Deck.gl Polygons",
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-2673431d",
"children": []
},
"CHART-85265a60": {
"meta": {
"chartId": 71,
"sliceName": "Deck.gl Arcs",
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-85265a60",
"children": []
},
"CHART-2b87513c": {
"meta": {
"chartId": 72,
"sliceName": "Deck.gl Path",
"width": 6,
"height": 50
},
"type": "CHART",
"id": "CHART-2b87513c",
"children": []
},
"GRID_ID": {
"type": "GRID",
"id": "GRID_ID",
"children": [
"ROW-a7b16cb5",
"ROW-72c218a5",
"ROW-957ba55b",
"ROW-af041bdd"
]
},
"HEADER_ID": {
"meta": {
"text": "deck.gl Demo"
},
"type": "HEADER",
"id": "HEADER_ID"
},
"ROOT_ID": {
"type": "ROOT",
"id": "ROOT_ID",
"children": [
"GRID_ID"
]
},
"ROW-72c218a5": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-72c218a5",
"children": [
"CHART-d02f6c40",
"CHART-201f7715"
]
},
"ROW-957ba55b": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-957ba55b",
"children": [
"CHART-2673431d",
"CHART-85265a60"
]
},
"ROW-a7b16cb5": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-a7b16cb5",
"children": [
"CHART-3afd9d70",
"CHART-2ee7fa5e"
]
},
"ROW-af041bdd": {
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"type": "ROW",
"id": "ROW-af041bdd",
"children": [
"CHART-2b87513c"
]
},
"DASHBOARD_VERSION_KEY": "v2"
}"""
def load_deck_dash() -> None: # pylint: disable=too-many-statements
logger.debug("Loading deck.gl dashboard")
slices = []
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name="long_lat").first()
slice_data = {
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"color_picker": COLOR_RED,
"datasource": "5__table",
"granularity_sqla": None,
"groupby": [],
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"multiplier": 10,
"point_radius_fixed": {"type": "metric", "value": "count"},
"point_unit": "square_m",
"min_radius": 1,
"max_radius": 250,
"row_limit": 5000,
"time_range": " : ",
"size": "count",
"time_grain_sqla": None,
"viewport": {
"bearing": -4.952916738791771,
"latitude": 37.78926922909199,
"longitude": -122.42613341901688,
"pitch": 4.750411100577438,
"zoom": 12.729132798697304,
},
"viz_type": "deck_scatter",
}
logger.debug("Creating Scatterplot slice")
slc = Slice(
slice_name="Deck.gl Scatterplot",
viz_type="deck_scatter",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"point_unit": "square_m",
"row_limit": 5000,
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"granularity_sqla": None,
"size": "count",
"viz_type": "deck_screengrid",
"time_range": "No filter",
"point_radius": "Auto",
"color_picker": {"a": 1, "r": 14, "b": 0, "g": 255},
"grid_size": 20,
"viewport": {
"zoom": 14.161641703941438,
"longitude": -122.41827069521386,
"bearing": -4.952916738791771,
"latitude": 37.76024135844065,
"pitch": 4.750411100577438,
},
"point_radius_fixed": {"type": "fix", "value": 2000},
"datasource": "5__table",
"time_grain_sqla": None,
"groupby": [],
}
logger.debug("Creating Screen Grid slice")
slc = Slice(
slice_name="Deck.gl Screen grid",
viz_type="deck_screengrid",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"row_limit": 5000,
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"granularity_sqla": None,
"size": "count",
"viz_type": "deck_hex",
"time_range": "No filter",
"point_radius_unit": "Pixels",
"point_radius": "Auto",
"color_picker": {"a": 1, "r": 14, "b": 0, "g": 255},
"grid_size": 40,
"extruded": True,
"viewport": {
"latitude": 37.789795085160335,
"pitch": 54.08961642447763,
"zoom": 13.835465702403654,
"longitude": -122.40632230075536,
"bearing": -2.3984797349335167,
},
"point_radius_fixed": {"type": "fix", "value": 2000},
"datasource": "5__table",
"time_grain_sqla": None,
"groupby": [],
}
logger.debug("Creating Hex slice")
slc = Slice(
slice_name="Deck.gl Hexagons",
viz_type="deck_hex",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"autozoom": False,
"spatial": {"type": "latlong", "lonCol": "LON", "latCol": "LAT"},
"row_limit": 5000,
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"granularity_sqla": None,
"size": "count",
"viz_type": "deck_grid",
"point_radius_unit": "Pixels",
"point_radius": "Auto",
"time_range": "No filter",
"color_picker": {"a": 1, "r": 14, "b": 0, "g": 255},
"grid_size": 120,
"extruded": True,
"viewport": {
"longitude": -122.42066918995666,
"bearing": 155.80099696026355,
"zoom": 12.699690845482069,
"latitude": 37.7942314882596,
"pitch": 53.470800300695146,
},
"point_radius_fixed": {"type": "fix", "value": 2000},
"datasource": "5__table",
"time_grain_sqla": None,
"groupby": [],
}
logger.debug("Creating Grid slice")
slc = Slice(
slice_name="Deck.gl Grid",
viz_type="deck_grid",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
polygon_tbl = (
db.session.query(table).filter_by(table_name="sf_population_polygons").first()
)
slice_data = {
"datasource": "11__table",
"viz_type": "deck_polygon",
"slice_id": 41,
"granularity_sqla": None,
"time_grain_sqla": None,
"time_range": " : ",
"line_column": "contour",
"metric": {
"aggregate": "SUM",
"column": {
"column_name": "population",
"description": None,
"expression": None,
"filterable": True,
"groupby": True,
"id": 1332,
"is_dttm": False,
"optionName": "_col_population",
"python_date_format": None,
"type": "BIGINT",
"verbose_name": None,
},
"expressionType": "SIMPLE",
"hasCustomLabel": True,
"label": "Population",
"optionName": "metric_t2v4qbfiz1_w6qgpx4h2p",
"sqlExpression": None,
},
"line_type": "json",
"linear_color_scheme": "oranges",
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"viewport": {
"longitude": -122.43388541747726,
"latitude": 37.752020331384834,
"zoom": 11.133995608594631,
"bearing": 37.89506450385642,
"pitch": 60,
"width": 667,
"height": 906,
"altitude": 1.5,
"maxZoom": 20,
"minZoom": 0,
"maxPitch": 60,
"minPitch": 0,
"maxLatitude": 85.05113,
"minLatitude": -85.05113,
},
"reverse_long_lat": False,
"fill_color_picker": {"r": 3, "g": 65, "b": 73, "a": 1},
"stroke_color_picker": {"r": 0, "g": 122, "b": 135, "a": 1},
"filled": True,
"stroked": False,
"extruded": True,
"multiplier": 0.1,
"line_width": 10,
"line_width_unit": "meters",
"point_radius_fixed": {
"type": "metric",
"value": {
"aggregate": None,
"column": None,
"expressionType": "SQL",
"hasCustomLabel": None,
"label": "Density",
"optionName": "metric_c5rvwrzoo86_293h6yrv2ic",
"sqlExpression": "SUM(population)/SUM(area)",
},
},
"js_columns": [],
"js_data_mutator": "",
"js_tooltip": "",
"js_onclick_href": "",
"legend_format": ".1s",
"legend_position": "tr",
}
logger.debug("Creating Polygon slice")
slc = Slice(
slice_name="Deck.gl Polygons",
viz_type="deck_polygon",
datasource_type=DatasourceType.TABLE,
datasource_id=polygon_tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"datasource": "10__table",
"viz_type": "deck_arc",
"slice_id": 42,
"granularity_sqla": None,
"time_grain_sqla": None,
"time_range": " : ",
"start_spatial": {
"type": "latlong",
"latCol": "LATITUDE",
"lonCol": "LONGITUDE",
},
"end_spatial": {
"type": "latlong",
"latCol": "LATITUDE_DEST",
"lonCol": "LONGITUDE_DEST",
},
"row_limit": 5000,
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"viewport": {
"altitude": 1.5,
"bearing": 8.546256357301871,
"height": 642,
"latitude": 44.596651438714254,
"longitude": -91.84340711201104,
"maxLatitude": 85.05113,
"maxPitch": 60,
"maxZoom": 20,
"minLatitude": -85.05113,
"minPitch": 0,
"minZoom": 0,
"pitch": 60,
"width": 997,
"zoom": 2.929837070560775,
},
"color_picker": {"r": 0, "g": 122, "b": 135, "a": 1},
"stroke_width": 1,
}
logger.debug("Creating Arc slice")
slc = Slice(
slice_name="Deck.gl Arcs",
viz_type="deck_arc",
datasource_type=DatasourceType.TABLE,
datasource_id=db.session.query(table)
.filter_by(table_name="flights")
.first()
.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slice_data = {
"datasource": "12__table",
"slice_id": 43,
"viz_type": "deck_path",
"time_grain_sqla": None,
"time_range": " : ",
"line_column": "path_json",
"line_type": "json",
"row_limit": 5000,
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"viewport": {
"longitude": -122.18885402582598,
"latitude": 37.73671752604488,
"zoom": 9.51847667620428,
"bearing": 0,
"pitch": 0,
"width": 669,
"height": 1094,
"altitude": 1.5,
"maxZoom": 20,
"minZoom": 0,
"maxPitch": 60,
"minPitch": 0,
"maxLatitude": 85.05113,
"minLatitude": -85.05113,
},
"color_picker": {"r": 0, "g": 122, "b": 135, "a": 1},
"line_width": 150,
"reverse_long_lat": False,
"js_columns": ["color"],
"js_data_mutator": "data => data.map(d => ({\n"
" ...d,\n"
" color: colors.hexToRGB(d.extraProps.color)\n"
"}));",
"js_tooltip": "",
"js_onclick_href": "",
}
logger.debug("Creating Path slice")
slc = Slice(
slice_name="Deck.gl Path",
viz_type="deck_path",
datasource_type=DatasourceType.TABLE,
datasource_id=db.session.query(table)
.filter_by(table_name="bart_lines")
.first()
.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
slices.append(slc)
slug = "deck"
logger.debug("Creating a dashboard")
title = "deck.gl Demo"
dash = db.session.query(Dashboard).filter_by(slug=slug).first()
if not dash:
dash = Dashboard()
db.session.add(dash)
dash.published = True
js = POSITION_JSON
pos = json.loads(js)
slices = update_slice_ids(pos)
dash.position_json = json.dumps(pos, indent=4)
dash.dashboard_title = title
dash.slug = slug
dash.slices = slices
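The `POSITION_JSON` constant in this removed module keys every dashboard component by ID, with chart entries carrying a `meta.sliceName`. A small helper (a sketch, not part of the removed code) shows how the CHART entries can be pulled out — the same kind of type filter `update_slice_ids` applies:

```python
import json

def chart_names(position_json: str) -> list[str]:
    """Return the sorted sliceName of every CHART component in a
    position_json payload like POSITION_JSON above."""
    pos = json.loads(position_json)
    return sorted(
        c["meta"]["sliceName"]
        for c in pos.values()
        # DASHBOARD_VERSION_KEY maps to a plain string, so guard with isinstance
        if isinstance(c, dict) and c.get("type") == "CHART"
    )
```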

View File

@@ -1,76 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
import pandas as pd
from sqlalchemy import DateTime, inspect
import superset.utils.database as database_utils
from superset import db
from superset.sql.parse import Table
from .helpers import get_table_connector_registry, read_example_data
logger = logging.getLogger(__name__)
def load_flights(only_metadata: bool = False, force: bool = False) -> None:
"""Load flights data from a gzipped CSV in the repo, joined with airport lat/long info"""
tbl_name = "flights"
database = database_utils.get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
pdf = read_example_data(
"examples://flight_data.csv.gz", encoding="latin-1", compression="gzip"
)
# Loading airports info to join and get lat/long
airports = read_example_data(
"examples://airports.csv.gz", encoding="latin-1", compression="gzip"
)
airports = airports.set_index("IATA_CODE")
pdf["ds"] = (
pdf.YEAR.map(str) + "-0" + pdf.MONTH.map(str) + "-0" + pdf.DAY.map(str)
)
pdf.ds = pd.to_datetime(pdf.ds)
pdf.drop(columns=["DAY", "MONTH", "YEAR"])
pdf = pdf.join(airports, on="ORIGIN_AIRPORT", rsuffix="_ORIG")
pdf = pdf.join(airports, on="DESTINATION_AIRPORT", rsuffix="_DEST")
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={"ds": DateTime},
index=False,
)
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name=tbl_name).first()
if not tbl:
tbl = table(table_name=tbl_name, schema=schema)
db.session.add(tbl)
tbl.description = "Random set of flights in the US"
tbl.database = database
tbl.filter_select_enabled = True
tbl.fetch_metadata()
logger.debug("Done loading table!")
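The removed loader assembles its `ds` column by concatenating a fixed `"-0"` separator between YEAR/MONTH/DAY, which only pads single-digit components (October–December produce strings like `2015-012-25`). A sketch of the same assembly with explicit zero-padding — an alternative, not the removed implementation:

```python
import pandas as pd

def build_ds(pdf: pd.DataFrame) -> pd.Series:
    """Assemble a YYYY-MM-DD datetime column from integer YEAR/MONTH/DAY
    columns, zero-padding each component to a fixed width."""
    return pd.to_datetime(
        pdf["YEAR"].astype(str)
        + "-" + pdf["MONTH"].astype(str).str.zfill(2)
        + "-" + pdf["DAY"].astype(str).str.zfill(2)
    )
```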


@@ -49,7 +49,7 @@ from urllib.error import HTTPError
import pandas as pd
from superset import app, db
from superset import db
from superset.connectors.sqla.models import SqlaTable
from superset.models.slice import Slice
from superset.utils import json
@@ -78,11 +78,6 @@ def get_table_connector_registry() -> Any:
return SqlaTable
def get_examples_folder() -> str:
"""Return local path to the examples folder (when vendored)."""
return os.path.join(app.config["BASE_DIR"], "examples")
def update_slice_ids(pos: dict[Any, Any]) -> list[Slice]:
"""Update slice ids in ``position_json`` and return the slices found."""
slice_components = [


@@ -1,127 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import datetime
import logging
import random
import geohash
from sqlalchemy import DateTime, Float, inspect, String
import superset.utils.database as database_utils
from superset import db
from superset.models.slice import Slice
from superset.sql.parse import Table
from superset.utils.core import DatasourceType
from .helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
read_example_data,
)
logger = logging.getLogger(__name__)
def load_long_lat_data(only_metadata: bool = False, force: bool = False) -> None:
"""Loading lat/long data from a csv file in the repo"""
tbl_name = "long_lat"
database = database_utils.get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
pdf = read_example_data(
"examples://san_francisco.csv.gz", encoding="utf-8", compression="gzip"
)
start = datetime.datetime.now().replace(
hour=0, minute=0, second=0, microsecond=0
)
pdf["datetime"] = [
start + datetime.timedelta(hours=i * 24 / (len(pdf) - 1))
for i in range(len(pdf))
]
pdf["occupancy"] = [random.randint(1, 6) for _ in range(len(pdf))] # noqa: S311
pdf["radius_miles"] = [random.uniform(1, 3) for _ in range(len(pdf))] # noqa: S311
pdf["geohash"] = pdf[["LAT", "LON"]].apply(
lambda x: geohash.encode(*x), axis=1
)
pdf["delimited"] = pdf["LAT"].map(str).str.cat(pdf["LON"].map(str), sep=",")
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"longitude": Float(),
"latitude": Float(),
"number": Float(),
"street": String(100),
"unit": String(10),
"city": String(50),
"district": String(50),
"region": String(50),
"postcode": Float(),
"id": String(100),
"datetime": DateTime(),
"occupancy": Float(),
"radius_miles": Float(),
"geohash": String(12),
"delimited": String(60),
},
index=False,
)
logger.debug("Done loading table!")
logger.debug("-" * 80)
logger.debug("Creating table reference")
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name).first()
if not obj:
obj = table(table_name=tbl_name, schema=schema)
db.session.add(obj)
obj.main_dttm_col = "datetime"
obj.database = database
obj.filter_select_enabled = True
obj.fetch_metadata()
tbl = obj
slice_data = {
"granularity_sqla": "day",
"since": "2014-01-01",
"until": "now",
"viz_type": "mapbox",
"all_columns_x": "LON",
"all_columns_y": "LAT",
"mapbox_style": "https://tile.openstreetmap.org/{z}/{x}/{y}.png",
"all_columns": ["occupancy"],
"row_limit": 500000,
}
logger.debug("Creating a slice")
slc = Slice(
slice_name="OSM Long/Lat",
viz_type="osm",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
misc_dash_slices.add(slc.slice_name)
merge_slice(slc)
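The synthetic `datetime` spread and the `delimited` lat/lon column built above can be sketched with the standard library alone (the sample coordinates below are made up, standing in for `san_francisco.csv.gz`):

```python
import datetime

# Hypothetical sample rows standing in for the san_francisco.csv.gz data
lats = [37.77, 37.78, 37.79]
lons = [-122.41, -122.42, -122.43]

# Spread N rows evenly across a 24-hour window, as load_long_lat_data does
start = datetime.datetime(2024, 1, 1)
n = len(lats)
datetimes = [start + datetime.timedelta(hours=i * 24 / (n - 1)) for i in range(n)]

# Build the "delimited" column: latitude and longitude joined with a comma
delimited = [f"{lat},{lon}" for lat, lon in zip(lats, lons)]

print(delimited[0])           # 37.77,-122.41
print(datetimes[-1] - start)  # 1 day, 0:00:00
```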


@@ -1,145 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
import textwrap
from superset import db
from superset.models.dashboard import Dashboard
from superset.utils import json
from .helpers import update_slice_ids
logger = logging.getLogger(__name__)
DASH_SLUG = "misc_charts"
def load_misc_dashboard() -> None:
"""Loading a dashboard featuring misc charts"""
logger.debug("Creating the dashboard")
db.session.expunge_all()
dash = db.session.query(Dashboard).filter_by(slug=DASH_SLUG).first()
if not dash:
dash = Dashboard()
db.session.add(dash)
js = textwrap.dedent(
"""\
{
"CHART-HJOYVMV0E7": {
"children": [],
"id": "CHART-HJOYVMV0E7",
"meta": {
"chartId": 3969,
"height": 69,
"sliceName": "OSM Long/Lat",
"uuid": "164efe31-295b-4408-aaa6-2f4bfb58a212",
"width": 4
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-S1MK4M4A4X",
"COLUMN-ByUFVf40EQ"
],
"type": "CHART"
},
"CHART-S1WYNz4AVX": {
"children": [],
"id": "CHART-S1WYNz4AVX",
"meta": {
"chartId": 3989,
"height": 69,
"sliceName": "Parallel Coordinates",
"uuid": "e84f7e74-031a-47bb-9f80-ae0694dcca48",
"width": 4
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-SytNzNA4X"
],
"type": "CHART"
},
"CHART-rkgF4G4A4X": {
"children": [],
"id": "CHART-rkgF4G4A4X",
"meta": {
"chartId": 3970,
"height": 69,
"sliceName": "Birth in France by department in 2016",
"uuid": "54583ae9-c99a-42b5-a906-7ee2adfe1fb1",
"width": 4
},
"parents": [
"ROOT_ID",
"GRID_ID",
"ROW-SytNzNA4X"
],
"type": "CHART"
},
"DASHBOARD_VERSION_KEY": "v2",
"GRID_ID": {
"children": [
"ROW-SytNzNA4X"
],
"id": "GRID_ID",
"parents": [
"ROOT_ID"
],
"type": "GRID"
},
"HEADER_ID": {
"id": "HEADER_ID",
"meta": {
"text": "Misc Charts"
},
"type": "HEADER"
},
"ROOT_ID": {
"children": [
"GRID_ID"
],
"id": "ROOT_ID",
"type": "ROOT"
},
"ROW-SytNzNA4X": {
"children": [
"CHART-rkgF4G4A4X",
"CHART-S1WYNz4AVX",
"CHART-HJOYVMV0E7"
],
"id": "ROW-SytNzNA4X",
"meta": {
"background": "BACKGROUND_TRANSPARENT"
},
"parents": [
"ROOT_ID",
"GRID_ID"
],
"type": "ROW"
}
}
"""
)
pos = json.loads(js)
slices = update_slice_ids(pos)
dash.dashboard_title = "Misc Charts"
dash.position_json = json.dumps(pos, indent=4)
dash.slug = DASH_SLUG
dash.slices = slices
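`update_slice_ids()` walks exactly this kind of v2 position tree. A minimal sketch of the traversal, using a made-up two-node layout, shows the CHART-node lookup it performs before rewriting `chartId` values:

```python
# Minimal v2 position tree (node ids are made up); only the structure matters
position = {
    "DASHBOARD_VERSION_KEY": "v2",
    "ROOT_ID": {"children": ["GRID_ID"], "id": "ROOT_ID", "type": "ROOT"},
    "GRID_ID": {"children": ["ROW-1"], "id": "GRID_ID", "type": "GRID"},
    "ROW-1": {"children": ["CHART-a"], "id": "ROW-1", "type": "ROW"},
    "CHART-a": {
        "children": [],
        "id": "CHART-a",
        "type": "CHART",
        "meta": {"chartId": 3969, "sliceName": "OSM Long/Lat"},
    },
}

# Collect slice names referenced by CHART nodes; this is the mapping needed
# to reconcile position entries with the slices actually in the database
chart_names = [
    node["meta"]["sliceName"]
    for node in position.values()
    if isinstance(node, dict) and node.get("type") == "CHART"
]
print(chart_names)  # ['OSM Long/Lat']
```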


@@ -1,134 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
from typing import Optional
import pandas as pd
from sqlalchemy import BigInteger, Date, DateTime, inspect, String
from superset import app, db
from superset.models.slice import Slice
from superset.sql.parse import Table
from superset.utils.core import DatasourceType
from ..utils.database import get_example_database # noqa: TID252
from .helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
read_example_data,
)
logger = logging.getLogger(__name__)
def load_multiformat_time_series( # pylint: disable=too-many-locals
only_metadata: bool = False, force: bool = False
) -> None:
"""Loading time series data from a zip file in the repo"""
tbl_name = "multiformat_time_series"
database = get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
pdf = read_example_data(
"examples://multiformat_time_series.json.gz", compression="gzip"
)
# TODO(bkyryliuk): move load examples data into the pytest fixture
if database.backend == "presto":
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
pdf.ds = pdf.ds.dt.strftime("%Y-%m-%d")
pdf.ds2 = pd.to_datetime(pdf.ds2, unit="s")
                pdf.ds2 = pdf.ds2.dt.strftime("%Y-%m-%d %H:%M:%S")
else:
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
pdf.ds2 = pd.to_datetime(pdf.ds2, unit="s")
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"ds": String(255) if database.backend == "presto" else Date,
"ds2": String(255) if database.backend == "presto" else DateTime,
"epoch_s": BigInteger,
"epoch_ms": BigInteger,
"string0": String(100),
"string1": String(100),
"string2": String(100),
"string3": String(100),
},
index=False,
)
logger.debug("Done loading table!")
logger.debug("-" * 80)
logger.debug(f"Creating table [{tbl_name}] reference")
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name).first()
if not obj:
obj = table(table_name=tbl_name, schema=schema)
db.session.add(obj)
obj.main_dttm_col = "ds"
obj.database = database
obj.filter_select_enabled = True
dttm_and_expr_dict: dict[str, tuple[Optional[str], None]] = {
"ds": (None, None),
"ds2": (None, None),
"epoch_s": ("epoch_s", None),
"epoch_ms": ("epoch_ms", None),
"string2": ("%Y%m%d-%H%M%S", None),
"string1": ("%Y-%m-%d^%H:%M:%S", None),
"string0": ("%Y-%m-%d %H:%M:%S.%f", None),
"string3": ("%Y/%m/%d%H:%M:%S.%f", None),
}
for col in obj.columns:
dttm_and_expr = dttm_and_expr_dict[col.column_name]
col.python_date_format = dttm_and_expr[0]
col.database_expression = dttm_and_expr[1]
col.is_dttm = True
obj.fetch_metadata()
tbl = obj
logger.debug("Creating Heatmap charts")
for i, col in enumerate(tbl.columns):
slice_data = {
"metrics": ["count"],
"granularity_sqla": col.column_name,
"row_limit": app.config["ROW_LIMIT"],
"since": "2015",
"until": "2016",
"viz_type": "cal_heatmap",
"domain_granularity": "month",
"subdomain_granularity": "day",
}
slc = Slice(
slice_name=f"Calendar Heatmap multiformat {i}",
viz_type="cal_heatmap",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
misc_dash_slices.add("Calendar Heatmap multiformat 0")
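Each `python_date_format` registered in `dttm_and_expr_dict` above is a `strptime` pattern; the unusual layouts parse like this (sample timestamp strings are made up to match each format):

```python
from datetime import datetime

# Custom layouts from dttm_and_expr_dict, applied to matching sample strings
samples = {
    "%Y%m%d-%H%M%S": "20150102-030405",
    "%Y-%m-%d^%H:%M:%S": "2015-01-02^03:04:05",
    "%Y-%m-%d %H:%M:%S.%f": "2015-01-02 03:04:05.000006",
}
parsed = {fmt: datetime.strptime(value, fmt) for fmt, value in samples.items()}
print(parsed["%Y%m%d-%H%M%S"])  # 2015-01-02 03:04:05
```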


@@ -1,67 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
from sqlalchemy import inspect, String, Text
import superset.utils.database as database_utils
from superset import db
from superset.sql.parse import Table
from superset.utils import json
from .helpers import get_table_connector_registry, read_example_data
logger = logging.getLogger(__name__)
def load_paris_iris_geojson(only_metadata: bool = False, force: bool = False) -> None:
tbl_name = "paris_iris_mapping"
database = database_utils.get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
df = read_example_data("examples://paris_iris.json.gz", compression="gzip")
df["features"] = df.features.map(json.dumps)
df.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"color": String(255),
"name": String(255),
"features": Text,
"type": Text,
},
index=False,
)
logger.debug(f"Creating table {tbl_name} reference")
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name=tbl_name).first()
if not tbl:
tbl = table(table_name=tbl_name, schema=schema)
db.session.add(tbl)
tbl.description = "Map of Paris"
tbl.database = database
tbl.filter_select_enabled = True
tbl.fetch_metadata()
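The `df["features"] = df.features.map(json.dumps)` step above serializes nested objects to JSON text, since a relational `Text` column cannot hold dicts. A stdlib sketch with made-up rows standing in for the `paris_iris.json.gz` records:

```python
import json

# Made-up rows standing in for the paris_iris.json.gz records
rows = [
    {"name": "IRIS-1", "color": "#AAAAAA", "features": {"type": "Feature", "id": 1}},
    {"name": "IRIS-2", "color": "#BBBBBB", "features": {"type": "Feature", "id": 2}},
]

# Replace each nested dict with its JSON string, ready for a Text column
for row in rows:
    row["features"] = json.dumps(row["features"])

print(rows[0]["features"])  # {"type": "Feature", "id": 1}
```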


@@ -1,101 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
import pandas as pd
from sqlalchemy import DateTime, inspect, String
import superset.utils.database as database_utils
from superset import app, db
from superset.models.slice import Slice
from superset.sql.parse import Table
from superset.utils.core import DatasourceType
from .helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
read_example_data,
)
logger = logging.getLogger(__name__)
def load_random_time_series_data(
only_metadata: bool = False, force: bool = False
) -> None:
"""Loading random time series data from a zip file in the repo"""
tbl_name = "random_time_series"
database = database_utils.get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
pdf = read_example_data(
"examples://random_time_series.json.gz", compression="gzip"
)
if database.backend == "presto":
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
                pdf.ds = pdf.ds.dt.strftime("%Y-%m-%d %H:%M:%S")
else:
pdf.ds = pd.to_datetime(pdf.ds, unit="s")
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={"ds": DateTime if database.backend != "presto" else String(255)},
index=False,
)
logger.debug("Done loading table!")
logger.debug("-" * 80)
logger.debug(f"Creating table [{tbl_name}] reference")
table = get_table_connector_registry()
obj = db.session.query(table).filter_by(table_name=tbl_name).first()
if not obj:
obj = table(table_name=tbl_name, schema=schema)
db.session.add(obj)
obj.main_dttm_col = "ds"
obj.database = database
obj.filter_select_enabled = True
obj.fetch_metadata()
tbl = obj
slice_data = {
"granularity_sqla": "ds",
"row_limit": app.config["ROW_LIMIT"],
"since": "2019-01-01",
"until": "2019-02-01",
"metrics": ["count"],
"viz_type": "cal_heatmap",
"domain_granularity": "month",
"subdomain_granularity": "day",
}
logger.debug("Creating a slice")
slc = Slice(
slice_name="Calendar Heatmap",
viz_type="cal_heatmap",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(slice_data),
)
merge_slice(slc)
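`pd.to_datetime(pdf.ds, unit="s")` interprets the `ds` column as Unix epoch seconds (in UTC). The same conversion with the standard library, using a made-up epoch value:

```python
from datetime import datetime, timezone

# Made-up epoch-seconds value, converted as pd.to_datetime(..., unit="s")
# would interpret it (UTC)
epoch_s = 1546300800  # 2019-01-01 00:00:00 UTC
ts = datetime.fromtimestamp(epoch_s, tz=timezone.utc)
print(ts.isoformat())  # 2019-01-01T00:00:00+00:00
```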


@@ -1,71 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
from sqlalchemy import BigInteger, Float, inspect, Text
import superset.utils.database as database_utils
from superset import db
from superset.sql.parse import Table
from superset.utils import json
from .helpers import get_table_connector_registry, read_example_data
logger = logging.getLogger(__name__)
def load_sf_population_polygons(
only_metadata: bool = False, force: bool = False
) -> None:
tbl_name = "sf_population_polygons"
database = database_utils.get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
df = read_example_data(
"examples://sf_population.json.gz", compression="gzip"
)
df["contour"] = df.contour.map(json.dumps)
df.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=500,
dtype={
"zipcode": BigInteger,
"population": BigInteger,
"contour": Text,
"area": Float,
},
index=False,
)
logger.debug(f"Creating table {tbl_name} reference")
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name=tbl_name).first()
if not tbl:
tbl = table(table_name=tbl_name, schema=schema)
db.session.add(tbl)
tbl.description = "Population density of San Francisco"
tbl.database = database
tbl.filter_select_enabled = True
tbl.fetch_metadata()


@@ -30,6 +30,54 @@ _logger = logging.getLogger(__name__)
YAML_EXTENSIONS = {".yaml", ".yml"}
# Known example UUID from YAML files (USA Births dashboard)
BIRTHS_DASHBOARD_UUID = "fb7d30bc-b160-4371-861c-235d19bf6e25"
BIRTHS_DASHBOARD_SLUG = "births"
def _has_old_examples() -> bool:
"""
Check if old pre-YAML examples exist by looking for a known dashboard.
If the births dashboard exists with a different UUID than expected,
we know these are old examples.
"""
from superset import db
from superset.models.dashboard import Dashboard
try:
# Check if births dashboard exists with wrong UUID (indicating old examples)
births_dashboard = (
db.session.query(Dashboard).filter_by(slug=BIRTHS_DASHBOARD_SLUG).first()
)
if births_dashboard and str(births_dashboard.uuid) != BIRTHS_DASHBOARD_UUID:
_logger.info(
f"Found old births dashboard with UUID {births_dashboard.uuid} "
f"(expected {BIRTHS_DASHBOARD_UUID})"
)
return True
except Exception as e:
_logger.debug(f"Error checking for old examples: {e}")
# If we can't check (e.g., database not set up), assume no old examples
return False
return False
def cleanup_old_examples() -> bool:
"""
Clean up old pre-YAML examples if they exist.
Returns True if cleanup was performed, False otherwise.
"""
from superset.cli.examples import clear_old_examples
try:
return clear_old_examples()
except Exception as e:
_logger.error(f"Failed to clean up examples: {e}")
raise
def load_examples_from_configs(
force_data: bool = False, load_test_data: bool = False
@@ -37,9 +85,19 @@ def load_examples_from_configs(
"""
Load all the examples inside superset/examples/configs/.
"""
# Check if old examples exist before loading new ones
if _has_old_examples():
_logger.warning(
"Old, pre-YAML examples detected, skipping example import. "
"Existing examples will be preserved."
)
return
contents = load_contents(load_test_data)
_logger.info(f"Found {len(contents)} YAML configuration files to import")
command = ImportExamplesCommand(contents, overwrite=True, force_data=force_data)
command.run()
_logger.info("Finished loading examples from YAML configuration files")
def load_contents(load_test_data: bool = False) -> dict[str, Any]:
@@ -59,7 +117,9 @@ def load_contents(load_test_data: bool = False) -> dict[str, Any]:
for child_name in (files("superset") / str(path_name)).iterdir()
)
elif Path(str(path_name)).suffix.lower() in YAML_EXTENSIONS:
if load_test_data and test_re.search(str(path_name)) is None:
# When load_test_data is False, skip test files
# When load_test_data is True, load all files (including test files)
if not load_test_data and test_re.search(str(path_name)) is not None:
continue
contents[Path(str(path_name))] = (
files("superset") / str(path_name)


@@ -1,478 +0,0 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
import os
import pandas as pd
from sqlalchemy import DateTime, inspect, String
from sqlalchemy.sql import column
import superset.utils.database
from superset import app, db
from superset.connectors.sqla.models import BaseDatasource, SqlMetric
from superset.examples.helpers import (
get_examples_folder,
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
read_example_data,
update_slice_ids,
)
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.sql.parse import Table
from superset.utils import core as utils, json
from superset.utils.core import DatasourceType
logger = logging.getLogger(__name__)
def load_world_bank_health_n_pop( # pylint: disable=too-many-locals
only_metadata: bool = False,
force: bool = False,
sample: bool = False,
) -> None:
"""Loads the world bank health dataset, slices and a dashboard"""
tbl_name = "wb_health_population"
database = superset.utils.database.get_example_database()
with database.get_sqla_engine() as engine:
schema = inspect(engine).default_schema_name
table_exists = database.has_table(Table(tbl_name, schema))
if not only_metadata and (not table_exists or force):
pdf = read_example_data("examples://countries.json.gz", compression="gzip")
pdf.columns = [col.replace(".", "_") for col in pdf.columns]
if database.backend == "presto":
pdf.year = pd.to_datetime(pdf.year)
                pdf.year = pdf.year.dt.strftime("%Y-%m-%d %H:%M:%S")
else:
pdf.year = pd.to_datetime(pdf.year)
pdf = pdf.head(100) if sample else pdf
pdf.to_sql(
tbl_name,
engine,
schema=schema,
if_exists="replace",
chunksize=50,
dtype={
# TODO(bkyryliuk): use TIMESTAMP type for presto
"year": DateTime if database.backend != "presto" else String(255),
"country_code": String(3),
"country_name": String(255),
"region": String(255),
},
method="multi",
index=False,
)
logger.debug("Creating table [wb_health_population] reference")
table = get_table_connector_registry()
tbl = db.session.query(table).filter_by(table_name=tbl_name).first()
if not tbl:
tbl = table(table_name=tbl_name, schema=schema)
db.session.add(tbl)
tbl.description = utils.readfile(
os.path.join(get_examples_folder(), "countries.md")
)
tbl.main_dttm_col = "year"
tbl.database = database
tbl.filter_select_enabled = True
    metrics = [
        "sum__SP_POP_TOTL",
        "sum__SH_DYN_AIDS",
"sum__SP_RUR_TOTL_ZS",
"sum__SP_DYN_LE00_IN",
"sum__SP_RUR_TOTL",
]
for metric in metrics:
if not any(col.metric_name == metric for col in tbl.metrics):
aggr_func = metric[:3]
col = str(column(metric[5:]).compile(db.engine))
tbl.metrics.append(
SqlMetric(metric_name=metric, expression=f"{aggr_func}({col})")
)
tbl.fetch_metadata()
slices = create_slices(tbl)
misc_dash_slices.add(slices[-1].slice_name)
for slc in slices:
merge_slice(slc)
logger.debug("Creating a World's Health Bank dashboard")
dash_name = "World Bank's Data"
slug = "world_health"
dash = db.session.query(Dashboard).filter_by(slug=slug).first()
if not dash:
dash = Dashboard()
db.session.add(dash)
dash.published = True
pos = dashboard_positions
slices = update_slice_ids(pos)
dash.dashboard_title = dash_name
dash.position_json = json.dumps(pos, indent=4)
dash.slug = slug
dash.slices = slices
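The metric expressions built above rely on the `sum__<column>` naming convention: the first three characters name the aggregate and everything after the double underscore names the column. The slicing works like this:

```python
# Derive an aggregate expression from a metric name like "sum__SP_POP_TOTL"
metric = "sum__SP_POP_TOTL"
aggr_func = metric[:3]  # "sum" -- the first three characters are the aggregate
col = metric[5:]        # "SP_POP_TOTL" -- everything after "sum__"
expression = f"{aggr_func}({col})"
print(expression)  # sum(SP_POP_TOTL)
```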
def create_slices(tbl: BaseDatasource) -> list[Slice]:
metric = "sum__SP_POP_TOTL"
metrics = ["sum__SP_POP_TOTL"]
secondary_metric = {
"aggregate": "SUM",
"column": {
"column_name": "SP_RUR_TOTL",
"optionName": "_col_SP_RUR_TOTL",
"type": "DOUBLE",
},
"expressionType": "SIMPLE",
"hasCustomLabel": True,
"label": "Rural Population",
}
defaults = {
"compare_lag": "10",
"compare_suffix": "o10Y",
"limit": "25",
"granularity_sqla": "year",
"groupby": [],
"row_limit": app.config["ROW_LIMIT"],
"since": "2014-01-01",
"until": "2014-01-02",
"time_range": "2014-01-01 : 2014-01-02",
"markup_type": "markdown",
"country_fieldtype": "cca3",
"entity": "country_code",
"show_bubbles": True,
}
return [
Slice(
slice_name="World's Population",
viz_type="big_number",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="2000",
viz_type="big_number",
compare_lag="10",
metric="sum__SP_POP_TOTL",
compare_suffix="over 10Y",
),
),
Slice(
slice_name="Most Populated Countries",
viz_type="table",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="table",
metrics=["sum__SP_POP_TOTL"],
groupby=["country_name"],
),
),
Slice(
slice_name="Growth Rate",
viz_type="echarts_timeseries_line",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="echarts_timeseries_line",
since="1960-01-01",
metrics=["sum__SP_POP_TOTL"],
num_period_compare="10",
groupby=["country_name"],
),
),
Slice(
slice_name="% Rural",
viz_type="world_map",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="world_map",
metric="sum__SP_RUR_TOTL_ZS",
num_period_compare="10",
secondary_metric=secondary_metric,
),
),
Slice(
slice_name="Life Expectancy VS Rural %",
viz_type="bubble",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="bubble",
since="2011-01-01",
until="2011-01-02",
series="region",
limit=0,
entity="country_name",
x="sum__SP_RUR_TOTL_ZS",
y="sum__SP_DYN_LE00_IN",
size="sum__SP_POP_TOTL",
max_bubble_size="50",
adhoc_filters=[
{
"clause": "WHERE",
"expressionType": "SIMPLE",
"filterOptionName": "2745eae5",
"comparator": [
"TCA",
"MNP",
"DMA",
"MHL",
"MCO",
"SXM",
"CYM",
"TUV",
"IMY",
"KNA",
"ASM",
"ADO",
"AMA",
"PLW",
],
"operator": "NOT IN",
"subject": "country_code",
}
],
),
),
Slice(
slice_name="Rural Breakdown",
viz_type="sunburst_v2",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
viz_type="sunburst_v2",
columns=["region", "country_name"],
since="2011-01-01",
until="2011-01-02",
metric=metric,
secondary_metric=secondary_metric,
),
),
Slice(
slice_name="World's Pop Growth",
viz_type="echarts_area",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="1960-01-01",
until="now",
viz_type="echarts_area",
groupby=["region"],
metrics=metrics,
),
),
Slice(
slice_name="Box plot",
viz_type="box_plot",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="1960-01-01",
until="now",
whisker_options="Min/max (no outliers)",
x_ticks_layout="staggered",
viz_type="box_plot",
groupby=["region"],
metrics=metrics,
),
),
Slice(
slice_name="Treemap",
viz_type="treemap_v2",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="1960-01-01",
until="now",
viz_type="treemap_v2",
metric="sum__SP_POP_TOTL",
groupby=["region", "country_code"],
),
),
Slice(
slice_name="Parallel Coordinates",
viz_type="para",
datasource_type=DatasourceType.TABLE,
datasource_id=tbl.id,
params=get_slice_json(
defaults,
since="2011-01-01",
until="2012-01-01",
viz_type="para",
limit=100,
metrics=["sum__SP_POP_TOTL", "sum__SP_RUR_TOTL_ZS", "sum__SH_DYN_AIDS"],
secondary_metric="sum__SP_POP_TOTL",
series="country_name",
),
),
]
dashboard_positions = {
"CHART-37982887": {
"children": [],
"id": "CHART-37982887",
"meta": {
"chartId": 41,
"height": 52,
"sliceName": "World's Population",
"width": 2,
},
"type": "CHART",
},
"CHART-17e0f8d8": {
"children": [],
"id": "CHART-17e0f8d8",
"meta": {
"chartId": 42,
"height": 92,
"sliceName": "Most Populated Countries",
"width": 3,
},
"type": "CHART",
},
"CHART-2ee52f30": {
"children": [],
"id": "CHART-2ee52f30",
"meta": {"chartId": 43, "height": 38, "sliceName": "Growth Rate", "width": 6},
"type": "CHART",
},
"CHART-2d5b6871": {
"children": [],
"id": "CHART-2d5b6871",
"meta": {"chartId": 44, "height": 52, "sliceName": "% Rural", "width": 7},
"type": "CHART",
},
"CHART-0fd0d252": {
"children": [],
"id": "CHART-0fd0d252",
"meta": {
"chartId": 45,
"height": 50,
"sliceName": "Life Expectancy VS Rural %",
"width": 8,
},
"type": "CHART",
},
"CHART-97f4cb48": {
"children": [],
"id": "CHART-97f4cb48",
"meta": {
"chartId": 46,
"height": 38,
"sliceName": "Rural Breakdown",
"width": 3,
},
"type": "CHART",
},
"CHART-b5e05d6f": {
"children": [],
"id": "CHART-b5e05d6f",
"meta": {
"chartId": 47,
"height": 50,
"sliceName": "World's Pop Growth",
"width": 4,
},
"type": "CHART",
},
"CHART-e76e9f5f": {
"children": [],
"id": "CHART-e76e9f5f",
"meta": {"chartId": 48, "height": 50, "sliceName": "Box plot", "width": 4},
"type": "CHART",
},
"CHART-a4808bba": {
"children": [],
"id": "CHART-a4808bba",
"meta": {"chartId": 49, "height": 50, "sliceName": "Treemap", "width": 8},
"type": "CHART",
},
"COLUMN-071bbbad": {
"children": ["ROW-1e064e3c", "ROW-afdefba9"],
"id": "COLUMN-071bbbad",
"meta": {"background": "BACKGROUND_TRANSPARENT", "width": 9},
"type": "COLUMN",
},
"COLUMN-fe3914b8": {
"children": ["CHART-37982887"],
"id": "COLUMN-fe3914b8",
"meta": {"background": "BACKGROUND_TRANSPARENT", "width": 2},
"type": "COLUMN",
},
"GRID_ID": {
"children": ["ROW-46632bc2", "ROW-3fa26c5d", "ROW-812b3f13"],
"id": "GRID_ID",
"type": "GRID",
},
"HEADER_ID": {
"id": "HEADER_ID",
"meta": {"text": "World's Bank Data"},
"type": "HEADER",
},
"ROOT_ID": {"children": ["GRID_ID"], "id": "ROOT_ID", "type": "ROOT"},
"ROW-1e064e3c": {
"children": ["COLUMN-fe3914b8", "CHART-2d5b6871"],
"id": "ROW-1e064e3c",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-3fa26c5d": {
"children": ["CHART-b5e05d6f", "CHART-0fd0d252"],
"id": "ROW-3fa26c5d",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-46632bc2": {
"children": ["COLUMN-071bbbad", "CHART-17e0f8d8"],
"id": "ROW-46632bc2",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-812b3f13": {
"children": ["CHART-a4808bba", "CHART-e76e9f5f"],
"id": "ROW-812b3f13",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"ROW-afdefba9": {
"children": ["CHART-2ee52f30", "CHART-97f4cb48"],
"id": "ROW-afdefba9",
"meta": {"background": "BACKGROUND_TRANSPARENT"},
"type": "ROW",
},
"DASHBOARD_VERSION_KEY": "v2",
}

tests/fixtures/birth_names_helpers.py

@@ -0,0 +1,220 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Test helper functions for birth_names dataset.
Extracted from the original birth_names.py example file.
"""
import textwrap
from typing import Union
from sqlalchemy.sql import column
from superset import app, db
from superset.connectors.sqla.models import SqlaTable, SqlMetric, TableColumn
from superset.examples.helpers import (
get_slice_json,
merge_slice,
misc_dash_slices,
update_slice_ids,
)
from superset.models.dashboard import Dashboard
from superset.models.slice import Slice
from superset.utils import json
from superset.utils.core import DatasourceType
def gen_filter(
subject: str, comparator: str, operator: str = "=="
) -> dict[str, Union[bool, str]]:
return {
"clause": "WHERE",
"comparator": comparator,
"expressionType": "SIMPLE",
"operator": operator,
"subject": subject,
}
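`gen_filter()` produces the adhoc-filter dicts that slice params expect. A self-contained copy of the helper with an example call:

```python
from typing import Union

def gen_filter(
    subject: str, comparator: str, operator: str = "=="
) -> dict[str, Union[bool, str]]:
    # Same shape as the helper defined above
    return {
        "clause": "WHERE",
        "comparator": comparator,
        "expressionType": "SIMPLE",
        "operator": operator,
        "subject": subject,
    }

flt = gen_filter("gender", "girl")
print(flt["subject"], flt["operator"])  # gender ==
```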
def _set_table_metadata(datasource: SqlaTable, database) -> None:
datasource.main_dttm_col = "ds"
datasource.database = database
datasource.filter_select_enabled = True
datasource.fetch_metadata()
def _add_table_metrics(datasource: SqlaTable) -> None:
# By accessing the attribute first, we make sure `datasource.columns` and
# `datasource.metrics` are already loaded. Otherwise accessing them later
# may trigger an unnecessary and unexpected `after_update` event.
columns, metrics = datasource.columns, datasource.metrics
if not any(col.column_name == "num_california" for col in columns):
col_state = str(column("state").compile(db.engine))
col_num = str(column("num").compile(db.engine))
columns.append(
TableColumn(
column_name="num_california",
expression=f"CASE WHEN {col_state} = 'CA' THEN {col_num} ELSE 0 END",
)
)
if not any(col.metric_name == "sum__num" for col in metrics):
col = str(column("num").compile(db.engine))
metrics.append(SqlMetric(metric_name="sum__num", expression=f"SUM({col})"))
for col in columns:
if col.column_name == "ds": # type: ignore
col.is_dttm = True # type: ignore
break
datasource.columns = columns
datasource.metrics = metrics
def create_slices(tbl: SqlaTable) -> tuple[list[Slice], list[Slice]]:
metrics = [
{
"expressionType": "SIMPLE",
"column": {"column_name": "num", "type": "BIGINT"},
"aggregate": "SUM",
"label": "Births",
"optionName": "metric_11",
}
]
metric = "sum__num"
defaults = {
"compare_lag": "10",
"compare_suffix": "o10Y",
"limit": "25",
"granularity_sqla": "ds",
"groupby": [],
"row_limit": app.config["ROW_LIMIT"],
"time_range": "100 years ago : now",
"viz_type": "table",
"markup_type": "markdown",
}
slice_kwargs = {
"datasource_id": tbl.id,
"datasource_type": DatasourceType.TABLE,
}
slices = [
Slice(
**slice_kwargs,
slice_name="Participants",
viz_type="big_number",
params=get_slice_json(
defaults,
viz_type="big_number",
granularity_sqla="ds",
compare_lag="5",
compare_suffix="over 5Y",
metric=metric,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Genders",
viz_type="pie",
params=get_slice_json(
defaults, viz_type="pie", groupby=["gender"], metric=metric
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Trends",
viz_type="echarts_timeseries_line",
params=get_slice_json(
defaults,
viz_type="echarts_timeseries_line",
groupby=["name"],
granularity_sqla="ds",
rich_tooltip=True,
show_legend=True,
metrics=metrics,
),
owners=[],
),
Slice(
**slice_kwargs,
slice_name="Pivot Table",
viz_type="pivot_table_v2",
params=get_slice_json(
defaults,
viz_type="pivot_table_v2",
groupbyRows=["name"],
groupbyColumns=["state"],
metrics=metrics,
),
owners=[],
),
]
misc_slices: list[Slice] = []
for slc in slices:
merge_slice(slc)
for slc in misc_slices:
merge_slice(slc)
misc_dash_slices.add(slc.slice_name)
return slices, misc_slices
def create_dashboard(slices: list[Slice]) -> Dashboard:
dash = db.session.query(Dashboard).filter_by(slug="births").first()
if not dash:
dash = Dashboard()
db.session.add(dash)
dash.published = True
dash.json_metadata = textwrap.dedent(
"""\
{
"label_colors": {
"Girls": "#FF69B4",
"Boys": "#ADD8E6",
"girl": "#FF69B4",
"boy": "#ADD8E6"
}
}"""
)
pos = {
"DASHBOARD_VERSION_KEY": "v2",
"ROOT_ID": {"children": ["GRID_ID"], "id": "ROOT_ID", "type": "ROOT"},
"GRID_ID": {
"children": [],
"id": "GRID_ID",
"parents": ["ROOT_ID"],
"type": "GRID",
},
}
dash.slices = [slc for slc in slices if slc.viz_type != "markup"]
update_slice_ids(pos)
dash.dashboard_title = "USA Births Names"
dash.position_json = json.dumps(pos, indent=4)
dash.slug = "births"
return dash
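As an aside on the helpers above: `gen_filter` emits the "SIMPLE" ad-hoc filter shape that Superset stores in slice params. Since the helper is pure, it can be exercised without a Superset install; a minimal sketch (the `gender`/`girl` values are only illustrative):

```python
from typing import Union


def gen_filter(
    subject: str, comparator: str, operator: str = "=="
) -> dict[str, Union[bool, str]]:
    # Same pure helper as in birth_names above: a SIMPLE
    # WHERE-clause filter targeting `subject`.
    return {
        "clause": "WHERE",
        "comparator": comparator,
        "expressionType": "SIMPLE",
        "operator": operator,
        "subject": subject,
    }


flt = gen_filter("gender", "girl")
assert flt == {
    "clause": "WHERE",
    "comparator": "girl",
    "expressionType": "SIMPLE",
    "operator": "==",
    "subject": "gender",
}
```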


@@ -23,17 +23,16 @@ from sqlalchemy.sql import column
import superset.utils.database as database_utils
from superset import db
from superset.connectors.sqla.models import SqlMetric
-from superset.models.slice import Slice
-from superset.sql.parse import Table
-from superset.utils.core import DatasourceType
-from .helpers import (
+from superset.examples.helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
misc_dash_slices,
read_example_data,
)
+from superset.models.slice import Slice
+from superset.sql.parse import Table
+from superset.utils.core import DatasourceType
logger = logging.getLogger(__name__)


@@ -22,19 +22,18 @@ from sqlalchemy import inspect
from superset import db
from superset.connectors.sqla.models import SqlaTable
-from superset.models.dashboard import Dashboard
-from superset.models.slice import Slice
-from superset.sql.parse import Table
-from superset.utils import json
-from superset.utils.core import DatasourceType
-from ..utils.database import get_example_database  # noqa: TID252
-from .helpers import (
+from superset.examples.helpers import (
get_slice_json,
get_table_connector_registry,
merge_slice,
update_slice_ids,
)
+from superset.models.dashboard import Dashboard
+from superset.models.slice import Slice
+from superset.sql.parse import Table
+from superset.utils import json
+from superset.utils.core import DatasourceType
+from superset.utils.database import get_example_database
DASH_SLUG = "supported_charts_dash"
logger = logging.getLogger(__name__)


@@ -18,11 +18,10 @@ import logging
import textwrap
from superset import db
+from superset.examples.helpers import update_slice_ids
from superset.models.dashboard import Dashboard
from superset.utils import json
-from .helpers import update_slice_ids
logger = logging.getLogger(__name__)

tests/fixtures/world_bank_helpers.py (new file)

@@ -0,0 +1,43 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Test helper functions for world_bank dataset.
Extracted from the original world_bank.py example file.
"""
from superset.connectors.sqla.models import SqlaTable
from superset.models.slice import Slice


def create_slices(tbl: SqlaTable) -> list[Slice]:
    """Create minimal test slices for world bank data."""
    # Return empty list for now - tests should use YAML examples instead
    return []


# Minimal dashboard position data
dashboard_positions = {
    "DASHBOARD_VERSION_KEY": "v2",
    "ROOT_ID": {"children": ["GRID_ID"], "id": "ROOT_ID", "type": "ROOT"},
    "GRID_ID": {
        "children": [],
        "id": "GRID_ID",
        "parents": ["ROOT_ID"],
        "type": "GRID",
    },
}
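Both helper files end up with the same minimal v2 position grid: an empty `GRID_ID` node under `ROOT_ID`. A small stdlib-only sketch of how that structure round-trips through `json.dumps`, the way `create_dashboard` serializes it into the dashboard's `position_json`:

```python
import json

# Minimal v2 dashboard layout, matching dashboard_positions above:
# an empty grid parented to the root node.
positions = {
    "DASHBOARD_VERSION_KEY": "v2",
    "ROOT_ID": {"children": ["GRID_ID"], "id": "ROOT_ID", "type": "ROOT"},
    "GRID_ID": {
        "children": [],
        "id": "GRID_ID",
        "parents": ["ROOT_ID"],
        "type": "GRID",
    },
}

# Dashboard.position_json is stored as a JSON string
position_json = json.dumps(positions, indent=4)
roundtrip = json.loads(position_json)
assert roundtrip == positions
assert roundtrip["GRID_ID"]["parents"] == ["ROOT_ID"]
```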


@@ -20,14 +20,15 @@ import pytest
from superset import app, db # noqa: F401
from superset.common.db_query_status import QueryStatus
from superset.connectors.sqla.models import SqlaTable
from superset.extensions import cache_manager
from superset.utils import json
from tests.integration_tests.base_tests import SupersetTestCase
from tests.integration_tests.constants import ADMIN_USERNAME
from tests.integration_tests.fixtures.birth_names_dashboard import (
load_birth_names_dashboard_with_slices, # noqa: F401
load_birth_names_data, # noqa: F401
)
from tests.integration_tests.fixtures.query_context import get_query_context
class TestCache(SupersetTestCase):
@@ -47,11 +48,10 @@ class TestCache(SupersetTestCase):
app.config["DATA_CACHE_CONFIG"] = {"CACHE_TYPE": "NullCache"}
cache_manager.init_app(app)
-slc = self.get_slice("Pivot Table v2")
+slc = self.get_slice("Pivot Table")
-# Get chart metadata
-metadata = self.get_json_resp(f"api/v1/chart/{slc.id}")
-query_context = json.loads(metadata.get("result").get("query_context"))
+# Get query context using the fixture
+query_context = get_query_context("birth_names")
query_context["form_data"] = slc.form_data
# Request chart for the first time
@@ -83,11 +83,16 @@ class TestCache(SupersetTestCase):
}
cache_manager.init_app(app)
-slc = self.get_slice("Pivot Table v2")
+slc = self.get_slice("Pivot Table")
# Get chart metadata
-metadata = self.get_json_resp(f"api/v1/chart/{slc.id}")
-query_context = json.loads(metadata.get("result").get("query_context"))
+# Clear the datasource cache timeout to test fallback to DATA_CACHE_CONFIG
+datasource = db.session.query(SqlaTable).filter_by(id=slc.datasource_id).one()
+original_cache_timeout = datasource.cache_timeout
+datasource.cache_timeout = None
+db.session.commit()
+# Get query context using the fixture
+query_context = get_query_context("birth_names")
query_context["form_data"] = slc.form_data
# Request chart for the first time
@@ -123,6 +128,10 @@ class TestCache(SupersetTestCase):
# should not exists in `cache`
assert cache_manager.cache.get(cached_result["cache_key"]) is None
+# reset datasource cache timeout
+datasource.cache_timeout = original_cache_timeout
+db.session.commit()
# reset cache config
app.config["DATA_CACHE_CONFIG"] = data_cache_config
app.config["CACHE_DEFAULT_TIMEOUT"] = cache_default_timeout
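The cache test above saves the datasource's `cache_timeout`, nulls it to force the fallback to `DATA_CACHE_CONFIG`, and restores the original value at the end. A minimal sketch of that save/restore pattern (the `Datasource` class here is a hypothetical stand-in, not Superset's model, and `try/finally` guards the restore even when assertions fail):

```python
class Datasource:
    """Hypothetical stand-in for the SqlaTable model used in the test."""

    def __init__(self, cache_timeout):
        self.cache_timeout = cache_timeout


ds = Datasource(cache_timeout=300)

# Save the original value, clear it so the code under test falls back
# to the app-level cache config, then restore it no matter what.
original_cache_timeout = ds.cache_timeout
ds.cache_timeout = None
try:
    assert ds.cache_timeout is None  # fallback path is exercised here
finally:
    ds.cache_timeout = original_cache_timeout

assert ds.cache_timeout == 300
```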


@@ -1077,9 +1077,7 @@ class TestChartApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCase):
"""
self.login(GAMMA_USERNAME)
chart_no_access = (
-db.session.query(Slice)
-.filter_by(slice_name="Girl Name Cloud")
-.one_or_none()
+db.session.query(Slice).filter_by(slice_name="Trends").one_or_none()
)
uri = f"api/v1/chart/{chart_no_access.id}"
rv = self.client.get(uri)
@@ -1935,7 +1933,7 @@ class TestChartApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCase):
@parameterized.expand(
[
-"Pivot Table v2", # Non-legacy charts
+"Pivot Table", # Non-legacy charts
],
)
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
@@ -2011,7 +2009,7 @@ class TestChartApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCase):
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
def test_warm_up_cache_error(self) -> None:
self.login(ADMIN_USERNAME)
-slc = self.get_slice("Pivot Table v2")
+slc = self.get_slice("Pivot Table")
with mock.patch.object(ChartDataCommand, "run") as mock_run:
mock_run.side_effect = ChartDataQueryFailedError(
@@ -2039,7 +2037,7 @@ class TestChartApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCase):
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
def test_warm_up_cache_no_query_context(self) -> None:
self.login(ADMIN_USERNAME)
-slc = self.get_slice("Pivot Table v2")
+slc = self.get_slice("Pivot Table")
with mock.patch.object(Slice, "get_query_context") as mock_get_query_context:
mock_get_query_context.return_value = None
@@ -2062,7 +2060,7 @@ class TestChartApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCase):
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
def test_warm_up_cache_no_datasource(self) -> None:
self.login(ADMIN_USERNAME)
-slc = self.get_slice("Top 10 Girl Name Share")
+slc = self.get_slice("Genders")
with mock.patch.object(
Slice,


@@ -426,7 +426,7 @@ class TestChartWarmUpCacheCommand(SupersetTestCase):
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
@pytest.mark.skip(reason="This test will be changed to use the api/v1/data")
def test_warm_up_cache(self):
-slc = self.get_slice("Top 10 Girl Name Share")
+slc = self.get_slice("Genders")
result = ChartWarmUpCacheCommand(slc.id, None, None).run()
assert result == {
"chart_id": slc.id,


@@ -227,7 +227,7 @@ class TestCore(SupersetTestCase):
def test_slice_data(self):
# slice data should have some required attributes
self.login(ADMIN_USERNAME)
-slc = self.get_slice(slice_name="Top 10 Girl Name Share")
+slc = self.get_slice(slice_name="Genders")
slc_data_attributes = slc.data.keys()
assert "changed_on" in slc_data_attributes
assert "modified" in slc_data_attributes
@@ -310,7 +310,7 @@ class TestCore(SupersetTestCase):
@pytest.mark.usefixtures("load_birth_names_dashboard_with_slices")
def test_warm_up_cache_error(self) -> None:
self.login(ADMIN_USERNAME)
-slc = self.get_slice("Pivot Table v2")
+slc = self.get_slice("Pivot Table")
with mock.patch.object(
ChartDataCommand,


@@ -1489,7 +1489,7 @@ class TestDashboardApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCas
"alpha2", "password", "Alpha", email="alpha2@superset.org"
)
existing_slice = (
-db.session.query(Slice).filter_by(slice_name="Girl Name Cloud").first()
+db.session.query(Slice).filter_by(slice_name="Participants").first()
)
dashboard = self.insert_dashboard(
"title", "slug1", [user_alpha1.id], slices=[existing_slice], published=True
@@ -1515,7 +1515,7 @@ class TestDashboardApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCas
"alpha2", "password", "Alpha", email="alpha2@superset.org"
)
existing_slice = (
-db.session.query(Slice).filter_by(slice_name="Girl Name Cloud").first()
+db.session.query(Slice).filter_by(slice_name="Participants").first()
)
dashboard_count = 4
@@ -1985,7 +1985,7 @@ class TestDashboardApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCas
admin = self.get_user("admin")
slices = []
slices.append(db.session.query(Slice).filter_by(slice_name="Trends").one())
-slices.append(db.session.query(Slice).filter_by(slice_name="Boys").one())
+slices.append(db.session.query(Slice).filter_by(slice_name="Genders").one())
# Insert dashboard with admin as owner
dashboard = self.insert_dashboard(
@@ -2016,7 +2016,7 @@ class TestDashboardApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCas
assert rv.status_code == 200
# Check that chart named Boys does not contain alpha 1 in its owners
-boys = db.session.query(Slice).filter_by(slice_name="Boys").one()
+boys = db.session.query(Slice).filter_by(slice_name="Genders").one()
assert user_alpha1 not in boys.owners
# Revert owners on slice
@@ -2223,7 +2223,7 @@ class TestDashboardApi(ApiOwnersTestCaseMixin, InsertChartMixin, SupersetTestCas
"alpha2", "password", "Alpha", email="alpha2@superset.org"
)
existing_slice = (
-db.session.query(Slice).filter_by(slice_name="Girl Name Cloud").first()
+db.session.query(Slice).filter_by(slice_name="Participants").first()
)
dashboard = self.insert_dashboard(
"title", "slug1", [user_alpha1.id], slices=[existing_slice], published=True


@@ -100,7 +100,7 @@ class TestDashboardRoleBasedSecurity(BaseTestDashboardSecurity):
self.create_user_with_roles(username, [new_role], should_create_roles=True)
slice = (
db.session.query(Slice) # noqa: F405
-.filter_by(slice_name="Girl Name Cloud")
+.filter_by(slice_name="Participants")
.one_or_none()
)
dashboard_to_access = create_dashboard_to_db(published=True, slices=[slice])
@@ -141,7 +141,7 @@ class TestDashboardRoleBasedSecurity(BaseTestDashboardSecurity):
slice = (
db.session.query(Slice) # noqa: F405
-.filter_by(slice_name="Girl Name Cloud")
+.filter_by(slice_name="Participants")
.one_or_none()
)
dashboard = create_dashboard_to_db(published=True, slices=[slice])
@@ -164,7 +164,7 @@ class TestDashboardRoleBasedSecurity(BaseTestDashboardSecurity):
slice = (
db.session.query(Slice) # noqa: F405
-.filter_by(slice_name="Girl Name Cloud")
+.filter_by(slice_name="Participants")
.one_or_none()
)
dashboard = create_dashboard_to_db(published=True, slices=[slice])
@@ -192,7 +192,7 @@ class TestDashboardRoleBasedSecurity(BaseTestDashboardSecurity):
slice = (
db.session.query(Slice) # noqa: F405
-.filter_by(slice_name="Girl Name Cloud")
+.filter_by(slice_name="Participants")
.one_or_none()
)
dashboard_to_access = create_dashboard_to_db(published=True, slices=[slice])


@@ -74,7 +74,7 @@ def _create_dashboards():
fetch_values_predicate="123 = 123",
)
-from superset.examples.birth_names import create_dashboard, create_slices
+from tests.fixtures.birth_names_helpers import create_dashboard, create_slices
slices, _ = create_slices(table)
dash = create_dashboard(slices)
@@ -93,7 +93,10 @@ def _create_table(
database=database,
fetch_values_predicate=fetch_values_predicate,
)
-from superset.examples.birth_names import _add_table_metrics, _set_table_metadata
+from tests.fixtures.birth_names_helpers import (
+    _add_table_metrics,
+    _set_table_metadata,
+)
_set_table_metadata(table, database)
_add_table_metrics(table)


@@ -105,7 +105,7 @@ def create_dashboard_for_loaded_data():
def _create_world_bank_slices(table: SqlaTable) -> list[Slice]:
-from superset.examples.world_bank import create_slices
+from tests.fixtures.world_bank_helpers import create_slices
slices = create_slices(table)
_commit_slices(slices)
@@ -123,7 +123,7 @@ def _commit_slices(slices: list[Slice]):
def _create_world_bank_dashboard(table: SqlaTable) -> Dashboard:
from superset.examples.helpers import update_slice_ids
-from superset.examples.world_bank import dashboard_positions
+from tests.fixtures.world_bank_helpers import dashboard_positions
pos = dashboard_positions
slices = update_slice_ids(pos)


@@ -630,7 +630,7 @@ class TestSqlaTableModel(SupersetTestCase):
.filter_by(
datasource_id=tbl.id,
datasource_type=tbl.type,
-slice_name="Pivot Table v2",
+slice_name="Pivot Table",
)
.first()
)