docs: cut 6.1.0 versions for docs, admin_docs, developer_docs, components

- Snapshot all four versioned docs sections at v6.1.0; master continues to
  serve as "Next" (lastVersion: current, banner: unreleased) so editing
  master keeps updating the canonical URLs
- Enable the previously-disabled components plugin and version it
- Rename stale "developer_portal" references to "developer_docs" across
  package.json scripts, manage-versions.mjs, theme files (DocVersionBadge,
  DocVersionBanner), DOCS_CLAUDE.md, and README.md (URL backward-compat
  redirect /developer_portal/* preserved)
- Add admin_docs version scripts; drop dead "tutorials" plugin id from
  the version badge
- Generalize fixVersionedImports in manage-versions.mjs to walk every
  section's snapshot and rewrite ../../src/ and ../../data/ imports,
  catching admin_docs and components files that previous version cuts
  would have broken
- Remove orphan files: developer_portal_versions.json,
  tutorials_versions.json, and stray empty versions.json files inside
  components/ and developer_docs/ content directories
Superset Dev
2026-05-02 11:53:56 -07:00
parent d23b0cad92
commit 752ebd47cb
1872 changed files with 72562 additions and 78 deletions


@@ -0,0 +1,8 @@
{
"label": "Databases",
"position": 1,
"link": {
"type": "doc",
"id": "databases/index"
}
}


@@ -0,0 +1,93 @@
---
title: Connecting to Databases
sidebar_label: Overview
sidebar_position: 1
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabaseIndex } from '@site/src/components/databases';
import databaseData from '@site/src/data/databases.json';
# Connecting to Databases
Superset does not ship with database drivers bundled. The main step in connecting
Superset to a database is to **install the proper database driver(s)** in your environment.
:::note
You'll need to install the packages required for the database you want to use as your metadata database,
as well as the packages needed to connect to the databases you want to access through Superset.
For information about setting up Superset's metadata database, please refer to
the installation documentation ([Docker Compose](/admin-docs/installation/docker-compose), [Kubernetes](/admin-docs/installation/kubernetes))
:::
## Supported Databases
Superset supports **80 databases** with varying levels of feature support.
Click on any database name to see detailed documentation including connection strings,
authentication methods, and configuration options.
<DatabaseIndex data={databaseData} />
## Installing Database Drivers
Superset requires a Python [DB-API database driver](https://peps.python.org/pep-0249/)
and a [SQLAlchemy dialect](https://docs.sqlalchemy.org/en/20/dialects/) to be installed for
each database engine you want to connect to.
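The dialect and driver are both encoded in the scheme portion of the SQLAlchemy URI. A quick sketch (the host and database below are placeholders, not real endpoints):

```python
# A SQLAlchemy URI names the dialect and, optionally, the DB-API driver
# in its scheme: "<dialect>+<driver>://...".
uri = "hive+pyhive://hive@warehouse.example.com:10000/default"

scheme = uri.split("://", 1)[0]
dialect, _, driver = scheme.partition("+")
print(dialect, driver)  # hive pyhive
```

When no `+<driver>` suffix is given, SQLAlchemy falls back to the dialect's default driver.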
### Installing Drivers in Docker
For Docker deployments, create a `requirements-local.txt` file in the `docker` directory:
```bash
# Create the requirements file
touch ./docker/requirements-local.txt
# Add your driver (e.g., for PostgreSQL)
echo "psycopg2-binary" >> ./docker/requirements-local.txt
```
Then restart your containers. The drivers will be installed automatically.
### Installing Drivers with pip
For non-Docker installations:
```bash
pip install <driver-package>
```
See individual database pages for the specific driver packages needed.
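Before wiring a driver into Superset, you can sanity-check that its module actually imports. A minimal sketch (`psycopg2` here is just the example package from the Docker section above):

```python
import importlib.util

def driver_available(module_name: str) -> bool:
    """Return True if the given DB-API driver module can be imported."""
    return importlib.util.find_spec(module_name) is not None

# sqlite3 ships with Python itself, so it is always present;
# psycopg2 is only available once you've pip-installed it.
print(driver_available("sqlite3"))  # True
print(driver_available("psycopg2"))
```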
## Connecting Through the UI
1. Go to **Settings → Data: Database Connections**
2. Click **+ DATABASE**
3. Select your database type or enter a SQLAlchemy URI
4. Click **Test Connection** to verify
5. Click **Connect** to save
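When entering a SQLAlchemy URI by hand, special characters in credentials must be URL-encoded (several database pages call this out, e.g. Athena's `s3://` → `s3%3A//`). A small sketch using only the standard library, with made-up credentials:

```python
from urllib.parse import quote_plus

# Hypothetical credentials; '@' and '/' would otherwise break URI parsing.
password = "p@ss/word"
uri = f"postgresql+psycopg2://superset:{quote_plus(password)}@db.example.com:5432/analytics"
print(uri)
# postgresql+psycopg2://superset:p%40ss%2Fword@db.example.com:5432/analytics
```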
## Contributing
To add or update database documentation, add a `metadata` attribute to your engine spec class in
`superset/db_engine_specs/`. Documentation is auto-generated from these metadata attributes.
See [METADATA_STATUS.md](https://github.com/apache/superset/blob/master/superset/db_engine_specs/METADATA_STATUS.md)
for the current status of database documentation and the [README](https://github.com/apache/superset/blob/master/superset/db_engine_specs/README.md) for the metadata schema.
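As a rough illustration of what such documentation metadata carries, here is a hypothetical entry whose field names mirror the generated `databaseInfo` objects on the database pages; the authoritative schema is in the README, and "Example DB" is invented for illustration:

```python
# Hypothetical documentation metadata; real definitions live on engine spec
# classes in superset/db_engine_specs/ and drive the auto-generated pages.
documentation = {
    "description": "Example DB is a hypothetical analytical database.",
    "homepage_url": "https://example.com/",
    "categories": ["Analytical Databases", "Open Source"],
    "pypi_packages": ["example-dialect"],
    "connection_string": "exampledb://{username}:{password}@{host}:{port}/{database}",
    "default_port": 5439,
}
print(sorted(documentation))
```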


@@ -0,0 +1,6 @@
{
"label": "Supported Databases",
"position": 2,
"collapsed": true,
"collapsible": true
}


@@ -0,0 +1,31 @@
---
title: Amazon Athena
sidebar_label: Amazon Athena
description: "Amazon Athena is an interactive query service for analyzing data in S3 using SQL."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.athena","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":false,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":30,"max_score":201,"documentation":{"description":"Amazon Athena is an interactive query service for analyzing data in S3 using SQL.","logo":"amazon-athena.jpg","homepage_url":"https://aws.amazon.com/athena/","categories":["Cloud - AWS","Query Engines","Proprietary"],"pypi_packages":["pyathena[pandas]"],"connection_string":"awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}","drivers":[{"name":"PyAthena 
(REST)","pypi_package":"pyathena[pandas]","connection_string":"awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}","is_recommended":true,"notes":"No Java required. URL-encode special characters (e.g., s3:// -> s3%3A//)."},{"name":"PyAthenaJDBC","pypi_package":"PyAthenaJDBC","connection_string":"awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}","is_recommended":false,"notes":"Requires Amazon Athena JDBC driver."}],"engine_parameters":[{"name":"IAM Role Assumption","description":"Assume a specific IAM role for queries","json":{"connect_args":{"role_arn":"<role arn>"}}}],"notes":"URL-encode special characters in s3_staging_dir (e.g., s3:// becomes s3%3A//).","category":"Cloud - AWS","custom_errors":[{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Please check your query for syntax errors at or near \"%(syntax_error)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"awsathena","engine_name":"Amazon Athena","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Amazon Athena" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Amazon DynamoDB
sidebar_label: Amazon DynamoDB
description: "Amazon DynamoDB is a serverless NoSQL database with SQL via PartiQL."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.dynamodb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":32,"max_score":201,"documentation":{"description":"Amazon DynamoDB is a serverless NoSQL database with SQL via PartiQL.","logo":"aws.png","homepage_url":"https://aws.amazon.com/dynamodb/","categories":["Cloud - AWS","Search & NoSQL","Proprietary"],"pypi_packages":["pydynamodb"],"connection_string":"dynamodb://{aws_access_key_id}:{aws_secret_access_key}@dynamodb.{region}.amazonaws.com:443?connector=superset","parameters":{"aws_access_key_id":"AWS access key ID","aws_secret_access_key":"AWS secret access key","region":"AWS region (e.g., us-east-1)"},"notes":"Uses PartiQL for SQL queries. 
Requires connector=superset parameter.","docs_url":"https://github.com/passren/PyDynamoDB","category":"Cloud - AWS"},"engine":"dynamodb","engine_name":"Amazon DynamoDB","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Amazon DynamoDB" database={databaseInfo} />

File diff suppressed because one or more lines are too long


@@ -0,0 +1,31 @@
---
title: Apache Doris
sidebar_label: Apache Doris
description: "Apache Doris is a high-performance real-time analytical database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.doris","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":64,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":79,"max_score":201,"documentation":{"description":"Apache Doris is a high-performance real-time analytical database.","logo":"doris.png","homepage_url":"https://doris.apache.org/","categories":["Apache Projects","Analytical Databases","Open Source"],"pypi_packages":["pydoris"],"connection_string":"doris://{username}:{password}@{host}:{port}/{catalog}.{database}","default_port":9030,"parameters":{"username":"User name","password":"Password","host":"Doris FE Host","port":"Doris FE port","catalog":"Catalog name","database":"Database name"},"category":"Apache Projects","custom_errors":[{"regex_name":"CONNECTION_ACCESS_DENIED_REGEX","message_template":"Either the username \"%(username)s\" or the password is 
incorrect.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015],"invalid_fields":["username","password"]},{"regex_name":"CONNECTION_INVALID_HOSTNAME_REGEX","message_template":"Unknown Doris server host \"%(hostname)s\".","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007],"invalid_fields":["host"]},{"regex_name":"CONNECTION_HOST_DOWN_REGEX","message_template":"The host \"%(hostname)s\" might be down and can't be reached.","error_type":"CONNECTION_HOST_DOWN_ERROR","category":"Connection","description":"Host unreachable","issue_codes":[1009],"invalid_fields":["host","port"]},{"regex_name":"CONNECTION_UNKNOWN_DATABASE_REGEX","message_template":"Unable to connect to database \"%(database)s\".","error_type":"CONNECTION_UNKNOWN_DATABASE_ERROR","category":"Connection","description":"Unknown database","issue_codes":[1015],"invalid_fields":["database"]},{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Please check your query for syntax errors near \"%(server_error)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"pydoris","engine_name":"Apache Doris","engine_aliases":["doris"],"default_driver":"pydoris","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Apache Doris" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Drill
sidebar_label: Apache Drill
description: "Apache Drill is a schema-free SQL query engine for Hadoop and NoSQL."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.drill","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":50,"max_score":201,"documentation":{"description":"Apache Drill is a schema-free SQL query engine for Hadoop and NoSQL.","logo":"apache-drill.png","homepage_url":"https://drill.apache.org/","categories":["Apache Projects","Query Engines","Open Source"],"pypi_packages":["sqlalchemy-drill"],"connection_string":"drill+sadrill://{username}:{password}@{host}:{port}/{storage_plugin}?use_ssl=True","default_port":8047,"drivers":[{"name":"SQLAlchemy 
(REST)","pypi_package":"sqlalchemy-drill","connection_string":"drill+sadrill://{username}:{password}@{host}:{port}/{storage_plugin}?use_ssl=True","is_recommended":true},{"name":"JDBC","pypi_package":"sqlalchemy-drill","connection_string":"drill+jdbc://{username}:{password}@{host}:{port}","is_recommended":false,"notes":"Requires Drill JDBC Driver installation.","docs_url":"https://drill.apache.org/docs/using-the-jdbc-driver/"},{"name":"ODBC","pypi_package":"sqlalchemy-drill","is_recommended":false,"notes":"See Apache Drill documentation for ODBC setup.","docs_url":"https://drill.apache.org/docs/installing-the-driver-on-linux/"}],"connection_examples":[{"description":"Local embedded mode","connection_string":"drill+sadrill://localhost:8047/dfs?use_ssl=False"}],"category":"Apache Projects"},"engine":"drill","engine_name":"Apache Drill","engine_aliases":[],"default_driver":"sadrill","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache Drill" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Druid
sidebar_label: Apache Druid
description: "Apache Druid is a high performance real-time analytics database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.druid","limit_method":1,"limit_clause":true,"joins":false,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":47,"max_score":201,"documentation":{"description":"Apache Druid is a high performance real-time analytics database.","logo":"druid.png","homepage_url":"https://druid.apache.org/","categories":["Apache Projects","Time Series Databases","Open Source"],"pypi_packages":["pydruid"],"connection_string":"druid://{username}:{password}@{host}:{port}/druid/v2/sql","default_port":9088,"parameters":{"username":"Database username","password":"Database password","host":"IP address or URL of the host","port":"Default 9088"},"ssl_configuration":{"custom_certificate":"Add certificate in Root Certificate field. 
pydruid will automatically use https.","disable_ssl_verification":{"engine_params":{"connect_args":{"scheme":"https","ssl_verify_cert":false}}}},"advanced_features":{"aggregations":"Define common aggregations in datasource edit view under List Druid Column tab.","post_aggregations":"Create metrics with postagg as Metric Type and provide valid JSON post-aggregation definition."},"notes":"A native Druid connector ships with Superset (behind DRUID_IS_ACTIVE flag) but SQLAlchemy connector via pydruid is preferred.","compatible_databases":[{"name":"Imply","description":"Imply is a fully-managed cloud platform and enterprise distribution built on Apache Druid. It provides real-time analytics with enterprise security and support.","logo":"imply.png","homepage_url":"https://imply.io/","categories":["Time Series Databases","Cloud Data Warehouses","Hosted Open Source"],"pypi_packages":["pydruid"],"connection_string":"druid://{username}:{password}@{host}/druid/v2/sql","docs_url":"https://docs.imply.io/"}],"category":"Apache Projects"},"engine":"druid","engine_name":"Apache Druid","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache Druid" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Hive
sidebar_label: Apache Hive
description: "Apache Hive is a data warehouse infrastructure built on Hadoop."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.hive","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":false,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":767,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":true,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":true,"expand_data":true,"query_cost_estimation":true,"sql_validation":false,"score":140,"max_score":201,"documentation":{"description":"Apache Hive is a data warehouse infrastructure built on Hadoop.","logo":"apache-hive.svg","homepage_url":"https://hive.apache.org/","categories":["Apache Projects","Query Engines","Open Source"],"pypi_packages":["pyhive"],"connection_string":"hive://hive@{hostname}:{port}/{database}","default_port":10000,"category":"Apache Projects"},"engine":"hive","engine_name":"Apache Hive","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Apache Hive" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Impala
sidebar_label: Apache Impala
description: "Apache Impala is an open-source massively parallel processing SQL query engine."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":false,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.impala","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":37,"max_score":201,"documentation":{"description":"Apache Impala is an open-source massively parallel processing SQL query engine.","logo":"apache-impala.png","homepage_url":"https://impala.apache.org/","categories":["Apache Projects","Query Engines","Open Source"],"pypi_packages":["impyla"],"connection_string":"impala://{hostname}:{port}/{database}","default_port":21050,"category":"Apache Projects"},"engine":"impala","engine_name":"Apache Impala","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache Impala" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache IoTDB
sidebar_label: Apache IoTDB
description: "Apache IoTDB is a time series database designed for IoT data, with efficient storage and query capabilities for massive time series data."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":false,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":false,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":false,"SIX_HOURS":false,"DAY":false,"WEEK":false,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":false,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":false},"module":"superset.db_engine_specs.iotdb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":20,"max_score":201,"documentation":{"description":"Apache IoTDB is a time series database designed for IoT data, with efficient storage and query capabilities for massive time series data.","logo":"apache-iotdb.svg","homepage_url":"https://iotdb.apache.org/","categories":["Apache Projects","Time Series Databases","Open Source"],"pypi_packages":["apache-iotdb"],"connection_string":"iotdb://{username}:{password}@{hostname}:{port}","default_port":6667,"parameters":{"username":"Database username (default: root)","password":"Database password (default: root)","hostname":"IP address or hostname","port":"Default 6667"},"notes":"The IoTDB SQLAlchemy dialect was written to integrate with Apache Superset. IoTDB uses a hierarchical data model, which is reorganized into a relational model for SQL queries.","category":"Apache Projects"},"engine":"iotdb","engine_name":"Apache IoTDB","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache IoTDB" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Kylin
sidebar_label: Apache Kylin
description: "Apache Kylin is an open-source OLAP engine for big data."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.kylin","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Apache Kylin is an open-source OLAP engine for big data.","logo":"apache-kylin.png","homepage_url":"https://kylin.apache.org/","categories":["Apache Projects","Analytical Databases","Open Source"],"pypi_packages":["kylinpy"],"connection_string":"kylin://{username}:{password}@{hostname}:{port}/{project}?{param1}={value1}&{param2}={value2}","default_port":7070,"category":"Apache Projects"},"engine":"kylin","engine_name":"Apache Kylin","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache Kylin" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Phoenix
sidebar_label: Apache Phoenix
description: "Apache Phoenix is a relational database layer over Apache HBase, providing low-latency SQL queries over HBase data."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.phoenix","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Apache Phoenix is a relational database layer over Apache HBase, providing low-latency SQL queries over HBase data.","logo":"apache-phoenix.png","homepage_url":"https://phoenix.apache.org/","categories":["Apache Projects","Analytical Databases","Open Source"],"pypi_packages":["phoenixdb"],"connection_string":"phoenix://{hostname}:{port}/","default_port":8765,"notes":"Phoenix provides a SQL interface to Apache HBase. The phoenixdb driver connects via the Phoenix Query Server and supports a subset of SQLAlchemy.","category":"Apache Projects"},"engine":"phoenix","engine_name":"Apache Phoenix","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache Phoenix" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Pinot
sidebar_label: Apache Pinot
description: "Apache Pinot is a real-time distributed OLAP datastore."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.pinot","limit_method":1,"limit_clause":true,"joins":false,"subqueries":false,"alias_in_select":false,"alias_in_orderby":false,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":32,"max_score":201,"documentation":{"description":"Apache Pinot is a real-time distributed OLAP datastore.","logo":"apache-pinot.svg","homepage_url":"https://pinot.apache.org/","categories":["Apache Projects","Time Series Databases","Open Source"],"pypi_packages":["pinotdb"],"connection_string":"pinot+http://{broker_host}:{broker_port}/query?controller=http://{controller_host}:{controller_port}/","default_port":8099,"connection_examples":[{"description":"With authentication","connection_string":"pinot://{username}:{password}@{broker_host}:{broker_port}/query/sql?controller=http://{controller_host}:{controller_port}/verify_ssl=true"}],"engine_parameters":[{"name":"Multi-stage Query Engine","description":"Enable for Explore view, joins, window functions","json":{"connect_args":{"use_multistage_engine":"true"}},"docs_url":"https://docs.pinot.apache.org/reference/multi-stage-engine"}],"category":"Apache Projects"},"engine":"pinot","engine_name":"Apache Pinot","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache Pinot" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Solr
sidebar_label: Apache Solr
description: "Apache Solr is an open-source enterprise search platform."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":false,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":false,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":false,"SIX_HOURS":false,"DAY":false,"WEEK":false,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":false,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":false},"module":"superset.db_engine_specs.solr","limit_method":1,"limit_clause":true,"joins":false,"subqueries":false,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":20,"max_score":201,"documentation":{"description":"Apache Solr is an open-source enterprise search platform.","logo":"apache-solr.png","homepage_url":"https://solr.apache.org/","categories":["Apache Projects","Search & NoSQL","Open Source"],"pypi_packages":["sqlalchemy-solr"],"connection_string":"solr://{username}:{password}@{host}:{port}/{server_path}/{collection}[/?use_ssl=true|false]","default_port":8983,"category":"Apache Projects"},"engine":"solr","engine_name":"Apache Solr","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Apache Solr" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Apache Spark SQL
sidebar_label: Apache Spark SQL
description: "Apache Spark SQL is a module for structured data processing."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.spark","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":false,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":767,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":true,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":true,"expand_data":true,"query_cost_estimation":true,"sql_validation":false,"score":140,"max_score":201,"documentation":{"description":"Apache Spark SQL is a module for structured data processing.","logo":"apache-spark.png","homepage_url":"https://spark.apache.org/sql/","categories":["Apache Projects","Query Engines","Open Source"],"pypi_packages":["pyhive"],"connection_string":"hive://hive@{hostname}:{port}/{database}","default_port":10000,"category":"Apache Projects"},"engine":"hive","engine_name":"Apache Spark SQL","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Apache Spark SQL" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Arc
sidebar_label: Arc
description: "Arc is a data platform with multiple connection options."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.arc","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Arc is a data platform with multiple connection options.","categories":["Other Databases","Proprietary"],"pypi_packages":["arc-superset-arrow"],"connection_string":"arc+arrow://{api_key}@{hostname}:{port}/{database}","parameters":{"api_key":"Arc API key","hostname":"Arc hostname","port":"Arc port","database":"Database name"},"drivers":[{"name":"Apache Arrow (Recommended)","pypi_package":"arc-superset-arrow","connection_string":"arc+arrow://{api_key}@{hostname}:{port}/{database}","is_recommended":true,"notes":"Recommended for production. Provides 3-5x better performance using Apache Arrow IPC."},{"name":"JSON","pypi_package":"arc-superset-dialect","connection_string":"arc+json://{api_key}@{hostname}:{port}/{database}","is_recommended":false}],"notes":"Arc supports multiple databases (schemas) within a single instance. Each Arc database appears as a schema in SQL Lab.","category":"Other Databases"},"engine":"arc","engine_name":"Arc","engine_aliases":[],"default_driver":"arrow","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Arc" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Ascend
sidebar_label: Ascend
description: "Ascend.io is a data automation platform for building data pipelines."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.ascend","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":38,"max_score":201,"documentation":{"description":"Ascend.io is a data automation platform for building data pipelines.","logo":"ascend.webp","homepage_url":"https://www.ascend.io/","categories":["Cloud Data Warehouses","Analytical Databases","Hosted Open Source"],"pypi_packages":["impyla"],"connection_string":"ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true","category":"Other Databases"},"engine":"ascend","engine_name":"Ascend","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Ascend" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Aurora MySQL (Data API)
sidebar_label: Aurora MySQL (Data API)
description: "MySQL is a popular open-source relational database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.aurora","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":64,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":59,"max_score":201,"documentation":{"description":"MySQL is a popular open-source relational database.","logo":"mysql.png","homepage_url":"https://www.mysql.com/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["mysqlclient"],"connection_string":"mysql://{username}:{password}@{host}/{database}","default_port":3306,"parameters":{"username":"Database username","password":"Database password","host":"localhost, 127.0.0.1, IP address, or hostname","database":"Database name"},"host_examples":[{"platform":"Localhost","host":"localhost or 127.0.0.1"},{"platform":"Docker on Linux","host":"172.18.0.1"},{"platform":"Docker on macOS","host":"docker.for.mac.host.internal"},{"platform":"On-premise","host":"IP address or hostname"}],"drivers":[{"name":"mysqlclient","pypi_package":"mysqlclient","connection_string":"mysql://{username}:{password}@{host}/{database}","is_recommended":true,"notes":"Recommended driver. May fail with caching_sha2_password auth."},{"name":"mysql-connector-python","pypi_package":"mysql-connector-python","connection_string":"mysql+mysqlconnector://{username}:{password}@{host}/{database}","is_recommended":false,"notes":"Required for newer MySQL databases using caching_sha2_password authentication."}],"compatible_databases":[{"name":"MariaDB","description":"MariaDB is a community-developed fork of MySQL, fully compatible with MySQL.","logo":"mariadb.png","homepage_url":"https://mariadb.org/","pypi_packages":["mysqlclient"],"connection_string":"mysql://{username}:{password}@{host}:{port}/{database}","categories":["Open Source"]},{"name":"Amazon Aurora MySQL","description":"Amazon Aurora MySQL is a fully managed, MySQL-compatible relational database with up to 5x the throughput of standard MySQL.","logo":"aws-aurora.jpg","homepage_url":"https://aws.amazon.com/rds/aurora/","pypi_packages":["sqlalchemy-aurora-data-api"],"connection_string":"mysql+auroradataapi://{aws_access_id}:{aws_secret_access_key}@/{database_name}?aurora_cluster_arn={aurora_cluster_arn}&secret_arn={secret_arn}&region_name={region_name}","parameters":{"aws_access_id":"AWS Access Key ID","aws_secret_access_key":"AWS Secret Access Key","database_name":"Database name","aurora_cluster_arn":"Aurora cluster ARN","secret_arn":"Secrets Manager ARN for credentials","region_name":"AWS region (e.g., us-east-1)"},"notes":"Uses the Data API for serverless access. Standard MySQL connections also work with mysqlclient.","categories":["Cloud - AWS","Hosted Open Source"]}],"category":"Traditional RDBMS"},"engine":"mysql","engine_name":"Aurora MySQL (Data API)","engine_aliases":[],"default_driver":"auroradataapi","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Aurora MySQL (Data API)" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Aurora MySQL
sidebar_label: Aurora MySQL
description: "MySQL is a popular open-source relational database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.aurora","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":64,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":59,"max_score":201,"documentation":{"description":"MySQL is a popular open-source relational database.","logo":"mysql.png","homepage_url":"https://www.mysql.com/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["mysqlclient"],"connection_string":"mysql://{username}:{password}@{host}/{database}","default_port":3306,"parameters":{"username":"Database username","password":"Database password","host":"localhost, 127.0.0.1, IP address, or hostname","database":"Database name"},"host_examples":[{"platform":"Localhost","host":"localhost or 127.0.0.1"},{"platform":"Docker on Linux","host":"172.18.0.1"},{"platform":"Docker on macOS","host":"docker.for.mac.host.internal"},{"platform":"On-premise","host":"IP address or hostname"}],"drivers":[{"name":"mysqlclient","pypi_package":"mysqlclient","connection_string":"mysql://{username}:{password}@{host}/{database}","is_recommended":true,"notes":"Recommended driver. May fail with caching_sha2_password auth."},{"name":"mysql-connector-python","pypi_package":"mysql-connector-python","connection_string":"mysql+mysqlconnector://{username}:{password}@{host}/{database}","is_recommended":false,"notes":"Required for newer MySQL databases using caching_sha2_password authentication."}],"compatible_databases":[{"name":"MariaDB","description":"MariaDB is a community-developed fork of MySQL, fully compatible with MySQL.","logo":"mariadb.png","homepage_url":"https://mariadb.org/","pypi_packages":["mysqlclient"],"connection_string":"mysql://{username}:{password}@{host}:{port}/{database}","categories":["Open Source"]},{"name":"Amazon Aurora MySQL","description":"Amazon Aurora MySQL is a fully managed, MySQL-compatible relational database with up to 5x the throughput of standard MySQL.","logo":"aws-aurora.jpg","homepage_url":"https://aws.amazon.com/rds/aurora/","pypi_packages":["sqlalchemy-aurora-data-api"],"connection_string":"mysql+auroradataapi://{aws_access_id}:{aws_secret_access_key}@/{database_name}?aurora_cluster_arn={aurora_cluster_arn}&secret_arn={secret_arn}&region_name={region_name}","parameters":{"aws_access_id":"AWS Access Key ID","aws_secret_access_key":"AWS Secret Access Key","database_name":"Database name","aurora_cluster_arn":"Aurora cluster ARN","secret_arn":"Secrets Manager ARN for credentials","region_name":"AWS region (e.g., us-east-1)"},"notes":"Uses the Data API for serverless access. Standard MySQL connections also work with mysqlclient.","categories":["Cloud - AWS","Hosted Open Source"]}],"category":"Traditional RDBMS"},"engine":"mysql","engine_name":"Aurora MySQL","engine_aliases":[],"default_driver":"mysqldb","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Aurora MySQL" database={databaseInfo} />
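The `connection_string` template above can be filled in programmatically. A minimal sketch using only the standard library (the username, password, host, and database names are placeholder values); credentials are percent-encoded so special characters do not break the SQLAlchemy URI:

```python
from urllib.parse import quote_plus

def mysql_uri(username, password, host, database, port=3306):
    # Percent-encode credentials so characters like '@' and ':'
    # cannot be mistaken for URI delimiters.
    return (
        f"mysql://{quote_plus(username)}:{quote_plus(password)}"
        f"@{host}:{port}/{database}"
    )

uri = mysql_uri("superset", "p@ss:word", "localhost", "examples")
print(uri)  # mysql://superset:p%40ss%3Aword@localhost:3306/examples
```

The same helper works for MariaDB, which uses an identical `mysql://` URI scheme.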



@@ -0,0 +1,31 @@
---
title: Azure Data Explorer (KQL)
sidebar_label: Azure Data Explorer (KQL)
description: "Documentation for Azure Data Explorer (KQL) database connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.kusto","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":false,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":40,"max_score":201,"documentation":{"pypi_packages":[],"connection_string":"engine+driver://user:password@host:port/dbname[?key=value&key=value...]","category":"Cloud - Azure"},"engine":"kustokql","engine_name":"Azure Data Explorer (KQL)","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Azure Data Explorer (KQL)" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Azure Data Explorer
sidebar_label: Azure Data Explorer
description: "Azure Data Explorer (Kusto) is a fast, fully managed data analytics service from Microsoft Azure. Query data using SQL or native KQL syntax."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":false,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.kusto","limit_method":2,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":false,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":54,"max_score":201,"documentation":{"description":"Azure Data Explorer (Kusto) is a fast, fully managed data analytics service from Microsoft Azure. 
Query data using SQL or native KQL syntax.","logo":"kusto.png","homepage_url":"https://azure.microsoft.com/en-us/products/data-explorer/","categories":["Cloud - Azure","Analytical Databases","Proprietary"],"pypi_packages":["sqlalchemy-kusto"],"connection_string":"kustosql+https://{cluster}.kusto.windows.net/{database}?msi=False&azure_ad_client_id={client_id}&azure_ad_client_secret={client_secret}&azure_ad_tenant_id={tenant_id}","parameters":{"cluster":"Azure Data Explorer cluster name","database":"Database name","client_id":"Azure AD application (client) ID","client_secret":"Azure AD application secret","tenant_id":"Azure AD tenant ID"},"drivers":[{"name":"SQL Interface (Recommended)","pypi_package":"sqlalchemy-kusto","connection_string":"kustosql+https://{cluster}.kusto.windows.net/{database}?msi=False&azure_ad_client_id={client_id}&azure_ad_client_secret={client_secret}&azure_ad_tenant_id={tenant_id}","is_recommended":true,"notes":"Use familiar SQL syntax to query Azure Data Explorer."},{"name":"KQL (Kusto Query Language)","pypi_package":"sqlalchemy-kusto","connection_string":"kustokql+https://{cluster}.kusto.windows.net/{database}?msi=False&azure_ad_client_id={client_id}&azure_ad_client_secret={client_secret}&azure_ad_tenant_id={tenant_id}","is_recommended":false,"notes":"Use native Kusto Query Language for advanced analytics."}],"category":"Cloud - Azure"},"engine":"kustosql","engine_name":"Azure Data Explorer","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Azure Data Explorer" database={databaseInfo} />
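The SQL-interface connection string above carries the Azure AD credentials as query parameters. A minimal sketch of assembling it (cluster, database, and credential values are placeholders), urlencoding the parameters so secrets survive special characters:

```python
from urllib.parse import urlencode

def kusto_sql_uri(cluster, database, client_id, client_secret, tenant_id):
    # Query parameters are urlencoded; dict insertion order is preserved,
    # matching the template shown in the connection string above.
    params = urlencode({
        "msi": "False",
        "azure_ad_client_id": client_id,
        "azure_ad_client_secret": client_secret,
        "azure_ad_tenant_id": tenant_id,
    })
    return f"kustosql+https://{cluster}.kusto.windows.net/{database}?{params}"

print(kusto_sql_uri("mycluster", "mydb", "app-id", "s3cret", "tenant-id"))
```

Swapping the scheme to `kustokql+https` yields the native-KQL variant listed in the drivers table.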


@@ -0,0 +1,31 @@
---
title: Azure Synapse
sidebar_label: Azure Synapse
description: "Azure Synapse Analytics is a cloud-based enterprise data warehouse from Microsoft that combines big data and data warehousing."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.mssql","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":false,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":44,"max_score":201,"documentation":{"description":"Azure Synapse Analytics is a cloud-based enterprise data warehouse from Microsoft that combines big data and data warehousing.","logo":"azure.svg","homepage_url":"https://azure.microsoft.com/en-us/products/synapse-analytics/","categories":["Cloud Data Warehouses","Analytical Databases","Proprietary"],"pypi_packages":["pymssql"],"connection_string":"mssql+pymssql://{username}@{server}:{password}@{server}.database.windows.net:1433/{database}","category":"Cloud - Azure","custom_errors":[{"regex_name":"CONNECTION_ACCESS_DENIED_REGEX","message_template":"Either the username \"%(username)s\", password, or database name \"%(database)s\" is 
incorrect.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015]},{"regex_name":"CONNECTION_INVALID_HOSTNAME_REGEX","message_template":"The hostname \"%(hostname)s\" cannot be resolved.","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007]},{"regex_name":"CONNECTION_PORT_CLOSED_REGEX","message_template":"Port %(port)s on hostname \"%(hostname)s\" refused the connection.","error_type":"CONNECTION_PORT_CLOSED_ERROR","category":"Connection","description":"Port closed or refused","issue_codes":[1008]},{"regex_name":"CONNECTION_HOST_DOWN_REGEX","message_template":"The host \"%(hostname)s\" might be down, and can't be reached on port %(port)s.","error_type":"CONNECTION_HOST_DOWN_ERROR","category":"Connection","description":"Host unreachable","issue_codes":[1009]}]},"engine":"mssql","engine_name":"Azure Synapse","engine_aliases":[],"default_driver":"pyodbc","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Azure Synapse" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: base
sidebar_label: base
description: "Documentation for base database connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":false,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.presto","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":true,"expand_data":false,"query_cost_estimation":true,"sql_validation":false,"score":109,"max_score":201,"documentation":{"pypi_packages":[],"connection_string":"engine+driver://user:password@host:port/dbname[?key=value&key=value...]","category":"Other Databases","custom_errors":[{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"We can't seem to resolve the column \"%(column_name)s\" at line %(location)s.","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]},{"regex_name":"TABLE_DOES_NOT_EXIST_REGEX","message_template":"The table \"%(table_name)s\" does not exist. 
A valid table must be used to run this query.","error_type":"TABLE_DOES_NOT_EXIST_ERROR","category":"Query","description":"Table not found","issue_codes":[1003,1005]},{"regex_name":"SCHEMA_DOES_NOT_EXIST_REGEX","message_template":"The schema \"%(schema_name)s\" does not exist. A valid schema must be used to run this query.","error_type":"SCHEMA_DOES_NOT_EXIST_ERROR","category":"Query","description":"Schema not found","issue_codes":[1003,1016]},{"regex_name":"CONNECTION_ACCESS_DENIED_REGEX","message_template":"Either the username \"%(username)s\" or the password is incorrect.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015]},{"regex_name":"CONNECTION_INVALID_HOSTNAME_REGEX","message_template":"The hostname \"%(hostname)s\" cannot be resolved.","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007]},{"regex_name":"CONNECTION_HOST_DOWN_REGEX","message_template":"The host \"%(hostname)s\" might be down, and can't be reached on port %(port)s.","error_type":"CONNECTION_HOST_DOWN_ERROR","category":"Connection","description":"Host unreachable","issue_codes":[1009]},{"regex_name":"CONNECTION_PORT_CLOSED_REGEX","message_template":"Port %(port)s on hostname \"%(hostname)s\" refused the connection.","error_type":"CONNECTION_PORT_CLOSED_ERROR","category":"Connection","description":"Port closed or refused","issue_codes":[1008]},{"regex_name":"CONNECTION_UNKNOWN_DATABASE_ERROR","message_template":"Unable to connect to catalog named \"%(catalog_name)s\".","error_type":"CONNECTION_UNKNOWN_DATABASE_ERROR","category":"Connection","description":"Unknown database","issue_codes":[1015]}]},"engine":"base","engine_name":"base","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="base" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: ClickHouse (sqlalchemy)
sidebar_label: ClickHouse (sqlalchemy)
description: "Documentation for ClickHouse (sqlalchemy) database connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":false,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.clickhouse","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":false,"file_upload":false,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":51,"max_score":201,"documentation":{"pypi_packages":[],"connection_string":"engine+driver://user:password@host:port/dbname[?key=value&key=value...]","category":"Analytical Databases"},"engine":"clickhouse","engine_name":"ClickHouse (sqlalchemy)","engine_aliases":[],"default_driver":null,"supports_file_upload":false,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="ClickHouse (sqlalchemy)" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: ClickHouse
sidebar_label: ClickHouse
description: "ClickHouse is an open-source column-oriented database for real-time analytics using SQL. It's known for extremely fast query performance on large datasets."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":false,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.clickhouse","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":false,"file_upload":false,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":61,"max_score":201,"documentation":{"description":"ClickHouse is an open-source column-oriented database for real-time analytics using SQL. 
It's known for extremely fast query performance on large datasets.","logo":"clickhouse.png","homepage_url":"https://clickhouse.com/","categories":["Analytical Databases","Open Source"],"pypi_packages":["clickhouse-connect>=0.13.0"],"connection_string":"clickhousedb://{username}:{password}@{host}:{port}/{database}","default_port":8123,"drivers":[{"name":"clickhouse-connect (Recommended)","pypi_package":"clickhouse-connect>=0.13.0","connection_string":"clickhousedb://{username}:{password}@{host}:{port}/{database}","is_recommended":true,"notes":"Official ClickHouse Python driver with native protocol support."},{"name":"clickhouse-sqlalchemy (Legacy)","pypi_package":"clickhouse-sqlalchemy","connection_string":"clickhouse://{username}:{password}@{host}:{port}/{database}","is_recommended":false,"notes":"Older driver using HTTP interface. Use clickhouse-connect for new deployments."}],"connection_examples":[{"description":"Altinity Cloud","connection_string":"clickhousedb://demo:demo@github.demo.trial.altinity.cloud/default?secure=true"},{"description":"Local (no auth, no SSL)","connection_string":"clickhousedb://localhost/default"}],"install_instructions":"echo \"clickhouse-connect>=0.13.0\" >> ./docker/requirements-local.txt","compatible_databases":[{"name":"ClickHouse Cloud","description":"ClickHouse Cloud is the official fully-managed cloud service for ClickHouse. 
It provides automatic scaling, built-in backups, and enterprise security features.","logo":"clickhouse.png","homepage_url":"https://clickhouse.cloud/","categories":["Analytical Databases","Cloud Data Warehouses","Hosted Open Source"],"pypi_packages":["clickhouse-connect>=0.13.0"],"connection_string":"clickhousedb://{username}:{password}@{host}:8443/{database}?secure=true","parameters":{"username":"ClickHouse Cloud username","password":"ClickHouse Cloud password","host":"Your ClickHouse Cloud hostname","database":"Database name (default)"},"docs_url":"https://clickhouse.com/docs/en/cloud"},{"name":"Altinity.Cloud","description":"Altinity.Cloud is a managed ClickHouse service providing Kubernetes-native deployments with enterprise support.","logo":"altinity.png","homepage_url":"https://altinity.cloud/","categories":["Analytical Databases","Cloud Data Warehouses","Hosted Open Source"],"pypi_packages":["clickhouse-connect>=0.13.0"],"connection_string":"clickhousedb://{username}:{password}@{host}/{database}?secure=true","docs_url":"https://docs.altinity.com/"}],"category":"Analytical Databases"},"engine":"clickhousedb","engine_name":"ClickHouse","engine_aliases":[],"default_driver":"connect","supports_file_upload":false,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="ClickHouse" database={databaseInfo} />
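The connection examples above differ only in host, port, and the `?secure=true` flag. A minimal sketch of building the `clickhousedb://` URI for the recommended clickhouse-connect driver (credentials and hostnames are placeholders taken from the examples):

```python
def clickhouse_uri(username, password, host, database,
                   port=8123, secure=False):
    # Default HTTP port is 8123; TLS endpoints such as ClickHouse Cloud
    # typically use 8443 and require the secure=true query flag.
    suffix = "?secure=true" if secure else ""
    return f"clickhousedb://{username}:{password}@{host}:{port}/{database}{suffix}"

# Altinity Cloud demo endpoint from the connection examples above:
print(clickhouse_uri("demo", "demo", "github.demo.trial.altinity.cloud",
                     "default", port=8443, secure=True))
```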


@@ -0,0 +1,31 @@
---
title: Cloudflare D1
sidebar_label: Cloudflare D1
description: "Cloudflare D1 is a serverless SQLite database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":true,"YEAR":true},"module":"superset.db_engine_specs.d1","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":true,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":41,"max_score":201,"documentation":{"description":"Cloudflare D1 is a serverless SQLite database.","logo":"cloudflare.png","homepage_url":"https://developers.cloudflare.com/d1/","categories":["Cloud Data Warehouses","Traditional RDBMS","Hosted Open Source"],"pypi_packages":["superset-engine-d1"],"connection_string":"d1://{cloudflare_account_id}:{cloudflare_api_token}@{cloudflare_d1_database_id}","parameters":{"cloudflare_account_id":"Cloudflare account ID","cloudflare_api_token":"Cloudflare API token","cloudflare_d1_database_id":"D1 database ID"},"install_instructions":"pip install superset-engine-d1","category":"Other Databases"},"engine":"d1","engine_name":"Cloudflare 
D1","engine_aliases":[],"default_driver":"d1","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Cloudflare D1" database={databaseInfo} />
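The D1 URI packs the three Cloudflare identifiers into the authority section rather than using a hostname. A minimal sketch (the account ID, API token, and database ID are placeholders obtained from the Cloudflare dashboard):

```python
def d1_uri(account_id, api_token, database_id):
    # d1://{cloudflare_account_id}:{cloudflare_api_token}@{cloudflare_d1_database_id}
    # per the connection_string template above; there is no host or port.
    return f"d1://{account_id}:{api_token}@{database_id}"

print(d1_uri("acct-123", "token-abc", "db-uuid"))  # d1://acct-123:token-abc@db-uuid
```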


@@ -0,0 +1,31 @@
---
title: CockroachDB
sidebar_label: CockroachDB
description: "CockroachDB is a distributed SQL database built for cloud applications."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.cockroachdb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":63,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":true,"sql_validation":false,"score":94,"max_score":201,"documentation":{"description":"CockroachDB is a distributed SQL database built for cloud applications.","logo":"cockroachdb.png","homepage_url":"https://www.cockroachlabs.com/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["cockroachdb"],"connection_string":"cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable","default_port":26257,"docs_url":"https://github.com/cockroachdb/sqlalchemy-cockroachdb","category":"Other Databases"},"engine":"cockroachdb","engine_name":"CockroachDB","engine_aliases":["postgres"],"default_driver":"psycopg2","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="CockroachDB" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Couchbase
sidebar_label: Couchbase
description: "Couchbase is a distributed NoSQL document database with SQL++ support."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":false,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.couchbase","limit_method":1,"limit_clause":true,"joins":false,"subqueries":false,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":27,"max_score":201,"documentation":{"description":"Couchbase is a distributed NoSQL document database with SQL++ support.","logo":"couchbase.svg","homepage_url":"https://www.couchbase.com/","categories":["Search & NoSQL","Open Source"],"pypi_packages":["couchbase-sqlalchemy"],"connection_string":"couchbase://{username}:{password}@{host}:{port}?ssl=true","default_port":8091,"parameters":{"username":"Couchbase username","password":"Couchbase password","host":"Couchbase host or connection string for cloud","port":"Couchbase port (default 8091)","database":"Couchbase database/bucket name"},"drivers":[{"name":"couchbase-sqlalchemy","pypi_package":"couchbase-sqlalchemy","connection_string":"couchbase://{username}:{password}@{host}:{port}?ssl=true","is_recommended":true}],"category":"Search & NoSQL"},"engine":"couchbase","engine_name":"Couchbase","engine_aliases":["couchbasedb"],"default_driver":"couchbase","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Couchbase" database={databaseInfo} />
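If you assemble the SQLAlchemy URI programmatically rather than typing it by hand, remember that special characters in credentials must be URL-escaped. A minimal stdlib-only sketch following the documented `couchbase://{username}:{password}@{host}:{port}?ssl=true` pattern (all credential and host values below are placeholders):

```python
from urllib.parse import quote_plus

# Placeholder credentials -- substitute your own Couchbase values.
username = "superset"
password = "p@ss/word!"  # contains characters that must be URL-escaped
host = "cb.example.com"
port = 8091

# Matches the documented pattern:
# couchbase://{username}:{password}@{host}:{port}?ssl=true
uri = f"couchbase://{quote_plus(username)}:{quote_plus(password)}@{host}:{port}?ssl=true"
print(uri)
# couchbase://superset:p%40ss%2Fword%21@cb.example.com:8091?ssl=true
```

Paste the resulting string into the SQLALCHEMY URI field when adding the database.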


@@ -0,0 +1,31 @@
---
title: CrateDB
sidebar_label: CrateDB
description: "CrateDB is a distributed SQL database for machine data and IoT workloads."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.crate","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"CrateDB is a distributed SQL database for machine data and IoT workloads.","logo":"cratedb.svg","homepage_url":"https://cratedb.com","categories":["Time Series Databases","Open Source"],"pypi_packages":["crate","sqlalchemy-cratedb"],"connection_string":"crate://{host}:{port}","default_port":4200,"parameters":{"host":"CrateDB host","port":"CrateDB HTTP port (default 4200)"},"drivers":[{"name":"crate","pypi_package":"crate[sqlalchemy]","connection_string":"crate://{host}:{port}","is_recommended":true}],"category":"Other Databases"},"engine":"crate","engine_name":"CrateDB","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="CrateDB" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Databend (legacy)
sidebar_label: Databend (legacy)
description: "Documentation for Databend (legacy) database connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.databend","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":false,"file_upload":false,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":51,"max_score":201,"documentation":{"pypi_packages":[],"connection_string":"engine+driver://user:password@host:port/dbname[?key=value&key=value...]","category":"Other Databases"},"engine":"databend","engine_name":"Databend (legacy)","engine_aliases":[],"default_driver":null,"supports_file_upload":false,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Databend (legacy)" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Databend
sidebar_label: Databend
description: "Databend is a modern cloud-native data warehouse with instant elasticity and pay-as-you-go pricing. Built in Rust for high performance."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.databend","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":false,"file_upload":false,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":51,"max_score":201,"documentation":{"description":"Databend is a modern cloud-native data warehouse with instant elasticity and pay-as-you-go pricing. Built in Rust for high performance.","logo":"databend.png","homepage_url":"https://www.databend.com/","categories":["Cloud Data Warehouses","Analytical Databases","Proprietary"],"pypi_packages":["databend-sqlalchemy"],"connection_string":"databend://{username}:{password}@{host}:{port}/{database}?secure=true","default_port":443,"parameters":{"username":"Database username","password":"Database password","host":"Databend host","port":"Databend port (default 443 for HTTPS)","database":"Database name"},"category":"Other Databases"},"engine":"databend","engine_name":"Databend","engine_aliases":[],"default_driver":"databend","supports_file_upload":false,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Databend" database={databaseInfo} />
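To double-check that a Databend URI you've written matches the documented `databend://{username}:{password}@{host}:{port}/{database}?secure=true` pattern, you can parse it with the standard library before saving it in Superset. A small sketch — the host, credentials, and database below are placeholders:

```python
from urllib.parse import urlsplit, parse_qs

# Documented pattern (values here are placeholders):
# databend://{username}:{password}@{host}:{port}/{database}?secure=true
uri = "databend://analyst:secret@databend.example.com:443/sales?secure=true"

parts = urlsplit(uri)
print(parts.scheme)                      # databend
print(parts.hostname, parts.port)        # databend.example.com 443
print(parts.path.lstrip("/"))            # sales
print(parse_qs(parts.query)["secure"])   # ['true']
```

Confirming `secure=true` and port 443 this way catches the most common misconfiguration (an HTTP port with a TLS flag, or vice versa) before the connection test fails.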


@@ -0,0 +1,31 @@
---
title: Databricks Interactive Cluster
sidebar_label: Databricks Interactive Cluster
description: "Apache Hive is a data warehouse infrastructure built on Hadoop."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.databricks","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":false,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":767,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":true,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":true,"expand_data":true,"query_cost_estimation":true,"sql_validation":false,"score":140,"max_score":201,"documentation":{"description":"Apache Hive is a data warehouse infrastructure built on Hadoop.","logo":"apache-hive.svg","homepage_url":"https://hive.apache.org/","categories":["Apache Projects","Query Engines","Open Source"],"pypi_packages":["pyhive"],"connection_string":"hive://hive@{hostname}:{port}/{database}","default_port":10000,"category":"Cloud Data Warehouses"},"engine":"databricks","engine_name":"Databricks Interactive Cluster","engine_aliases":[],"default_driver":"pyhive","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Databricks Interactive Cluster" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Databricks (legacy)
sidebar_label: Databricks (legacy)
description: "Documentation for Databricks (legacy) database connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.databricks","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":true,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":70,"max_score":201,"documentation":{"pypi_packages":[],"connection_string":"databricks+connector://token:{access_token}@{host}:{port}/{database_name}","category":"Cloud Data Warehouses"},"engine":"databricks","engine_name":"Databricks (legacy)","engine_aliases":[],"default_driver":"connector","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Databricks (legacy)" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Databricks SQL Endpoint
sidebar_label: Databricks SQL Endpoint
description: "Documentation for Databricks SQL Endpoint database connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.databricks","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":30,"max_score":201,"documentation":{"pypi_packages":[],"connection_string":"engine+driver://user:password@host:port/dbname[?key=value&key=value...]","category":"Cloud Data Warehouses"},"engine":"databricks","engine_name":"Databricks SQL Endpoint","engine_aliases":[],"default_driver":"pyodbc","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Databricks SQL Endpoint" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Databricks
sidebar_label: Databricks
description: "Databricks is a unified analytics platform built on Apache Spark, providing data engineering, data science, and machine learning capabilities in the cloud. Use the Python Connector for SQL warehouses and clusters."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.databricks","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":true,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":70,"max_score":201,"documentation":{"description":"Databricks is a unified analytics platform built on Apache Spark, providing data engineering, data science, and machine learning capabilities in the cloud. Use the Python Connector for SQL warehouses and clusters.","logo":"databricks.png","homepage_url":"https://www.databricks.com/","categories":["Cloud Data Warehouses","Analytical Databases","Hosted Open Source"],"pypi_packages":["apache-superset[databricks]"],"install_instructions":"pip install apache-superset[databricks]","connection_string":"databricks://token:{access_token}@{host}:{port}?http_path={http_path}&catalog={catalog}&schema={schema}","parameters":{"access_token":"Personal access token from Settings > User Settings","host":"Server hostname from cluster JDBC/ODBC settings","port":"Port (default 443)","http_path":"HTTP path from cluster JDBC/ODBC settings"},"drivers":[{"name":"Databricks Python Connector (Recommended)","pypi_package":"databricks-sql-connector","connection_string":"databricks://token:{access_token}@{host}:{port}?http_path={http_path}&catalog={catalog}&schema={schema}","is_recommended":true,"notes":"Official Databricks connector. Best for SQL warehouses and clusters."},{"name":"Hive Connector (Interactive Clusters)","pypi_package":"databricks-dbapi[sqlalchemy]","connection_string":"databricks+pyhive://token:{access_token}@{host}:{port}/{database}","is_recommended":false,"notes":"For Interactive Clusters. Requires http_path in engine parameters."},{"name":"ODBC (SQL Endpoints)","pypi_package":"pyodbc","connection_string":"databricks+pyodbc://token:{access_token}@{host}:{port}/{database}","is_recommended":false,"notes":"Requires ODBC driver. For serverless SQL warehouses."},{"name":"databricks-dbapi (Legacy)","pypi_package":"databricks-dbapi[sqlalchemy]","connection_string":"databricks+connector://token:{access_token}@{host}:{port}/{database}","is_recommended":false,"notes":"Legacy connector. Use Python Connector for new deployments."}],"category":"Cloud Data Warehouses"},"engine":"databricks","engine_name":"Databricks","engine_aliases":[],"default_driver":"databricks-sql-python","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Databricks" database={databaseInfo} />
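The recommended Python Connector URI carries `http_path`, `catalog`, and `schema` as query parameters, and `http_path` contains slashes that must be percent-encoded. A stdlib-only sketch of building it — the token, hostname, and warehouse path below are placeholders, not real workspace values:

```python
from urllib.parse import quote_plus, urlencode

# All values below are placeholders for your workspace settings.
access_token = "dapiXXXXXXXX"       # token from Settings > User Settings (placeholder)
host = "adb-1234.5.azuredatabricks.net"
http_path = "/sql/1.0/warehouses/abc123"

# urlencode percent-encodes the slashes in http_path for us.
query = urlencode({"http_path": http_path, "catalog": "main", "schema": "default"})
uri = f"databricks://token:{quote_plus(access_token)}@{host}:443?{query}"
print(uri)
```

Note that the literal username is always `token`; the personal access token goes in the password slot, as in the documented connection string above.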


@@ -0,0 +1,31 @@
---
title: Denodo
sidebar_label: Denodo
description: "Denodo is a data virtualization platform for logical data management."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":false,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.denodo","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":27,"max_score":201,"documentation":{"description":"Denodo is a data virtualization platform for logical data management.","logo":"denodo.png","homepage_url":"https://www.denodo.com/","categories":["Query Engines","Proprietary"],"pypi_packages":["psycopg2"],"connection_string":"denodo://{username}:{password}@{host}:{port}/{database}","default_port":9996,"parameters":{"username":"Denodo username","password":"Denodo password","host":"Denodo VDP server hostname","port":"ODBC port (default 9996)","database":"Virtual database name"},"drivers":[{"name":"psycopg2","pypi_package":"psycopg2","connection_string":"denodo://{username}:{password}@{host}:{port}/{database}","is_recommended":true,"notes":"Uses PostgreSQL wire protocol."}],"category":"Other Databases","custom_errors":[{"message_template":"Incorrect username or password.","error_type":"CONNECTION_INVALID_USERNAME_ERROR","category":"Authentication","description":"Invalid username","issue_codes":[1012],"invalid_fields":["username","password"]},{"message_template":"Please enter a password.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015],"invalid_fields":["password"]},{"message_template":"Hostname \"%(hostname)s\" cannot be resolved.","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007],"invalid_fields":["host"]},{"message_template":"Server refused the connection: check hostname and port.","error_type":"CONNECTION_PORT_CLOSED_ERROR","category":"Connection","description":"Port closed or refused","issue_codes":[1008],"invalid_fields":["host","port"]},{"message_template":"Unable to connect to database \"%(database)s\"","error_type":"CONNECTION_UNKNOWN_DATABASE_ERROR","category":"Connection","description":"Unknown database","issue_codes":[1015],"invalid_fields":["database"]},{"message_template":"Unable to connect to database \"%(database)s\": database does not exist or insufficient permissions","error_type":"CONNECTION_DATABASE_PERMISSIONS_ERROR","category":"Permissions","description":"Insufficient permissions","issue_codes":[1017],"invalid_fields":["database"]},{"message_template":"Please check your query for syntax errors at or near \"%(err)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]},{"message_template":"Column \"%(column)s\" not found in \"%(view)s\".","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]},{"message_template":"Invalid aggregation expression.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]},{"message_template":"\"%(exp)s\" is neither an aggregation function nor appears in the GROUP BY clause.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"denodo","engine_name":"Denodo","engine_aliases":[],"default_driver":"psycopg2","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Denodo" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Dremio
sidebar_label: Dremio
description: "Dremio is a data lakehouse platform for fast, self-service analytics."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.dremio","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Dremio is a data lakehouse platform for fast, self-service analytics.","logo":"dremio.png","homepage_url":"https://www.dremio.com/","categories":["Query Engines","Proprietary"],"pypi_packages":["sqlalchemy_dremio"],"connection_string":"dremio+flight://data.dremio.cloud:443/?Token={token}&UseEncryption=true","parameters":{"token":"Personal Access Token (PAT) or API token"},"drivers":[{"name":"Arrow Flight (Recommended)","pypi_package":"sqlalchemy_dremio","connection_string":"dremio+flight://data.dremio.cloud:443/?Token={token}&UseEncryption=true","is_recommended":true},{"name":"ODBC","pypi_package":"sqlalchemy_dremio","connection_string":"dremio+pyodbc://{token}@{host}:31010/dremio","is_recommended":false,"notes":"Requires Dremio ODBC drivers installed."}],"category":"Other Databases"},"engine":"dremio","engine_name":"Dremio","engine_aliases":["dremio+flight"],"default_driver":"flight","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Dremio" database={databaseInfo} />
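Dremio personal access tokens frequently contain `+`, `/`, and `=` characters, which must be percent-encoded before being placed in the `Token` query parameter of the Flight URI. A stdlib-only sketch under that assumption (the token below is a made-up placeholder):

```python
from urllib.parse import quote_plus

# Placeholder PAT; Dremio Cloud uses the fixed host shown in the docs above.
token = "abc+def/123="  # PATs often contain characters needing URL-escaping
uri = (
    "dremio+flight://data.dremio.cloud:443/"
    f"?Token={quote_plus(token)}&UseEncryption=true"
)
print(uri)
# dremio+flight://data.dremio.cloud:443/?Token=abc%2Bdef%2F123%3D&UseEncryption=true
```

An unescaped token is a common cause of authentication failures that look like an invalid PAT.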


@@ -0,0 +1,31 @@
---
title: DuckDB
sidebar_label: DuckDB
description: "DuckDB is an in-process OLAP database designed for fast analytical queries on local data. Supports CSV, Parquet, JSON, and many other file formats."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.duckdb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":38,"max_score":201,"documentation":{"description":"DuckDB is an in-process OLAP database designed for fast analytical queries on local data. Supports CSV, Parquet, JSON, and many other file formats.","logo":"duckdb.png","homepage_url":"https://duckdb.org/","categories":["Analytical Databases","Open Source"],"pypi_packages":["duckdb-engine"],"connection_string":"duckdb:////path/to/duck.db","drivers":[{"name":"duckdb-engine","pypi_package":"duckdb-engine","connection_string":"duckdb:////path/to/duck.db","is_recommended":true}],"notes":"DuckDB supports both local file and in-memory databases. Use `:memory:` for in-memory database.","compatible_databases":[{"name":"MotherDuck","description":"MotherDuck is a serverless cloud analytics platform built on DuckDB, offering collaborative data sharing and cloud-native scalability.","logo":"motherduck.png","homepage_url":"https://motherduck.com/","pypi_packages":["duckdb","duckdb-engine"],"connection_string":"duckdb:///md:{database}?motherduck_token={token}","parameters":{"database":"MotherDuck database name","motherduck_token":"Service token from MotherDuck dashboard"},"notes":"Cloud-hosted DuckDB with collaboration features.","categories":["Hosted Open Source"]}],"category":"Other Databases","custom_errors":[{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"We can't seem to resolve the column \"%(column_name)s\"","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]}]},"engine":"duckdb","engine_name":"DuckDB","engine_aliases":[],"default_driver":"duckdb_engine","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="DuckDB" database={databaseInfo} />
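The file-based, in-memory, and MotherDuck connection strings documented in this page's metadata can be assembled programmatically. A minimal sketch, assuming the `duckdb-engine` dialect is installed in the target Superset environment (the helper names below are illustrative, and the snippet only formats URI strings):

```python
from typing import Optional

def duckdb_uri(path: Optional[str] = None) -> str:
    """Return a DuckDB SQLAlchemy URI for an absolute file path, or in-memory."""
    if path is None:
        # In-memory database, per the note in the metadata above.
        return "duckdb:///:memory:"
    # Four slashes total: "duckdb:///" plus an absolute path starting with "/".
    return f"duckdb:///{path}"

def motherduck_uri(database: str, token: str) -> str:
    """MotherDuck connection string from the compatible-databases entry."""
    return f"duckdb:///md:{database}?motherduck_token={token}"

print(duckdb_uri("/path/to/duck.db"))  # duckdb:////path/to/duck.db
print(duckdb_uri())                    # duckdb:///:memory:
```

Passing either URI to `sqlalchemy.create_engine` (with `duckdb-engine` installed) yields a working connection.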


@@ -0,0 +1,31 @@
---
title: Elasticsearch
sidebar_label: Elasticsearch
description: "Elasticsearch is a distributed search and analytics engine. Query data using Elasticsearch SQL or OpenSearch SQL syntax."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.elasticsearch","limit_method":1,"limit_clause":true,"joins":false,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":false,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":37,"max_score":201,"documentation":{"description":"Elasticsearch is a distributed search and analytics engine. Query data using Elasticsearch SQL or OpenSearch SQL syntax.","logo":"elasticsearch.png","homepage_url":"https://www.elastic.co/elasticsearch/","categories":["Search & NoSQL","Open Source"],"pypi_packages":["elasticsearch-dbapi"],"connection_string":"elasticsearch+https://{user}:{password}@{host}:9243/","default_port":9243,"parameters":{"user":"Elasticsearch username","password":"Elasticsearch password","host":"Elasticsearch host"},"drivers":[{"name":"Elasticsearch SQL API (Recommended)","pypi_package":"elasticsearch-dbapi","connection_string":"elasticsearch+https://{user}:{password}@{host}:9243/","is_recommended":true,"notes":"For Elastic Cloud and self-hosted Elasticsearch with SQL enabled."},{"name":"OpenDistro / OpenSearch SQL","pypi_package":"elasticsearch-dbapi","connection_string":"odelasticsearch+https://{user}:{password}@{host}:9200/","is_recommended":false,"notes":"For OpenDistro Elasticsearch or Amazon OpenSearch Service."}],"compatible_databases":[{"name":"Elastic Cloud","description":"Elastic Cloud is the official managed Elasticsearch service from Elastic. It includes Elasticsearch, Kibana, and enterprise features with automatic scaling.","logo":"elasticsearch.png","homepage_url":"https://www.elastic.co/cloud/","categories":["Search & NoSQL","Hosted Open Source"],"pypi_packages":["elasticsearch-dbapi"],"connection_string":"elasticsearch+https://{user}:{password}@{deployment}.{region}.cloud.es.io:9243/","docs_url":"https://www.elastic.co/guide/en/cloud/current/"},{"name":"Amazon OpenSearch Service","description":"Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) is a managed search and analytics service on AWS.","logo":"elasticsearch.png","homepage_url":"https://aws.amazon.com/opensearch-service/","categories":["Search & NoSQL","Cloud - AWS","Hosted Open Source"],"pypi_packages":["elasticsearch-dbapi"],"connection_string":"odelasticsearch+https://{user}:{password}@{host}:443/","docs_url":"https://docs.aws.amazon.com/opensearch-service/latest/developerguide/"}],"category":"Search & NoSQL"},"engine":"elasticsearch","engine_name":"Elasticsearch","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Elasticsearch" database={databaseInfo} />
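The `{user}:{password}@{host}` templates in this page's metadata break if credentials contain URI-reserved characters. A minimal sketch of filling the template with percent-encoding (the helper name is illustrative, not part of `elasticsearch-dbapi`):

```python
from urllib.parse import quote

def es_uri(user: str, password: str, host: str, port: int = 9243,
           dialect: str = "elasticsearch+https") -> str:
    """Fill the Elasticsearch connection-string template, encoding credentials."""
    # safe='' so ":" and "@" inside the password are percent-encoded too.
    return f"{dialect}://{quote(user)}:{quote(password, safe='')}@{host}:{port}/"

print(es_uri("elastic", "p@ss:word", "my-deployment.es.io"))
# elasticsearch+https://elastic:p%40ss%3Aword@my-deployment.es.io:9243/
```

For OpenSearch, pass `dialect="odelasticsearch+https"` and the service's port, matching the second driver entry above.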


@@ -0,0 +1,31 @@
---
title: Exasol
sidebar_label: Exasol
description: "Exasol is a high-performance, in-memory, MPP analytical database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.exasol","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Exasol is a high-performance, in-memory, MPP analytical database.","logo":"exasol.png","homepage_url":"https://www.exasol.com/","categories":["Analytical Databases","Proprietary"],"pypi_packages":["sqlalchemy-exasol"],"connection_string":"exa+pyodbc://{username}:{password}@{dsn}","default_port":8563,"parameters":{"username":"Database username","password":"Database password","dsn":"DSN name configured in odbc.ini"},"drivers":[{"name":"pyodbc","pypi_package":"sqlalchemy-exasol","connection_string":"exa+pyodbc://{username}:{password}@{dsn}","is_recommended":true,"notes":"Requires ODBC driver and DSN configuration."},{"name":"turbodbc","pypi_package":"sqlalchemy-exasol[turbodbc]","connection_string":"exa+turbodbc://{username}:{password}@{dsn}","is_recommended":false,"notes":"Faster but requires additional dependencies."},{"name":"websocket","pypi_package":"sqlalchemy-exasol[websocket]","connection_string":"exa+websocket://{username}:{password}@{host}:{port}/{schema}","is_recommended":false,"notes":"Pure Python, no ODBC required."}],"category":"Other Databases"},"engine":"exa","engine_name":"Exasol","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Exasol" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Firebird
sidebar_label: Firebird
description: "Firebird is an open-source relational database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":false,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.firebird","limit_method":3,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":26,"max_score":201,"documentation":{"description":"Firebird is an open-source relational database.","logo":"firebird.png","homepage_url":"https://firebirdsql.org/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["sqlalchemy-firebird"],"version_requirements":"sqlalchemy-firebird>=0.7.0,<0.8","connection_string":"firebird+fdb://{username}:{password}@{host}:{port}//{path_to_db_file}","default_port":3050,"connection_examples":[{"description":"Local database","connection_string":"firebird+fdb://SYSDBA:masterkey@192.168.86.38:3050//Library/Frameworks/Firebird.framework/Versions/A/Resources/examples/empbuild/employee.fdb"}],"category":"Other Databases"},"engine":"firebird","engine_name":"Firebird","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Firebird" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Firebolt
sidebar_label: Firebolt
description: "Firebolt is a cloud data warehouse designed for high-performance analytics."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.firebolt","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Firebolt is a cloud data warehouse designed for high-performance analytics.","logo":"firebolt.png","homepage_url":"https://www.firebolt.io/","categories":["Cloud Data Warehouses","Analytical Databases","Proprietary"],"pypi_packages":["firebolt-sqlalchemy"],"connection_string":"firebolt://{client_id}:{client_secret}@{database}/{engine_name}?account_name={account_name}","parameters":{"client_id":"Service account client ID","client_secret":"Service account client secret","database":"Database name","engine_name":"Engine name","account_name":"Account name"},"drivers":[{"name":"firebolt-sqlalchemy","pypi_package":"firebolt-sqlalchemy","connection_string":"firebolt://{client_id}:{client_secret}@{database}/{engine_name}?account_name={account_name}","is_recommended":true}],"category":"Other Databases"},"engine":"firebolt","engine_name":"Firebolt","engine_aliases":[],"default_driver":"firebolt","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Firebolt" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Google BigQuery
sidebar_label: Google BigQuery
description: "Google BigQuery is a serverless, highly scalable data warehouse."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.bigquery","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":true,"cte_in_subquery":true,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":true,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":false,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":true,"expand_data":false,"query_cost_estimation":true,"sql_validation":false,"score":83,"max_score":201,"documentation":{"description":"Google BigQuery is a serverless, highly scalable data warehouse.","logo":"google-big-query.svg","homepage_url":"https://cloud.google.com/bigquery/","categories":["Cloud - Google","Analytical Databases","Proprietary"],"pypi_packages":["sqlalchemy-bigquery"],"connection_string":"bigquery://{project_id}","install_instructions":"echo \"sqlalchemy-bigquery\" >> ./docker/requirements-local.txt","authentication_methods":[{"name":"Service Account JSON","description":"Upload service account credentials JSON or paste in Secure Extra","secure_extra":{"credentials_info":{"type":"service_account","project_id":"...","private_key_id":"...","private_key":"...","client_email":"...","client_id":"...","auth_uri":"...","token_uri":"..."}}}],"notes":"Create a Service Account via GCP console with access to BigQuery datasets. For CSV/Excel uploads, also install pandas_gbq.","warnings":["Google BigQuery Python SDK is not compatible with gevent. Use a worker type other than gevent when deploying with gunicorn."],"docs_url":"https://github.com/googleapis/python-bigquery-sqlalchemy","category":"Cloud - Google","custom_errors":[{"regex_name":"CONNECTION_DATABASE_PERMISSIONS_REGEX","message_template":"Unable to connect. Verify that the following roles are set on the service account: \"BigQuery Data Viewer\", \"BigQuery Metadata Viewer\", \"BigQuery Job User\" and the following permissions are set \"bigquery.readsessions.create\", \"bigquery.readsessions.getData\"","error_type":"CONNECTION_DATABASE_PERMISSIONS_ERROR","category":"Permissions","description":"Insufficient permissions","issue_codes":[1017]},{"regex_name":"TABLE_DOES_NOT_EXIST_REGEX","message_template":"The table \"%(table)s\" does not exist. A valid table must be used to run this query.","error_type":"TABLE_DOES_NOT_EXIST_ERROR","category":"Query","description":"Table not found","issue_codes":[1003,1005]},{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"We can't seem to resolve column \"%(column)s\" at line %(location)s.","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]},{"regex_name":"SCHEMA_DOES_NOT_EXIST_REGEX","message_template":"The schema \"%(schema)s\" does not exist. A valid schema must be used to run this query.","error_type":"SCHEMA_DOES_NOT_EXIST_ERROR","category":"Query","description":"Schema not found","issue_codes":[1003,1016]},{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Please check your query for syntax errors at or near \"%(syntax_error)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"bigquery","engine_name":"Google BigQuery","engine_aliases":[],"default_driver":"bigquery","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Google BigQuery" database={databaseInfo} />
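The "Secure Extra" field described in this page's metadata expects a JSON object with the service-account credentials nested under `credentials_info`. A minimal sketch of shaping that payload (the field values are placeholders; paste the real service-account JSON downloaded from the GCP console):

```python
import json

# Placeholder service-account fields; a real key file also includes
# private_key, client_id, auth_uri, and token_uri.
service_account = {
    "type": "service_account",
    "project_id": "my-project",
    "private_key_id": "abc123",
    "client_email": "superset@my-project.iam.gserviceaccount.com",
}

# Wrap under "credentials_info", as the secure_extra schema above shows.
secure_extra = json.dumps({"credentials_info": service_account}, indent=2)
print(secure_extra)
```

The resulting string is what goes into the Secure Extra box of the database connection modal.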


@@ -0,0 +1,31 @@
---
title: Google Datastore
sidebar_label: Google Datastore
description: "Google Cloud Datastore is a highly scalable NoSQL database for your applications."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.datastore","limit_method":3,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":true,"cte_in_subquery":true,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":true,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":73,"max_score":201,"documentation":{"description":"Google Cloud Datastore is a highly scalable NoSQL database for your applications.","logo":"datastore.png","homepage_url":"https://cloud.google.com/datastore/","categories":["Cloud - Google","Search & NoSQL","Proprietary"],"pypi_packages":["python-datastore-sqlalchemy"],"connection_string":"datastore://{project_id}/?database={database_id}","authentication_methods":[{"name":"Service Account JSON","description":"Upload service account credentials JSON or paste in Secure Extra","secure_extra":{"credentials_info":{"type":"service_account","project_id":"...","private_key_id":"...","private_key":"...","client_email":"...","client_id":"...","auth_uri":"...","token_uri":"..."}}}],"notes":"Create a Service Account via GCP console with access to datastore datasets.","docs_url":"https://github.com/splasky/Python-datastore-sqlalchemy","category":"Cloud - Google","custom_errors":[{"regex_name":"CONNECTION_DATABASE_PERMISSIONS_REGEX","message_template":"Unable to connect. Verify that the following roles are set on the service account: \"Cloud Datastore Viewer\", \"Cloud Datastore User\", \"Cloud Datastore Creator\"","error_type":"CONNECTION_DATABASE_PERMISSIONS_ERROR","category":"Permissions","description":"Insufficient permissions","issue_codes":[1017]},{"regex_name":"TABLE_DOES_NOT_EXIST_REGEX","message_template":"The table \"%(table)s\" does not exist. A valid table must be used to run this query.","error_type":"TABLE_DOES_NOT_EXIST_ERROR","category":"Query","description":"Table not found","issue_codes":[1003,1005]},{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"We can't seem to resolve column \"%(column)s\" at line %(location)s.","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]},{"regex_name":"SCHEMA_DOES_NOT_EXIST_REGEX","message_template":"The schema \"%(schema)s\" does not exist. A valid schema must be used to run this query.","error_type":"SCHEMA_DOES_NOT_EXIST_ERROR","category":"Query","description":"Schema not found","issue_codes":[1003,1016]},{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Please check your query for syntax errors at or near \"%(syntax_error)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"datastore","engine_name":"Google Datastore","engine_aliases":[],"default_driver":"datastore","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Google Datastore" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Google Sheets
sidebar_label: Google Sheets
description: "Google Sheets allows querying spreadsheets as SQL tables via shillelagh."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":true,"YEAR":true},"module":"superset.db_engine_specs.gsheets","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":true,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":true,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":61,"max_score":201,"documentation":{"description":"Google Sheets allows querying spreadsheets as SQL tables via shillelagh.","logo":"google-sheets.svg","homepage_url":"https://www.google.com/sheets/about/","categories":["Cloud - Google","Hosted Open Source"],"pypi_packages":["shillelagh[gsheetsapi]"],"install_instructions":"pip install \"apache-superset[gsheets]\"","connection_string":"gsheets://","notes":"Requires Google service account credentials or OAuth2 authentication. See docs for setup instructions.","category":"Cloud - Google","custom_errors":[{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Please check your query for syntax errors near \"%(server_error)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"gsheets","engine_name":"Google Sheets","engine_aliases":[],"default_driver":"apsw","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Google Sheets" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Greenplum
sidebar_label: Greenplum
description: "VMware Greenplum is a massively parallel processing (MPP) database built on PostgreSQL."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.greenplum","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":63,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":true,"sql_validation":false,"score":94,"max_score":201,"documentation":{"description":"VMware Greenplum is a massively parallel processing (MPP) database built on PostgreSQL.","logo":"greenplum.png","homepage_url":"https://greenplum.org/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["sqlalchemy-greenplum","psycopg2"],"connection_string":"greenplum://{username}:{password}@{host}:{port}/{database}","default_port":5432,"parameters":{"username":"Database username","password":"Database password","host":"Greenplum coordinator host","port":"Default 5432","database":"Database name"},"docs_url":"https://docs.vmware.com/en/VMware-Greenplum/","category":"Other Databases"},"engine":"greenplum","engine_name":"Greenplum","engine_aliases":["postgres"],"default_driver":"psycopg2","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Greenplum" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Hologres
sidebar_label: Hologres
description: "Alibaba Cloud Hologres is a real-time interactive analytics service, fully compatible with PostgreSQL 11."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.hologres","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":34,"max_score":201,"documentation":{"description":"Alibaba Cloud Hologres is a real-time interactive analytics service, fully compatible with PostgreSQL 11.","logo":"hologres.png","homepage_url":"https://www.alibabacloud.com/product/hologres","categories":["Cloud Data Warehouses","Analytical Databases","Proprietary"],"pypi_packages":["psycopg2"],"connection_string":"postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}","parameters":{"username":"AccessKey ID of your Alibaba Cloud account","password":"AccessKey secret of your Alibaba Cloud account","host":"Public endpoint of the Hologres instance","port":"Port number of the Hologres instance","database":"Name of the Hologres database"},"default_port":80,"notes":"Uses the PostgreSQL driver. psycopg2 comes bundled with Superset.","category":"Other Databases"},"engine":"hologres","engine_name":"Hologres","engine_aliases":[],"default_driver":"psycopg2","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Hologres" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: IBM Db2 for i
sidebar_label: IBM Db2 for i
description: "IBM Db2 is a family of data management products for enterprise workloads, available on-premises, in containers, and across cloud platforms."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.ibmi","limit_method":2,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":38,"max_score":201,"documentation":{"description":"IBM Db2 is a family of data management products for enterprise workloads, available on-premises, in containers, and across cloud platforms.","logo":"ibm-db2.svg","homepage_url":"https://www.ibm.com/db2","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["ibm_db_sa"],"connection_string":"db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}","default_port":50000,"drivers":[{"name":"ibm_db_sa (with LIMIT)","connection_string":"db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}","is_recommended":true},{"name":"ibm_db_sa (without LIMIT syntax)","connection_string":"ibm_db_sa://{username}:{password}@{hostname}:{port}/{database}","is_recommended":false,"notes":"Use for older DB2 versions without LIMIT [n] syntax. Recommended for SQL Lab."}],"compatible_databases":[{"name":"IBM Db2 for i (AS/400)","description":"Db2 for i is a fully integrated database engine on IBM i (AS/400) systems. Uses a different SQLAlchemy driver optimized for IBM i.","logo":"ibm-db2.svg","homepage_url":"https://www.ibm.com/products/db2-for-i","pypi_packages":["sqlalchemy-ibmi"],"connection_string":"ibmi://{username}:{password}@{host}/{database}","parameters":{"username":"IBM i username","password":"IBM i password","host":"IBM i system host","database":"Library/schema name"},"docs_url":"https://github.com/IBM/sqlalchemy-ibmi","categories":["Proprietary"]}],"docs_url":"https://github.com/ibmdb/python-ibmdbsa","category":"Other Databases"},"engine":"ibmi","engine_name":"IBM Db2 for i","engine_aliases":["ibm_db_sa"],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="IBM Db2 for i" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: IBM Db2
sidebar_label: IBM Db2
description: "IBM Db2 is a family of data management products for enterprise workloads, available on-premises, in containers, and across cloud platforms."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.db2","limit_method":2,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":30,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":38,"max_score":201,"documentation":{"description":"IBM Db2 is a family of data management products for enterprise workloads, available on-premises, in containers, and across cloud platforms.","logo":"ibm-db2.svg","homepage_url":"https://www.ibm.com/db2","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["ibm_db_sa"],"connection_string":"db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}","default_port":50000,"drivers":[{"name":"ibm_db_sa (with LIMIT)","connection_string":"db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}","is_recommended":true},{"name":"ibm_db_sa (without LIMIT syntax)","connection_string":"ibm_db_sa://{username}:{password}@{hostname}:{port}/{database}","is_recommended":false,"notes":"Use for older DB2 versions without LIMIT [n] syntax. Recommended for SQL Lab."}],"compatible_databases":[{"name":"IBM Db2 for i (AS/400)","description":"Db2 for i is a fully integrated database engine on IBM i (AS/400) systems. Uses a different SQLAlchemy driver optimized for IBM i.","logo":"ibm-db2.svg","homepage_url":"https://www.ibm.com/products/db2-for-i","pypi_packages":["sqlalchemy-ibmi"],"connection_string":"ibmi://{username}:{password}@{host}/{database}","parameters":{"username":"IBM i username","password":"IBM i password","host":"IBM i system host","database":"Library/schema name"},"docs_url":"https://github.com/IBM/sqlalchemy-ibmi","categories":["Proprietary"]}],"docs_url":"https://github.com/ibmdb/python-ibmdbsa","category":"Other Databases"},"engine":"db2","engine_name":"IBM Db2","engine_aliases":["ibm_db_sa"],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="IBM Db2" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: IBM Netezza Performance Server
sidebar_label: IBM Netezza Performance Server
description: "IBM Netezza Performance Server is a data warehouse appliance."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.netezza","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"IBM Netezza Performance Server is a data warehouse appliance.","logo":"netezza.png","homepage_url":"https://www.ibm.com/products/netezza","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["nzalchemy"],"connection_string":"netezza+nzpy://{username}:{password}@{hostname}:{port}/{database}","default_port":5480,"category":"Other Databases"},"engine":"netezza","engine_name":"IBM Netezza Performance Server","engine_aliases":[],"default_driver":"nzpy","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="IBM Netezza Performance Server" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: MariaDB
sidebar_label: MariaDB
description: "MariaDB is a community-developed fork of MySQL."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.mariadb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":64,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":59,"max_score":201,"documentation":{"description":"MariaDB is a community-developed fork of MySQL.","logo":"mariadb.png","homepage_url":"https://mariadb.org/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["mysqlclient"],"connection_string":"mysql://{username}:{password}@{host}/{database}","default_port":3306,"notes":"Uses the MySQL driver. Fully compatible with MySQL connector.","category":"Traditional RDBMS"},"engine":"mariadb","engine_name":"MariaDB","engine_aliases":[],"default_driver":"mysqldb","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="MariaDB" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Microsoft SQL Server
sidebar_label: Microsoft SQL Server
description: "Microsoft SQL Server is a relational database management system."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.mssql","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":false,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":44,"max_score":201,"documentation":{"description":"Microsoft SQL Server is a relational database management system.","logo":"msql.png","homepage_url":"https://www.microsoft.com/en-us/sql-server","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["pymssql"],"connection_string":"mssql+pymssql://{username}:{password}@{host}:{port}/{database}","default_port":1433,"drivers":[{"name":"pymssql","pypi_package":"pymssql","connection_string":"mssql+pymssql://{username}:{password}@{host}:{port}/{database}","is_recommended":true},{"name":"pyodbc","pypi_package":"pyodbc","connection_string":"mssql+pyodbc:///?odbc_connect=Driver%3D%7BODBC+Driver+17+for+SQL+Server%7D%3BServer%3Dtcp%3A%3C{host}%3E%2C1433%3BDatabase%3D{database}%3BUid%3D{username}%3BPwd%3D{password}%3BEncrypt%3Dyes%3BConnection+Timeout%3D30","is_recommended":false,"notes":"Connection string must be URL-encoded. Special characters like @ need encoding."}],"docs_url":"https://docs.sqlalchemy.org/en/20/core/engines.html#escaping-special-characters-such-as-signs-in-passwords","category":"Cloud - Azure","custom_errors":[{"regex_name":"CONNECTION_ACCESS_DENIED_REGEX","message_template":"Either the username \"%(username)s\", password, or database name \"%(database)s\" is incorrect.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015]},{"regex_name":"CONNECTION_INVALID_HOSTNAME_REGEX","message_template":"The hostname \"%(hostname)s\" cannot be resolved.","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007]},{"regex_name":"CONNECTION_PORT_CLOSED_REGEX","message_template":"Port %(port)s on hostname \"%(hostname)s\" refused the connection.","error_type":"CONNECTION_PORT_CLOSED_ERROR","category":"Connection","description":"Port closed or refused","issue_codes":[1008]},{"regex_name":"CONNECTION_HOST_DOWN_REGEX","message_template":"The host \"%(hostname)s\" might be down, and can't be reached on port %(port)s.","error_type":"CONNECTION_HOST_DOWN_ERROR","category":"Connection","description":"Host unreachable","issue_codes":[1009]}]},"engine":"mssql","engine_name":"Microsoft SQL Server","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Microsoft SQL Server" database={databaseInfo} />
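The pyodbc driver entry above notes that the `odbc_connect` value must be URL-encoded. A minimal sketch of producing such a value with Python's standard library; the host, database, and credentials below are placeholders, not real connection details:

```python
from urllib.parse import quote_plus

# Hypothetical connection details for illustration only.
raw = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:example-host,1433;"
    "Database=mydb;Uid=myuser;Pwd=p@ssw0rd;"
    "Encrypt=yes;Connection Timeout=30"
)

# quote_plus escapes '=', ';', '{', '}', and '@', and turns spaces
# into '+', yielding the odbc_connect query value the URI expects.
uri = "mssql+pyodbc:///?odbc_connect=" + quote_plus(raw)
print(uri)
```

pymssql remains the recommended driver; this encoding step applies only to the pyodbc form of the URI.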


@@ -0,0 +1,31 @@
---
title: MonetDB
sidebar_label: MonetDB
description: "MonetDB is an open-source column-oriented relational database for high-performance analytics."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":false,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.monetdb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":26,"max_score":201,"documentation":{"description":"MonetDB is an open-source column-oriented relational database for high-performance analytics.","logo":"monet-db.png","homepage_url":"https://www.monetdb.org/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["sqlalchemy-monetdb","pymonetdb"],"connection_string":"monetdb://{username}:{password}@{host}:{port}/{database}","default_port":50000,"parameters":{"username":"Database username (default: monetdb)","password":"Database password (default: monetdb)","host":"MonetDB server host","port":"Default 50000","database":"Database name"},"docs_url":"https://www.monetdb.org/documentation/","category":"Other Databases"},"engine":"monetdb","engine_name":"MonetDB","engine_aliases":[],"default_driver":"pymonetdb","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="MonetDB" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: MongoDB
sidebar_label: MongoDB
description: "MongoDB is a document-oriented, operational NoSQL database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.mongodb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":32,"max_score":201,"documentation":{"description":"MongoDB is a document-oriented, operational NoSQL database.","logo":"mongodb.png","homepage_url":"https://www.mongodb.com/","categories":["Search & NoSQL","Proprietary"],"pypi_packages":["pymongosql"],"connection_string":"mongodb://{username}:{password}@{host}:{port}/{database}?mode=superset","parameters":{"username":"Username for MongoDB","password":"Password for MongoDB","host":"MongoDB host","port":"MongoDB port","database":"Database name"},"drivers":[{"name":"MongoDB Atlas Cloud","pypi_package":"pymongosql","connection_string":"mongodb+srv://{username}:{password}@{host}/{database}?mode=superset","notes":"For MongoDB Atlas cloud service.","is_recommended":true},{"name":"MongoDB Cluster","pypi_package":"pymongosql","connection_string":"mongodb://{username}:{password}@{host}:{port}/{database}?mode=superset","is_recommended":false,"notes":"For self-hosted MongoDB instances."}],"notes":"Uses PartiQL for SQL queries. Requires mode=superset parameter.","docs_url":"https://github.com/passren/PyMongoSQL","category":"Other Databases"},"engine":"mongodb","engine_name":"MongoDB","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="MongoDB" database={databaseInfo} />
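
The connection string above requires the `mode=superset` query parameter, and credentials with reserved characters must be percent-encoded before they can appear in a SQLAlchemy URI. A minimal sketch of assembling such a URI with the standard library (hostname, credentials, and database name here are hypothetical):

```python
from urllib.parse import quote_plus

# Hypothetical credentials -- note the "/" in the password, which must be
# percent-encoded so it does not break the URI.
user, password = "analyst", "s3cret/pw"
host, port, database = "mongo.example.com", 27017, "reports"

# mode=superset is required by the PyMongoSQL dialect, per the template above.
uri = (
    f"mongodb://{user}:{quote_plus(password)}@{host}:{port}/"
    f"{database}?mode=superset"
)
print(uri)
```

The same pattern applies to the `mongodb+srv://` Atlas variant, which drops the port.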


@@ -0,0 +1,31 @@
---
title: MotherDuck
sidebar_label: MotherDuck
description: "MotherDuck is a serverless cloud analytics platform built on DuckDB. It combines the simplicity of DuckDB with cloud-scale data sharing and collaboration."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.duckdb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":58,"max_score":201,"documentation":{"description":"MotherDuck is a serverless cloud analytics platform built on DuckDB. It combines the simplicity of DuckDB with cloud-scale data sharing and collaboration.","logo":"motherduck.png","homepage_url":"https://motherduck.com/","categories":["Analytical Databases","Cloud Data Warehouses","Hosted Open Source"],"pypi_packages":["duckdb","duckdb-engine"],"connection_string":"duckdb:///md:{database}?motherduck_token={token}","parameters":{"database":"MotherDuck database name","token":"Service token from MotherDuck dashboard"},"docs_url":"https://motherduck.com/docs/getting-started/","drivers":[{"name":"duckdb-engine","pypi_package":"duckdb-engine","connection_string":"duckdb:///md:{database}?motherduck_token={token}","is_recommended":true}],"category":"Other Databases","custom_errors":[{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"We can't seem to resolve the column \"%(column_name)s\"","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]}]},"engine":"motherduck","engine_name":"MotherDuck","engine_aliases":["duckdb"],"default_driver":"duckdb_engine","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="MotherDuck" database={databaseInfo} />
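
MotherDuck service tokens can contain characters (such as `=` padding) that are unsafe in a query string, so it is worth encoding the token before splicing it into the `{token}` slot of the template above. A minimal sketch with a made-up token and database name:

```python
from urllib.parse import quote_plus

token = "md_token_abc123=="  # hypothetical service token from the MotherDuck dashboard
database = "my_db"

# Fills the duckdb:///md:{database}?motherduck_token={token} template above.
uri = f"duckdb:///md:{database}?motherduck_token={quote_plus(token)}"
print(uri)
```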

File diff suppressed because one or more lines are too long


@@ -0,0 +1,31 @@
---
title: OceanBase
sidebar_label: OceanBase
description: "OceanBase is a distributed relational database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.oceanbase","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":59,"max_score":201,"documentation":{"description":"OceanBase is a distributed relational database.","logo":"oceanbase.svg","homepage_url":"https://www.oceanbase.com/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["oceanbase_py"],"connection_string":"oceanbase://{username}:{password}@{host}:{port}/{database}","category":"Other Databases","custom_errors":[{"regex_name":"CONNECTION_ACCESS_DENIED_REGEX","message_template":"Either the username \"%(username)s\" or the password is incorrect.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015],"invalid_fields":["username","password"]},{"regex_name":"CONNECTION_INVALID_HOSTNAME_REGEX","message_template":"Unknown OceanBase server host \"%(hostname)s\".","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007],"invalid_fields":["host"]},{"regex_name":"CONNECTION_HOST_DOWN_REGEX","message_template":"The host \"%(hostname)s\" might be down and can't be reached.","error_type":"CONNECTION_HOST_DOWN_ERROR","category":"Connection","description":"Host unreachable","issue_codes":[1009],"invalid_fields":["host","port"]},{"regex_name":"CONNECTION_UNKNOWN_DATABASE_REGEX","message_template":"Unable to connect to database \"%(database)s\".","error_type":"CONNECTION_UNKNOWN_DATABASE_ERROR","category":"Connection","description":"Unknown database","issue_codes":[1015],"invalid_fields":["database"]},{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Please check your query for syntax errors near \"%(server_error)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"oceanbase","engine_name":"OceanBase","engine_aliases":["oceanbase","oceanbase_py"],"default_driver":"oceanbase","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="OceanBase" database={databaseInfo} />
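
OceanBase usernames often embed a tenant (e.g. `root@sys` in MySQL mode), and the `@` would otherwise collide with the URI's host separator, so percent-encode the username as well as the password. A sketch with hypothetical values (2881 is a common OceanBase listen port, but check your deployment):

```python
from urllib.parse import quote_plus

user, password = "root@sys", "obpass"          # hypothetical user@tenant and password
host, port, database = "ob.example.com", 2881, "test"

# Fills the oceanbase://{username}:{password}@{host}:{port}/{database} template above;
# quote_plus turns the "@" in the username into %40.
uri = f"oceanbase://{quote_plus(user)}:{quote_plus(password)}@{host}:{port}/{database}"
print(uri)
```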


@@ -0,0 +1,31 @@
---
title: Ocient
sidebar_label: Ocient
description: "Ocient is a hyperscale data analytics database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":false,"QUARTER_YEAR":true,"YEAR":true},"module":"superset.db_engine_specs.ocient","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":false,"max_column_name":30,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":38,"max_score":201,"documentation":{"description":"Ocient is a hyperscale data analytics database.","categories":["Analytical Databases","Proprietary"],"pypi_packages":["sqlalchemy-ocient"],"connection_string":"ocient://{username}:{password}@{host}:{port}/{database}","install_instructions":"pip install sqlalchemy-ocient","category":"Other Databases","custom_errors":[{"regex_name":"CONNECTION_INVALID_USERNAME_REGEX","message_template":"The username \"%(username)s\" does not exist.","error_type":"CONNECTION_INVALID_USERNAME_ERROR","category":"Authentication","description":"Invalid username","issue_codes":[1012]},{"regex_name":"CONNECTION_INVALID_PASSWORD_REGEX","message_template":"The user/password combination is not valid (Incorrect password for user).","error_type":"CONNECTION_INVALID_PASSWORD_ERROR","category":"Authentication","description":"Invalid password","issue_codes":[1013]},{"regex_name":"CONNECTION_UNKNOWN_DATABASE_REGEX","message_template":"Could not connect to database: \"%(database)s\"","error_type":"CONNECTION_UNKNOWN_DATABASE_ERROR","category":"Connection","description":"Unknown database","issue_codes":[1015]},{"regex_name":"CONNECTION_INVALID_HOSTNAME_REGEX","message_template":"Could not resolve hostname: \"%(host)s\".","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007]},{"regex_name":"CONNECTION_INVALID_PORT_ERROR","message_template":"Port out of range 0-65535","error_type":"CONNECTION_INVALID_PORT_ERROR"},{"regex_name":"INVALID_CONNECTION_STRING_REGEX","message_template":"Invalid Connection String: Expecting String of the form 'ocient://user:pass@host:port/database'.","error_type":"GENERIC_DB_ENGINE_ERROR","category":"General","description":"Database engine error","issue_codes":[1002]},{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Syntax Error: %(qualifier)s input \"%(input)s\" expecting \"%(expected)s","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]},{"regex_name":"TABLE_DOES_NOT_EXIST_REGEX","message_template":"Table or View \"%(table)s\" does not exist.","error_type":"TABLE_DOES_NOT_EXIST_ERROR","category":"Query","description":"Table not found","issue_codes":[1003,1005]},{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"Invalid reference to column: \"%(column)s\"","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]}]},"engine":"ocient","engine_name":"Ocient","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Ocient" database={databaseInfo} />
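
The error table above includes a "Port out of range 0-65535" case, which is cheap to catch before ever handing the URI to the driver. A minimal sketch (host, port, and credentials are hypothetical) that validates the port and then fills the `ocient://user:pass@host:port/database` template:

```python
host, port, database = "ocient.example.com", 4050, "sales"  # hypothetical; confirm your port
user, password = "reporter", "pw"

# The driver rejects out-of-range ports ("Port out of range 0-65535"),
# so fail early with the same message for a clearer error.
if not 0 <= port <= 65535:
    raise ValueError("Port out of range 0-65535")

uri = f"ocient://{user}:{password}@{host}:{port}/{database}"
print(uri)
```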


@@ -0,0 +1,31 @@
---
title: OpenSearch (OpenDistro)
sidebar_label: OpenSearch (OpenDistro)
description: "Documentation for OpenSearch (OpenDistro) database connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":false,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.elasticsearch","limit_method":1,"limit_clause":true,"joins":false,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":true,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":false,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":26,"max_score":201,"documentation":{"pypi_packages":[],"connection_string":"engine+driver://user:password@host:port/dbname[?key=value&key=value...]","category":"Other Databases"},"engine":"odelasticsearch","engine_name":"OpenSearch (OpenDistro)","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="OpenSearch (OpenDistro)" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Oracle
sidebar_label: Oracle
description: "Oracle Database is a multi-model database management system."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.oracle","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Oracle Database is a multi-model database management system.","logo":"oraclelogo.png","homepage_url":"https://www.oracle.com/database/","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["oracledb"],"connection_string":"oracle://{username}:{password}@{hostname}:{port}","default_port":1521,"notes":"Previously used cx_Oracle, now uses oracledb.","docs_url":"https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html","category":"Other Databases"},"engine":"oracle","engine_name":"Oracle","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Oracle" database={databaseInfo} />
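
The template above has no database segment, and 1521 is the default port noted in the metadata. A minimal sketch of building the URI with hypothetical credentials, percent-encoding the password so characters like `#` survive:

```python
from urllib.parse import quote_plus

user, password = "scott", "tiger#1"   # hypothetical credentials
host, port = "ora.example.com", 1521  # 1521 is the default port noted above

# Fills the oracle://{username}:{password}@{hostname}:{port} template above.
uri = f"oracle://{user}:{quote_plus(password)}@{host}:{port}"
print(uri)
```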


@@ -0,0 +1,31 @@
---
title: Parseable
sidebar_label: Parseable
description: "Parseable is a distributed log analytics database with SQL-like query interface."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.parseable","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":28,"max_score":201,"documentation":{"description":"Parseable is a distributed log analytics database with SQL-like query interface.","categories":["Search & NoSQL","Open Source"],"pypi_packages":["sqlalchemy-parseable"],"connection_string":"parseable://{username}:{password}@{hostname}:{port}/{stream_name}","connection_examples":[{"description":"Example connection","connection_string":"parseable://admin:admin@demo.parseable.com:443/ingress-nginx"}],"notes":"Stream name in URI represents the Parseable logstream to query. Supports HTTP (80) and HTTPS (443).","docs_url":"https://www.parseable.io","category":"Other Databases"},"engine":"parseable","engine_name":"Parseable","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Parseable" database={databaseInfo} />
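
The example connection string above can be taken apart with the standard library to confirm which segment is the logstream; `urlsplit` handles custom schemes like `parseable://` the same way it handles `https://`:

```python
from urllib.parse import urlsplit

# The demo connection string from the example above.
parts = urlsplit("parseable://admin:admin@demo.parseable.com:443/ingress-nginx")

stream = parts.path.lstrip("/")  # the Parseable logstream being queried
print(parts.hostname, parts.port, stream)
```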

File diff suppressed because one or more lines are too long


@@ -0,0 +1,31 @@
---
title: Presto
sidebar_label: Presto
description: "Presto is a distributed SQL query engine for big data."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":false,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.presto","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":true,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":true,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":true,"expand_data":true,"query_cost_estimation":true,"sql_validation":true,"score":159,"max_score":201,"documentation":{"description":"Presto is a distributed SQL query engine for big data.","logo":"presto-og.png","homepage_url":"https://prestodb.io/","categories":["Query Engines","Open Source"],"pypi_packages":["pyhive"],"install_instructions":"pip install \"apache-superset[presto]\"","connection_string":"presto://{hostname}:{port}/{database}","default_port":8080,"parameters":{"hostname":"Presto coordinator hostname","port":"Presto coordinator port (default 8080)","database":"Catalog name"},"drivers":[{"name":"PyHive","pypi_package":"pyhive","connection_string":"presto://{hostname}:{port}/{database}","is_recommended":true}],"category":"Query Engines","custom_errors":[{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"We can't seem to resolve the column \"%(column_name)s\" at line %(location)s.","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]},{"regex_name":"TABLE_DOES_NOT_EXIST_REGEX","message_template":"The table \"%(table_name)s\" does not exist. A valid table must be used to run this query.","error_type":"TABLE_DOES_NOT_EXIST_ERROR","category":"Query","description":"Table not found","issue_codes":[1003,1005]},{"regex_name":"SCHEMA_DOES_NOT_EXIST_REGEX","message_template":"The schema \"%(schema_name)s\" does not exist. A valid schema must be used to run this query.","error_type":"SCHEMA_DOES_NOT_EXIST_ERROR","category":"Query","description":"Schema not found","issue_codes":[1003,1016]},{"regex_name":"CONNECTION_ACCESS_DENIED_REGEX","message_template":"Either the username \"%(username)s\" or the password is incorrect.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015]},{"regex_name":"CONNECTION_INVALID_HOSTNAME_REGEX","message_template":"The hostname \"%(hostname)s\" cannot be resolved.","error_type":"CONNECTION_INVALID_HOSTNAME_ERROR","category":"Connection","description":"Invalid hostname","issue_codes":[1007]},{"regex_name":"CONNECTION_HOST_DOWN_REGEX","message_template":"The host \"%(hostname)s\" might be down, and can't be reached on port %(port)s.","error_type":"CONNECTION_HOST_DOWN_ERROR","category":"Connection","description":"Host unreachable","issue_codes":[1009]},{"regex_name":"CONNECTION_PORT_CLOSED_REGEX","message_template":"Port %(port)s on hostname \"%(hostname)s\" refused the connection.","error_type":"CONNECTION_PORT_CLOSED_ERROR","category":"Connection","description":"Port closed or refused","issue_codes":[1008]},{"regex_name":"CONNECTION_UNKNOWN_DATABASE_ERROR","message_template":"Unable to connect to catalog named \"%(catalog_name)s\".","error_type":"CONNECTION_UNKNOWN_DATABASE_ERROR","category":"Connection","description":"Unknown database","issue_codes":[1015]}]},"engine":"presto","engine_name":"Presto","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Presto" database={databaseInfo} />
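
Note that in the template above the `{database}` slot holds the Presto *catalog* name, not a schema. Presto URIs commonly accept a schema as a second path segment as well (verify against your PyHive version). A minimal sketch with hypothetical host, catalog, and schema, using the default port 8080 noted above:

```python
host, port = "presto.example.com", 8080  # 8080 is the default coordinator port
catalog, schema = "hive", "default"      # hypothetical catalog/schema

# The catalog fills the {database} slot of presto://{hostname}:{port}/{database};
# the trailing schema segment is optional.
uri = f"presto://{host}:{port}/{catalog}/{schema}"
print(uri)
```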


@@ -0,0 +1,31 @@
---
title: RisingWave
sidebar_label: RisingWave
description: "RisingWave is a distributed streaming database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.risingwave","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":63,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":true,"sql_validation":false,"score":94,"max_score":201,"documentation":{"description":"RisingWave is a distributed streaming database.","logo":"risingwave.svg","homepage_url":"https://risingwave.com/","categories":["Analytical Databases","Open Source"],"pypi_packages":["sqlalchemy-risingwave"],"connection_string":"risingwave://root@{hostname}:{port}/{database}?sslmode=disable","default_port":4566,"docs_url":"https://github.com/risingwavelabs/sqlalchemy-risingwave","category":"Other Databases"},"engine":"risingwave","engine_name":"RisingWave","engine_aliases":["postgres"],"default_driver":"","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="RisingWave" database={databaseInfo} />
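
RisingWave's template above uses the `root` user, the default port 4566, and an explicit `sslmode` query parameter in the PostgreSQL style. A minimal sketch with a hypothetical host and database:

```python
host, port, database = "rw.example.com", 4566, "dev"  # 4566 is the default port noted above
sslmode = "disable"  # per the template; TLS-terminated deployments typically use another mode

# Fills the risingwave://root@{hostname}:{port}/{database}?sslmode=disable template above.
uri = f"risingwave://root@{host}:{port}/{database}?sslmode={sslmode}"
print(uri)
```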


@@ -0,0 +1,31 @@
---
title: SAP HANA
sidebar_label: SAP HANA
description: "SAP HANA is an in-memory relational database and application platform."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":false,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.hana","limit_method":2,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":30,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":27,"max_score":201,"documentation":{"description":"SAP HANA is an in-memory relational database and application platform.","logo":"sap-hana.png","homepage_url":"https://www.sap.com/products/technology-platform/hana.html","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["hdbcli","sqlalchemy-hana"],"install_instructions":"pip install apache_superset[hana]","connection_string":"hana://{username}:{password}@{host}:{port}","default_port":30015,"docs_url":"https://github.com/SAP/sqlalchemy-hana","category":"Other Databases"},"engine":"hana","engine_name":"SAP HANA","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="SAP HANA" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: SAP Sybase
sidebar_label: SAP Sybase
description: "SAP ASE (formerly Sybase) is an enterprise relational database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.sybase","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":false,"max_column_name":128,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":44,"max_score":201,"documentation":{"description":"SAP ASE (formerly Sybase) is an enterprise relational database.","logo":"sybase.png","homepage_url":"https://www.sap.com/products/technology-platform/sybase-ase.html","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["sqlalchemy-sybase","pyodbc"],"connection_string":"sybase+pyodbc://{username}:{password}@{dsn}","parameters":{"username":"Database username","password":"Database password","dsn":"ODBC Data Source Name configured for SAP ASE"},"notes":"Requires SAP ASE ODBC driver installed and configured as a DSN.","docs_url":"https://help.sap.com/docs/SAP_ASE","category":"Other Databases"},"engine":"sybase","engine_name":"SAP Sybase","engine_aliases":["sybase_sqlany"],"default_driver":"pyodbc","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="SAP Sybase" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Shillelagh
sidebar_label: Shillelagh
description: "Shillelagh is a Python library that allows querying many data sources using SQL, including Google Sheets, CSV files, and APIs."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":true,"YEAR":true},"module":"superset.db_engine_specs.shillelagh","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":true,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":41,"max_score":201,"documentation":{"description":"Shillelagh is a Python library that allows querying many data sources using SQL, including Google Sheets, CSV files, and APIs.","logo":"shillelagh.png","homepage_url":"https://shillelagh.readthedocs.io/","categories":["Other Databases","Open Source"],"pypi_packages":["shillelagh[gsheetsapi]"],"connection_string":"shillelagh://","notes":"Shillelagh uses virtual tables to query external data sources. Google Sheets requires OAuth credentials configured.","category":"Other Databases"},"engine":"shillelagh","engine_name":"Shillelagh","engine_aliases":[],"default_driver":"apsw","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Shillelagh" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: SingleStore
sidebar_label: SingleStore
description: "SingleStore is a distributed SQL database for real-time analytics and transactions."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.singlestore","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":256,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":68,"max_score":201,"documentation":{"description":"SingleStore is a distributed SQL database for real-time analytics and transactions.","logo":"singlestore.png","homepage_url":"https://www.singlestore.com/","categories":["Analytical Databases","Proprietary"],"pypi_packages":["singlestoredb"],"connection_string":"singlestoredb://{username}:{password}@{host}:{port}/{database}","default_port":3306,"parameters":{"username":"Database username","password":"Database password","host":"SingleStore host","port":"SingleStore port (default 3306)","database":"Database name"},"drivers":[{"name":"singlestoredb","pypi_package":"singlestoredb","connection_string":"singlestoredb://{username}:{password}@{host}:{port}/{database}","is_recommended":true}],"category":"Other Databases"},"engine":"singlestoredb","engine_name":"SingleStore","engine_aliases":[],"default_driver":"singlestoredb","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="SingleStore" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Snowflake
sidebar_label: Snowflake
description: "Snowflake is a cloud-native data warehouse."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.snowflake","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":256,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":72,"max_score":201,"documentation":{"description":"Snowflake is a cloud-native data warehouse.","logo":"snowflake.svg","homepage_url":"https://www.snowflake.com/","categories":["Cloud Data Warehouses","Analytical Databases","Proprietary"],"pypi_packages":["snowflake-sqlalchemy"],"connection_string":"snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}","install_instructions":"echo \"snowflake-sqlalchemy\" >> ./docker/requirements-local.txt","connection_examples":[{"description":"With role and warehouse","connection_string":"snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}"},{"description":"With defaults (role/warehouse optional)","connection_string":"snowflake://{user}:{password}@{account}.{region}/{database}"}],"authentication_methods":[{"name":"Key Pair Authentication","description":"Use RSA key pair instead of password","requirements":"Key pair must be generated and public key registered in Snowflake","notes":"Merge multi-line private key to one line with \\n between lines."}],"notes":"Schema is not required in connection string. Ensure user has privileges for all databases/schemas/tables/views/warehouses.","docs_url":"https://docs.snowflake.com/en/user-guide/key-pair-auth.html","category":"Cloud Data Warehouses","custom_errors":[{"regex_name":"OBJECT_DOES_NOT_EXIST_REGEX","message_template":"%(object)s does not exist in this database.","error_type":"OBJECT_DOES_NOT_EXIST_ERROR","category":"Query","description":"Object not found","issue_codes":[1029]},{"regex_name":"SYNTAX_ERROR_REGEX","message_template":"Please check your query for syntax errors at or near \"%(syntax_error)s\". Then, try running your query again.","error_type":"SYNTAX_ERROR","category":"Query","description":"SQL syntax error","issue_codes":[1030]}]},"engine":"snowflake","engine_name":"Snowflake","engine_aliases":[],"default_driver":"snowflake","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Snowflake" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: SQLite
sidebar_label: SQLite
description: "SQLite is a self-contained, serverless SQL database engine."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":true,"YEAR":true},"module":"superset.db_engine_specs.sqlite","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":true,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":41,"max_score":201,"documentation":{"description":"SQLite is a self-contained, serverless SQL database engine.","logo":"sqlite.png","homepage_url":"https://www.sqlite.org/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":[],"connection_string":"sqlite:///path/to/file.db?check_same_thread=false","notes":"No additional library needed. SQLite is bundled with Python.","category":"Traditional RDBMS","custom_errors":[{"regex_name":"COLUMN_DOES_NOT_EXIST_REGEX","message_template":"We can't seem to resolve the column \"%(column_name)s\"","error_type":"COLUMN_DOES_NOT_EXIST_ERROR","category":"Query","description":"Column not found","issue_codes":[1003,1004]}]},"engine":"sqlite","engine_name":"SQLite","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="SQLite" database={databaseInfo} />
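As a quick sanity check of the `sqlite:///` URI form above: the path after the scheme is an ordinary SQLite file path, which Python's bundled `sqlite3` module can open directly (a minimal sketch; the `demo` table name and in-memory path are illustrative, not part of any Superset API):

```python
import sqlite3

def row_count(db_path: str) -> int:
    """Open the same file a sqlite:///{db_path} URI would point at."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS demo (id INTEGER)")
        conn.execute("INSERT INTO demo VALUES (1), (2)")
        # COUNT(*) over the two rows just inserted
        return conn.execute("SELECT COUNT(*) FROM demo").fetchone()[0]

# ":memory:" is SQLite's transient in-memory database
print(row_count(":memory:"))  # prints 2
```

No driver installation is needed because `sqlite3` ships with CPython, which is why the `pypi_packages` list for this engine is empty.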


@@ -0,0 +1,31 @@
---
title: StarRocks
sidebar_label: StarRocks
description: "StarRocks is a high-performance analytical database for real-time analytics."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.starrocks","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":64,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":false,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":89,"max_score":201,"documentation":{"description":"StarRocks is a high-performance analytical database for real-time analytics.","logo":"starrocks.png","homepage_url":"https://www.starrocks.io/","categories":["Analytical Databases","Open Source"],"pypi_packages":["starrocks"],"connection_string":"starrocks://{username}:{password}@{host}:{port}/{catalog}.{database}","default_port":9030,"parameters":{"username":"Database username","password":"Database password","host":"StarRocks FE host","port":"Query port (default 9030)","catalog":"Catalog name","database":"Database name"},"drivers":[{"name":"starrocks","pypi_package":"starrocks","connection_string":"starrocks://{username}:{password}@{host}:{port}/{catalog}.{database}","is_recommended":true},{"name":"mysqlclient","pypi_package":"mysqlclient","connection_string":"mysql://{username}:{password}@{host}:{port}/{database}","is_recommended":false,"notes":"MySQL-compatible driver for StarRocks."},{"name":"PyMySQL","pypi_package":"pymysql","connection_string":"mysql+pymysql://{username}:{password}@{host}:{port}/{database}","is_recommended":false,"notes":"Pure Python MySQL driver, no compilation required."}],"compatible_databases":[{"name":"CelerData","description":"CelerData is a fully-managed cloud analytics service built on StarRocks. It provides instant elasticity, automatic scaling, and enterprise features.","logo":"celerdata.png","homepage_url":"https://celerdata.com/","categories":["Analytical Databases","Cloud Data Warehouses","Hosted Open Source"],"pypi_packages":["starrocks"],"connection_string":"starrocks://{username}:{password}@{host}:{port}/{catalog}.{database}","parameters":{"username":"CelerData username","password":"CelerData password","host":"CelerData cluster endpoint","port":"Query port (default 9030)","catalog":"Catalog name","database":"Database name"},"docs_url":"https://docs.celerdata.com/"}],"category":"Analytical Databases","custom_errors":[{"regex_name":"CONNECTION_ACCESS_DENIED_REGEX","message_template":"Either the username \"%(username)s\" or the password is incorrect.","error_type":"CONNECTION_ACCESS_DENIED_ERROR","category":"Authentication","description":"Access denied","issue_codes":[1014,1015],"invalid_fields":["username","password"]},{"regex_name":"CONNECTION_UNKNOWN_DATABASE_REGEX","message_template":"Unable to connect to database \"%(database)s\".","error_type":"CONNECTION_UNKNOWN_DATABASE_ERROR","category":"Connection","description":"Unknown database","issue_codes":[1015],"invalid_fields":["database"]}]},"engine":"starrocks","engine_name":"StarRocks","engine_aliases":[],"default_driver":"starrocks","supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="StarRocks" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Superset meta database
sidebar_label: Superset meta database
description: "Superset meta database is an experimental feature that enables querying across multiple configured databases using a single connection."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":true,"YEAR":true},"module":"superset.db_engine_specs.superset","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":true,"user_impersonation":false,"file_upload":false,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":false,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":31,"max_score":201,"documentation":{"description":"Superset meta database is an experimental feature that enables querying across multiple configured databases using a single connection.","logo":"superset.svg","homepage_url":"https://superset.apache.org/","categories":["Other Databases"],"pypi_packages":[],"connection_string":"superset://","notes":"This is an internal Superset feature. Enable with ENABLE_SUPERSET_META_DB feature flag. Allows cross-database queries using virtual tables.","category":"Other Databases"},"engine":"superset","engine_name":"Superset meta database","engine_aliases":[],"default_driver":"","supports_file_upload":false,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Superset meta database" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: TDengine
sidebar_label: TDengine
description: "TDengine is a high-performance time-series database for IoT."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":false,"QUARTER":false,"QUARTER_YEAR":false,"YEAR":false},"module":"superset.db_engine_specs.tdengine","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":64,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":25,"max_score":201,"documentation":{"description":"TDengine is a high-performance time-series database for IoT.","logo":"tdengine.png","homepage_url":"https://tdengine.com/","categories":["Time Series Databases","Open Source"],"pypi_packages":["taospy","taos-ws-py"],"connection_string":"taosws://{user}:{password}@{host}:{port}","default_port":6041,"connection_examples":[{"description":"Local connection","connection_string":"taosws://root:taosdata@127.0.0.1:6041"}],"docs_url":"https://www.tdengine.com","category":"Other Databases"},"engine":"taosws","engine_name":"TDengine","engine_aliases":[],"default_driver":"taosws","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="TDengine" database={databaseInfo} />


@@ -0,0 +1,31 @@
---
title: Teradata
sidebar_label: Teradata
description: "Teradata is an enterprise data warehouse platform."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":false,"FIVE_SECONDS":false,"THIRTY_SECONDS":false,"MINUTE":true,"FIVE_MINUTES":false,"TEN_MINUTES":false,"FIFTEEN_MINUTES":false,"THIRTY_MINUTES":false,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.teradata","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":30,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":27,"max_score":201,"documentation":{"description":"Teradata is an enterprise data warehouse platform.","logo":"teradata.png","homepage_url":"https://www.teradata.com/","categories":["Traditional RDBMS","Proprietary"],"pypi_packages":["teradatasqlalchemy"],"connection_string":"teradatasql://{user}:{password}@{host}","default_port":1025,"drivers":[{"name":"teradatasqlalchemy (Recommended)","pypi_package":"teradatasqlalchemy","connection_string":"teradatasql://{user}:{password}@{host}","is_recommended":true,"notes":"No ODBC drivers required."},{"name":"sqlalchemy-teradata (ODBC)","pypi_package":"sqlalchemy-teradata","is_recommended":false,"notes":"Requires ODBC driver installation.","docs_url":"https://downloads.teradata.com/download/connectivity/odbc-driver/linux"}],"category":"Other Databases"},"engine":"teradatasql","engine_name":"Teradata","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Teradata" database={databaseInfo} />

View File

@@ -0,0 +1,31 @@
---
title: TimescaleDB
sidebar_label: TimescaleDB
description: "TimescaleDB is an open-source relational database for time-series and analytics, built on PostgreSQL."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.timescaledb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":34,"max_score":201,"documentation":{"description":"TimescaleDB is an open-source relational database for time-series and analytics, built on PostgreSQL.","logo":"timescale.png","homepage_url":"https://www.timescale.com/","categories":["Analytical Databases","Open Source"],"pypi_packages":["psycopg2"],"connection_string":"postgresql://{username}:{password}@{host}:{port}/{database}","default_port":5432,"connection_examples":[{"description":"Timescale Cloud (SSL required)","connection_string":"postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=require"}],"notes":"Uses the PostgreSQL driver. 
psycopg2 comes bundled with Superset.","docs_url":"https://docs.timescale.com/","category":"Other Databases"},"engine":"timescaledb","engine_name":"TimescaleDB","engine_aliases":[],"default_driver":"psycopg2","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="TimescaleDB" database={databaseInfo} />

View File

@@ -0,0 +1,31 @@
---
title: Trino
sidebar_label: Trino
description: "Trino is a distributed SQL query engine for big data analytics."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":false,"HALF_HOUR":true,"HOUR":true,"SIX_HOURS":true,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":true,"WEEK_STARTING_MONDAY":true,"WEEK_ENDING_SATURDAY":true,"WEEK_ENDING_SUNDAY":true,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.trino","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":true,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":true,"function_names":true,"user_impersonation":true,"file_upload":true,"get_extra_table_metadata":true,"dbapi_exception_mapping":true,"custom_errors":false,"dynamic_schema":true,"catalog":true,"dynamic_catalog":true,"ssh_tunneling":true,"query_cancelation":true,"get_metrics":false,"where_latest_partition":true,"expand_data":false,"query_cost_estimation":true,"sql_validation":false,"score":149,"max_score":201,"documentation":{"description":"Trino is a distributed SQL query engine for big data analytics.","logo":"trino.png","homepage_url":"https://trino.io/","categories":["Query Engines","Open Source"],"pypi_packages":["trino"],"install_instructions":"pip install \"apache-superset[trino]\"","connection_string":"trino://{username}:{password}@{hostname}:{port}/{catalog}","default_port":8080,"parameters":{"username":"Trino username","password":"Trino password (if authentication is enabled)","hostname":"Trino coordinator hostname","port":"Trino coordinator port (default 8080)","catalog":"Catalog 
name"},"drivers":[{"name":"trino","pypi_package":"trino","connection_string":"trino://{username}:{password}@{hostname}:{port}/{catalog}","is_recommended":true}],"compatible_databases":[{"name":"Starburst Galaxy","description":"Starburst Galaxy is a fully-managed cloud analytics platform built on Trino. It provides data lake analytics with enterprise security and governance.","logo":"starburst.png","homepage_url":"https://www.starburst.io/platform/starburst-galaxy/","categories":["Query Engines","Cloud Data Warehouses","Hosted Open Source"],"pypi_packages":["trino"],"connection_string":"trino://{username}:{password}@{host}:{port}/{catalog}","parameters":{"username":"Starburst Galaxy username (email/role)","password":"Starburst Galaxy password or token","host":"Your Galaxy cluster hostname","port":"Port (default 443)","catalog":"Catalog name"},"docs_url":"https://docs.starburst.io/starburst-galaxy/"},{"name":"Starburst Enterprise","description":"Starburst Enterprise is a self-managed Trino distribution with enterprise features, security, and support.","logo":"starburst.png","homepage_url":"https://www.starburst.io/platform/starburst-enterprise/","categories":["Query Engines","Hosted Open Source"],"pypi_packages":["trino"],"connection_string":"trino://{username}:{password}@{hostname}:{port}/{catalog}","docs_url":"https://docs.starburst.io/"}],"category":"Query Engines"},"engine":"trino","engine_name":"Trino","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":true,"supports_catalog":true,"supports_dynamic_catalog":true};
<DatabasePage name="Trino" database={databaseInfo} />

View File

@@ -0,0 +1,31 @@
---
title: Vertica
sidebar_label: Vertica
description: "Vertica is a column-oriented analytics database."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.vertica","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":34,"max_score":201,"documentation":{"description":"Vertica is a column-oriented analytics database.","logo":"vertica.png","homepage_url":"https://www.vertica.com/","categories":["Analytical Databases","Proprietary"],"pypi_packages":["sqlalchemy-vertica-python"],"connection_string":"vertica+vertica_python://{username}:{password}@{host}/{database}","default_port":5433,"parameters":{"username":"Database username","password":"Database password","host":"localhost, IP address, or hostname (cloud or on-prem)","database":"Database name","port":"Default 5433"},"notes":"Supports load balancer backup host configuration.","docs_url":"http://www.vertica.com/","category":"Analytical 
Databases"},"engine":"vertica","engine_name":"Vertica","engine_aliases":[],"default_driver":null,"supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="Vertica" database={databaseInfo} />

View File

@@ -0,0 +1,31 @@
---
title: YDB
sidebar_label: YDB
description: "YDB is a distributed SQL database by Yandex."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":false,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.ydb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":false,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":23,"max_score":201,"documentation":{"description":"YDB is a distributed SQL database by Yandex.","logo":"ydb.svg","homepage_url":"https://ydb.tech/","categories":["Traditional RDBMS","Open Source"],"pypi_packages":["ydb-sqlalchemy"],"connection_string":"ydb://{host}:{port}/{database_name}","default_port":2135,"engine_parameters":[{"name":"Protocol","description":"Specify connection protocol (default: grpc)","secure_extra":{"protocol":"grpcs"}}],"authentication_methods":[{"name":"Static Credentials","description":"Username/password authentication","secure_extra":{"credentials":{"username":"...","password":"..."}}},{"name":"Access Token","description":"Token-based 
authentication","secure_extra":{"credentials":{"token":"..."}}},{"name":"Service Account","description":"Service account JSON credentials","secure_extra":{"credentials":{"service_account_json":{"id":"...","service_account_id":"...","private_key":"..."}}}}],"category":"Other Databases"},"engine":"yql","engine_name":"YDB","engine_aliases":["yql+ydb","ydb"],"default_driver":"ydb","supports_file_upload":false,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="YDB" database={databaseInfo} />

View File

@@ -0,0 +1,31 @@
---
title: YugabyteDB
sidebar_label: YugabyteDB
description: "YugabyteDB is a distributed SQL database built on top of PostgreSQL."
hide_title: true
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
import { DatabasePage } from '@site/src/components/databases';
export const databaseInfo = {"time_grains":{"SECOND":true,"FIVE_SECONDS":true,"THIRTY_SECONDS":true,"MINUTE":true,"FIVE_MINUTES":true,"TEN_MINUTES":true,"FIFTEEN_MINUTES":true,"THIRTY_MINUTES":true,"HALF_HOUR":false,"HOUR":true,"SIX_HOURS":false,"DAY":true,"WEEK":true,"WEEK_STARTING_SUNDAY":false,"WEEK_STARTING_MONDAY":false,"WEEK_ENDING_SATURDAY":false,"WEEK_ENDING_SUNDAY":false,"MONTH":true,"QUARTER":true,"QUARTER_YEAR":false,"YEAR":true},"module":"superset.db_engine_specs.yugabytedb","limit_method":1,"limit_clause":true,"joins":true,"subqueries":true,"alias_in_select":true,"alias_in_orderby":true,"time_groupby_inline":false,"alias_to_source_column":false,"order_by_not_in_select":true,"expressions_in_orderby":false,"cte_in_subquery":true,"max_column_name":null,"sql_comments":true,"escaped_colons":true,"masked_encrypted_extra":false,"column_type_mapping":false,"function_names":false,"user_impersonation":false,"file_upload":true,"get_extra_table_metadata":false,"dbapi_exception_mapping":false,"custom_errors":false,"dynamic_schema":false,"catalog":false,"dynamic_catalog":false,"ssh_tunneling":true,"query_cancelation":false,"get_metrics":false,"where_latest_partition":false,"expand_data":false,"query_cost_estimation":false,"sql_validation":false,"score":34,"max_score":201,"documentation":{"description":"YugabyteDB is a distributed SQL database built on top of PostgreSQL.","logo":"yugabyte.png","homepage_url":"https://www.yugabyte.com/","categories":["Cloud Data Warehouses","Traditional RDBMS","Open Source"],"pypi_packages":["psycopg2"],"connection_string":"postgresql://{username}:{password}@{host}:{port}/{database}","default_port":5433,"notes":"Uses the PostgreSQL driver. 
psycopg2 comes bundled with Superset.","docs_url":"https://docs.yugabyte.com/","category":"Other Databases"},"engine":"yugabytedb","engine_name":"YugabyteDB","engine_aliases":[],"default_driver":"psycopg2","supports_file_upload":true,"supports_dynamic_schema":false,"supports_catalog":false,"supports_dynamic_catalog":false};
<DatabasePage name="YugabyteDB" database={databaseInfo} />

View File

@@ -0,0 +1,341 @@
---
sidebar_position: 9
title: Frequently Asked Questions
description: Common questions about Apache Superset including performance, database support, visualizations, and configuration.
keywords: [superset faq, superset questions, superset help, data visualization faq]
---
import FAQSchema from '@site/src/components/FAQSchema';
<FAQSchema faqs={[
{
question: "How big of a dataset can Superset handle?",
answer: "Superset can work with even gigantic databases. Superset acts as a thin layer above your underlying databases or data engines, which do all the processing. Superset simply visualizes the results of the query. The key to achieving acceptable performance is whether your database can execute queries and return results at acceptable speed."
},
{
question: "What are the computing specifications required to run Superset?",
answer: "The specs depend on how many users you have and their activity, not on the size of your data. Community members have reported 8GB RAM, 2vCPUs as adequate for a moderately-sized instance. Monitor your resource usage and adjust as needed."
},
{
question: "Can I join or query multiple tables at one time?",
answer: "Not in the Explore or Visualization UI directly. A Superset SQLAlchemy datasource can only be a single table or a view. You can create a view that joins tables, or use SQL Lab where you can write SQL queries to join multiple tables."
},
{
question: "How do I create my own visualization?",
answer: "Read the instructions in the Creating Visualization Plugins documentation to learn how to build custom visualizations for Superset."
},
{
question: "Can I upload and visualize CSV data?",
answer: "Yes! Superset supports CSV upload functionality. Read the Exploring Data documentation to learn how to enable and use CSV upload."
},
{
question: "Why are my queries timing out?",
answer: "There are many possible causes. For SQL Lab, Superset allows queries to run up to 6 hours by default (configurable via SQLLAB_ASYNC_TIME_LIMIT_SEC). For dashboard timeouts, check your gateway/proxy timeout settings and adjust SUPERSET_WEBSERVER_TIMEOUT in superset_config.py."
},
{
question: "Why is the map not visible in the geospatial visualization?",
answer: "You need to register a free account at Mapbox.com, obtain an API key, and add it to your .env file at the key MAPBOX_API_KEY."
},
{
question: "What database engine can I use as a backend for Superset?",
answer: "Superset is tested using MySQL, PostgreSQL, and SQLite backends for storing its internal metadata. While Superset supports many databases as data sources, only these are recommended for the metadata store in production."
},
{
question: "Does Superset work with my database?",
answer: "Superset supports any database with a Python SQLAlchemy dialect and DBAPI driver. Check the Connecting to Databases documentation for the full list of supported databases."
},
{
question: "Does Superset offer a public API?",
answer: "Yes, Superset has a public REST API documented using Swagger. Enable FAB_API_SWAGGER_UI in superset_config.py to access interactive API documentation at /swagger/v1."
},
{
question: "Does Superset collect any telemetry data?",
answer: "Superset uses Scarf by default to collect basic telemetry data to help maintainers understand version usage. Users can opt out by setting the SCARF_ANALYTICS environment variable to false."
},
{
question: "Does Superset have a trash bin to recover deleted assets?",
answer: "No, there is no built-in way to recover deleted dashboards, charts, or datasets. It is recommended to take periodic backups of the metadata database and use export functionality for recovery."
}
]} />
# FAQ
## How big of a dataset can Superset handle?
Superset can work with even gigantic databases! Superset acts as a thin layer above your underlying
databases or data engines, which do all the processing. Superset simply visualizes the results of
the query.
The key to achieving acceptable performance in Superset is whether your database can execute queries
and return results at a speed that is acceptable to your users. If you experience slow performance with
Superset, benchmark and tune your data warehouse.
## What are the computing specifications required to run Superset?
The specs of your Superset installation depend on how many users you have and what their activity is, not
on the size of your data. Superset admins in the community have reported 8GB RAM, 2vCPUs as adequate to
run a moderately-sized instance. To develop Superset, e.g., compile code or build images, you may
need more power.
Monitor your resource usage and increase or decrease as needed. Note that Superset usage has a tendency
to occur in spikes, e.g., if everyone in a meeting loads the same dashboard at once.
Superset's application metadata does not require a very large database to store it, though
the log file grows over time.
## Can I join / query multiple tables at one time?
Not in the Explore or Visualization UI. A Superset SQLAlchemy datasource can only be a single table
or a view.
When working with tables, the solution would be to create a table that contains all the fields
needed for your analysis, most likely through some scheduled batch process.
A view is a simple logical layer that abstracts an arbitrary SQL query as a virtual table. This can
allow you to join and union multiple tables and to apply some transformation using arbitrary SQL
expressions. The limitation there is your database performance, as Superset effectively will run a
query on top of your query (view). A good practice may be to limit yourself to joining your main
large table to one or many small tables only, and avoid using _GROUP BY_ where possible as Superset
will do its own _GROUP BY_ and doing the work twice might slow down performance.
Whether you use a table or a view, performance depends on how fast your database can deliver
the result to users interacting with Superset.
However, if you are using SQL Lab, there is no such limitation. You can write SQL queries to join
multiple tables as long as your database account has access to the tables.
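As a minimal sketch of the view approach described above (using SQLite and made-up table names), a join can be wrapped in a view, which Superset can then register as an ordinary single-table dataset:

```python
import sqlite3

# Illustrative only: table/column names are invented for this sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (id INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 10, 99.5), (2, 20, 15.0);
    INSERT INTO customers VALUES (10, 'EMEA'), (20, 'APAC');
    -- The view flattens the join into a single "virtual table"
    -- that Superset can point a dataset at.
    CREATE VIEW orders_enriched AS
        SELECT o.id, o.amount, c.region
        FROM orders o JOIN customers c ON o.customer_id = c.id;
""")
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders_enriched "
    "GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 15.0), ('EMEA', 99.5)]
```

Note that Superset will still issue its own query (with its own GROUP BY) on top of the view, so the performance caveats above apply.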
## How do I create my own visualization?
We recommend reading the instructions in
[Creating Visualization Plugins](/developer-docs/contributing/howtos#creating-visualization-plugins).
## Can I upload and visualize CSV data?
Absolutely! Read the instructions [here](/user-docs/using-superset/exploring-data) to learn
how to enable and use CSV upload.
## Why are my queries timing out?
There are many possible causes for why a long-running query might time out.
For long-running queries in SQL Lab, Superset by default allows a query to run for up to 6 hours
before it is killed by Celery. If you want to increase this limit, you can specify the
timeout in your configuration. For example:
```python
SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6
```
If you are seeing timeouts (504 Gateway Time-out) when loading a dashboard or exploring a chart, you are
probably behind a gateway or proxy server (such as Nginx). If the gateway does not receive a timely response
from the Superset server (which is processing long queries), it will send a 504 status code
directly to clients. Superset has a client-side timeout limit to address this issue: if the query doesn't
come back within the client-side timeout (60 seconds by default), Superset will display a warning message
to pre-empt the gateway timeout message. If you have a longer gateway timeout limit, you can change the
timeout setting in **superset_config.py**:
```python
SUPERSET_WEBSERVER_TIMEOUT = 60
```
## Why is the map not visible in the geospatial visualization?
You need to register a free account at [Mapbox.com](https://www.mapbox.com), obtain an API key, and add it
to **.env** at the key MAPBOX_API_KEY:
```python
MAPBOX_API_KEY = "longstringofalphanumer1c"
```
## How to limit the timed refresh on a dashboard?
By default, the dashboard timed refresh feature automatically re-queries every slice on
a dashboard according to a set schedule. Sometimes, however, you won't want all of the slices to be
refreshed - especially if some data is slow-moving or backed by heavy queries. To exclude specific slices
from the timed refresh process, add the `timed_refresh_immune_slices` key to the dashboard JSON
Metadata field:
```json
{
"filter_immune_slices": [],
"expanded_slices": {},
"filter_immune_slice_fields": {},
"timed_refresh_immune_slices": [324]
}
```
In the example above, if a timed refresh is set for the dashboard, then every slice except 324 will
be automatically re-queried on schedule.
Slice refresh will also be staggered over the specified period. You can turn off this staggering by
setting `stagger_refresh` to false, and modify the stagger period by setting `stagger_time` to a
value in milliseconds in the JSON Metadata field:
```json
{
"stagger_refresh": false,
"stagger_time": 2500
}
```
Here, the entire dashboard will refresh at once if periodic refresh is on. The stagger time of 2.5
seconds is ignored.
## Why does `flask fab` or `superset` freeze/hang/stop responding when started (my home directory is NFS mounted)?
By default, Superset creates and uses an SQLite database at `~/.superset/superset.db`. SQLite is
known to [not work well if used on NFS](https://www.sqlite.org/lockingv3.html) due to broken file
locking implementation on NFS.
You can override this path using the **SUPERSET_HOME** environment variable.
Another workaround is to change where Superset stores the SQLite database by adding the following to
`superset_config.py`:
```python
SQLALCHEMY_DATABASE_URI = 'sqlite:////new/location/superset.db?check_same_thread=false'
```
You can read more about customizing Superset using the configuration file
[here](/admin-docs/configuration/configuring-superset).
## What if the table schema changed?
Table schemas evolve, and Superset needs to reflect that. It's pretty common in the life cycle of a
dashboard to want to add a new dimension or metric. To get Superset to discover your new columns,
all you have to do is go to **Data -> Datasets**, click the edit icon next to the dataset
whose schema has changed, and hit **Sync columns from source** from the **Columns** tab.
Behind the scenes, the new columns will get merged. You may then want to re-edit the
dataset to configure the **Columns** tab, check the appropriate boxes, and save again.
## What database engine can I use as a backend for Superset?
To clarify, the database backend is an OLTP database used by Superset to store its internal
information like your list of users and dashboard definitions. While Superset supports a
[variety of databases as data _sources_](/user-docs/databases/#installing-database-drivers),
only a few database engines are supported for use as the OLTP backend / metadata store.
Superset is tested using MySQL, PostgreSQL, and SQLite backends. It's recommended that you install
Superset on one of these database servers for production. Installation on other OLTP databases
may work but isn't tested. It has been reported that [Microsoft SQL Server does _not_
work as a Superset backend](https://github.com/apache/superset/issues/18961). Column-store,
non-OLTP databases are not designed for this type of workload.
## How can I configure OAuth authentication and authorization?
You can take a look at this Flask-AppBuilder
[configuration example](https://github.com/dpgaspar/Flask-AppBuilder/blob/master/examples/oauth/config.py).
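For orientation, here is a sketch of what an OAuth setup in `superset_config.py` typically looks like. The Google provider details and client credentials below are placeholders; refer to the linked Flask-AppBuilder example for a complete, tested configuration:

```python
# superset_config.py (sketch; client_id/client_secret are placeholders)
from flask_appbuilder.security.manager import AUTH_OAUTH

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True          # auto-register users on first login
AUTH_USER_REGISTRATION_ROLE = "Gamma"  # default role assigned to new users

OAUTH_PROVIDERS = [
    {
        "name": "google",
        "icon": "fa-google",
        "token_key": "access_token",
        "remote_app": {
            "client_id": "GOOGLE_CLIENT_ID",
            "client_secret": "GOOGLE_CLIENT_SECRET",
            "api_base_url": "https://www.googleapis.com/oauth2/v2/",
            "client_kwargs": {"scope": "email profile"},
            "request_token_url": None,
            "access_token_url": "https://accounts.google.com/o/oauth2/token",
            "authorize_url": "https://accounts.google.com/o/oauth2/auth",
        },
    }
]
```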
## Is there a way to force the dashboard to use specific colors?
It is possible on a per-dashboard basis by providing a mapping of labels to colors in the JSON
Metadata attribute using the `label_colors` key. You can use either a full hex color, a named color
like `red`, `coral`, or `lightblue`, or the index in the current color palette (0 for the first color, 1 for
the second, etc.). Example:
```json
{
"label_colors": {
"foo": "#FF69B4",
"bar": "lightblue",
"baz": 0
}
}
```
## Does Superset work with [insert database engine here]?
The [Connecting to Databases section](/user-docs/databases/) provides the best
overview for supported databases. Database engines not listed on that page may work too. We rely on
the community to contribute to this knowledge base.
For a database engine to be supported in Superset through the SQLAlchemy connector, it requires
having a Python-compliant [SQLAlchemy dialect](https://docs.sqlalchemy.org/en/13/dialects/) as well
as a [DBAPI driver](https://www.python.org/dev/peps/pep-0249/). Databases that have limited
SQL support may work as well. For instance, it's possible to connect to Druid through the SQLAlchemy
connector even though Druid does not support joins and subqueries. Another key element for a
database to be supported is the Superset Database Engine Specification interface. This
interface allows for defining database-specific configurations and logic that go beyond the
SQLAlchemy and DBAPI scope. This includes features like:
- date-related SQL functions that allow Superset to fetch different time granularities when running
  time-series queries
- whether the engine supports subqueries; if false, Superset may run 2-phase queries to compensate
  for the limitation
- methods for processing logs and inferring the percentage of completion of a query
- technicalities on how to handle cursors and connections if the driver is not standard DBAPI
Beyond the SQLAlchemy connector, it's also possible, though much more involved, to extend Superset
and write your own connector. The only example of this at the moment is the Druid connector, which
is getting superseded by Druid's growing SQL support and the recent availability of a DBAPI and
SQLAlchemy driver. If the database you are considering integrating has any kind of SQL support,
it's probably preferable to go the SQLAlchemy route. Note that for a native connector to be possible,
the database needs to support running OLAP-type queries and should be able to do things that
are typical in basic SQL:
- aggregate data
- apply filters
- apply HAVING-type filters
- be schema-aware, expose columns and types
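To illustrate the first bullet, here is a simplified sketch (not Superset's actual class hierarchy) of how an engine spec can map Superset time grains to engine-specific SQL templates with a `{col}` placeholder:

```python
# Sketch of a time-grain mapping as used by database engine specs.
# Keys are ISO 8601 durations; templates are engine-specific SQL.
_time_grain_expressions = {
    "PT1M": "DATE_TRUNC('minute', {col})",
    "PT1H": "DATE_TRUNC('hour', {col})",
    "P1D": "DATE_TRUNC('day', {col})",
}

def grain_sql(column: str, grain: str) -> str:
    """Render the SQL expression for a column at a given time grain."""
    return _time_grain_expressions[grain].format(col=column)

print(grain_sql("order_ts", "P1D"))  # DATE_TRUNC('day', order_ts)
```

A grain is "supported" by an engine simply by having an entry in this mapping, which is what the time-grain tables on the database pages above reflect.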
## Does Superset offer a public API?
Yes, there is a public REST API, and its surface is expanding steadily. You can read more about this API and
interact with it using Swagger [here](/developer-docs/api).
The original vision for the collection of endpoints under **/api/v1** was specified in
[SIP-17](https://github.com/apache/superset/issues/7259), and constant progress has been
made to cover more and more use cases.
The API available is documented using [Swagger](https://swagger.io/) and the documentation can be
made available under **/swagger/v1** by enabling the following flag in `superset_config.py`:
```python
FAB_API_SWAGGER_UI = True
```
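As a quick sketch of using the REST API directly, the script below authenticates and lists dashboards against a local instance. The host, credentials, and `provider` value are assumptions; adjust them for your deployment and auth backend:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8088"  # assumption: a local Superset instance


def get_access_token(username: str, password: str) -> str:
    """POST /api/v1/security/login and return the JWT access token."""
    payload = json.dumps({
        "username": username,
        "password": password,
        "provider": "db",  # match your configured auth provider
        "refresh": True,
    }).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/security/login",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def list_dashboards(token: str) -> list:
    """GET /api/v1/dashboard/ using the Bearer token."""
    req = urllib.request.Request(
        f"{BASE_URL}/api/v1/dashboard/",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]
```

The full, authoritative list of endpoints and payload shapes is in the Swagger UI mentioned above.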
There are other undocumented [private] ways to interact with Superset programmatically that offer no
guarantees and are not recommended but may fit your use case temporarily:
- using the ORM (SQLAlchemy) directly
- using the internal FAB ModelView API (to be deprecated in Superset)
- altering the source code in your fork
## How can I see usage statistics (e.g., monthly active users)?
This functionality is not included with Superset, but you can extract and analyze Superset's application
metadata to see what actions have occurred. By default, user activities are logged in the `logs` table
in Superset's metadata database. One company has published a write-up of [how they analyzed Superset
usage, including example queries](https://engineering.hometogo.com/monitor-superset-usage-via-superset-c7f9fba79525).
## What Does Hours Offset in the Edit Dataset view do?
In the Edit Dataset view, you can specify a time offset. This field lets you configure the
number of hours to be added or subtracted from the time column.
This can be used, for example, to convert UTC time to local time.
## Does Superset collect any telemetry data?
Superset uses [Scarf](https://about.scarf.sh/) by default to collect basic telemetry data upon installing and/or running Superset. This data helps the maintainers of Superset better understand which versions of Superset are being used, in order to prioritize patch/minor releases and security fixes.
We use the [Scarf Gateway](https://docs.scarf.sh/gateway/) to sit in front of container registries, the [scarf-js](https://about.scarf.sh/package-sdks) package to track `npm` installations, and a Scarf pixel to gather anonymous analytics on Superset page views.
Scarf purges PII and provides aggregated statistics. Superset users can easily opt out of analytics in various ways documented [here](https://docs.scarf.sh/gateway/#do-not-track) and [here](https://docs.scarf.sh/package-analytics/#as-a-user-of-a-package-using-scarf-js-how-can-i-opt-out-of-analytics).
Superset maintainers can also opt out of telemetry data collection by setting the `SCARF_ANALYTICS` environment variable to `false` in the Superset container (or anywhere Superset/webpack are run).
Additional opt-out instructions for Docker users are available on the [Docker Installation](/admin-docs/installation/docker-compose) page.
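For example, a minimal opt-out might look like the following (the `docker run` invocation is an illustrative sketch; see the Docker installation page above for the full compose-based setup):

```shell
# Opt out of Scarf telemetry for a local run
export SCARF_ANALYTICS=false
superset run -p 8088

# Or pass the variable into the official container image
docker run -e SCARF_ANALYTICS=false -p 8088:8088 apache/superset
```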
## Does Superset have an archive panel or trash bin from which a user can recover deleted assets?
No. Currently, there is no way to recover a deleted Superset dashboard/chart/dataset/database from the UI. However, there is an [ongoing discussion](https://github.com/apache/superset/discussions/18386) about implementing such a feature.
Hence, it is recommended to take periodic backups of the metadata database. For recovery, you can launch a recovery instance of a Superset server with the backed-up copy of the DB attached and use the Export Dashboard button in the Superset UI (or the `superset export-dashboards` CLI command). Then, take the .zip file and import it into the current Superset instance.
Alternatively, you can programmatically take regular exports of the assets as a backup.
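A periodic export from the CLI can serve as that lightweight asset backup; exact flags vary by version, so check the built-in help before scripting it:

```shell
# Inspect the available options for your Superset version
superset export-dashboards --help

# Then schedule an export, e.g. via cron, writing a dated archive
# (output flag/path shown here is illustrative)
superset export-dashboards > /backups/dashboards_$(date +%F).zip
```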
## I ran a security scan of the Superset container image and it showed dozens of "high" and "critical" vulnerabilities! Can you release a version of Superset without these?
You are talking about dependency CVEs: identified vulnerabilities in software that Superset uses. Most of these CVEs are in the Linux kernel or Python, both of which have many other people working on their security.
We address these dependency CVEs as best we can by regularly updating our dependencies to newer versions. We use bots to assist with that and cheerfully welcome pull requests from humans that fix dependency CVEs.
The Superset [security team](https://superset.apache.org/docs/security/#reporting-security-vulnerabilities) focuses primarily on vulnerabilities _in Superset itself_. See our [CVEs page](https://superset.apache.org/docs/security/cves) for a list of past Superset CVEs.
---
hide_title: true
sidebar_position: 1
---
import DatabaseLogoWall from '@site/src/components/databases/DatabaseLogoWall';
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# Superset
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/license/apache-2-0)
[![Latest Release on Github](https://img.shields.io/github/v/release/apache/superset?sort=semver)](https://github.com/apache/superset/releases/latest)
[![Build Status](https://github.com/apache/superset/actions/workflows/superset-python-unittest.yml/badge.svg)](https://github.com/apache/superset/actions)
[![PyPI version](https://badge.fury.io/py/apache_superset.svg)](https://badge.fury.io/py/apache_superset)
[![PyPI](https://img.shields.io/pypi/pyversions/apache_superset.svg?maxAge=2592000)](https://pypi.python.org/pypi/apache_superset)
[![GitHub Stars](https://img.shields.io/github/stars/apache/superset?style=social)](https://github.com/apache/superset/stargazers)
[![Contributors](https://img.shields.io/github/contributors/apache/superset)](https://github.com/apache/superset/graphs/contributors)
[![Last Commit](https://img.shields.io/github/last-commit/apache/superset)](https://github.com/apache/superset/commits/master)
[![Open Issues](https://img.shields.io/github/issues/apache/superset)](https://github.com/apache/superset/issues)
[![Open PRs](https://img.shields.io/github/issues-pr/apache/superset)](https://github.com/apache/superset/pulls)
[![Get on Slack](https://img.shields.io/badge/slack-join-orange.svg)](http://bit.ly/join-superset-slack)
[![Documentation](https://img.shields.io/badge/docs-apache.org-blue.svg)](https://superset.apache.org)
<picture width="500">
<source
width="600"
media="(prefers-color-scheme: dark)"
src="https://superset.apache.org/img/superset-logo-horiz-dark.svg"
alt="Superset logo (dark)"
/>
<img
width="600"
src="https://superset.apache.org/img/superset-logo-horiz-apache.svg"
alt="Superset logo (light)"
/>
</picture>
A modern, enterprise-ready business intelligence web application.
[**Why Superset?**](#why-superset) |
[**Supported Databases**](#supported-databases) |
[**Installation and Configuration**](#installation-and-configuration) |
[**Release Notes**](https://github.com/apache/superset/blob/master/RELEASING/README.md#release-notes-for-recent-releases) |
[**Get Involved**](#get-involved) |
[**Contributor Guide**](#contributor-guide) |
[**Resources**](#resources) |
[**Organizations Using Superset**](https://superset.apache.org/inTheWild)
## Why Superset?
Superset is a modern data exploration and data visualization platform. Superset can replace or augment proprietary business intelligence tools for many teams. Superset integrates well with a variety of data sources.
Superset provides:
- A **no-code interface** for building charts quickly
- A powerful, web-based **SQL Editor** for advanced querying
- A **lightweight semantic layer** for quickly defining custom dimensions and metrics
- Out of the box support for **nearly any SQL** database or data engine
- A wide array of **beautiful visualizations** to showcase your data, ranging from simple bar charts to geospatial visualizations
- Lightweight, configurable **caching layer** to help ease database load
- Highly extensible **security roles and authentication** options
- An **API** for programmatic customization
- A **cloud-native architecture** designed from the ground up for scale
## Screenshots & Gifs
**Video Overview**
<!-- File hosted here https://github.com/apache/superset-site/raw/lfs/superset-video-4k.mp4 -->
[superset-video-1080p.webm](https://github.com/user-attachments/assets/b37388f7-a971-409c-96a7-90c4e31322e6)
<br/>
**Large Gallery of Visualizations**
<kbd><img title="Gallery" src="https://superset.apache.org/img/screenshots/gallery.jpg"/></kbd><br/>
**Craft Beautiful, Dynamic Dashboards**
<kbd><img title="View Dashboards" src="https://superset.apache.org/img/screenshots/dashboard.jpg"/></kbd><br/>
**No-Code Chart Builder**
<kbd><img title="Slice & dice your data" src="https://superset.apache.org/img/screenshots/explore.jpg"/></kbd><br/>
**Powerful SQL Editor**
<kbd><img title="SQL Lab" src="https://superset.apache.org/img/screenshots/sql_lab.jpg"/></kbd><br/>
## Supported Databases
Superset can query data from any SQL-speaking datastore or data engine (Presto, Trino, Athena, [and more](/user-docs/databases/)) that has a Python DB-API driver and a SQLAlchemy dialect.
Here are some of the major database solutions that are supported:
<DatabaseLogoWall />
<!--
SUPPORTED_DATABASES block removed — logos are now rendered dynamically
by the DatabaseLogoWall component above, using databases.json as the
single source of truth. The README.md retains its own static copy
(maintained by generate-database-docs.mjs --update-readme).
-->
<!--
<p align="center">
<a href="/user-docs/databases/supported/amazon-athena" title="Amazon Athena"><img src="/img/databases/amazon-athena.jpg" alt="Amazon Athena" width="76" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/amazon-dynamodb" title="Amazon DynamoDB"><img src="/img/databases/aws.png" alt="Amazon DynamoDB" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/amazon-redshift" title="Amazon Redshift"><img src="/img/databases/redshift.png" alt="Amazon Redshift" width="100" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-doris" title="Apache Doris"><img src="/img/databases/doris.png" alt="Apache Doris" width="103" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-drill" title="Apache Drill"><img src="/img/databases/apache-drill.png" alt="Apache Drill" width="81" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-druid" title="Apache Druid"><img src="/img/databases/druid.png" alt="Apache Druid" width="117" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-hive" title="Apache Hive"><img src="/img/databases/apache-hive.svg" alt="Apache Hive" width="44" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-impala" title="Apache Impala"><img src="/img/databases/apache-impala.png" alt="Apache Impala" width="21" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-kylin" title="Apache Kylin"><img src="/img/databases/apache-kylin.png" alt="Apache Kylin" width="44" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-pinot" title="Apache Pinot"><img src="/img/databases/apache-pinot.svg" alt="Apache Pinot" width="76" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-solr" title="Apache Solr"><img src="/img/databases/apache-solr.png" alt="Apache Solr" width="79" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/apache-spark-sql" title="Apache Spark SQL"><img src="/img/databases/apache-spark.png" alt="Apache Spark SQL" width="75" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/ascend" title="Ascend"><img src="/img/databases/ascend.webp" alt="Ascend" width="117" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/aurora-mysql-data-api" title="Aurora MySQL (Data API)"><img src="/img/databases/mysql.png" alt="Aurora MySQL (Data API)" width="77" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/aurora-postgresql-data-api" title="Aurora PostgreSQL (Data API)"><img src="/img/databases/postgresql.svg" alt="Aurora PostgreSQL (Data API)" width="76" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/azure-data-explorer" title="Azure Data Explorer"><img src="/img/databases/kusto.png" alt="Azure Data Explorer" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/azure-synapse" title="Azure Synapse"><img src="/img/databases/azure.svg" alt="Azure Synapse" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/clickhouse" title="ClickHouse"><img src="/img/databases/clickhouse.png" alt="ClickHouse" width="150" height="37" /></a> &nbsp;
<a href="/user-docs/databases/supported/cloudflare-d1" title="Cloudflare D1"><img src="/img/databases/cloudflare.png" alt="Cloudflare D1" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/cockroachdb" title="CockroachDB"><img src="/img/databases/cockroachdb.png" alt="CockroachDB" width="150" height="24" /></a> &nbsp;
<a href="/user-docs/databases/supported/couchbase" title="Couchbase"><img src="/img/databases/couchbase.svg" alt="Couchbase" width="150" height="35" /></a> &nbsp;
<a href="/user-docs/databases/supported/cratedb" title="CrateDB"><img src="/img/databases/cratedb.svg" alt="CrateDB" width="180" height="24" /></a> &nbsp;
<a href="/user-docs/databases/supported/databend" title="Databend"><img src="/img/databases/databend.png" alt="Databend" width="100" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/databricks" title="Databricks"><img src="/img/databases/databricks.png" alt="Databricks" width="152" height="24" /></a> &nbsp;
<a href="/user-docs/databases/supported/denodo" title="Denodo"><img src="/img/databases/denodo.png" alt="Denodo" width="138" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/dremio" title="Dremio"><img src="/img/databases/dremio.png" alt="Dremio" width="126" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/duckdb" title="DuckDB"><img src="/img/databases/duckdb.png" alt="DuckDB" width="52" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/elasticsearch" title="Elasticsearch"><img src="/img/databases/elasticsearch.png" alt="Elasticsearch" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/exasol" title="Exasol"><img src="/img/databases/exasol.png" alt="Exasol" width="72" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/firebird" title="Firebird"><img src="/img/databases/firebird.png" alt="Firebird" width="100" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/firebolt" title="Firebolt"><img src="/img/databases/firebolt.png" alt="Firebolt" width="100" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/google-bigquery" title="Google BigQuery"><img src="/img/databases/google-big-query.svg" alt="Google BigQuery" width="76" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/google-sheets" title="Google Sheets"><img src="/img/databases/google-sheets.svg" alt="Google Sheets" width="76" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/greenplum" title="Greenplum"><img src="/img/databases/greenplum.png" alt="Greenplum" width="124" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/hologres" title="Hologres"><img src="/img/databases/hologres.png" alt="Hologres" width="44" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/ibm-db2" title="IBM Db2"><img src="/img/databases/ibm-db2.svg" alt="IBM Db2" width="91" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/ibm-netezza-performance-server" title="IBM Netezza Performance Server"><img src="/img/databases/netezza.png" alt="IBM Netezza Performance Server" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/mariadb" title="MariaDB"><img src="/img/databases/mariadb.png" alt="MariaDB" width="150" height="37" /></a> &nbsp;
<a href="/user-docs/databases/supported/microsoft-sql-server" title="Microsoft SQL Server"><img src="/img/databases/msql.png" alt="Microsoft SQL Server" width="50" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/monetdb" title="MonetDB"><img src="/img/databases/monet-db.png" alt="MonetDB" width="100" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/mongodb" title="MongoDB"><img src="/img/databases/mongodb.png" alt="MongoDB" width="150" height="38" /></a> &nbsp;
<a href="/user-docs/databases/supported/motherduck" title="MotherDuck"><img src="/img/databases/motherduck.png" alt="MotherDuck" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/oceanbase" title="OceanBase"><img src="/img/databases/oceanbase.svg" alt="OceanBase" width="175" height="24" /></a> &nbsp;
<a href="/user-docs/databases/supported/oracle" title="Oracle"><img src="/img/databases/oraclelogo.png" alt="Oracle" width="111" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/presto" title="Presto"><img src="/img/databases/presto-og.png" alt="Presto" width="127" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/risingwave" title="RisingWave"><img src="/img/databases/risingwave.svg" alt="RisingWave" width="147" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/sap-hana" title="SAP HANA"><img src="/img/databases/sap-hana.png" alt="SAP HANA" width="137" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/sap-sybase" title="SAP Sybase"><img src="/img/databases/sybase.png" alt="SAP Sybase" width="100" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/shillelagh" title="Shillelagh"><img src="/img/databases/shillelagh.png" alt="Shillelagh" width="40" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/singlestore" title="SingleStore"><img src="/img/databases/singlestore.png" alt="SingleStore" width="150" height="31" /></a> &nbsp;
<a href="/user-docs/databases/supported/snowflake" title="Snowflake"><img src="/img/databases/snowflake.svg" alt="Snowflake" width="76" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/sqlite" title="SQLite"><img src="/img/databases/sqlite.png" alt="SQLite" width="84" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/starrocks" title="StarRocks"><img src="/img/databases/starrocks.png" alt="StarRocks" width="149" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/superset-meta-database" title="Superset meta database"><img src="/img/databases/superset.svg" alt="Superset meta database" width="150" height="39" /></a> &nbsp;
<a href="/user-docs/databases/supported/tdengine" title="TDengine"><img src="/img/databases/tdengine.png" alt="TDengine" width="140" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/teradata" title="Teradata"><img src="/img/databases/teradata.png" alt="Teradata" width="124" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/timescaledb" title="TimescaleDB"><img src="/img/databases/timescale.png" alt="TimescaleDB" width="150" height="36" /></a> &nbsp;
<a href="/user-docs/databases/supported/trino" title="Trino"><img src="/img/databases/trino.png" alt="Trino" width="89" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/vertica" title="Vertica"><img src="/img/databases/vertica.png" alt="Vertica" width="128" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/ydb" title="YDB"><img src="/img/databases/ydb.svg" alt="YDB" width="110" height="40" /></a> &nbsp;
<a href="/user-docs/databases/supported/yugabytedb" title="YugabyteDB"><img src="/img/databases/yugabyte.png" alt="YugabyteDB" width="150" height="26" /></a>
</p>
SUPPORTED_DATABASES_END -->
**A more comprehensive list of supported databases** along with the configuration instructions can be found [here](/user-docs/databases/).
Want to add support for your datastore or data engine? Read more [here](/user-docs/faq#does-superset-work-with-insert-database-engine-here) about the technical requirements.
## Installation and Configuration
Try out Superset's [quickstart](/user-docs/quickstart) guide or learn about [the options for production deployments](/admin-docs/installation/installation-methods).
## Get Involved
- Ask and answer questions on [StackOverflow](https://stackoverflow.com/questions/tagged/apache-superset) using the **apache-superset** tag
- [Join our community's Slack](http://bit.ly/join-superset-slack)
and please read our [Slack Community Guidelines](https://github.com/apache/superset/blob/master/CODE_OF_CONDUCT.md#slack-community-guidelines)
- [Join our dev@superset.apache.org Mailing list](https://lists.apache.org/list.html?dev@superset.apache.org). To join, simply send an email to [dev-subscribe@superset.apache.org](mailto:dev-subscribe@superset.apache.org)
- If you want to help troubleshoot GitHub Issues involving the numerous database drivers that Superset supports, please consider adding your name and the databases you have access to on the [Superset Database Familiarity Rolodex](https://docs.google.com/spreadsheets/d/1U1qxiLvOX0kBTUGME1AHHi6Ywel6ECF8xk_Qy-V9R8c/edit#gid=0)
- Join Superset's Town Hall and [Operational Model](https://preset.io/blog/the-superset-operational-model-wants-you/) recurring meetings. Meeting info is available on the [Superset Community Calendar](https://superset.apache.org/community)
## Contributor Guide
Interested in contributing? Check out our
[Developer Docs](https://superset.apache.org/developer-docs/)
to find resources around contributing along with a detailed guide on
how to set up a development environment.
## Resources
- [Superset "In the Wild"](https://superset.apache.org/inTheWild) - see who's using Superset, and [add your organization](https://github.com/apache/superset/edit/master/RESOURCES/INTHEWILD.yaml) to the list!
- [Feature Flags](/admin-docs/configuration/feature-flags) - the status of Superset's Feature Flags.
- [Standard Roles](https://github.com/apache/superset/blob/master/RESOURCES/STANDARD_ROLES.md) - How RBAC permissions map to roles.
- [Superset Wiki](https://github.com/apache/superset/wiki) - Tons of additional community resources: best practices, community content and other information.
- [Superset SIPs](https://github.com/orgs/apache/projects/170) - The status of Superset's SIPs (Superset Improvement Proposals) for both consensus and implementation status.
- Understanding the Superset Points of View
- [The Case for Dataset-Centric Visualization](https://preset.io/blog/dataset-centric-visualization/)
- [Understanding the Superset Semantic Layer](https://preset.io/blog/understanding-superset-semantic-layer/)
- Getting Started with Superset
- [Superset in 2 Minutes using Docker Compose](/admin-docs/installation/docker-compose#installing-superset-locally-using-docker-compose)
- [Installing Database Drivers](/admin-docs/configuration/configuring-superset#installing-database-drivers)
- [Building New Database Connectors](https://preset.io/blog/building-database-connector/)
- [Create Your First Dashboard](/user-docs/using-superset/creating-your-first-dashboard)
- [Comprehensive Tutorial for Contributing Code to Apache Superset
](https://preset.io/blog/tutorial-contributing-code-to-apache-superset/)
- [Resources to master Superset by Preset](https://preset.io/resources/)
- Deploying Superset
- [Official Docker image](https://hub.docker.com/r/apache/superset)
- [Helm Chart](https://github.com/apache/superset/tree/master/helm/superset)
- Recordings of Past [Superset Community Events](https://preset.io/events)
- [Mixed Time Series Charts](https://preset.io/events/mixed-time-series-visualization-in-superset-workshop/)
- [How the Bing Team Customized Superset for the Internal Self-Serve Data & Analytics Platform](https://preset.io/events/how-the-bing-team-heavily-customized-superset-for-their-internal-data/)
- [Live Demo: Visualizing MongoDB and Pinot Data using Trino](https://preset.io/events/2021-04-13-visualizing-mongodb-and-pinot-data-using-trino/)
- [Introduction to the Superset API](https://preset.io/events/introduction-to-the-superset-api/)
- [Building a Database Connector for Superset](https://preset.io/events/2021-02-16-building-a-database-connector-for-superset/)
- Visualizations
- [Creating Viz Plugins](https://superset.apache.org/developer-docs/contributing/howtos#creating-visualization-plugins)
- [Managing and Deploying Custom Viz Plugins](https://medium.com/nmc-techblog/apache-superset-manage-custom-viz-plugins-in-production-9fde1a708e55)
- [Why Apache Superset is Betting on Apache ECharts](https://preset.io/blog/2021-4-1-why-echarts/)
- [Superset API](/developer-docs/api)
## Repo Activity
<a href="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats?repo_id=39464018" target="_blank" align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=dark" width="655" height="auto" />
<img alt="Performance Stats of apache/superset - Last 28 days" src="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=light" width="655" height="auto" />
</picture>
</a>
<!-- Made with [OSS Insight](https://ossinsight.io/) -->
<!-- telemetry/analytics pixel: -->
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=bc1c90cd-bc04-4e11-8c7b-289fb2839492" />
---
hide_title: true
sidebar_position: 1
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# Superset
[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/license/apache-2-0)
[![Latest Release on Github](https://img.shields.io/github/v/release/apache/superset?sort=semver)](https://github.com/apache/superset/releases/latest)
[![Build Status](https://github.com/apache/superset/actions/workflows/superset-python-unittest.yml/badge.svg)](https://github.com/apache/superset/actions)
[![PyPI version](https://badge.fury.io/py/apache_superset.svg)](https://badge.fury.io/py/apache_superset)
[![PyPI](https://img.shields.io/pypi/pyversions/apache_superset.svg?maxAge=2592000)](https://pypi.python.org/pypi/apache_superset)
[![GitHub Stars](https://img.shields.io/github/stars/apache/superset?style=social)](https://github.com/apache/superset/stargazers)
[![Contributors](https://img.shields.io/github/contributors/apache/superset)](https://github.com/apache/superset/graphs/contributors)
[![Last Commit](https://img.shields.io/github/last-commit/apache/superset)](https://github.com/apache/superset/commits/master)
[![Open Issues](https://img.shields.io/github/issues/apache/superset)](https://github.com/apache/superset/issues)
[![Open PRs](https://img.shields.io/github/issues-pr/apache/superset)](https://github.com/apache/superset/pulls)
[![Get on Slack](https://img.shields.io/badge/slack-join-orange.svg)](http://bit.ly/join-superset-slack)
[![Documentation](https://img.shields.io/badge/docs-apache.org-blue.svg)](https://superset.apache.org)
<picture width="500">
<source
width="600"
media="(prefers-color-scheme: dark)"
src="https://superset.apache.org/img/superset-logo-horiz-dark.svg"
alt="Superset logo (dark)"
/>
<img
width="600"
src="https://superset.apache.org/img/superset-logo-horiz-apache.svg"
alt="Superset logo (light)"
/>
</picture>
A modern, enterprise-ready business intelligence web application.
### Documentation
- **[User Guide](https://superset.apache.org/user-docs/)** — For analysts and business users. Explore data, build charts, create dashboards, and connect databases.
- **[Administrator Guide](https://superset.apache.org/admin-docs/)** — Install, configure, and operate Superset. Covers security, scaling, and database drivers.
- **[Developer Guide](https://superset.apache.org/developer-docs/)** — Contribute to Superset or build on its REST API and extension framework.
[**Why Superset?**](#why-superset) |
[**Supported Databases**](#supported-databases) |
[**Release Notes**](https://github.com/apache/superset/blob/master/RELEASING/README.md#release-notes-for-recent-releases) |
[**Get Involved**](#get-involved) |
[**Resources**](#resources) |
[**Organizations Using Superset**](https://superset.apache.org/inTheWild)
## Why Superset?
Superset is a modern data exploration and data visualization platform. Superset can replace or augment proprietary business intelligence tools for many teams. Superset integrates well with a variety of data sources.
Superset provides:
- A **no-code interface** for building charts quickly
- A powerful, web-based **SQL Editor** for advanced querying
- A **lightweight semantic layer** for quickly defining custom dimensions and metrics
- Out of the box support for **nearly any SQL** database or data engine
- A wide array of **beautiful visualizations** to showcase your data, ranging from simple bar charts to geospatial visualizations
- Lightweight, configurable **caching layer** to help ease database load
- Highly extensible **security roles and authentication** options
- An **API** for programmatic customization
- A **cloud-native architecture** designed from the ground up for scale
## Screenshots & Gifs
**Video Overview**
<!-- File hosted here https://github.com/apache/superset-site/raw/lfs/superset-video-4k.mp4 -->
[superset-video-1080p.webm](https://github.com/user-attachments/assets/b37388f7-a971-409c-96a7-90c4e31322e6)
<br/>
**Large Gallery of Visualizations**
<kbd><img title="Gallery" src="https://superset.apache.org/img/screenshots/gallery.jpg"/></kbd><br/>
**Craft Beautiful, Dynamic Dashboards**
<kbd><img title="View Dashboards" src="https://superset.apache.org/img/screenshots/dashboard.jpg"/></kbd><br/>
**No-Code Chart Builder**
<kbd><img title="Slice & dice your data" src="https://superset.apache.org/img/screenshots/explore.jpg"/></kbd><br/>
**Powerful SQL Editor**
<kbd><img title="SQL Lab" src="https://superset.apache.org/img/screenshots/sql_lab.jpg"/></kbd><br/>
## Supported Databases
Superset can query data from any SQL-speaking datastore or data engine (Presto, Trino, Athena, [and more](https://superset.apache.org/docs/databases)) that has a Python DB-API driver and a SQLAlchemy dialect.
Here are some of the major database solutions that are supported:
<!-- SUPPORTED_DATABASES_START -->
<p align="center">
<a href="https://superset.apache.org/docs/databases/supported/amazon-athena" title="Amazon Athena"><img src="docs/static/img/databases/amazon-athena.jpg" alt="Amazon Athena" width="76" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/amazon-dynamodb" title="Amazon DynamoDB"><img src="docs/static/img/databases/aws.png" alt="Amazon DynamoDB" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/amazon-redshift" title="Amazon Redshift"><img src="docs/static/img/databases/redshift.png" alt="Amazon Redshift" width="100" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-doris" title="Apache Doris"><img src="docs/static/img/databases/doris.png" alt="Apache Doris" width="103" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-drill" title="Apache Drill"><img src="docs/static/img/databases/apache-drill.png" alt="Apache Drill" width="81" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-druid" title="Apache Druid"><img src="docs/static/img/databases/druid.png" alt="Apache Druid" width="117" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-hive" title="Apache Hive"><img src="docs/static/img/databases/apache-hive.svg" alt="Apache Hive" width="44" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-impala" title="Apache Impala"><img src="docs/static/img/databases/apache-impala.png" alt="Apache Impala" width="21" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-kylin" title="Apache Kylin"><img src="docs/static/img/databases/apache-kylin.png" alt="Apache Kylin" width="44" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-pinot" title="Apache Pinot"><img src="docs/static/img/databases/apache-pinot.svg" alt="Apache Pinot" width="76" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-solr" title="Apache Solr"><img src="docs/static/img/databases/apache-solr.png" alt="Apache Solr" width="79" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/apache-spark-sql" title="Apache Spark SQL"><img src="docs/static/img/databases/apache-spark.png" alt="Apache Spark SQL" width="75" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/ascend" title="Ascend"><img src="docs/static/img/databases/ascend.webp" alt="Ascend" width="117" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/aurora-mysql-data-api" title="Aurora MySQL (Data API)"><img src="docs/static/img/databases/mysql.png" alt="Aurora MySQL (Data API)" width="77" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/aurora-postgresql-data-api" title="Aurora PostgreSQL (Data API)"><img src="docs/static/img/databases/postgresql.svg" alt="Aurora PostgreSQL (Data API)" width="76" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/azure-data-explorer" title="Azure Data Explorer"><img src="docs/static/img/databases/kusto.png" alt="Azure Data Explorer" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/azure-synapse" title="Azure Synapse"><img src="docs/static/img/databases/azure.svg" alt="Azure Synapse" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/clickhouse" title="ClickHouse"><img src="docs/static/img/databases/clickhouse.png" alt="ClickHouse" width="150" height="37" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/cloudflare-d1" title="Cloudflare D1"><img src="docs/static/img/databases/cloudflare.png" alt="Cloudflare D1" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/cockroachdb" title="CockroachDB"><img src="docs/static/img/databases/cockroachdb.png" alt="CockroachDB" width="150" height="24" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/couchbase" title="Couchbase"><img src="docs/static/img/databases/couchbase.svg" alt="Couchbase" width="150" height="35" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/cratedb" title="CrateDB"><img src="docs/static/img/databases/cratedb.svg" alt="CrateDB" width="180" height="24" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/databend" title="Databend"><img src="docs/static/img/databases/databend.png" alt="Databend" width="100" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/databricks" title="Databricks"><img src="docs/static/img/databases/databricks.png" alt="Databricks" width="152" height="24" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/denodo" title="Denodo"><img src="docs/static/img/databases/denodo.png" alt="Denodo" width="138" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/dremio" title="Dremio"><img src="docs/static/img/databases/dremio.png" alt="Dremio" width="126" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/duckdb" title="DuckDB"><img src="docs/static/img/databases/duckdb.png" alt="DuckDB" width="52" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/elasticsearch" title="Elasticsearch"><img src="docs/static/img/databases/elasticsearch.png" alt="Elasticsearch" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/exasol" title="Exasol"><img src="docs/static/img/databases/exasol.png" alt="Exasol" width="72" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/firebird" title="Firebird"><img src="docs/static/img/databases/firebird.png" alt="Firebird" width="100" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/firebolt" title="Firebolt"><img src="docs/static/img/databases/firebolt.png" alt="Firebolt" width="100" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/google-bigquery" title="Google BigQuery"><img src="docs/static/img/databases/google-big-query.svg" alt="Google BigQuery" width="76" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/google-sheets" title="Google Sheets"><img src="docs/static/img/databases/google-sheets.svg" alt="Google Sheets" width="76" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/greenplum" title="Greenplum"><img src="docs/static/img/databases/greenplum.png" alt="Greenplum" width="124" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/hologres" title="Hologres"><img src="docs/static/img/databases/hologres.png" alt="Hologres" width="44" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/ibm-db2" title="IBM Db2"><img src="docs/static/img/databases/ibm-db2.svg" alt="IBM Db2" width="91" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/ibm-netezza-performance-server" title="IBM Netezza Performance Server"><img src="docs/static/img/databases/netezza.png" alt="IBM Netezza Performance Server" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/mariadb" title="MariaDB"><img src="docs/static/img/databases/mariadb.png" alt="MariaDB" width="150" height="37" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/microsoft-sql-server" title="Microsoft SQL Server"><img src="docs/static/img/databases/msql.png" alt="Microsoft SQL Server" width="50" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/monetdb" title="MonetDB"><img src="docs/static/img/databases/monet-db.png" alt="MonetDB" width="100" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/mongodb" title="MongoDB"><img src="docs/static/img/databases/mongodb.png" alt="MongoDB" width="150" height="38" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/motherduck" title="MotherDuck"><img src="docs/static/img/databases/motherduck.png" alt="MotherDuck" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/oceanbase" title="OceanBase"><img src="docs/static/img/databases/oceanbase.svg" alt="OceanBase" width="175" height="24" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/oracle" title="Oracle"><img src="docs/static/img/databases/oraclelogo.png" alt="Oracle" width="111" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/presto" title="Presto"><img src="docs/static/img/databases/presto-og.png" alt="Presto" width="127" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/risingwave" title="RisingWave"><img src="docs/static/img/databases/risingwave.svg" alt="RisingWave" width="147" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/sap-hana" title="SAP HANA"><img src="docs/static/img/databases/sap-hana.png" alt="SAP HANA" width="137" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/sap-sybase" title="SAP Sybase"><img src="docs/static/img/databases/sybase.png" alt="SAP Sybase" width="100" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/shillelagh" title="Shillelagh"><img src="docs/static/img/databases/shillelagh.png" alt="Shillelagh" width="40" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/singlestore" title="SingleStore"><img src="docs/static/img/databases/singlestore.png" alt="SingleStore" width="150" height="31" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/snowflake" title="Snowflake"><img src="docs/static/img/databases/snowflake.svg" alt="Snowflake" width="76" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/sqlite" title="SQLite"><img src="docs/static/img/databases/sqlite.png" alt="SQLite" width="84" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/starrocks" title="StarRocks"><img src="docs/static/img/databases/starrocks.png" alt="StarRocks" width="149" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/superset-meta-database" title="Superset meta database"><img src="docs/static/img/databases/superset.svg" alt="Superset meta database" width="150" height="39" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/tdengine" title="TDengine"><img src="docs/static/img/databases/tdengine.png" alt="TDengine" width="140" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/teradata" title="Teradata"><img src="docs/static/img/databases/teradata.png" alt="Teradata" width="124" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/timescaledb" title="TimescaleDB"><img src="docs/static/img/databases/timescale.png" alt="TimescaleDB" width="150" height="36" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/trino" title="Trino"><img src="docs/static/img/databases/trino.png" alt="Trino" width="89" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/vertica" title="Vertica"><img src="docs/static/img/databases/vertica.png" alt="Vertica" width="128" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/ydb" title="YDB"><img src="docs/static/img/databases/ydb.svg" alt="YDB" width="110" height="40" /></a> &nbsp;
<a href="https://superset.apache.org/docs/databases/supported/yugabytedb" title="YugabyteDB"><img src="docs/static/img/databases/yugabyte.png" alt="YugabyteDB" width="150" height="26" /></a>
</p>
<!-- SUPPORTED_DATABASES_END -->
**A more comprehensive list of supported databases** along with the configuration instructions can be found [here](https://superset.apache.org/docs/databases).
Want to add support for your datastore or data engine? Read more [here](https://superset.apache.org/docs/frequently-asked-questions#does-superset-work-with-insert-database-engine-here) about the technical requirements.
## Installation and Configuration
Try out Superset's [quickstart](https://superset.apache.org/docs/quickstart/) guide or learn about [the options for production deployments](https://superset.apache.org/docs/installation/architecture/).
## Get Involved
- Ask and answer questions on [StackOverflow](https://stackoverflow.com/questions/tagged/apache-superset) using the **apache-superset** tag
- [Join our community's Slack](http://bit.ly/join-superset-slack)
and please read our [Slack Community Guidelines](https://github.com/apache/superset/blob/master/CODE_OF_CONDUCT.md#slack-community-guidelines)
- [Join our dev@superset.apache.org Mailing list](https://lists.apache.org/list.html?dev@superset.apache.org). To join, simply send an email to [dev-subscribe@superset.apache.org](mailto:dev-subscribe@superset.apache.org)
- If you want to help troubleshoot GitHub Issues involving the numerous database drivers that Superset supports, please consider adding your name and the databases you have access to on the [Superset Database Familiarity Rolodex](https://docs.google.com/spreadsheets/d/1U1qxiLvOX0kBTUGME1AHHi6Ywel6ECF8xk_Qy-V9R8c/edit#gid=0)
- Join Superset's Town Hall and [Operational Model](https://preset.io/blog/the-superset-operational-model-wants-you/) recurring meetings. Meeting info is available on the [Superset Community Calendar](https://superset.apache.org/community)
## Contributor Guide
Interested in contributing? Check out our
[Developer Guide](https://superset.apache.org/developer-docs/)
to find resources around contributing along with a detailed guide on
how to set up a development environment.
## Resources
- [Superset "In the Wild"](https://superset.apache.org/inTheWild) - see who's using Superset, and [add your organization](https://github.com/apache/superset/edit/master/RESOURCES/INTHEWILD.yaml) to the list!
- [Feature Flags](https://superset.apache.org/docs/configuration/feature-flags) - the status of Superset's Feature Flags.
- [Standard Roles](https://github.com/apache/superset/blob/master/RESOURCES/STANDARD_ROLES.md) - How RBAC permissions map to roles.
- [Superset Wiki](https://github.com/apache/superset/wiki) - Tons of additional community resources: best practices, community content and other information.
- [Superset SIPs](https://github.com/orgs/apache/projects/170) - The status of Superset's SIPs (Superset Improvement Proposals) for both consensus and implementation status.
- Understanding the Superset Points of View
  - [The Case for Dataset-Centric Visualization](https://preset.io/blog/dataset-centric-visualization/)
  - [Understanding the Superset Semantic Layer](https://preset.io/blog/understanding-superset-semantic-layer/)
- Getting Started with Superset
  - [Superset in 2 Minutes using Docker Compose](https://superset.apache.org/docs/installation/docker-compose#installing-superset-locally-using-docker-compose)
  - [Installing Database Drivers](https://superset.apache.org/docs/configuration/databases#installing-database-drivers)
  - [Building New Database Connectors](https://preset.io/blog/building-database-connector/)
  - [Create Your First Dashboard](https://superset.apache.org/docs/using-superset/creating-your-first-dashboard/)
  - [Comprehensive Tutorial for Contributing Code to Apache Superset](https://preset.io/blog/tutorial-contributing-code-to-apache-superset/)
  - [Resources to master Superset by Preset](https://preset.io/resources/)
- Deploying Superset
  - [Official Docker image](https://hub.docker.com/r/apache/superset)
  - [Helm Chart](https://github.com/apache/superset/tree/master/helm/superset)
- Recordings of Past [Superset Community Events](https://preset.io/events)
  - [Mixed Time Series Charts](https://preset.io/events/mixed-time-series-visualization-in-superset-workshop/)
  - [How the Bing Team Customized Superset for the Internal Self-Serve Data & Analytics Platform](https://preset.io/events/how-the-bing-team-heavily-customized-superset-for-their-internal-data/)
  - [Live Demo: Visualizing MongoDB and Pinot Data using Trino](https://preset.io/events/2021-04-13-visualizing-mongodb-and-pinot-data-using-trino/)
  - [Introduction to the Superset API](https://preset.io/events/introduction-to-the-superset-api/)
  - [Building a Database Connector for Superset](https://preset.io/events/2021-02-16-building-a-database-connector-for-superset/)
- Visualizations
  - [Creating Viz Plugins](https://superset.apache.org/docs/contributing/creating-viz-plugins/)
  - [Managing and Deploying Custom Viz Plugins](https://medium.com/nmc-techblog/apache-superset-manage-custom-viz-plugins-in-production-9fde1a708e55)
  - [Why Apache Superset is Betting on Apache ECharts](https://preset.io/blog/2021-4-1-why-echarts/)
- [Superset API](https://superset.apache.org/docs/rest-api)
## Repo Activity
<a href="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats?repo_id=39464018" target="_blank" align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=dark" width="655" height="auto" />
<img alt="Performance Stats of apache/superset - Last 28 days" src="https://next.ossinsight.io/widgets/official/compose-last-28-days-stats/thumbnail.png?repo_id=39464018&image_size=auto&color_scheme=light" width="655" height="auto" />
</picture>
</a>
<!-- Made with [OSS Insight](https://ossinsight.io/) -->
<!-- telemetry/analytics pixel: -->
<img referrerpolicy="no-referrer-when-downgrade" src="https://static.scarf.sh/a.png?x-pxid=bc1c90cd-bc04-4e11-8c7b-289fb2839492" />

---
title: Quickstart
hide_title: false
sidebar_position: 2
---
**Ready to try Apache Superset?** This quickstart guide will help you
get up and running on your local machine in **3 simple steps**. Note that
it assumes that you have [Docker](https://www.docker.com),
[Docker Compose](https://docs.docker.com/compose/), and
[Git](https://git-scm.com/) installed.
:::caution
Although we recommend using `Docker Compose` for a quick start in a sandbox-type
environment and for other development-type use cases, **we
do not recommend this setup for production**. For this purpose please
refer to our
[Installing on Kubernetes](/admin-docs/installation/kubernetes)
page.
:::
### 1. Get Superset
```bash
git clone https://github.com/apache/superset
```
### 2. Start the latest official release of Superset
```bash
# Enter the repository you just cloned
cd superset
# Set the repo to the state associated with the latest official version
git checkout tags/6.0.0
# Fire up Superset using Docker Compose
docker compose -f docker-compose-image-tag.yml up
```
This may take a moment as Docker Compose will fetch the underlying
container images and will load up some examples. Once all containers
are downloaded and the output settles, you're ready to log in.
⚠️ If you get an error message like `validating superset\docker-compose-image-tag.yml: services.superset-worker-beat.env_file.0 must be a string`, you need to update your version of `docker-compose`.
Note that `docker-compose` is on the path to deprecation and you should now use `docker compose` instead.
### 3. Log into Superset
Now head over to [http://localhost:8088](http://localhost:8088) and log in with the default created account:
```bash
username: admin
password: admin
```
#### 🎉 Congratulations! Superset is now up and running on your machine! 🎉
### Wrapping Up
Once you're done with Superset, you can stop and delete just like any other container environment:
```bash
docker compose down
```
:::tip
You can use the same environment more than once, as Superset will persist data locally. However, make sure to stop all
processes properly by running the `docker compose stop` command. By doing so, you can avoid data corruption or loss.
:::
## What's next?
From this point on, you can head on to:
- [Create your first Dashboard](/user-docs/using-superset/creating-your-first-dashboard)
- [Connect to a Database](/user-docs/databases/)
- [Using Docker Compose](/admin-docs/installation/docker-compose)
- [Configure Superset](/admin-docs/configuration/configuring-superset)
- [Installing on Kubernetes](/admin-docs/installation/kubernetes)
Or just explore our [Documentation](https://superset.apache.org/docs/intro)!
:::resources
- [Video: Superset in 2 Minutes](https://www.youtube.com/watch?v=AqousXQ7YHw)
- [Video: Superset 101](https://www.youtube.com/watch?v=mAIH3hUoxEE)
- [Tutorial: Creating Your First Dashboard](/user-docs/using-superset/creating-your-first-dashboard)
:::

---
title: Granular Export Controls
sidebar_position: 4
---
# Granular Export Controls
Superset provides granular, permission-based controls for data export, image export, and clipboard operations. These replace the legacy `can_csv` permission with three fine-grained permissions that can be assigned independently to roles.
## Feature Flag
Granular export controls are gated behind the `GRANULAR_EXPORT_CONTROLS` feature flag. When the flag is disabled, the legacy `can_csv` permission behavior is preserved.
```python
FEATURE_FLAGS = {
"GRANULAR_EXPORT_CONTROLS": True,
}
```
## Permissions
| Permission | Resource | Controls |
| -------------------- | ---------- | ---------------------------------------------------------------------- |
| `can_export_data` | `Superset` | CSV, Excel, and JSON data exports from charts, dashboards, and SQL Lab |
| `can_export_image` | `Superset` | Screenshot (JPEG/PNG) and PDF exports from charts and dashboards |
| `can_copy_clipboard` | `Superset` | Copy-to-clipboard operations in SQL Lab and the Explore data pane |
## Default Role Assignments
The migration grants all three new permissions (`can_export_data`, `can_export_image`, `can_copy_clipboard`) to every role that currently has `can_csv`. This preserves existing behavior — no role loses access during the upgrade.
After the migration, admins can selectively revoke individual export permissions from any role to restrict access. For example, to prevent Gamma users from exporting data or images while still allowing clipboard operations, revoke `can_export_data` and `can_export_image` from the Gamma role.
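The decision logic can be modeled as a small, self-contained sketch. Note that `is_export_allowed` and its arguments are illustrative, not actual Superset internals; the real check lives in Superset's security manager:

```python
# Illustrative model of granular export checks (not actual Superset code).
# When the feature flag is off, the legacy can_csv permission governs exports.

LEGACY_PERMISSION = "can_csv"

def is_export_allowed(role_permissions, required, feature_flag_on):
    """Return True if a role holding `role_permissions` may perform an
    operation that requires the granular permission `required`."""
    if not feature_flag_on:
        # Legacy behavior: a single permission gates every export path.
        return LEGACY_PERMISSION in role_permissions
    return required in role_permissions

# A Gamma-like role that kept clipboard access but lost data/image export:
gamma = {"can_copy_clipboard"}
print(is_export_allowed(gamma, "can_export_data", feature_flag_on=True))    # False
print(is_export_allowed(gamma, "can_copy_clipboard", feature_flag_on=True)) # True
```

With the flag off, the same role would need `can_csv` for every export path, which is exactly what the migration preserves by copying `can_csv` grants onto the three new permissions.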
## Configuration Steps
1. **Enable the feature flag** in `superset_config.py`:
```python
FEATURE_FLAGS = {
"GRANULAR_EXPORT_CONTROLS": True,
}
```
2. **Run the database migration** to register the new permissions:
```bash
superset db upgrade
```
3. **Initialize permissions** so roles are populated:
```bash
superset init
```
4. **Verify role assignments** in **Settings > List Roles**. Confirm that each role has the expected permissions from the table above.
5. **Customize as needed**: Grant or revoke individual export permissions on any role through the role editor.
## User Experience
When a user lacks a required export permission:
- **Menu items** (CSV, Excel, JSON, screenshot) appear **disabled** with an info tooltip icon explaining the restriction
- **Buttons** (SQL Lab download, clipboard copy) appear **disabled** with a tooltip on hover
- **API endpoints** return **403 Forbidden** when the corresponding permission is missing
## API Enforcement
The following API endpoints enforce granular export permissions when the feature flag is enabled:
| Endpoint | Required Permission |
| --------------------------------------------------------- | ------------------- |
| `GET /api/v1/chart/{id}/data/` (CSV/Excel format) | `can_export_data` |
| `GET /api/v1/chart/{id}/cache_screenshot/` | `can_export_image` |
| `POST /api/v1/dashboard/{id}/cache_dashboard_screenshot/` | `can_export_image` |
| `GET /api/v1/sqllab/export/{client_id}/` | `can_export_data` |
| `POST /api/v1/sqllab/export_streaming/` | `can_export_data` |
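The table above can be read as a lookup from endpoint to required permission, with a `403` returned when the caller's role lacks it. A minimal sketch (the lookup table and `check_access` helper are hypothetical, mirroring the documented mapping rather than Superset's actual routing code):

```python
# Hypothetical lookup mirroring the documented endpoint/permission mapping.
EXPORT_PERMISSIONS = {
    ("GET", "/api/v1/chart/{id}/data/"): "can_export_data",
    ("GET", "/api/v1/chart/{id}/cache_screenshot/"): "can_export_image",
    ("POST", "/api/v1/dashboard/{id}/cache_dashboard_screenshot/"): "can_export_image",
    ("GET", "/api/v1/sqllab/export/{client_id}/"): "can_export_data",
    ("POST", "/api/v1/sqllab/export_streaming/"): "can_export_data",
}

def check_access(method, route, role_permissions):
    """Return the HTTP status an export endpoint would produce for a role."""
    required = EXPORT_PERMISSIONS.get((method, route))
    if required is not None and required not in role_permissions:
        return 403  # Forbidden: role lacks the granular export permission
    return 200

print(check_access("GET", "/api/v1/sqllab/export/{client_id}/", set()))  # 403
```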

---
title: Creating Your First Dashboard
hide_title: true
sidebar_position: 1
version: 1
---
import useBaseUrl from "@docusaurus/useBaseUrl";
## Creating Your First Dashboard
This section is focused on documentation for end-users who will be using Superset
for the data analysis and exploration workflow
(data analysts, business analysts, data
scientists, etc).
:::tip
In addition to this site, [Preset.io](http://preset.io/) maintains an updated set of end-user
documentation at [docs.preset.io](https://docs.preset.io/).
:::
This tutorial targets someone who wants to create charts and dashboards in Superset. We'll show you
how to connect Superset to a new database and configure a table in that database for analysis.
You'll also explore the data you've exposed and add a visualization to a dashboard so that you get a
feel for the end-to-end user experience.
### Connecting to a new database
Superset itself doesn't have a storage layer to store your data but instead pairs with
your existing SQL-speaking database or data store.
First things first, we need to add the connection credentials to your database to be able
to query and visualize data from it. If you're using Superset locally via
[Docker Compose](/admin-docs/installation/docker-compose), you can
skip this step because a Postgres database, named **examples**, is included and
pre-configured in Superset for you.
Under the **+** menu in the top right, select Data, and then the _Connect Database_ option:
<img src={useBaseUrl("/img/tutorial/tutorial_01_add_database_connection.png")} width="600" />{" "} <br/><br/>
Then select your database type in the resulting modal:
<img src={useBaseUrl("/img/tutorial/tutorial_02_select_database.png" )} width="600" />{" "} <br/><br/>
Once you've selected a database, you can configure a number of advanced options in this window,
or for the purposes of this walkthrough, you can click the link below all these fields:
<img src={useBaseUrl("/img/tutorial/tutorial_03a_database_connection_string_link.png" )} width="600" />{" "} <br/><br/>
Please note, if you are trying to connect to another locally running database (whether on host or another container), and you get the message `The port is closed.`, then you need to adjust the HOST to `host.docker.internal`
Once you've clicked that link you only need to specify two things (the database name and SQLAlchemy URI):
<img src={useBaseUrl("/img/tutorial/tutorial_03b_connection_string_details.png" )} width="600" />{" "} <br/><br/>
As noted in the text below the form, you should refer to the SQLAlchemy documentation on
[creating new connection URIs](https://docs.sqlalchemy.org/en/12/core/engines.html#database-urls)
for your target database.
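A SQLAlchemy URI generally follows the pattern `dialect+driver://user:password@host:port/database`. The small helper below (hypothetical, for illustration only) assembles one while percent-encoding the password, which matters whenever it contains characters like `@` or spaces:

```python
from urllib.parse import quote_plus

def build_sqlalchemy_uri(dialect, user, password, host, port, database):
    """Assemble a SQLAlchemy URI, percent-encoding the password so that
    special characters don't break the user:password@host parsing."""
    return f"{dialect}://{user}:{quote_plus(password)}@{host}:{port}/{database}"

# e.g. a local Postgres database named "examples":
uri = build_sqlalchemy_uri("postgresql", "superset", "p@ss word", "localhost", 5432, "examples")
print(uri)  # postgresql://superset:p%40ss+word@localhost:5432/examples
```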
Click the **Test Connection** button to confirm things work end to end. If the connection looks good, save the configuration
by clicking the **Connect** button in the bottom right corner of the modal window:
Congratulations, you've just added a new data source in Superset!
### Sharing a Database Connection
When adding a new database, you can share the connection with other Superset users. Shared connections appear in other users' database lists, making it easier to collaborate on the same data without requiring each user to configure the same connection separately.
To share a connection, enable the **Share connection with other users** option in the **Advanced** tab of the database connection modal before saving. You can change sharing settings later by editing the database connection.
### Registering a new table
Now that you've configured a data source, you can select specific tables (called **Datasets** in Superset)
that you want exposed in Superset for querying.
Navigate to **Data ‣ Datasets** and select the **+ Dataset** button in the top right corner.
<img src={useBaseUrl("/img/tutorial/tutorial_08_sources_tables.png" )} />
A modal window should pop up in front of you. Select your **Database**,
**Schema**, and **Table** using the drop downs that appear. In the following example,
we register the **cleaned_sales_data** table from the **examples** database.
<img src={useBaseUrl("/img/tutorial/tutorial_09_add_new_table.png" )} />
To finish, click the **Add** button in the bottom right corner. You should now see your dataset in the list of datasets.
### Organizing Datasets into Folders
The Datasets list view supports **folders** for organizing datasets into groups. To create and manage folders:
1. In the **Datasets** list, click the **Folders** panel on the left sidebar.
2. Click **+ New Folder** to create a top-level folder, or drag an existing folder to nest it.
3. Drag dataset rows onto a folder to move them in, or right-click a dataset and select **Move to folder**.
Folders are per-user organizational aids — they do not affect dataset access permissions or how other users see the datasets.
### Uploading Files via the OS File Manager (PWA)
When Superset is installed as a **Progressive Web App (PWA)** from your browser, your operating system will offer Superset as an option when opening CSV, Excel (`.xls`/`.xlsx`), and Parquet files. Double-clicking or right-clicking a supported file and selecting "Open with Superset" navigates directly to the upload workflow for that file.
To install Superset as a PWA, look for the install icon in your browser's address bar (Chrome, Edge) when visiting your Superset instance over HTTPS. PWA installation requires HTTPS and a valid manifest — your admin needs to confirm the app manifest is served correctly.
### Customizing column properties
Now that you've registered your dataset, you can configure column properties
for how the column should be treated in the Explore workflow:
- Is the column temporal? (should it be used for slicing & dicing in time series charts?)
- Should the column be filterable?
- Is the column dimensional?
- If it's a datetime column, how should Superset parse
the datetime format? (using the [ISO-8601 string pattern](https://en.wikipedia.org/wiki/ISO_8601))
<img src={useBaseUrl("/img/tutorial/tutorial_column_properties.png" )} />
### Superset semantic layer
Superset has a thin semantic layer that adds many quality of life improvements for analysts.
The Superset semantic layer can store 2 types of computed data:
1. Virtual metrics: you can write SQL queries that aggregate values
from multiple columns (e.g. `SUM(recovered) / SUM(confirmed)`) and make them
available as metrics (e.g. `recovery_rate`) for visualization in Explore.
Aggregate functions are allowed and encouraged for metrics.
<img src={useBaseUrl("/img/tutorial/tutorial_sql_metric.png" )} />
You can also certify metrics for your team in this view.
1. Virtual calculated columns: you can write SQL queries that
customize the appearance and behavior
of a specific column (e.g. `CAST(recovery_rate as float)`).
Aggregate functions aren't allowed in calculated columns.
<img src={useBaseUrl("/img/tutorial/tutorial_calculated_column.png" )} />
:::resources
- [Using Metrics and Calculated Columns](https://docs.preset.io/docs/using-metrics-and-calculated-columns) - In-depth guide to the semantic layer
- [Blog: Understanding the Superset Semantic Layer](https://preset.io/blog/understanding-superset-semantic-layer/)
- [Blog: Unlocking the Power of Virtual Datasets](https://preset.io/blog/unlocking-the-power-of-virtual-datasets-in-apache-superset/)
:::
### Creating charts in Explore view
Superset has 2 main interfaces for exploring data:
- **Explore**: no-code viz builder. Select your dataset, select the chart,
customize the appearance, and publish.
- **SQL Lab**: SQL IDE for cleaning, joining, and preparing data for Explore workflow
We'll focus on the Explore view for creating charts right now.
To start the Explore workflow from the **Datasets** tab, start by clicking the name
of the dataset that will be powering your chart.
<img src={useBaseUrl("/img/tutorial/tutorial_launch_explore.png" )} /><br/><br/>
You're now presented with a powerful workflow for exploring data and iterating on charts.
- The **Dataset** view on the left-hand side has a list of columns and metrics,
scoped to the current dataset you selected.
- The **Data** preview below the chart area also gives you helpful data context.
- Using the **Data** tab and **Customize** tabs, you can change the visualization type,
select the temporal column, select the metric to group by, and customize
the aesthetics of the chart.
As you customize your chart using drop-down menus, make sure to click the **Run** button
to get visual feedback.
<img src={useBaseUrl("/img/tutorial/tutorial_explore_run.jpg" )} />
In the following screenshot, we craft a grouped Time-series Bar Chart to visualize
our quarterly sales data by product line just by clicking options in drop-down menus.
<img src={useBaseUrl("/img/tutorial/tutorial_explore_settings.jpg" )} />
### Creating a slice and dashboard
To save your chart, first click the **Save** button. You can either:
- Save your chart and add it to an existing dashboard
- Save your chart and add it to a new dashboard
In the following screenshot, we save the chart to a new "Superset Duper Sales Dashboard":
<img src={useBaseUrl("/img/tutorial/tutorial_save_slice.png" )} />
To publish, click **Save & Go To Dashboard**.
Behind the scenes, Superset will create a slice and store all the information needed
to create your chart in its thin data layer
(the query, chart type, options selected, name, etc).
<img src={useBaseUrl("/img/tutorial/tutorial_first_dashboard.png" )} style={{width: "100%", maxWidth: "500px"}} />
To resize the chart, start by clicking the Edit Dashboard button in the top right corner.
<img src={useBaseUrl("/img/tutorial/tutorial_edit_button.png" )} width="300" />
Then, click and drag the bottom right corner of the chart until it snaps
into a position you like on the underlying grid.
<img src={useBaseUrl("/img/tutorial/tutorial_chart_resize.png" )} style={{width: "100%", maxWidth: "500px"}} />
Click **Save** to persist the changes.
Congrats! You've successfully linked, analyzed, and visualized data in Superset. There is a wealth
of other table configuration and visualization options, so please start exploring and creating
slices and dashboards of your own.
### Manage access to Dashboards
Access to dashboards is managed via owners and permissions. Non-owner access can be controlled
through dataset permissions or dashboard-level roles (using the `DASHBOARD_RBAC` feature flag).
For detailed information on configuring dashboard access, see the
[Dashboard Access Control](/admin-docs/security/#dashboard-access-control) section in the
Security documentation.
<img src={useBaseUrl("/img/tutorial/tutorial_dashboard_access.png" )} />
### Publishing a Dashboard
If you would like to make your dashboard available to other users, click on the `Draft` button next to the
title of your dashboard.
<img src={useBaseUrl("/img/tutorial/publish_button_dashboard.png" )} />
:::warning
Draft dashboards are only visible to the dashboard owners and admins. Published dashboards are visible to all users with access to the underlying datasets or, if RBAC is enabled, to the roles that have been granted access to the dashboard.
:::
### Mark a Dashboard as Favorite
You can mark a dashboard as a favorite by clicking on the star icon next to the title of your dashboard. This makes it easier to find it in the list of dashboards or on the home page.
### Customizing dashboard
The following URL parameters can be used to modify how the dashboard is rendered:
- `standalone`:
- `0` (default): dashboard is displayed normally
- `1`: Top Navigation is hidden
- `2`: Top Navigation + title is hidden
- `3`: Top Navigation + title + top level tabs are hidden
- `show_filters`:
- `0`: render dashboard without Filter Bar
- `1` (default): render dashboard with Filter Bar if native filters are enabled
- `expand_filters`:
- (default): render dashboard with Filter Bar expanded if there are native filters
- `0`: render dashboard with Filter Bar collapsed
- `1`: render dashboard with Filter Bar expanded
For example, when running the local development build, the following will disable the
Top Nav and remove the Filter Bar:
`http://localhost:8088/superset/dashboard/my-dashboard/?standalone=1&show_filters=0`
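These parameters can also be assembled programmatically. The sketch below (the host and dashboard slug are placeholders) builds such a URL with the standard library:

```python
from urllib.parse import urlencode, urljoin

def dashboard_url(base, dashboard_id, standalone=0, show_filters=1, expand_filters=None):
    """Build a Superset dashboard URL carrying the rendering parameters above."""
    params = {"standalone": standalone, "show_filters": show_filters}
    if expand_filters is not None:
        params["expand_filters"] = expand_filters
    return urljoin(base, f"/superset/dashboard/{dashboard_id}/") + "?" + urlencode(params)

# Reproduces the example URL from the text: no Top Nav, no Filter Bar
print(dashboard_url("http://localhost:8088", "my-dashboard", standalone=1, show_filters=0))
# -> http://localhost:8088/superset/dashboard/my-dashboard/?standalone=1&show_filters=0
```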
### Table Chart Features
The **Table** chart type has several advanced capabilities worth knowing:
#### Conditional Formatting
Conditional formatting rules highlight cells based on their values. Rules can be applied to:
- **Numeric columns** — color cells above/below a threshold, or use a gradient across a range
- **String columns** — highlight cells matching specific text values or patterns
- **Boolean columns** — color cells that are `true` or `false`, or `null`/`not null`
Each rule has a **"Use gradient"** toggle: enabled applies a varying opacity (lighter = further from threshold), disabled applies a solid fill at full opacity regardless of value.
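The gradient toggle's effect can be illustrated with a small sketch. This is not Superset's exact formula, just the idea of opacity scaling with distance from the threshold versus a solid fill:

```python
def highlight_opacity(value, threshold, max_value, use_gradient=True):
    """Illustrative only: with the gradient on, cells further above the
    threshold get a stronger fill; with it off, any qualifying cell gets
    a solid fill at full opacity."""
    if value <= threshold:
        return 0.0  # cell not highlighted at all
    if not use_gradient:
        return 1.0  # solid fill regardless of value
    return (value - threshold) / (max_value - threshold)

assert highlight_opacity(150, 100, 200) == 0.5            # halfway up the range
assert highlight_opacity(150, 100, 200, use_gradient=False) == 1.0
```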
#### HTML Rendering in Table Cells
Table chart cells can render raw HTML, enabling rich formatting such as hyperlinks, colored badges, and icons directly in the data. Enable this per-column in the chart's **Column Configuration** panel by toggling **Render HTML**.
:::caution
Only enable HTML rendering for columns sourced from data you control. Rendering untrusted HTML can expose users to cross-site scripting (XSS) risks.
:::
#### Column Header Tooltips
Column headers display a tooltip with the column's **Description** from the dataset editor when the user hovers over them. Keep dataset column descriptions up to date to improve chart discoverability.
#### Display Controls
In dashboard view mode (without entering Edit mode), charts with configurable display options expose a **Display Controls** panel accessible from the chart's context menu. This surfaces controls such as Time Grain, Time Column, and layer visibility for applicable chart types — making it easy to adjust a chart's view without going to Explore.
### AG Grid Interactive Table
The **AG Grid Interactive Table** chart type is Superset's fully-featured data grid, suitable for large paginated datasets where the standard Table chart is not enough.
#### Server-Side Column Filters
AG Grid supports server-side column filters that query the full dataset — not just the loaded page. Filters are applied before data is sent to the browser, so results are correct even across millions of rows.
**Available filter types:**
| Column type | Filter options |
|---|---|
| Text | Contains, equals, starts with, ends with |
| Number | Equals, not equal, less than, greater than, between |
| Date | Before, after, between, blank |
| Set | Select from a list of distinct values |
**AND / OR logic:** Each column supports combining multiple conditions with AND or OR. Filters from different columns are always combined with AND.
**Interaction with pagination:** Server-side filters run as WHERE clauses in the underlying SQL query, so pagination always operates over the already-filtered result set.
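Conceptually, the filter model becomes a SQL WHERE clause before pagination is applied. A simplified sketch of that translation (the filter-model shape and column names here are illustrative, not AG Grid's actual wire format):

```python
def filters_to_where(filters):
    """Combine each column's conditions with that column's own AND/OR
    operator, then join the per-column clauses with AND, mirroring the
    behavior described above."""
    column_clauses = []
    for column, spec in filters.items():
        joiner = f" {spec.get('operator', 'AND')} "
        conds = joiner.join(
            f"{column} {op} '{val}'" for op, val in spec["conditions"]
        )
        column_clauses.append(f"({conds})")
    return " AND ".join(column_clauses)

where = filters_to_where({
    "country": {"operator": "OR", "conditions": [("=", "US"), ("=", "CA")]},
    "year": {"conditions": [(">=", "2020")]},
})
print(where)  # (country = 'US' OR country = 'CA') AND (year >= '2020')
```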
#### Time Shift (Time Comparison)
AG Grid Interactive Table supports **Time Shift** (time comparison), matching the behavior of the standard Table chart. In the **Advanced Analytics** → **Time Comparison** section of the chart configuration, enter a shift expression (e.g., `1 year ago`, `minus 7 days`) to add comparison columns showing values from the offset period. Dashboard-level time range overrides apply to both the base and comparison periods.
### Dynamic Currency Formatting
Chart metric values can display currencies dynamically rather than using a fixed currency code. To enable:
1. Open the dataset editor for your dataset (**Datasets → Edit**).
2. In the **Advanced** tab, set **Currency Code Column** to the name of a column in your dataset that contains ISO 4217 currency codes (e.g., `USD`, `EUR`, `GBP`).
3. In the Explore chart configuration, open the metric's **Number format** section and select **Auto-detect** for currency.
When Auto-detect is active, each row uses the currency code from the designated column, so a single chart can display values in multiple currencies — each formatted correctly for its currency.
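The per-row behavior can be sketched as follows. The symbol map and helper name here are hypothetical illustrations of the idea, not Superset internals:

```python
# Hypothetical symbol map for illustration; real currency formatting is
# locale-aware and handled by Superset's number-format layer.
SYMBOLS = {"USD": "$", "EUR": "€", "GBP": "£"}

def format_row(value, currency_code):
    """Format one row's value using that row's own currency code."""
    symbol = SYMBOLS.get(currency_code, currency_code + " ")
    return f"{symbol}{value:,.2f}"

rows = [(1234.5, "USD"), (99.9, "EUR"), (42, "GBP")]
print([format_row(v, c) for v, c in rows])
# ['$1,234.50', '€99.90', '£42.00'] — one chart, three currencies
```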
### ECharts Option Editor
For ECharts-based chart types (line, bar, area, scatter, pie, and others), Explore includes an advanced **ECharts Option Editor** that accepts raw JSON overrides for the underlying ECharts configuration.
Access it via the **Customize** tab → **ECharts Options** section at the bottom of the panel. The JSON you enter is deep-merged on top of Superset's generated ECharts config, so you can override specific options without rewriting the entire config.
**Example:** override the legend position and add a custom title:
```json
{
"legend": { "orient": "vertical", "right": "5%", "top": "middle" },
"title": { "text": "My Custom Title", "left": "center" }
}
```
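The deep-merge semantics — your keys win, sibling keys in the generated config survive — can be sketched as:

```python
def deep_merge(base, override):
    """Recursively merge override into base; override wins on conflicts.
    A sketch of the merge semantics described above, not Superset's code."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

generated = {"legend": {"orient": "horizontal", "top": "bottom"}, "grid": {"left": 40}}
override = {"legend": {"orient": "vertical", "right": "5%"}}
print(deep_merge(generated, override))
# {'legend': {'orient': 'vertical', 'top': 'bottom', 'right': '5%'}, 'grid': {'left': 40}}
```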
:::caution
ECharts option overrides bypass Superset's validation layer. Invalid option keys are silently ignored by ECharts. Overrides that conflict with Superset-generated options (e.g., `series`) may produce unexpected results.
:::
### Table Chart: Exporting Filtered Data
When the **Search Box** is visible in a Table chart, the **Download** action exports only the rows currently visible after the search filter is applied — not the full underlying dataset. This matches the visual output and is intentional. To export the full dataset regardless of search state, use the **Download as CSV** option from the chart's three-dot menu in the dashboard or from the Explore chart toolbar before applying a search filter.
### Sharing a Specific Tab
When a dashboard has tabs, each tab gets its own shareable URL. Navigate to the tab you want to share and copy the URL from your browser's address bar — the tab anchor is encoded in the URL so that anyone opening the link lands directly on that tab.
### Auto-Refresh
Dashboards can be configured to refresh automatically at a fixed interval without user interaction. Open a dashboard, click the **⋮** (more options) menu in the top-right, and select **Set auto-refresh interval**. Choose an interval (e.g., every 10 seconds, 1 minute, or 10 minutes). The setting is per-session and resets when you close the tab.
:::note
Auto-refresh triggers a full data reload for all charts on the dashboard. For dashboards with expensive queries, choose longer intervals to avoid overloading your database.
:::
### Last Queried Timestamp
Charts can display a "Last queried at" timestamp showing when the chart data was last fetched. This is useful on auto-refreshing dashboards to confirm data freshness. Enable it in **Dashboard Properties → Styling → Show last queried time**.
### Saving a Chart to a Specific Tab
When saving or adding a chart to a dashboard from Explore, you can select which tab it should land on using the tab tree-select dropdown in the "Add to dashboard" modal.
:::resources
- [Dashboard Customization](https://docs.preset.io/docs/dashboard-customization) - Advanced dashboard styling and layout options
- [Blog: BI Dashboard Best Practices](https://preset.io/blog/bi-dashboard-best-practices/)
:::

---
title: Embedding Superset
sidebar_position: 6
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
# Embedding Superset
Superset dashboards can be embedded directly in host applications using the `@superset-ui/embedded-sdk` package.
:::info Prerequisites
- The `EMBEDDED_SUPERSET` feature flag must be enabled.
- The embedding domain and allowed origins must be configured by an admin.
:::
## Quick Start
Install the SDK:
```bash
npm install @superset-ui/embedded-sdk
```
Embed a dashboard:
```javascript
import { embedDashboard } from '@superset-ui/embedded-sdk';
embedDashboard({
id: 'dashboard-uuid-here', // from Dashboard → Embed
supersetDomain: 'https://superset.example.com',
mountPoint: document.getElementById('superset-container'),
fetchGuestToken: () => fetchTokenFromYourBackend(),
dashboardUiConfig: {
hideTitle: true,
filters: { expanded: false },
},
});
```
`fetchGuestToken` must return a **guest token** obtained from your server by calling Superset's `/api/v1/security/guest_token/` endpoint with a service account. Do not call this endpoint from client-side code.
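On the server, the guest-token request carries the embedded user, the dashboard resource, and optional RLS rules. A hedged sketch of building that payload — the endpoint path follows Superset's REST API, but the helper names and the embedded-user details here are illustrative:

```python
def build_guest_token_payload(dashboard_uuid, username, rls_rules=None):
    """Assemble the body for POST /api/v1/security/guest_token/."""
    return {
        "user": {"username": username, "first_name": "Guest", "last_name": "User"},
        "resources": [{"type": "dashboard", "id": dashboard_uuid}],
        "rls": rls_rules or [],  # e.g. [{"clause": "tenant_id = 42"}]
    }

payload = build_guest_token_payload("dashboard-uuid-here", "embedded-user")
assert payload["resources"][0]["type"] == "dashboard"

# With a service-account access token in hand, the actual call (server-side
# only, never from the browser) would look roughly like:
# requests.post(
#     "https://superset.example.com/api/v1/security/guest_token/",
#     json=payload,
#     headers={"Authorization": f"Bearer {access_token}"},
# )
```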
---
## Callbacks
### `resolvePermalinkUrl`
When a user copies a permalink from an embedded dashboard, Superset generates a URL on its own domain. In an embedded context this URL is usually not meaningful to the host application's users — the dashboard is rendered inside the host app, not at the Superset URL.
The `resolvePermalinkUrl` callback lets the host app intercept permalink generation and return a URL on the host domain instead:
```javascript
embedDashboard({
id: 'my-dashboard-uuid',
supersetDomain: 'https://superset.example.com',
mountPoint: document.getElementById('superset-container'),
fetchGuestToken: () => fetchGuestToken(),
/**
* Called when Superset generates a permalink.
* @param {Object} args - { key: string } — the permalink key
* @returns {string | null} - your host URL, or null to use Superset's default
*/
resolvePermalinkUrl: ({ key }) => {
return `https://myapp.example.com/dashboard?permalink=${key}`;
},
});
```
If the callback returns `null` or is not provided, Superset uses its own permalink URL as a fallback.
---
## Feature Flags for Embedded Mode
### `DISABLE_EMBEDDED_SUPERSET_LOGOUT`
Hides the logout button when Superset is embedded in a host application. This is useful when the host application manages the session lifecycle and you do not want users to accidentally log out of the embedded Superset session:
```python
# superset_config.py
FEATURE_FLAGS = {
"EMBEDDED_SUPERSET": True,
"DISABLE_EMBEDDED_SUPERSET_LOGOUT": True,
}
```
When enabled, the **Logout** menu item is removed from the user avatar dropdown in the embedded view. The session can still be invalidated server-side by revoking the guest token.
### `EMBEDDED_SUPERSET`
Must be `True` to enable the embedded SDK and the guest token endpoint. Without this flag, `embedDashboard` will fail to load.
---
## URL Parameters
The following URL parameters can be passed through the `urlParams` option in `dashboardUiConfig` or appended to the embedded iframe URL:
| Parameter | Values | Effect |
|-----------|--------|--------|
| `standalone` | `0`, `1`, `2`, `3` | `0`: normal; `1`: hide nav; `2`: hide nav + title; `3`: hide nav + title + tabs |
| `show_filters` | `0`, `1` | Show or hide the native filter bar |
| `expand_filters` | `0`, `1` | Start with filter bar expanded or collapsed |
---
## Security Notes
- **Guest tokens expire** — their lifetime is controlled by the `GUEST_TOKEN_JWT_EXP_SECONDS` config (default: 5 minutes). Refresh tokens before they expire using a token refresh mechanism in your host app.
- **Row-level security** — pass `rls` rules in the guest token request to restrict which rows are visible to the embedded user.
- **Allowed domains** — restrict which host origins can embed a dashboard by setting **Allowed Domains** per-dashboard in the *Embed* settings modal. Superset checks the request's `Referer` header against this list before serving the embedded view; an empty list allows any origin, so configure this explicitly for production.

---
title: Exploring Data in Superset
hide_title: true
sidebar_position: 2
version: 1
---
import useBaseUrl from "@docusaurus/useBaseUrl";
## Exploring Data in Superset
Apache Superset enables users to explore data interactively through SQL queries, visual query builders, and rich visualizations, making it easier to understand datasets before building charts and dashboards.
In this tutorial, we will introduce key concepts in Apache Superset through the exploration of a real dataset which contains the flights made by employees of a UK-based organization in 2011.
The following information about each flight is given:
- The traveler's department. For the purposes of this tutorial the departments have been renamed
Orange, Yellow and Purple.
- The cost of the ticket.
- The travel class (Economy, Premium Economy, Business and First Class).
- Whether the ticket was a single or return.
- The date of travel.
- Information about the origin and destination.
- The distance between the origin and destination, in kilometers (km).
### Enabling Data Upload Functionality
You may need to enable the functionality to upload a CSV or Excel file to your database. The following section
explains how to enable this functionality for the examples database.
In the top menu, select **Settings ‣ Data ‣ Database Connections**. Find the **examples** database in the list and
select the **Edit** button.
<img src={useBaseUrl("/img/tutorial/edit-record.png" )} />
In the resulting modal window, switch to the **Advanced** tab and open the **Security** section.
Then, tick the checkbox for **Allow file uploads to database**. End by clicking the **Finish** button.
<img src={useBaseUrl("/img/tutorial/allow-file-uploads.png" )} />
### Loading CSV Data
Download the CSV dataset to your computer from
[GitHub](https://raw.githubusercontent.com/apache-superset/examples-data/master/tutorial_flights.csv).
In the top menu, select **Settings ‣ Data ‣ Database Connections**. Then, **Upload file to database ‣ Upload CSV**.
<img src={useBaseUrl("/img/tutorial/upload_a_csv.png" )} />
Then, select the CSV file from your computer, select **Database** and **Schema**, and enter the **Table Name**
as _tutorial_flights_.
<img src={useBaseUrl("/img/tutorial/csv_to_database_configuration.png" )} />
Next enter the text _Travel Date_ into the **File settings ‣ Columns to be parsed as dates** field.
<img src={useBaseUrl("/img/tutorial/parse_dates_column.png" )} />
Leaving all the other options in their default settings, select **Upload** at the bottom of the page.
### Table Visualization
You should now see _tutorial_flights_ as a dataset in the **Datasets** tab. Click on the entry to
launch an Explore workflow using this dataset.
In this section, we'll create a table visualization
to show the number of flights and cost per travel class.
By default, Apache Superset only shows the last week of data. In our example, we want to visualize all
of the data in the dataset. Click the **Time ‣ Time Range** section and change
the **Range Type** to **No Filter**.
<img src={useBaseUrl("/img/tutorial/no_filter_on_time_filter.png" )} />
Click **Apply** to save.
Now, we want to specify the rows in our table by using the **Group by** option. Since in this
example, we want to understand different Travel Classes, we select **Travel Class** in this menu.
Next, we can specify the metrics we would like to see in our table with the **Metrics** option.
- `COUNT(*)`, which represents the number of rows in the table
(in this case, quantity of flights in each Travel Class)
- `SUM(Cost)`, which represents the total cost spent by each Travel Class
<img src={useBaseUrl("/img/tutorial/sum_cost_column.png" )} />
Finally, select **Run Query** to see the results of the table.
<img src={useBaseUrl("/img/tutorial/tutorial_table.png" )} />
To save the visualization, click on **Save** in the top left of the screen. In the following modal,
- Select the **Save as**
option and enter the chart name as Tutorial Table (you will be able to find it again through the
**Charts** screen, accessible in the top menu).
- Select **Add To Dashboard** and enter
Tutorial Dashboard. Finally, select **Save & Go To Dashboard**.
<img src={useBaseUrl("/img/tutorial/save_tutorial_table.png" )} />
### Dashboard Basics
Next, we are going to explore the dashboard interface. If you've followed the previous section, you
should already have the dashboard open. Otherwise, you can navigate to the dashboard by selecting
Dashboards on the top menu, then Tutorial dashboard from the list of dashboards.
On this dashboard you should see the table you created in the previous section. Select **Edit
dashboard** and then hover over the table. By selecting the bottom right hand corner of the table
(the cursor will change too), you can resize it by dragging and dropping.
<img src={useBaseUrl("/img/tutorial/resize_tutorial_table_on_dashboard.png" )} />
Finally, save your changes by selecting Save changes in the top right.
### Pivot Table
In this section, we will extend our analysis using a more complex visualization, Pivot Table. By the
end of this section, you will have created a table that shows the monthly spend on flights for the
first six months, by department, by travel class.
Create a new chart by selecting **+ ‣ Chart** from the top right corner. Choose
tutorial_flights again as a datasource, then click on the visualization type to get to the
visualization menu. Select the **Pivot Table** visualization (you can filter by entering text in the
search box) and then **Create New Chart**.
<img src={useBaseUrl("/img/tutorial/create_pivot.png" )} />
In the **Time** section, keep the Time Column as Travel Date (this is selected automatically as we
only have one time column in our dataset). Then select the Time Grain to be month, as having daily data
would be too granular to see patterns. Next, set the time range to the first six months of
2011: click on Last week in the Time Range section, then under Custom select a Start / end of 1st
January 2011 and 30th June 2011 respectively, either by entering the dates directly or by using the
calendar widget (by selecting the month name and then the year, you can move more quickly to far
away dates).
<img src={useBaseUrl("/img/tutorial/select_dates_pivot_table.png" )} />
Next, within the **Query** section, remove the default COUNT(\*) and add Cost, keeping the default
SUM aggregate. Note that Apache Superset will indicate the type of the metric by the symbol on the
left hand column of the list (ABC for string, # for number, a clock face for time, etc.).
In **Group by**, select **Time**: this will automatically use the Time Column and Time Grain
selections we defined in the Time section.
Within **Columns**, first select Department and then Travel Class. All set? Let's **Run Query** to
see some data!
<img src={useBaseUrl("/img/tutorial/tutorial_pivot_table.png" )} />
You should see months in the rows and Department and Travel Class in the columns. Publish this chart
to your existing Tutorial Dashboard you created earlier.
### Line Chart
In this section, we are going to create a line chart to understand the average price of a ticket by
month across the entire dataset.
In the Time section, as before, keep the Time Column as Travel Date and Time Grain as month, but this
time for the Time Range select No filter, as we want to look at the entire dataset.
Within Metrics, remove the default `COUNT(*)` metric and instead add `AVG(Cost)` to show the mean value.
<img src={useBaseUrl("/img/tutorial/average_aggregate_for_cost.png" )} />
Next, select **Run Query** to show the data on the chart.
How does this look? Well, we can see that the average cost goes up in December. However, perhaps it
doesn't make sense to combine both single and return tickets; rather, we should show two separate lines for
each ticket type.
Let's do this by selecting Ticket Single or Return in the Group by box, and then selecting **Run
Query** again. Nice! We can see that on average single tickets are cheaper than returns and that the
big spike in December is caused by return tickets.
Our chart is looking pretty good already, but let's customize it some more by going to the Customize
tab on the left hand pane. Within this pane, try changing the Color Scheme, removing the range
filter by selecting No in the Show Range Filter drop-down, and adding some labels using X Axis Label
and Y Axis Label.
<img src={useBaseUrl("/img/tutorial/tutorial_line_chart.png" )} />
Once you're done, publish the chart in your Tutorial Dashboard.
### Markup
In this section, we will add some text to our dashboard. If you're not there already, you can navigate
to the dashboard by selecting Dashboards on the top menu, then Tutorial dashboard from the list of
dashboards. Get into edit mode by selecting **Edit dashboard**.
Within the Insert components pane, drag and drop a Markdown box on the dashboard. Look for the blue
lines which indicate the anchor where the box will go.
<img src={useBaseUrl("/img/tutorial/blue_bar_insert_component.png" )} />
Now, to edit the text, select the box. You can enter text, in markdown format (see
[this Markdown Cheatsheet](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet) for
more information about this format). You can toggle between Edit and Preview using the menu on the
top of the box.
<img src={useBaseUrl("/img/tutorial/markdown.png" )} />
To exit, select any other part of the dashboard. Finally, don't forget to save your changes using
**Save changes**.
### Publishing Your Dashboard
If you have followed all of the steps outlined in the previous section, you should have a dashboard
that looks like the below. If you would like, you can rearrange the elements of the dashboard by
selecting **Edit dashboard** and dragging and dropping.
If you would like to make your dashboard available to other users, simply select Draft next to the
title of your dashboard on the top left to change your dashboard to be in Published state. You can
also favorite this dashboard by selecting the star.
<img src={useBaseUrl("/img/tutorial/publish_dashboard.png" )} />
### Annotations
Annotations allow you to add additional context to your chart. In this section, we will add an
annotation to the Tutorial Line Chart we made in a previous section. Specifically, we will add the
dates when some flights were cancelled by the UK's Civil Aviation Authority in response to the
eruption of the Grímsvötn volcano in Iceland (23-25 May 2011).
First, add an annotation layer by navigating to Manage ‣ Annotation Layers. Add a new annotation
layer by selecting the green plus sign to add a new record. Enter the name Volcanic Eruptions and
save. We can use this layer to refer to a number of different annotations.
Next, add an annotation by navigating to Manage ‣ Annotations and then create a new annotation by
selecting the green plus sign. Then, select the Volcanic Eruptions layer, add a short description
Grímsvötn and the eruption dates (23-25 May 2011) before finally saving.
<img src={useBaseUrl("/img/tutorial/edit_annotation.png" )} />
Then, navigate to the line chart by going to Charts then selecting Tutorial Line Chart from the
list. Next, go to the Annotations and Layers section and select Add Annotation Layer. Within this
dialogue:
- Name the layer as Volcanic Eruptions
- Change the Annotation Layer Type to Event
- Set the Annotation Source as Superset annotation
- Specify the Annotation Layer as Volcanic Eruptions
<img src={useBaseUrl("/img/tutorial/annotation_settings.png" )} />
Select **Apply** to see your annotation shown on the chart.
<img src={useBaseUrl("/img/tutorial/annotation.png" )} />
If you wish, you can change how your annotation looks by changing the settings in the Display
configuration section. Otherwise, select **OK** and finally **Save** to save your chart. If you keep
the default selection to overwrite the chart, your annotation will be saved to the chart and also
appear automatically in the Tutorial Dashboard.
### Advanced Analytics
In this section, we are going to explore the Advanced Analytics feature of Apache Superset, which
allows you to apply additional transformations to your data. The three types of transformation are
moving averages, time comparisons, and resampling, each covered in a subsection below.
**Setting up the base chart**
In this section, we're going to set up a base chart to which we can then apply the different **Advanced
Analytics** features to. Start off by creating a new chart using the same _tutorial_flights_
datasource and the **Line Chart** visualization type. Within the Time section, set the Time Range from
1st October 2011 to 31st October 2011.
Next, in the query section, change the Metrics to the sum of Cost. Select **Run Query** to show the
chart. You should see the total cost per day for October 2011.
<img src={useBaseUrl("/img/tutorial/advanced_analytics_base.png" )} />
Finally, save the visualization as Tutorial Advanced Analytics Base, adding it to the Tutorial
Dashboard.
### Rolling Mean
There is quite a lot of variation in the data, which makes it difficult to identify any trend. One
approach we can take is to show instead a rolling average of the time series. To do this, in the
**Moving Average** subsection of **Advanced Analytics**, select mean in the **Rolling** box and
enter 7 into both Periods and Min Periods. The period is the length of the rolling period expressed
as a multiple of the Time Grain. In our example, the Time Grain is day, so the rolling period is 7
days, such that on the 7th October 2011 the value shown would correspond to the first seven days of
October 2011. Lastly, by specifying Min Periods as 7, we ensure that our mean is always calculated
on 7 days and we avoid any ramp up period.
After displaying the chart by selecting **Run Query** you will see that the data is less variable
and that the series starts later as the ramp up period is excluded.
<img src={useBaseUrl("/img/tutorial/rolling_mean.png" )} />
Save the chart as Tutorial Rolling Mean and add it to the Tutorial Dashboard.
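Under the hood this is a pandas rolling mean; the ramp-up behavior controlled by Min Periods can be sketched in plain Python (illustrative only, not Superset's implementation):

```python
def rolling_mean(values, periods, min_periods):
    """Emit None until min_periods points are in the window, then the window
    mean — matching the 'ramp up period is excluded' behavior above."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - periods + 1) : i + 1]
        out.append(sum(window) / len(window) if len(window) >= min_periods else None)
    return out

daily = [10, 20, 30, 40, 50, 60, 70, 80]
print(rolling_mean(daily, periods=7, min_periods=7))
# [None, None, None, None, None, None, 40.0, 50.0]
```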
### Time Comparison
In this section, we will compare values in our time series to the value a week before. Start off by
opening the Tutorial Advanced Analytics Base chart, by going to **Charts** in the top menu and then
selecting the visualization name in the list (alternatively, find the chart in the Tutorial
Dashboard and select Explore chart from the menu for that visualization).
Next, in the Time Comparison subsection of **Advanced Analytics**, enter the Time Shift by typing in
“minus 1 week” (note this box accepts input in natural language). Run Query to see the new chart,
which has an additional series with the same values, shifted a week back in time.
<img src={useBaseUrl("/img/tutorial/time_comparison_two_series.png" )} />
Then, change the **Calculation type** to Absolute difference and select **Run Query**. We can now
see only one series again, this time showing the difference between the two series we saw
previously.
<img src={useBaseUrl("/img/tutorial/time_comparison_absolute_difference.png" )} />
Save the chart as Tutorial Time Comparison and add it to the Tutorial Dashboard.
### Resampling the data
In this section, we'll resample the data so that rather than having daily data we have weekly data.
As in the previous section, reopen the Tutorial Advanced Analytics Base chart.
Next, in the Python Functions subsection of **Advanced Analytics**, enter 7D (corresponding to seven
days) in the Rule field and median as the Method, then show the chart by selecting **Run Query**.
<img src={useBaseUrl("/img/tutorial/resample.png" )} />
Note that now we have a single data point every 7 days. In our case, the value shown corresponds to
the median value within the seven daily data points. For more information on the meaning of the
various options in this section, refer to the
[Pandas documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html).
Lastly, save your chart as Tutorial Resample and add it to the Tutorial Dashboard. Go to the
tutorial dashboard to see the four charts side by side and compare the different outputs.
### SQL Lab Tips
**Schema and table browser**: The left-side table browser uses a collapsible treeview — click a schema to expand its tables, and click a table to see its columns and sample data inline. This makes navigating large schemas much faster than the previous flat list.
**Find in editor**: Press **Ctrl+F** (or **Cmd+F** on Mac) to open the Monaco find/replace widget inside the SQL editor without leaving the editor.
**Resizable panels**: The dividers between the SQL editor, schema browser, and results pane are draggable. Adjust them to match your workflow and screen size.
**Dialect-aware Format SQL**: The **Format SQL** button applies the SQL dialect of the currently selected database — Trino, Presto, MySQL, PostgreSQL, etc. — rather than a generic formatter. Switch to a different database in the toolbar and re-format to get dialect-specific output. Jinja template syntax (`{{ }}`, `{% %}`) is preserved during formatting and will not cause format errors.
### Time Range Natural Language Expressions
The **Custom** time range picker accepts natural language expressions alongside specific dates:
- **Relative**: `Last 7 days`, `Last month`, `Last quarter`, `Last year`
- **Anchored**: `previous calendar week`, `previous calendar month`
- **"First of" expressions**: `first day of this week`, `first day of this month`, `first day of this quarter`, `first day of this year`, `first week of this year`
- **Offsets**: `30 days ago`, `1 year ago`, `next week`
These expressions are evaluated at query time, so saved charts always display data relative to the current date.
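As a rough illustration of how such expressions resolve relative to "now", here is a toy resolver for a few of them (stdlib only — this is not Superset's parser, which supports the full grammar listed above):

```python
from datetime import date, timedelta

def resolve_start(expression: str, today: date) -> date:
    """Return the start date a few sample expressions resolve to.
    Toy implementation for illustration only."""
    if expression == "Last 7 days":
        return today - timedelta(days=7)
    if expression == "30 days ago":
        return today - timedelta(days=30)
    if expression == "first day of this month":
        return today.replace(day=1)
    if expression == "first day of this year":
        return today.replace(month=1, day=1)
    raise ValueError(f"unsupported expression: {expression}")
```

Because the result depends on `today`, a chart saved with `Last 7 days` always shows the trailing week as of query time.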
:::resources
- [Chart Walkthroughs](https://docs.preset.io/docs/chart-walkthroughs) - Detailed guides for most chart types
- [Blog: Why Apache ECharts is the Future of Apache Superset](https://preset.io/blog/2021-4-1-why-echarts/)
- [Blog: ECharts Time-Series Visualizations in Superset](https://preset.io/blog/echarts-time-series-visualizations-in-superset/)
- [Blog: Finding New Insights with Drill By](https://preset.io/blog/drill-by/)
- [Blog: From Drill Down to Drill By](https://preset.io/blog/drill-down-and-drill-by/)
- [Blog: Cross-Filtering in Apache Superset](https://preset.io/blog/cross-filtering-in-Superset-and-Preset/)
:::

View File

@@ -0,0 +1,334 @@
---
title: Issue Codes
sidebar_position: 5
version: 1
---
# Issue Code Reference
This page lists issue codes that may be displayed in
Superset and provides additional context.
## Issue 1000
```
The datasource is too large to query.
```
It's likely your datasource has grown too large to run the current
query, and is timing out. You can resolve this by reducing the
size of your datasource or by modifying your query to only process a
subset of your data.
## Issue 1001
```
The database is under an unusual load.
```
Your query may have timed out because of unusually high load on the
database engine. You can make your query simpler, or wait until the
database is under less load and try again.
## Issue 1002
```
The database returned an unexpected error.
```
Your query failed because of an error that occurred on the database.
This may be due to a syntax error, a bug in your query, or some other
internal failure within the database. This is usually not an
issue within Superset, but instead a problem with the underlying
database that serves your query.
## Issue 1003
```
There is a syntax error in the SQL query. Perhaps there was a misspelling or a typo.
```
Your query failed because of a syntax error within the underlying query. Please
validate that all columns or tables referenced within the query exist and are spelled
correctly.
## Issue 1004
```
The column was deleted or renamed in the database.
```
Your query failed because it is referencing a column that no longer exists in
the underlying datasource. You should modify the query to reference the
replacement column, or remove this column from your query.
## Issue 1005
```
The table was deleted or renamed in the database.
```
Your query failed because it is referencing a table that no longer exists in
the underlying database. You should modify your query to reference the correct
table.
## Issue 1006
```
One or more parameters specified in the query are missing.
```
Your query was not submitted to the database because it's missing one or more
parameters. You should define all the parameters referenced in the query in a
valid JSON document. Check that the parameters are spelled correctly and that
the document has a valid syntax.
## Issue 1007
```
The hostname provided can't be resolved.
```
The hostname provided when adding a new database is invalid and cannot be
resolved. Please check that there are no typos in the hostname.
## Issue 1008
```
The port is closed.
```
The port provided when adding a new database is not open. Please check that
the port number is correct, and that the database is running and listening on
that port.
## Issue 1009
```
The host might be down, and cannot be reached on the provided port.
```
The host provided when adding a new database doesn't seem to be up.
Additionally, it cannot be reached on the provided port. Please check that
there are no firewall rules preventing access to the host.
## Issue 1010
```
Superset encountered an error while running a command.
```
Something unexpected happened, and Superset encountered an error while
running a command. Please reach out to your administrator.
## Issue 1011
```
Superset encountered an unexpected error.
```
Something unexpected happened in the Superset backend. Please reach out
to your administrator.
## Issue 1012
```
The username provided when connecting to a database is not valid.
```
The user provided a username that doesn't exist in the database. Please check
that the username is typed correctly and exists in the database.
## Issue 1013
```
The password provided when connecting to a database is not valid.
```
The user provided a password that is incorrect. Please check that the
password is typed correctly.
## Issue 1014
```
Either the username or the password used are incorrect.
```
Either the username provided does not exist or the password was written incorrectly. Please
check that the username and password were typed correctly.
## Issue 1015
```
Either the database is spelled incorrectly or does not exist.
```
Either the database was written incorrectly or it does not exist. Check that it was typed correctly.
## Issue 1016
```
The schema was deleted or renamed in the database.
```
The schema was either removed or renamed. Check that the schema is typed correctly and exists.
## Issue 1017
```
The user doesn't have the proper permissions to connect to the database
```
We were unable to connect to your database. Please confirm that your service account has the Viewer and Job User roles on the project.
## Issue 1018
```
One or more parameters needed to configure a database are missing.
```
Not all parameters required to test, create, or edit a database were present. Please double check which parameters are needed, and that they are present.
## Issue 1019
```
The submitted payload has the incorrect format.
```
Please check that the request payload has the correct format (e.g., JSON).
## Issue 1020
```
The submitted payload has the incorrect schema.
```
Please check that the request payload has the expected schema.
## Issue 1021
```
Results backend needed for asynchronous queries is not configured.
```
Your instance of Superset doesn't have a results backend configured, which is needed for asynchronous queries. Please contact an administrator for further assistance.
## Issue 1022
```
Database does not allow data manipulation.
```
Only `SELECT` statements are allowed against this database. Please contact an administrator if you need to run DML (data manipulation language) on this database.
## Issue 1023
```
CTAS (create table as select) doesn't have a SELECT statement at the end.
```
The last statement in a query run as CTAS (create table as select) MUST be a SELECT statement. Please make sure the last statement in the query is a SELECT.
## Issue 1024
```
CVAS (create view as select) query has more than one statement.
```
When running a CVAS (create view as select) the query should have a single statement. Please make sure the query has a single statement, and no extra semi-colons other than the last one.
## Issue 1025
```
CVAS (create view as select) query is not a SELECT statement.
```
When running a CVAS (create view as select) the query should be a SELECT statement. Please make sure the query has a single statement and it's a SELECT statement.
## Issue 1026
```
Query is too complex and takes too long to run.
```
The submitted query might be too complex to run under the time limit defined by your Superset administrator. Please double check your query and verify if it can be optimized. Alternatively, contact your administrator to increase the timeout period.
## Issue 1027
```
The database is currently running too many queries.
```
The database might be under heavy load, running too many queries. Please try again later, or contact an administrator for further assistance.
## Issue 1028
```
One or more parameters specified in the query are malformed.
```
The query contains one or more malformed template parameters. Please check your query and confirm that all template parameters are surrounded by double braces, for example, "\{\{ ds \}\}". Then, try running your query again.
## Issue 1029
```
The object does not exist in this database.
```
Either the schema, column, or table does not exist in the database.
## Issue 1030
```
The query potentially has a syntax error.
```
The query might have a syntax error. Please check and run again.
## Issue 1031
```
The results backend no longer has the data from the query.
```
The results from the query might have been deleted from the results backend after some period. Please re-run your query.
## Issue 1032
```
The query associated with the results was deleted.
```
The query associated with the stored results no longer exists. Please re-run your query.
## Issue 1033
```
The results stored in the backend were stored in a different format, and no longer can be deserialized.
```
The query results were stored in a format that is no longer supported. Please re-run your query.
## Issue 1034
```
The database port provided is invalid.
```
Please check that the provided database port is an integer between 0 and 65535 (inclusive).
## Issue 1035
```
Failed to start remote query on a worker.
```
The query was not started by an asynchronous worker. Please reach out to your administrator for further assistance.
## Issue 1036
```
The database was deleted.
```
The operation failed because the database referenced no longer exists. Please reach out to your administrator for further assistance.

View File

@@ -0,0 +1,274 @@
---
title: SQL Templating
sidebar_position: 4
description: Use Jinja templates in SQL Lab and virtual datasets to create dynamic queries
keywords: [sql templating, jinja, sql lab, virtual datasets, dynamic queries]
---
{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}
# SQL Templating
Superset supports [Jinja templating](https://jinja.palletsprojects.com/) in SQL Lab queries and virtual datasets. This allows you to write dynamic SQL that responds to filters, user context, and URL parameters.
:::note
SQL templating must be enabled by your administrator via the `ENABLE_TEMPLATE_PROCESSING` feature flag.
For advanced configuration options, see the [SQL Templating Configuration Guide](/admin-docs/configuration/sql-templating).
:::
## Using Jinja in Calculated Columns
Jinja template macros are available in calculated column expressions in the dataset editor — not just in SQL Lab queries and virtual datasets. This allows column expressions to reference the current user or dynamic context.
**Example: User-scoped calculated column**
```sql
CASE WHEN sales_rep = '{{ current_username() }}' THEN amount ELSE 0 END
```
**Example: Conditional display based on role**
Because `current_user_roles()` returns a Python list, test role membership with a Jinja
conditional at template time rather than matching against the list's string representation:
```sql
{% if 'Finance' in current_user_roles() %}revenue{% else %}NULL{% endif %} AS finance_revenue
```
:::note
The `ENABLE_TEMPLATE_PROCESSING` feature flag must be enabled by your administrator for Jinja in calculated columns to work.
:::
## Basic Usage
Jinja templates use double curly braces `{{ }}` for expressions and `{% %}` for logic blocks.
### Using Parameters
You can define parameters in SQL Lab via the **Parameters** menu as JSON:
```json
{
"my_table": "sales",
"start_date": "2024-01-01"
}
```
Then reference them in your query:
```sql
SELECT *
FROM {{ my_table }}
WHERE order_date >= '{{ start_date }}'
```
### Conditional Logic
Use Jinja's logic blocks for conditional SQL:
```sql
SELECT *
FROM orders
WHERE 1 = 1
{% if start_date %}
AND order_date >= '{{ start_date }}'
{% endif %}
{% if end_date %}
AND order_date < '{{ end_date }}'
{% endif %}
```
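You can render such a template outside Superset to see the resulting SQL (a sketch assuming the `jinja2` package is available; Superset layers its own macros and context on top of plain Jinja):

```python
from jinja2 import Template

template = Template("""
SELECT *
FROM orders
WHERE 1 = 1
{% if start_date %}
  AND order_date >= '{{ start_date }}'
{% endif %}
{% if end_date %}
  AND order_date < '{{ end_date }}'
{% endif %}
""")

# Only start_date is supplied, so only that predicate is emitted.
sql = template.render(start_date="2024-01-01", end_date=None)
```

With `end_date` unset, the second `{% if %}` block renders nothing, so the query filters on `order_date >=` only.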
## Available Macros
Superset provides built-in macros for common use cases.
### User Context
| Macro | Description |
|-------|-------------|
| `{{ current_username() }}` | Returns the logged-in user's username |
| `{{ current_user_id() }}` | Returns the logged-in user's account ID |
| `{{ current_user_email() }}` | Returns the logged-in user's email |
| `{{ current_user_roles() }}` | Returns an array of the user's roles |
**Example: Row-level filtering by user**
```sql
SELECT *
FROM sales_data
WHERE sales_rep = '{{ current_username() }}'
```
**Example: Role-based access**
```sql
SELECT *
FROM users
WHERE role IN {{ current_user_roles()|where_in }}
```
### Filter Values
Access dashboard and chart filter values in your queries:
| Macro | Description |
|-------|-------------|
| `{{ filter_values('column') }}` | Returns filter values as a list |
| `{{ get_filters('column') }}` | Returns filters with operators |
**Example: Using filter values**
```sql
SELECT product, SUM(revenue) as total
FROM sales
WHERE region IN {{ filter_values('region')|where_in }}
GROUP BY product
```
The `where_in` filter converts the list to SQL format: `('value1', 'value2', 'value3')`
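A minimal sketch of what a `where_in`-style filter does (illustrative Python only — Superset's real Jinja filter also handles dates, NULLs, and engine-specific quoting):

```python
def where_in(values):
    """Render a list as a SQL IN tuple: quote strings (doubling any
    embedded single quotes) and leave numbers bare."""
    def quote(value):
        if isinstance(value, str):
            return "'" + value.replace("'", "''") + "'"
        return str(value)
    return "(" + ", ".join(quote(v) for v in values) + ")"
```

For example, `where_in(["US", "EU"])` yields a string suitable for a SQL `IN` clause.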
### Time Filters
For charts with time range filters:
| Macro | Description |
|-------|-------------|
| `{{ get_time_filter('column') }}` | Returns time filter with `from_expr` and `to_expr` |
**Example: Time-filtered virtual dataset**
```sql
{% set time_filter = get_time_filter("order_date", default="Last 7 days") %}
SELECT *
FROM orders
WHERE order_date >= {{ time_filter.from_expr }}
AND order_date < {{ time_filter.to_expr }}
```
### URL Parameters
Pass custom values via URL query strings:
```sql
SELECT *
FROM orders
WHERE country = '{{ url_param('country') }}'
```
Access via: `superset.example.com/sqllab?country=US`
### Reusing Dataset Definitions
Query existing datasets by ID:
```sql
-- Query a dataset (ID 42) as a table
SELECT * FROM {{ dataset(42) }} LIMIT 100
-- Include computed metrics
SELECT * FROM {{ dataset(42, include_metrics=True) }}
```
Reuse metric definitions across queries:
```sql
SELECT
category,
{{ metric('total_revenue') }} as revenue
FROM sales
GROUP BY category
```
## Testing Templates in SQL Lab
Some variables like `from_dttm` and `filter_values()` only work when filters are applied from dashboards or charts. To test in SQL Lab:
**Option 1: Use defaults**
```sql
SELECT *
FROM orders
WHERE date >= '{{ from_dttm | default("2024-01-01", true) }}'
```
**Option 2: Set test parameters**
Add to the Parameters menu:
```json
{
"_filters": [
{"col": "region", "op": "IN", "val": ["US", "EU"]}
]
}
```
**Option 3: Use `{% set %}`**
```sql
{% set start_date = "2024-01-01" %}
SELECT * FROM orders WHERE date >= '{{ start_date }}'
```
## Common Patterns
### Dynamic Table Selection
```sql
{% set table_name = url_param('table') or 'default_table' %}
SELECT * FROM {{ table_name }}
```
### User-Specific Data Access
```sql
SELECT *
FROM sensitive_data
WHERE department IN (
SELECT department
FROM user_permissions
WHERE username = '{{ current_username() }}'
)
```
### Time-Based Partitioning
```sql
{% set time_filter = get_time_filter("event_date", remove_filter=True) %}
SELECT *
FROM events
WHERE event_date >= {{ time_filter.from_expr }}
AND event_date < {{ time_filter.to_expr }}
```
Passing `remove_filter=True` signals that the time filter has been handled inside the template, so Superset does not apply it again to the outer query.
## Tips
- Use `|where_in` filter to convert lists to SQL `IN` clauses
- Use `|tojson` to serialize arrays as JSON strings
- Test queries with explicit parameter values before relying on filter context
- For complex templating needs, ask your administrator about custom Jinja macros
- **Format SQL is Jinja-aware**: The "Format SQL" button in SQL Lab correctly preserves `{{ }}` and `{% %}` template syntax and applies your selected database's SQL dialect when formatting.
:::resources
- [Admin Guide: SQL Templating Configuration](/admin-docs/configuration/sql-templating)
- [Blog: Intro to Jinja Templating in Apache Superset](https://preset.io/blog/intro-jinja-templating-apache-superset/)
:::

View File

@@ -0,0 +1,312 @@
---
title: Using AI with Superset
hide_title: true
sidebar_position: 5
version: 1
---
<!--
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
-->
# Using AI with Superset
Superset supports AI assistants through the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/). Connect Claude, ChatGPT, or other MCP-compatible clients to explore your data, build charts, create dashboards, and run SQL -- all through natural language.
:::info
Requires Superset 5.0+. Your admin must enable and deploy the MCP server before you can connect.
See the **[MCP Server admin guide](/admin-docs/configuration/mcp-server)** for setup instructions.
:::
---
## What Can AI Do with Superset?
### Explore Your Data
Ask your AI assistant to browse what's available in your Superset instance:
- **List datasets** -- see all datasets you have access to, with filtering and search
- **Get dataset details** -- column names, types, available metrics, and filters
- **List charts and dashboards** -- find existing visualizations by name or keyword
- **Get chart and dashboard details** -- understand what a chart shows, its query, and configuration
**Example prompts:**
> "What datasets are available?"
> "Show me the columns in the sales_orders dataset"
> "Find dashboards related to revenue"
### Build Charts
Describe the visualization you want and AI creates it for you:
- **Preview-first workflow** -- `generate_chart` defaults to `save_chart=False`, so AI generates an Explore link you can review before anything is committed. Say "save it" once you're satisfied
- **Create charts from natural language** -- describe what you want to see and AI picks the right chart type, metrics, and dimensions
- **Modify existing charts** -- `update_chart` also supports preview mode so you can review changes before saving (update filters, change chart types, add metrics)
- **Get Explore links** -- open any chart in Superset's Explore view for further refinement
**Example prompts:**
> "Create a bar chart showing monthly revenue by region from the sales dataset"
> "Update chart 42 to use a line chart instead"
> "Give me a link to explore this chart further"
:::tip Preview-first workflow
Charts are **not saved by default**. The workflow is intentionally iterative:
1. **Explore** — AI generates an Explore link so you can see the chart before it exists in Superset
2. **Iterate** — ask the AI to adjust the chart; changes are previewed without touching the database
3. **Save** — when you're happy, say "save it" and the chart is permanently stored
To skip the preview and save immediately, include "and save it" in your prompt.
:::
### Create Dashboards
Build dashboards from a collection of charts:
- **Generate dashboards** -- create a new dashboard with a set of charts, automatically laid out
- **Add charts to existing dashboards** -- place a chart on an existing dashboard with automatic positioning
**Example prompts:**
> "Create a dashboard called 'Q4 Sales Overview' with charts 10, 15, and 22"
> "Add the revenue trend chart to the executive dashboard"
### Browse Databases
Discover what database connections are configured in your Superset instance:
- **List databases** -- see all database connections you have access to
- **Get database details** -- name, backend type (PostgreSQL, Snowflake, etc.), and connection status
**Example prompts:**
> "What databases are connected to Superset?"
> "Show me details about the data warehouse connection"
### Create Virtual Datasets
Build ad-hoc SQL datasets that can be used as the basis for charts:
- **Create virtual datasets** -- write a SQL query and save it as a reusable dataset
- **Use immediately in charts** -- the returned dataset ID can be passed directly to chart creation
**Example prompts:**
> "Create a dataset from: SELECT region, SUM(revenue) as total_revenue FROM orders GROUP BY region"
> "Make a virtual dataset called 'monthly_signups' from the users table filtered to last 12 months"
### Run SQL Queries
Execute SQL directly through your AI assistant:
- **Run queries** -- execute SQL with full Superset RBAC enforcement (you can only query data your roles allow)
- **Open SQL Lab** -- get a link to SQL Lab pre-populated with a query, ready to run and explore
- **Save queries** -- save a SQL query to SQL Lab's Saved Queries for later reuse
**Example prompts:**
> "Run this query: SELECT region, SUM(revenue) FROM sales GROUP BY region"
> "Open SQL Lab with a query to show the top 10 customers by order count"
> "Save this query as 'Weekly Revenue Report'"
### Analyze Chart Data
Pull the raw data behind any chart:
- **Get chart data** -- retrieve the data a chart displays, with support for JSON, CSV, and Excel export formats
- **Inspect results** -- useful for verifying what a visualization shows or feeding data into other tools
**Example prompts:**
> "Get the data behind chart 42"
> "Export chart 15 data as CSV"
### Check Instance Status
- **Health check** -- verify your Superset instance is up and the MCP connection is working
- **Instance info** -- get high-level statistics about your Superset instance (number of datasets, charts, dashboards)
**Example prompts:**
> "Is Superset healthy?"
> "How many dashboards are in this instance?"
---
## Connecting Your AI Client
Once your admin has deployed the MCP server, connect your AI client using the instructions below.
### Claude Desktop
Edit your Claude Desktop config file:
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
```json
{
"mcpServers": {
"superset": {
"url": "http://localhost:5008/mcp"
}
}
}
```
Restart Claude Desktop. The hammer icon in the chat bar confirms the connection.
If your admin has enabled JWT authentication, you may need to include a token:
```json
{
"mcpServers": {
"superset": {
"command": "npx",
"args": [
"-y",
"mcp-remote@latest",
"http://your-superset-host:5008/mcp",
"--header",
"Authorization: Bearer YOUR_TOKEN"
]
}
}
}
```
### Claude Code (CLI)
Add to your project's `.mcp.json`:
```json
{
"mcpServers": {
"superset": {
"type": "url",
"url": "http://localhost:5008/mcp"
}
}
}
```
### ChatGPT
1. Click your profile icon > **Settings** > **Apps and Connectors**
2. Enable **Developer Mode** in Advanced Settings
3. In the chat composer, press **+** > **Add sources** > **App** > **Connect more** > **Create app**
4. Enter a name and your MCP server URL
5. Click **I understand and continue**
:::info
ChatGPT MCP connectors require a Pro, Team, Enterprise, or Edu plan.
:::
Ask your admin for the MCP server URL and any authentication tokens you need.
---
## Tips for Best Results
- **Be specific** -- "Create a bar chart of monthly revenue by region from the sales dataset" works better than "Make me a chart"
- **Start with exploration** -- ask what datasets and charts exist before creating new ones
- **Review AI-generated content** -- always check chart configurations and SQL before saving or sharing
- **Use Explore for refinement** -- ask AI for an Explore link, then fine-tune interactively in the Superset UI
- **Check permissions if you get errors** -- AI respects Superset's RBAC, so you can only access data your roles allow
---
## Available Tools Reference
### Exploration & Discovery
| Tool | Description |
|------|-------------|
| `health_check` | Verify the MCP server is running and connected |
| `get_instance_info` | Get instance statistics (dataset, chart, dashboard counts) |
| `get_schema` | Discover available charts, datasets, and dashboards with schema info |
### Datasets
| Tool | Description |
|------|-------------|
| `list_datasets` | List datasets with filtering and search |
| `get_dataset_info` | Get dataset metadata (columns, metrics, filters) |
| `create_virtual_dataset` | Create a virtual dataset from a SQL query |
### Charts
| Tool | Description |
|------|-------------|
| `list_charts` | List charts with filtering and search |
| `get_chart_info` | Get chart metadata and configuration |
| `get_chart_data` | Retrieve chart data (JSON, CSV, or Excel) |
| `get_chart_preview` | Generate a chart preview (URL, ASCII, table, or Vega-Lite) |
| `get_chart_type_schema` | Get the configuration schema for a chart type |
| `generate_chart` | Create a new chart from a specification (defaults to preview mode — review before saving) |
| `update_chart` | Modify an existing chart's configuration (pass `generate_preview=False` to persist immediately instead of returning a preview URL) |
| `update_chart_preview` | Update a cached chart preview without saving |
| `generate_explore_link` | Generate an Explore URL for interactive visualization |
### Dashboards
| Tool | Description |
|------|-------------|
| `list_dashboards` | List dashboards with filtering and search |
| `get_dashboard_info` | Get dashboard metadata and layout |
| `generate_dashboard` | Create a new dashboard with specified charts |
| `add_chart_to_existing_dashboard` | Add a chart to an existing dashboard |
### SQL
| Tool | Description |
|------|-------------|
| `execute_sql` | Run a SQL query with RBAC enforcement |
| `save_sql_query` | Persist a SQL query to SQL Lab's saved queries |
| `open_sql_lab_with_context` | Open SQL Lab with a pre-populated query |
### Databases
| Tool | Description |
|------|-------------|
| `list_databases` | List configured database connections |
| `get_database_info` | Get details about a specific database connection |
---
## Troubleshooting
### "Connection refused" or "Cannot connect"
- Confirm the MCP server URL with your admin
- For Claude Desktop: fully quit the app (not just close the window) and restart after config changes
- Check that the URL path ends with `/mcp` (e.g., `http://localhost:5008/mcp`)
### "Permission denied" or missing data
- Superset's RBAC controls what you can access through AI, just like in the Superset UI
- Ask your admin to verify your roles and permissions
- Try accessing the same data through the Superset web UI to confirm your access
### "Response too large"
- Ask for smaller result sets: use filters, reduce `page_size`, or request specific columns
- Example: "Show me the top 10 rows from the sales dataset" instead of "Show me all sales data"
### AI doesn't see Superset tools
- Verify the connection in your AI client (e.g., the hammer icon in Claude Desktop)
- Ask the AI "What Superset tools are available?" to confirm the connection
- Restart your AI client if you recently changed the configuration