chore: Update Docusaurus links (#18581)

* Fix links

* Fix internal link refs

* Add contribution page redirect
Geido
2022-02-10 17:55:58 +02:00
committed by GitHub
parent f565230d8d
commit 9ca55a7c83
58 changed files with 94 additions and 82 deletions

View File

@@ -0,0 +1,4 @@
{
"label": "Connecting to Databases",
"position": 3
}

View File

@@ -0,0 +1,16 @@
---
title: Ascend.io
hide_title: true
sidebar_position: 10
version: 1
---
## Ascend.io
The recommended connector library for Ascend.io is [impyla](https://github.com/cloudera/impyla).
The expected connection string is formatted as follows:
```
ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true
```

View File

@@ -0,0 +1,34 @@
---
title: Amazon Athena
hide_title: true
sidebar_position: 4
version: 1
---
## AWS Athena
### PyAthenaJDBC
[PyAthenaJDBC](https://pypi.org/project/PyAthenaJDBC/) is a Python DB API 2.0 compliant wrapper for the
[Amazon Athena JDBC driver](https://docs.aws.amazon.com/athena/latest/ug/connect-with-jdbc.html).
The connection string for Amazon Athena is as follows:
```
awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...
```
Note that you'll need to escape & encode when forming the connection string like so:
```
s3://... -> s3%3A//...
```
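As a sketch in Python (the bucket and region values are assumptions), you can percent-encode the staging directory before interpolating it into the URI:
```python
from urllib.parse import quote_plus

# Percent-encode the staging directory so it is safe to embed in the URI.
s3_staging_dir = quote_plus("s3://my-bucket/athena-results/")
# s3_staging_dir is now "s3%3A%2F%2Fmy-bucket%2Fathena-results%2F"
uri = (
    "awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}"
    "@athena.us-east-1.amazonaws.com/default"
    f"?s3_staging_dir={s3_staging_dir}"
)
```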
### PyAthena
You can also use [PyAthena library](https://pypi.org/project/PyAthena/) (no Java required) with the
following connection string:
```
awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...
```

View File

@@ -0,0 +1,89 @@
---
title: Google BigQuery
hide_title: true
sidebar_position: 20
version: 1
---
## Google BigQuery
The recommended connector library for BigQuery is
[pybigquery](https://github.com/mxmzdlv/pybigquery).
### Install BigQuery Driver
Follow the steps [here](/docs/databases/docker-add-drivers) to install new database drivers when
setting up Superset locally via docker-compose.
```
echo "pybigquery" >> ./docker/requirements-local.txt
```
### Connecting to BigQuery
When adding a new BigQuery connection in Superset, you'll need to add the GCP Service Account
credentials file (as a JSON).
1. Create your Service Account via the Google Cloud Platform control panel, provide it access to the
appropriate BigQuery datasets, and download the JSON configuration file for the service account.
2. In Superset, you can either upload that JSON or add the JSON blob in the following format (this should be the content of your credential JSON file):
```
{
    "type": "service_account",
    "project_id": "...",
    "private_key_id": "...",
    "private_key": "...",
    "client_email": "...",
    "client_id": "...",
    "auth_uri": "...",
    "token_uri": "...",
    "auth_provider_x509_cert_url": "...",
    "client_x509_cert_url": "..."
}
```
![CleanShot 2021-10-22 at 04 18 11](https://user-images.githubusercontent.com/52086618/138352958-a18ef9cb-8880-4ef1-88c1-452a9f1b8105.gif)
3. Additionally, you can connect via a SQLAlchemy URI instead.
The connection string for BigQuery looks like:
```
bigquery://{project_id}
```
Go to the **Advanced** tab and add a JSON blob to the **Secure Extra** field in the database configuration form with
the following format:
```
{
    "credentials_info": <contents of credentials JSON file>
}
```
The resulting file should have this structure:
```
{
    "credentials_info": {
        "type": "service_account",
        "project_id": "...",
        "private_key_id": "...",
        "private_key": "...",
        "client_email": "...",
        "client_id": "...",
        "auth_uri": "...",
        "token_uri": "...",
        "auth_provider_x509_cert_url": "...",
        "client_x509_cert_url": "..."
    }
}
```
You should then be able to connect to your BigQuery datasets.
![CleanShot 2021-10-22 at 04 47 08](https://user-images.githubusercontent.com/52086618/138354340-df57f477-d3e5-42d4-b032-d901c69d2213.gif)
To be able to upload CSV or Excel files to BigQuery in Superset, you'll need to also add the
[pandas_gbq](https://github.com/pydata/pandas-gbq) library.
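If you're running Superset via docker-compose, the library can be added the same way as the connector above:
```
echo "pandas_gbq" >> ./docker/requirements-local.txt
```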

View File

@@ -0,0 +1,44 @@
---
title: Clickhouse
hide_title: true
sidebar_position: 15
version: 1
---
## Clickhouse
To use Clickhouse with Superset, you will need to add the following Python libraries:
```
clickhouse-driver==0.2.0
clickhouse-sqlalchemy==0.1.6
```
If running Superset using Docker Compose, add the following to your `./docker/requirements-local.txt` file:
```
clickhouse-driver>=0.2.0
clickhouse-sqlalchemy>=0.1.6
```
The recommended connector library for Clickhouse is
[clickhouse-sqlalchemy](https://github.com/xzkostyan/clickhouse-sqlalchemy).
The expected connection string is formatted as follows:
```
clickhouse+native://{username}:{password}@{hostname}:{port}/{database}[?options…]
```
Here's a concrete example of a real connection string:
```
clickhouse+native://demo:demo@github.demo.trial.altinity.cloud/default?secure=true
```
If you're using Clickhouse locally on your computer, you can get away with using a native protocol URL that
uses the default user without a password (and doesn't encrypt the connection):
```
clickhouse+native://localhost/default
```

View File

@@ -0,0 +1,17 @@
---
title: CockroachDB
hide_title: true
sidebar_position: 16
version: 1
---
## CockroachDB
The recommended connector library for CockroachDB is
[sqlalchemy-cockroachdb](https://github.com/cockroachdb/sqlalchemy-cockroachdb).
The expected connection string is formatted as follows:
```
cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable
```

View File

@@ -0,0 +1,24 @@
---
title: CrateDB
hide_title: true
sidebar_position: 36
version: 1
---
## CrateDB
The recommended connector library for CrateDB is
[crate](https://pypi.org/project/crate/).
You also need to install the library's `sqlalchemy` extra.
We recommend adding something like the following
to your requirements file:
```
crate[sqlalchemy]==0.26.0
```
The expected connection string is formatted as follows:
```
crate://crate@127.0.0.1:4200
```

View File

@@ -0,0 +1,67 @@
---
title: Databricks
hide_title: true
sidebar_position: 37
version: 1
---
## Databricks
To connect to Databricks, first install [databricks-dbapi](https://pypi.org/project/databricks-dbapi/) with the optional SQLAlchemy dependencies:
```bash
pip install databricks-dbapi[sqlalchemy]
```
There are two ways to connect to Databricks: using a Hive connector or an ODBC connector. Both ways work similarly, but only ODBC can be used to connect to [SQL endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html).
### Hive
To use the Hive connector you need the following information from your cluster:
- Server hostname
- Port
- HTTP path
These can be found under "Configuration" -> "Advanced Options" -> "JDBC/ODBC".
You also need an access token from "Settings" -> "User Settings" -> "Access Tokens".
Once you have all this information, add a database of type "Databricks (Hive)" in Superset, and use the following SQLAlchemy URI:
```
databricks+pyhive://token:{access token}@{server hostname}:{port}/{database name}
```
You also need to add the following configuration to "Other" -> "Engine Parameters", with your HTTP path:
```
{"connect_args": {"http_path": "sql/protocolv1/o/****"}}
```
### ODBC
For ODBC you first need to install the [ODBC drivers for your platform](https://databricks.com/spark/odbc-drivers-download).
For a regular connection use this as the SQLAlchemy URI:
```
databricks+pyodbc://token:{access token}@{server hostname}:{port}/{database name}
```
And for the connection arguments:
```
{"connect_args": {"http_path": "sql/protocolv1/o/****", "driver_path": "/path/to/odbc/driver"}}
```
The driver path should be:
- `/Library/simba/spark/lib/libsparkodbc_sbu.dylib` (Mac OS)
- `/opt/simba/spark/lib/64/libsparkodbc_sb64.so` (Linux)
For a connection to a SQL endpoint you need to use the HTTP path from the endpoint:
```
{"connect_args": {"http_path": "/sql/1.0/endpoints/****", "driver_path": "/path/to/odbc/driver"}}
```

View File

@@ -0,0 +1,76 @@
---
title: Using Database Connection UI
hide_title: true
sidebar_position: 3
version: 1
---
Here is the documentation on how to leverage the new DB Connection UI. This will provide admins the ability to enhance the UX for users who want to connect to new databases.
![db-conn-docs](https://user-images.githubusercontent.com/27827808/125499607-94e300aa-1c0f-4c60-b199-3f9de41060a3.gif)
There are now 3 steps when connecting to a database in the new UI:
Step 1: First the admin must inform Superset which engine they want to connect to. This page is powered by the `/available` endpoint which pulls on the engines currently installed in your environment, so that only supported databases are shown.
Step 2: Next, the admin is prompted to enter database-specific parameters. Depending on whether there is a dynamic form available for that specific engine, the admin will either see the new custom form or the legacy SQLAlchemy form. We have currently built dynamic forms for Redshift, MySQL, Postgres, and BigQuery. The new form prompts the user for the parameters needed to connect (for example, username, password, host, port, etc.) and provides immediate feedback on errors.
Step 3: Finally, once the admin has connected to their DB using the dynamic form they have the opportunity to update any optional advanced settings.
We hope this feature will help eliminate a huge bottleneck for users to get into the application and start crafting datasets.
### How to set up preferred database options and images
We added a new configuration option where the admin can define their preferred databases, in order:
```python
from typing import List

# A list of preferred databases, in order. These databases will be
# displayed prominently in the "Add Database" dialog. You should
# use the "engine_name" attribute of the corresponding DB engine spec
# in `superset/db_engine_specs/`.
PREFERRED_DATABASES: List[str] = [
    "PostgreSQL",
    "Presto",
    "MySQL",
    "SQLite",
]
```
For copyright reasons the logos for each database are not distributed with Superset.
### Setting images
- To set the images of your preferred database, admins must create a mapping in the `superset_text.yml` file with the engine and the location of the image. The image can be hosted locally inside your static files directory or online (e.g. on S3):
```yaml
DB_IMAGES:
  postgresql: "path/to/image/postgres.jpg"
  bigquery: "path/to/s3bucket/bigquery.jpg"
  snowflake: "path/to/image/snowflake.jpg"
```
### How to add new database engines to the `/available` endpoint
Currently the new modal supports the following databases:
- Postgres
- Redshift
- MySQL
- BigQuery
When the user selects a database not in this list they will see the old dialog asking for the SQLAlchemy URI. New databases can be added gradually to the new flow. In order to support the rich configuration a DB engine spec needs to have the following attributes:
1. `parameters_schema`: a Marshmallow schema defining the parameters needed to configure the database. For Postgres this includes username, password, host, port, etc. ([see](https://github.com/apache/superset/blob/accee507c0819cd0d7bcfb5a3e1199bc81eeebf2/superset/db_engine_specs/base.py#L1309-L1320)).
2. `default_driver`: the name of the recommended driver for the DB engine spec. Many SQLAlchemy dialects support multiple drivers, but usually one is the official recommendation. For Postgres we use "psycopg2".
3. `sqlalchemy_uri_placeholder`: a string that helps the user in case they want to type the URI directly.
4. `encryption_parameters`: parameters used to build the URI when the user opts for an encrypted connection. For Postgres this is `{"sslmode": "require"}`.
In addition, the DB engine spec must implement these class methods:
- `build_sqlalchemy_uri(cls, parameters, encrypted_extra)`: this method receives the distinct parameters and builds the URI from them.
- `get_parameters_from_uri(cls, uri, encrypted_extra)`: this method does the opposite, extracting the parameters from a given URI.
- `validate_parameters(cls, parameters)`: this method is used for `onBlur` validation of the form. It should return a list of `SupersetError` indicating which parameters are missing, and which parameters are definitely incorrect ([example](https://github.com/apache/superset/blob/accee507c0819cd0d7bcfb5a3e1199bc81eeebf2/superset/db_engine_specs/base.py#L1404)).
For databases like MySQL and Postgres that use the standard format of `engine+driver://user:password@host:port/dbname`, all you need to do is add the `BasicParametersMixin` to the DB engine spec and then define attributes 2-4 above (`parameters_schema` is already present in the mixin).
For other databases you need to implement these methods yourself. The BigQuery DB engine spec is a good example of how to do that.
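To make this concrete, here is a minimal sketch of such a spec for a hypothetical "Acme DB" using the standard URI format (the dialect, driver, and placeholder values are invented for illustration):
```python
from superset.db_engine_specs.base import BaseEngineSpec, BasicParametersMixin


class AcmeEngineSpec(BasicParametersMixin, BaseEngineSpec):
    engine = "acme"  # hypothetical SQLAlchemy dialect name
    engine_name = "Acme DB"  # shown in the "Add Database" dialog

    # 2. The recommended driver for this dialect.
    default_driver = "acmedriver"
    # 3. A hint shown when the user chooses to type the URI directly.
    sqlalchemy_uri_placeholder = "acme+acmedriver://user:password@host:port/dbname"
    # 4. Merged into the URI when the user opts for an encrypted connection.
    encryption_parameters = {"sslmode": "require"}
```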

View File

@@ -0,0 +1,93 @@
---
title: Adding New Drivers in Docker
hide_title: true
sidebar_position: 2
version: 1
---
## Adding New Database Drivers in Docker
Superset requires a Python database driver to be installed for each additional type of database you
want to connect to. When setting up Superset locally via `docker-compose`, the drivers and packages
contained in
[requirements.txt](https://github.com/apache/superset/blob/master/requirements.txt) and
[requirements-dev.txt](https://github.com/apache/superset/blob/master/requirements-dev.txt)
will be installed automatically.
In this section, we'll walk through how to install the MySQL connector library. The connector
library installation process is the same for all additional libraries and we'll end this section
with the recommended connector library for each database.
### 1. Determine the driver you need
Consult the list of [database drivers](/docs/databases/installing-database-drivers) to figure out how to install the driver of your choice.
In this example, we'll walk through the process of installing a MySQL driver in Superset.
### 2. Install MySQL Driver
As we are currently running inside of a Docker container via `docker-compose`, we cannot simply run
`pip install mysqlclient` in our local shell and expect the driver to be installed within the
Docker containers for Superset.
To address this, the Superset `docker-compose` setup comes with a mechanism for you to
install packages locally, which will be ignored by Git for the purposes of local development. Please
follow these steps:
Create `requirements-local.txt`
```
# From the repo root...
touch ./docker/requirements-local.txt
```
Add the driver you selected in the step above:
```
echo "mysqlclient" >> ./docker/requirements-local.txt
```
Rebuild your local image with the new driver baked in:
```
docker-compose build --force-rm
```
After the rebuild of the Docker images is complete (which may take a few minutes) you can relaunch using the following command:
```
docker-compose up
```
Another option to start Superset via Docker Compose is to use the recipe in `docker-compose-non-dev.yml`, which uses pre-built frontend assets and skips the front-end build step:
```
docker-compose -f docker-compose-non-dev.yml pull
docker-compose -f docker-compose-non-dev.yml up
```
### 3. Connect to MySQL
Now that you've got a MySQL driver installed locally, you should be able to test it out.
We can now create a Datasource in Superset that can be used to connect to a MySQL instance. Assuming
your MySQL instance is running locally and can be accessed via localhost, use the following
connection string in “SQL Alchemy URI”, by going to Sources > Databases > + icon (to add a new
datasource) in Superset.
For Docker running in Linux:
```
mysql://mysqluser:mysqluserpassword@localhost/example?charset=utf8
```
For Docker running in OSX:
```
mysql://mysqluser:mysqluserpassword@docker.for.mac.host.internal/example?charset=utf8
```
Then click “Test Connection”, which should give you an “OK” message. If not, please look at your
terminal for error messages, and reach out for help.
You can repeat this process for every database you want Superset to be able to connect to.

View File

@@ -0,0 +1,26 @@
---
title: Dremio
hide_title: true
sidebar_position: 17
version: 1
---
## Dremio
The recommended connector library for Dremio is
[sqlalchemy_dremio](https://pypi.org/project/sqlalchemy-dremio/).
The expected connection string for ODBC (Default port is 31010) is formatted as follows:
```
dremio://{username}:{password}@{host}:{port}/{database_name}/dremio?SSL=1
```
The expected connection string for Arrow Flight (Dremio 4.9.1+. Default port is 32010) is formatted as follows:
```
dremio+flight://{username}:{password}@{host}:{port}/dremio
```
This [blog post by Dremio](https://www.dremio.com/tutorials/dremio-apache-superset/) has some
additional helpful instructions on connecting Superset to Dremio.

View File

@@ -0,0 +1,47 @@
---
title: Apache Drill
hide_title: true
sidebar_position: 6
version: 1
---
## Apache Drill
### SQLAlchemy
The recommended way to connect to Apache Drill is through SQLAlchemy. You can use the
[sqlalchemy-drill](https://github.com/JohnOmernik/sqlalchemy-drill) package.
Once that is done, you can connect to Drill in two ways, either via the REST interface or by JDBC.
If you are connecting via JDBC, you must have the Drill JDBC Driver installed.
The basic connection string for Drill looks like this:
```
drill+sadrill://<username>:<password>@<host>:<port>/<storage_plugin>?use_ssl=True
```
To connect to Drill running in embedded mode on your local machine, use the following
connection string:
```
drill+sadrill://localhost:8047/dfs?use_ssl=False
```
### JDBC
Connecting to Drill through JDBC is more complicated and we recommend following
[this tutorial](https://drill.apache.org/docs/using-the-jdbc-driver/).
The connection string looks like:
```
drill+jdbc://<username>:<passsword>@<host>:<port>
```
### ODBC
We recommend reading the
[Apache Drill documentation](https://drill.apache.org/docs/installing-the-driver-on-linux/) and
the [GitHub README](https://github.com/JohnOmernik/sqlalchemy-drill#usage-with-odbc) to learn how to
work with Drill through ODBC.

View File

@@ -0,0 +1,65 @@
---
title: Apache Druid
hide_title: true
sidebar_position: 7
version: 1
---
import useBaseUrl from "@docusaurus/useBaseUrl";
## Apache Druid
A native connector to Druid ships with Superset (behind the `DRUID_IS_ACTIVE` flag), but it is
slowly being deprecated in favor of the SQLAlchemy / DBAPI connector made available in the
[pydruid library](https://pythonhosted.org/pydruid/).
The connection string looks like:
```
druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql
```
### Customizing Druid Connection
When adding a connection to Druid, you can customize the connection a few different ways in the
**Add Database** form.
**Custom Certificate**
You can add certificates in the **Root Certificate** field when configuring the new database
connection to Druid:
<img src={useBaseUrl("/img/root-cert-example.png")} />{" "}
When using a custom certificate, pydruid will automatically use https scheme.
**Disable SSL Verification**
To disable SSL verification, add the following to the **Extras** field:
```
engine_params:
{"connect_args":
{"scheme": "https", "ssl_verify_cert": false}}
```
### Aggregations
Common aggregations or Druid metrics can be defined and used in Superset. The first and simpler use
case is to use the checkbox matrix exposed in your datasources edit view (**Sources -> Druid
Datasources -> [your datasource] -> Edit -> [tab] List Druid Column**).
Clicking the GroupBy and Filterable checkboxes will make the column appear in the related dropdowns
while in the Explore view. Checking Count Distinct, Min, Max or Sum will result in creating new
metrics that will appear in the **List Druid Metric** tab upon saving the datasource.
By editing these metrics, you'll notice that their JSON element corresponds to a Druid aggregation
definition. You can create your own aggregations manually from the **List Druid Metric** tab
following the Druid documentation.
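For example, a sum aggregation over a hypothetical `bytes` column would use a JSON definition along these lines:
```
{
    "type": "longSum",
    "name": "sum__bytes",
    "fieldName": "bytes"
}
```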
### Post-Aggregations
Druid supports post aggregation and this works in Superset. All you have to do is create a metric,
much like you would create an aggregation manually, but specify `postagg` as a `Metric Type`. You
then have to provide a valid JSON post-aggregation definition (as specified in the Druid docs) in
the JSON field.
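As a sketch, an average computed as the ratio of two existing metrics might look like this (the metric names are assumptions):
```
{
    "type": "arithmetic",
    "name": "avg__bytes",
    "fn": "/",
    "fields": [
        { "type": "fieldAccess", "name": "sum__bytes", "fieldName": "sum__bytes" },
        { "type": "fieldAccess", "name": "count", "fieldName": "count" }
    ]
}
```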

View File

@@ -0,0 +1,68 @@
---
title: Elasticsearch
hide_title: true
sidebar_position: 18
version: 1
---
## Elasticsearch
The recommended connector library for Elasticsearch is
[elasticsearch-dbapi](https://github.com/preset-io/elasticsearch-dbapi).
The connection string for Elasticsearch looks like this:
```
elasticsearch+http://{user}:{password}@{host}:9200/
```
**Using HTTPS**
```
elasticsearch+https://{user}:{password}@{host}:9200/
```
Elasticsearch has a default limit of 10000 rows, so you can increase this limit on your cluster or
set Superset's row limit in config:
```
ROW_LIMIT = 10000
```
You can query multiple indices in SQL Lab, for example:
```
SELECT timestamp, agent FROM "logstash"
```
But to use visualizations for multiple indices, you need to create an alias index on your cluster:
```
POST /_aliases
{
    "actions" : [
        { "add" : { "index" : "logstash-**", "alias" : "logstash_all" } }
    ]
}
```
Then register your table with the alias name `logstash_all`.
**Time zone**
By default, Superset uses the UTC time zone for Elasticsearch queries. If you need to specify a time zone,
please edit your Database and enter the settings of your specified time zone in Other > ENGINE PARAMETERS:
```
{
"connect_args": {
"time_zone": "Asia/Shanghai"
}
}
```
Another thing to note about time zones: prior to Elasticsearch 7.8, if you want to convert a string into a `DATETIME` object,
you need to use the `CAST` function, but this function does not support our `time_zone` setting. It is therefore recommended to upgrade to Elasticsearch 7.8 or later.
From Elasticsearch 7.8 onwards, you can use the `DATETIME_PARSE` function to solve this problem.
The `DATETIME_PARSE` function supports our `time_zone` setting; for Superset to use it, fill in your Elasticsearch version number in the Other > VERSION setting.
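For example, assuming a hypothetical `create_time` string column, a SQL Lab query could then parse it like so:
```
SELECT DATETIME_PARSE(create_time, 'yyyy-MM-dd HH:mm:ss') AS created_at
FROM "logstash"
```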

View File

@@ -0,0 +1,17 @@
---
title: Exasol
hide_title: true
sidebar_position: 19
version: 1
---
## Exasol
The recommended connector library for Exasol is
[sqlalchemy-exasol](https://github.com/exasol/sqlalchemy-exasol).
The connection string for Exasol looks like this:
```
exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC
```

View File

@@ -0,0 +1,69 @@
---
title: Extra Database Settings
hide_title: true
sidebar_position: 40
version: 1
---
## Extra Database Settings
### Deeper SQLAlchemy Integration
It is possible to tweak the database connection information using the parameters exposed by
SQLAlchemy. In the **Database edit** view, you can edit the **Extra** field as a JSON blob.
This JSON string contains extra configuration elements. The `engine_params` object gets unpacked
into the `sqlalchemy.create_engine` call, while the `metadata_params` get unpacked into the
`sqlalchemy.MetaData` call. Refer to the SQLAlchemy docs for more information.
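For example, here is a sketch of an **Extra** blob that tunes the SQLAlchemy connection pool (the values are illustrative, not recommendations):
```
{
    "engine_params": {
        "pool_size": 10,
        "max_overflow": 5
    },
    "metadata_params": {}
}
```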
### Schemas
Databases like Postgres and Redshift use the **schema** as the logical entity on top of the
**database**. For Superset to connect to a specific schema, you can set the **schema** parameter in
the **Edit Tables** form (Sources > Tables > Edit record).
### External Password Store for SQLAlchemy Connections
Superset can be configured to use an external store for database passwords. This is useful if you are
running a custom secret distribution framework and do not wish to store secrets in Superset's meta
database.
Example: Write a function that takes a single argument of type `sqla.engine.url` and returns the
password for the given connection string. Then set `SQLALCHEMY_CUSTOM_PASSWORD_STORE` in your config
file to point to that function.
```python
def example_lookup_password(url):
    # Fetch the password from your external secret framework; this helper
    # is a placeholder for whatever lookup mechanism you use.
    secret = get_password_from_external_framework(url)
    return secret

SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_lookup_password
```
A common pattern is to use environment variables to make secrets available.
`SQLALCHEMY_CUSTOM_PASSWORD_STORE` can also be used for that purpose.
```python
import os

def example_password_as_env_var(url):
    # assuming the URI looks like
    # mysql://localhost?superset_user:{SUPERSET_PASSWORD}
    return url.password.format(**os.environ)

SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_password_as_env_var
```
### SSL Access to Databases
You can use the `Extra` field in the **Edit Databases** form to configure SSL:
```JSON
{
    "metadata_params": {},
    "engine_params": {
        "connect_args": {
            "sslmode": "require",
            "sslrootcert": "/path/to/my/pem"
        }
    }
}
```

View File

@@ -0,0 +1,23 @@
---
title: Firebird
hide_title: true
sidebar_position: 38
version: 1
---
## Firebird
The recommended connector library for Firebird is [sqlalchemy-firebird](https://pypi.org/project/sqlalchemy-firebird/).
Superset has been tested on `sqlalchemy-firebird>=0.7.0, <0.8`.
The recommended connection string is:
```
firebird+fdb://{username}:{password}@{host}:{port}//{path_to_db_file}
```
Here's a connection string example of Superset connecting to a local Firebird database:
```
firebird+fdb://SYSDBA:masterkey@192.168.86.38:3050//Library/Frameworks/Firebird.framework/Versions/A/Resources/examples/empbuild/employee.fdb
```

View File

@@ -0,0 +1,27 @@
---
title: Firebolt
hide_title: true
sidebar_position: 39
version: 1
---
## Firebolt
The recommended connector library for Firebolt is [firebolt-sqlalchemy](https://pypi.org/project/firebolt-sqlalchemy/).
Superset has been tested on `firebolt-sqlalchemy>=0.0.1`.
The recommended connection string is:
```
firebolt://{username}:{password}@{database}
or
firebolt://{username}:{password}@{database}/{engine_name}
```
Here's a connection string example of Superset connecting to a Firebolt database:
```
firebolt://email@domain:password@sample_database
or
firebolt://email@domain:password@sample_database/sample_engine
```

View File

@@ -0,0 +1,16 @@
---
title: Google Sheets
hide_title: true
sidebar_position: 21
version: 1
---
## Google Sheets
Google Sheets has a very limited
[SQL API](https://developers.google.com/chart/interactive/docs/querylanguage). The recommended
connector library for Google Sheets is [shillelagh](https://github.com/betodealmeida/shillelagh).
There are a few steps involved in connecting Superset to Google Sheets. This
[tutorial](https://preset.io/blog/2020-06-01-connect-superset-google-sheets/) has the most up to date
instructions on setting up this connection.

View File

@@ -0,0 +1,16 @@
---
title: Hana
hide_title: true
sidebar_position: 22
version: 1
---
## Hana
The recommended connector library is [sqlalchemy-hana](https://github.com/SAP/sqlalchemy-hana).
The connection string is formatted as follows:
```
hana://{username}:{password}@{host}:{port}
```

View File

@@ -0,0 +1,16 @@
---
title: Apache Hive
hide_title: true
sidebar_position: 8
version: 1
---
## Apache Hive
The [pyhive](https://pypi.org/project/PyHive/) library is the recommended way to connect to Hive through SQLAlchemy.
The expected connection string is formatted as follows:
```
hive://hive@{hostname}:{port}/{database}
```

View File

@@ -0,0 +1,24 @@
---
title: Hologres
hide_title: true
sidebar_position: 33
version: 1
---
## Hologres
Hologres is a real-time interactive analytics service developed by Alibaba Cloud. It is fully compatible with PostgreSQL 11 and integrates seamlessly with the big data ecosystem.
Hologres sample connection parameters:
- **User Name**: The AccessKey ID of your Alibaba Cloud account.
- **Password**: The AccessKey secret of your Alibaba Cloud account.
- **Database Host**: The public endpoint of the Hologres instance.
- **Database Name**: The name of the Hologres database.
- **Port**: The port number of the Hologres instance.
The connection string looks like:
```
postgresql+psycopg2://{username}:{password}@{host}:{port}/{database}
```

View File

@@ -0,0 +1,23 @@
---
title: IBM DB2
hide_title: true
sidebar_position: 23
version: 1
---
## IBM DB2
The [IBM_DB_SA](https://github.com/ibmdb/python-ibmdbsa/tree/master/ibm_db_sa) library provides a
Python / SQLAlchemy interface to IBM Data Servers.
Here's the recommended connection string:
```
db2+ibm_db://{username}:{password}@{hostname}:{port}/{database}
```
There are two DB2 dialect versions implemented in SQLAlchemy. If you are connecting to a DB2 version without `LIMIT [n]` syntax, the recommended connection string to be able to use SQL Lab is:
```
ibm_db_sa://{username}:{password}@{hostname}:{port}/{database}
```

View File

@@ -0,0 +1,16 @@
---
title: Apache Impala
hide_title: true
sidebar_position: 9
version: 1
---
## Apache Impala
The recommended connector library for Apache Impala is [impyla](https://github.com/cloudera/impyla).
The expected connection string is formatted as follows:
```
impala://{hostname}:{port}/{database}
```

View File

@@ -0,0 +1,73 @@
---
title: Installing Database Drivers
hide_title: true
sidebar_position: 1
version: 1
---
## Install Database Drivers
Superset requires a Python DB-API database driver and a SQLAlchemy
dialect to be installed for each datastore you want to connect to.
You can read more [here](/docs/databases/docker-add-drivers) about how to
install new database drivers into your Superset configuration.
### Supported Databases and Dependencies
Superset does not ship bundled with connectivity to databases, except for SQLite,
which is part of the Python standard library. You'll need to install the required packages for the database you want to use as your metadata database as well as the packages needed to connect to the databases you want to access through Superset.
Here is a list of some of the recommended packages:
| Database | PyPI package | Connection String |
| --------------------------------------------------------- | ---------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
| [Amazon Athena](/docs/databases/athena)                   | `pip install "PyAthenaJDBC>1.0.9"` , `pip install "PyAthena>1.2.0"`                 | `awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...` |
| [Amazon Redshift](/docs/databases/redshift) | `pip install sqlalchemy-redshift` | ` redshift+psycopg2://<userName>:<DBPassword>@<AWS End Point>:5439/<Database Name>` |
| [Apache Drill](/docs/databases/drill) | `pip install sqlalchemy-drill` | `drill+sadrill:// For JDBC drill+jdbc://` |
| [Apache Druid](/docs/databases/druid) | `pip install pydruid` | `druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql` |
| [Apache Hive](/docs/databases/hive) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` |
| [Apache Impala](/docs/databases/impala) | `pip install impyla` | `impala://{hostname}:{port}/{database}` |
| [Apache Kylin](/docs/databases/kylin) | `pip install kylinpy` | `kylin://<username>:<password>@<hostname>:<port>/<project>?<param1>=<value1>&<param2>=<value2>` |
| [Apache Pinot](/docs/databases/pinot) | `pip install pinotdb` | `pinot://BROKER:5436/query?server=http://CONTROLLER:5983/` |
| [Apache Solr](/docs/databases/solr) | `pip install sqlalchemy-solr` | `solr://{username}:{password}@{hostname}:{port}/{server_path}/{collection}` |
| [Apache Spark SQL](/docs/databases/spark-sql) | `pip install pyhive` | `hive://hive@{hostname}:{port}/{database}` |
| [Ascend.io](/docs/databases/ascend) | `pip install impyla` | `ascend://{username}:{password}@{hostname}:{port}/{database}?auth_mechanism=PLAIN;use_ssl=true` |
| [Azure MS SQL](/docs/databases/sql-server) | `pip install pymssql` | `mssql+pymssql://UserName@presetSQL:TestPassword@presetSQL.database.windows.net:1433/TestSchema` |
| [Big Query](/docs/databases/bigquery) | `pip install pybigquery` | `bigquery://{project_id}` |
| [ClickHouse](/docs/databases/clickhouse) | `pip install clickhouse-driver==0.2.0 && pip install clickhouse-sqlalchemy==0.1.6` | `clickhouse+native://{username}:{password}@{hostname}:{port}/{database}` |
| [CockroachDB](/docs/databases/cockroachdb) | `pip install cockroachdb` | `cockroachdb://root@{hostname}:{port}/{database}?sslmode=disable` |
| [Dremio](/docs/databases/dremio) | `pip install sqlalchemy_dremio` | `dremio://user:pwd@host:31010/` |
| [Elasticsearch](/docs/databases/elasticsearch) | `pip install elasticsearch-dbapi` | `elasticsearch+http://{user}:{password}@{host}:9200/` |
| [Exasol](/docs/databases/exasol) | `pip install sqlalchemy-exasol` | `exa+pyodbc://{username}:{password}@{hostname}:{port}/my_schema?CONNECTIONLCALL=en_US.UTF-8&driver=EXAODBC` |
| [Google Sheets](/docs/databases/google-sheets) | `pip install shillelagh[gsheetsapi]` | `gsheets://` |
| [Firebolt](/docs/databases/firebolt) | `pip install firebolt-sqlalchemy` | `firebolt://{username}:{password}@{database} or firebolt://{username}:{password}@{database}/{engine_name}` |
| [Hologres](/docs/databases/hologres) | `pip install psycopg2` | `postgresql+psycopg2://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [IBM Db2](/docs/databases/ibm-db2) | `pip install ibm_db_sa` | `db2+ibm_db://` |
| [IBM Netezza Performance Server](/docs/databases/netezza) | `pip install nzalchemy` | `netezza+nzpy://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [MySQL](/docs/databases/mysql) | `pip install mysqlclient` | `mysql://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [Oracle](/docs/databases/oracle) | `pip install cx_Oracle` | `oracle://` |
| [PostgreSQL](/docs/databases/postgres) | `pip install psycopg2` | `postgresql://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
| [Trino](/docs/databases/trino) | `pip install sqlalchemy-trino` | `trino://{username}:{password}@{hostname}:{port}/{catalog}` |
| [Presto](/docs/databases/presto) | `pip install pyhive` | `presto://` |
| [SAP Hana](/docs/databases/hana) | `pip install hdbcli sqlalchemy-hana or pip install apache-superset[hana]` | `hana://{username}:{password}@{host}:{port}` |
| [Snowflake](/docs/databases/snowflake) | `pip install snowflake-sqlalchemy` | `snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}` |
| SQLite | | `sqlite://` |
| [SQL Server](/docs/databases/sql-server) | `pip install pymssql` | `mssql://` |
| [Teradata](/docs/databases/teradata) | `pip install sqlalchemy-teradata` | `teradata://{user}:{password}@{host}` |
| [Vertica](/docs/databases/vertica) | `pip install sqlalchemy-vertica-python` | `vertica+vertica_python://<UserName>:<DBPassword>@<Database Host>/<Database Name>` |
---
Note that many other databases are supported, the main criterion being the existence of a functional
SQLAlchemy dialect and Python driver. Searching for the keyword "sqlalchemy + (database name)"
should help get you to the right place.
If your database or data engine isn't on the list but a SQL interface
exists, please file an issue on the
[Superset GitHub repo](https://github.com/apache/superset/issues), so we can work on documenting and
supporting it.
[StackOverflow](https://stackoverflow.com/questions/tagged/apache-superset+superset) and the
[Superset community Slack](https://join.slack.com/t/apache-superset/shared_invite/zt-uxbh5g36-AISUtHbzOXcu0BIj7kgUaw)
are great places to get help with connecting to databases in Superset.

View File

@@ -0,0 +1,17 @@
---
title: Apache Kylin
hide_title: true
sidebar_position: 11
version: 1
---
## Apache Kylin
The recommended connector library for Apache Kylin is
[kylinpy](https://github.com/Kyligence/kylinpy).
The expected connection string is formatted as follows:
```
kylin://<username>:<password>@<hostname>:<port>/<project>?<param1>=<value1>&<param2>=<value2>
```

View File

@@ -0,0 +1,29 @@
---
title: MySQL
hide_title: true
sidebar_position: 25
version: 1
---
## MySQL
The recommended connector library for MySQL is [mysqlclient](https://pypi.org/project/mysqlclient/).
Here's the connection string:
```
mysql://{username}:{password}@{host}/{database}
```
Host:
- For Localhost or Docker running Linux: `localhost` or `127.0.0.1`
- For On Prem: IP address or Host name
- For Docker running in OSX: `docker.for.mac.host.internal`
Port: `3306` by default
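Putting host and port together, a concrete connection string with placeholder credentials looks like:
```
mysql://superset_user:superset_password@127.0.0.1:3306/superset?charset=utf8
```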
One problem with `mysqlclient` is that it will fail to connect to newer MySQL databases using `caching_sha2_password` for authentication, since the plugin is not included in the client. In this case, you should use [mysql-connector-python](https://pypi.org/project/mysql-connector-python/) instead:
```
mysql+mysqlconnector://{username}:{password}@{host}/{database}
```

View File

@@ -0,0 +1,17 @@
---
title: IBM Netezza Performance Server
hide_title: true
sidebar_position: 24
version: 1
---
## IBM Netezza Performance Server
The [nzalchemy](https://pypi.org/project/nzalchemy/) library provides a
Python / SQLAlchemy interface to IBM Netezza Performance Server (aka Netezza).
Here's the recommended connection string:
```
netezza+nzpy://{username}:{password}@{hostname}:{port}/{database}
```

View File

@@ -0,0 +1,17 @@
---
title: Oracle
hide_title: true
sidebar_position: 26
version: 1
---
## Oracle
The recommended connector library is
[cx_Oracle](https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html).
The connection string is formatted as follows:
```
oracle://<username>:<password>@<hostname>:<port>
```

View File

@@ -0,0 +1,16 @@
---
title: Apache Pinot
hide_title: true
sidebar_position: 12
version: 1
---
## Apache Pinot
The recommended connector library for Apache Pinot is [pinotdb](https://pypi.org/project/pinotdb/).
The expected connection string is formatted as follows:
```
pinot+http://controller:5436/query?server=http://controller:5983/
```

View File

@@ -0,0 +1,42 @@
---
title: Postgres
hide_title: true
sidebar_position: 27
version: 1
---
## Postgres
Note that, if you're using docker-compose, the Postgres connector library [psycopg2](https://www.psycopg.org/docs/)
comes out of the box with Superset.
Postgres sample connection parameters:
- **User Name**: UserName
- **Password**: DBPassword
- **Database Host**:
- For Localhost: localhost or 127.0.0.1
- For On Prem: IP address or Host name
- For AWS: Endpoint
- **Database Name**: Database Name
- **Port**: default 5432
The connection string looks like:
```
postgresql://{username}:{password}@{host}:{port}/{database}
```
You can require SSL by adding `?sslmode=require` at the end:
```
postgresql://{username}:{password}@{host}:{port}/{database}?sslmode=require
```
You can read about the other SSL modes that Postgres supports in
[Table 31-1 from this documentation](https://www.postgresql.org/docs/9.1/libpq-ssl.html).
More information about PostgreSQL connection options can be found in the
[SQLAlchemy docs](https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#module-sqlalchemy.dialects.postgresql.psycopg2)
and the
[PostgreSQL docs](https://www.postgresql.org/docs/9.1/libpq-connect.html#LIBPQ-PQCONNECTDBPARAMS).

View File

@@ -0,0 +1,37 @@
---
title: Presto
hide_title: true
sidebar_position: 28
version: 1
---
## Presto
The [pyhive](https://pypi.org/project/PyHive/) library is the recommended way to connect to Presto through SQLAlchemy.
The expected connection string is formatted as follows:
```
presto://{hostname}:{port}/{database}
```
You can pass in a username and password as well:
```
presto://{username}:{password}@{hostname}:{port}/{database}
```
Here is an example connection string with values:
```
presto://datascientist:securepassword@presto.example.com:8080/hive
```
By default Superset assumes the most recent version of Presto is being used when querying the
datasource. If you're using an older version of Presto, you can configure it in the extra parameter:
```
{
    "version": "0.123"
}
```

View File

@@ -0,0 +1,25 @@
---
title: Amazon Redshift
hide_title: true
sidebar_position: 5
version: 1
---
## AWS Redshift
The [sqlalchemy-redshift](https://pypi.org/project/sqlalchemy-redshift/) library is the recommended
way to connect to Redshift through SQLAlchemy.
You'll need the following setting values to form the connection string:
- **User Name**: userName
- **Password**: DBPassword
- **Database Host**: AWS Endpoint
- **Database Name**: Database Name
- **Port**: default 5439
Here's what the connection string looks like:
```
redshift+psycopg2://<userName>:<DBPassword>@<AWS End Point>:5439/<Database Name>
```

View File

@@ -0,0 +1,16 @@
---
title: Rockset
hide_title: true
sidebar_position: 35
version: 1
---
## Rockset
The connection string for Rockset is:
```
rockset://apikey:{your-apikey}@api.rs2.usw2.rockset.com/
```
For more complete instructions, we recommend the [Rockset documentation](https://docs.rockset.com/apache-superset/).

View File

@@ -0,0 +1,31 @@
---
title: Snowflake
hide_title: true
sidebar_position: 29
version: 1
---
## Snowflake
The recommended connector library for Snowflake is
[snowflake-sqlalchemy](https://pypi.org/project/snowflake-sqlalchemy/1.2.4/)<=1.2.4 (this version cap is required until Superset migrates to SQLAlchemy>=1.4.0).
The connection string for Snowflake looks like this:
```
snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}
```
The schema is not necessary in the connection string, as it is defined per table/query. The role and
warehouse can be omitted if defaults are defined for the user, i.e.
```
snowflake://{user}:{password}@{account}.{region}/{database}
```
Make sure the user has privileges to access and use all required
databases/schemas/tables/views/warehouses, as the Snowflake SQLAlchemy engine does not test for
user/role rights during engine creation by default. However, when pressing the “Test Connection”
button in the Create or Edit Database dialog, user/role credentials are validated by passing
`"validate_default_parameters": True` to the `connect()` method during engine creation. If the user/role
is not authorized to access the database, an error is recorded in the Superset logs.

View File

@@ -0,0 +1,17 @@
---
title: Apache Solr
hide_title: true
sidebar_position: 13
version: 1
---
## Apache Solr
The [sqlalchemy-solr](https://pypi.org/project/sqlalchemy-solr/) library provides a
Python / SQLAlchemy interface to Apache Solr.
The connection string for Solr looks like this:
```
solr://{username}:{password}@{host}:{port}/{server_path}/{collection}[/?use_ssl=true|false]
```

View File

@@ -0,0 +1,16 @@
---
title: Apache Spark SQL
hide_title: true
sidebar_position: 14
version: 1
---
## Apache Spark SQL
The recommended connector library for Apache Spark SQL is [pyhive](https://pypi.org/project/PyHive/).
The expected connection string is formatted as follows:
```
hive://hive@{hostname}:{port}/{database}
```

View File

@@ -0,0 +1,16 @@
---
title: Microsoft SQL Server
hide_title: true
sidebar_position: 30
version: 1
---
## SQL Server
The recommended connector library for SQL Server is [pymssql](https://github.com/pymssql/pymssql).
The connection string for SQL Server looks like this:
```
mssql+pymssql://<Username>:<Password>@<Host>:<Port-default:1433>/<Database Name>/?Encrypt=yes
```

View File

@@ -0,0 +1,28 @@
---
title: Teradata
hide_title: true
sidebar_position: 31
version: 1
---
## Teradata
The recommended connector library is
[sqlalchemy-teradata](https://github.com/Teradata/sqlalchemy-teradata).
The connection string for Teradata looks like this:
```
teradata://{user}:{password}@{host}
```
Note: It's required to have Teradata ODBC drivers installed and environment variables configured for
the SQLAlchemy dialect to work properly. Teradata ODBC drivers are available here:
https://downloads.teradata.com/download/connectivity/odbc-driver/linux
Required environment variables:
```
export ODBCINI=/.../teradata/client/ODBC_64/odbc.ini
export ODBCINST=/.../teradata/client/ODBC_64/odbcinst.ini
```

View File

@@ -0,0 +1,27 @@
---
title: Trino
hide_title: true
sidebar_position: 34
version: 1
---
## Trino
Superset supports Trino version 352 and higher.
The [sqlalchemy-trino](https://pypi.org/project/sqlalchemy-trino/) library is the recommended way to connect to Trino through SQLAlchemy.
The expected connection string is formatted as follows:
```
trino://{username}:{password}@{hostname}:{port}/{catalog}
```
If you are running Trino with Docker on your local machine, please use the following connection URL:
```
trino://trino@host.docker.internal:8080
```
Reference:
[Trino-Superset-Podcast](https://trino.io/episodes/12.html)

View File

@@ -0,0 +1,31 @@
---
title: Vertica
hide_title: true
sidebar_position: 32
version: 1
---
## Vertica
The recommended connector library is
[sqlalchemy-vertica-python](https://pypi.org/project/sqlalchemy-vertica-python/). The
[Vertica](http://www.vertica.com/) connection parameters are:
- **User Name:** UserName
- **Password:** DBPassword
- **Database Host:**
- For Localhost: localhost or 127.0.0.1
- For On Prem: IP address or Host name
- For Cloud: IP Address or Host Name
- **Database Name:** Database Name
- **Port:** default 5433
The connection string is formatted as follows:
```
vertica+vertica_python://{username}:{password}@{host}/{database}
```
Other parameters:
- **Load Balancer:** Backup Host