mirror of
https://github.com/apache/superset.git
synced 2026-04-18 23:55:00 +00:00
chore(doc): Update BigQuery Connection database connection UI into doc (#17191)
* Update google-bigquery.mdx: Update BigQuery Connection database connection UI
* fix grammar
* fix grammar
* pre-commit prettier

Co-authored-by: Geido <60598000+geido@users.noreply.github.com>
The recommended connector library for BigQuery is
[pybigquery](https://github.com/mxmzdlv/pybigquery).

### Install BigQuery Driver

Follow the steps [here](/docs/databases/dockeradddrivers) about how to
install new database drivers when setting up Superset locally via docker-compose.

```
echo "pybigquery" >> ./docker/requirements-local.txt
```

### Connecting to BigQuery

When adding a new BigQuery connection in Superset, you'll need to add the GCP Service Account
credentials file (as a JSON).

1. Create your Service Account via the Google Cloud Platform control panel, provide it access to the
appropriate BigQuery datasets, and download the JSON configuration file for the service account.

2. In Superset, you can either upload that JSON or add the JSON blob in the following format (this should be the content of your credential JSON file):

```
{
 "credentials_info": <contents of credentials JSON file>
}
```

The resulting file should have this structure:

```
{
 "credentials_info": {
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "...",
  "client_email": "...",
  "client_id": "...",
  "auth_uri": "...",
  "token_uri": "...",
  "auth_provider_x509_cert_url": "...",
  "client_x509_cert_url": "..."
 }
}
```

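The wrapping step above can be sketched in Python: take the contents of the downloaded service-account key and wrap them in the `credentials_info` blob. This is an illustrative sketch, not part of Superset; every key value below is a placeholder standing in for your real credential file.

```python
import json

# Placeholder contents standing in for a downloaded service-account key file.
creds = {
    "type": "service_account",
    "project_id": "my-gcp-project",  # hypothetical project id
    "private_key_id": "...",
    "private_key": "...",
    "client_email": "superset@my-gcp-project.iam.gserviceaccount.com",
    "client_id": "...",
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": "...",
}

# Wrap the key contents in the "credentials_info" structure shown above.
secure_extra = json.dumps({"credentials_info": creds}, indent=2)
print(secure_extra.splitlines()[0])  # {
```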
3. Additionally, you can connect via a SQLAlchemy URI instead.

The connection string for BigQuery looks like:

```
bigquery://{project_id}
```

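As a quick sanity check on the URI shape, SQLAlchemy can parse it without actually connecting (the project id below is a placeholder):

```python
from sqlalchemy.engine.url import make_url

# Parse a BigQuery-style URI; "my-gcp-project" is a placeholder project id.
url = make_url("bigquery://my-gcp-project")
print(url.drivername)  # bigquery
print(url.host)        # my-gcp-project
```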
Go to the **Advanced** tab, add a JSON blob to the **Secure Extra** field in the database configuration form with
the following format:

```
{
 "credentials_info": <contents of credentials JSON file>
}
```

The resulting file should have this structure:

```
{
 "credentials_info": {
  "type": "service_account",
  "project_id": "...",
  "private_key_id": "...",
  "private_key": "...",
  "client_email": "...",
  "client_id": "...",
  "auth_uri": "...",
  "token_uri": "...",
  "auth_provider_x509_cert_url": "...",
  "client_x509_cert_url": "..."
 }
}
```

You should then be able to connect to your BigQuery datasets.
To be able to upload CSV or Excel files to BigQuery in Superset, you'll also need to add the
[pandas_gbq](https://github.com/pydata/pandas-gbq) library.
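For example, with pandas_gbq installed, an upload from a DataFrame looks roughly like the sketch below. The dataset, table, and project names are hypothetical, and the upload call itself is commented out so the sketch stays runnable without GCP credentials:

```python
import pandas as pd

# Sample data standing in for an uploaded CSV/Excel file.
df = pd.DataFrame({"name": ["alice", "bob"], "value": [1, 2]})

# Hypothetical upload (requires pandas_gbq plus valid GCP credentials):
# df.to_gbq("my_dataset.my_table", project_id="my-gcp-project", if_exists="replace")

print(df.shape)  # (2, 2)
```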