mirror of
https://github.com/apache/superset.git
synced 2026-04-16 22:55:52 +00:00
---
title: Frequently Asked Questions
hide_title: true
sidebar_position: 7
---

## Frequently Asked Questions
### Can I join / query multiple tables at one time?

Not in the Explore or Visualization UI. A Superset SQLAlchemy datasource can only be a single table
or a view.

When working with tables, the solution is to materialize a table that contains all the fields
needed for your analysis, most likely through some scheduled batch process.

A view is a simple logical layer that abstracts an arbitrary SQL query as a virtual table. This
allows you to join and union multiple tables, and to apply transformations using arbitrary SQL
expressions. The limitation there is your database performance, as Superset effectively runs a
query on top of your query (the view). A good practice is to limit yourself to joining your main
large table to one or many small tables only, and to avoid using _GROUP BY_ where possible, as
Superset will do its own _GROUP BY_ and doing the work twice might slow down performance.

Whether you use a table or a view, the important factor is whether your database is fast enough to
serve it in an interactive fashion and provide a good user experience in Superset.

However, if you are using SQL Lab, there is no such limitation: you can write a SQL query that
joins multiple tables, as long as your database account has access to them.
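To make the view approach concrete, here is a minimal, self-contained sketch (hypothetical table
and column names; SQLite is used only for illustration) of putting a join behind a view that
Superset could then use as a single datasource:

```python
import sqlite3

# Hypothetical schema: a large fact table joined to a small dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (id INTEGER, country_id INTEGER, amount REAL);
    CREATE TABLE countries (id INTEGER, name TEXT);
    INSERT INTO sales VALUES (1, 1, 10.0), (2, 2, 5.0);
    INSERT INTO countries VALUES (1, 'US'), (2, 'FR');

    -- The view Superset would point at as its (single) datasource:
    CREATE VIEW sales_enriched AS
    SELECT s.id AS id, c.name AS country, s.amount AS amount
    FROM sales s JOIN countries c ON s.country_id = c.id;
""")

rows = conn.execute("SELECT country, amount FROM sales_enriched ORDER BY id").fetchall()
```

Note that the view itself contains no _GROUP BY_; aggregation is left to Superset, as recommended
above.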
### How BIG can my datasource be?

It can be gigantic! Superset acts as a thin layer above your underlying databases or data engines.

As mentioned above, the main criterion is whether your database can execute queries and return
results in a time frame that is acceptable to your users. Many distributed databases out there can
execute queries that scan through terabytes in an interactive fashion.
### How do I create my own visualization?

We recommend reading the instructions in
[Building Custom Viz Plugins](/docs/installation/building-custom-viz-plugins).
### Can I upload and visualize CSV data?

Absolutely! Read the instructions [here](/docs/creating-charts-dashboards/exploring-data) to learn
how to enable and use CSV upload.
### Why are my queries timing out?

There are many reasons why long queries may time out.

For long-running queries in SQL Lab, by default Superset allows a query to run for up to six hours
before it is killed by Celery. If you want to increase the time allowed, you can specify the
timeout in your configuration. For example:

```
SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6
```

Superset runs on the Gunicorn web server, which may time out web requests. If you want to increase
the default (50 seconds), you can specify the timeout when starting the web server with the `-t`
flag, which is expressed in seconds.

```
superset runserver -t 300
```

If you are seeing timeouts (504 Gateway Time-out) when loading a dashboard or an explore slice, you
are probably behind a gateway or proxy server (such as Nginx). If the gateway does not receive a
timely response from the Superset server (which is processing long queries), it will send a 504
status code directly to the client. Superset has a client-side timeout limit to address this issue:
if a query doesn't come back within the client-side timeout (60 seconds by default), Superset will
display a warning message to preempt the gateway timeout message. If you have a longer gateway
timeout limit, you can change the timeout setting in **superset_config.py**:

```
SUPERSET_WEBSERVER_TIMEOUT = 60
```
### Why is the map not visible in the geospatial visualization?

You need to register a free account at [Mapbox.com](https://www.mapbox.com), obtain an API key, and
add it to **superset_config.py** at the key `MAPBOX_API_KEY`:

```
MAPBOX_API_KEY = "longstringofalphanumer1c"
```
### How to add dynamic filters to a dashboard?

Use the **Filter Box** widget, build a slice, and add it to your dashboard.

The **Filter Box** widget allows you to define a query to populate dropdowns that can be used for
filtering. To build the list of distinct values, we run a query and sort the results by the metric
you provide, in descending order.

The widget also has a **Date Filter** checkbox, which enables time filtering capabilities on your
dashboard. After checking the box and refreshing, you'll see a from and a to dropdown show up.

By default, the filtering will be applied to all the slices that are built on top of a datasource
that shares the column name that the filter is based on. It's also a requirement for that column to
be checked as "filterable" in the Columns tab of the table editor.

But what if you don't want certain widgets to get filtered on your dashboard? You can do that by
editing your dashboard and, in the form, editing the JSON Metadata field, specifically the
`filter_immune_slices` key, which receives an array of slice IDs that should never be affected by
any dashboard-level filtering.

```
{
    "filter_immune_slices": [324, 65, 92],
    "expanded_slices": {},
    "filter_immune_slice_fields": {
        "177": ["country_name", "__time_range"],
        "32": ["__time_range"]
    },
    "timed_refresh_immune_slices": [324]
}
```

In the JSON blob above, slices 324, 65 and 92 won't be affected by any dashboard-level filtering.

Now note the `filter_immune_slice_fields` key. This one allows you to be more specific and define,
for a specific slice_id, which filter fields should be disregarded.

Note the use of the `__time_range` keyword, which is reserved for dealing with the time boundary
filtering mentioned above.

But what happens with filtering when dealing with slices coming from different tables or databases?
If the column name is shared, the filter will be applied; it's as simple as that.
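The immunity rules above can be summarized in a small sketch (the function is hypothetical, shown
only to illustrate how the two keys interact):

```python
# Hypothetical illustration of the dashboard-level filtering rules described
# above: a slice ignores a filter if it is globally immune, or if that
# particular field is listed for it in filter_immune_slice_fields.
def filter_applies(metadata: dict, slice_id: int, field: str) -> bool:
    if slice_id in metadata.get("filter_immune_slices", []):
        return False
    immune_fields = metadata.get("filter_immune_slice_fields", {})
    return field not in immune_fields.get(str(slice_id), [])

metadata = {
    "filter_immune_slices": [324, 65, 92],
    "filter_immune_slice_fields": {"177": ["country_name", "__time_range"]},
}
```

With this metadata, slice 324 ignores every filter, while slice 177 ignores only `country_name`
and `__time_range`.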
### How to limit the timed refresh on a dashboard?

By default, the dashboard timed refresh feature allows you to automatically re-query every slice on
a dashboard according to a set schedule. Sometimes, however, you won't want all of the slices to be
refreshed, especially if some data is slow-moving or some slices run heavy queries. To exclude
specific slices from the timed refresh process, add the `timed_refresh_immune_slices` key to the
dashboard's JSON Metadata field:

```
{
    "filter_immune_slices": [],
    "expanded_slices": {},
    "filter_immune_slice_fields": {},
    "timed_refresh_immune_slices": [324]
}
```

In the example above, if a timed refresh is set for the dashboard, then every slice except 324 will
be automatically re-queried on schedule.

Slice refresh will also be staggered over the specified period. You can turn off this staggering by
setting `stagger_refresh` to false, and modify the stagger period by setting `stagger_time` to a
value in milliseconds in the JSON Metadata field:

```
{
    "stagger_refresh": false,
    "stagger_time": 2500
}
```

Here, the entire dashboard will refresh at once if periodic refresh is on. The stagger time of 2.5
seconds is ignored.
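To picture what staggering does, here is one way the spreading could work (a hypothetical sketch,
not Superset's actual scheduling code; the function name and spacing formula are assumptions):

```python
# Hypothetical sketch: spread slice refreshes evenly over stagger_time
# milliseconds; with staggering off, every slice refreshes immediately.
def refresh_delays(num_slices: int, stagger_time_ms: int, stagger: bool = True) -> list:
    if not stagger or num_slices <= 1:
        return [0] * num_slices
    step = stagger_time_ms / (num_slices - 1)
    return [round(i * step) for i in range(num_slices)]
```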
### Why does 'flask fab' or Superset freeze/hang/not respond when started (my home directory is NFS mounted)?

By default, Superset creates and uses an SQLite database at `~/.superset/superset.db`. SQLite is
known to [not work well if used on NFS](https://www.sqlite.org/lockingv3.html) due to broken file
locking implementations on NFS.

You can override this path using the **SUPERSET_HOME** environment variable.

Another workaround is to change where Superset stores the SQLite database by adding the following
to `superset_config.py`:

```
SQLALCHEMY_DATABASE_URI = 'sqlite:////new/location/superset.db'
```

You can read more about customizing Superset using the configuration file
[here](/docs/installation/configuring-superset).
### What if the table schema changed?

Table schemas evolve, and Superset needs to reflect that. It's pretty common in the life cycle of a
dashboard to want to add a new dimension or metric. To get Superset to discover your new columns,
all you have to do is go to **Data -> Datasets**, click the edit icon next to the dataset whose
schema has changed, and hit **Sync columns from source** in the **Columns** tab. Behind the scenes,
the new columns will get merged in. Following this, you may want to re-edit the table to configure
the Columns tab, check the appropriate boxes, and save again.
### What database engine can I use as a backend for Superset?

To clarify, the database backend is an OLTP database used by Superset to store its internal
information like your list of users and your slice and dashboard definitions.

Superset is tested using MySQL, PostgreSQL, and SQLite as its backend. It's recommended that you
install Superset on one of these database servers for production.

Using a column-store, non-OLTP database like Vertica, Redshift, or Presto as a database backend
simply won't work, as these databases are not designed for this type of workload. Installation on
Oracle, Microsoft SQL Server, or other OLTP databases may work but isn't tested.

Please note that pretty much any database that has a SQLAlchemy integration should work perfectly
fine as a datasource for Superset, just not as the OLTP backend.
### How can I configure OAuth authentication and authorization?

You can take a look at this Flask-AppBuilder
[configuration example](https://github.com/dpgaspar/Flask-AppBuilder/blob/master/examples/oauth/config.py).
### How can I set a default filter on my dashboard?

Simply apply the filter and save the dashboard while the filter is active.
### Is there a way to force the use of specific colors?

It is possible on a per-dashboard basis by providing a mapping of labels to colors in the JSON
Metadata attribute using the `label_colors` key.

```
{
    "label_colors": {
        "Girls": "#FF69B4",
        "Boys": "#ADD8E6"
    }
}
```
### Does Superset work with [insert database engine here]?

The [Connecting to Databases section](/docs/databases/installing-database-drivers) provides the best
overview for supported databases. Database engines not listed on that page may work too. We rely on
the community to contribute to this knowledge base.

For a database engine to be supported in Superset through the SQLAlchemy connector, it requires
having a Python-compliant [SQLAlchemy dialect](https://docs.sqlalchemy.org/en/13/dialects/) as well
as a [DBAPI driver](https://www.python.org/dev/peps/pep-0249/) defined. Databases that have limited
SQL support may work as well. For instance, it's possible to connect to Druid through the SQLAlchemy
connector even though Druid does not support joins and subqueries. Another key element for a
database to be supported is the Superset Database Engine Specification interface. This interface
allows for defining database-specific configurations and logic that go beyond the SQLAlchemy and
DBAPI scope. This includes features like:

- date-related SQL functions that allow Superset to fetch different time granularities when running
  time-series queries
- whether the engine supports subqueries; if false, Superset may run 2-phase queries to compensate
  for the limitation
- methods around processing logs and inferring the percentage of completion of a query
- technicalities as to how to handle cursors and connections if the driver is not standard DBAPI
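As an illustration of the first bullet, an engine spec essentially maps Superset time grains to
engine-specific SQL. The sketch below is hypothetical (not the actual `db_engine_specs` interface),
using Postgres-style `DATE_TRUNC` for the example expressions:

```python
# Hypothetical mapping of ISO-8601 time grains to date-truncation SQL.
TIME_GRAIN_EXPRESSIONS = {
    "P1D": "DATE_TRUNC('day', {col})",
    "P1M": "DATE_TRUNC('month', {col})",
}

def time_grain_sql(col: str, grain: str) -> str:
    """Render the SQL expression for a given time column and grain."""
    return TIME_GRAIN_EXPRESSIONS[grain].format(col=col)
```

A different engine would supply different expressions for the same grains, which is exactly the
kind of database-specific logic the engine spec layer exists to hold.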
Beyond the SQLAlchemy connector, it's also possible, though much more involved, to extend Superset
and write your own connector. The only example of this at the moment is the Druid connector, which
is getting superseded by Druid's growing SQL support and the recent availability of a DBAPI and
SQLAlchemy driver. If the database you are considering integrating has any kind of SQL support,
it's probably preferable to go the SQLAlchemy route. Note that for a native connector to be
possible, the database needs to support running OLAP-type queries and should be able to do things
that are typical in basic SQL:

- aggregate data
- apply filters
- apply HAVING-type filters
- be schema-aware, expose columns and types
### Does Superset offer a public API?

Yes, there is a public REST API, and the surface of that API is expanding steadily. You can read
more about this API and interact with it using Swagger [here](/docs/rest-api).

The original vision for the collection of endpoints under **/api/v1** was specified in
[SIP-17](https://github.com/apache/superset/issues/7259), and constant progress has been made to
cover more and more use cases.

The available API is documented using [Swagger](https://swagger.io/), and the documentation can be
made available under **/swagger/v1** by enabling the following flag in `superset_config.py`:

```
FAB_API_SWAGGER_UI = True
```

There are other undocumented (private) ways to interact with Superset programmatically that offer
no guarantees and are not recommended, but may fit your use case temporarily:

- using the ORM (SQLAlchemy) directly
- using the internal FAB ModelView API (to be deprecated in Superset)
- altering the source code in your fork
### What does the Hours Offset in the Edit Dataset view do?

In the Edit Dataset view, you can specify a time offset. This field lets you configure the number
of hours to be added to or subtracted from the time column. This can be used, for example, to
convert UTC time to local time.