Date: 2022-01-14 23:33:39
Last year, the Uber team introduced Orbit, a Bayesian time series modeling interface designed to be simple to use, adaptable, interoperable, and fast. Orbit uses probabilistic programming languages (PPLs) for posterior approximation. According to the team, it is the only tool that enables simple model specification and analysis without being restricted to a small number of models.
The Uber team recently released version 1.1 of Orbit, which includes a new syntax for calling models, a redesigned class hierarchy, and the new KTR (Kernel Time-varying Regression) model.
The new version changes the syntax for calling models as a result of the class redesign. The new design centers on three classes that developers and advanced users will notice.
The benefit of this architecture is that it separates model definition from numerical estimation. Additionally, anyone who wants to improve the overall workflow can work on the Forecaster alone.
The new version includes Orbit's kernel-based time-varying regression (KTR) model, which defines a smooth, time-varying representation of regression coefficients using latent variables. These representations are constructed with kernel smoothing. Time-varying regression coefficients can represent systems that change over time in a clean, easy-to-understand fashion.
Some of the key highlights of KTR are mentioned below:
In addition to all this, the model diagnostic and validation tooling has been refactored and improved in version 1.1. One significant change is that users can now choose the format in which posterior samples are extracted and exported. Thanks to a newly supported format, most of the plotting features in the widely used ArviZ package can now be applied. Users can run diagnostics and compare outcomes across models using the information and insights these plots provide.
The Orbit team has built an internal backtesting dashboard to track the accuracy of Orbit models across a variety of datasets. Internal and external datasets are analyzed and fed into predefined models once a week. The backtest results are saved to a database and displayed on the internal monitoring dashboard, which computes metrics such as symmetric MAPE (sMAPE) to check whether changes since the previous Orbit version have affected model performance. Other metrics, such as the successful-run rate, are also reported to ensure that stability and run-time requirements are met. For benchmarking, the dashboard currently includes two prominent Python models (SARIMA and Prophet).
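The sMAPE metric the dashboard tracks is straightforward to compute; a minimal reference implementation follows (this is an illustration, not Orbit's own code):

```python
def smape(actual, forecast):
    """Symmetric MAPE: mean of 2*|F - A| / (|A| + |F|) over all points."""
    assert len(actual) == len(forecast)
    return sum(
        2 * abs(f - a) / (abs(a) + abs(f))
        for a, f in zip(actual, forecast)
    ) / len(actual)

# Forecasts within roughly 10% of the actuals yield a small sMAPE.
print(round(smape([100, 200, 300], [110, 190, 330]), 4))  # → 0.0806
```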
These data-driven success criteria enable close monitoring of model performance. The team plans to expand the backtesting framework to a wider range of models, datasets, and metrics.