August 3, 2017

Nirdizati Runtime

Authors: Andrii Rozumnyi, Ilya Verenich, Simon Raboczi, Marlon Dumas, Marcello La Rosa

Once the predictive models have been created, they are used by the Runtime component to make predictions on ongoing cases. Nirdizati Runtime takes a stream of events produced by an information system, transforms it into a stream of predictions, and visualizes those predictions in a web-based dashboard.
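As a rough illustration of this event-to-prediction transformation, an incoming event and the corresponding prediction message might look as follows. The field names and values are hypothetical; the actual schema depends on the event log and on the indicators being predicted.

# Hypothetical message shapes (illustrative only, not the actual Nirdizati schema).

# An event arriving from the information system:
incoming_event = {
    "case_id": "case_17",
    "activity": "Validate application",
    "timestamp": "2017-08-03T10:15:00Z",
    "resource": "clerk_4",
}

# The corresponding prediction emitted by the pipeline:
outgoing_prediction = {
    "case_id": "case_17",
    "last_event": "Validate application",
    "predicted_remaining_time_hours": 36.5,   # remaining-time indicator
    "predicted_outcome": "accepted",          # case-outcome indicator
}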

The dashboard provides a list of both ongoing and completed cases. For each case, it is also possible to visualize a range of summary statistics, including the number of events in the case, its starting time, and the time of its latest event. For ongoing cases, Nirdizati Runtime shows the predicted values of the performance indicators the user wants to monitor; for completed cases, it shows the actual values of those indicators. In addition to the table view, the dashboard offers other visualization options, such as pie charts for case outcomes and bar charts for case durations.

Process workers and operational managers – the typical users of the Runtime component – can set process performance targets and subscribe to a stream of warnings and alerts generated whenever these targets are predicted to be violated. This enables them to make informed, data-driven decisions and gain better control over process executions. It is especially beneficial for processes where participants have leeway to take corrective actions (for example, in a lead management process).
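As a minimal sketch of how such alerting could be wired up on the consumer side, the snippet below subscribes to a stream of prediction messages and raises a warning whenever a predicted case duration exceeds a user-defined target. The topic name, broker address, and message fields are assumptions for illustration, not the actual Nirdizati configuration.

import json
from kafka import KafkaConsumer  # pip install kafka-python

TARGET_DURATION_HOURS = 48.0  # hypothetical performance target set by the manager

# Topic name and broker address are assumptions for illustration only.
consumer = KafkaConsumer(
    "predictions",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

for message in consumer:
    prediction = message.value
    # Assume each prediction message carries a case id and a predicted duration.
    if prediction.get("predicted_duration_hours", 0.0) > TARGET_DURATION_HOURS:
        print(f"ALERT: case {prediction['case_id']} is predicted to exceed "
              f"{TARGET_DURATION_HOURS} hours")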

Nirdizati Runtime is available at http://dashboard.nirdizati.com/


Architecture

The Runtime component is built on top of the open-source Apache Kafka stream processing platform. The predictor components of the pipeline are the predictive models produced by Nirdizati Training. The topic components are network-accessible queues of JSON messages with publisher/subscriber support. This allows the computationally intensive work of the predictors to be distributed across a cluster of networked computers, providing scalability and fault tolerance. The collator component accumulates the sequence of events-to-date for each case, so that the prediction is a stateless function of the trained predictive model and the case history. This statelessness is what allows the predictors to be freely duplicated and distributed. The joiner component combines the original events with the various predictions, ready for display on the dashboard.
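The sketch below illustrates the collator/predictor idea in Python, assuming kafka-python and a pickled model with a scikit-learn-style predict interface produced by the Training component. The topic names, message fields, model path, and feature encoding are placeholders, not the actual implementation.

import json
import pickle
from collections import defaultdict
from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

# Load a trained model from Nirdizati Training (path and interface are assumptions).
with open("remaining_time_model.pkl", "rb") as f:
    model = pickle.load(f)

def encode_prefix(events):
    """Placeholder feature encoder (assumption): here, just the prefix length."""
    return [len(events)]

consumer = KafkaConsumer(
    "events",                      # hypothetical input topic of raw events
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Collator: accumulate the events observed so far for each ongoing case.
case_histories = defaultdict(list)

for message in consumer:
    event = message.value
    case_id = event["case_id"]
    case_histories[case_id].append(event)

    # Predictor: a pure function of the trained model and the case history,
    # so it can be duplicated and distributed across workers.
    features = encode_prefix(case_histories[case_id])
    predicted = model.predict([features])[0]

    producer.send("predictions", {"case_id": case_id, "prediction": float(predicted)})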


Visualize performance predictions via dashboard

Once the predictive models have been created, they can be deployed to the Runtime predictive monitoring environment of Apromore to make predictions on ongoing cases. The Runtime plugin bundle can be used to stream an event log from the repository, or to hook into an external stream. Either way, the input stream is transformed into a stream of predictions, which is visualized in a web-based dashboard.
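As a rough sketch of the first option, the snippet below replays a CSV event log in timestamp order and publishes each event to the input topic of a Kafka-based pipeline like the one described above. The file name, column names, topic, and broker address are assumptions made for illustration.

import csv
import json
import time
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Replay a CSV event log (column names are assumptions) as a live stream.
with open("event_log.csv", newline="") as f:
    events = sorted(csv.DictReader(f), key=lambda row: row["timestamp"])

for event in events:
    producer.send("events", event)
    time.sleep(0.1)  # throttle to simulate events arriving over time

producer.flush()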

The dashboard provides a list of both ongoing and completed cases. For each case, it is also possible to visualize a range of summary statistics, including the number of events in the case, its starting time, and the time of its latest event. For ongoing cases, the Runtime plugin bundle shows the predicted values of the performance indicators the user wants to monitor; for completed cases, it shows the actual values of those indicators. Color-coding helps users quickly pinpoint potentially problematic cases.

A screencast of this plugin can be found here.


Export performance predictions into CSV

In addition to the dashboard for continuous, real-time process monitoring, the Runtime plugin supports a “regular reports” use case, where users receive CSV reports with the current set of predictions on a regular basis. These reports can be readily imported into common data analytics platforms (e.g. Microsoft Excel, Tableau, QlikView, R) for further exploration and visualization.
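A minimal sketch of such an export could look as follows, assuming the predictions are consumed from the same stream that feeds the dashboard. The topic name, column names, and output path are illustrative.

import csv
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "predictions",                 # hypothetical predictions topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    consumer_timeout_ms=5000,      # stop iterating when no new predictions arrive
)

# Write the current set of predictions to a CSV report (column names are illustrative).
with open("predictions_report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["case_id", "prediction"])
    writer.writeheader()
    for message in consumer:
        writer.writerow({
            "case_id": message.value.get("case_id"),
            "prediction": message.value.get("prediction"),
        })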