09 Operations and Pipelines

When exploring a dataset interactively, you often end up interleaving visualization and analysis code. In HoloViews your visualization and your data are one and the same, so analyses and data transformations can be applied directly to the visualizable data. For that purpose HoloViews provides operations, which can implement any analysis or data transformation you might want to perform. An operation takes a HoloViews Element and returns another Element, either of the same type or of a new type, depending on the operation. We'll illustrate operations and pipelines using a variety of libraries, imported below.
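As a minimal sketch of this Element-in, Element-out pattern (the data and parameter choices here are illustrative, not part of this guide), the rolling operation takes a noisy Curve and returns a new Curve holding the smoothed data:

In [ ]:
import numpy as np
import holoviews as hv
from holoviews.operation.timeseries import rolling

hv.extension('bokeh')

# Build a noisy sine Curve element
xs = np.linspace(0, 10, 200)
ys = np.sin(xs) + np.random.randn(200) * 0.2
noisy = hv.Curve((xs, ys), 'x', 'y')

# Applying the operation returns a new Curve with the smoothed values
smoothed = rolling(noisy, rolling_window=10)
noisy * smoothed  # overlay the input and the result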

Because operations understand HoloViews data structures, you can apply them to large collections of data stored in HoloMap and DynamicMap containers, and when applied to a DynamicMap they are evaluated lazily. This makes it possible to chain multiple operations into a data analysis, processing, and visualization pipeline, e.g. to drive the operation of a dashboard.

Pipelines built from DynamicMaps and HoloViews operations are also useful for caching intermediate results and for just-in-time computation, because they lazily (re)compute only the parts of the pipeline that have changed.
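The following rough sketch illustrates both points (the random-walk data and parameter values are illustrative assumptions, not taken from this guide): an operation applied to a HoloMap maps over every Element it contains, while a pipeline of operations applied to a DynamicMap stays lazy, so each frame is only generated, smoothed, and scanned for outliers when it is actually requested for display:

In [ ]:
import numpy as np
import holoviews as hv
from holoviews.operation.timeseries import rolling, rolling_outlier_std

hv.extension('bokeh')

def random_walk(seed):
    """Return a Curve of a seeded random walk."""
    np.random.seed(seed)
    return hv.Curve(np.cumsum(np.random.randn(1000)), 'sample', 'value')

# Applied to a HoloMap, the operation is mapped over every Element it holds
hmap = hv.HoloMap({seed: random_walk(seed) for seed in range(3)}, kdims='seed')
smoothed_hmap = rolling(hmap, rolling_window=50)

# Applied to a DynamicMap, the chained operations remain lazy: each frame is
# computed only when the corresponding seed value is displayed
dmap = hv.DynamicMap(random_walk, kdims='seed').redim.values(seed=list(range(10)))
pipeline = rolling(dmap, rolling_window=50) * rolling_outlier_std(dmap, rolling_window=50)
pipeline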

In [1]:
import time
import param
import numpy as np
import pandas as pd
import holoviews as hv
import datashader as ds

from bokeh.sampledata import stocks
from holoviews.operation import decimate
from holoviews.operation.timeseries import rolling, rolling_outlier_std
from holoviews.operation.datashader import datashade, dynspread, aggregate

hv.extension('bokeh')