traffic on motorways

counting locations

Across Germany there are thousands of counting locations on the main roads, and the count of vehicles crossing each section of the street is [public](https://www.bast.de/BASt_2017/DE/Verkehrstechnik/Fachthemen/v2-verkehrszaehlung/Stundenwerte.html?nn=1817946).

bast_germany BaSt Germany

For each of those locations we select a pair of OpenStreetMap nodes (arrows) of the same street class.

via_tile_selection For the same BaSt location the tile intersects multiple streets

To stabilize across years we build an isocalendar, which represents each date as a week number and weekday. We see that the isocalendar is pretty much stable over the years, with the exception of Easter time (which shifts a lot).

isocal_deviation isocalendar deviation
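As a minimal sketch of that mapping (assuming pandas; the date range is illustrative), each timestamp can be converted to its ISO week number and weekday directly:

```python
import pandas as pd

# hourly timestamps for one year (illustrative range)
idx = pd.date_range("2019-01-01", "2019-12-31 23:00", freq="H")
iso = idx.isocalendar()  # DataFrame with columns: year, week, day (Mon=1 .. Sun=7)

# every date is now addressed by (week number, weekday) instead of (month, day)
print(iso.head())
```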

data set preparation

We take the hourly values of the BaSt counts and split them into weeks. Every week is represented as an image of 7x24 pixels.

time_series image representation of time series
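A possible sketch of this reshaping, assuming a pandas Series `counts` of hourly values indexed by timestamp (the function and variable names are illustrative, not from the original code):

```python
import numpy as np
import pandas as pd

def weekly_images(counts: pd.Series) -> np.ndarray:
    """Split an hourly series into 7x24 images, one per ISO week."""
    df = counts.to_frame("n")
    iso = counts.index.isocalendar()
    df["week"], df["day"] = iso["week"].values, iso["day"].values
    df["hour"] = counts.index.hour
    images = []
    for _, w in df.groupby("week"):
        img = w.pivot_table(index="day", columns="hour", values="n")
        if img.shape == (7, 24):  # keep only complete weeks
            images.append(img.to_numpy())
    return np.stack(images)
```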

The idea is to profit from the performance of convolutional neural networks: we train an autoencoder and learn the periodicity of each counting location.

Convolutional neural networks usually work with larger image sizes, and they suffer from boundary conditions that create a lot of artifacts.

That’s why we introduce the backfold: the operation of adding a strip to each border taken from the opposite edge.

backfold backfolding the image
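Since the backfold wraps each border with the strip from the opposite edge, it amounts to periodic padding; a one-pixel backfold is a one-liner in NumPy (a sketch, not the original implementation):

```python
import numpy as np

week = np.random.rand(7, 24)               # one weekly image
backfolded = np.pad(week, 1, mode="wrap")  # copy opposite edges onto the borders
print(backfolded.shape)                    # (9, 26)
```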

In this way we obtain a new set of images (9x26 pixels)

time_series image representation of time series with backfold

And produce a set of images for the autoencoder

dataset image dataset

model definition

We first define a short convolutional neural network

3d short convNet in 3d
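The exact layer list is not spelled out here, but a short convolutional autoencoder in this spirit could look like the following Keras sketch (filter counts and kernel sizes are assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

def short_autoencoder(h=9, w=26):
    inp = keras.Input(shape=(h, w, 1))
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(inp)
    x = layers.Conv2D(4, 3, activation="relu", padding="same")(x)   # bottleneck
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```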

We then define a slightly more complex network.

3d definition of a conv net in 3d
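A deeper variant, again only as an illustrative sketch, stacks more convolutions around the bottleneck:

```python
from tensorflow import keras
from tensorflow.keras import layers

def deep_autoencoder(h=9, w=26):
    inp = keras.Input(shape=(h, w, 1))
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    x = layers.Conv2D(4, 3, activation="relu", padding="same")(x)    # bottleneck
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model
```

With padding="same" throughout, input and output dimensions match for both the 9x26 and the 7x24 variants.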

In the case of 7x24 pixel matrices we adjust the padding to achieve the same output dimensions.

training

We fit the model and check the training history.

history training history
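A sketch of the fit and the history inspection, assuming a dataset `X` of shape `(n_weeks, 9, 26, 1)` and the models above:

```python
import matplotlib.pyplot as plt

model = short_autoencoder()
history = model.fit(X, X, epochs=300, batch_size=32,
                    validation_split=0.2, verbose=0)

plt.plot(history.history["loss"], label="train")
plt.plot(history.history["val_loss"], label="validation")
plt.xlabel("epoch"); plt.ylabel("mse"); plt.legend()
```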

Around 300 epochs the model is pretty stable and we can see the morphing of the original pictures into the predicted ones.

morphNo raw image morphed into decoded one, no backfold

If we introduce the backfold we obtain slightly more accurate predictions.

morph raw image morphed into decoded one, with backfold

The most complex solution comes with the deeper model

morphConv morphing for convNet

tuning

We worked on tuning the network to prevent the training from falling into a local minimum.

mimimum training is trapped in a local minimum
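One standard safeguard against this, sketched here with Keras callbacks (not necessarily the exact tuning used), is to shrink the learning rate when the validation loss stalls and to keep the best weights:

```python
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

callbacks = [
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=10, min_lr=1e-5),
    EarlyStopping(monitor="val_loss", patience=40, restore_best_weights=True),
]
history = model.fit(X, X, epochs=600, batch_size=32,
                    validation_split=0.2, callbacks=callbacks, verbose=0)
```

Restarting the fit from a few random initializations and keeping the best run is another cheap safeguard.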

results

At first we look at the results for the non-backfolded time series.

shortConv results for the short convolution, no backfold

If we add the backfold we improve both correlation and relative error.

shortConv results for the short convolution, with backfold

The deepest network significantly improves the relative error but, as a trade-off, loses in correlation.

convNet convNet results with backfold
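For reference, the two KPIs as read here are the Pearson correlation between raw and decoded week and a relative error; the exact error definition is not given, so this sketch normalizes the absolute deviation by the total count (an assumption):

```python
import numpy as np

def kpi(raw: np.ndarray, decoded: np.ndarray):
    """Pearson correlation and relative error between two weekly images."""
    r = np.corrcoef(raw.ravel(), decoded.ravel())[0, 1]
    rel_err = np.abs(decoded - raw).sum() / raw.sum()  # assumed normalization
    return r, rel_err
```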

scores

The deepest network drastically improves the relative error while sacrificing correlation.

boxplot_corErr boxplot correlation and error difference between models

Since correlation is not part of the loss function, it stays widely dispersed during optimization.

confInt confidence interval for correlation and relative error

Ranking is not stable among the different methods

sankey Sankey diagram of correlation shift between different methods

Different methods behave differently with respect to the particular location.

sankey sankey diagram of error reshuffling

The deepest network tends to amplify the bad performances in correlation.

parallel parallel diagram of correlation differences

The short backfolded model has the worst performance for the locations that had the best performance in the non-backfolded version.

parallel parallel diagram of relative error

dictionary learning

We perform dictionary learning to find the minimal set of average time series that describes any location with good accuracy. For that we use KMeans:

import numpy as np
from sklearn.cluster import KMeans

clusterer = KMeans(n_clusters=4, init='k-means++', n_init=10, max_iter=600,
                   tol=0.0001, copy_x=True, random_state=None, verbose=2)
yL = np.reshape(YL, (len(YL), YL.shape[1] * YL.shape[2]))  # flatten weekly images
mod = clusterer.fit(yL)
centroids = clusterer.cluster_centers_

We start with the most common time series and we calculate the score of all locations against that cluster.

cluster most frequent cluster
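A sketch of this scoring, reusing `yL`, `mod`, and `centroids` from the snippet above:

```python
import numpy as np

labels, counts = np.unique(mod.labels_, return_counts=True)
top = centroids[labels[np.argmax(counts)]]  # centroid of the most frequent cluster

# correlation of every location-week against the most frequent centroid
scores = np.array([np.corrcoef(week, top)[0, 1] for week in yL])
print((scores > 0.9).mean())  # share of weeks above 0.9
```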

We find that 90% of the locations and weeks have a correlation higher than 0.9.

cluster_histogram kpi distribution for single cluster

A single cluster is already a good description for any other location, but we want to gain more insight into the system. We then move to 2 clusters to capture the most important distinction between locations, which we will call the “touristic” and “commuter” street classes.

cluster the 2 most frequent clusters, touristic and commuter

We can extend the number of clusters, but we don’t significantly improve performance.

cluster the 24 most frequent clusters

If we look at the KPI distributions, 4 clusters are the best trade-off between precision and computation.

cluster_histogram cumulative histogram for correlation and relative error

cluster_histogram histogram for correlation and relative error

If we look at the 4 most frequent clusters we see that they split into 2 touristic and 2 commuter patterns.

cluster the 4 most frequent clusters

We then want to see how often a single location swaps between commuter and touristic, and we see that locations stay strongly polarized throughout the year.

cluster_polarization cluster polarization

If we look at the weekly distribution we see that the commuting pattern resembles our expectations.

commuting_pattern commuting pattern strength through all locations


via nodes

To select the appropriate via nodes we run a Mongo query to download all the nodes close to the reference point. We calculate the orientation and the chirality of the nodes and sort them by street-class importance. For each reference point we associate two via nodes with opposite chirality.
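A minimal pymongo sketch of the neighborhood lookup, assuming a `nodes` collection with a 2dsphere index on a `loc` field (collection, field names, and radius are hypothetical):

```python
from pymongo import MongoClient

client = MongoClient()
nodes = client.osm.nodes  # hypothetical database/collection

def nodes_near(lon: float, lat: float, max_m: float = 500):
    """All OSM nodes within max_m meters of a BaSt reference point."""
    return list(nodes.find({
        "loc": {"$near": {
            "$geometry": {"type": "Point", "coordinates": [lon, lat]},
            "$maxDistance": max_m,
        }}
    }))
```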

We can see that the determination of the via nodes is much more precise than the tile selection.

via_algo identification of via nodes, two opposite chiralities per reference point

The difference is particularly relevant at junctions.

junction via nodes on junctions, via nodes do not count traffic from ramps

morph external data

Once we have found the best performing model, we can morph our input data into the reference data we need.

join_plot distribution of via and tile counts compared to BaSt