
Unconventional WiFi use: watch MCE conference heat up

In January 2014, we organized the first edition of the MCE conference in Warsaw, Poland. Even though it was the first time we had organized such an event, it was really successful. Now, while we organize the second edition, we analysed the first one and found that we can show, in a rather cool way, what the conference's 450 attendees found most interesting.


You can see the results of our work here (click to see the interactive, animated version!):


You can see the heatmap of people changing over time, along with screenshots of the talks being given at each moment (we’ve published all the talks for free!).

If you want to find out how we did it, read on.

Devices and data gathering

We knew from the very beginning that WiFi is really important to our attendees, and we could not simply outsource it. That’s why we decided to do it ourselves. After the event we gathered a lot of feedback from attendees, and one recurring comment was that the WiFi was really great, especially compared to other events people had attended.

We needed to be able to provision a new device and reconfigure part of the network on the fly. The OpenWrt configuration consists of simple text files that can be easily managed with a few bash scripts over SSH. Our great team at Polidea, with Kamil and Maciek leading the effort, managed to build a complete wireless system with help from Warsaw’s hackerspace.

That turned out to be a really good decision: on top of a really great experience for our attendees, we had a chance to do more than simply deliver internet access… we could log quite a lot of data. But we logged only the absolute minimum needed for useful analysis: MAC addresses (to distinguish between devices), the time of the last data transfer, and signal strength (which we ultimately did not use). We did not log URLs, session times, or data volumes: nothing that could be considered private data. For processing, we then aggregated the data and removed the MAC addresses to make sure we were not violating anyone’s privacy.

On every hotspot we put a simple bash script in /www/cgi-bin. This is a special folder exposed over HTTP; when a script there has the executable flag set, it can run commands on the router. In our case, the script simply printed all associated stations together with how long ago each one was last seen transmitting. On the other side we had a server with a MySQL database and a second script that crawled all the devices every minute. The received data was then inserted into the database for later analysis.
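A minimal sketch of the collector side could look like this. The hostnames, the CGI output format and the use of sqlite3 instead of the MySQL database we actually used are all assumptions made to keep the example self-contained:

```python
# Hypothetical collector: poll each hotspot's CGI endpoint once a minute
# and store the sightings. sqlite3 stands in for the MySQL we actually used.
import sqlite3
import time
import urllib.request

# Assumed hotspot addresses; the real network had one per cinema room.
HOTSPOTS = {
    "room-a": "http://192.168.1.10/cgi-bin/stations",
    "room-b": "http://192.168.1.11/cgi-bin/stations",
}

def parse_stations(text):
    """Parse lines like 'aa:bb:cc:dd:ee:ff 12' (MAC, seconds since last seen)."""
    return [(mac, int(age)) for mac, age in
            (line.split() for line in text.strip().splitlines())]

def poll_once(db):
    now = int(time.time())
    for room, url in HOTSPOTS.items():
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                stations = parse_stations(resp.read().decode())
        except OSError:
            continue  # hotspot unreachable this round
        for mac, age in stations:
            db.execute(
                "INSERT INTO sightings (ts, room, mac, age) VALUES (?, ?, ?, ?)",
                (now, room, mac, age),
            )
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sightings (ts INTEGER, room TEXT, mac TEXT, age INTEGER)")
# In production, a cron entry called the equivalent of poll_once(db) every minute.
```
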

We later aggregated this data into 5-minute slots with the number of devices connected in each cinema room. The aggregation was needed to remove fluctuations, e.g. random reconnections at the edge of wireless coverage.
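The aggregation step itself fits in a few lines. This is a sketch with an assumed record layout, not our actual script:

```python
# Minimal sketch of the aggregation: bucket raw sightings into 5-minute
# slots and count distinct devices per room (record layout assumed).
from collections import defaultdict

SLOT = 5 * 60  # slot length in seconds

def aggregate(sightings):
    """sightings: iterable of (ts, room, mac) tuples.
    Returns {(slot_start, room): device_count}."""
    devices = defaultdict(set)
    for ts, room, mac in sightings:
        slot = ts - ts % SLOT           # round down to the 5-minute boundary
        devices[(slot, room)].add(mac)  # a set deduplicates repeated sightings
    return {key: len(macs) for key, macs in devices.items()}
```

Counting distinct MAC addresses per slot, rather than raw sightings, is what smooths out devices that reconnect several times within one interval.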

It was a DevOps approach from the very beginning.

Data analysis

So what can you do when you run the WiFi hotspots at a conference? It turns out you can do some quite interesting visualisation and analysis.

First of all, our network was open: you did not need a password to access it, you just had to confirm that you agreed to the network’s terms of use. More than half of the people attending the conference connected to the network because of that (people got notifications on their devices that an open network was available). It was also much easier to communicate (i.e. we did not have to communicate it at all ;) ). Thanks to that, almost everyone connected to our WiFi network, often with multiple devices.

Secondly, the Praha Cinema in Warsaw has a lot of insulation between the individual cinema halls and the main hall. The insulation is mainly about sound separation, of course, but it turned out to be a great WiFi signal barrier too. Thanks to that, we could tell at any time during the conference how many WiFi connections were established in which room. So what can you do with that data? You can visualise it as a heatmap of people moving around during the conference, of course! It’s obvious, right?

Heatmap visualisation

Since at Polidea we love diversity, polyglot programming and good engineering in general, we chose a mix of tools and languages to help us get there fast and efficiently.

The project is open-sourced and hosted on GitHub (obviously). You can see the final result at . The technologies involved: python scripts using some useful libraries; unix command-line tools (ffmpeg and imagemagick); processed data stored in yaml, csv and json formats; plain HTML (with a little help from jQuery) to present the data; and javascript to provide the interactivity, using some simple but efficient techniques to make the animations smooth. The page is hosted on GitHub Pages. All this was backed by our great creative team’s ideas on how to visualise the heatmap.

Front-end technologies

  • We have a couple of scripts that allow us to manage all the devices from the command line.

  • The data from the devices was fetched and put into the database by a script added to crontab.

  • Later we had to post-process the raw data in the database. We did it with a simple PHP script that produced readable .csv output: for each 5-minute interval, the number of people connected to our WiFi in each room.

  • In order to be able to experiment, we defined the “cinema world” in a file describing the cinema with simple dimensions. Thanks to the flexible configuration, we could generate the cinema room layout image, check that our rooms were defined properly, and change the dimensions very easily in the final version. That was done using a script and the great PIL library.

  • We also defined a movies.json file storing data about the presentations: the number of frames and links to YouTube recordings.

  • Extracting talk data was fairly easy, as we already had structured yaml data about the presentations: the same data we used to generate our page (hosted with GitHub Pages, of course, and preprocessed using Ruby’s Jekyll). The same setup, by the way, is used to generate the page you are looking at, and the whole of Polidea’s website.

  • We also added a schedule definition (schedule.json), which allocates the talks to rooms and time slots. You might notice that by clicking a talk screenshot you get directly to the YouTube recording at the exact time of the screenshot. How cool is that?

  • We developed several python scripts that pre-processed the data and the talk recordings:

    • a script to extract snapshots from the original .mpg recordings of the talks, using the ffmpeg utility
    • a python script to combine the snapshots into bigger montage images, which is useful for performance (more on this later)
    • a script that preprocessed the raw router data into data directly usable for visualisation; it’s fairly complex on its own, so more on it later
    • finally, the mce-heatmap.js javascript file, which provides interactive heatmap generation and asynchronous loading of the data generated by the python scripts; some optimisations were applied here to get really smooth animation, as explained later

Visualisation preprocessing

The raw data was pre-processed in order to get a really good-looking visualisation and smooth animation:

  • we did not have people’s actual locations, only the hall each device connected in. So we had to “cheat” a bit: the heatmap “points” representing people were generated at random positions within the hall, rather than at positions known to us

  • we interpolated the 5-minute-interval data down to 1-minute intervals; that’s another small “cheat”, but it provides much smoother transitions

  • we adjusted the data between frames to account for exits from individual halls and from the cinema. If the number of people in a hall changed, we assumed that during that frame those people were moving into or out of the hall (and we placed them in that hall’s exit area). If the total number of people in the cinema changed, we assumed people were moving into or out of the cinema, and we placed them in the cinema exit area

  • we needed smooth transitions during the animation. If we generated a fresh random distribution for each frame, the heatmap would change significantly on every frame. So we built frames incrementally: each frame is derived from the previous one. The previous frame’s points are the base for the next; if the new frame has more points, only the few new points are added at random, and if it has fewer, the extra points are removed from the previous frame’s points.

  • The preprocessed data is stored in data_source.js, in a format ready to be used by the heatmap library (and asynchronously loaded by the main javascript file).
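The interpolation and incremental frame building described above can be sketched like this. The hall geometry and helper names are illustrative, not taken from the actual scripts:

```python
# Sketch of the "cheats": interpolate the 5-minute counts to 1-minute
# counts, then derive each frame's random points from the previous frame's
# so the heatmap doesn't jump between frames.
import random

def interpolate(counts):
    """Turn 5-minute counts into 1-minute counts by linear interpolation."""
    out = []
    for a, b in zip(counts, counts[1:]):
        out += [round(a + (b - a) * k / 5) for k in range(5)]
    out.append(counts[-1])
    return out

def next_frame(prev_points, target_count, hall):
    """hall = (x0, y0, x1, y1): bounds to scatter new points within."""
    points = list(prev_points)
    while len(points) < target_count:   # more people: add random new points
        x0, y0, x1, y1 = hall
        points.append((random.uniform(x0, x1), random.uniform(y0, y1)))
    while len(points) > target_count:   # fewer people: drop existing points
        points.pop(random.randrange(len(points)))
    return points
```

Because surviving points keep their positions from frame to frame, only the added or removed people change between frames, which is what makes the animation read as continuous motion rather than noise.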

Javascript interaction and optimisations

  • The heatmap.js library is great at generating static heatmaps (it uses HTML5’s canvas to draw the heatmap), but we had to apply several optimisations to get smooth transitions:

  • We used the old double-buffering trick: we generate two neighbouring frames and then transition between them with a parallel fade-in / fade-out

  • We display the current talk screenshots in parallel. To avoid reloading images for each frame, we use the montages prepared by the python scripts together with CSS image sprites to display the appropriate frame from a montage. This way we don’t reload an image on every frame; only when the talk changes do we reload the whole talk montage, which we then keep in memory for the following frames.

  • we use requestAnimationFrame to automatically adjust processing to the speed of the client’s browser: if a frame takes too long to process, some frames are skipped automatically (so the total animation time stays constant regardless of the client’s browser speed)

  • We used the rangeslider library for the custom slider. We even made a small fix that has since made it into the released version: we had more frames (steps) than most slider users, and there was a subtle rounding bug in the step calculation that sometimes manifested itself when the slider was clicked.


It was a really nice exercise to develop this small interactive visualisation of the WiFi data we gathered. The polyglot programming and DevOps approach to infrastructure is right: combining several technologies, tools and languages can efficiently get you cool results. We cannot wait to see what we can do with MCE 2015, which is happening soon (hint: NFC, BLE, beacons). If you want to be part of it and your interests lie in Mobile or the Internet of Things, buy tickets at


