Most projects involving portable air quality devices start the same way: “the device will be compared to a reference instrument in order to evaluate how good the data is and see if it can be used”. While this process is undoubtedly necessary to evaluate the reliability of the device (and of its integrated sensor), it is far from the only issue to consider when assessing how valuable the collected data is. Some of the first questions should probably be “what do we want to measure?” and “what are we trying to unveil?”. While this may seem obvious, these aspects are often overlooked, as current protocols mostly follow the same process: inter-comparison, data collection, data processing, data exploitation, conclusions. Considering the change of paradigm driven by these disruptive technologies, this approach is too limiting to fully understand their potential.

Sometimes presented as “personal sensors” or “low-cost, palm-sized air quality instruments”, these devices are often considered gadgets or cheap (barely reliable?) solutions to measure air quality. In the best cases they are used to create crowdsourced air quality maps, but often with a questionable methodology. We are just scratching the surface of what this new technology has to offer, and even if comparing a solution to existing ones is a logical and natural reaction, the past holds many examples we can learn from when it comes to evaluating the impact of innovation on existing technological ecosystems.

Lessons from the past

When Google acquired Keyhole for its globe mapping technology back in 2004, it created a major disruption in the mapping industry, mainly because the data and the tools used to navigate the world were suddenly made available to anyone with a simple computer, without prior mapping expertise. The GIS (Geographic Information System) community, which had been working on the digital mapping of the world since the beginning of modern computer science, felt somewhat disoriented (and irritated) as this precious knowledge was getting out of its control. Of course, Google Earth is very limited compared to any GIS software, but more and more, when people wanted to present geographic information they would use Google Earth, not only on TV but also in professional environments such as the military and education. The frustration grew even bigger as, for many, Google had become the reference for maps, leaving the traditional GIS actors in the shadow. Even if Google Earth has become less popular over time, Google remains a top player in the mapping industry and a major source of innovation in that field. GIS remains a strong industry with a unique expertise, but when looking at the impact of mapping technologies on people and on the economy, Google, Microsoft or Apple come first… by far.

But what did they have that the pros didn’t?

By using mainstream technology (the web, GPS-enabled smartphones, apps), the potential audience suddenly became worldwide. There was no technological barrier anymore: maps came into our hands at no cost and with a friendly user experience.

However, making the technology and data accessible is one thing; creating so much impact cannot just be explained by the basic curiosity of “can I see my car in front of the house?”. If Google (now Alphabet), a company that makes about 85% of its revenue on advertising, has invested so much in maps, it is because it served a dedicated purpose: back in 2004, “Google was finding that over 25% of its searches were of a geospatial character, including searches for maps and directions”[1]. People were not interested in maps as descriptive representations of the earth but as a tool to locate points of interest and, basically, get there. In one case maps were the final product, but for Google, maps were “just” a medium. Yet Google has changed the way we navigate the world because of these two main factors: making the information accessible and satisfying a targeted need/expectation, not because they were “better at maps”.

In the past years, another example has illustrated the impact of disruptive technologies: the rise of UAVs (Unmanned Aerial Vehicles) for earth observation. As a specialist in satellite/airborne mapping and an early adopter of “non-conventional” sensing platforms, I had the chance to witness the impact of this new technology on a rather elitist, scientifically oriented community. Low-quality sensors with poor selectivity, without proper calibration, with no correction of the signal alteration caused by atmospheric conditions, with little control or information about the acquisition platform’s attitude (XYZ position, pitch/yaw/roll)… it was more than enough for the remote sensing community to have a poor opinion of what this technology could offer.

Still, UAVs have become for many a reference and a popular source of aerial views, even though they can appear as a much lower level of technology compared to multimillion-dollar satellites. UAVs are now used for many types of missions, including disaster mapping and real-time war supervision. If you are monitoring a flood and need to decide whether or not to evacuate people, traditional remote sensing is a precious source of information, but you may have to deal with clouds (unless using radar, in some cases), you may have to wait hours or days before a new pass over the area of interest, and you have to add the time for data download and processing. By the time you get the data, the crisis may be over and that highly accurate data may be out of date with regard to your current problem. On the other hand, using a UAV will (in some circumstances) give you the possibility to monitor events in real time, even if the images are not perfectly calibrated or georeferenced.

Even if both technologies involve earth observation, they don’t necessarily serve the same objective: while traditional remote sensing is best at getting the big picture, UAVs are by far better at reporting on critical, specific, real-time situations.

So, does it mean that traditional instruments and expertise are getting progressively out of date? Definitely not, they just don’t serve the same purpose.

Air quality mapping vs personal exposure

Today, with your smartphone, you can easily find out the quality of the surrounding air using an app connected via an API to an air quality map created by a model based on in situ data (from ground stations), remote sensing data (Copernicus, for example) and statistical information (traffic, emission inventories…). By using mainstream technology, air quality information has been made easy for anyone to access, and it is getting better, with high-accuracy 3D models that can describe with high fidelity the dynamics of pollutants over large areas. However, what happens if I walk past a highly polluting vehicle? What happens if I am smoking in my car? Of course, this has nothing to do with air quality maps; however, it has a lot to do with what most people expect to know: is the air I breathe harmful or not? Personal sensors versus traditional techniques may also create deeper issues: imagine a person walking past heavy smoke. Their low-cost sensor (and probably their nose and eyes) tells them there is a problem, but the map coming from official sources says everything is normal. Most people will understand why the map is not “correct”, but some may conclude that official maps are not reliable, or even, in some cases, that official sources are trying to hide the truth.

The point is that we are just not trying to measure the same thing, even if both are about air quality: in one case we are trying to create maps of air quality over large areas, while in the other we want to measure our personal exposure. Sure, traditional air quality maps can give an indication of how polluted the air is for a specific user or place, but in most cases the data is updated once or twice a day and with a low spatial resolution. And of course, what happens once we go indoors, where we spend about 80% of our time?

So, does it mean this data is not relevant? Definitely not, it just serves a specific purpose… and so do personal sensors.

The place of personal sensors in the “value chain” of air quality is therefore about more than how they can fit into existing data processes. It is about the added value, the insights and the understanding they may provide to their user.

Personal devices in the air quality monitoring “value chain”

Then, what are personal sensors good for, and how do they fit into the air quality value chain? If we step back and relate to the previous examples, it could be summarized this way: air quality maps give the big picture, personal sensors report a specific situation: the very specific situation that matters to the user but that could be seen as an artefact by a person doing air quality modeling. So, how does that mix together? Let’s compare the pros and cons, beyond the simple accuracy criterion (which of course remains a central issue):

– Personal devices have a limited number of sensors compared to traditional stations.

– Reference station’s sensors are more accurate.

– Personal sensors are sensitive to local conditions. They don’t report on the street’s air quality but on the spot where I stand.

– Traditional stations often have a lower temporal resolution of acquisition, and time passes between the moment the data is collected and the moment it is made available.

– Personal sensors are cheaper, mobile, autonomous and may be used by non-professionals.

There are many more aspects but, from an end-user perspective, we could say that the data coming from a personal sensor will seem more relevant to report on an instant situation (here and now), while the traditional reference station/modeling/maps process is more relevant when it comes to projections in a distant space or time (there and then). While a personal sensor is perceived as a short-range risk detector, the other is more relevant to answer the question “what will the air quality be when I get there?”, as a trip advisor for air quality.

So, does this mean that data collected from portable sensors cannot be used for air quality mapping and is only good for personal exposure? As described above, even if we consider that it is not their primary value, the data collected from these devices may bring important added value if integrated into the air quality modelling process, under one essential condition: documenting the “context”. Context is the single most important factor in transforming raw data into valuable information, as it makes it possible to narrow down the variables and therefore to extract meaningful information. Reference stations have high-quality sensors, but they also have limited variables: air pollutants. Location, close environment, weather… all of these are fixed or monitored, so scientists can process the data accordingly, by grouping related data together.

Personal sensors, by themselves, have too many variables. Within the data stream it is impossible to tell whether the raw data is relevant or not. If I am smoking in my car next to my sensor, that data of course makes no sense for air quality mapping and has to be removed for that purpose, even if it is interesting information about my personal exposure. That data has to go through a different stream, and this can only be done if context is provided.
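This routing of readings into separate streams based on declared context can be sketched in a few lines. This is a minimal illustration, not an existing system: the `Reading` class, the context tags and the `route` function are all hypothetical names chosen for the example.

```python
# Hypothetical sketch: routing sensor readings into separate data streams
# based on self-reported context tags. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Reading:
    pm25: float                                 # PM2.5 concentration, µg/m³
    context: set = field(default_factory=set)   # e.g. {"indoor", "smoking"}

# Assumed tags that disqualify a reading for outdoor mapping purposes
MAPPING_EXCLUSIONS = {"indoor", "smoking", "in_vehicle"}

def route(readings):
    """Split readings into two streams: only interference-free outdoor data
    goes to the mapping stream; all data counts for personal exposure."""
    mapping, exposure = [], []
    for r in readings:
        exposure.append(r)             # personal exposure uses every reading
        if not r.context & MAPPING_EXCLUSIONS:
            mapping.append(r)          # mapping keeps only clean outdoor data
    return mapping, exposure

readings = [
    Reading(12.0, {"outdoor", "window_sill"}),
    Reading(180.0, {"in_vehicle", "smoking"}),  # smoking in the car
]
mapping, exposure = route(readings)
# mapping keeps 1 reading; exposure keeps both
```

The point of the sketch is that the exact same raw value follows different paths depending on context, which only works if the context is captured alongside the measurement.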

By the way, this goes for “low-tech” personal sensors but also applies to mobile air quality mapping done with high-end (costly) sensors. What if the data is captured at 2 AM on Sunday, then at 8:30 AM on Monday, at 8:30 AM on Tuesday after the rain, at 8:30 AM on a strike day (a frequent occurrence lately in France…), or while stuck for kilometers behind an old diesel bus? Does it make sense to mix that data to create maps, even if it was shot with high-end instruments?

On the other hand, wouldn’t it seem acceptable to use the data collected from personal sensors if they were all used in a known, standardized, or at least documented environment (on a window sill, away from indoor air pollution…)? In such conditions they could even provide insights hard to get with traditional instruments, such as the behavior of pollutants depending on floor level, building orientation…

Documenting the context is therefore almost as important as collecting the data itself, as it is an essential ingredient to turn that data into valuable information.


As we learned from past examples of disruptive technologies, personal devices are a real opportunity for the air quality monitoring community. They are part of the missing link that could help us change our behavior as individuals, as they deal directly with our personal exposure and may provide concrete feedback as we take action. If enough information is provided about the context, data crowdsourced from individual devices could also provide a fantastic opportunity to densify existing air quality monitoring networks. But beyond just collecting more data, they can also provide a new perspective to better understand the dynamics of air quality.

Future challenges are of course about sensor reliability in order to get better data, but also about combining the added value provided by each technology, taking their specificities into consideration. One key factor will be the development of automatic processes to report and document the context of each measurement, in order to integrate the data into dedicated air quality data processes.

Personal sensors are not better or worse; they are a different technology that serves a different purpose. Therefore, beyond just trying to “mix the data”, the real progress will come from our ability to combine insights derived from these technologies of different but complementary natures.


1. Bill Kilday, Never Lost Again: The Google Mapping Revolution That Sparked New Industries and Augmented Our Reality, HarperBusiness, 2018.

2. AirLab 2019 International Microsensor Challenge results (accessed May 2020), p. 35.

3. PMSCAN specification sheet (accessed May 2020).

4. NextPM specification sheet (accessed May 2020).

5. DIAMS presentation.

Contact: David Riallant, +33(0)6 43 11 36 52
