Meet The Pandemic’s Most Tracked Economic Indicator

Over the past few months, the ‘Google Mobility Reports’ have become a closely watched economic indicator.

A Google Earth map. (Photographer: Adam Berry/Bloomberg News)

“Google it.”

A common response to many questions. That’s been true during the pandemic too. Google Trends shows spikes in searches for “when will corona end”, “coronavirus vaccine”, “coronavirus cases” and “coronavirus.”

Like the rest of the world, Google too is fumbling to find answers to some of these questions. But it’s helping answer one crucial question with some degree of accuracy: What is the impact that lockdowns are having on mobility and what does that, in turn, say about economic activity?

Over the past few months, the ‘Google Mobility Reports’ have become a closely watched economic indicator. The mobility report, along with indicators like the TomTom Traffic Index, e-way bill data and electricity consumption data, among others, has emerged as part of a set of ‘concurrent indicators’ being tracked to assess the impact of the pandemic and design policy responses. More traditional indicators like GDP growth or industrial output come with a lag and are of limited use in a fast-evolving economic crisis.

The Google Mobility Index is being put out as a ‘public good’ and is made available to anyone who wants it: policymakers, researchers or individuals. Over the past few weeks, the India index has shown improvement in activity levels, which still remain below the ‘baseline’.

How did the indicator come about? In a conversation with BloombergQuint, Anal Ghosh, senior program manager at Google India, talks about its design and where it can help.

Edited excerpts of the conversation:

The Google Mobility Reports are now among the most cited high frequency indicators on the impact of the lockdown and the consequent pace of recovery. How did the index come about?

We started releasing the reports in April but obviously started work on them prior to that.

Since the onset of the pandemic, we have been working closely with the government and various public health authorities on various Covid response efforts. Not just in India, but in lots of markets like the U.S., Europe and South East Asia. We have been hearing from public health authorities in different countries on how aggregate and anonymised data could be helpful in making critical decisions.

Since the pandemic began, we have had dedicated teams across our products working on how we can come up with new products and features, in addition to the ones we have already released, like surfacing data through Google Search and Google Maps. All of that has been a broad effort across Google.

We’re glad we could turn it around very quickly and launch it globally.

The effort was collaborative. We have been working with the government and public health officials on how we can combat the spread. That’s one. Also, we have been thinking through how we can use our products to help in this scenario. So it was a combination of both. Google had this anonymised aggregated data. We have made it public before. While working with public health officials we realised that this could really help them frame lockdown strategies and social distancing norms.

You said that this data has been made public before?

In 2019, Google published a blog post on human mobility trends. It was aimed at helping researchers understand human mobility for predicting epidemics, for urban and infrastructure planning, and for understanding people’s responses to conflict and natural disasters.

The changes are based on a pre-determined baseline? What drove the choice of the baseline? Why has that remained fixed? For instance, why were changes in mobility not computed on an annual basis?

We wanted to ensure that the baseline was consistent across countries. We also wanted a reference point that was closer in time. January was a month when a majority of areas across the world were normal. That was one of the drivers. We took the period from Jan. 3 to Feb. 6, 2020, which was the period before the trends started to change.

Seasonality was a factor we considered. We also considered public holidays and how local communities may have changed over the last 12 months. A lot of changes would not have been taken into consideration if we had taken a period from much further back. Things would have evolved, so we took a more recent, more representative and more consistent baseline period across the world.
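For readers who want the mechanics, here is a minimal sketch of how a deviation-from-baseline series of this kind can be computed, following the convention the published reports describe: each day is compared against the median value for the same day of the week during the Jan. 3 to Feb. 6, 2020 window. The pandas DataFrame, its column names and the data are hypothetical; this is an illustration, not Google’s internal pipeline.

```python
# Illustrative only: a day-of-week median baseline computed over
# Jan. 3 - Feb. 6, 2020, and daily values expressed as a percentage
# change from that baseline. 'visits' and its columns are hypothetical.
import pandas as pd

def percent_change_from_baseline(visits: pd.DataFrame) -> pd.Series:
    """visits has columns ['date', 'count'], one row per day."""
    df = visits.copy()
    df["date"] = pd.to_datetime(df["date"])
    df["dow"] = df["date"].dt.dayofweek

    # Baseline: median count for each day of the week in the 5-week window.
    window = df[(df["date"] >= "2020-01-03") & (df["date"] <= "2020-02-06")]
    baseline = window.groupby("dow")["count"].median()

    # Each day's value relative to the baseline for its day of the week.
    return ((df["count"] / df["dow"].map(baseline)) - 1.0) * 100
```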

Is the indicator more representative of urban areas? How well does it incorporate the urban-rural mix in an economy?

When we launched in India, we initially released the reports at a country level. A few weeks back, we released this data at the state level, so now you have data for all six categories broken down by state. The important thing to note here is that the reason we are becoming more granular slowly is that we have very stringent quality and privacy norms that must be met before we start surfacing this data.

So, for example, in a region a lot of people may not have opted to share their location history. If that’s the case, then we might have too little data to meet our quality thresholds and draw reliable inputs from. That’s just an example of what is driving the level at which we have been releasing data.

That is the reason we have gone top down. We wanted to share as much data as we had ready, so we started at a country level and went down to the state level. We did this even in the U.S.

At this point of time, we don’t have data which we can split into rural versus urban areas. We are taking the approach of using the naturally defined geographical structures within the country. So we have it at the state level.

The reason for doing that was also that the objective was to see the efficacy of the lockdown and social distancing guidelines as implemented by the authorities. The authorities would want to see data for the jurisdiction where they are implementing them.

We’re exploring making the data even more granular and hope to release more granular reports in the coming months. It’s something we currently don’t have but are working on, while ensuring that it still meets privacy thresholds.

So would it be fair to assume that the trends the reports point to are reflective of the urban-rural mix in the country?

These trends are across the state.

Let’s take Andhra Pradesh, for instance. We have six categories which are reported across the state. Having said that, logically, you would expect more business and more activity in the city; that is a logical deduction I am making. So the delta, or deviation, would change much more rapidly in urban areas than in rural areas.

We have a baseline of January data, and the reports show changes relative to that baseline. The deviation would change much more rapidly in urban areas because of the population density. But what holds true for the reports is that they are reflective of the state as a whole.

How much have you had to tweak the data from country to country? For instance, accounting for data security laws, the country’s requirements, internet penetration and other such factors.

All of these were definitely factors.

We have released these reports in some countries but not all countries globally. I will take you back to the point I made about privacy and quality thresholds and also whether we have enough data. We also had to comply with guidelines from country to country.

In India, we have been working very closely with government authorities. We have been working with the Ministry of Health and with a bunch of different bodies. We ensured beforehand that we had the policy approvals.

We have also looked at which countries have been affected the most globally. We have looked at factors such as population and whether we are actually able to get granular data or not. So, in a small country, if we go down to the state level, we may not have confidence in the data.

We can’t be absolutely accurate but we have to be as confident as we can be about the data before we release it. That’s why we have pre-determined thresholds.
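As a hedged illustration of what such a pre-determined threshold could look like in practice, the sketch below withholds any region whose anonymised sample count falls under a minimum cutoff before anything is published. The cutoff value, field names and function are assumptions made for the example, not Google’s actual parameters.

```python
# Hypothetical example of a pre-determined release threshold: regions with
# too little anonymised data are withheld rather than reported with low
# confidence. The cutoff and field names are assumed for illustration.
MIN_SAMPLES = 1000  # assumed cutoff, not Google's real threshold

def releasable_regions(regions: list[dict], min_samples: int = MIN_SAMPLES) -> list[dict]:
    """Keep only regions whose underlying sample count meets the threshold."""
    return [r for r in regions if r["sample_count"] >= min_samples]

# Example: the well-sampled state is kept, the sparse district is dropped.
example = [
    {"region": "State A", "sample_count": 48000},
    {"region": "District B", "sample_count": 300},
]
print(releasable_regions(example))  # -> only "State A"
```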

What has been the nature of your engagement with governments? How much are you engaging with them and how is that working?

Broadly, we have engaged with central authorities and state government authorities dedicated to the Covid-19 response over the past two months. We have received good feedback from a lot of state governments on how useful they have found these trends, especially before and after the lockdowns. It has helped in planning lockdown decisions and what should be included in the lockdowns. Once a lockdown has been announced, it helps in judging its efficacy and how the decisions should evolve on the basis of changing activity.

In addition, we have also seen policy reports around it. I came across a working paper by the Indian Institute of Science and the Tata Institute of Fundamental Research on how governments can come up with lockdown easing strategies, and the mobility reports were a key input to the paper.

The data made available to governments is the same as what the public has?

In terms of data that is available, effectively what is available to the public is what is available to authorities.

What is your assessment of the data?

We have not taken a stance on how this data is changing or how it compares across countries. Even the reports we release do not have much text.

The analysis is best left to experts. We are experts on the underlying data: how well we can anonymise and aggregate it. But we are not influencing decisions here. That’s left to people in charge of these decisions.