The myth of the average

What do civil servants know, what don’t they know, and how do they know it? We’re testing how a new intervention called Grounded might complement empirical data with ethnographic data. Keep reading to download results from our very first beta-test in Ottawa, and to understand why ethnographic data is valuable precisely because it isn’t representative.

Testing how policy analysts might use ethnographic data in their jobs.


On any given night, 28,500 Canadians are homeless. Over a year, up to 300,000 Canadians experience homelessness.* The price tag across our legal, health care, and social care systems? $7 billion, or about 2.5% of Canada’s total expenditures in 2015.**

Numbers give us a sense of the scale and directionality of the problem: Is it growing? Where is it most acutely felt? What’s the mean wait time to get a shelter bed, to get subsidized housing, or to get into a treatment program?

Numbers don’t tell us what precipitated homelessness, what the experience of receiving services is like, or why a segment of the ‘house-less’ does not use services, or uses services but remains on the streets.

In other words, numbers can give us the impetus to act but are insufficient for telling us how to act. How do we complement statistical snapshots of a population group with ethnographic portraits, which offer starting points for program design and policy evaluation?

Introducing Grounded

Over the past month, we’ve been testing a new intervention called Grounded. Grounded is a feedback loop for people in power. We envision it as an online platform with long-form narratives of people experiencing social policy problems, alongside aggregated qualitative data on people’s experiences with health, justice, and social care services. We’ve started with street-involved adults.

Few policy analysts have a direct line to the end users of their programs and policies. Those in a program design or oversight role may be in touch with service providers. Those in a policy role read evaluation reports or academic studies, which naturally lag behind the moment the problem is experienced.

Our hunch was that real-time data from end users could provide additional intelligence during the problem definition, option setting, and evaluation phases of the policy process.

You can read more about how that hunch played out during our first trial of Grounded with federal civil servants by downloading our short paper, Grounded Beta Test 1 Reflections.


Challenging what constitutes evidence

It’s fair to say that the ethnographic data sparked a polarized reaction. There was a small but very eager group of civil servants who were enthused at the prospect of accessing richer, more contextual information – both to build empathy and to open up the policy process. There was a larger, perhaps more pragmatic group of civil servants who could not see how this data could be used, since it wasn’t (1) representative, (2) province- or nation-wide, (3) neutral, or (4) de-personalized. And it wasn’t being asked for in their briefing notes, or by their deputy ministers.

Empiricism is firmly rooted within the public service. What counts as evidence are large-scale quantitative studies, whose effects can be generalized across a whole population and a whole geography. That makes some sense when the policy issue is universal – say, the majority’s access to primary health care or education. Trouble is, many of our social services (e.g. shelter systems, child protection systems) are designed to be targeted – to address the people on the margins, who are not, by definition, the average.

Indeed, most of the street-involved adults we work with would not appear in existing quantitative datasets. They aren’t part of the census. They often do not have health care cards. Many don’t even have current ID. And even if we were to count them, we wouldn’t know why the average service interventions haven’t worked for them. Extreme sampling – rather than representative sampling – is the more logical method when the policy question at hand is about the people at the ends of the bell curve. Extreme sampling won’t produce big data sets. It will give you data with a different purpose: to surface multiple options, rather than validate a single one.


A call for small data

Big data is all the rage these days. Governments are increasingly looking at the data they do have, and trying to make better sense of what it tells them. This is important work. But it’s not enough. At the same time as analyzing big data, there is a need to collect and use small data. That’s data about people’s experiences, motivations, perceived barriers, and enablers.

Yes, this data is inherently subjective. It comes from the perspective of end users, and of the ethnographer who shadowed them. Ethnographers have no vested interest in the current system; their role is to describe what is unfolding. Unless we understand what unfolds for end users and how they see things, how will we make policies and programs that work with and for them? Given that people are on the margins because mainstream services have failed them, isn’t that perspective particularly valuable?

All data reported by people carries bias. Whenever a survey is designed, the researcher or policymaker chooses which questions to ask, and that choice reveals a particular point of view. How end users answer survey questions also depends on where they perceive the data will go. We’ve seen people’s self-ratings on a depression scale shift, for instance, depending on who asks the questions. Similarly, whenever a professional enters data into a system, there are incentives at play that influence what is reported and what is ignored. If funding depends on the number of people served, definitions of ‘receiving a service’ can be fluid.

If we discount qualitative data because of its bias, then we should surely do the same for much of the quantitative data that policymakers use. Rather than striving for ‘bias-free’ data, we should try to understand what the bias is – because that actually tells us much more about how to design systems with better incentives and fewer perverse ones.

True evidence-based policymaking wouldn’t reject intelligence that doesn’t fit the mold. Instead, it would make an effort to understand what we know, what we don’t know, and most importantly, how we know. Indeed, if policymakers were to be truly rigorous, they would revisit the epistemological basis behind their decisions. We’d love for Grounded to help, if it can.