By Maxine Durand
Famine, war, disease: headlines from the crisis areas of the world may sound familiar, but the study of the people affected by these events is not always so easily recognized. Often, international relief efforts are driven by reactions to the death toll, which raises an important, if morbid, question: how do you count casualties?
Royce Hutson, an associate professor with Boise State’s School of Social Work, is working to provide a better answer to that question.
In April, Hutson returned from a trip to Beirut, Lebanon, where he worked with colleagues on a proposal to examine the methodologies for counting mortalities and health outcomes in crisis environments. His work in Beirut is aimed at reducing the hours of human labor it takes to complete crisis surveys while increasing their accuracy.
These numbers are important, Hutson said, because crisis survey data directly shapes the international community’s response. If a death count is underreported, there may not be enough aid generated to help those suffering, but an inaccurately high count might divert resources from other crisis areas or hurt the credibility of the organizations involved in relief efforts.
“You can easily over or underestimate mortality if your sample is skewed in any fashion,” said Hutson.
The Challenges of Data Collection in Crisis Surveys
Crisis surveys typically center on in-person interviews with those affected by the crisis, but arranging those interviews raises two challenges: how to interview a person who has survived a crisis, and how to select exactly whom to meet with.
The first concern, handling respondents with care and respect, does come with a certain protocol. Hutson said it’s important to leave personal questions about sensitive topics, like sexual assault, until the end. This helps build trust with the respondent, as do assurances that participation is voluntary and answers are not forced.
“You have to train your interviewers to be empathic, essentially,” said Hutson. “You build a rapport with the respondent, and remind them constantly that they don’t have to answer if they don’t want to.”
Hutson said it’s always stated in crisis survey results that the number of people affected is almost certainly an undercount, since many people either don’t participate or choose not to give certain information because it’s too personal or traumatic to talk about.
As for selecting people to interview, this seemingly simple task is the subject of considerable debate over just how it should be done. Typically, Hutson said, many researchers would use a version of the “spin the pen” method, wherein they would travel to a geographic center point, say the middle of a large city, and literally spin a pen. The researchers would then count out the houses along the line the pen points to, and randomly select which houses to visit from this pool.
Hutson said the pen method became common practice in the 1970s under the Expanded Programme on Immunization, or EPI, but by the 1980s, problems with this means of sampling had already become apparent. Not being truly random, the pen method tends to favor houses closer to the center of the selected area, and it can also skew results when it comes to density: a crowded apartment block and a large ranch with one domicile would each appear as a single building, but the people living in each can vary significantly in number and along demographic lines.
“Even given these fundamental flaws, the approach has been used, with some modifications, knowing that these issues exist,” said Hutson.
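To make the mechanics concrete, the following Python sketch simulates a simplified, EPI-style spin-the-pen selection; the town layout, strip width, and sample size are invented for illustration and are not drawn from Hutson’s work.

    import math
    import random

    # Hypothetical illustration of EPI-style "spin the pen" sampling; the
    # coordinates and counts are invented, not data from Hutson's studies.
    random.seed(1)
    houses = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(500)]

    def spin_the_pen(houses, n_interviews=10, strip_width=0.5):
        """Spin a random bearing at the town center, keep the houses lying along
        that line (within a narrow strip), then randomly pick houses to visit."""
        bearing = math.radians(random.uniform(0, 360))
        dx, dy = math.cos(bearing), math.sin(bearing)
        pool = []
        for x, y in houses:
            along = x * dx + y * dy        # distance along the pen's direction
            perp = abs(y * dx - x * dy)    # distance off the line
            if along > 0 and perp <= strip_width:
                pool.append((x, y))
        return random.sample(pool, min(n_interviews, len(pool)))

    print(spin_the_pen(houses))

A house at radius r from the center falls inside the strip for a share of bearings roughly proportional to 1/r, which is why repeated surveys with this method over-sample the urban core relative to the periphery.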
These challenges are difficult enough on their own, but even when respondents are properly selected and interviewed with care, it is still difficult to produce accurate results without baseline data to compare against. In many crisis areas, there is little to no information on the population from before the crisis occurred.
Researchers look for the natural death rate, the rate at which people in the crisis area typically die in a given time period, and compare it to the death rate after the crisis. The difference, called the “excess death,” is a primary metric in determining just how severe any particular crisis is.
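As a minimal illustration of that arithmetic, the rates, population, and time period below are hypothetical rather than figures from Hutson’s surveys:

    def excess_deaths(baseline_rate, crisis_rate, population, months):
        """Excess death = (crisis death rate - baseline death rate) x population x time.
        Rates here are deaths per 1,000 people per month; all numbers are illustrative."""
        return (crisis_rate - baseline_rate) * (population / 1000) * months

    # Hypothetical example: a city of 2 million, with baseline mortality of 0.8
    # deaths per 1,000 per month rising to 1.0 during a two-year crisis.
    print(excess_deaths(baseline_rate=0.8, crisis_rate=1.0,
                        population=2_000_000, months=24))  # 9,600 excess deaths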
Developed countries like the United States have the benefit of data from public records, like birth and death certificates, and stable infrastructure such as the phone lines used to conduct traditional surveys. In many crisis areas, however, the records needed to accurately measure casualties may no longer be accessible, or may not have been collected at all. Without a solid body of data to compare to, it’s next to impossible to see how far off a survey’s results are from the actual underlying characteristics of the selected population.
“Traditionally in the United States, if you were doing a political poll, for example, you would have information on demographics,” said Hutson. “To know if survey results are disproportionate, you would need to know that information.”
This puts crisis responders, who are often tasked with performing crisis surveys in place of trained, professional researchers, in a difficult position: the resources they need from the international community depend on data collection that could be inaccurate or incomplete.
“To expect that a humanitarian organization in a crisis area is going to have access to this high-level survey modeling software, I think is a bit too big of an ask for a lot of these organizations,” said Hutson.
A Better Methodology for Crisis Surveys
After the Haitian coup in 2004, Hutson was part of a study in Port-au-Prince investigating human rights violations. Hutson and his research assistant looked at several different approaches when trying to estimate how many people had been affected, and they found the EPI’s spin-the-pen approach to be inadequate.
Instead, they used GPS to randomly select starting points within population centers. Hutson said this approach seemed novel at the time, but the use of GPS still didn’t account for population density: a single-family house, an apartment building, or a ranch would all appear as similar data points, even though the number of people living at each type of address can vary greatly.
Since then, Hutson and his colleagues have compensated for this disparity by creating a geocoded map of their GPS coordinates and counting the number of households in each building; interviews are then randomly selected at the household level rather than the building level.
Hutson said the research has benefited from advances in computing, since it’s now much easier to use a computer algorithm to list all of the buildings in a population center and then weight them by household count for random sampling. It’s also simpler to identify the buildings thanks to improvements in digital mapping technology.
“Now, mapping is so good that we can identify all buildings within a given area,” said Hutson. “Before, we’ve literally had someone with a map asking, ‘Is that a building?’”
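A minimal sketch of that household-level selection step might look like the following; the building list and household counts are invented for illustration, and this is not the software used in Hutson’s studies.

    import random

    # Hypothetical geocoded building list: building ID -> number of households.
    buildings = {
        "apartment_block_A": 40,
        "apartment_block_B": 25,
        "single_family_1": 1,
        "ranch_homestead": 1,
    }

    def sample_households(buildings, n):
        """Enumerate every household in every building, then draw a simple random
        sample of households. Each household has an equal chance of selection, so
        a 40-unit apartment block is 40 times as likely to contribute an interview
        as a single-family house, which corrects the density problem."""
        frame = [(building, unit) for building, count in buildings.items()
                 for unit in range(1, count + 1)]
        return random.sample(frame, min(n, len(frame)))

    print(sample_households(buildings, 5))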
Hutson’s current work in Beirut is an experiment designed to compare the geocoding approach with more traditional sampling methods. The Syrian refugee camps in Lebanon provide a perfect test environment: they are already extensively monitored, and they approximate the immediate aftermath of a crisis without the inherent security risks of being in a crisis area.
“The camps in Lebanon are extraordinarily safe, relatively speaking,” said Hutson. “Most of the health outcomes are fairly well-known, so we can compare what we get… with what the health ministry is giving.”
Hutson said he wants to carry out similar testing in multiple crisis situations around the world to see not only which approach is more accurate, but also which is more efficient in terms of time and cost. Ideally, using off-the-shelf freeware and Google Maps, similar research could be conducted by on-the-ground non-governmental organizations (NGOs) to get the numbers they need for prioritization and resource allocation.
“The sampling is just so much easier,” said Hutson. “In the case of some mass crisis and if there was a lot of internally displaced persons, then it could be helpful.”
Crisis Surveys and Indirect Impact
In 2017, there were over 17 million internally displaced persons—people who are forced to leave their homes because of a crisis event, but remain in their home country. This was almost double the number of displaced persons who left their countries and became refugees.
Hutson said there are only a handful of researchers working in this field, and he can usually recognize most of their names when they publish. Currently, he is working with colleagues in Canada, Brazil, and Lebanon, all of whom routinely perform crisis surveys, and funding for their work usually comes from governments outside the countries affected by crisis. Often, the affected governments don’t have the funds to sponsor a crisis survey when disaster strikes, and sometimes, Hutson said, they would rather not know the results and are reluctant to support these kinds of programs.
The amount of aid sent to crisis areas has a direct impact on how effective relief efforts will be and how much suffering is mitigated; the numbers of lives affected, deaths, and displaced persons can only go up if the international community ignores a crisis or doesn’t give it the attention it needs. The impact is obvious for places like Syria since the civil war began, but it also reaches the countries that take in refugees, including the United States and, specifically, places such as Idaho, which has a fairly prominent refugee resettlement program.
This makes it all the more important to be able to produce accurate data when resources are available to conduct surveys. During his work in Port-au-Prince, for example, Hutson said there was an estimated excess death of 8,000 in the first two years after the coup, which is a lot of people but a small number in proportion to the 1.5 to 2 million people living in the city at the time. Hutson said that if survey results are skewed towards areas more prone to violence, such as a densely populated city, the survey will overestimate mortality; if skewed towards less violent areas, it will underestimate it.
“It’s one thing to say 8,000 people died,” said Hutson. “But if I said only 500 people died, the response from international actors will be quite meager.”
Dr. Hutson can be reached during office hours at the School of Social Work. Questions or comments about this article can be referred to the school’s main office line, (208) 426-1568.