How We Made “Behind the Bloodshed”
Behind the scenes with USA Today and Gannett Digital
“Behind the Bloodshed: The Untold Story of America’s Mass Killings” is a collaboration between the database team at USA Today and Gannett Digital’s interactive applications and design teams. We chatted with Anthony DeBarros of Gannett Digital, with input from colleagues Juan Thomassie and Destin Frasier, about the project’s origins, design, and code, and the challenges the team encountered as they made it.
How and when did this project begin? And how did your team make the decision to do this story as an interactive feature?
After the Newtown, Conn., murders last December, there was obviously a lot of attention on the topic of mass killings. That led to conversations around going beyond the surface of the event and the often-politicized aspects of that kind of violence, such as gun control. Right off the bat, there was a general thought about getting a database online showing the history of mass killings in America, and USA Today’s database team started the painstaking process of building a data set. Meghan Hoyer worked with Brad Heath of their investigative team to use the FBI’s Supplemental Homicide Report as a starting point, working fast on an initial analysis that they published a few days later.
Pretty soon after that story ran, our team at Gannett Digital met with the database editors to talk about how we could visualize the data they were gathering. Their data journey was still in its early stages: the initial analysis showed that the FBI’s data was limited and often inaccurate, and it didn’t include any narratives of the cases. Hoyer, Paul Overberg, Jodi Upton, and others on that team ended up spending a good while tracking down cases and unearthing all the details around victims, weapons, and suspects.
Our team got involved in earnest in March, when we pulled some of their data, and Juan Thomassie used D3 to prototype a few visualizations. We played around with how to best highlight patterns, at that point just thinking of these as standalone charts. That early exploration boiled down to three main ideas:
- A filtered data table and map that would show the mass killings by type of weapon, by how victims died, number of victims, or by filtering text strings;
- An interactive histogram of the victims by age and by how they were killed;
- A timeline of the killings that would show both frequency and year-by-year trends.
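The second idea, the age histogram, starts with a rollup that needs no D3 at all: binning victims into age groups is plain JavaScript. This is a hedged sketch; the record shape (`age`, `cause`) is an assumption, not the team's actual schema.

```javascript
// Sketch: bin victims into 10-year age groups for a histogram.
// The record shape ({ age, cause }) is a guess at the data, not the real schema.
function binByAge(victims, binSize = 10) {
  const bins = {};
  for (const v of victims) {
    if (v.age == null) continue;             // skip records with unknown ages
    const lo = Math.floor(v.age / binSize) * binSize;
    const key = `${lo}-${lo + binSize - 1}`; // e.g. "0-9", "10-19"
    bins[key] = (bins[key] || 0) + 1;
  }
  return bins;
}

// Tiny invented sample, just to show the shape of the output.
const sample = [
  { age: 4, cause: 'shooting' },
  { age: 8, cause: 'shooting' },
  { age: 34, cause: 'stabbing' },
  { age: 36, cause: 'shooting' },
];
console.log(binByAge(sample)); // { '0-9': 2, '30-39': 2 }
```

A D3 prototype like the ones Thomassie built would then map those bin counts to bar heights; the aggregation itself stays this simple.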
We brought these interactive sketches back to the database team and started to hash out where we could go from there.
How did you arrive at the feature’s current form, with a chaptered exploration of specific findings and the more freeform exploration at the end?
It was a bit of a winding road at the start, to be honest. None of us liked the idea of just running a couple of standalone charts, especially since the data revealed a number of counter-intuitive findings — that family killings were really the majority of mass murders, for example. Also, everyone here was really happy with how Ghost Factories had turned out, and we wanted to keep working in that direction.
We took a step back from the individual pieces and sent the database team a rough outline on how we might tell the story of the data by leading readers through the findings one by one, using visualizations as anchors along the way. We definitely wanted to end the interactive by letting readers explore the data on each case—letting them find cases they knew about or that were close to home.
The database team came back with additional ideas and findings, and from there we worked together really, really hard to focus the content, find the right tempo, and organize it. That was the hardest part—because we weren’t just adding some graphics to a narrative. We were making the data itself the story, without an abundance of text around it.
While we were hammering out the content, Destin Frasier of our interactive design team worked through a series of comps and brought in another designer, Jerry Mosemak, to help with storyboarding and design elements. It really crystallized one day when Destin showed a prototype that had the photo of Carrie Soto, sister of a teacher killed at Newtown, in anguish on her cell phone. That immediately set the tone for the rest of the work, and once we nailed down the chapters and the content in each, Destin and Juan were heads-down coding.
The initial D3 sketches for the data visualizations evolved into the three big anchor pieces. We led with the timeline because it immediately communicates the frequency and severity of mass killings over the last eight years. People on Twitter have probably responded to that one more than any other, because they can see right away just how pervasive the events are.
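The rollup underneath a timeline like that, counting incidents and victims per year, is a straightforward aggregation. A minimal sketch, assuming hypothetical field names (`date`, `victims`) rather than the team's actual schema:

```javascript
// Sketch of a timeline rollup: count incidents and total victims per year.
// Field names (date, victims) are assumptions, not the team's real schema.
function rollupByYear(incidents) {
  const byYear = new Map();
  for (const inc of incidents) {
    const year = new Date(inc.date).getFullYear();
    const row = byYear.get(year) || { year, incidents: 0, victims: 0 };
    row.incidents += 1;
    row.victims += inc.victims;
    byYear.set(year, row);
  }
  // Sorted array is what a D3 scale/axis would consume.
  return [...byYear.values()].sort((a, b) => a.year - b.year);
}

const demo = [
  { date: '2006-03-01', victims: 5 },
  { date: '2006-11-12', victims: 4 },
  { date: '2012-12-14', victims: 27 },
];
console.log(rollupByYear(demo));
// [ { year: 2006, incidents: 2, victims: 9 },
//   { year: 2012, incidents: 1, victims: 27 } ]
```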
At OpenVisConf this year, I was really struck by a talk on data about gun killings by Kim Rees of Periscopic. You’re dealing with many of the challenges she spoke about: dredging up the data, but also finding a way to help the viewer connect with the human life behind each number. Can you talk about the challenges of handling and communicating this kind of data?
There are a couple of challenges:
One is to narrow the focus of the analysis—and the visualization—to the most salient points. With a data set like this, you can generate a lot of slices and come at it from many angles. You might even come away with a lot of very valid, interesting findings. But it’s important to choose your main points carefully or you’ll end up trying to cram too much in. So, we spent a lot of time with the database team talking through their findings and finding that focus.
And that leads to the second point, which is that the data really doesn’t mean as much if it’s disconnected from the human drama it describes. In the video you mention, Rees starts with a quote, “One death is a tragedy; one million is a statistic.” Early on, whenever we met with the database team, they would recite some of the grim stories they were finding—especially the ones involving families. The stories were so moving and troubling, and we all really wanted to find ways to pass what they were learning on to readers with more than just statistics. That’s how our designers arrived at the idea of case studies, of using strong images that evoke the tragedy, and of being very thorough—highlighting a narrative on each case in the database.
Technically speaking, what’s the project made of?
I wanted to ask about mobile, actually, since this project is optimized for larger screens—on future projects, do you imagine doing separate versions for small screens, or would you try to make a single responsive version?
We’ve built responsive apps that work on phones, tablets, and desktops. One of them is a live hurricane tracker that resizes nicely and loses or gains elements depending on the screen size. When you can pull it off, it’s the way to go: one app and you’re done. But “Behind the Bloodshed” as envisioned was a big-screen experience from the start, and then we spent a lot of time making it work on tablet. To go smaller really would have required a separate vision, and we bowed to our time and people constraints and focused on the bigger screens.
Process & Performance
You mentioned the Twitter response to this piece, which seemed pretty big. Can you talk about how the piece has performed, and how you measure performance for interactives?
Without going into the numbers, we’ve been extra happy with the response. Yes, we do look at time spent and page views—and in those regards this app has done very well for us.
But we’ve been interested in looking at other ways we might measure success. One was to dig into the social conversation around the app. As we read Twitter, we saw that a fair number of people were commenting on the findings we’d highlighted. In a way, that tells us they had absorbed the data—completed the communication, if you will. For a journalist, that’s a measure of success. No one was scratching his or her head, saying, “What does it mean?”
We also baked in some tracking events so we can dig in later and see which parts of the presentation readers engaged with the most.
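Baked-in tracking of this kind usually amounts to a thin wrapper around an analytics call. A sketch under stated assumptions: `send` is a stub standing in for whatever analytics library the team actually used, and the once-per-session dedupe is one common choice, not necessarily theirs.

```javascript
// Sketch of baked-in event tracking: record which part of the presentation
// a reader engaged with. `send` is a stub for the real analytics call.
function makeTracker(send) {
  const seen = new Set();
  return function track(category, action) {
    const key = `${category}:${action}`;
    if (seen.has(key)) return false;        // fire each event once per session
    seen.add(key);
    send({ category, action, ts: Date.now() });
    return true;
  };
}

// Capture events in an array instead of a network call, for demonstration.
const sent = [];
const track = makeTracker(e => sent.push(e));
track('chapter', 'timeline-viewed');
track('chapter', 'timeline-viewed');        // deduped, not sent again
console.log(sent.length); // 1
```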
How many people were on your team, and how did you work with the reporters on the story as you built the interactive?
The core team that did the lion’s share of the work was about a half-dozen people, but if you look at the credits there are 17 names spanning Gannett Digital and USA Today. That included a project manager who coordinated work and tasks. Depending on which part of the process we were in, we had daily scrums to go over progress and deadlines, and the database team was in the mix with us the whole way as we coded, tested, and finessed the application. All the while, they continued to work on stories off the data.
One of the most helpful things we did, about a month before we launched, was put the app in front of a user test panel via our information architecture team. It’s not something we usually do, but it’s really easy to develop tunnel vision on something this big, and it was good to hear feedback from regular people. In fact, we ended up changing up the navigation and clarifying some headlines as a result of what we heard the testers say — it was really valuable.
A chance to do a full round of user testing seems pretty great—and unusual for most teams? What kinds of changes did you end up making based on the user tests?
It’s unusual for us, too. But some things shifted, there was an opening to get it on the testing schedule, and we jumped at the chance. Two main things came out of the testing. One was that people really got the data visualizations and the design, and they found the content compelling. But occasionally they told us a headline wasn’t clear, so we rewrote one or two. The second was that our navigation wasn’t the best. At the time, we did not have the page up/down buttons and were relying on people to navigate by scrolling. That works for a longform, text-heavy presentation, but not in something like this, where there are lots of transitions from photos to graphics with lots of space. People were having a hard time understanding where they were in the overall presentation. So we added the up/down buttons to more intentionally walk readers through, plus the dots in each chapter as a signal for location. We actually did that fast enough to send it to testing again, and we found we’d solved the problem.
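The state logic behind up/down buttons and location dots is small enough to sketch on its own; a hedged version follows, with invented chapter names and the DOM/scroll wiring left out, since that part is specific to the real app.

```javascript
// Sketch of chapter navigation state: up/down buttons move between chapters,
// and "dots" signal where the reader is. Chapter names are invented; the
// real app ties this state to scroll position and DOM elements.
function makeNav(chapters) {
  let i = 0;
  return {
    current: () => chapters[i],
    next: () => chapters[i = Math.min(i + 1, chapters.length - 1)],
    prev: () => chapters[i = Math.max(i - 1, 0)],
    // One filled dot marks the active chapter, like the on-page indicator.
    dots: () => chapters.map((_, j) => (j === i ? '●' : '○')).join(''),
  };
}

const nav = makeNav(['intro', 'timeline', 'victims', 'explore']);
nav.next();
console.log(nav.current()); // 'timeline'
console.log(nav.dots());    // '○●○○'
```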
Now that the piece is up, is there anything you’d do differently when you take on another big interactive piece like this? Technically or design-wise or even editorially?
All of us—coders, designers, and database editors—came away with some schooling in what it takes to put a story like this together. We’ll probably adjust a hundred little things and just look to apply what we learned. But here’s hoping that we don’t try to copy this bit of success. It’s important to keep searching for unique approaches that tell the story in the best possible way. That’s what keeps us all coming to work every day, isn’t it?