How We Made Our “Crossing Divides” News Game

How BBC News Labs built a new interactive format for teaching communication skills


(BBC News Labs)

The BBC World Service’s Crossing Divides was a “solutions journalism” series that showcased how individuals and communities are working to bridge social divisions. News Labs is often asked to build experimental formats for these series, and this year, we were given free rein to pitch something innovative—so long as our idea met a few criteria:

  1. It was a new format we hadn’t tried before.
  2. It could be translated into different languages for global audiences.
  3. It taught audiences new skills for crossing divides in their own lives.

Our final product: a messaging interface that players use to carry out a positive conversation with a person on the other side of a political, ethnic, or generational divide. At the end of the conversation, they receive tips for communicating more effectively. The web application has been translated into six different languages by the World Service.

With this project we found ourselves up against some serious constraints. We faced a hard six-week deadline, which included the time for researching and pitching an idea to our editorial stakeholders. Of the four people in our team, only two had any serious front-end development experience—and they weren’t necessarily the two working on the project full-time. And finally, none of the content had been commissioned, which meant that we needed to build something robust enough to handle the English text, translations, and illustrations that we received in the days before the series began.

Research, Pitching and Ideation

We were interested in building a news game even before we started working on pitches. We were inspired by the recent success of the FT’s Uber Game, which is a great example of how the format can help users appreciate someone else’s experience. Games also felt well-suited to teaching new skills, which was part of our brief.

We spent a week creating a mood board and researching other sources of inspiration before finalizing pitches.

In the end, we agreed on a game that asked users to have a conversation with an “other,” and then offered feedback for having better conversations in the future. The World Service was already planning coverage of Better Angels, a US nonprofit that runs skills workshops for talking to family members with different political views. We used resources on their website to guide the editorial aspects of the project.

After agreeing on a concept, we quickly moved into a rough design phase. We started with pen and paper and a few “crazy eight” exercises to help experiment with possible formats…

Photo of flip chart

The rough design phase.

We then made low-fidelity wireframes to explore how the application might look on mobile and desktop, focusing on the main user journey through the experience.

Drawings of mobile phone screen

Low-fidelity wireframes.

Drawing of desktop screen

Low-fidelity wireframes.

From Mocks to a Web Application

Before touching any code, we set out a taxonomy to describe the dialogue that a user would have with the system:

Nested list of terms

Each story has a dialogue interaction, which is made up of lines. Each dialogue line has a speaker and an array of text. The text array contains the three options a user chooses from at each point in the conversation: a positively phrased question or response, a neutral one, and a negatively phrased one.

We later added a feedback array for each line, with one entry critiquing each of the positive, neutral, and negative options in the text array.

Defining the taxonomy was a crucial first step before we defined our data model. This is the final iteration of our model:

Dialogue structure
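To make that concrete, a single dialogue line under this model might look something like the sketch below. The field values are invented and the field names are indicative rather than our exact schema:

```javascript
// Illustrative sketch of a single dialogue line under this model.
// Field names are indicative rather than the exact production schema.
const dialogueLine = {
  speaker: "you",
  text: [
    "I'd really like to understand why you feel that way.", // positive option
    "OK, so why do you think that?",                         // neutral option
    "How can you possibly believe that?"                     // negative option
  ],
  feedback: [
    "Open, curious questions invite the other person to explain themselves.",
    "Neutral questions keep things moving but don't build much trust.",
    "Loaded questions put people on the defensive and can shut the conversation down."
  ]
};
```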

When we started working on our data model we knew we needed to avoid using a branching narrative. We had already committed to producing three stories with at least ten different lines of dialogue in each. With a branching narrative, we would have had to write, edit, translate, and re-edit 243 pieces of text per story, per character—way too much for our writers, given the short amount of time between commissioning and launch.

This pushed us to develop parallel conversational “tracks” that a user jumps between based on their dialogue choices. This is similar to the string-of-pearls structure used in gaming, where players follow a generally linear narrative with brief moments of freedom. Because it allows users to diverge from the main narrative, the pearl structure gives the illusion of freedom…but because all paths tend to re-converge on the central narrative path, a player’s choices don’t alter the overall course of the game.

Diagram of lines in diamond shape

String of pearls structure.

In the end, instead of using the “string-of-pearls” model, we designed an interaction where users always discuss the same topics, but encounter a friendly or hostile conversational partner depending on their choices. By combining a text-based game with the personalized feedback element of a quiz, we created a structure where choices do matter, as they determine the tone of the conversation and the final feedback.

Diagram of rectangle with arrows going through it

A structure where choices determine the tone and feedback.
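The logic behind that structure is small. In the hypothetical sketch below, each choice nudges a running tone score, and the score decides whether the partner’s next reply comes from the friendly or the hostile track (the field names and scoring are illustrative, not our production code):

```javascript
// Rough sketch of the structure: every exchange covers the same topics, but
// the partner's reply comes from a friendly or hostile "track" depending on
// the player's running tone score. Field names and scoring are illustrative.
const CHOICE_DELTAS = [1, 0, -1]; // positive, neutral, negative

function advanceConversation(line, choiceIndex, toneScore) {
  const score = toneScore + CHOICE_DELTAS[choiceIndex];

  // Same topic either way; only the tone of the reply changes.
  const reply = score >= 0 ? line.friendlyReply : line.hostileReply;

  // The same choice index also selects which piece of feedback to show at the end.
  const feedback = line.feedback[choiceIndex];

  return { reply, feedback, score };
}
```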

Rather than diving straight into building the app after coming up with a data model, we made a quick terminal-based simulation to test it. One thing we quickly realized was that we needed to add logic that allowed users to choose which character to play without re-scripting the entire conversation. This affected both our JavaScript logic and the way we set up our CMS.

A black screen with white text

Setting up.
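A stripped-down version of that simulation, written against Node’s built-in readline module, might look something like this (illustrative, not the exact script we ran):

```javascript
// Terminal-based simulation for testing the dialogue model. Illustrative only.
const readline = require("readline");

const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

// A tiny sample story in the shape of the data model above.
const story = {
  lines: [
    {
      speaker: "You",
      text: [
        "I'd love to hear why this matters so much to you.",
        "So, why do you care about this?",
        "I really don't see why anyone cares about this."
      ]
    },
    {
      speaker: "You",
      text: [
        "That's fair. What would it take to change your mind?",
        "OK. Anything else?",
        "Well, you're just wrong about that."
      ]
    }
  ]
};

function playLine(index, score) {
  if (index >= story.lines.length) {
    console.log(`\nConversation over. Tone score: ${score}`);
    rl.close();
    return;
  }

  const line = story.lines[index];
  console.log(`\n${line.speaker} can say:`);
  line.text.forEach((option, i) => console.log(`  ${i + 1}. ${option}`));

  rl.question("Choose 1, 2 or 3: ", answer => {
    const choice = parseInt(answer, 10) - 1;
    const delta = [1, 0, -1][choice] || 0; // positive raises, negative lowers the score
    playLine(index + 1, score + delta);
  });
}

playLine(0, 0);
```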

Implementing a CMS

Because we had limited time, we decided to use Google Spreadsheets as our content management system rather than building our own.

A spreadsheet with colored rows

This ended up being a good choice for a few different reasons:

  • It allowed us to develop and test the web application while writers continued to revise the dialogue.
  • We were able to clearly represent the idea of “tracks” with positive, neutral and negative columns.
  • Google Sheets’ built-in translation function (GOOGLETRANSLATE) let us test our code’s support for different language versions and its auto-detection of right-to-left scripts.
  • Setting permissions at the sheet and column level gave us some peace of mind once the text was finished.

We used Tabletop JS and a little bit of extra JavaScript to pull the dialogue script from the spreadsheet and reformat it to match the JSON schema we developed. After reviewing the JSON, we then used it to populate the application.
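The glue code for that step is fairly small. Something along the lines of the sketch below, with a placeholder spreadsheet key and made-up column names, pulls each sheet’s rows and reshapes them into the schema:

```javascript
// Simplified sketch of pulling the dialogue script from a published Google
// Spreadsheet with Tabletop JS and reshaping it into our JSON schema.
// The spreadsheet key and column names here are placeholders.
const Tabletop = require("tabletop");

const SPREADSHEET_KEY = "YOUR-PUBLISHED-SPREADSHEET-KEY";

function loadDialogue(onReady) {
  Tabletop.init({
    key: SPREADSHEET_KEY,
    callback: (data, tabletop) => {
      // Each sheet holds one story; each row of a sheet is one line of dialogue,
      // with its cells keyed by column header.
      const stories = Object.keys(data).map(sheetName => ({
        id: sheetName,
        lines: data[sheetName].elements.map(row => ({
          speaker: row.speaker,
          text: [row.positive, row.neutral, row.negative],
          feedback: [row.positiveFeedback, row.neutralFeedback, row.negativeFeedback]
        }))
      }));
      onReady(stories);
    }
  });
}
```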

Web application

We decided to use React JS as the front-end framework for the web application despite having only one team member with prior experience. By dividing our wireframes into views, and dividing those views into components, we were able to build out the entire experience over four weeks. It also gave us a good excuse to learn something new while working on the project.

The success of our application’s development really hinged on how well we were able to plan, communicate and collaborate while working individually across different components. We created a planning document to explore architecture, parts, views, and components for the application, and it became an essential element of the process.

This document was lightweight yet precise, focusing on the high-level design architecture. We sketched out parts and components in a language-agnostic way, specifying the inputs and the output “interfaces” but leaving the technical implementation to a second draft. Producing it helped us evaluate our design and get suggestions on implementation from more experienced teammates.

It was also helpful to develop basic diagrams of the application’s views. This helped us further refine the terminology we used across the document.

A drawing of chat bubbles, rectangles, and emoji

When we finished the project, we also had a description of the architecture ready to add to the wider documentation.
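To give a flavor of how that breakdown translated into React, a view component can compose smaller presentational components and pass data down through props, roughly like this (component and prop names are illustrative, not the ones we shipped):

```javascript
import React from "react";

// Illustrative breakdown of one view into components, in the spirit of the
// planning document. Component and prop names are examples, not the real ones.
const Message = ({ speaker, text }) => (
  <div className={`message message--${speaker}`}>{text}</div>
);

const ChoicePanel = ({ options, onChoose }) => (
  <div className="choice-panel">
    {options.map((option, index) => (
      <button key={index} onClick={() => onChoose(index)}>
        {option}
      </button>
    ))}
  </div>
);

// The view owns the data flow: it renders the conversation so far and passes
// a choice handler down to the panel of options.
const ConversationView = ({ history, currentLine, onChoose }) => (
  <div className="conversation">
    {history.map((message, index) => (
      <Message key={index} speaker={message.speaker} text={message.text} />
    ))}
    <ChoicePanel options={currentLine.text} onChoose={onChoose} />
  </div>
);

export default ConversationView;
```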

Bootstrap

Rather than starting the project with high-fidelity designs, we built the application in parallel with commissioned work from the design agency magneticNorth. This meant that we needed to suspend judgement on the look of the application and concentrate on getting the main functionality working.

We used Bootstrap to handle the layout and to help draft an illustration brief, trusting that everything would come together once we had the designs.

A screenshot from the project's design process

Here’s an example of magneticNorth’s early designs:

Three mobile screens in a row with text and images

And—it worked!

Three mobile-sized screens in a row

Guerrilla User Testing

Since we had a global audience, we felt it was particularly important to be aware of any user experience issues with our final application. So we organized guerrilla user testing in the kitchens of the BBC’s Broadcasting House with three days to go before launch.

To help us get the most out of this testing, we spent a few days identifying what we felt were the worst elements of the user journey. This helped us set out a purpose and methodology before asking busy journalists to play with the application. We also limited the scope of the testing by focusing only on mobile.

We found that using a user research script gave us good insight into what parts of the experience confused our testers. Ours looked something like this:

  • Give participants a brief introduction of the project
  • Ask participants to play through the whole application while talking through their thought process
  • Don’t interrupt participants by asking questions
  • Take notes of anything participants say for each of the app’s main components

We also asked follow-up questions at the end:

  • “Thinking back about this section, what was that button asking you to do?”
  • “What do you think the point of this interactive was?”
  • “Did you notice the colors in the ‘Feedback’ section?”
  • Ask for any final thoughts, comments and questions they might have

We then did a very informal debrief as a team, where we separated the findings from the recommendations. This allowed us to brainstorm possible solutions for each problem more freely.

For example, one important thing we found was that most people weren’t able to tell us what the real point of the interactive was. After evaluating a few options for how to address that, we added introductory text which explained what users were supposed to be doing and mentioned that there would be feedback to look forward to at the end.

We wound up adding a paragraph of context to the landing page for the individual stories just before launch. We were glad to have spotted the oversight before the project went live, rather than after.

Browser Testing

Because our interactive was being developed off-platform and outside the BBC’s CMS, we needed to make it as compatible as possible with different browsers and devices. But we also needed to be strategic and focus on the bugs that would affect the highest proportion of our global audience.

We spoke with other teams within the BBC about the statistics they collect on the most common browsers, devices, and operating systems used by World Service audiences. We then used BrowserStack to tackle cross-browser and device compatibility and Babel’s Polyfill library to ensure compatibility with older devices. For performance, we also used a CDN to make sure that pages loaded in under the recommended three seconds.
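On the compatibility side, the polyfill is typically pulled in once at the application’s entry point, along these lines (@babel/polyfill is one common setup and may not match our exact build):

```javascript
// index.js: application entry point (illustrative).
// Importing the polyfill first makes newer built-ins such as Promise and
// Array.from available on older browsers before any application code runs.
import "@babel/polyfill";

import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

ReactDOM.render(<App />, document.getElementById("root"));
```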

How Did We Do?

While there are always issues we could address if given more time, we are pleased with the piece’s reception and with what we were able to accomplish. Because of the early language testing we did, we were able to re-version the interactive into six languages over the course of two weeks. Three weeks after publishing, the three stories had a 50% to 60% completion rate across all languages. We’ve also been approached by other departments in the BBC that are interested in adapting this style of dialogue interaction for future projects. We’re planning to open-source the code in the coming weeks.

Credits

  • Luke Hutton

    Luke Hutton is a Software Engineer at BBC News Labs. He was previously a Research Fellow examining ethical and privacy issues in social network sites.

  • Fionntán O’Donnell

    Fionntán O’Donnell is an AI Research Engineer at BBC News Labs.

  • Pietro Passarelli

    Pietro Passarelli is a software developer and documentary filmmaker. He is passionate about projects that sit at the intersection of software development and video production, both in terms of the growing trend of interactive documentaries and as tools for making video production and post-production easier, such as autoEdit. While working in broadcast documentaries for the BBC and C4, Pietro noticed the convergence of video production and software development and completed an MSc in Computer Science at UCL. He worked as a newsroom developer at The Times and Sunday Times, where he developed quickQuote, an open-source project that makes it easier and faster for journalists to identify and create interactive video quotes. At Vox Media, as one of the 2016 Knight-Mozilla fellows, he worked with the product team and storytelling studios on autoEdit to make video production faster, easier, and more accessible across the Vox Media brands. He is currently a senior software engineer at The New York Times.

  • Alli Shultes

    Alli Shultes is the social media and outreach producer for BBC News Labs. She also regularly reports on technology for the BBC News website. She has previously worked as a digital producer for Al Jazeera English and as a computing teaching assistant on the MA/MS Digital Journalism course at Goldsmiths, University of London.
