
Choose Your Own Mad Libs (or, how you can plug data into automated stories and free up lots of reporting time)

From housing prices to weather to wages, template systems can instantly generate hundreds of stories from the numbers people care about


Cabinet drawers with various vintage letterpress elements. Photo by Erik Mclean.

What if you could click on a few things, run a few computer commands, and generate hundreds of stories?

You can. I did. I drafted more than 1 million words under my byline in a month.

The approach I used is called natural language generation. It’s distinct from the generative AI approaches that get much of the chatter these days, and that can also result in headlines about $100 billion in losses when the black-box AI approach takes a wrong turn.

The NLG approach I’m discussing is centered on templates much like the Mad Libs you did as a kid, with some Choose Your Own Adventure thrown on top.

These tools can inform your readers, help with subscriptions, and free up reporters to cover critical subjects with nuance and depth. The stories are not a replacement for good journalism; they’re a way to deliver good journalism while letting people with limited resources focus on the work that can’t be automated. And the tools aren’t a replacement for people, but a way to shift some of their work and use these stories as a starting point.

In other words: Let the computers do the stupid, repetitive stuff.

Making your own Mad Libs-style templates for news

What kind of stories can you do this way? Quite a few, if you have the data. You’ll want to prioritize by readership potential, reusability, and how much work it takes to get to launch.

I’ll suggest four Ws: Where people live, What people do, Wallet concerns, and Weather. So you might consider stories around housing prices, restaurant inspections, weather watches and warnings, employment and wages, gas prices, and more.

Here’s a quick example of the kind of story this approach works well for:

Gas prices in {statename} averaged ${formatnumber(latestprice, 2)}
in the week ending Monday,
{if weekchange > 8, "leaping {weekchange} cents"}
{elseif weekchange > 1.2, "rising {weekchange} cents"}
{elseif weekchange > -1.2, "flat"}
{elseif weekchange > -8, "falling {weekchange} cents"}
{else "plummeting {weekchange} cents"} from a week earlier.

You’re looking to match properly formatted data with reasonable wording that includes appropriate verbs. A 2-cent decline in the price of gas shouldn’t be called “plummeting,” but a steep enough decline probably should be, and journalists should decide where those levels begin and end.
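
Here’s roughly what that threshold logic might look like in code. This is a minimal sketch in Python, not the templating system I used; the function names and thresholds are illustrative, and the absolute value keeps a decline from reading as “falling -3 cents.”

def change_phrase(week_change_cents: float) -> str:
    """Pick a verb phrase for a week-over-week change in gas prices.

    Thresholds are illustrative; journalists should set the real ones.
    """
    cents = abs(round(week_change_cents))
    if week_change_cents > 8:
        return f"leaping {cents} cents"
    elif week_change_cents > 1.2:
        return f"rising {cents} cents"
    elif week_change_cents > -1.2:
        return "flat"
    elif week_change_cents > -8:
        return f"falling {cents} cents"
    return f"plummeting {cents} cents"


def gas_price_sentence(state_name: str, latest_price: float, week_change_cents: float) -> str:
    return (
        f"Gas prices in {state_name} averaged ${latest_price:.2f} "
        f"in the week ending Monday, {change_phrase(week_change_cents)} "
        "from a week earlier."
    )


print(gas_price_sentence("Georgia", 3.09, -3.4))
# Gas prices in Georgia averaged $3.09 in the week ending Monday,
# falling 3 cents from a week earlier.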

If you want to start writing (hundreds of) stories like this (by pushing a button), you’ll need to consider a bunch of things.

First, and most important: If the data is wrong, the stories will be wrong, too. As Mitch Ratcliffe said, “A computer lets you make more mistakes faster than any invention in human history, with the possible exceptions of handguns and tequila.” You need solid safeguards, near-flawless editing, and a thorough understanding of when your data may be unreliable or even unrecorded.
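
What those safeguards look like will depend on your data, but even blunt checks catch a lot. Here’s a minimal sketch, with hypothetical field names and plausibility bounds, that refuses to write anything if a record is missing or obviously wrong:

def validate_prices(records):
    """Return a list of problems found in weekly gas-price records.

    Field names ("state", "week_ending", "avg_price") and the plausibility
    bounds are hypothetical; adapt them to what your source actually provides.
    """
    problems = []
    for rec in records:
        state = rec.get("state", "UNKNOWN")
        price = rec.get("avg_price")
        if price is None:
            problems.append(f"{state}: missing price for {rec.get('week_ending')}")
        elif not 1.00 <= price <= 10.00:
            problems.append(f"{state}: implausible price {price}")
    return problems


sample = [
    {"state": "Georgia", "week_ending": "2023-06-05", "avg_price": 3.09},
    {"state": "Florida", "week_ending": "2023-06-05", "avg_price": None},
]
issues = validate_prices(sample)
if issues:
    raise SystemExit("Refusing to write stories:\n" + "\n".join(issues))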

Here are the other things I keep in mind in developing templates like this:

Data collections

  • There are tradeoffs between timeliness, frequency, accuracy, and geographic detail. For example, unemployment figures: You’ll find national data weekly, state data monthly, and metro-level data every other month. And you might find details on specific industries at a quarterly level but only after a painful delay.
  • Data may change or be discontinued entirely, as with the COVID Tracking Project’s collection of testing data in 2021, or the Johns Hopkins University and New York Times case and death data in 2023, or the CDC adopting different measures.
  • Geography gets complicated, and you need to consider one-to-many relationships for each data item. In most places you’ll want a prioritized list of the county identifiers known as FIPS codes for your circulation area (see the sketch after this list); states like Virginia (independent cities) and Massachusetts (all but requiring town-level data) will throw wrenches at you.
  • You may also want to track things like social media handles, URLs for subscription sites and data sites, and human-friendly names for metro areas.
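
One way to keep that geography straight is a small lookup that maps each outlet to a prioritized list of FIPS codes plus the human-friendly names you’ll actually print. The outlet keys below are made up, and you should verify codes against the Census Bureau’s published FIPS list; this is just a sketch of the structure:

COVERAGE = {
    # Outlet keys are hypothetical; verify county codes against the
    # Census Bureau's FIPS list before trusting them.
    "example-atlanta": {
        "fips": ["13121", "13089", "13067"],  # Fulton, DeKalb, Cobb counties, Ga.
        "metro_name": "metro Atlanta",
        "state": "GA",
    },
    "example-phoenix": {
        "fips": ["04013"],                    # Maricopa County, Ariz.
        "metro_name": "the Phoenix area",
        "state": "AZ",
    },
}


def counties_for(outlet):
    """Return the prioritized FIPS codes for an outlet, most important first."""
    return COVERAGE[outlet]["fips"]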

Technology hurdles

  • In drafting your story templates, you need to consider tensions between usability and complexity. A set of county-level stories can be simpler to produce, but may not be as interesting or relevant as a story that fleshes out a newspaper’s whole coverage area. Comparison points are also important here: You may need to bring in data from more than one state, as well as national figures.
  • How do you get the stories to your editors? You might get away with generating text that can be copy-pasted into your CMS, but that makes it less likely editors will actually use the stories. Inserting stories with automation might not let you set headlines or URL endings. Can you generate and send localized images? Are you only contemplating one-time story sends, or can you push corrections or updates to evolving stories like hurricane previews? Under what conditions, if any, do humans get removed from the loop so stories publish automatically? (A minimal sketch of automated delivery follows this list.)
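
If your CMS exposes an API, delivery can be as simple as posting each rendered story as a draft for an editor to pick up. The endpoint and fields below are entirely hypothetical; the point is that keeping humans in the loop can come down to a single flag:

import requests  # third-party: pip install requests

CMS_ENDPOINT = "https://cms.example.com/api/stories"  # hypothetical URL


def send_story(headline, slug, body_html, publish=False):
    """Post one generated story to a (hypothetical) CMS, as a draft by default."""
    payload = {
        "headline": headline,
        "slug": slug,            # the URL ending, set here instead of by the CMS
        "body": body_html,
        "status": "published" if publish else "draft",  # humans stay in the loop
    }
    resp = requests.post(CMS_ENDPOINT, json=payload, timeout=30)
    resp.raise_for_status()      # fail loudly rather than silently dropping a story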

Quality considerations

  • What historical context can you add? In weekly gas prices, for example, you might want to say something like, “Over the last year, prices ranged from $2.69 on March 28 to $4.48 on Sept. 15.” How does your current real estate market compare to a longer-term average, in terms of housing inventory? (“It’s a hot market in Tempe.”)
  • Good verbs help, but remember the muddle of the middle. It’s easy to write conditional language around “grows” and “falls,” but “leaps” and “plummets” help, too. Comparisons that are truly flat, or just barely edging up or down, should probably be called flat.
  • You need strong guardrails on your data and templating (a sketch of such guardrails follows this list). If a thing moves from 1 to 2, are you going to report a 100% increase, or use the raw numbers? Will you spike stories if there are insufficient counts to ensure your story has validity? What precise issues of quality would lead you to drop a community’s paragraph, or an entire story?
  • When will you stop a story? Stories on gas prices and unemployment are exciting at times and boring at other times. What metrics, or whose judgment, determines when a story gets paused or restarted?
  • What happens when the data has a lot of overhead? COVID-19 data has been troubled by erratic reporting, data dumps, errors, changes in reporting schedule, and discontinued data series. What staff can properly supervise the process at appropriate hours? What if something goes wrong?
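
Many of those guardrails boil down to small functions that either soften the language or refuse to produce a sentence at all. A minimal sketch, with illustrative thresholds:

def percent_change_text(old, new, min_base=10):
    """Describe a change, guarding against misleading percentages.

    Thresholds are illustrative. Returning None signals "drop this sentence"
    rather than publish something shaky.
    """
    if old is None or new is None:
        return None                 # missing data: skip the sentence entirely
    if old < min_base:
        # A move from 1 to 2 is technically a 100% increase; report raw numbers.
        return f"went to {new:g} from {old:g}"
    change = (new - old) / old * 100
    if abs(change) < 0.5:
        return "was essentially unchanged"
    direction = "rose" if change > 0 else "fell"
    return f"{direction} {abs(change):.1f}%"


print(percent_change_text(1, 2))        # went to 2 from 1
print(percent_change_text(200, 260))    # rose 30.0%
print(percent_change_text(None, 260))   # None -> drop the paragraph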

People

  • What do you tell readers about how these stories were written? Whose contact information is included for accountability?
  • What staff can you find, train, and support? You may need people who understand your infrastructure, your markets, your SEO standards, writing, and editing. You’ll likely need at least one coder to shape the data for your templating system and to bring in multiple geographies and the historical context. Someone needs to build the template. And then you’ll need at least two people who can actually process the data and keep it moving, in case one is on vacation.

The effort up front is worth it

You’ll spend a significant amount of time nitpicking these things and ensuring templates are as good as they can be, with no errors. It’s a lot of work to ensure you’ve got number-to-word situations straightened out (“one death,” “1,234 cases”) and have error handling for when numbers that must be positive somehow come through as negative. And then you’ll need to make sure you’ve got $ and % signs everywhere they’re supposed to be.
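
Small formatting helpers pay for themselves quickly here. A minimal sketch; the spell-out-below-10 rule is one common style choice, not a standard, and negative counts fail loudly instead of sneaking into print:

WORDS = ["zero", "one", "two", "three", "four",
         "five", "six", "seven", "eight", "nine"]


def count_phrase(n, singular, plural):
    """Format counts the way an editor expects: "one death", "1,234 cases"."""
    if n < 0:
        raise ValueError(f"count cannot be negative: {n}")  # don't publish nonsense
    word = WORDS[n] if n < 10 else f"{n:,}"
    noun = singular if n == 1 else plural
    return f"{word} {noun}"


print(count_phrase(1, "death", "deaths"))   # one death
print(count_phrase(1234, "case", "cases"))  # 1,234 cases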

You’ll also need to ensure quality as you build a whole pile of conditionals in your story under that Choose Your Own Adventure-style motif. If the data is good, you’re generating this section of the story; if this number is more than 15%, you’re putting this bit before that other bit; if that other number is at least 8%, you’re using this stronger verb. I once built a tiny function that kept track of every conditional path and kept a sample of each result; that thing slowed down my workflow enormously, but it made editing far more thorough.
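
I no longer have that function to share, but the idea might look something like this sketch: record which branch fired at each decision point, then keep one rendered sample per unique path so an editor can review every combination the template can produce.

class PathTracker:
    """Track which conditional branches fired; keep one sample story per path.

    A sketch of the idea, not the original code: call choose() at each branch,
    then finish() once per story to bank a sample for that combination.
    """

    def __init__(self):
        self.samples = {}       # tuple of branch labels -> one example story
        self._current = []

    def choose(self, label, value):
        self._current.append(f"{label}={value}")
        return value

    def finish(self, rendered_story):
        path = tuple(self._current)
        self.samples.setdefault(path, rendered_story)  # keep the first example
        self._current = []


# Hypothetical usage inside the story loop:
# tracker = PathTracker()
# verb = tracker.choose("price_verb", change_phrase(week_change))
# ...
# tracker.finish(story_text)
# for path, sample in tracker.samples.items():
#     print(path, "->", sample[:80])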

And in the end? I could fire up my program, run a single command, and write hundreds of stories in three minutes. After a few quality checks, I could have them to editors a minute later.

These automations put several hundred stories in front of editors within 10 minutes, start to finish. It took a lot of work to build the launch pads. But we were able to inform readers, collect subscriptions, and free up reporters, at scale, week after week.

I couldn’t have done this without the mentorship, support, and hands-on help of literally dozens of people, and I probably can’t list everyone here. I’m hoping that by sharing some of what I’ve learned, other journalists may benefit.
