When Bots Get Together: Part 2

Our code convening in Austin brought together nine teams working on bots and automation. Here’s what they made.

When bots gather, amazing things happen. (jeffedoe)

See part one of our bot convening roundup for lots more about the event itself and the first batch of bot-centric projects our teams wrapped up in Austin.


Carebot

Team: Matt Hampel, NPR/LocalData; Livia Labate, NPR; Ryan Murphy, Texas Tribune

What It Is: Carebot started as an effort to think about alternative ways to look at analytics for journalism: both the measures and indicators used to understand story impact, and the way analytics data is used in the newsroom. It’s meant for journalists and teams looking for insights to enhance stories to better serve users in the real world, to evolve storytelling methods, and to celebrate success within the newsroom.

Bot Convening Progress:

  1. Implementing the Carebot tracker and Slackbot at the Texas Tribune.
  2. Writing implementation documentation and other supplementary documentation.

How to contribute to the project.


TopBook

Team: Jeff Kramer, Vox Media

What It Is: TopBook is a lightweight, open-source version of a tool we use to track article popularity inside of Vox. Sometimes it can be challenging to answer exactly what’s popular and how popular it is. This tool allows you to quickly query what’s been commented on, shared, and liked most across single and multiple Facebook pages. It’s made for journalists who are curious about what people are reacting to, and how exceptional those reactions are.

Bot Convening Progress: Added a lot of polish to the user experience, added the ability to return multiple sorted results and to calculate relative popularity compared to the page average, wrote documentation, and made tooling improvements. Also added the ability to easily deploy on Heroku.
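The “relative popularity” comparison can be sketched roughly like this. This is a minimal illustration of the idea, not TopBook’s actual code — the post fields and function names are invented for the example:

```python
# Hypothetical sketch: compare one post's engagement to the average
# engagement across all posts fetched from a page.

def engagement(post):
    """Total reactions for a post: comments + shares + likes."""
    return post["comments"] + post["shares"] + post["likes"]

def relative_popularity(post, page_posts):
    """How a post compares to its page's average (1.0 = exactly average)."""
    average = sum(engagement(p) for p in page_posts) / len(page_posts)
    return engagement(post) / average if average else 0.0

posts = [
    {"comments": 10, "shares": 5, "likes": 85},    # 100 total
    {"comments": 40, "shares": 60, "likes": 200},  # 300 total
]
print(relative_popularity(posts[1], posts))  # 300 / 200 average → 1.5
```

A value above 1.0 means a post is outperforming its page’s average — which is what makes a reaction “exceptional” rather than merely large.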

Please do try it out and report bugs, and suggest other interesting analysis that could be done if you had a few days’ worth of Facebook posts sitting in a data structure.

Jennifer A. Stark at podium

Jennifer A. Stark presenting at the code convening. (Ivar Vong)

Anecbotal NYT

Team: Jennifer A. Stark, Merrill College of Journalism

screencap of James Gleick retweeting an AnecbotalNYT tweet

An AnecbotalNYT tweet in action

What It Is: AnecbotalNYT is a Twitter bot that connects NYT commentary to people tweeting about NYT articles. Before the Code Convening, it listened to the Twittersphere only for tweets containing “nytimes.com.” If someone’s tweet contained a “nytimes.com” article link, there was a chance the bot would respond to that person with a comment curated from that article’s comments. If comments existed related to that “nytimes.com” link, and a comment passed various editorial checks including readability, length, and how anecdotal it was (based on a personal-experience score developed as part of the Comment IQ project), the bot created a PNG image of that comment and tweeted it back to the person who originally shared the link, with “.@” in the reply so everyone following the bot would see it. Humans have been observed interacting with the bot’s tweets: replying to the tweet, retweeting the bot, or replying to someone else’s reply to the bot (effectively creating a conversation between two real people as a result of the bot’s tweet).

We hoped that if we abstracted it well, other news organizations could use it to surface comments and engage Twitter followers around the comments people leave on their news sites. The images generated should be modified to reflect the visual style of the news org (typeface, colors, logos).
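The editorial checks described above can be sketched as a simple filter over a batch of comments. This is a hypothetical illustration, not the bot’s actual code — the thresholds, field names, and helper names are assumptions:

```python
# Hypothetical sketch of the bot's editorial gate: a comment must clear
# length and "anecdotal" (personal-experience) thresholds before it is
# rendered as an image and tweeted. Threshold values are invented.

MIN_LENGTH = 140       # characters; skip one-liners
MAX_LENGTH = 700       # characters; keep the rendered image readable
MIN_ANECDOTAL = 0.5    # personal-experience score, 0..1

def passes_checks(comment):
    """True if a comment clears the editorial thresholds."""
    text = comment["text"]
    if not (MIN_LENGTH <= len(text) <= MAX_LENGTH):
        return False
    if comment["anecdotal_score"] < MIN_ANECDOTAL:
        return False
    return True

def pick_comment(comments):
    """Return the most anecdotal comment that passes all checks, or None."""
    eligible = [c for c in comments if passes_checks(c)]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["anecdotal_score"])
```

In the real bot a readability check runs alongside these, and the winning comment goes on to the PNG-rendering and tweeting steps.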

Bot Convening Progress: Originally the bot would only listen to comments from the NYT’s bespoke commenting platform. In those two days we:

  • Got the bot to interact with the Disqus comment system commonly used by news organizations.
  • Improved image formatting options (e.g. font style, font color, background color, logo watermark).
  • Parameterized font characteristics to accommodate scenarios in which a news org has a signature font (before, only one font was supported).
  • Checked and cleaned up dependencies.
  • Updated requirements.txt.
  • Added a README.md description of using this inside a virtual environment, and the oddities of working with Disqus comment systems in different news organizations.

In addition, there is a separate image-testing script so people can test their tweeted image output separately from testing the Disqus forum options and actually tweeting it out! Handy.

What we’re up to next that developers could help with:

  • Adding editorial decisions to the config file so they can be parameterized, e.g. thresholds for things that are currently hard-coded like readability, length, and anecdotal score.
  • Ability to create different dictionaries. Currently, terms referencing people and relationships make up the “anecdotal” dictionary from which we calculate the anecdotal score. What if a news org wanted a different dictionary, like sports or finance, to surface certain kinds of comments?
  • The config file is a bit clunky, so if there were a more elegant way to do it, that would be nice.
  • We would like to replace the double hyphen at the end where we cite the comment author with a single em dash.
  • Maybe add top-left and top-center logo/watermark options…
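The pluggable-dictionary idea above could be sketched like this — score a comment by how many of its words come from a chosen word list. The word lists and the scoring formula here are illustrative assumptions, not the project’s actual implementation:

```python
# Hypothetical sketch: swap the "anecdotal" dictionary for any other
# word list (sports, finance, ...) to surface different kinds of comments.

ANECDOTAL = {"i", "my", "mother", "father", "friend", "wife", "husband"}
FINANCE = {"stock", "market", "loan", "debt", "interest", "bank"}

def dictionary_score(text, dictionary):
    """Fraction of words in `text` that appear in `dictionary`."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in dictionary)
    return hits / len(words)

comment = "My mother took a loan from the bank"
print(dictionary_score(comment, ANECBOTAL := ANECDOTAL))  # 2 of 8 words → 0.25
print(dictionary_score(comment, FINANCE))                 # 2 of 8 words → 0.25
```

A config option naming the active dictionary would then let a news org choose which flavor of comment the bot surfaces.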

(Working branch here.)

Albert Sun at podium

Albert Sun presenting at the code convening. (Ivar Vong)

Huginn Newsroom Scenarios

Team: Albert Sun, New York Times

What It Is: Huginn (last seen on Source during the first #botweek) automates common simple tasks for newsroom users. It’s easier than writing lots of small cron jobs for everything, and it lowers the bar for non-developer users to automate tasks and scrape websites.

Bot Convening Progress: I wrote documentation and scenarios explicitly for newsroom users.

Now Huginn needs more journalists and other developers to try using it and document confusing points in the UI—and contribute code, useful agents, and documentation back to the project.


Stevedore

Team: Jeremy B. Merrill, New York Times

What It Is: Everybody gets document dumps—from emails released by politicians to leaked piles of documents—and everybody tracks social media from local, state or domain-specific officials. Stevedore is a tool for quickly giving reporters simple search access to those documents and posts, regardless of format. In about 10 minutes, with defaults, you can generate an 80% solution; in maybe an hour, through a template framework, you can get to the 100% solution for many document dumps. In my experience in the newsroom, big piles of stuff often show up, and search is the simplest way of letting reporters use their domain knowledge to separate the wheat from the chaff. Stevedore is built to make that process easier. (Uploader repo.)

Bot Convening Progress: Made the installation process easier, and experimented with bundling social-media scrapers with Stevedore, so one “out-of-the-box” search engine would let a user add the social media handles of local officials (school board, city council, state leg, etc.) and search an archive of their posts (even if they’re deleted later). Design help would be great, as would ideas on how to improve installation.

Thank you to all the participants and volunteers who brought code and worked on projects at the bots code convening in Austin. And a huge thanks to the newsrooms and other organizations who freed up people’s time and were willing to share the ideas behind a new set of tools for the journalism community.

Code convenings are an ongoing series of OpenNews events, and we try to put together two or three each year. If you have a project that’s making a difference in your newsroom, and you’d be interested in bringing it to a code convening and working on an open-source release, please tell us about it! Or if your newsroom is trying to solve problems on a particular theme and you’d like to see more software options to explore, we’d love to hear those ideas too.




