When Bots Get Together: Part 1
Our code convening in Austin brought together nine teams working on bots and automation. Here’s what they made.
Code convenings have been regular events on the OpenNews calendar for a little more than two years now, each of them bringing a small group of designers and developers together to work on projects that fit a particular theme. Given a chance to step away from normal routines and daily deadlines, participants spend a couple days writing code and documentation before releasing fresh open-source projects and updates into the journalism community.
Our code convening in Austin last week might have had the most timely theme yet. Teams brought projects on bots and automation, during a spring when Facebook is launching a bot platform, frameworks like Howdy are making chat interfaces easier to build, and conferences like ISOJ are putting together panels on bots. The Austin event was definitely our largest so far, with 14 participants and two volunteers working on nine projects. It was a fantastic mix of people: developers and designers from news organizations of all sizes, as well as from fields like education, finance, and civic tech. We even had our first international team.
The mix of work was equally diverse. Some projects had been running in their newsrooms for a year or more, and developers used their time to write better docs or build out new features. Others brought brand-new code that's being released for the first time. We're excited to let each team introduce their work here, and we hope you find new ways to use these tools in your newsroom too.
Team: Andy Rossback, Tom Meagher, and Ivar Vong of The Marshall Project
What It Is: Klaxon lets reporters and editors monitor sites and files on the web and get notified via email or Slack of newsworthy changes. Two years ago, the team at The Marshall Project wanted a minimal, web-based scraper to watch dozens of local government pages for changes. This newest version of Klaxon is intended to let newsroom users who don't code set it up easily and start watching pages for free.
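The core idea behind a monitor like this can be sketched in a few lines: fetch a page, hash its content, and compare the hash against the one stored from the last run. (This is an illustration of the general technique, not Klaxon's actual code, and the function names here are invented.)

```python
import hashlib


def fingerprint(content: str) -> str:
    """Hash the page content so only a short signature needs storing."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest()


def check_for_change(stored_hash, new_content):
    """Return (changed, new_hash).

    A real monitor would fetch the page on a schedule, extract just the
    watched element, and send an email/Slack alert when changed is True.
    """
    new_hash = fingerprint(new_content)
    return new_hash != stored_hash, new_hash


# First run: store a baseline fingerprint of the page.
_, baseline = check_for_change(None, "<h1>Inmate roster: 412 names</h1>")

# Later run with identical content: no alert.
changed, _ = check_for_change(baseline, "<h1>Inmate roster: 412 names</h1>")
print(changed)  # False

# The page updates, the hash differs, and an alert would fire.
changed, _ = check_for_change(baseline, "<h1>Inmate roster: 415 names</h1>")
print(changed)  # True
```

Hashing only the element a reporter cares about, rather than the whole page, is what keeps a tool like this from firing on ads and timestamps.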
Bot Convening Progress: Scraping web pages is nothing new, but building a web-based tool that non-programmers can deploy and use was a bit tricky. We spent a lot of time working on deployment and user documentation, as well as building the interface to find and target pages to watch with less-technical users in mind. We're happy with where we've landed on the design and the deployment procedures. We still have a little work to do to finish the bookmarklet that lets reporters watch specific elements on a web page.
Klaxon isn’t quite ready for widespread use, but we’re lucky to have a couple people already helping out, including contributions from Ryan Murphy of the Texas Tribune and Jeremy Merrill of the New York Times, who were both at the convening. These early collaborators will be really useful in figuring out how others want to use Klaxon.
At the code convening, we laid out advice in our repo for people who'd like to contribute to the project. When we finalize this release in the coming days, we'll also be posting issues to GitHub that journalist-developers can help with. We want anyone to be able to contribute, even if you don't code, so we'll definitely need beta testers outside our newsroom to help kick the tires on the web interface.
Tom has written extensive docs on how to get it deployed on Heroku, and we have some early work on Docker-based deployments.
Let us know if you’re interested in helping by signing up at newsklaxon.org and we’ll let you know when we’re ready for you.
Team: Ariana Giorgi, Chronicle of Higher Education; Sara Simon, Vermont Public Radio
What It Is: Reliable Sources is a command-line tool that uses the Twitter API’s geolocation search to discover reliable, on-the-ground tweets during a breaking news event. Users can search by location, with an option to narrow the search with a hashtag or Twitter handle. When run, the program funnels tweets into a timestamped database in the user’s Google Drive. This allows for an archive of tweets and associated metadata—the Twitter handle, that person’s number of followers, the number of favorites and retweets that the tweet had when the command was run, etc.—from the users most likely to be the type of reliable source that a newsroom might want to contact.
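The archiving step described above—flattening each tweet into a row of metadata plus a run timestamp—can be sketched roughly like this. The field names follow the shape of the JSON that Twitter's search API returns; the row layout is an illustration, not the project's exact schema.

```python
from datetime import datetime, timezone


def tweet_record(tweet: dict) -> list:
    """Flatten one tweet into an archive row: handle, follower count,
    favorites, retweets, and the timestamp of this run."""
    return [
        tweet["user"]["screen_name"],
        tweet["user"]["followers_count"],
        tweet["favorite_count"],
        tweet["retweet_count"],
        datetime.now(timezone.utc).isoformat(),
    ]


# Sample search results, shaped like the Twitter search API's JSON.
tweets = [
    {"user": {"screen_name": "eyewitness1", "followers_count": 250},
     "favorite_count": 12, "retweet_count": 40},
    {"user": {"screen_name": "local_reporter", "followers_count": 9800},
     "favorite_count": 85, "retweet_count": 210},
]

rows = [tweet_record(t) for t in tweets]
for row in rows:
    print(row[0], row[1])
```

Because the favorite and retweet counts are snapshots, re-running the command later produces new rows rather than overwriting old ones, which is what makes the Google Drive archive useful for tracing how a source's tweets spread.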
Bot Convening Progress: We’d set up a repo and started the bare bones of the project during the weekend prior to the code convening, but the majority of the work happened in those two days. Ariana tackled the bulk of the Twitter API work and Sara took on the bulk of the Google Drive work. We prioritized documentation and clean code together.
The next step for this project will be to set it up as an app so that its use is not limited to command-line users.
Team: Sandra Barrón, Miguel Morán, Noé Dominguez of Civica Digital
What It Is: La Refinadora is a tool for creating validators of open-data standards, helping governments, journalists, and other users clean and standardize data for analysis. We had already set up the government version; now we want to build a journalism-focused Refinadora. The idea came from the open-data team within Mexico's National Digital Strategy, but we realized the tool could also be useful to journalists on data-driven investigations. La Refinadora helps solve the problem of data validation and quality assurance for open-data datasets.
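A validator of the kind described here boils down to checking an uploaded dataset against a schema: required columns plus a per-column rule, reporting every cell that fails. The schema and column names below are invented for illustration; La Refinadora's real validators are configured against actual open-data standards.

```python
import csv
import io

# A toy "standard": required columns and a per-column check.
SCHEMA = {
    "municipality": lambda v: bool(v.strip()),          # non-empty
    "year": lambda v: v.isdigit() and len(v) == 4,      # four-digit year
    "amount": lambda v: v.replace(".", "", 1).isdigit() # non-negative number
}


def validate(csv_text: str):
    """Return a list of (row_number, column, value) problems."""
    reader = csv.DictReader(io.StringIO(csv_text))
    missing = [c for c in SCHEMA if c not in (reader.fieldnames or [])]
    if missing:
        return [("header", c, None) for c in missing]
    errors = []
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        for col, check in SCHEMA.items():
            if not check(row[col]):
                errors.append((i, col, row[col]))
    return errors


data = "municipality,year,amount\nOaxaca,2016,1200.50\nPuebla,20x6,300\n"
print(validate(data))  # [(3, 'year', '20x6')]
```

Returning row-and-column locations, rather than a simple pass/fail, is what makes a validator useful for cleaning: the publisher gets a to-do list instead of a rejection.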
Bot Convening Progress: We worked on a new web interface for the validator that lets you upload datasets. It's still a work in progress, but a first stable version will be available soon.
We could use help creating a Dockerized version of the validator, and feedback on the design of the validator engine.
Team: Neil Bedi, independent
What It Is: typecaster began as a simple weekend hack to play with IBM Watson's APIs and see whether turning text into podcasts was even feasible. Given Watson's API capabilities and pricing, it can actually improve the accessibility of articles and briefings at little cost. The final product (see demo) ends up being far better than the output of most screen readers.
Bot Convening Progress: I cleaned up core functionality, produced docs, and got full test coverage. Next step is to build a web UI so non-technical users can make and refine podcasts with ease.
I'd love to hear about features or functionality I missed, and anyone who's interested is welcome to help build the web app or just test it when it's ready.
Tomorrow we’ll share the rest of the projects from the Austin code convening. Until then…