Paul Overberg explains base tables and how to get the best data from them (hint: ask good questions!).
Jacqui Maher says it’s not just the numbers, it’s what they mean about the audience.
People tweet what they think, when they think it, and we wanted our visualization of the State of the Union speech to reflect that. This wouldn’t be a (shudder) word cloud based on word frequencies, but a way to track the conversation on Twitter as it was directly influenced by the President’s speech.
Jonathan Stray’s guide to turning documents into data you can run with.
Tyler Dukes on combining the power of data-sorting tools with old-fashioned digging.
As part of the orientation week for the 2014 class of Knight-Mozilla OpenNews Fellows, fellow nerd-cuber Mike Tigas and I led a hackathon at Mozilla’s headquarters in San Francisco…
At the Chicago Tribune, we had a simple goal: to automatically tweet contributions to Illinois politicians of $1,000 or more, which campaigns are required to report within five business days. The point was to see, in something approximating real time, which campaigns are bringing in the big bucks and who those big-buck-bearers are. The Illinois State Board of Elections (ISBE) has helpfully published exactly this data online for years, in a format that appears to have changed very little since at least the mid-2000s. There’s no API for this data, but the stability of the format is encouraging. A scraper is hardly an ideal tool for anything intended to last for a while and produce public-facing data, but if we can count on the format of the page not to change much over at least the next several months, it’s probably worth it.
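The filtering step behind the tweets can be sketched in a few lines. This is a minimal illustration, not the Tribune’s code: the field names and tweet format are hypothetical, and the rows stand in for records a scraper would pull from the ISBE pages.

```python
# Hypothetical rows as a scraper might parse them from the ISBE
# contribution pages (field names are illustrative, not the real schema).
contributions = [
    {"committee": "Friends of Example", "donor": "Acme PAC", "amount": 5000.0},
    {"committee": "Citizens for Example", "donor": "Jane Doe", "amount": 250.0},
]

# Campaigns must report contributions of $1,000 or more within five business days.
THRESHOLD = 1000.0

def tweets_for(rows):
    """Format a tweet for each contribution at or above the threshold."""
    return [
        f"{r['committee']} reported ${r['amount']:,.0f} from {r['donor']}"
        for r in rows
        if r["amount"] >= THRESHOLD
    ]

print(tweets_for(contributions))
# The $250 contribution falls below the A-1 threshold and is skipped.
```

In a real scraper, deduplication matters too: the same filing can show up on successive fetches, so you would track already-tweeted record IDs before posting.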
The U.S. Treasury’s Daily Treasury Statement lists actual cash spending, down to the million, for everything the government spent money on each day, as well as how it funded that spending. But the Treasury only releases these files as PDFs or fixed-width text files like this one, making any analysis very difficult.
To liberate the data and make it easy to analyze federal money flows across time, we created Treasury.IO. The system we built downloads and parses the fixed-width files into a standard schema, creating a SQLite database that can be directly queried via a URL endpoint.
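The core of that pipeline is mechanical: slice each fixed-width line into named fields, normalize the numbers, and load the rows into SQLite. Here is a minimal sketch of that idea; the column positions and field names are invented for illustration and do not match the real DTS layout, which varies by table.

```python
import sqlite3

# Hypothetical fixed-width layout: (field name, start column, end column).
# The real Daily Treasury Statement files use different, table-specific positions.
FIELDS = [("item", 0, 40), ("today", 40, 52)]

def parse_line(line):
    """Slice one fixed-width line into a dict of stripped field values."""
    return {name: line[start:end].strip() for name, start, end in FIELDS}

def to_number(text):
    """Normalize a comma-grouped amount like '12,345' to an integer."""
    return int(text.replace(",", ""))

# Build a sample line matching the assumed layout.
line = "Total Withdrawals".ljust(40) + "12,345".rjust(12)

row = parse_line(line)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dts (item TEXT, today INTEGER)")
db.execute("INSERT INTO dts VALUES (?, ?)", (row["item"], to_number(row["today"])))
total, = db.execute("SELECT SUM(today) FROM dts WHERE item LIKE 'Total%'").fetchone()
```

Once every daily file lands in one schema like this, questions across time become single SQL queries, which is what exposing the database behind a URL endpoint makes possible.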
Jacob Harris on the hows and whys of designing interactives to survive the future.
Alan Palazzolo on how the MinnPost team rocks it without a big budget.
Joe Germuska on the iterative, human-centered process that’s made the new Census Reporter project especially awesome.
The Center for Investigative Reporting continues their work visualizing Department of Veterans Affairs’ data. Here, they discuss their development process.
Chase Davis lays some data science on us to change how you think about the questions you’re asking of your data.
As the government shutdown grinds into its third day, many news developers, civic data hackers, and open gov activists are starting to feel the hurt from the suspension of most government data feeds, APIs, and websites. Here’s how they’re adapting and collaborating to fill the gaps.
Jake Harris opens a serious barrel of monkeys about when and how to issue corrections for data journalism.
Matt Waite on what to do when things don’t work out like you planned.
The Wall Street Journal’s Jeremy Singer-Vine recently released Reporter, an open source tool that makes it easy to hide and reveal the code behind common forms of data visualization presented on the web. We spoke with him about the tool’s makeup, design goals, and future development plan.
John Keefe on tracking the cicada pestilence with open source sensor journalism and crowdsourced data collection.
Matt Waite says just because you can make it doesn’t mean you should.