How We Made “Meanwhile, Near Saturn”
Bringing raw imagery from a NASA probe down to Earth without losing its magic
This past summer, while researching awesome space photos for use in our “Field Guide to the Solar System,” I was struck by what an outsized contributor NASA’s Cassini probe was to the photographic record of our celestial neighborhood.
The Cassini spacecraft launched in 1997, with a planned 20-year mission to study Saturn, its mysterious rings and its 62 moons. During the seven-year, billion-mile journey to Saturn, Cassini pointed its cameras and sensors toward Earth, Venus, and Jupiter before reaching its destination. Since 2004, Cassini has been sending us vast quantities of scientific data and jaw-dropping imagery from around Saturn.
At NICAR 2015 I saw Al Shaw’s talk about how he worked with LANDSAT satellite imagery to build “Losing Ground.” Al described how he and his colleagues at ProPublica built a suite of custom tools to work with LANDSAT data to tell the story of the degradation of the Louisiana coastline. I was impressed and inspired by their work, and by Al’s reminder that the decades of data and imagery produced by NASA’s satellites and probes were a public-domain treasure trove that surely held many opportunities for storytelling.
This is the story of how I made Meanwhile, Near Saturn, our retrospective of Cassini imagery contextualized by glimpses at events taking place back on Earth.
Getting the Data
The first thing I wanted to do was grab everything from the Saturn phase of Cassini’s mission and see a time-lapse video of every single frame. The JPL/SSI team does a great job of making everything available, with thorough documentation. It quickly became clear that NASA is obsessed with future-proofing its scientific data. So much so that the raw data isn’t always easy to work with (here’s the definitive user manual).
I was mainly interested in the photographs that Cassini took, which came from its Imaging Science Subsystem (ISS) instrument, which consists of two cameras: a wide-angle and a narrow-angle. Each camera has two filter wheels in front of its imaging sensors, and the operators can control which combination of filters will be in front of the sensor for a sequence of photos. There are a few places you can grab the images from, but the easiest is NASA/JPL’s Planetary Data System. I first used their helpful search tool, which lets you drill down and grab all of the images of a specific “target”—like Saturn’s moon, Tethys, or the F-Ring.
Over the summer, I used this tool to download collections of images to get my first look at what these time-lapses looked like. I was immediately awestruck. In stark black and white, speckled with cosmic rays and lossy packets, the videos reminded me of early film. The sequences of Saturn’s moons were particularly amazing.
The main chunk of data from Cassini’s Saturn mission is served up in 93 volumes. Within each volume are subdirectories containing raw binary images (.IMG files) and corresponding metadata text files (.LBL files). I set up a script to download all 93 volumes as GZIP’d files.
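The download step can be sketched with a short node script that builds the list of volume URLs to feed into curl or wget. The base URL and file naming here are hypothetical; check the PDS volume index for the actual paths:

```javascript
// Sketch of the volume-download step. The base URL and archive naming
// below are assumptions -- verify them against the PDS volume index.
const BASE = "https://pds-imaging.jpl.nasa.gov/data/cassini/cassini_orbiter";

function volumeUrls(count = 93) {
  const urls = [];
  for (let i = 1; i <= count; i++) {
    // Volumes are numbered coiss_1 .. coiss_93 in this sketch; adjust
    // the pattern if the real archive uses zero-padded names.
    urls.push(`${BASE}/coiss_${i}.tar.gz`);
  }
  return urls;
}

// Print one URL per line, ready to pipe into curl or wget.
volumeUrls().forEach((u) => console.log(u));
```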
Converting the Images
Each of the archives contains some ready-to-use thumbnails, but those are only 256x256 pixels, and I wanted the highest resolution I could get (1024x1024 pixels). So I needed to convert the raw IMG files into JPEGs or PNGs.
These image files aren’t RAW images like the ones you shoot on your DSLR and open in Photoshop. They are raw binary files. NASA makes some software tools to work with these (like the unfortunately named ISIS), but they aren’t always the easiest things to work with. Some of the most helpful tips for dealing with these images came from the tweets and informative posts of The Planetary Society’s Senior Editor and Planetary Evangelist, Emily Lakdawalla.
Emily had mentioned working with a Windows command-line utility built by Björn Jónsson called IMG2PNG to convert these files. I work on a Mac, so I fired up Parallels, opened the Windows command line interface—for maybe the third time in my life—and converted them all to PNGs.
I did try to make use of some 16-bit TIFF files that NASA started to include at Volume 25 (coiss_25). These contain an enormous amount of information, and are great for experimenting with different exposures to pull out important details. But often these unprocessed TIFFs appear black when you open them, as they haven’t been adjusted at all. I needed to convert them to 8-bit (which loses some valuable image data, but was fine for my needs), then adjust their contrast and brightness. I ran into trouble getting good results converting these with ImageMagick, and since I had an incomplete set (only 68 of the 93 volumes had TIFFs), I decided to stick to converting from the raw binaries.
So the next step was to take these PNGs and do some light image processing. Some of the images were shot at 1024x1024 and others at 512x512. Using ImageMagick, I converted all of the PNGs to 1024x1024 8-bit grayscale, and adjusted their contrast using the -auto-level option:
mogrify -auto-level -depth 8 -resize 1024x1024 -format jpeg '*.png'
Building a Database of Imagery
For my project, I first wanted to find the most visually impressive sequences, but I had over 350,000 images to go through (almost 4 hours at 30 frames per second). So I needed to create my own database of the imagery, so that I could query these images and return sequences, such as all photos of Tethys, or all photos taken on June 5, 2010, listed in chronological order. Looking at the metadata, I noticed a sequence_id field, followed by a sequence_number. I had a hunch that the sequences with the most frames would be of the greatest interest, and therefore some of the best-looking stuff. It turned out to be a great place to start.

I have been writing a lot of node scripts on my local machine for scraping, cleaning and processing data recently, so I wrote a node script to walk through all of the raw data directories looking for .LBL text files. It reads each filename, then grabs the values for the fields I had pre-selected as most interesting: the image time, filters, sequence ID and number, and target.
A sample of the Cassini imagery database I created:
| image id | filters | sequence id | sequence number | image time | target | clean date |
|---|---|---|---|---|---|---|
| N1584356553_1 | (CL1,CL2) | S38 | 1 | 2008-076T10:25:38.875 | Rocks | 2008-03-17 |
| W1584357436_1 | (CL1,CL2) | S38 | 12 | 2008-076T10:40:22.433 | Rocks | 2008-03-17 |
| W1584357766_5 | (CL1,CL2) | S38 | 15 | 2008-076T10:45:52.431 | Rocks | 2008-03-17 |
| W1584376923_2 | (CL1,BL1) | S38 | 1 | 2008-076T16:05:08.716 | Saturn-Rings | 2008-03-17 |
| N1584374886_1 | (CL1,CL2) | S38 | 10 | 2008-076T15:31:12.280 | Rocks | 2008-03-17 |
For simplicity’s sake, I just had my script spit out a CSV file with the values for each of these, along with the identifying image ID from the filename. I also saved the local file path for each image, so I would know which directory it lived in when it came time to make the videos.
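A minimal sketch of that extraction step might look like this. The PDS label keywords here are assumptions based on the fields above; check them against the actual .LBL files:

```javascript
// Minimal sketch of pulling selected KEY = VALUE fields out of a PDS
// .LBL label file. The keyword names below are assumptions -- verify
// them against the Cassini ISS label documentation.
const FIELDS = ["IMAGE_TIME", "FILTER_NAME", "SEQUENCE_ID", "SEQUENCE_NUMBER", "TARGET_NAME"];

function parseLabel(text) {
  const row = {};
  for (const line of text.split(/\r?\n/)) {
    const m = line.match(/^\s*([A-Z_]+)\s*=\s*(.+)$/);
    if (m && FIELDS.includes(m[1])) {
      row[m[1]] = m[2].replace(/"/g, "").trim(); // strip surrounding quotes
    }
  }
  return row;
}

// Emit one CSV line in a fixed column order, ready for bulk-loading
// into MySQL later.
function toCsvLine(imageId, row) {
  return [imageId, ...FIELDS.map((f) => row[f] || "")].join(",");
}
```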
I built a MySQL table, then imported the CSV. Now I could query the image collection.
While looking at the image sequences, I found I needed to see the dates right on the images as a watermark. ImageMagick lets you write text on top of bitmaps easily enough, so I wrote a quick script to annotate every image in the dataset with its date:
convert -background '#0008' -fill white -gravity center -size 512x90 -pointsize 48 caption:"August 12, 2004" 'N1510253246_1.jpeg' +swap -gravity NorthEast -composite 'N1510253246_1.png'
Now I was ready to render some movies. FFmpeg is a fantastic tool for making movies from sequences of images, as well as for transcoding and converting video files. I used FFmpeg’s ‘concat’ function, which lets you pass in a text file listing images to generate a video clip. I generated a text file for every sequence I wanted to render, then passed each one to FFmpeg:
ffmpeg -f concat -i /Volumes/Cassini/Titan.txt -r 30000/1001 -b:v 4000k -minrate 4000k -maxrate 4000k -vcodec mpeg4 -an /Volumes/Cassini/Titan.mp4
I then imported these sequences into Adobe After Effects, arranged them chronologically, and began to whittle them down. Early on, I had placed a song I loved, “So Long, Lonesome” by the band Explosions in the Sky, into the timeline. I felt it had just the right mix of loneliness, sadness, hope, and awe that I was hoping to convey in the video. After looking through lots of disappointing stock music tracks, I reached out to the band, and we were luckily able to license the track for use in the video. I think it adds a lot to the emotional impact of the movie, and I’m glad we spent the extra time required to use it.
Meanwhile, Near Saturn…
Watching these “longest chain” sequences with the dates, I found myself thinking back to what was happening in my life on these dates, and what was happening in the news. This is when the idea of “Meanwhile, Near Saturn…” struck: Present a curated sequential series of clips alongside headlines about what was happening back on Earth the day the space photos were taken.
This approach did create some problems: sometimes when I looked up the day that a particularly striking sequence was captured, not much was happening. Conversely, we chose a bunch of important moments from the past 11 years, only to find that the imagery from those days wasn’t especially compelling.
My editor Wilson Rothman and I ended up collecting a varied assortment of headlines for each date, with an eye towards keeping a mix of global news from across many areas of political, social and cultural interest. In the end however, I opted to cut almost half of the headlines. The problem was, the video was demanding too much of the viewer. The Saturn sequences are visually intense, captivating and short—and people had to read a bunch of headlines, too? There was too much competition for your focus. A slower pace of headlines felt much calmer. Keeping the date constant reinforced the concept all the while.
My main goal for this project was to showcase the most beautiful footage from Cassini to tap into the “overview effect”—a phenomenon that some astronauts report experiencing after having spent some time in space. They claim that after looking down on all of humanity, all together on this big blue marble, the conflicts and problems that divide us seem insignificant. Our shared humanity comes into sharp focus.
I tried to strike a balance of leaving some calm moments to savor the beautiful imagery, while pulling the reader back to Earth with a headline from that day, but of course reading is optional. Overall, it does appear that people got the concept. For some, it struck an emotional chord. For others, the footage was fun, but they struggled to understand their connection to the headlines. Perhaps a little more headline tweaking would have helped in this regard.
One thing my editor and I were dedicated to including from the get-go was the jumbo time-lapse—all of the images from the 93 Saturn volumes in proper order—as a ridiculously long bonus feature. It continues to catch people’s attention, and as of this writing has over 84,000 views. The simpler offering of “11 years of Cassini images in 3 hours and 48 minutes” is pretty hard to resist. Due to its incredible length, we had to publish this outside of our video platform. It lives inside WSJ’s YouTube account.
The promotional page for the project also included a few of the full sequences targeting some of Saturn’s unusual-looking moons. For social media, we worked up a 15-second teaser of the imagery for Instagram, and a 6-second version for Vine.
Since I had such a difficult time getting up and running with this data and imagery, I decided, at the end of the project, to clean up and release some tools for working with Cassini images.
If you want to try it out for yourself, this release includes a node script that will extract the metadata of your choice from the Cassini ISS metadata text files and generate a CSV. I’ve also included a shell script to download the raw data files, and a SQL file to make a table you can populate from the node script output. Then you will have a handy database to explore this cool dataset, but you still need to convert your own images.
NASA has scheduled an end date for Cassini’s mission: Sept. 15, 2017. That might be a good time to go back and archive the complete record of the Cassini mission’s 20-year lifespan. One cool project would be to use the database to find regular, repeating sequences that use Cassini’s cameras’ red, green, and blue filters to automatically generate color movies. The frames rarely match up neatly, and color images usually must be processed by hand, but it could lead to some interesting footage. I hope the tools I’ve shared can help speed up the process of working with this imagery, and I can’t wait to see what others will do with this amazing collection of scenes from space.
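As a rough sketch of that color-movie idea, a script could scan the database’s time-ordered records for back-to-back red, green and blue exposures of the same target. The filter names here assume Cassini’s RED/GRN/BL1 conventions, and the record shape is hypothetical:

```javascript
// Rough sketch of the color-movie idea: find runs of three consecutive
// exposures of the same target shot through the red, green and blue
// filters. Filter names (RED/GRN/BL1) and the row shape are assumptions.
function findRgbTriplets(rows) {
  const triplets = [];
  for (let i = 0; i + 2 < rows.length; i++) {
    const [a, b, c] = [rows[i], rows[i + 1], rows[i + 2]];
    const filters = [a.filter, b.filter, c.filter].sort().join(",");
    // Require all three color filters on the same target, back to back.
    if (filters === "BL1,GRN,RED" && a.target === b.target && b.target === c.target) {
      triplets.push([a.id, b.id, c.id]);
    }
  }
  return triplets;
}
```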
Note: If you’ll be at NICAR 2016, you can catch Keegan’s Saturday afternoon talk on command-line graphics—it draws on his Cassini project and more. —Ed.
Jon Keegan (@jonkeegan) is a Visual Correspondent at The Wall Street Journal. His work is focused on the visual exploration of the patterns around us in technology, nature and culture. Previously, he was Director of Interactive Graphics for the Journal, and a news app developer before that. See more of his work here.