Hacking Our Hiring: Let’s Talk About Screening

Everything we’ve learned about phone interviews, coding tests, and data exercises


(Photo: WOCinTech Chat)

This article is adapted from a presentation given by Tiff Fehr and Ryann Grochowski Jones at SRCCON 2018. The version you’re reading goes into greater detail about hiring efforts on The New York Times’ Interactive News team specifically, but offers information on ProPublica’s processes for comparison. You’re about to read part 4 of this series. Here’s the full series so far.

Previously, we wrote about a hypothetical short list of 18 very qualified candidates. Let’s say our hiring panel’s debate narrowed that down to eight candidates we want to screen further. To get to those eight, we put in a good-faith effort to reduce our own biases, and ideally we tried hard to gather a remarkable group of people with some unexpected strengths.

At this point, our effort at anonymization has run its course—we can reveal the names and details because we’re about to talk with these candidates on the phone. (It would be strange to try to keep up anonymization through the phone screen process.)

Phone Screening and Testing

For phone screens, we want to start focusing specifically on questions around our themes. Remember, a good evaluation helps judge what an applicant knows, what an applicant can do, and how an applicant works. We’ve covered the first part in our initial steps. Now, in a phone screen, we want to expand into what they can do and how they work.

To make sure we cover a few themes per candidate, we need all the phone screens to be as similar as possible. We recommend using a script to guide interviewers and an assessment rubric to capture feedback.

Our questions and the range of expected answers should focus on our themes. This is our chance to ask directly about work history and projects, and about how they fit those themes.

[Image: text of sample interview scripts. Contact source@opennews.org if you would like this text sent to you via email.]

While this is a good way to write our script of questions, it turns out we’re also designing our assessment rubric at the same time! Teams can use these question-and-answer pairs to plan the interview as well as to do their assessment. Try to focus on questions that prompt immediate examples, and avoid ones that might dredge up the entire history of a huge project to make a point about a specific feature within it. You may need to cut some candidates off if they’re taking up too much time. We also need to leave time to answer the candidate’s questions, so make room for that in the script. Ask the interviewer to take the best notes they can, particularly about the quality of the questions the candidate asks.
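
To make the script-and-rubric pairing concrete, here is a minimal sketch of how those question-and-answer pairs might be captured in code, assuming a hypothetical 1–4 scale and an example “collaboration” theme. The field names and anchor text are illustrative, not a real template from either team.

```python
# A combined interview script and assessment rubric, sketched in Python.
# Assumes a hypothetical 1-4 scale; the theme and anchors are examples only.
from dataclasses import dataclass, field

@dataclass
class RubricQuestion:
    theme: str       # the hiring theme this question probes
    question: str    # what the interviewer actually asks
    # Anchored answers: what a 1, 2, 3, or 4 sounds like, so different
    # interviewers score the same response the same way.
    anchors: dict[int, str] = field(default_factory=dict)

SCRIPT = [
    RubricQuestion(
        theme="collaboration",
        question="Tell me about a recent project where you worked closely with reporters or editors.",
        anchors={
            1: "No concrete example, or describes working alone.",
            2: "One example, light on their specific role.",
            3: "Clear example with their role and a tradeoff they navigated.",
            4: "Quick, specific examples, including how they handled disagreement.",
        },
    ),
    # ...one entry per theme, plus scripted time for the candidate's own questions
]
```

Writing down what a 1 versus a 4 sounds like before any calls happen is what keeps two interviewers’ scores comparable.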

Each candidate should get at least one phone screen. If you have teammates eager to volunteer, you could try two screens per candidate, with a different team member on each. As with our blind initial assessments, duplicate assessments help correct for conscious or unconscious biases.

Multiple phone screens take more time, of course. But they also give candidates exposure to different team members, let them ask a second round of questions, and give everyone a chance to recover from an “off” day. (Everyone has off days, interviewers and interviewees alike.)
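
If both interviewers fill out the same rubric, their scores are also easy to combine. Here is a small sketch, continuing the hypothetical 1–4 scale above; the disagreement threshold is an arbitrary illustration, not a rule from either team.

```python
# Combine two phone screens for the same candidate: average per-theme
# scores, and flag themes where the interviewers heard very different
# things, so the panel discusses them instead of averaging them away.
from statistics import mean

def combine_screens(scores_a: dict[str, int], scores_b: dict[str, int],
                    disagreement_threshold: int = 2):
    combined, discuss = {}, []
    for theme in scores_a:          # assumes both screens used the same rubric
        a, b = scores_a[theme], scores_b[theme]
        combined[theme] = mean([a, b])
        if abs(a - b) >= disagreement_threshold:
            discuss.append(theme)   # surface this in the panel debate
    return combined, discuss

scores, flags = combine_screens(
    {"collaboration": 4, "news judgment": 2},
    {"collaboration": 3, "news judgment": 4},
)
# flags == ["news judgment"]: the two screens disagreed sharply there
```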

Should We Do a Reporting or Coding Test?

For more reporting-oriented roles, you may want to assess reporting skills. ProPublica’s data and news apps team often evaluates short-listed candidates by asking them to find a potential story idea in a dataset from a previously published story, and to assess the quality and completeness of that data.

For more technical roles, you may want to assess coding skills, to confirm candidates are proficient in the languages and libraries they list. This is common but debated. Interactive News tries to find this signal in the noise through application materials, a specific questionnaire item, and the phone screen itself, in place of a take-home test.

Benefits of Tests

A good test is largely for the benefit of the people doing the hiring. It can provide a very concrete demonstration of what a candidate can do that may otherwise be obscured in the materials they submit. Afterwards, talking with a candidate about their test and the test-taking process can provide insight into how they work.

Test results can also provide a baseline for candidates and interviewers alike: a long-running example both sides can refer back to throughout the rest of the interview process.

Asking each candidate to take a test provides an even-handed way to compare candidates’ abilities, whether that is mining a data set for reporting leads or describing the way they’d start coding a solution to a problem.

Drawbacks of Tests

Tests are artificial! At best, they vaguely indicate the kind of work the position entails. Each candidate is different, and some may find the test either too hard or too easy.

If we pick a task that is easier to test but not actually indicative of the work, candidates may form unrealistic expectations about the job. Some representative tasks can feel like “free work” if they are not messaged or structured correctly, which is why ProPublica uses a dataset from an already-published story for its tests.

Another big flaw is the amount of time a candidate can devote to the task; some candidates can afford to spend more time than others. We should not favor a candidate who elected to spend twelve hours over a candidate with pressing obligations who could only spend two. It is important to evaluate results against the bare minimum for a passing grade, and not to reward extra work that goes beyond what the test asked for.
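
One way to hold that line is to define the test as a pass/fail checklist before anyone takes it, and to grade against nothing else. Here is a sketch, with made-up criteria for a data test.

```python
# "Bare minimum" grading: the test is a checklist agreed on in advance.
# Work beyond the checklist is deliberately ignored, so a candidate with
# twelve free hours can't outscore one who met the bar in two.
REQUIRED = {
    "loads the provided dataset without errors",
    "documents at least one data-quality caveat",
    "identifies one plausible story lead",
}

def grade(met_criteria: set[str]) -> bool:
    # Pass/fail only: extra polish or additional leads don't change it.
    return REQUIRED <= met_criteria   # every required criterion was met
```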

Assessing Phone Screens

Our assessment of candidates after their phone screens looks a lot like the quantification and hiring-panel debate we described using to arrive at our short list.

Next, the hiring panel decides who advances to an in-person interview. How many people should make it that far? That number depends on the quality of the short-list group and on how well our rubric teases out differences between candidates. Remember, each candidate who advances will need a substantial chunk of the team’s time. We encourage hard decisions at this point: keep just a handful of the best candidates, the ones where we really need to learn more, in person, to decide among them.
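
If your rubric produces numeric scores, the panel’s starting point can be as simple as ranking candidates by mean score and keeping a handful. The sketch below is illustrative; treat the ranking as an agenda for the debate, not a verdict, since close scores are exactly where the panel’s judgment matters.

```python
# Rank candidates by mean rubric score and keep the top few for
# in-person interviews. Candidate names and scores are placeholders.
def shortlist_for_onsite(candidates: dict[str, dict[str, float]],
                         keep: int = 3) -> list[str]:
    overall = {name: sum(themes.values()) / len(themes)
               for name, themes in candidates.items()}
    return sorted(overall, key=overall.get, reverse=True)[:keep]

top = shortlist_for_onsite({
    "Candidate A": {"collaboration": 3.5, "news judgment": 3.0},
    "Candidate B": {"collaboration": 4.0, "news judgment": 3.5},
    "Candidate C": {"collaboration": 2.0, "news judgment": 2.5},
}, keep=2)
# top == ["Candidate B", "Candidate A"]
```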

Candidates who will not be progressing further should then get a very nice note informing them of that fact. Encourage them to keep an eye on your team and to re-apply when another position becomes available; after all, you already know these are very talented people, and you want them in your next applicant pool.

Next in the Series

Next, we’ll talk about in-person interviewing, plus best practices for communicating with finalists.

Credits

  • Tiff Fehr

    Tiff Fehr is an assistant editor and lead developer on the Interactive News desk at The New York Times. Previously she worked at msnbc.com (now NBCNews.com) and various Seattle-area media startups.
