Tuesday, June 4, 2013

Nieman Journalism Lab

Attention fact-checkers: Dangle a buck in front of partisans and they’ll come closer to the truth

Posted: 03 Jun 2013 11:12 AM PDT

Here’s a fascinating new NBER working paper that has implications for news media. It focuses on a well-known phenomenon: People with strong partisan opinions on politics are more likely to believe the facts back up their perspective. Those on the right are more likely to believe incorrect “facts” that put left-leaners in a bad light, and vice versa:

For example, Republicans are more likely than Democrats to say that the deficit rose during the Clinton administration; Democrats are more likely to say that inflation rose under Reagan.

In the real world, there’s little penalty for getting this sort of question wrong. It’s unlikely anyone will correct you, and even if they do, it’s very rare that someone will face any significant consequences — even a public shaming — for incorrectly blaming the other side for something they didn’t do.

This new paper, by researchers at Yale and UC San Diego, tests that idea by upping the ante. Let’s say, when asking people these questions, you offered small payments to people for being correct. Would that change their behavior and make their statements better align with the facts?

The answer seems to be yes:

The experiments show that small payments for correct and "don’t know" responses sharply diminish the gap between Democrats and Republicans in responses to "partisan" factual questions. The results suggest that the apparent differences in factual beliefs between members of different parties may be more illusory than real. [emphasis mine]

The paper itself goes into much more detail. One piece of the experiment involved giving people the option to answer “don’t know” to a question where they might have partisan interest in one answer. If a “don’t know” response generated a financial reward — even though it was a smaller one than for getting the answer correct — roughly half of the responses became “don’t know.”

This pattern — frequent "don't know" responses when subjects are paid to say "don't know," even though they are also offered more for correct responses — implies that participants are sufficiently uncertain about the truth that they expect to earn more by selecting "don't know."

This great willingness to select "don't know" has important implications for our understanding of partisan divergence. In particular, participants who offer "don't know" responses behave in a manner that is consistent with this hypothesis: they know that their responses are otherwise partisan and that they don't know the truth. In the absence of incentives for "don't know" responses, they would offer insincere partisan responses, even if paid for correct ones, because they are both uninformed about the truth and aware of their ignorance.

This opens up a host of questions for journalism’s growing fact-checking industry. The idea that people who promulgate inaccurate ideas know they’re inaccurate — or at least know they don’t know the truth — fits into a larger set of evidence that merely presenting “the facts” doesn’t always lead to a more informed audience. “Facts” become more grist for the mill of identity creation — I’m a Democrat, so I say bad things about Republicans, facts be damned, or vice versa.

It also provides backing to Brendan Nyhan’s ideas about potential “backfire” in fact-checking — that what journalists perceive as a neutral recounting of reality can in fact be perceived as raising the stakes of a partisan battle and can engender a hardening of incorrect beliefs. Here’s Joe Keohane writing about the subject in 2010:

Facts don't necessarily have the power to change our minds. In fact, quite the opposite. In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

"The general idea is that it's absolutely threatening to admit you're wrong," says political scientist Brendan Nyhan, the lead researcher on the Michigan study.

But dangling a buck in front of partisans seems to go a long way toward letting people do just that.

Paying people to acknowledge their ignorance doesn’t seem like a workable large-scale strategy. But it does open up a range of ideas about how we might create incentive structures that somehow reward people for acknowledging when they don’t know what they’re talking about. And it also means that at least some of the polarization we’ve seen in American politics in recent decades might be only skin-deep. From the paper:

We find that small financial inducements for correct responses can substantially reduce partisan divergence, and that these reductions are even larger when inducements are also provided for "don't know" answers. In light of these results, survey responses that indicate partisan polarization with respect to factual matters should not be taken at face value. Researchers and general analysts of public opinion should consider the possibility that the appearance of polarization is to a great extent an artifact of survey measurement rather than evidence of real differences in beliefs.

What does a Pinterest-employed data scientist do?

Posted: 03 Jun 2013 09:58 AM PDT

Pinterest’s John Rauser, who was recently interviewed by Mashable, says he views the site as “a data company,” in which it is his job to track patterns of use and networked concepts and ideas. In addition to discussing his use of data analysis in streamlining internal processes, Rauser talks about what he’s learned about how people want to consume media.

Perhaps the most substantial is an in-depth study of how pinners use both our mobile apps and the website. It’s very easy to hold these platforms separate in your mind and to try to optimize them in isolation, but I don’t think pinners see it that way. To them, there’s only one Pinterest, and their experience of it moves seamlessly between platforms as they go through their day. This is a hard but extremely important thing to keep in mind as we continue to build the service.

Politico hires away Susan Glasser from Foreign Policy

Posted: 03 Jun 2013 09:49 AM PDT

Erik Wemple has a good take at The Washington Post, and Politico’s Dylan Byers has the memo. Wemple:

It's a colossal move. Glasser is a big name in Washington journalism, fresh off of five years spent turning Foreign Policy into a digital force. The magazine had significant print bona fides at the time of its September 2008 purchase by The Washington Post Co., yet its Web presence lagged behind the standards of the time. Upon taking the job, Glasser quickly finished off a redo of foreignpolicy.com that had been in the works — and the magazine's Web evolution hasn't slowed since. According to a news release from the Foreign Policy Group, the magazine's site tallied 4.4 million unique visitors in April, breaking readership records…

Glasser will have a twofold mandate at Politico: One is producing long-form pieces with significant gestation periods. Much of this stuff will land in a new Politico magazine that will come out at least six times per year, according to Politico Executive Editor Jim VandeHei. This magazine will be "stocked with profiles, investigative reporting and provocative analysis." The other is to generate "of-the-moment" opinion pieces off of the news — kind of a faster-paced version of what Glasser did when she helmed the Outlook section of The Post. The goal is two such pieces per day.

We’ve written a number of times about Glasser’s work at FP.

SCOTUSblog partners with NBC News and MSNBC

Posted: 03 Jun 2013 09:37 AM PDT

The oft-cited and never duplicated SCOTUSblog is teaming up with NBC News and MSNBC to cover the court’s upcoming batch of end-of-term rulings. Politico’s Dylan Byers reports that SCOTUSblog’s Tom Goldstein will partner with Pete Williams, NBC News’ justice correspondent, to provide analysis and reporting on the court.

Here’s our recounting of how SCOTUSblog dealt with its biggest traffic day ever on the day of the health-care decision last summer.

Monday Q&A: Fast times with Fast Company’s Co.Labs editor Chris Dannen

Posted: 03 Jun 2013 09:24 AM PDT

Chris Dannen doesn’t much care for gadgets, and he only taught himself to code a few years ago. But that hasn’t stopped him from becoming the editor of a rather innovative and unusual technology blog.

Dannen heads a staff of freelance writers and one “news hacker” at Co.Labs, the most recent addition to Fast Company magazine’s family of web verticals. The nearly 20-year-old magazine has divided its web presence into one home site and four separate verticals: Co.Design, Co.Create, Co.Exist, and Co.Labs. The first three cover design, culture, and innovation, respectively.

Dannen had been a freelancer with the magazine when the editors offered him the job managing Co.Labs, a project they conceived of as an in-house lab that would cater to the tech community, not only as a blog, but as a place for experimentation and reflection. Dannen says he quickly realized the smartest move would be to focus on a topic that both interested and plagued him — the future of publishing.

In our conversation, Dannen discusses some of what he has learned in the first year of experimentation. In addition to their attempts at creating knowledge networks via stub posts (there’s even one about the future of journalism), we touched on Dannen’s openness about problematic metrics, de-siloing developers in the community of small publishers, and gathering data on the perfect headline. Here’s our conversation.

O’Donovan: It seems like you guys have been doing some really interesting work lately, new kinds of posts and doing a lot of experimenting. But to start off, can you explain the genesis story of Co.Labs, how it came to be and how it fits into what Fast Company is doing in terms of its digital strategy overall? And maybe a little bit about how you came to be in the position you are now.
Dannen: I’ll start with my background. I started with Fast Company as an intern in early 2007. I subsequently became a freelancer, and I’ve been freelancing for the magazine and the website on and off since, so I’ve known those folks for a long time. In the intervening time, I was doing other projects — things like working on startups, I did some consulting, I taught myself how to design software. I did a lot of reading. I did some traveling. And when I came back to do more writing last fall, the editor-in-chief and our executive editor and our CTO had this idea of having a lab inside the magazine where we could kind of experiment with different formats, different technologies, different ways of publishing.

The impetus was really, as it always is, about money. Can we figure out a way to create content that is either engaging enough that we can drive online subscriptions, or engaging enough that we can build events around it, or engaging enough that we can package it up and sell it? But the superintending assumption was we can’t live off banner ads forever, and even if we could, we might want to find some ways to make money that have a little more precision in the way that we measure them and the way we measure the engagement.

So that’s a very vague mission that I was given in January, and we launched the site in March. My idea was to take that vision and home in really sharply on the future of publishing and focus on, What is the proper format for an article? What is the meaning of rich media? What does it mean that all this stuff is networked? Should we be changing our distribution strategy? There are all these questions that the other verticals on our site don’t have time to experiment with. So we have FastCompany.com, but we also have our vertical sites Co.Exist, Co.Create, Co.Design, and now Co.Labs. So the idea was to basically let Labs kind of experiment and teach the other sites what we learned.

O’Donovan: So is there space for what you’ve discovered so far, for that to sort of trickle back up, I guess, to those verticals?
Dannen: Yeah, actually, it already has. The idea of doing what we call the stub posts was not original to me — it was actually an idea that our top editors had been kicking around for a while, but because our content management system doesn’t support articles in that format, nobody had really tried it. So what we did was decide that we would try to make our articles work in kind of a stop-gap way, and that’s what we do, and what my post was about: How do we turn this discrete article format into a more flexible, growing format? So now we’re in the process of retooling our CMS to accommodate that format in a little more of a natural way.
O’Donovan: Can you talk a little more about that stub post format? What was the genesis of the idea, what are you building toward, and what you think it says about the industry more broadly?
Dannen: I guess, in hindsight, what we were really missing was something between a short news post and a long reported piece. And the reason that there’s a gap there, the reason that we needed something in the middle, was that there are a lot of scenarios where we have a feature or a reported story that is applicable to a greater story line, that is part of a bigger schema of events. The problem is that, when you go to write that feature, the reporter is put in this position of reexplaining all that context all over again at the top of the article, just so that the reader can dive in and make use of the new reporting. That’s a real time suck, and it also makes writing articles a lot more onerous. It makes our features longer than they have to be.

So we thought, when we cover these big, sprawling, slow-moving stories, why don’t we build a stub post, have updates as they come in, and when we get a source that’s really useful or something breaks, we’ll do those discrete articles, reported pieces, but then tie them back to the stub as the way of telling the reader, “If you’re not up on the story, if you need context, go back and read the stub and it will make sense why this is timely, and why this is a person we’re talking to.”

So what it’s really been able to do is let the reporters get right into their features and hit the ground running. For anyone who is not caught up, we always link back to the stub.

So that’s a win for two reasons. It doesn’t just make our feature article a little punchier, but it also helps us with another problem that we’ve been mulling, which is how to make our articles more relational. If you think about articles as a database, we have all these entries in the database, but they’re not very well related to each other. We do related links like everybody does at the bottom of our stories, but there isn’t really a schema that connects one part of a story to another.

So something that we found is that, when we interlinked these feature stories and stubs, we get people bouncing back and forth, and maybe even bouncing to other different stubs. The ultimate goal — which, based on the anecdotal feedback we’re getting, I think and hope we’re having some limited success with — is just introducing these individuals and events in context with plenty of information around, so you can understand what the timeliness is, why this is relevant, and who’s relevant to it.

O’Donovan: To what extent is Wikipedia an inspiration? Are you building a knowledge database for developing news and stories?
Dannen: Yeah, you could think of it as a really messy version of a wiki. At Wikipedia, their taxonomy is to separate by person, place, or thing. What we’re essentially doing is mixing all those things up, but we’re saying there’s a common storyline here. So, for example, women in engineering is one of our most popular stubs. We’re saying the dearth of women in engineering is a common storyline, but you see that story pop up in all kinds of places, and all we’re really doing is sort of tying disparate events or findings or studies, tying the disparate things together and showing that they’re part of one common narrative, and that makes it easier for us to address that narrative when we go and do features.
O’Donovan: I want to go back to something you just said about precision measurements. In the post you wrote about your experimentation with these stub posts and writing longer stories, you went into some details about the analytics and how you look at them and how you think about them and some of the early conclusions you’ve drawn.

But then there was a robust conversation in the comments about what types of conclusions you can draw from the types of data you’re using. So I’m curious about your original findings and also your takeaway from that, and moreover, how do you watch your traffic and how do you do it differently from other sites?

Dannen: We use Google Analytics and Chartbeat. That’s what I use on a day-to-day basis. And we also use Omniture on a longer term basis. But I don’t think we look at metrics differently than anyone else.

Obviously, the number one metric is something around engagement. We always want people to be hitting the site and staying there as long as possible. I don’t have numbers in front of me, but I believe our news hacker told me yesterday that on average, since we’ve started doing those stub posts, we’ve had a 42 percent increase in engagement.

So how exactly do we measure what we’re calling engagement? I’m actually going to do a followup post about that because it’s a big discussion and, as you saw in the comments there, there are plenty of things that you can turn on and off to affect whether you’re measuring someone as engaged. But I think that the overarching thing that we’re doing, that maybe some of our competitors aren’t, is, we’re looking less and less at how wide our reach is and more and more at how often people come back and how long they stay. And I don’t think we’re necessarily that cutting edge in that respect, but I’ve heard a lot of people say it’s better to have a core audience of 1,000 people that come back regularly than 10,000 who only come back once every couple months.

Part of the reason for Co.Labs existing is to figure out…those one thousand core readers, they don’t really represent you well when it comes time to sell banner ads, but it might be really useful to the site when it comes time to make money off of other things. So we’re trying to shift our model a little bit towards loyalty and engagement and not tons and tons of one-off hits and pageviews.
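The loyalty-over-reach framing Dannen describes can be sketched as a toy metric. This is an invented illustration, not how Google Analytics, Chartbeat, or Omniture actually computes engagement; all names and data here are hypothetical:

```python
from collections import defaultdict

def loyalty_report(sessions):
    """Summarize engagement per visitor rather than raw reach.

    sessions: list of (visitor_id, seconds_on_site), one entry per visit.
    Returns {visitor_id: [visit_count, total_seconds]} plus the share of
    visitors who came back more than once.
    """
    per_visitor = defaultdict(lambda: [0, 0])
    for visitor, seconds in sessions:
        per_visitor[visitor][0] += 1       # one more visit
        per_visitor[visitor][1] += seconds  # cumulative time on site
    returning = sum(1 for visits, _ in per_visitor.values() if visits > 1)
    share = returning / len(per_visitor) if per_visitor else 0.0
    return dict(per_visitor), share

# Invented data: visitor "a" is a loyal core reader; "b" is a one-off hit.
visits = [("a", 120), ("a", 300), ("b", 15), ("a", 90), ("c", 45), ("c", 60)]
report, returning_share = loyalty_report(visits)
print(report["a"])        # three visits, 510 seconds total
print(returning_share)    # 2 of 3 visitors returned
```

On this view, a site with 1,000 readers who return regularly scores far better than one with 10,000 one-off visitors, even though its raw pageview count is lower.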

O’Donovan: Is that, down the line, one of the findings you might be able to extrapolate to the rest of the operation?
Dannen: Yeah, this is my sort of thing to prove to the other sites that we can do more with the core group and maybe even grow that core group a little slower.
O’Donovan: You mentioned your news hacker before. It says on the site that the title’s a work in progress, but, structurally, what is the staff you’re working with? While the title isn’t the most important thing when it comes to the tasks a person is responsible for, you’re a lab, but you’re also a publication — how are you thinking of that structural stuff when you’re building a team and thinking about workflow?
Dannen: That’s a good question. There’s a lot of ins and outs there. The first thing — I’ll talk about our news hacker in a second — but the first thing I realized early on was that we would not only have to write about technology. Originally we were going to be a developer-centric site and write about issues relevant to developers. It was going to be just another Co site, except more in depth and tech-focused.

But of course, technology is so big — you have web developers over here and iOS developers and Windows developers, and they all read different stuff, and there are very few common threads, actually, in terms of the culture and the subject matter. So we realized we would actually have to home in on what we were doing, which was building technology for a publishing company.

We’re hoping to mimic the other technical blogs of startups and companies that you see. There are a lot of really fantastic company blogs out there, and usually what they do is adopt this formula of writing about their own technology, obstacles, assumptions, trends they see, things that are in their wheelhouse, and then they use that as a conduit and a lens to talk about broader issues in technology. So we’re doing the same thing.

O’Donovan: There was some conversation around that idea recently, when Gizmodo relaunched as this new design, architecture, urbanism focused technology blog and there was some conversation around that about what tech blogging was five years ago or ten years ago and how that might be changing. Where do you see yourself fitting in that conversation?
Dannen: You know, I haven’t read most of what are considered the flagship tech blogs. I visit them so infrequently that it’s become almost a joke. Because all they write about is phones and Apple and Microsoft and Facebook. These guys are not producing news that is really compelling. If you’re not a gadget freak, then this stuff just, it doesn’t really get at the core of what’s interesting about technology, which is people solving real-world problems, bringing in digital and figuring out new ways of doing things. It’s kind of part of the maker culture we’re trying to cover. Gizmodo is also probably moving in that direction, we’re just doing it in a much more self-conscious way. We’re talking about things we’re doing in house.

And that’s why we hired our news hacker Gabe from Google. He basically builds some of the experiments we’re working on in house and then we write about them. So we’ve got a lot of experiments underway. We really needed that hybrid editorial technical person for that.

O’Donovan: Interesting. So he conducts the experiments and then you have writers who observe them happening live and try to extrapolate them into a broader context for the whole industry?
Dannen: Well, we actually write about those experiments ourselves, he and I. From a staffing perspective there’s myself and Gabe, one lead writer, and anywhere from 5 to 8 freelancers in a given week who are contributing. The freelancers mostly write about whatever is germane to their beat, but Gabe and I are pretty much owning the in-house experiments.
O’Donovan: Have there been, for you, any big surprises that have come out of that process? Things that, what you actually found from fooling around was different from what you expected at the beginning?
Dannen: One thing that I underestimated was the importance of using and testing multiple headlines for the same piece of content. One of the things that is great about the stub articles that I mentioned is, what we do every time we update one, we change the headline and we change the deck and we put the update at the top of the article and push the old updates down the page. And then we update the authored-on date. So the system essentially sees it as a new post with a new URL, but it’s really the same old story; it’s just been updated.

So what ends up happening as we do more of these updates is we get more of these alias URLs and alias headlines that all point back to the same piece of content. We make the headlines specific to that day’s update, and then you kind of see this: “Hey, if you want more context on this, this story is actually about this bigger issue X; you can read on to find out more about this bigger issue.” So what we found is that, every time we do these updates, we get a new headline, a new URL, we tweet it out or we publish it on Facebook, our normal distribution methods, and we can actually test which headlines people like. Because obviously, it’s the same story, we’re just approaching it from a different angle every time we write the headline.
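The aliasing scheme Dannen describes can be modeled roughly as follows. This is a toy sketch, not Fast Company's actual CMS; every class, field, and URL pattern here is invented:

```python
from dataclasses import dataclass, field
from datetime import date

def slugify(text: str) -> str:
    """Turn a headline into a URL slug."""
    return "".join(c if c.isalnum() else "-" for c in text.lower()).strip("-")

@dataclass
class StubPost:
    """One evolving story; each update mints a fresh alias URL pointing here."""
    story_id: str
    updates: list = field(default_factory=list)   # newest update first
    aliases: list = field(default_factory=list)   # every headline/URL ever used

    def update(self, headline: str, deck: str, body: str, when: date) -> str:
        """Push a new update to the top and record its alias headline/URL."""
        self.updates.insert(0, {"headline": headline, "deck": deck,
                                "body": body, "authored_on": when})
        url = f"/stubs/{self.story_id}/{slugify(headline)}"
        self.aliases.append({"headline": headline, "url": url, "clicks": 0})
        return url

    def best_headline(self) -> str:
        """Compare clicks across aliases: the ad-hoc headline test."""
        return max(self.aliases, key=lambda a: a["clicks"])["headline"]

stub = StubPost("women-in-engineering")
stub.update("Why job listings scare women away", "deck", "...", date(2013, 5, 1))
stub.update("How to scare away women", "deck", "...", date(2013, 6, 1))
stub.aliases[0]["clicks"] = 12   # hypothetical click counts from distribution
stub.aliases[1]["clicks"] = 40
print(stub.best_headline())      # the alias with the most clicks wins
```

Since every alias resolves to the same underlying story, each tweet or Facebook post of a new headline doubles as an informal A/B test of that headline.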

So it’s been really interesting to see — some headlines, they really hit. We probably should do more exhaustive headline testing, but somehow, we never have the time, or we just don’t have the means, and I think what this has taught us is that we need to be a lot more data-driven about what kind of headlines our readers like and which ones are specific to which distribution method. So, for example, I know that on Twitter, the fewer capitalizations and the less proper English I put in a tweet with a link, typically, the better it does. The more conversational I spin the article, the more people click. Whereas, on some other distribution methods, people want to read a real headline and not so much just my blurb.

O’Donovan: That’s interesting. When you said you wanted to build a more core group of really engaged users, what is your view of what you want comments and their contribution to the site to be? I know you sometimes comment back to commenters on the site — is that kind of engagement top priority right now?
Dannen: It is, although comments are a constant quandary, because the comment area is really not the best place to engage with people, and it’s definitely not the best place for a new reader to feel enfranchised on the site.

I sort of struggle with what we’re going to do about that, but what I like — I should say, our intent is to adapt our CMS to the point where other people can annotate our articles and actually branch off an article and make their own version. We’ve already experimented with this a little bit using GitHub. We have a GitHub repository into which the text of our articles gets republished, and you can take that text and change it and branch it off and make a new version and make corrections or edits or whatever.

But that’s not the best interface for that, because most of our readers aren’t expecting to find our content on GitHub. But we’re thinking maybe eventually we’ll build some kind of interface on GitHub, build it right into the site, so that if people don’t like the way something is written, or if they have a different or ancillary story to tell, then they can sort of get in there and start writing, and people can, you know, see different versions. In that sense, it is like Wikipedia, as you said before.

O’Donovan: In terms of developing that stuff in an experimental and open way, are there any features on the site that are really interesting for you? For example, what is this back page thing?
Dannen: The back page is something that Gabe, our news hacker, built under the assumption that there might be better homepage formats for certain readers. Right now, our homepage is really pretty, and it looks great if it’s your first time on the site. You’ll get a good feel for what kind of content we’re running, and it has an overall magazine feel. But it’s not particularly hard-hitting — it doesn’t show you a lot of stories at once or let you quickly browse through them.

So we put together this back page, which essentially pulls in the article tags from the last 10 or 15 stories and puts them in a grid. The idea is to use the tag field, which nobody ever really thinks about anymore since we’re out of the days of SEO, and put little teaser phrases in the tag field, and then on the back page you get this grid of what are essentially clickbait phrases or sobriquets, many of which lead back to the same article.

It’s sort of along the same lines of experimenting with headlines. You might take the article I mentioned about women in engineering. I could tag that one “How to scare away women,” and on the backpage all you see is this box that says “How to scare away women.” It’s hard not to click on that to find out what it’s about. It actually refers to a stub we wrote about that says a lot of job listings for engineers are written in a way that really only appeals to men and sort of makes women want to run. So that’s the back page, that’s one experiment.
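The back page mechanism amounts to flattening recent stories' tag phrases into a grid of teaser cells. Here is a minimal sketch, assuming each story carries a list of tag phrases; the function name, data, and URLs are all invented for illustration:

```python
def build_back_page(stories, limit=15):
    """Pull the tag phrases from the most recent stories into a flat grid.

    Each cell pairs a teaser phrase with the article it links to, so
    several cells can lead back to the same story, one per tag.
    """
    grid = []
    for story in stories[:limit]:          # stories assumed newest-first
        for tag in story["tags"]:
            grid.append((tag, story["url"]))
    return grid

# Hypothetical recent stories, repurposing the tag field as teaser copy.
recent = [
    {"url": "/stubs/women-in-engineering",
     "tags": ["How to scare away women", "bro"]},
    {"url": "/posts/headline-experiments",
     "tags": ["Which headline wins?"]},
]

for teaser, url in build_back_page(recent):
    print(f"[ {teaser} ] -> {url}")
```

Because a story with several tags yields several cells, one article can occupy multiple boxes on the grid, which is exactly the "many of which lead back to the same article" effect Dannen mentions.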

O’Donovan: I am curious about how many stories you’re tagging just “bro.”
Dannen: [laughs] Well, it’s always related to that women in engineering stub. That word comes up again and again when we interview people. You know, whether it’s bro culture or bro-ing out, I don’t know. It does pop up.
O’Donovan: Does the open ethos behind these experiments conflict at all with being supported by Fast Company?
Dannen: We intend to be wildly transparent about what goes on on the site. So far, we haven’t had any pushback from the executive editor or the editor-in-chief, and they’ve been nothing but encouraging about talking about what we’re doing.

I — we may run into issues with that in the future. There’s a lot of information, specifically financial information, which I might be privy to if I wanted to dig, but at the end of the day it’s actually not that interesting. The most sensitive stuff is probably not the most widely appealing to talk about. So, in that sense, I think our interests are aligned with our agenda to be transparent. But, then again, that situation could totally change.

O’Donovan: So you’ve been at this a couple of months now. Where do you see this in six months or a year? What are some of your top goals in terms of what you might want to learn, but also in terms of what you might want to build?
Dannen: My priority right now is to recruit as many individuals and startups who are working in the area of new media or new content publishing aggregation, news, that whole schema — to get as many of those people guest authoring for us and getting interviewed by our reporters. Because there doesn’t seem to be, that I’ve found, a super technically literate place to talk out the future of media.

But we’re trying to get really technical about, What are the challenges and what are the results? What should we be building? What needs to be built? What can be bought off the shelf? Really pragmatic stuff, for people who are running small publishing operations, which seem to be popping up all over the place.

What I’m really hoping to learn is actually just, how do we make use of all these new media as publishing essentially becomes software, and software becomes content? These things are sort of meeting in the middle, and it’s going to be really interesting to see what role video plays, what role audio is going to play. Audio has been really underutilized, I think. And how much of it is going to be livestreamed and how much will be prepackaged? What happens to the concept of having an issue — a monthly or weekly issue — when we can just dump everything in a feed? I think the feed is maybe not the perfect paradigm in a lot of ways. I don’t know if that answers your question.

O’Donovan: It does! But it also raises questions. What is the role, in your mind, going to be for video and audio on your site? Is that something that would be constructive for you? I know you do post some interviews and things.
Dannen: I think it’s going to be huge for us. I think the role of editors in the coming next few years is going to be to determine which stories are going to be the ones that we invest the resources in. Obviously, we have limited resources and a handful of feature stories that we make our big bets on. To me, editorial judgment is going to be much more about allocating resources to a story and figuring out, Where do we take the time for video and infographics, and where do we do just text?
O’Donovan: So do you ultimately see this as a place where small developers working separately around the world can come with their problems and look for solutions? Do you want to build something that’s practically applicable?
Dannen: I think, like I said, that when it comes down to actually sharing code, the universe of developers is so segmented that right now it’s not feasible to do a tutorial and have it be really popular, because everyone is using different technologies. But I think in a couple years we’ll have it narrowed down: What are the best technologies for front end and back end, and what can we all contribute? Which open source projects are we all going to invest in?

But right now, everyone’s sort of working on different stuff in different offices all around the city.

The cicadas are here: 4 lessons from WNYC’s Cicada Tracker project

Posted: 03 Jun 2013 07:00 AM PDT

cicada-cc

When cicadas start coming out of the ground, it’s called a bloom. Different types of cicadas are separated into different broods. When a cicada first crawls out of the earth, it’s called a nymph.

This entomology jargon is a small part of the body of knowledge that WNYC’s Cicada Tracker has brought to public radio listeners and staffers alike. We wrote about the project just after it launched back in March, touching on the future possibilities for data journalism in which the data is collected by tiny, inexpensive bits of hardware. The Cicada Tracker was among the first large-scale experiments of this kind, with the goal of predicting the moment when this year’s brood would emerge by crowdsourcing ground temperature readings.
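The prediction logic behind the project is simple in principle: entomologists commonly cite a soil temperature of about 64°F, measured roughly eight inches underground, as the trigger for periodical cicada emergence. A minimal sketch of how crowdsourced readings could be turned into an emergence signal might look like this (the function name and threshold handling are illustrative, not WNYC’s actual code):

```python
# Commonly cited soil-temperature trigger for periodical cicada emergence,
# measured about 8 inches underground.
EMERGENCE_THRESHOLD_F = 64.0

def emergence_ready(readings_f):
    """Return the fraction of crowdsourced readings at or above the threshold."""
    if not readings_f:
        return 0.0
    hot = sum(1 for t in readings_f if t >= EMERGENCE_THRESHOLD_F)
    return hot / len(readings_f)

# Once most local readings clear 64°F, emergence is likely imminent.
print(emergence_ready([58.5, 63.0, 64.2, 66.1, 65.0]))  # → 0.6
```

Aggregating many cheap, noisy sensors this way is what made the crowdsourced approach viable: no single reading has to be precise if the pool is large.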

Did they succeed?

John Keefe, senior editor of data news at WNYC and the man behind the project, says yes. “I think it’s been a really cool success in these unexpected ways,” he says. “We set out to see if we could get people to participate in a pretty complicated project.”

The Cicada Tracker ultimately received nearly 1,500 temperature readings from listeners, many of whom built the kits distributed by the radio station or made their own. Since posting a sightings survey on their website, Keefe says they’ve received over 2,000 different reports. “Now, we’re kind of the go-to place for the map of cicadas,” Keefe says. “Everyone’s coming to us.”

Whether or not they exactly predicted the moment the first cicada’s hindquarters wriggled out of the earth, the project has become an important hub for a number of interested parties and an important milestone for public media and the power of crowdsourcing. Keefe says they learned a lot about what’s possible from the project, and shared four major takeaways about what the cicada tracker means for journalism, science, innovation and community.

"The only question left is: What stories do we want to chase?"

The most important lesson of the project, Keefe says, is that it proves all the necessary parts for this kind of journalism are accessible and in order. “It’s showing that the hardware and the software are doable. A lot of people could have told you that years ago — but now it’s really, clearly possible,” Keefe says.

The hardware actually proved to be more than doable during the Cicada Tracker project. The original kit WNYC had participants building comprised parts from Radio Shack and an Arduino board, and cost around $80. Within a few weeks of the project’s launch, however, a member of Hack Manhattan contacted Keefe and said he could get the cost down to around $16.

“It involved us having to program a chip ourselves to give to people. We mailed them all over New Jersey. We gave them out at a museum here in town. They were all assembled by people at a Brooklyn Brewery event we had. That’s just an amazing story right there.”

And it gets more amazing. A few weeks later, Keefe says, “the same guy sent the plan for that to China and had fifty of them come back at about a buck apiece.”

But Keefe says it wasn’t just about making sure the technical aspects were workable — crowdsourcing doesn’t work without the crowd. Another victory for the Cicada Tracker was proving not only that there was a widespread audience for this kind of story, but also widespread interest in participation.

“The software and the hardware are not out of reach. The crowds are not out of reach, especially in public media,” says Keefe. “The only question left is, what stories do we want to chase?”

Raising the bar on engagement

But of course, it’s not just the crowds, it’s what the crowds are willing to do.

“I was recently speaking on a panel about engagement on websites. They’re talking about liking things, leaving comments…it’s kind of interesting because here I was, also on the panel, and first of all, I’m in public media. If we don’t have engagement, we won’t exist. We have to ask people for money and they have to send it to us…But also, here, we have almost 3,500 instances of people collecting information, whether the temperature of the soil or the fact that they see cicadas…They’re not getting their opinions out. It’s not about comments. It’s basically people working together toward a common goal that’s kind of cool and kind of fun, but it takes more than writing a couple words in a comment box. You have to actually see something, or build this kit, or get a thermometer. So in terms of engagement as a buzzword, I think this is a pretty advanced version of that,” Keefe says.

There was some discussion in our last piece on this topic about why public radio listeners are an unusually good match for crowdsourced projects, but that’s not the only thing Keefe had going for him. We’ve also heard in the past about the power to be harnessed from niche interest groups, finding the audience where they already are. Keefe says there is a robust community around physical computing right now that will propel any project like the Cicada Tracker forward.

But there’s also a significant population of people who are interested in bugs and entomology, both scientific and amateur.

It was their inherent interest that radically lowered the cost threshold of the project — and that also got a group of researchers from the University of Connecticut involved, thereby significantly expanding the reach and heft of an idea that began as a hackathon prototype. These are important examples of a new kind of engagement, in which the audience can give back to the story in a concretely valuable way.

“The nexus of public media, science, and the audience”

The involvement of the UConn entomologists is exciting for WNYC because it validates the work the station is doing and confirms that the data it’s gathering has real value. The researchers helped the station design a survey form for its site where people can submit details on when and where they first saw cicadas emerge. The responses are fed to the researchers who, as of this writing, are busily tracking the cicadas northward from North Carolina to Virginia by car.

But the benefit to the scientists is even greater. Professor Chris Simon says she’s been tracking cicadas for many years, trying to figure out what lies behind unusual genetic development in the cicada’s transition from larva to adult. “Our work will help to show how a common genetic mechanism can be modified (probably by a few simple changes) to produce a radically different phenotype,” she says.

Simon first tried crowdsourcing as a method of research and data gathering in 1979. “I wrote an article in Natural History Magazine and asked for letters, postcards, and specimens,” she says. “I got thousands of replies.”

Of course, there are still things possible through digital technology that were not possible before. Simon says mapping the mailed data points would have taken months, compared to a data map that can update live. Simon says she was surprised by the level of human resources that the radio station made available to the project, but there are some wrinkles in the collaboration. To eliminate duplicate data points, WNYC has to eventually merge its data with the researchers’ table. Simon says it would be much easier if WNYC directed participants to the site she’s been using for years, Magicicada.org.
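The merge problem Simon describes is a classic deduplication task: the same sighting can arrive through both WNYC’s form and the researchers’ channels. A minimal sketch of one way to reconcile the two tables — field names and the duplicate rule (same rounded location, same date) are assumptions, not the teams’ actual schema:

```python
def merge_sightings(wnyc_reports, researcher_reports):
    """Combine two lists of sighting dicts, dropping near-duplicate reports.

    A report is treated as a duplicate when the same location (rounded to
    ~1 km precision) reports a sighting on the same date. Researcher rows
    are listed first, so they win ties.
    """
    seen = set()
    merged = []
    for report in researcher_reports + wnyc_reports:
        key = (round(report["lat"], 2), round(report["lon"], 2), report["date"])
        if key not in seen:
            seen.add(key)
            merged.append(report)
    return merged

wnyc = [{"lat": 40.712, "lon": -74.008, "date": "2013-05-20"}]
researchers = [{"lat": 40.71, "lon": -74.01, "date": "2013-05-20"},
               {"lat": 38.90, "lon": -77.04, "date": "2013-05-18"}]
print(len(merge_sightings(wnyc, researchers)))  # → 2; the NY reports collapse
```

Directing everyone to a single submission site, as Simon suggests, would sidestep the problem entirely — deduplication heuristics like this are always lossier than collecting the data once.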

Ultimately, Simon says the project has made her look forward to next year, when a new brood of cicadas will emerge in other parts of the country. WNYC reached out to radio stations there about continuing the project before the idea even occurred to Simon, she says.

“To document more examples of four-year jumps, we are using crowdsourcing to locate individuals that come out in odd years or in odd places for a particular year,” she says. “We are also using crowdsourcing to help find the boundaries of each brood or year class. We then go in ourselves and map the brood edges carefully.”  

Julia Kumari Drapkin is another radio journalist trying to capture the power of public media loyalty for the benefit of public health. Kumari Drapkin received funding from the Association of Independents in Radio’s Localore project to go to rural Colorado and crowdsource information and opinions about climate change. While she found that farmers were much more interested in talking in person than in regularly texting in data about rainfall, temperature, and crop yields, she nonetheless was able to build a live and robust site, TheAlmanac.org, that tells the story of changing weather patterns in a variety of ways. When I spoke to Kumari Drapkin a few months ago, she said the project had ignited in her a real excitement around finding the future of journalism at the “nexus of public media, science, and the audience.”

She says the Cicada Tracker is “definitely a step in the right direction,” and that she’s been watching WNYC’s progress closely.

“Everybody is curious, everybody wants to find the answer — scientists, journalists, listeners alike. When we all seek the answers together, the process is not only more efficient and informative for scientists, it’s more interesting and relevant to audiences when they can participate,” she wrote in an email. “Most importantly, it’s fun. Five-part win for journalism in the equation.”

Kumari Drapkin says she is currently working on a collaboration between her project, iSeeChange, and NASA and the Jet Propulsion Laboratory that would involve fact-checking satellite data via crowds and “near sensing what we see on the ground against what satellites are seeing from space.”

“It’s difficult to pull off,” she says, “but we believe there’s promise.”

“There was that high level of interest — not meddling, but interest.”

I asked Keefe how to make a project like this work, and he said he didn’t know. But he did have a couple of ideas.

First, the cicada project came with a built-in deadline. From the moment the idea was born at a hackathon, there was a race against the clock to get the sensors in place before the ground temperature started to rise. That kind of hard stop provides an organic impetus to work quickly and flexibly, to take risks, and to have backup plans, all of which Keefe says were essential to making the end product a possibility.

“The entire thing was a little bit crazy,” says Keefe. “We put it together pretty fast.”

But more importantly, he said the willingness of upper management to take a risk was what made the Cicada Tracker a real possibility for WNYC.

“This is going to sound super self-promotional, but it’s really not. We prototyped this at a hackathon, and the vice presidents all attended the demos, and a couple of them were saying, ‘Let’s do this.’ And I said, ‘If you want to do this, we can do this.’ And they said, ‘Go.’ And we kind of ran with it. It allowed this to happen much more quickly than it would have happened here earlier.”

The Cicada Tracker also ended up partnering with Radiolab, and there’s no doubt that the robust character of the institution helped the project along, though Keefe says that’s not necessarily something he could teach someone else.

For now, WNYC is keeping their next plans for sensor journalism projects and crowdsourced data under wraps, but there’s no question that it’s a burgeoning field with a lot of interest around it. This weekend, Keefe and other experts gathered for a workshop on sensor journalism at Columbia to discuss practices, applications and ethics. But Keefe, who was scheduled to speak on “Near Field Possibilities,” says that’s not the hard part.

“Journalists, whether they’re political journalists or sports journalists or data journalists, they’re taking their expertise, their knowledge of these different realms, and finding stories,” Keefe said. “I think that’s the next big challenge — what stories can we find and tell? Can we do investigative reporting this way? Is there something we can reveal that’s been hidden? I don’t know the answers to all those things yet. But my guess is, somewhere, somehow, in some cases, the answer is yes.”

Photo by jeff-o-matic used under a Creative Commons license.