Wednesday, December 19, 2012

Nieman Journalism Lab

A year of more of the same?

Posted: 18 Dec 2012 10:01 PM PST

Linear projections are always dangerous, but well-established, deeply rooted trends are too often overlooked and forgotten as more exciting new developments attract our attention — so let me highlight five basic features of recent media developments that I think will continue to shape the news throughout 2013.

  1. Further cost-cutting in a newspaper industry faced with persistent structural (digital transition) and cyclical (global economic crisis) pressures.
  2. Further reduction in many commercial broadcasters’ investment in news as they recalibrate to match audience and advertiser interest.
  3. Increased political pressure on public service media, in some cases (the U.S.) from conservatives ideologically opposed to state intervention in media markets, in some cases (much of Western Europe) from private sector media lobbies concerned with what they regard as unfair competition.
  4. More journalistic content-producing startups, nonprofit as well as for-profit, launched to great fanfare only to struggle to make ends meet and draw much of an audience.
  5. Major digital intermediaries like Amazon, Apple, Facebook, and Google presenting content producers (both private and public) with a growing number of tactical challenges in terms of access to users, data, control over pricing, etc.

I’m sure there will also be at least one major surprise, potentially a game-changer that bucks these trends. Maybe an easily replicated and scalable model for collaborative, quality, low-cost journalism; maybe a significant shift in willingness to pay as more people wake up to the differences between freely available content and stuff only available at a price; maybe a yet-to-be-foreseen technological shift; or maybe a legal development (on fair use, for example — keep an eye on the French and German attempts to change the terms of trade for online news, and on Google’s attempts to fight these moves).

I’m not predicting any of these things — just recognizing that the future usually turns out to be weirder than we think. That said, it’s also built on the past, and the past has a way of repeating itself, especially if we forget it, so let’s keep the basic features of recent years in mind.

Rasmus Kleis Nielsen is assistant professor of communications at Roskilde University in Denmark and research fellow at the Reuters Institute for the Study of Journalism at the University of Oxford.

Journalism schools as startup accelerators

Posted: 18 Dec 2012 10:01 PM PST

Disruptive-innovation guru Clay Christensen exhorts news organizations to focus on the “jobs to be done” in their communities. Help people do them and revenue opportunities will follow. (Especially when consumers didn’t realize they needed those jobs to be done.)

The winners in 2013 will be entrepreneurs and intrapreneurs whose mission is to help get those jobs done: creative individuals who see a gap and fill it, define a niche and commoditize it, re-imagine information and deliver it in smart new ways — ways that, increasingly, are easier to monetize than erecting paywalls.

While many legacy news organizations continue to focus on “making money” as their paramount job to be done (a focus on themselves instead of their consumers), media entrepreneurs are stealing bases and hitting home runs all around them. That will continue in force this year.

Media entrepreneurship is happening in a couple of ways. Front-end media entrepreneurs are creating news content — from comics-journalism apps and digital voter guides to state watchdog initiatives. Back-end entrepreneurs are building mobile apps, scraping data, and automating tasks. The creativity — believe me, I just helped judge the SXSW News Technology Accelerator entries — is simply breathtaking.

I also just finished teaching the inaugural cohort of American University’s MA in Media Entrepreneurship program. Nine sharp students had nine great projects — any of which could have been triples for traditional journalism organizations.

In addition to clever ideas for news products and news delivery systems, we will see more creative ideas for journalism collaborations. The opportunities are wide open for connecting silos of information in communities, amplifying good stories that people want to know about, and leveraging resources so that the sum of the efforts is bigger than the individual contributions. They can be metro collaborations, such as the new video-sharing agreement between The Boston Globe and WBZ, or mainstream media organizations working with indie news sites, as is happening with The Seattle Times and The Oregonian in Portland. Someone will crack the nut on revenue sharing.

Journalism schools, such as CUNY, Arizona State, American, and others, will continue to innovate with programs to help both entrepreneurs and intrapreneurs build the skills, mindsets, and networks to make their media ideas happen. Welcome to the year of j-schools as start-up accelerators.

The winners this year will be those who see the opportunities right under their noses and act on them.

Jan Schaffer is executive director of J-Lab: The Institute for Interactive Journalism at the American University School of Communication. Previously, she was business editor and a Pulitzer Prize winner at The Philadelphia Inquirer.

The end of “disruption”

Posted: 18 Dec 2012 10:01 PM PST

In 2013, news will continue to be made and reported. Much of it will be bad. Some of it will be good.

In 2013, newspapers and newsmagazines will struggle to make ends meet. Some will go out of business. Some will be acquired. Some will survive into 2014, at which point they will continue to struggle to make ends meet.

In 2013, ad sales will be crucial to the health of the news business, and ad sales will follow a cyclical pattern, tied to the health of the economy. But ad sales won't be enough, not for general interest publications, anyway, and so we'll see more experiments with online paywalls and subscription plans.

In 2013, breaking news will be reported immediately through the web, and online forums will provide endless opportunities to discuss the news. But the "atomic unit" of journalism (to borrow a term beloved by news pundits and no one else) will be the story.

I make these obvious observations not to be glib but to point out that many of journalism's fundamental qualities aren't changing, or at least aren't changing all that much. That's true even when you look at the technological and financial aspects of the news as a business. Searching, aggregating, linking, blogging, craigslisting, photosharing, social networking, microblogging: These things are not new anymore. The big, transformative changes in the industry — the shifts in the habits of readers and advertisers — happened years ago, and since then a kind of uneasy stability has taken hold.

The future is uncertain, yes, but the future has been uncertain for a while now. The basic dynamics of the news business didn't change much from 2010 to 2011 to 2012, and I suspect they won't change much in 2013 or, for that matter, in 2014.

For a long time now, "disruption" has been the go-to buzzword in commentary about journalism. Pundits and consultants love to say "disruption" because the word tends to attract money and attention. But the word is starting to ring hollow. Throwing it around today seems more like a way to avoid hard thinking than to engage in it. Maybe 2013 will be the year when we finally stop talking about "disruption." I hope so, because then we can start giving as much consideration to what endures as to what changes.

Nicholas Carr's most recent book is The Shallows: What the Internet Is Doing to Our Brains, which was a finalist for the 2011 Pulitzer Prize in General Nonfiction. He blogs at roughtype.com.

The year responsive design starts to get weird

Posted: 18 Dec 2012 10:01 PM PST

Over the past year, the idea of responsive web design has taken hold in a growing number of newsrooms. The Boston Globe launched a paywalled version of the Globe as a responsive experience at the end of 2011. Pay for the Globe online and it’ll work — and look good — on your mobile, your tablet, and your desktop just the same. One URL, three different presentations, making the most of the screen size you’ve got.

In 2012, the BBC, Time, and the Guardian, among others, rolled out responsive sites, while even more outlets added apps and projects that were rigged to fit any browser. Mashable went responsive earlier this month and promptly declared 2013 the Year of Responsive Design.

I think that’s only half right.

For everyone just tuning in, 2013 will be the Year of Responsive Design. For the nimble but not bleeding edge, 2013 will be the Year of Responsive Design. For lumbering, slow-to-adapt, drowning-in-enterprise-bullshit news organizations, maybe 2014 will be the year, but at least they’ll start talking about it in 2013. It just makes too much sense for an online news organization to have URLs that just work no matter what size screen hits them.

But for people living on the edge of technology, I think 2013 will be the year that Responsive Design Starts To Get Weird.

Buckle up, Future of Newsers.

Think it’s hard to adapt your content to mobile, tablet, and desktop? Just wait until you have to ask how this will also look on the smart TV. Or the refrigerator door. Or on the bathroom mirror.

Or on a user’s eye.

They’re all coming…if they aren’t already here. It doesn’t take much imagination or deep reading of the tech press to know that in 2013 more and more devices will connect to the internet and become another way for people to consume internets.

We’ll see the first versions of Google’s Project Glass in 2013. A set of smart glasses will put the internet on a user’s eyes for the first time. Reaction to early sneak peeks is a mix of mockery and amazement, mostly depending on your propensity for tech lust. We don’t know much about them, other than some tantalizing video, but Google is making them, so it’s a safe bet that Chrome For Your Eyes will be in there. And that means some news organization in 2013 is going to ask: “How does this look jammed right into a user’s eyeballs?”

Others argue that 2013 (like 2012 and 2011) may (finally) be the year of the smart TV. The reason? Apple, by rumor or innuendo, is in the TV business. Without so much as a dubious leak to go on, fans are ready to take out a car loan to buy a TV. And it would be ludicrous to believe that a full-sized Apple TV (as opposed to the hockey-puck-sized appliance they sell now) won’t be connected to the Apple universe, which means iCloud, which means we can hope for Safari on the TV.

Google, on the other hand, has had a smart TV out since 2010, and the 15 of us who own them are interested to see the platform evolve further. I had Android’s Jelly Bean update on my TV before most people had it on their phones, but I still can’t do very much with it. My main beef with Google TV (and it’s as good a reason as any for why it hasn’t caught on) is that you can use the internet, or you can watch TV. You cannot do both at once. There’s a bright line between the two and you can’t cross it. I can tweet from my TV — trust me, it’s less exciting than you think — but I can’t tweet while I’m watching that TV or follow a feed of hashtags about the show I’m watching. So instead of using the 40 inches of HD I’ve got, I’m using my iPad or iPhone as a second screen (I know, first world problems). I want to know what happens when my first screen and second screen are the same screen. We might find out what that’s like in 2013. And, again, some news designer is going to have to wonder how this story is going to look on a 64-inch (or larger) screen hanging in the living room.

In 2012, if you were in the market for a refrigerator and had a cool $3,700 to burn, you could have bought one with a touch screen and a set of apps on board. I’m still mad at my wife for not letting me get one. Why wouldn’t I want to tweet from the fridge? Well, truth be told, tweeting from one in the store was…less than ideal. But stretch your imagination a bit. Take an iPad screen and embed it in your fridge door. Instead of the kid’s artwork hung by magnet, you now have a dynamically updating, touch-reactive screen. Now come to the fridge to grab a soda and get updated on the latest news while you’re at it. Is your content formatted for binge eating?

Reading tech news in 2012, it wasn’t hard to imagine all sorts of flat surfaces in your house becoming screens. Microsoft introduced the Surface, which pained me because what used to be called the Surface — a touch-screen table computer now called the PixelSense — seemed like a fantastic idea. Imagine, instead of passing sections of the paper to your spouse across the breakfast table, you passed browser windows. But with 82-inch touch screens on the market, it’s only a matter of time before walls in our houses become giant internet-connected appliances. Or, in my dream home of the future, my bathroom mirror becomes a screen, showing me the weather, the traffic on my commute, my schedule, my inbox, and a feed of news while I brush my teeth in the morning. Nuts, you say? Smart mirrors went on sale in April for a pricey $7,800. Not in your price range? The New York Times R&D Lab hacked one together with an Xbox 360 Kinect and a flatscreen TV in 2011. Smart mirrors aren’t far off. So is your content formatted for shave-time reading?

And I haven’t even gotten around to asking if your content is formatted for your watch.

Now, do I expect all or even any of these to catch on and become the next smartphones? No. Some more than others, but not all. But inexorably, more things in our lives are going to become connected to the internet, capable of displaying news for us when we find ourselves with a moment. And many of those things are going to have bigger, better screens than our tiny smartphones do now. So if I can start a great, long-form story on my coffee table, send it to my bathroom mirror as I brush my teeth before going to bed, and finish it on my iPad before falling asleep, why wouldn’t I?

Is your content ready for that?

Matt Waite is a professor of practice at the University of Nebraska’s College of Journalism and Mass Communications, teaching reporting and digital product development. Previously, he was the principal developer of PolitiFact.

Mobile first

Posted: 18 Dec 2012 10:01 PM PST

For the past six years, various people have made predictions about next year being “The Year of Mobile Fill-in-the-Blank.” In many ways those predictions have been accurate: The growth of smartphone and tablet usage has been off the charts. In other ways, the predictions have fallen short. So I’m going to weigh in with one that feels somewhat safe (and obvious?).

If media companies want to stay in step with their users in 2013 and beyond, they will no longer be able to think of the mobile experience as being downstream from, or an afterthought to, the desktop web experience.

The numbers speak for themselves. In the next 12–18 months, many news organizations will cross the 50 percent threshold where more users are visiting on phones and tablets than on desktop computers and laptops.

In November, 37 percent of all visits to the Times (including to NYTimes.com, our mobile site, and all of our apps) came from phones or tablets. That’s up from 28 percent in 2011 and 20 percent in 2010. When media organizations see numbers like this, they will be forced to decide whether they can continue to put the majority of their digital efforts into the presentation of their desktop report. If they do, their product, and their journalism, will not be tailored for the majority of their digital readers.

Many news organizations are already starting to think this way. Josh Marshall of Talking Points Memo was quoted right here earlier in the year describing his thinking on the shift:

Inevitably, as long as mobile was something like five percent of traffic, it was just something you made available on the side. But you start to see, this is going to be half of our audience. We can’t be approaching it in a way that the website is the thing, and we’re making imitations of it — because this thing is losing its primacy. In a lot of ways, it wasn’t until late last year that it hit me at a different level. It hit me as more than a concept. It was really true.

At the Times, we made some big steps forward on this front in 2012. We used the big news events of the year as opportunities for experimentation, beginning the process of trying to make our mobile report more dynamic and not just a downstream feed of articles from our desktop website.

It sounds simple: Put your effort where the majority of your readers are. The reality, of course, is that it’s not simple at all.

First, there are the resource constraints. Most newsrooms put their digital focus on their desktop websites. Shifting manpower to mobile means either doing less somewhere else or hiring a whole new staff. The latter is problematic: Controlling costs is a top priority everywhere, and there is a new platform or device to worry about every other week. What’s the answer? I’m not exactly sure. But I think the shift has to happen, and it needs to be accomplished through a combination of smart automation and directing more human brainpower to mobile.

Second, there are the business-model problems. Figuring out how to monetize an audience on mobile (especially on the phone) is probably not going to be solved by the end of 2013. But there are parallels here with the web in the late ’90s. If news organizations had waited until all the business questions surrounding the Internet were solved before they began to experiment on websites, they would have been in bad shape. Holding back on mobile because of the monetization concerns feels perilous when that’s where the audience is going. I would like to see news organizations lead by coming up with new forms of storytelling, new presentations for our journalism, and new types of advertising, all of which are uniquely suited to mobile devices.

Third, there are a lot of complicated technical challenges. Should you pursue native apps or is it all about the browser? Will HTML5 solve all of our problems? Do you have to be on every new platform or should you just focus on the established ones? Isn’t responsive design the answer to everything? My only prediction on this front is that these questions are also not going to be answered by the end of 2013. Contrary to what some have said, I believe that we’re just at the beginning of an exciting time of experimentation with native app technologies on tablets and phones. That doesn’t mean that HTML5 should be abandoned or that news organizations shouldn’t pursue responsive designs for their websites. I think we have to continue to experiment with all of it. Easier said than done, I realize, in a time when money is short. But putting all of your eggs in one basket at this stage feels dangerous.

Finally, there are a bunch of potentially thorny but also potentially exciting journalistic issues to be addressed. It’s clear that refining the presentation of news for phones and tablets is essential. But it’s less clear whether news organizations need to fundamentally change how they approach the news. At The Times, our audience on smartphones is much younger and more international than our desktop and print audience. Does that mean we should suddenly stop covering New York aggressively or start obsessing over video games? Obviously not. But it could, over time, mean that we need to shift our concept of who our primary audience is. There are also many issues to work out when it comes to covering news events in real time and through social media, which is increasingly going mobile. And how do we deal with the fact that usage patterns on phones, tablets, and standard computers are all different?

Many of these questions have yet to be answered, but one thing is clear. If news organizations want to serve the majority of their users in the best possible way and stay ahead of the game on these issues, they will have to adopt more of a “mobile first” mentality.

Fiona Spruill is editor of emerging platforms at The New York Times.

How does Wikipedia deal with a mass shooting? A frenzied start gives way to a few core editors

Posted: 18 Dec 2012 11:53 AM PST

If you follow me on Twitter, you’re probably already well acquainted with my views on what should happen in the wake of the shooting spree that killed 20 children and six educators at a suburban elementary school in Newtown, Connecticut. This post, however, will build on my previous analysis of the Wikipedia article about the Aurora shootings, as well as my dissertation examining Wikipedia’s coverage of breaking news events, to compare the evolution of the article for the Sandy Hook Elementary School shooting to other Wikipedia articles about recent mass shootings.

In particular, this post compares the behavior of editors during the first 48 hours of each article’s history. The fact that there are 43 English Wikipedia articles about shooting sprees in the United States since 2007 should lend some weight to this much-ballyhooed “national conversation” we are supposedly going to have, but I chose to compare just six of these articles to the Sandy Hook shooting article, based on their recency and severity, plus one international example.

Wikipedia articles certainly do not break the news of the events themselves, but the first edits to these articles happen within two to three hours of the event unfolding. Once created, however, these articles attract many editors and changes and grow extremely rapidly.

Figure 1: Number of changes made over time.

The Virginia Tech article, far and away, attracted more revisions than the other shootings in the same span of time, and ultimately enough revisions in the first 48 hours (5,025) to put it within striking distance of the top 1,000 most-edited articles in all of Wikipedia’s history. Conversely, the Oak Creek and Binghamton shootings, despite having 21 fatalities between them, attracted substantially less attention from Wikipedians and the news media in general, likely because these massacres had fewer victims and the victims were predominantly non-white.

A similar pattern (Virginia Tech as an exemplary case, shootings involving immigrants and minorities attracting less attention, and the other shootings behaving largely alike) also appears in the number of unique users editing an article over time:

Figure 2: Number of unique users over time.

These editors and the revisions they make cause articles to rapidly increase in size. Keep in mind, the average Wikipedia article’s length (albeit highly skewed by many short articles about things like minor towns, bands, and species) is around 3,000 bytes, and articles above 50,000 bytes can raise concerns about length. Despite the constant back-and-forth of users adding and copyediting content, the Newtown article reached 50 kB within 24 hours of its creation. However, in the absence of substantive information about the event, much of this early content relates to national and international reactions and expressions of support. As more background and context come to light, this list of reactions is typically removed, which appears as a sudden contraction in article size: around 22 hours for Utøya, and around 36 hours for Newtown and Virginia Tech. As before, the articles about the shootings at Oak Creek and Binghamton are significantly shorter.

Figure 3: Article size over time.

However, not every editor does the same amount of work. The Gini coefficient captures the concentration of effort (in this case, number of revisions made) across all editors contributing to the article. A Gini coefficient of 1 indicates that all the activity is concentrated in a single editor while a coefficient of 0 indicates that every editor does exactly the same amount of work.
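The calculation behind Figure 4 is straightforward to reproduce. A minimal sketch in Python, using a standard closed form for the Gini coefficient (the per-editor revision counts here are invented for illustration, not taken from the articles' actual histories):

```python
def gini(counts):
    """Gini coefficient of per-editor revision counts.

    Returns 0.0 when every editor makes the same number of edits and
    approaches 1.0 as activity concentrates in a single editor.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Closed form: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n,
    # with x_i sorted ascending and i running from 1 to n.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

print(gini([5, 5, 5, 5]))    # 0.0 -- perfectly even effort
print(gini([1, 1, 1, 97]))   # ~0.72 -- one editor dominates
```

Computing this over the cumulative revision log at each point in time yields a centralization curve like those in Figure 4.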

Figure 4: Gini coefficient of editors’ revision counts over time.

Across all the articles, the edits over the first few hours are evenly distributed: editors make a single contribution, and others immediately jump in to make single contributions of their own. However, around hour 3 or 4, one or more dedicated editors show up and begin to take a vested interest in the article, which is visible as a rapid centralization of activity. This centralization increases slightly over time across all articles, suggesting these dedicated editors continue to edit after other editors move on.

Another way to capture the intensity of activity on these articles is to examine the time elapsed between consecutive edits. Intensely edited articles may have only seconds between successive revisions, while less intensely edited articles can go minutes or hours. These data are noisy and bursty, so the plot below is smoothed over a rolling average of about 3 hours.
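The measurement itself is simple: given the revision timestamps, the waiting times are pairwise differences, and a rolling mean smooths out the burstiness. A minimal Python sketch (the timestamps are hypothetical, and the window here is three revisions rather than the roughly three hours used for the plot):

```python
from datetime import datetime, timedelta

def waiting_times(timestamps):
    """Seconds elapsed between consecutive revisions."""
    ts = sorted(timestamps)
    return [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]

def rolling_mean(values, window):
    """Trailing rolling average used to smooth the noisy, bursty gaps."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical revision timestamps, seconds to minutes apart
base = datetime(2012, 12, 14, 10, 0)
stamps = [base + timedelta(seconds=s) for s in (0, 20, 45, 300, 320, 1500)]
gaps = waiting_times(stamps)
print(gaps)                   # [20.0, 25.0, 255.0, 20.0, 1180.0]
print(rolling_mean(gaps, 3))  # smoothed series, ready to plot on a log axis
```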

Figure 5: Waiting times between edits (y-axis is log-scaled).

What’s remarkable is the sustained intensity over a two-day period. The Virginia Tech article was still being edited several times a minute even 36 hours after the event, while other articles were seeing updates every five minutes more than a day afterward. This means that even at 3 a.m., all these articles were still being updated every few minutes by someone, somewhere. There’s a general upward trend, reflecting intense activity immediately after the article is created followed by increasing time lags as the article stabilizes, but there’s also a diurnal cycle, with edits slowing between 18 and 24 hours after the event before quickening again. The same slowing and quickening appears around 20 hours and again around 44 hours, suggesting information is released and incorporated in cycles as the investigation proceeds.

Finally, who is doing the work across all these articles? The structure of which users contribute to which articles is also revealing. Much of the editing is done by users who have never contributed to the other articles examined here, but a few editors contributed to each of these articles within four hours of their creation.

Figure 6: Collaboration network of articles (red) and the editors who contribute to them (grey) within the first four hours of their existence. Editors who’ve made more revisions to an article have thicker and darker lines.

Users like BabbaQ, Ser Amantio di Nicolao, and Art LaPella were among the first responders to edit several of these articles, including Sandy Hook. However, their revisions are relatively minor copyedits and reference formatting, reflecting the prolific work they do patrolling recent changes. Much of the substantive content of the article is from editors who have edited none of the other articles about shootings examined here, and likely no other articles about other shootings. In all likelihood, readers of these breaking news articles are mostly consuming the work of editors who have never previously worked on this kind of event. In other words, some of the earliest and most widely read information about breaking news events is written by people with fewer journalistic qualifications than Medill freshmen.

What does the collaboration network look like after 48 hours?

Figure 7: Collaboration network after 48 hours.

3,262 unique users edited one or more of these seven articles, 222 edited two or more, 60 edited three or more, and a single user, WWGB, had edited all seven within the first 48 hours of their creation. These editors are at the center of Figure 7, where they connect to many of the articles on the periphery. The stars surrounding each of the articles are the editors who contributed to that article and that article alone (in this corpus). WWGB appears to specialize not only in editing articles about current events, but in participating in a community of editors engaged in newswork on Wikipedia. These editors are not the first to respond (as above), but their work involves maintaining administrative pages enumerating current events and mediating discussions across disparate current events articles. The ability of these collaborations to unfold as smoothly as they do appears to rest on Wikipedia editors with newswork experience either supplanting or complementing the work done by the amateurs who first arrive on the page.
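Overlap counts like these fall out of treating the revision histories as a bipartite editor–article graph. A minimal sketch in Python, with made-up (article, editor) pairs standing in for the real edit logs:

```python
from collections import defaultdict

# Hypothetical (article, editor) pairs; in practice these would come from
# each article's first 48 hours of revision history.
edits = [
    ("Sandy Hook", "WWGB"), ("Sandy Hook", "BabbaQ"), ("Sandy Hook", "A"),
    ("Virginia Tech", "WWGB"), ("Virginia Tech", "B"),
    ("Aurora", "WWGB"), ("Aurora", "BabbaQ"),
]

# Map each editor to the set of distinct articles they touched
articles_by_editor = defaultdict(set)
for article, editor in edits:
    articles_by_editor[editor].add(article)

def editors_with_at_least(k):
    """Editors who touched at least k distinct articles in the corpus."""
    return sorted(e for e, arts in articles_by_editor.items() if len(arts) >= k)

print(editors_with_at_least(2))  # ['BabbaQ', 'WWGB']
print(editors_with_at_least(3))  # ['WWGB']
```

Sweeping k from 1 up to the number of articles reproduces the one-or-more / two-or-more / all-seven breakdown reported above.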

Of course, this just scratches the surface of the types of analyses that could be done on these data. One might look at changes in the structure and pageview activity of each article’s local hyperlink neighborhood to see which related articles attract attention, examine the content of the article for changes in sentiment, trace the patterns of introducing and removing controversial content and unsubstantiated rumors, or broaden the analysis to the other shooting articles. Needless to say, one hopes the cases for future analyses become increasingly scarce.

Brian Keegan is a post-doctoral research fellow in computational social science at Northeastern University. He earned his Ph.D. in the Media, Technology, and Society program at Northwestern University's School of Communication, where his dissertation examined the dynamic networks and novel roles which support Wikipedia's rapid coverage of breaking news events like natural disasters, technological catastrophes, and political upheaval. This article originally appeared on his website.

New York Times gets into original ebook business with Byliner

Posted: 18 Dec 2012 09:49 AM PST

At The New York Times, book deals are not exactly a foreign concept. At any given moment, any number of Times journalists are at various stages of the book publishing process. Now the newspaper wants to capture a bit more of that creativity by producing timely ebooks with the publishing startup Byliner.

The deal means the Times will publish around a dozen nonfiction narratives in 2013, in areas like politics, business, and culture, written by Times journalists and available on Kindle, iBooks, and Nook. The first book, Snow Fall: The Avalanche at Tunnel Creek, by John Branch, is listed as “coming soon” for $2.99 at the Times’ online store. The tale of skiers trapped in an avalanche in Washington State spins out of a story Branch reported that is set to appear in the Times.

Newspapers have been dabbling in ebooks for a while now, with varying approaches and degrees of success. A number of newspapers use ebooks as a means for repurposing and repackaging their reporting for a different audience. But Gerald Marzorati, the Times’ editor for editorial development, said the Times will go beyond rehashing its reporting in ebook form and plans to develop original stories that will be exclusive to the platform. Revenue generated by ebook sales will be shared between the Times and Byliner.

“The idea is to do 10 to 12 of these in the next year and see what works to some extent,” Marzorati told me.

The next book in the pipeline is an original story by David Leonhardt, the Times’ Washington bureau chief, that looks at how the election, government austerity plans, and the “fiscal cliff” negotiations have affected the economy. Marzorati told me they’re in the planning stages of books from health writer Gretchen Reynolds and business columnist James Stewart. Marzorati said ebooks have “the feeling of being new and immediate, and Amazon is selling millions of them. There is a feeling on the part of journalists too that they want to experiment with this.”

The Times ebooks will fit with the scheme of most Byliner originals, stories that fall somewhere between a long magazine article and a short book — or, as Marzorati put it, things “meant to be read in a single sitting.”

Marzorati’s job is an interesting one — “editorial development” could be rephrased as “find ways to make additional dollars off the editorial product.” Aside from ebooks, Marzorati oversees the development of conferences like the recent DealBook conference.

Amazon has said it sold more than 2 million Kindle Singles in the program’s first 14 months, which may be another reason media companies want in on ebooks. Publishing ebooks is a natural fit for the Times: it serves the paper’s strategy of finding new sources of revenue while also reaching new audiences on mobile devices and through apps like Flipboard.

Along with the Byliner project, the Times is partnering with Vook to create TimesFiles, a collection of 25 ebooks on select topics with stories from the Times archives. (Titles include The Life and Films of John Hughes, The Iran-Contra Affair, Finding the Right Financial Planner, and…Artichokes: Articles and Recipes from The New York Times.)

Marzorati said they decided to partner with Byliner because it would allow them to get the books up and running quickly. It also helped that Marzorati previously worked with Byliner cofounder Mark Bryant on Play, the Times’ short-lived sports magazine.

Partnering with Byliner gives the Times a quicker glide path into Amazon’s Kindle Store and Apple’s iBookstore. Aside from working with individual authors, Byliner has also partnered with publishers like New York magazine and Esquire. Byliner splits the revenue with its authors, though Bryant would not disclose the terms of the agreement with the Times.

Bryant said publishers want to be able to use their brand and recognition to sell ebooks and reach new readers in a way that doesn’t require too much setup or additional resources. For the Times, the reporters and editors will produce stories and then hand them off to Byliner for formatting. “We’re marrying a kind of old-school group of editors with high standards to a new digital publishing effort and publishing community,” Bryant said.

What makes Byliner stories compelling for readers comes down to three things: price, length, and subject, Bryant said. The length of a Byliner story is perfect for journalists, Bryant said, because it lets them expand beyond the constraints of typical newspaper or magazine writing. “It’s a matter of finding the right type of narrative that will flourish in that kind of space between magazines and books,” he said.

Timeliness and topicality are two of the biggest selling points for any newspaper. That raises the question of why the Times would publish original, exclusive work as ebooks instead of in the paper and on the website. Marzorati says that won’t be a problem: In the same way reporters and editors decide which stories fit what sections or what kind of treatment, they’ll determine what will work best as an ebook. The Times will break news and pursue narrative features in the same way it always has, he said, but ebooks will be a new venue for reporting.

“The ambitious writers and reporters today want to be tweeting and doing long form and be on Facebook and some form of television,” he said. “They want to be multi-platform journalists, and this is another platform.”

For MuckRock.com, the new Freedom of the Press Foundation will mean more muck, more rocks

Posted: 18 Dec 2012 09:24 AM PST

The first time I heard of Michael Morisy and MuckRock.com was in 2010, after the site was targeted by a bureaucrat working for Massachusetts Governor Deval Patrick.

It seems that MuckRock, using the state’s open records law, had obtained information about how food stamps were being used in grocery stores. The data, which did not name any individual food-stamp recipients, had been lawfully requested and lawfully obtained. But that didn’t stop said bureaucrat from threatening Morisy and his tech partner, Mitchell Kotler, with fines and even imprisonment if they refused to remove the documents from their site.

They refused. And the bureaucrat said it had all been a mistake.

Now Morisy is preparing to expand MuckRock’s mission of filing freedom-of-information requests with various government agencies and posting them online for all to see. The just-launched Freedom of the Press Foundation has identified MuckRock as one of four news organizations that will benefit from its system of crowdsourced donations. The best-known of the four is WikiLeaks.

The foundation’s board is a who’s who of media activists, including Pentagon Papers whistleblower Daniel Ellsberg, Electronic Frontier Foundation co-founder John Perry Barlow, Josh Stearns of Free Press and the journalist Glenn Greenwald, now with The Guardian.

“The Freedom of the Press Foundation can be a first step away from the edge of a cliff,” writes Dan Gillmor, author of We the Media and Mediactive. “But it needs to be recognized and used by as many people as possible, as fast as possible. And journalists, in particular, need to offer their support in every way. This is ultimately about their future, whether they recognize it or not. But it’s more fundamentally about all of us.”

What follows is a lightly edited email interview I conducted with Morisy (a past Nieman Lab contributor) about MuckRock, the Freedom of the Press Foundation, and what comes next.

Kennedy: Tell me a little bit about MuckRock and its origins.
Morisy: I’d been really frustrated that we hadn’t seen much innovation in newsgathering generated by journalistic organizations. You see lots of innovations in how stories are told, but they’ve been generated by companies like Twitter, Facebook, and Instagram — all wonderful organizations, but ones which generate news as a byproduct, and where the journalistic function is by far secondary to business considerations. My co-founder and I wanted to create a startup where creating news was a core part of the business, and where the news was both user-generated and -directed as well as verified.

Since requests on MuckRock come from — and are paid for by — our users, we are able to align our business and editorial goals almost perfectly. We don’t sell advertising, we don’t put up paywalls. We just help people investigate the issues they want to, and then share those results with the world.

We’ve now been growing as a business and as an editorial operation for three years, with a part-time news editor and two fantastic interns.

Kennedy: What sorts of projects are you involved in today?
Morisy: Our biggest project to date is a partnership with the Electronic Frontier Foundation (EFF) called the Drone Census, which has broken a lot of major stories around the country. We let anyone submit an agency’s information and then we follow up with a public records request. So far we’ve submitted 263 requests to state, local, and federal agencies, the vast majority of which were suggested by the public. And it’s helped shed more light on a program that police departments and drone manufacturers are very purposefully keeping quiet.

We’ve also gotten to cover some really interesting local stories, such as getting the late Boston mayor Kevin White’s FBI file and taking an inside look at the timing of a drug raid, as well as national stories.

Kennedy: What is the nature of your relationship with The Boston Globe?
Morisy: MuckRock was invited to be part of the Globe Lab‘s incubator program a little over a year ago. We’ve received free office space and, most important, a good mailbox to receive the dozens of responses we get back every day. It’s also given us a chance to bounce ideas back and forth with their technology and editorial teams, and we’re in the early stages of a collaborative project with them.

They also recently launched The Hive, a section focused on startups in the Boston area. Given my experience running one and my editorial background, when they were looking for someone to manage and report for that section, I was a natural fit and thrilled to be invited to cover startups in the area. It’s a dream job, and it means I now have two desks, and often wear two hats inside the same building.

Kennedy: How did you get involved in the Freedom of the Press Foundation?
Morisy: Trevor Timm has been our main point of contact with the EFF working on the drone project, and he’s been absolutely great to work with. He reached out to us about a week ago and said that he was working on a new venture to help crowdfund investigative journalism projects, and we were honored to be thought of. It turns out he is the executive director of the Freedom of the Press Foundation, so we got lucky to be working with the right people.
Kennedy: Do you have a goal for how much money you’re hoping to raise through the foundation? What kinds of projects would you like to fund if you’re successful?
Morisy: We’re kind of going into this with an open mind and a hopeful heart. Any amount raised is greatly appreciated, but this will help jumpstart several new projects similar in size and scope to the drone effort, which has had an amazing response, including nods from The New York Times and many other outlets. It may also give us the flexibility to fund important stories that maybe are not as sexy. We were really interested in funding an investigation into MBTA price hikes for the disabled, for example, but our crowdfunding efforts on Spot.us were essentially dead on arrival. Having a reserve will allow us to take gambles on stories like that without having to choose between making rent and breaking news.

Dan Kennedy is an assistant professor of journalism at Northeastern University and a panelist on Beat the Press, a weekly media program on WGBH-TV Boston. His blog, Media Nation, is online at www.dankennedy.net. His book on the New Haven Independent and other community news sites, The Wired City: Reimagining Journalism and Civic Life in the Post-Newspaper Age, will be published by the University of Massachusetts Press in May 2013.