Thursday, May 31, 2012

Nieman Journalism Lab



Jeff Israely: To B2B or not to B2B, that is our question

Posted: 30 May 2012 11:30 AM PDT

Editor’s Note: Jeff Israely, a former Time magazine foreign correspondent in Europe, has launched a news startup called Worldcrunch. For the past two years, he’s been describing and commenting on the process here at Nieman Lab. Read his past installments here.

In 2011, when we launched our site, the focus was on building our product. Now, the next stage in the startup climb is to build the business.

As the editor, I still spend the majority of my time on the product side: helping to pick, produce, and package our news content in the best old and new ways possible. But I am also busy working with my business partner Irène Toporkoff on the essential how-to-make-the-product-make-money piece of the puzzle.

It’s the first time for me on this side of the slowly crumbling wall separating business and editorial, and it’s a whole lot of fun. It’s also a whole lot of hard work, made harder by the fluid and fraught state of our industry, the changing habits of news consumers, and the very fact that we are indeed a startup, with limited means and constant choices between the possible and the practical. There’s an extra dose of awareness in the Worldcrunch pit of what we can call the Miles Davis rule of business: what you choose not to do matters just as much.

I know that folks in much bigger news organizations are grappling with some of the same choices and pressures. Stakes are high, budgets tight, time short.

Over the next three posts, I will try to unpack how it looks from my particular vantage point in a miniseries I will presumptuously call The False (And True) Dilemmas In Finding Your News Industry Business Model — laying out some of the big opposing choices our fledgling news company faces: B2B vs. B2C; paid vs. free; content quality vs. content quantity. Of course, as the title suggests, there’s plenty of spillover within and across these apparent alternatives, not to mention over that once mighty wall. So let’s talk business!

Until two years ago, B2B (meaning business-to-business) didn’t mean much more to me than those yellow pages that didn’t arrive at my home — or office.

The first time I met Irène for coffee to tell her about the project (mostly the product, as it were), she took it all in, asked lots of questions…and had this to offer as her first full affirmative sentence: “B…to…B.”

She had both bought and sold digital content at previous jobs, and she knew that — particularly online — it is still easier to get a business, rather than the equivalent individual customers, to actually pay for something. It would be, Irène explained, B2B2C…business-to-business-to-customer/client.

The alternative path is to simply aim straight for the C. Be a destination site, build your brand, reach the consumer! Soon enough, we realized that this is still the big prize. It not only plays to our own sense of mission (and vanity), but also just seems like more fun: pursue your vision rather than sniff out the market and be forced to react to the needs of intermediary third parties. If we post it, they will come.

Anyway, those were — and are — the two choices of this true-and-false dilemma. And people (including potential investors) continuously ask whether we will be B2B or B2C. Timidly at first, and now with confidence, we say both. The B2B is a way to find revenue in the short term, while the brand grows and the B2C model reveals itself to us and the industry as a whole.

Of course, pursuing both is neither easy nor simple. Indeed, it’s the implicit tension in many of the week-in, week-out choices we have to make, on both the business and editorial sides. A decision to focus on selling Article X to third parties may force us to forgo investing in some technical or human tools that could help us push the same article directly to readers. It also finds its way into our editorial choices: We might actually end up producing Article X rather than Article Y, which would have been better suited for our own site and our search for readers.

Major news organizations are also weighing the dilemma, as the old models fade and new opportunities present themselves. Reuters, traditionally a B2B and B2B2C outfit, wants to become more and more a direct source of news for consumers. The New York Times, which has always banked on readers reaching directly for the brand, is committed to its global syndication business as a steady source of additional revenue.

Nieman Lab columnist and news business guru Ken Doctor recently encouraged media companies to pursue lots of little golden eggs of revenue, now that the historical twin rivers of advertising and circulation are turning to smaller streams in a much more crowded information ecosystem.

Two years after that first meeting with Irène, and one year after going live on the website, we have built a nice little chunk of loyal readers who offer a mini egg of ad revenue — which alone will clearly not be enough to build a business around. So we’ve just begun to sell our stuff to other media outlets. The potential syndication revenue is real, though it may ultimately be more modest than we had once hoped. It also must be built one deal at a time, which requires run-up time and investment in sales operations, and/or working with third-party syndication services that take a major cut.

Where things stand today, in a world of digital pennies, there are clearly still some nice dollars to be made in print licensing. But what is equally clear is that the future is indeed digital. So your legwork and brainwork — your strategy — should be geared toward that future.

Being a digital-only venture, we don’t have the monstrous dilemma of weighing what to do about our own print product. And yet we are (in a number of ways) part of the bifurcated world of the mainstream media. How important are those print dollars we can gather now? How long will print even be around? Should we focus instead on emerging digital syndication possibilities? Or should we move away, as much as possible, from B2B altogether?

As a Paris-based, French/American company looking to get our business rolling, we’ve had the good fortune of getting introduced recently to a successful U.S.-based French entrepreneur, Thibaut de Robien, who wants to work with us on both business development and our overall strategy. He is now busy pounding the pavement stateside and joining Irène and me (via Skype) on the hard thinking about these tricky choices we face.

One of the many strengths Thibaut brings is the very fact that he comes from another industry: online gaming. Beyond his know-how around digital user experience, the battle scars from a decade of actually finding a new business model for an existing product are a great asset for a news startup.

B2B or B2C — for us, for now, it’s both. It’s what one smart businessman said he likes to call “holding up some gas stations on the way to robbing the bank.” That’s the startup’s outlaw spirit we like. But building a new company from scratch is also about being ferociously pragmatic — which we might also call preparing to thrive tomorrow by perfecting the art of surviving today.

Photo by Jeremy Brooks used under a Creative Commons license.

Reverse engineering Chinese censorship: When and why are controversial tweets deleted?

Posted: 30 May 2012 10:35 AM PDT

A 404 error on Sina Weibo

Censoring the Chinese Internet must be exhausting work, like trying to stem the flow of a fire hose with your thumb. Sina Weibo, a popular Twitter-like service, says its 300 million registered users post more than 100 million weibos, or tweet-like posts, a day. (In Chinese, weibo means microblog or microblog post.)

And of course the entire Chinese Internet isn’t as censored as some might think. So why are some tweets deleted and not others? Which topics are seen as the biggest threat to harmony?

Chi-Chu Tschang wants to unwrap the black box. Tschang, an MBA student at MIT’s Sloan School, is a former China-based correspondent for BusinessWeek and a student in Ethan Zuckerman’s class this semester, “News in the Age of Participatory Media.” For his final project, Tschang built on data collected on thousands of deleted weibos in China to look for answers. (I summarized some other interesting ideas from students in a previous post.)

“We know that certain topics are censored from blogs hosted in China, Chinese search engines and Weibos,” Tschang writes in his paper. “But we don’t know where the line lies. Part of the reason is because the line is constantly moving.”

Tschang drew on the work of researchers at the University of Hong Kong’s Journalism and Media Studies Center. Cedric Sam and King-wa Fu helped build WeiboScope, which visualizes the most popular content on Sina Weibo in something close to real time. On top of that app, they built WeiboScope Search, which includes deleted weibos — more than 12,000 since Feb. 1 — in its huge archive.

Using the data visualization software Tableau, Tschang plotted those deleted weibos on a timeline, then superimposed politically sensitive events to provide context.

Chi-Chu Tschang's timeline of censored weibos
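Tschang worked in Tableau, but the same timeline is easy to reproduce. Here is a minimal sketch in Python with pandas and matplotlib, assuming a hypothetical CSV export of the WeiboScope Search data with a deleted_at timestamp column (the filename and columns are placeholders):

# Sketch: chart daily counts of deleted weibos and superimpose sensitive
# events, per Tschang's Tableau timeline. The CSV and its columns are
# hypothetical placeholders for a WeiboScope Search export.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("deleted_weibos.csv", parse_dates=["deleted_at"])
daily = df.set_index("deleted_at").resample("D").size()

fig, ax = plt.subplots(figsize=(10, 4))
daily.plot(ax=ax)

# Event dates taken from the post itself.
events = {"2012-03-08": "Bo Xilai rumors spread", "2012-03-15": "Bo sacked"}
for date, label in events.items():
    ax.axvline(pd.Timestamp(date), linestyle="--", color="red")
    ax.annotate(label, (pd.Timestamp(date), daily.max()), rotation=90, fontsize=8)

ax.set_ylabel("Deleted weibos per day")
plt.tight_layout()
plt.show()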

The day that saw the highest volume of deletions, in a dataset covering Feb. 1 to May 20, was March 8: the day rumors of Bo Xilai’s fall from power began to spread. Bo was a high-ranking party secretary who was under scrutiny for, among other things, his tremendous apparent wealth. Bo’s son, studying here at Harvard, attracted a lot of attention when he reportedly picked up Jon Huntsman’s daughter in a red Ferrari for a date.

The second-busiest censorship day was March 15, the day Bo was sacked.

Here’s one more interesting data point: On March 18, word spread of a deadly car accident involving a Ferrari (a black one, not a red one). Nearly all information about the crash disappeared from the Internet, fueling speculation about who was involved. Even the word “Ferrari” was censored. Tschang observed moderate deletion activity that day on Sina Weibo.

There is one day of missing data: April 22, the day civil-rights activist Chen Guangcheng escaped from his house arrest in Shandong. Why? An error message dated April 23, the day after, reports “load problems” that temporarily disabled data collection — disappointing timing. It could be that the Chinese Weibosphere was so jammed on that momentous day that the servers were crashing. Or it could be something else entirely. (Reader Samuel Wade notes that news of Chen’s escape was not widely known until days later.)

Tschang crunched the raw data and generated a word cloud, to see which terms in deleted weibos appear most often.

Top 73 censored words from Weibo

Word clouds, though pretty, don’t provide a whole lot of context. Tschang said he wants to examine the list more carefully, filtering out words like the Chinese equivalents of “RT” and “ha ha.” He also wants to examine the relationships of the 3,500 most censored Weibo users, creating, I don’t know, a Klout for civil disobedience?
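That filtering step is simple once the posts are tokenized. A sketch of the stoplist counting Tschang describes, with a placeholder stoplist (the Chinese tokens below are the repost marker and “haha”):

# Sketch: tally terms across deleted weibos, skipping conversational chatter.
# Assumes the weibos have already been tokenized into lists of words.
from collections import Counter

STOPLIST = {"RT", "转发", "哈哈"}  # "RT", the repost marker, and "haha"

def top_terms(tokenized_weibos, n=73):
    counts = Counter()
    for tokens in tokenized_weibos:
        counts.update(t for t in tokens if t not in STOPLIST)
    return counts.most_common(n)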

Tschang’s hypothesis — that Sina Weibo deletions correlate highly with spikes in media coverage of sensitive stories — is consistent with the findings of a similar study from researchers at Carnegie Mellon University, who evaluated 56 million weibos, of which about 16 percent were deleted.

Those researchers found some key words were far more likely to get a weibo deleted: Ministry of Truth, Falun Gong, Ai Weiwei, Playboy, to name a few. “By revealing the variation that occurs in censorship both in response to current events and in different geographical areas,” the researchers wrote, “this work has the potential to actively monitor the state of social media censorship in China as it dynamically changes over time.”

Finally, Tschang measured how long censored weibos survived before being removed. He wrote:

The fastest a post was deleted on Sina Weibo was just over 4 minutes. The longest time it took for the censor to get around deleting a message on Sina Weibo was over four months. For the posts created on May 20, 2012 and deleted on the same day, it took on average 11 hours for Weibo Scope Search to detect the deletion.
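Given the same hypothetical export with created_at and deleted_at columns, those survival times fall out of a single subtraction; a sketch:

# Sketch: how long did censored posts survive? Column names are hypothetical.
import pandas as pd

df = pd.read_csv("deleted_weibos.csv", parse_dates=["created_at", "deleted_at"])
lifetime = df["deleted_at"] - df["created_at"]

print("Fastest deletion:", lifetime.min())    # the post reports just over 4 minutes
print("Slowest deletion:", lifetime.max())    # the post reports over four months
print("Median lifetime: ", lifetime.median())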

Tschang said he suspects some weibos get deleted months later because they are about topics that suddenly re-surface in Chinese media.

Tschang even tried posting spare, scandalous messages to his own Sina Weibo account, just to see what would happen.

  • Chen Guangcheng
  • Bo Xilai
  • Taiwanese independence

Here’s Tschang:

Less than 14 hours later, I received a message from Sina Weibo’s system administrator informing me that my two posts on “Chen Guangcheng” were “inappropriate” and had been censored. While I can still see the two “Chen Guangcheng” posts on my Sina Weibo account page, no one else can. Surprisingly, my posts on “Bo Xilai” and “Taiwan independence” were not censored.

One caveat: Tschang cannot be 100 percent sure that a deleted weibo wasn’t deleted by its creator, rather than Sina’s “monitoring editors.” But Sina Weibo’s API makes a helpful distinction in the way it returns data for deleted weibos. The error message for a non-existent weibo comes back as either “Weibo does not exist” or “Permission denied.” So one could assume, as do Tschang and the HKU researchers, that “permission denied” equals “censored.”
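That inference reduces to a string check on whatever error message the API returns for a missing post. A minimal sketch (the fetching step is omitted, since it depends on API access; the two messages are the ones described above):

# Sketch: classify why a weibo disappeared, based on the API's error message.
def classify_missing(error_message: str) -> str:
    if "permission denied" in error_message.lower():
        return "censored"        # assumed: deleted by Sina's monitoring editors
    if "does not exist" in error_message.lower():
        return "self-deleted"    # removed by its author, or it never existed
    return "unknown"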

And the best time to weibo something politically sensitive in China? After 11 o’clock on a Friday night, according to the data.

“Interestingly, deletion of Sina Weibo messages tend to hit a low on Saturdays,” Tschang wrote. “I’m not too sure why that is, except that maybe censors want to take time off on weekends as well.”

What is it that journalists do? It can’t be reduced to just one thing

Posted: 30 May 2012 07:30 AM PDT

There’s a craving in the air for a definitive statement on what journalism is, something to rally around as everything changes. But I want to do the opposite. I want to explode journalism, to break it apart into its atomic acts. I’m beginning to suspect that taking it apart is the only way we can put it all back together again.

In the endless debate about what the “future of journalism” holds, “journalism” doesn’t have a very clear meaning. We’re in the midst of hot arguments over who is a journalist, whether social media is journalism, whether data is journalism, and whether cherished tenets like objectivity are necessary for journalism. As the print advertising model that funded the bulk of working journalists collapses and forces transformation, it’s pressing to know what is worth preserving, or building anew.

After decades where “journalism is what journalists do” was good enough, there is suddenly a bloom of definitions. Some claim that “original reporting” is the core, deliberately excluding curation, aggregation, and analysis. Others say “investigative reporting” is the thing that counts, while a recent FCC report uses the term “accountability journalism” liberally. These are all efforts to define some key journalistic act, some central thing we can rally around.

I don’t think I could tell you what the true core of journalism is. But I think I have a pretty good idea of what journalists actually do. It’s a lot of things, all of them valuable, none of them the exclusive province of the professional. Journalists go to the scene and write or narrate or shoot what is happening. They do months-long investigations and publish stories that hold power accountable. They ask pointed questions of authorities. They read public records and bring obscure but relevant facts to light. All of this is very traditional, very comfortable newswork.

But journalists do all sorts of other things too. They use their powerful communication channels to bring attention to issues that they didn’t, themselves, first report. They curate and filter the noise of the Internet. They assemble all of the relevant articles in one place. They explain complicated subjects. They liveblog. They retweet the revolution. And even in the age of the Internet, there is value to being nothing more than a reliable conduit for bits; just pointing a camera at the news — and keeping it live no matter what — is an important journalistic act.

There’s more. Journalists verify facts and set the record straight when politicians spin. (You’d think this would be uncontroversial among journalists, but it’s not.) They provide a place for public discussion, or moderate such a place. And even though magazine journalism can be of a very different kind, like Hunter S. Thompson writing for The Atlantic, we still call it journalism. Meanwhile, newspaper journalists write an enormous number of interpretive pieces, a much larger fraction than is normally appreciated. The stereotypical “what just happened” report has become less and less common throughout the last 100 years, and fully 40 percent of front page stories are now analytical or interpretive, according to an excellent piece of forthcoming research. And, of course, there are the data journalists to cope with the huge rise in the availability and value of data.

Can we really say which of these is the “true” journalism?

I think it depends hugely on the context. If some important aspect of the present has never been represented anywhere else, then yes, original reporting is the key. But maybe what the public needs is already in a document somewhere, and just posting a link to it on a widely viewed channel is all that is needed. At the other end of the spectrum, verifying the most basic, on-the-ground facts can be challenge enough. I saw the process that the AP went through to confirm Gadhafi’s death, and it was a tricky undertaking in the middle of a conflict zone. In other cases, the missing piece might not require any new reporting at all, just a brilliant summary that pulls together all the loose threads.

There are a lot of different roles to play in the digital public sphere. A journalist might step into any or all of these roles. So might anyone else, as we are gradually figuring out.

But this, this broad view of all of the various important things that a journalist might do, this is not how the profession sees itself. And it’s not how newsrooms are built. “I’ll do a story” is a marvelous hammer, but it often leads to enormous duplication of effort and doesn’t necessarily best serve the user. Meanwhile, all the boundaries are in flux. Sources can reach the audience directly, and what we used to call “technology” companies now do many of the things above. Couple this with the massive, beautiful surge of participatory media creation, and it’s no longer clear where to draw the lines.

But that’s okay. Even now, news organizations do a huge number of different things, a sort of package service. Tomorrow, that might be a different package. Each of the acts that make up journalism might best be done inside or outside the newsroom, by professionals or amateurs or partners or specialists. It all depends upon the economics of the ecosystem and, ultimately, the needs of the users. Journalism is many good things, but it’s going to be a different set of good things in each time, place, and circumstance.

Photo by Niclas used under a Creative Commons license.

Wednesday, May 30, 2012

Nieman Journalism Lab



3 new ideas on the future of news from MIT Media Lab students

Posted: 29 May 2012 10:50 AM PDT

Keys on computer keyboard spelling "geek"

Ethan Zuckerman of the MIT Center for Civic Media taught a class this semester tailor-made for Nieman Lab readers: “News in the Age of Participatory Media.” The hook: What happens if you treat journalism as an engineering problem, bringing together the efforts of journalists and computer scientists?

The course’s final class last week featured a lot of bright students presenting their final projects, each of which was supposed to be a new tool, technique, or technology for reporting the news. (They were in various stages of completion.) I’ll be breaking out a few of the good ideas in future posts, but here are some of the ones that stood out to me.

Modernizing the hyperlink

The <a> tag hasn’t changed much since Tim Berners-Lee proposed it 20 years ago. Hyperlinks are the fiber of the web. But Neha Narula, a Ph.D. student in computer science at MIT, finds herself frustrated with writers who abuse them. Blog posts littered with too many links lead to “cognitive overload,” she says. “As I explored this topic a little more,” she said, “I found what I was annoyed with was not linking too much but not linking well.” If Google is mentioned in copy, does Google have to be linked to the Google home page? Does the same link need to appear multiple times in one story?

Narula proposed the use of microformats and the little-known rev attribute to attach semantic meaning to links, allowing browsers to handle different kinds of links differently. (rev is supposed to represent a reverse link. All major browsers, when faced with a rev attribute now, just ignore it. It’s like a cousin to rel.)

For example, a link to a citation (dictionary definition, Wikipedia article) would get rev="bib", for bibliography. So:

<a href="http://en.wikipedia.org/en/Nieman_Foundation" rev="bib">

might lead to that link being presented not in the body copy, but at the bottom of the post, in the form of a tidy bibliography.
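Narula’s mockup did this with JavaScript and CSS in the browser. Purely as an illustration of the same idea, here is a server-side sketch in Python with BeautifulSoup that lifts rev="bib" links out of body copy and returns them as a bibliography list:

# Sketch: collect rev="bib" links for an end-of-post bibliography.
# Not Narula's code; just one way a publishing tool could honor the markup.
from bs4 import BeautifulSoup

def extract_bibliography(html: str):
    soup = BeautifulSoup(html, "html.parser")
    bib = []
    for a in soup.find_all("a", attrs={"rev": "bib"}):
        bib.append((a.get_text(), a["href"]))
        a.unwrap()  # keep the anchor text inline, drop the link itself
    return str(soup), bib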

She also proposes rev="reaction", which would clearly call out the original post an article is responding to; and rev="object" for links to people and companies, which would facilitate an index for all of the proper nouns in a piece.

Perhaps most intriguing was rev="set" for a series of links, to avoid awkwardness when linking to (for example) this series of Lab articles on the hyperlinking debate. She mocked up a little bit of JavaScript and CSS to show how it could look. (Hover over “Twitter users to follow” or “BBC linking policy.” You can also see mockups of the object, reaction, and bib attributes there.)

Oh, and the biggest crowd pleaser was a feature you may love or hate: a button that toggles off all links in a document for distraction-free (or, er, context-free) reading. (Try it on this article!)

Others have proposed approaches to adding metadata to links, from nofollow to syndication-source to standout to FOAF. Zuckerman suggested Narula create WordPress and Drupal plugins to encourage adoption. Getting the rest of the web on board would be a tall order.

Searching for correlations in a haystack

Eugene Wu, a computer science graduate student at MIT, demonstrated a suite of tools called DBTruck that makes data comparison a snap. Enter the URL of a CSV file, JSON data, or an HTML table and DBTruck will clean up the data and import it to a local database. Normally you might go to a web page like this, select and copy the table, paste it into an Excel spreadsheet, then spend 15 minutes trying to fix the misplaced cells and formatting issues. DBTruck is automated and fast.
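As a rough illustration of that import step (not DBTruck’s actual code), pandas can do the fetch-and-load in a few lines; the cleanup DBTruck performs is skipped here:

# Sketch: pull a CSV or HTML table from a URL into a local SQLite database.
import sqlite3
import pandas as pd

def import_to_db(url: str, table_name: str, db_path: str = "dbtruck.db"):
    if url.endswith(".csv"):
        df = pd.read_csv(url)
    else:
        df = pd.read_html(url)[0]  # first table on the page
    with sqlite3.connect(db_path) as conn:
        df.to_sql(table_name, conn, if_exists="replace", index=False)
    return df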

The program allows you to geocode any field that contains address information, whether that field is “Cambridge, MA” or “Cambridge, Mass.” or “1 Francis Ave, Cambridge.” Humans have come up with many ways to represent physical locations, but geographic coordinates are unambiguous instructions for computers to map a location. When you’re dealing with disorganized datasets, getting consistency is key.
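The geocoding step could look something like this, with geopy’s Nominatim geocoder standing in for whatever service Wu’s tool actually uses:

# Sketch: normalize free-form location strings to coordinates.
from geopy.geocoders import Nominatim

geocoder = Nominatim(user_agent="dbtruck-sketch")

for place in ["Cambridge, MA", "Cambridge, Mass.", "1 Francis Ave, Cambridge"]:
    loc = geocoder.geocode(place)
    if loc:
        print(place, "->", (loc.latitude, loc.longitude))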

Wu’s tool then lets you plot arbitrary comparisons between datasets. To test the program he plugged in all kinds of datasets, just for fun. Is there a correlation between addresses of Massachusetts lottery winners and Taco Bell locations? (No.) Suicide rates and unemployment rates in New York state? (No.) Suddenly he stumbled upon a connection that made sense: Communities in New York state with high teen pregnancy rates correlated highly with low birth weights. There’s a potential story there that Wu might not have otherwise set out to write. Zuckerman advised Wu to team up with The Boston Globe to run more arbitrary comparisons and discover what local stories might be hidden in the numbers. (It also seems like a dandy add-on to the PANDA Project, which is building a platform for in-house newsroom databases.)
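The comparison itself then reduces to a join and a correlation coefficient. A sketch with hypothetical filenames and column names:

# Sketch: join two datasets on a shared region column and test for correlation.
import pandas as pd

teen_pregnancy = pd.read_csv("ny_teen_pregnancy.csv")  # columns: county, rate
birth_weight = pd.read_csv("ny_birth_weight.csv")      # columns: county, avg_weight

merged = teen_pregnancy.merge(birth_weight, on="county")
r = merged["rate"].corr(merged["avg_weight"])  # Pearson's r
print(f"correlation: {r:.2f}")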

How many Rhode Islands is that?

Nieman Fellow Paul Salopek and Knight Science Journalism Fellow/Reuters correspondent Alister Doyle have covered large-scale calamities in far-off countries for domestic audiences sometimes too busy to care. Foreign correspondents have tricks, sometimes clichés, to get people to pay attention, comparing populations and land masses to familiar American things. Write Salopek and Doyle:

Too often we just get a giant number — the U.S. debt is $15 trillion, Chinese greenhouse gases are the highest in the world at 7 billion tonnes a year, Americans spend $8 billion a year on cosmetics, etc. Is there some way of helping to put these statistics — huge to the point of meaningless — into an understandable, human framework?

They propose something like a currency converter that turns impossibly big numbers into more qualitative terms. Great for a correspondent on a deadline.

If it's an economics story, what does your share of debts or GDP represent? A new car? A house? How many vacations? How many pizzas? How would it be, for instance, if everyone had the debts of the average Greek citizen? (awful, in most countries). How would global warming be if everyone emitted greenhouse gases at the rate of an Indian? (much better). The U.S. debt works out at about $50,000 a person — what can you buy with that?

The site would be user-maintained, like Wikipedia, and powered by real datasets. All statistics would require citations. It’s just an idea at this point, but a website like this is very buildable. (Anyone want to try it? Leave a comment below.) Salopek and Doyle offer a dizzying number of potential cross-discipline conversion units. How about Ayns, a unit of measure for how friendly a government is to corporations, named for Ayn Rand? Or the Obama Gap, a measure of the difference between a leader’s domestic and foreign approval ratings? Or Jolies, a unit of a country’s developmental aid as proportional to the amount of attention it has received from Angelina Jolie? (The Economist’s long-running Big Mac index is of similar spirit.)
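The conversion logic itself would be trivial; the hard part is curating cited reference values. A toy sketch, with placeholder numbers (the per-capita debt figure comes from the excerpt above):

# Sketch: a "currency converter" for incomprehensibly large numbers.
# Reference values are illustrative placeholders, not vetted statistics.
UNITS = {
    "Rhode Islands (sq. miles)": 1_212,
    "pizzas ($)": 10,
    "new cars ($)": 30_000,
}

def humanize(value: float, unit: str) -> str:
    return f"{value / UNITS[unit]:,.0f} {unit.split(' (')[0]}"

print(humanize(50_000, "pizzas ($)"))  # the ~$50,000 per-capita U.S. debt
# -> "5,000 pizzas"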

Along with the three projects mentioned above, a couple others caught my eye: Nathan Matias’s Data Forager, which slurps up all the Twitter handles mentioned on a webpage and builds a Twitter list that follows those people, and Arlene Ducao’s OpenIR, a much larger project that overlays multiple layers of satellite imagery on a map.

To paraphrase Zuckerman, I hope these ideas earn at least 40 nanoKardashians of your attention today.

Photo by Solo used under a Creative Commons license.

Dan Kennedy: How news executives can fend off the Wolff at their door

Posted: 29 May 2012 08:30 AM PDT

Facebook’s disappointing IPO may be indicative of a larger problem: the declining value of online advertising, an inexorable force that will eventually destroy not just Facebook, but the web itself.

Sound nuts? Well, that’s the thesis put forth by media critic Michael Wolff in a piece he wrote for Technology Review headlined “The Facebook Fallacy.” Wolff has made his reputation as a provocateur, and his analyses often straddle a fine line between brilliant and crazy. Some might also consider him to be part of the problem facing news organizations, as his Newser.com site practices an unusually in-your-face form of aggregation.

But if Wolff has overstated the case, he nevertheless may be on to something. Indeed, his Tech Review screed carries the endorsement of Doc Searls, a respected thinker about online media and advertising.

I’ll get to what I think this means for journalism in a moment. First, though, a few words about Wolff’s argument.

Essentially, Wolff is expanding on something that we already know: the value of web advertising is low and getting lower, even as it keeps expanding. He writes:

The daily and stubborn reality for everybody building businesses on the strength of web advertising is that the value of digital ads decreases every quarter, a consequence of their simultaneous ineffectiveness and efficiency. The nature of people’s behavior on the web and of how they interact with advertising, as well as the character of those ads themselves and their inability to command real attention, has meant a marked decline in advertising’s impact.

Wolff’s insight — again, not particularly original except for his eagerness to pick it up and run with it — is that Facebook is just another website. According to Wolff, Facebook’s current revenues are unlikely to grow all that much. And the situation is only getting worse as mobile becomes a bigger part of the mix, since Facebook has said it doesn’t know how to make money there.

The only company making money from online advertising, Wolff argues, is Google, because it’s the middleman of last resort. Essentially Google is the company that finds itself in the enviable position of selling Levi’s and pickaxes to the hapless gold miners.

People in the news business have been sweating out the reality Wolff describes for some time. How will we pay for the journalism we need? The answer to that is far from clear — and, of course, it’s also far from clear that the public is even willing to pay for what we’re selling, either directly or indirectly.

But here are four partial answers that get us beyond the conundrum of relying on Internet advertising whose quantity keeps expanding but whose value keeps shrinking.

Newspapers should keep printing.

Print is doomed — if not in the short term, then certainly in the medium and long term. But that doesn’t mean it’s going away entirely. It’s just too valuable, as online advertising is worth only a small fraction of its print counterpart.

The most recent example of a newspaper company trying to adjust to that reality is the New Orleans Times-Picayune, which is cutting back its print edition to three days a week. The idea is to squeeze seven days’ worth of print advertising into three, saving money on printing and distribution costs while holding onto the most lucrative part of its revenue stream.

As Nieman Lab columnist Ken Doctor notes, if the Times-Picayune move is successful, then the paper will keep some 80 percent of its print advertising revenues while saving a lot of money. By contrast, dumping print entirely would be disastrous.

Move to the flat-fee ad model.

With regard to online advertising, Wolff describes a never-ending spiral to the bottom, as prices for CPM advertising (that is, the cost per thousand impressions) keep dropping. At a $2 CPM, for example, 100,000 impressions earn a site just $200.

News sites are getting killed by this model, which is based on the notion that advertisers should pay only for the number of times someone sees their ads, with a premium if someone clicks through. (Reality check: No one clicks on ads. No one. Not. Ever.)

Some successful sites have moved away from this model, simply charging a flat fee — what might be called the sponsorship model. One of those is The Batavian, a for-profit community site in western New York. Publisher Howard Owens runs all of his 100-plus ads on the home page, rotating them from bottom to top throughout the week.

As Lisa Williams, founder of Watertown’s late, lamented community blog H2otown, and now the head of a venture called Placeblogger, told me: “I think a lot of people will buy a sponsorship on a local blog for the same reason that they put their name on the back of Little League shirts.”

Expand nonprofit journalism.

Local foundations, community institutions, and wealthy philanthropists, not to mention readers, are contributing to nonprofit local news sites such as the New Haven Independent, Voice of San Diego, MinnPost, and the Texas Tribune just as they do to public television and radio stations.

Unfortunately, the nonprofit news movement has failed to take off, in part because the IRS has held up applications for new nonprofits as the agency ponders whether journalism is an activity that deserves such status. That’s one reason the Chicago News Cooperative died earlier this year.

Although there’s an argument to be made that IRS officials simply could (and should) change their minds, it’s possible that we need legislation explicitly recognizing journalism as an activity covered by the regulations governing nonprofits. Maryland Sen. Ben Cardin proposed such a bill several years ago, but it hasn’t gone anywhere.

Experiment with flexible paywalls.

There’s much that has been said and written about paywalls, so I won’t belabor the point. But smart, flexible systems such as those put in place by The New York Times and The Boston Globe, which allow for free sharing via blogs and social media, are worth exploring, even if they never amount to more than a minor revenue stream.

Michael Wolff’s analysis of Facebook’s problems, and of any website depending on advertising, should be must reading for news executives. Perhaps they won’t learn anything they don’t already know. But it might give some of them the impetus to stop pursuing strategies that are bound to fail, and instead to seek new ways of paying for the news.

Dan Kennedy is an assistant professor of journalism at Northeastern University and a panelist on Beat the Press, a weekly media program on WGBH-TV Boston. His blog, Media Nation, is online at www.dankennedy.net. His book on The New Haven Independent and other community news sites, The Wired City, will be published by the University of Massachusetts Press in 2013.

Photo by Scott Beale/Laughing Squid used under a Creative Commons license.

Alan Rusbridger on The Guardian’s open journalism, paywalls, and why they’re pre-planning more of the newspaper

Posted: 29 May 2012 06:41 AM PDT

Alan Rusbridger is a busy man on two sides of the Atlantic. The editor of the Guardian seems to be everywhere, writing, tweeting, and leading the paper’s ongoing coverage of the British phone hacking scandal that continues to pick off executives and editors of Rupert Murdoch’s News International. Meanwhile, on this side of the water, he’s directing the establishment of The Guardian’s New York-based operation, where they hope to claim a foothold in the US media market through an aggressive online-only play.

The thing that connects the two ends of The Guardian’s franchise is a full embrace of new technology and the opportunities it provides for reaching readers and producing more impactful journalism. I had a chance to talk with Rusbridger during a recent trip to the U.S., when he came to Harvard to accept the Goldsmith Career Award for Excellence in Journalism at the Shorenstein Center. We spoke about open journalism and how it’s changing the newspaper’s report; Rusbridger also talked about how The Guardian is altering the production of its print paper to adjust to evening reading, and why he doesn’t see a paywall in the near future of his paper. Here’s an edited transcript of our conversation.

Justin Ellis: What I want to talk about is the concept of open journalism, something you’ve obviously talked about a lot and something the Guardian has made a very big push for. Explain what the concept of open journalism means to you.
Alan Rusbridger: The simplest way I explain it is to think of the theater critic. The Guardian’s got a wonderful theater critic who’s been doing the job for 40 years, and no editor I can think of in his right mind would get rid of Michael Billington or not have a theater critic. If you asked the question, “What about the 900 other people in the audience next door to Michael?” — is it conceivable no one else in the audience has an interesting opinion that could add to your understanding?

Editorially, it is generally better to try and harness multiple views. So then, if you accept that, then I think there are only two questions. One is how do you sort interesting people from uninteresting people, and how do you sort people of particular interests from other interests? And that’s something which is not unique to newspapers. Many, many people are trying to crack that nut in an age of overabundance of information.

And then the question is: If that’s true for theater criticism, is that true for other areas a journalist can cover? Is it true of war reporting and reporting on science and fashion? Nearly always, and I would say always in our experience, the answer is yes it is true…Go back to the Billington example, the theater critic. Imagine you answered no to that question and said actually we are going to back our man against the rest of the web. Somebody else will do that if we don’t do that. So therefore you are allowing somebody else to come into your field. Commercially, it seems to me, that’s a very foolish step to take, as well as it is wrong.

Then what you’re doing, particularly if you want to put a paywall around your theater critic, you’re inviting the public to choose between somebody who may well produce a very good account of that play over its entire run, versus the expert voice one night. So you have to be really, really confident your expert voice is worth a multiple of free voices, if what you want to do is create a model that’s actually a 19th-, 20th-century model, where you’re going to insist your content is worth paying for.


Ellis: What’s your take on the wave of paywalls being tried here in the U.S.? Do you think it’s a way to help shore up revenues? Do you think it limits access?
Rusbridger: From the point of view of The Guardian, a wall that separated our content from the readers — the people who want to contribute and the people who want to have access to it — I think that would be a wrong turning for us. The New York Times model, I think, is more interesting because it’s so porous. So if you believe what I believe about being open, a paywall that succeeds in getting revenue as well as being open is a more interesting model, obviously.

We charge — we charge for mobile, we charge for iPads. It’s not that we’re against payment altogether. But at the moment, when we’ve crunched the numbers, we don’t think that the revenues we would get from a paywall would justify making that the main focus of our efforts right now. I’m not a sort of anti-paywall fundamentalist — it just doesn’t seem the most interesting thing to be doing at the moment.

Ellis: You guys made a big splash with the Three Little Pigs commercial. Why did you feel it necessary to make the case for open journalism directly to the public in that way?
Rusbridger: First of all, the industry is changing so fast — my worry is that the reader is going to be left behind. I think what we’re trying to say with that advert is that The Guardian is moving beyond a newspaper. It’s something which is a different idea of journalism — it’s something which involves others and is responsive to others.

It’s a sort of statement about journalism itself: We’ve moved from an era in which a reporter writes a story and goes home and that’s the story written. I think that we’re living in the world at the moment where the moment you press send on your story, the responses start coming in. And so I think journalists have to work out what to do about those responses: How do you incorporate those responses? And in this world, in which as a news reporter you’re going to — if you go along with open journalism — you’re going to be open to other sources, other than what can be created in your own newsroom, you’re going to incorporate those responses. The Three Little Pigs was an attempt at explaining the benefits of open journalism to the reader — that you get a more complete version of the truth — and to explain to them this idea of a newspaper company is changing very, very fast.

Ellis: I remember in the announcement last year about going digital first one of the ideas was creating some sort of evening product — trying to follow the trend of people shifting reading time to evenings, when they’re reading on devices or maybe looking for something in print. Have you guys gone anywhere with that?
Rusbridger: With 90 percent of stories, you have a vague familiarity with the paper, because you can’t avoid news now — it’s ambient, it’s everywhere. So why were we still feeding a product to people who might be reading in the evening, which was now 36 hours old? Wouldn’t they rather be helped to understand the context of it?

So we’ve changed the paper and we had really interesting discussions with the Schibsted group in Norway, who’ve been pioneering this with their Swedish flagship paper. They went extremely radical: They have a daily paper which they now plan 50 percent of it 7 days in advance. When we first heard, we thought that’s ridiculous — how could you do a daily paper and have half of it planned? It comes back to how you think of news. The Scandinavians said, well, actually, most news is kind of predictable. There are profiles, pieces about the economy and the Middle East or what China’s doing in Africa. There are so many stories you could do at any point in time, and what newspapers tend to do — to be concise, the way we all grew up — was to leave everything till the last minute, and then between 4 o’clock and 10 o’clock in the evening, make a paper. So you have this huge down period at the beginning of the day and then this sort of crazy period for 6 hours.

We haven’t done 50 percent — we’re aiming for 30 percent of content pre-planned. It helps you even out production, it saves on costs — which we have to do — and it produces a paper which is more effective, more analytical, and helps you explain — because you then have to explain — to the readers that doesn’t mean we’re bailing out of news. On these devices is where you’ll find the news. So if it’s not in the paper, it doesn’t mean we’re not doing it.

Ellis: What about the idea of journalists not taking it into their own hands to break stories on Twitter, or only linking to stories in your own publication — what do you think about that? It seems to touch on the idea of open journalism and sharing.
Rusbridger: We don’t break everything on the web, and sometimes we hold things back in print. I think common sense tells you you don’t rush to break exclusive properties on the web without talking to your desk editor. But the notion of — you’re covering a sports event or trial which everyone is going to break, the window of exclusiveness may be a minute at the most. Writing down a policy that says you must file to The Guardian because The Guardian must be several seconds ahead? You know, I think Twitter is the place where those kind of stories are broken.

If there are ten reporters in court or at a football match, the notion it has to come by The Guardian production system, somebody has to then edit it and then publish it on The Guardian, I can’t see what the value of that is over doing it on Twitter. In the eight minutes that it takes to do that, the story’s going to be out. As to linking to others, I think it’s a sort of good and generous thing to do. Years ago, we got over the hangup on The Guardian site — we wouldn’t link to others. If somebody’s done a good account of a story, and you can save the next three hours rewriting it, why not just point to it or link to it? I’ve got no problem with Guardian reporters saying “Interesting piece in the Telegraph today.” It makes them look like more rounded people, not simply as though they are extensions of the press office pushing out Guardian content.

Ellis: It seems to me there’s two things at play for most news organizations when we talk about being open: one being having people onboard who believe in it as a philosophy and the other the tools you have available. How important is having a culture that buys into open journalism, and what role do tools play in being able to do it effectively?
Rusbridger: It is obviously important to get buy-in. I think we’re there. Not everybody is there in any newsroom — you’re going to have skeptics, you’re going to have people who say “show me the money and then I’ll believe in it.” No one can get 100 percent. As I said, if you went around the paper you would find enough people who are saying this not because they hear me saying it — they’re saying it because they genuinely believe it’s better. That hasn’t happened overnight. It’s happened because the editorial and commercial leadership believe in it. It’s happened because we’ve been evangelists for getting people onto social media platforms. Very early on, I instructed my senior editors to sign up to Facebook at a time when other people were saying you mustn’t use Facebook in the office. I said no, you must use Facebook in the office.

You know, there are sort of big things like the Facebook app we built recently, which says a very powerful thing: that we’re not hung up on all the content being on The Guardian site. It can sit on Facebook as easily as it can on The Guardian. We built an open API so that people who want to do things with our content will find it easier to do. We begin every day in a morning conference, which is open to everybody, with a little five-minute slot where people come in and talk about particular projects, talk about a particular thread, or they’ll talk about metrics or SEO.

Or, to move onto the question of tools, they’ll talk about the tools they use. We haven’t got the best tools in the world, but we’ll use something like Storify or Audioboo, we’ll use other people’s tools if they can help us tell stories more effectively. We’ve spent time with Facebook recently. Google is extremely interested in working with us because we’re very easy to work with. The open API means they can take our content and they can play around with it. That seems to me a pretty desirable place to be in: You’ve got the most successful new media players actively wanting to talk with you and play with you.

Saturday, May 26, 2012

Nieman Journalism Lab



How a New York Times developer reverse engineered @Horse_ebooks

Posted: 25 May 2012 11:39 AM PDT

Horse with jaunty gallop

Jacob Harris, a senior software architect at The New York Times, shares my obsession with @Horse_ebooks, the wise and mysterious Twitter spambot. @Horse_ebooks tweets nonsensical phrases, apparently scraped at random from the web, and sometimes includes links to spam sites. The account has become a huge hit, with 74,000 followers.

The horse is often imitated but never duplicated; the imitators are powered by the manual labor of human satirists. Like a good hacker, Harris took his obsession to the next level: He reverse-engineered the horse’s algorithm and created one for The New York Times. Behold, @nytimes_ebooks.

Like its precursor, tweets from @nytimes_ebooks are surprisingly compelling and accidentally hilarious. Harris describes in a blog post how he did it: A script crawls the New York Times RSS feed for recent stories, extracts quotes from the text (“better for ebookification,” he writes), and converts the text into a Markov chain.
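Harris describes his approach only in outline, but the text-generation core of any such bot is small. A sketch of the general technique: a word-level Markov chain trimmed to tweet length.

# Sketch: build a bigram Markov chain from source text and random-walk it
# into a tweet-sized string. This is the generic technique, not Harris's code.
import random
from collections import defaultdict

def build_chain(text):
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, max_chars=140):
    word = random.choice(list(chain))
    out = word
    while chain.get(word):
        word = random.choice(chain[word])
        if len(out) + 1 + len(word) > max_chars:
            break
        out += " " + word
    return out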

Harris has no control over the text produced by his bot, which he finds “both comforting and alarming.” The source material includes the darkest moments of the human experience. He said the project is not unlike the artwork in the Times’ 8th Avenue building, a series of mounted screens that pluck phrases from stories and flash them without context.

Unlike its precursor, every @nytimes_ebooks tweet includes a short link back to the story. And it’s nearly impossible to resist clicking to find out what inspired the nonsense.

Bravo, Jacob Harris. You probably generated enough clicks per reader to hit the NYT paywall many times over.

“There is a mystery in the Markov model of how it writes its text,” Harris told me in an email. “Like Eliza or other textual experiments, there is this ambiguity where the machine sometimes writes something poetic and new and sometimes line noise. If this were a 3-D drawing of a person, we’d be staring right at the horror of the uncanny valley, but here it’s really compelling. Why?”

These are the kinds of thoughts that Harris, a father of two young children, discovered in the “loopy predawn hours.”

“Sometimes the text reminds me of a toddler learning to talk. We like to watch it because sometimes bots say the darndest things! And sometimes because it feels like we’re watching something being born.”

Harris laid out an example:

I’m not sure if @horse_ebooks uses this attention to get clicks. I’ve never clicked on a link in its feed when they appear. But I could see how you might want to just to see where the tweet came from. For instance, here are two tweets of the same story.

Which would you click? Of course, I did this for the lulz, not the clicks, but I’d be interested to see if it has a positive effect there, given that it’s not user-friendly at all! To give you some background, The New York Times sometimes creates two headlines for an article: a print version which can be opaque and artful and a more straightforward version of the headline for mobile readers and twitter. This makes sense, because print readers can see what the article is about from its context and layout on the page, but a headline like “A Very Fine Line” would be opaque and annoying on Twitter (where it ran as “A Brooklyn Artist Free-Associates on Her Walls”).

Harris stresses this is nowhere near an official project of the Times. But this being the Nieman Lab, we try to take away lessons for the news business. @nytimes_ebooks demonstrates the joy of finding content in unexpected places, places that previously appeared to have none. Who would have thought a robot that slipped through Twitter’s spam filters would have inspired so much creativity? Content with limited value in one context can have real value in another.

It’s a great example of the hacker mindset that journalists can embrace: What is a truly new and surprising way to tell stories? Experiment often, fail fast. Harris told me he spent a few days tinkering with Markov chains and two evenings coding it, but that’s it.


Must-see TV for the weekend: Three takes on how we create, spread, and take in information

Posted: 25 May 2012 09:23 AM PDT

In the United States, we’re about to start the three-day Memorial Day weekend, which means a little more sun, a few more hot dogs, and a bit more mental space. Don’t let it go to waste! Spend some part of it listening to smart people say smart things!

The Berkman Center for Internet & Society here at Harvard has hosted a spree of folks this month talking about the kinds of subjects we’re interested in: how information gets made, how it gets shared, and how it gets consumed. First was James Gleick, talking about the ideas contained in his terrific book The Information: A History, a Theory, a Flood. Then came metaLAB’s Matthew Battles, who brought in his knowledge of the history of knowledge to talk about what it might mean to “go feral” on the Internet. And finally, earlier this week, Mike Ananny of Microsoft and Berkman spoke about the public’s right to hear and how APIs are changing media infrastructure and affecting free speech.

At some point, maybe after that second BBQ burger (extra mustard, please), take a stretch with your iPad or your laptop and have a listen to what these guys have to say. You’ll be smarter for it.

James Gleick / The Information: A History, a Theory, a Flood

James Gleick is a native New Yorker and a graduate of Harvard and the author of a half-dozen books on science, technology, and culture. His latest bestseller, translated into 20 languages, is The Information: A History, a Theory, a Flood, which the NY Times called “ambitious, illuminating, and sexily theoretical.” Whatever they meant by that. They also said “Don’t make the mistake of reading it quickly.”

Matthew Battles / Going Feral on the Net: the Qualities of Survival in a Wild, Wired World

How do we balance the empowering possibilities of the networked public sphere with the dark, unsettling, and even dangerous energies of cyberspace? Matthew Battles blends a deep-historical perspective on the internet with storytelling that reaches into its weird, uncanny depths. It’s a hybrid approach, reflecting the web’s way of landing us in a feral state—the predicament of a domestic creature forced to live by its imperfectly-rekindled instincts in a world where it is never entirely at home. The feral is a metaphor—and maybe more than just a metaphor—for thriving in cyberspace, a habitat that changes too rapidly for anyone truly to be native. This talk will weave critical and reflective discussion of online experience with a short story from Battles’ new collection, The Sovereignties of Invention.

Mike Ananny / A Public Right to Hear and Press Freedom in an Age of Networked Journalism

What does a public right to hear mean in networked environments and why does it matter? In this talk I’ll describe how a public right to hear has historically and implicitly underpinned the U.S. press’s claims to freedom and, more fundamentally, what we want democracy to be. I’ll trace how this right appears in contemporary news production, show how three networked press organizations have used Application Programming Interfaces to both depend upon and distance themselves from readers, and describe how my research program joins questions of free speech with media infrastructure design. I will argue that a contemporary public right to hear partly depends upon how the press’s technologies and practices mediate among networked actors who construct and contest what Bowker and Star (1999) call “boundary infrastructures.” It is by studying these technosocial, journalistic systems — powerful yet often invisible systems that I call “newsware” — that we might understand how a public right to hear emerges from networked, institutionally situated communication cultures like the online press.

This Week in Review: Facebook’s IPO gone bad, and New Orleans loses its daily newspaper

Posted: 25 May 2012 07:00 AM PDT

Facebook’s quick fall: A week ago, Facebook had just launched the largest, most buzzworthy initial public offering in years. And now, that IPO has already brought them a potentially massive lawsuit and a federal investigation. Aside from the whole “pocketing millions upon millions of dollars” thing, it’s been a brutal week for Facebook execs. Here’s what happened.

Facebook dominated the conversation online last week (GigaOM has a good roundup from last Friday’s IPO), and a lot of that wasn’t positive. As data from Pew’s Project for Excellence in Journalism showed, much of the chatter online, particularly on Twitter, was about Facebook as an overhyped (and overvalued) stock. Those online observers may have been more right than they knew: As reports from Reuters, Business Insider (two posts), and The Wall Street Journal detailed, Facebook was allegedly telling top investors they had overestimated their projected financial figures, all while publicly talking up their earning potential and even expanding their stock offering to the rest of us. The result, so far, has been a U.S. Securities and Exchange Commission investigation and a (potentially class-action) lawsuit from investors.

There were a number of good analyses of what went wrong — at The Guardian, Heidi Moore laid out the list of sins involved and concluded, “Facebook didn’t know how to work its own privacy settings for investors. It couldn’t figure out, essentially, who should know what.” Reuters’ Felix Salmon was more specific with his list of incompetents, declaring that the only winners in this game were the ones who didn’t play at all. The Big Picture’s Barry Ritholtz also ripped apart the debacle.

The whole scandal still leaves open the question of what Facebook should, in fact, be valued at. At Technology Review, Michael Wolff was most provocative with his assessment, arguing that Facebook is just another business inextricably reliant on a fatally flawed online advertising model: “The crash will come. And Facebook—that putative transformer of worlds, which is, in reality, only an ad-driven site—will fall with everybody else,” he wrote. Harvard’s Doc Searls echoed Wolff’s thoughts about the brokenness of Facebook’s (and the web’s) ad model, and media consultant Terry Heaton countered that the broken industry isn’t the ad-supported web, but Madison Avenue’s insistence on the status quo on that web.

Others looked more closely at the future of Facebook’s services and of the social web more generally. The Atlantic’s Alexis Madrigal wondered whether Facebook’s users would keep sharing and what would become of its native and mobile users, and ReadWriteWeb’s Dan Frommer examined the company’s four biggest risks (there’s mobile and advertising again!). There were other problems spotted: All Things D’s Peter Kafka looked at the continued decline of Facebook’s Social Reader apps, and The New York Times’ Nick Bilton contrasted Facebook and Twitter’s approaches to privacy. Tech blogger Dave Winer insisted that we can do better than Facebook, while Slate’s Farhad Manjoo contended that Facebook has improved Silicon Valley.

The end of an era for New Orleans news: The American newspaper industry absorbed another big blow this week when the New Orleans Times-Picayune announced that it would drop back from daily publication to just three days a week, a change accompanied by the creation of a new corporate entity to run the paper and heavy layoffs — possibly a third of the newsroom. The change will leave New Orleans as the largest city in the U.S. without a daily newspaper.

The news was broken by The New York Times’ David Carr, and according to the New Orleans alt-weekly Gambit, Times-Picayune employees learned of the paper’s fate through his report. (They later got this memo from the paper’s publisher.) All this came despite the fact that, as Jim Romenesko reported, the paper remains profitable. For some of the background on the paper — which is owned by Advance Publications, a division of the Newhouse publishing empire — see this post at the Columbia Journalism Review. (Advance also announced it would be doing the same thing with three of its Alabama papers, led by the Birmingham News.)

Media analyst Ken Doctor has an extremely useful analysis of what exactly Advance/Newhouse is trying to accomplish with this move, and what perils it faces. Doctor called the paper’s transition to digital a “forced march” because the paper simply isn’t ready for a digital transformation, particularly in terms of digital circulation. Others were similarly skeptical: The immediate comparison was to Advance’s 2009 transition of the daily Ann Arbor News to AnnArbor.com, and Forbes’ Micheline Maynard gave a bleak picture of what’s left of that news organization and the hole it’s left in the community.

Forbes’ John McQuaid, a former Times-Picayune reporter, described the way Advance’s web strategy has been “only lightly tethered to newsgathering,” and concluded that “with Advance, news has always been an adjunct to its vanilla branded sites, not something that is driving the internal business conversation, and it shows.” And former Wall Street Journal writer (and Times-Picayune intern) Jason Fry said he doesn’t see any reason for optimism that Advance will get the web right in this case.

Free Press’ Josh Stearns noted that while the future-of-news world has been optimistically focused on experiments to sustain quality journalism in certain hubs like San Francisco, Chicago, and New York, it needs to pay closer attention to mid-sized cities like New Orleans, where the infrastructure simply isn’t there to pick up the journalism being cut at major traditional news organizations.

What’s behind Buffett’s newspaper buy?: In last week’s review, I briefly mentioned Warren Buffett’s purchase of 63 newspapers from Media General, but some smart commentary has come out about the deal since then (along with a few other pieces I missed at the time), so it’s worth touching on again. Media analyst Ken Doctor did a sharp rundown of the deal, pointing out that the upside in Media General’s broadcast properties and the real estate that comes with the newspapers should help buffer Buffett from the inherent risk of buying a set of newspapers. Reuters’ Jack Shafer pointed out several of Buffett’s past bearish statements about newspapers, but said he’s most likely buying because he sees an undervalued asset, not for any sentimental reason.

The Columbia Journalism Review’s Justin Peters and The Washington Post’s Erik Wemple both explained why these papers might be surprisingly valuable for Buffett: While major metro dailies have taken a beating, smaller community newspapers in rural areas have weathered the digital storm fairly well so far, in part because of their monopoly on local news and the slower rates of broadband adoption there.

Former journalism professor Philip Meyer made a similar point, arguing that Buffett is the type of buyer who’s happy with the new normal of lower profit margins for newspapers: “It looks like he is betting that the slide in newspaper earning power has leveled out. The Internet has done all the damage it can, and papers still make money.” PaidContent’s Jeff John Roberts examined the economic logic of Buffett’s paywall plan, while media consultant Dan Conover said he should be open to other non-paywall-based models. Poynter’s Andrew Beaujon, meanwhile, said we may be ignoring another big reason for news org purchases like Buffett’s — they’re a platform for personal philosophies of how journalism should be done. Buffett did tell his new papers’ publishers that he would be hands-off with them, and that he expected to buy more small and mid-sized papers.

A dubious bid to outlaw anonymous comments: This bill is almost too ridiculous to merit an item, but I’ll give it a quick mention here anyway. New York lawmakers have proposed state legislation that would effectively outlaw anonymous comments — New York-based blogs and websites would have to delete anonymous posts on request unless the poster agreed to attach his or her name. Wired’s David Kravets, who brought the bill to national attention this week, pointed out that no identification would be required of those requesting such takedowns.

The bill, which was supposedly meant as a weapon against cyber-bullying and attacks against “local businesses and elected officials,” was predictably (and rightly) met with derision from scholars and those on the web. Columbia’s Tim Wu told The Guardian the bill was “an obvious first amendment violation,” and the bill was also ripped at sites like Techdirt and Animal. BetaBeat reported that some of the lawmakers involved with the bill were surprised by the blowback about it, while The Atlantic brought out a dissenting opinion, with a point/counterpoint on the value of anonymous online discourse.

Reading roundup: Plenty of other stuff going on during this week of Facebook:

— The Wall Street Journal reported on some of the ongoing struggles with AOL’s hyperlocal journalism project, Patch, breaking the news that 20 Patch employees were being laid off and that one of AOL’s major investors is trying to get Patch killed, sold, or put into a joint venture. Jeff Bercovici of Forbes said it’s going to take a lot more cost-cutting or revenue-raising to get Patch to profitability by next year, and The Washington Post’s Erik Wemple said hyperlocal journalism’s business model doesn’t have room for executives in suits.

— The New York Times’ public editor, Arthur Brisbane, will leave his position in September after two years, declining an optional third year. The Washington Post’s Erik Wemple, who broke the story, took the opportunity to criticize Brisbane’s most recent column, and Poynter’s Craig Silverman proposed five qualifications for the next public editor of the Times. Poynter also held a chat about the role of ombudsmen with Washington Post ombudsman Patrick Pexton and Reuters’ Jack Shafer.

— This week in Murdoch was a relatively quiet one. News Corp. was reported to be considering spinning off its British newspapers — the Sun, the Times, and the Sunday Times — in order to preserve the rest of its media empire, something Murdoch denied but the Columbia Journalism Review’s Emily Bell saw as quite sensible. Here at the Lab, Ken Doctor examined what a trust for those papers might look like.

— A couple of interesting pieces of survey data were discussed this week: The study that drew most of the headlines was one that looked at the political knowledge of audiences for various news outlets, finding NPR’s listeners to be the most informed and Fox News’ viewers to be the least informed. Another study found that about half of media professionals abandon websites when they hit a paywall.

— Finally, a couple of cool pieces on data journalism — Simon Rogers of The Guardian urged us to adopt the punk “anyone can do it” mindset toward data journalism, and Alex Howard of O’Reilly Radar talked with former Guardian digital editor Emily Bell about her efforts to put data journalism into action with students at Columbia University.

Photo of the Times-Picayune building by Alysha Jordan used under a Creative Commons license.