Was the NYT wrong to keep quiet? Yes

It’s been more than a week since New York Times reporter David Rohde escaped from his captors in Pakistan, so maybe now is a good time to try to look dispassionately at the massive coverup that prevented news of his kidnapping from being reported for more than six months — a coverup that included not just 40 or so mainstream media outlets but Wikipedia as well, with the personal help of founder Jimmy Wales. Raising such ethical issues seemed somewhat crass in the days following his miraculous escape (although that didn’t stop some observers, including Kelly McBride of the Poynter Institute, from being early critics of the coverup). But those issues deserve to be talked about in more detail.

For the record, I don’t know David Rohde. From all accounts, he is a wonderful friend and colleague, not to mention an excellent reporter who has a great deal of experience working in troubled areas. All of which is — I would argue — completely irrelevant to the issue at hand, namely whether the New York Times and its senior management were right to conceal evidence of his kidnapping, and whether the editors at dozens of other outlets were right to go along with this plan.

I would argue that they were not, and that if anything the coverup has made things harder not just for future kidnapping victims such as Rohde, but for newspapers and other mainstream media outlets as a whole.

(Please read the rest of this post at the Nieman Journalism Lab blog)

The Guardian ups the ante on APIs

The New York Times was the first major newspaper to take its cue from Google and open up its data via an API (which stands for application programming interface). In a nutshell, this allows developers to write programs that can automatically access the New York Times database, within certain limits, and use that data in mashups, etc. Now the Guardian newspaper in Britain has upped the ante: not only has it opened its data up via an API, but it has also done two things that the NYT has not — namely, it provides the full text of its articles to users of the API (while the Times restricts developers to an excerpt only) and it also allows the data to be used in for-profit ventures, while the Times restricts its data to non-profit purposes.

As Shafqat at NewsCred notes on his blog, these two differences are pretty important, and I would argue that the Guardian has really put its money where its mouth is in terms of turning its paper into a platform (to use the title of a blog post I wrote when the NYT came out with its open API). Not to denigrate what the Times has done at all, mind you — an API of any kind is a huge leap, and one that many newspapers likely wouldn’t have the guts to take, limits or no limits. But to provide full-text access to all Guardian news articles going back to 1999, and to allow all of this data and more to be used in profit-making ventures as well, takes the whole effort to another level entirely.
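To make the difference concrete, here is a minimal sketch of how a developer might build a request against the Guardian's Content API. The endpoint and parameter names (`q`, `api-key`, `show-fields`) reflect the current public API and may differ from the original 2009 release; the key point is the `show-fields=body` parameter, which asks for the full article text rather than just metadata or an excerpt.

```python
from urllib.parse import urlencode

# Public search endpoint for the Guardian Content API (current form;
# the exact path in the original 2009 release may have differed).
GUARDIAN_SEARCH = "https://content.guardianapis.com/search"

def guardian_search_url(query, api_key, full_text=True):
    """Build a Guardian Content API search URL.

    Setting show-fields=body asks the API to return the full article
    body alongside the metadata -- the feature that distinguishes the
    Guardian's API from an excerpt-only one.
    """
    params = {"q": query, "api-key": api_key}
    if full_text:
        params["show-fields"] = "body"
    return GUARDIAN_SEARCH + "?" + urlencode(params)

# Hypothetical key for illustration; a real key comes from registering
# as a developer with the Guardian's open platform.
url = guardian_search_url("newspapers", "test-key")
print(url)
```

A developer building a for-profit mashup would call a URL like this, parse the JSON response, and be free to republish the full text under the Guardian's terms — something the NYT's non-profit, excerpt-only licence does not permit.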

(read the rest of this post at the Nieman Journalism Lab blog)

NYT, Google exec go hyper-local

There’s an interesting battle shaping up in the “hyper-local” online journalism market, at least in the New York and New Jersey area. The New York Times confirmed on Monday that it is launching a new project called The Local, in co-operation with journalism students at the City University of New York. The network of local blog sites will reportedly start with Clinton Hill and Fort Greene in Brooklyn and Maplewood, Millburn and South Orange in New Jersey, and will apparently cover the usual neighbourhood fare such as schools, restaurants, crime and government. After the launch was mentioned by a local blog called Brownstoner (and also by PaidContent), blogger and journalism prof Jeff Jarvis wrote a post describing how he was working on a local-blogging project and happened to run into someone from the NYT, and the two agreed to co-operate on a joint venture. As Jarvis describes it:

In each of these two pilots, they’ll have one journalist reporting but also working with the community in new ways. The Times’ goal, like ours, is to create a scalable platform (not just in terms of technology but in terms of support) to help communities organize their own news and knowledge. The Times needs this to be scalable; it can’t afford to – no metro paper can or has ever been able to afford to – pay for staff in every neighborhood.

A spirited battle subsequently broke out in the comments section of Jarvis’s post, and on Twitter, between the blogger and Howard Owens — the former head of digital media for GateHouse Media (which recently settled a contentious lawsuit with the New York Times over one of the “hyper-local” sites run by Boston.com). Owens said he was skeptical of the plan, in part because of the failure of previous local journalism networks such as Backfence and YourHub, and made the point that local staff need to be in each community. Jarvis and Owens then got into a debate over (I think) whether the staff working for such a hyper-local site should be primarily professional journalists or people who emerge from the community itself.

(read the rest of this post at the Nieman Journalism Lab blog)

The NYT and “real-time news”

On Saturday, the “public editor” of the New York Times, Clark Hoyt, published a long discussion of a story the newspaper had recently reported, and how problematic it was for the Times, and titled his column “Reporting in Real Time.” The original story was about how New York Governor David Paterson had decided not to appoint Caroline Kennedy (who later withdrew from the race) to the Senate because of concerns about a tax issue and an incident involving a nanny with an expired visa. But as the story evolved, it appeared that the Times had been played by an anonymous source within the Governor’s office who wanted to slam Kennedy (as described in this NYT followup).

Continue reading

The NYT API: Newspaper as platform

There’s been a lot of chatter about the newspaper industry in recent weeks — about whether newspaper companies should find something like iTunes for news, or use micropayments as a way to charge people for the news, or sue Google, or all of the above — and how journalism is at risk because newspapers are dying. But there’s been very little discussion about something that has the potential to fundamentally change the way that newspapers function (or at least one newspaper in particular), and that is the release of the New York Times’ open API for news stories. The Times has been talking about this project since sometime last year, and it has finally happened; as developer Derek Gottfrid describes on the Open blog, programmers and developers can now easily access 2.8 million news articles going back to 1981 (although they are only free back to 1987) and sort them based on 28 different tags, keywords and fields.

It’s possible that this kind of thing escapes the notice of traditional journalists because it involves programming, and terms like API (which stands for “application programming interface”), and is therefore not really journalism-related or even media-related, and can be understood only by nerds and geeks. But if there’s one thing that people like Adrian Holovaty (lead developer of Django and founder of Everyblock) have shown us, it is that broadly speaking, content — including the news — is just data, and if it is properly parsed and indexed it can become something quite incredible: a kind of proto-journalism, that can be formed and shaped in dozens or even hundreds of different ways.
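As a sketch of what "content is just data" means in practice, here is how a request against the NYT's Article Search API might be assembled. This uses the current v2 endpoint (`articlesearch.json`); the original 2009 release used a different URL scheme, so treat the exact path, and the `begin_date` and `sort` parameter names, as assumptions about the present-day API rather than a description of the original one.

```python
from urllib.parse import urlencode

# Current Article Search endpoint (v2). The 2009 launch described in
# the post used an earlier URL scheme, so this path is an assumption.
NYT_SEARCH = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def nyt_search_url(query, api_key, begin_date=None, sort=None):
    """Build an Article Search request URL.

    begin_date (YYYYMMDD) restricts results to articles published on
    or after that date -- e.g. "19810101" to reach back to the start
    of the indexed archive. sort can be "newest" or "oldest".
    """
    params = {"q": query, "api-key": api_key}
    if begin_date:
        params["begin_date"] = begin_date
    if sort:
        params["sort"] = sort
    return NYT_SEARCH + "?" + urlencode(params)

# Hypothetical key for illustration; a real key comes from the NYT
# developer portal.
url = nyt_search_url("hyperlocal journalism", "test-key",
                     begin_date="19810101", sort="oldest")
print(url)
```

Once the archive is queryable like this, a developer can slice three decades of reporting by keyword, date or field and reshape it — which is exactly the Holovaty-style proto-journalism the paragraph above describes.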

(read the rest of this post at GigaOm)