Content Discovery - New Models for Revenue

Thread Title:
10 Predictions for the Year
Thread Description:

Some interesting predictions for content revenue. No. 2 particularly caught my eye in light of Google's recent moves with Scholar and Library - it's clear as day when you read it, but I'd certainly not given it much thought, so I think some of you will find it interesting as well:

To achieve much broader penetration, the information industry needs to change the process by which people find and buy the information they need. Two conditions are necessary and both will start to take hold in earnest this year. First, premium content will become discoverable through the major search engines. In seeking information, users often do not know the likely sources and therefore rely on search engines. While search engines are relatively effective in finding relevant content on the free web, they have as yet little content indexed from premium collections. As a result, search engines currently do not find relevant articles from The New York Times archive, Thomson’s Investext library of Wall Street research reports, Hoover’s company reports, or reports from market research firms, among other premium sources. An important example of change, however, is Google’s plan to index content from hundreds of academic journal publishers (with their permission). This move will enable users to discover content that previously was not visible to search engines.

The second condition for broadening content sales is the packaging of information for pay-per-view purchasing as an alternative to subscriptions, so that users can purchase content once they find it. Pay-per-view packaging is not new. A variety of publishers and distributors, ranging from The New York Times to Factiva to Forrester Research, have been selling content “by the drink” as a complement to subscriptions. Now, however, the combination of discovery and pay-per-view packaging will set the stage for a much larger content market.

There's much more at the post linked above, so check it out.

The question that immediately comes to mind for me is: wouldn't it be annoying to find information on Google/Yahoo/MSN etc., only to discover that you couldn't read it without getting your CC out?

Then I start to wonder just how you would integrate such information - another Beta (sigh..) as a separate service like Scholar? Or perhaps "additional results" at the bottom/side of free pages?

via pc

Comments

Northern Light

Northern Light had its Premium Content in its SERPs, clearly marked as such. However, it wasn't enough to keep the company afloat.

And Google News already indicates that certain results require subscriptions. I assume some of those subscriptions are paid.

NL vs Google

Qwerty, you beat me to the punch with NL. But then again, NL's algo generally sucked hard. Odds are Google could make it work. NL was all about KW count; it was a piece of shit, really. I could make a better SE with wget, grep and wc. (That would be a fun toy - might have to try it.)
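Just for kicks, here's roughly what that wget/grep/wc toy could look like - a throwaway sketch of a pure keyword-count ranker, nothing more. The script name, the usage line and the idea of feeding it a hand-picked URL list are all made up for illustration:

#!/bin/sh
# Toy keyword-count "search engine" - fetch, strip tags, count hits, rank.
# Usage (hypothetical): ./toyse.sh keyword url [url ...]
KEYWORD="$1"; shift
for URL in "$@"; do
  # wget -q -O - writes the page to stdout; sed strips tags crudely;
  # grep -io prints each case-insensitive match on its own line; wc -l counts them
  COUNT=$(wget -q -O - "$URL" | sed 's/<[^>]*>//g' | grep -io "$KEYWORD" | wc -l)
  echo "$COUNT $URL"
done | sort -rn    # highest raw count first - pure KW density, no smarts

Point it at a handful of pages and it "ranks" them by how many times the keyword shows up - which is exactly why that kind of algo sucked.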

For people to find it acceptable, it would have to be works with copyright restrictions, with the SE cutting a deal with the copyright holder (maybe on a % basis?).

Overall, this could get ugly. If, say, a premium listing butts up against a free listing with the same content, will there be an incentive to push the free stuff out?

Copyright implications

Google News' implementation of the subscription notification is somewhat arbitrary and annoying: some articles are readable despite the notification, some are not readable and carry no notification, and some are only partially readable.

A lot of freelance copy agreements historically had restrictions on reproduction and republication and, perhaps crucially, failed to detail internet usage. (I note that the UK newspaper The Independent, for example, now grants itself clear - and alarmingly broad - permission to republish work and to permit others to do the same.)

In the real world it may be fine for Google (or any other engine) to profit from copies of original web-based work; even without permission, there's an arguable benefit to both sides.

However, print-based authors and journalists have historically cast a more professional, jaundiced and aggressive eye over breaches of copyright agreements, especially when those breaches are deliberate and made in pursuit of profit.

Paid content

I can't say I properly understand the purpose of linking to subscription content - can you imagine Froogle where you can't check the product details without buying the product first?

I interviewed with a content provider

But their strategy was to create optimized pages that convinced people of the value of the article they'd have to pay for.

It seems to me that allowing a spider onto a page that you don't allow normal users onto unless they pay is a bit (dare I say it here) unethical.
