Tuesday, June 19, 2007

Content That Finds You (Part I)

For pretty much as long as the Internet has been part of our lives, pundits have been talking about smart technology that's able to surface content that interests you. This was one of the ideas behind General Magic in the 1990s. (Historians, please correct me if this is wrong.)

That early vision is now closer to a reality. It was one of the big themes to emerge from last week's International Council conference, hosted by the Paley Center for Media.

This is the first of two posts on the subject. This post covers the four underlying pillars of content that finds you. The second will cover the impact of this major shift in how we interact with media, including its impact on marketing. In addition, Part II will address how content that finds you might even mitigate The Attention Crash by helping us focus more, perhaps to a fault of exclusion.

As I mentioned, several underlying forces are coming together in a powerful way that will very soon help everyone find content that they care about more easily.

The first underlying technology is search. Specifically, I am referring to what John Battelle describes in his great book, The Search, as the database of intentions. Search tools are gathering so much data that they are able to show you related content, such as advertising, just at the moment you need it.

The second building block is personalization. Today consumers are balancing the benefit they get from personalizing services against the downside risks to privacy. This tension will fade as the Net Generation, which already lives its whole life online, ages. I personalized my Google News page, for example, and now it recommends news stories that are relevant to my interests.

The third is Web 2.0 people-powered services, such as del.icio.us, Flickr, digg and others. For example, Flickr Interestingness consistently surfaces incredible photos based on the activity that the community generates through comments, clicks and favorites. Similarly, Techmeme taps the global brain that is the blogosphere to show us what's hot in tech news today.
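To make the idea concrete, a community-signal ranker like Flickr Interestingness can be sketched as a weighted score over engagement signals. The weights and sample data below are invented for illustration; Flickr's actual algorithm is not public.

```python
# Toy "interestingness" score: rank items by weighted community signals.
# Weights are hypothetical -- heavier weight on deliberate acts (favorites)
# than on passive ones (views).
def interestingness(views: int, comments: int, favorites: int) -> float:
    return 1.0 * views + 5.0 * comments + 10.0 * favorites

photos = {
    "sunset.jpg": (1200, 4, 30),   # (views, comments, favorites)
    "cat.jpg": (300, 25, 80),
}

# Sort photos by descending score to get the "interesting" front page.
ranked = sorted(photos, key=lambda p: interestingness(*photos[p]), reverse=True)
```

Real systems also decay scores over time so that yesterday's hits don't crowd out today's, but the core is the same: convert community activity into a single sortable number.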

The fourth and final building block, perhaps the most critical, is RSS. Feeds by their nature bring content we care about to our desktops. However, today consumers need to preselect that content. We need to tell Google Reader or Newsgator that we want the Sports section of the New York Times. Soon, however, that will change and the readers will get smarter. Check out Newsgator Buzz for a glimpse of the future.

So how will this change how we consume media and the PR/marketing business? Stay tuned for part II.

Reader Comments (6)

Steve, I think your entire premise is wrong about this stuff. In the mid-90's I was the Acting President of Recommendations.net, a Stanford CS Department spin-off that flopped. (I won't mention any particulars to protect the guilty, although there really weren't any "guilty" per se.) One of the problems is that we were trying to do exactly what you're talking about, but the technology and the computational resources required were -- and still are -- a thing of the future.

What you're really talking about is content-based filtering. Many have tried this; all have failed. Evidently the CIA has something that works, but I'm hardly privy to it and nobody who is is likely to chime in on this thread. The problem is that agent-based software -- and we're talking about intelligent agents, not just smart spiders -- tends to be computationally intensive. So even if the AI engines work, the resources required for massive scale-up might be prohibitive. For the CIA, well, that's one thing; for you and me, it's quite another.
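For readers unfamiliar with the term, content-based filtering in its simplest form matches documents to a profile of what the user has already read. The following is a toy sketch (bag-of-words cosine similarity, all data invented); the commenter's point is that doing this well at scale is far harder than this simplistic version suggests.

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical reading history, reduced to a word-count profile.
history = Counter("rss feeds aggregation reader subscribe".split())

candidates = {
    "feeds": "new rss reader launches with smarter feeds",
    "sports": "local team wins big game last night",
}

# Recommend the candidate article most similar to the history profile.
best = max(candidates, key=lambda k: cosine(history, Counter(candidates[k].split())))
```

Word counts are the "very simplistic format" the comment allows for; understanding meaning rather than vocabulary overlap is the computationally expensive part.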

The other side of the coin is collaborative filtering. Microsoft bought the leader in this space, an MIT spin-off called Firefly. Firefly was founded by an AI community guru at MIT, Pattie Maes. After the acquisition, it seems to have died. I can't think of how it's used in Microsoft's products, and I was at Microsoft during or around the time of the acquisition. But with the diffusion of social networks and social computing (some might say social software, but hardware requirements/constraints should NOT be ignored, so I use "social computing," as do Microsoft and IBM), collaborative filtering has legs. Think of del.icio.us and Furl as two examples. Think of Digg. It's hardly where it could be, but my point is that collaborative filtering is much further along than the promise of content-based filtering. Don't believe the hype about content-based filtering; it's still far in the future except in very simplistic formats. BTW, collaborative filtering seems to address your third point.
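Collaborative filtering, by contrast, ignores content entirely and leans on other people's behavior: find users whose tastes overlap with yours and recommend what they liked. A minimal sketch, with invented users and Jaccard similarity standing in for whatever Firefly actually used:

```python
# Hypothetical "users who liked X also liked Y" data.
likes = {
    "alice": {"lebowski", "fargo", "raising_arizona"},
    "bob":   {"lebowski", "fargo", "no_country"},
    "carol": {"titanic", "avatar"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap of two taste profiles: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user: str) -> set:
    """Recommend items liked by the most similar other user."""
    others = [u for u in likes if u != user]
    nearest = max(others, key=lambda u: jaccard(likes[user], likes[u]))
    return likes[nearest] - likes[user]
```

Note that no content analysis happens at all, which is why this approach scaled in 2007 while agent-based content understanding did not: set intersections are cheap, semantics are not.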

You talk about personalization, and personalization is certainly a good thing, assuming the end user knows what s/he is doing. I like the idea behind Particls, even the Google Desktop Web Clips gadget that adds feeds based upon what one is viewing. Particls has a lot more options, but it's not AI. It's merely personalization. MySupplyChainDaily is another example, although I can't guarantee that the back-end engine has very much sophistication. (Maybe, maybe not.) In general, personalization should be done without a lot of customization by the user, just like with Particls. I highly recommend Particls to newbies and people who absolutely refuse to use a feed reader, be it offline or online. (Actually, I recommend it to everyone.) People are willing to read blogs, but only if they don't have to manage them per se.

RSS. You mean news feeds, right? I wish everyone would stop calling them RSS feeds. This is just plain stupid: alpha geeks trying to act like they're in some sort of exclusive club. Want to reach the masses? Then call them news feeds, NOT RSS feeds. It's just like the idiots who claim AI or the Semantic Web is Web 3.0. They miss the point; it's NOT about the core tech, but about a level of interaction.

RSS is just one piece of the puzzle. Most people still prefer to receive e-newsletters. That's a fact that you can't deny. RSS has its place, along with e-newsletters, and (to a much lesser extent) podcasts (both audio and video). They're all a piece of the puzzle. And, let's face it, even podcasts are best delivered as feeds. Don't get too high and mighty about RSS. RSS is critical, but e-newsletters still have much greater penetration and much better stickiness. There are exceptions, to be sure, but in general e-newsletters still beat RSS.

I'm looking forward to part 2 of this series. BTW, I've been in this space for a long time, longer than California has been a state!! I attended my first IJCAI conference in the mid-70's; I was just a kid. And in 1996, I was the Web Agents Session chair at the First International Conference on Autonomous Agents (i.e., intelligent/software agents). So I have a bit of skin in this game. I'm cautiously optimistic, but see much more progress on the social computing side of things versus on the core AI. True, artificial intelligence is better than none, but AI is still something in the future.

Bottom line: The advancements in social computing, whereby computers enable communication between humans, will lead to more relevant content than anything associated with the days of "Magic" (that's what General Magic was called back then; now it's called "toast").
June 19, 2007 | Unregistered CommenterDavid Scott Lewis
Wow, David. You managed to intrigue me, and I am by no means or definition a geek (or maybe that was one of the points you were making ;-). Interestingly enough, maybe it's the pull of Supernova, but I am seeing more conversations around search and content these days. I just read yet another testament to Google's ubiquity at Lunch Over IP: http://www.lunchoverip.com/2007/06/googles_quest_f.html

I will be referring those readers back here. You both have good points and food for thought.
June 19, 2007 | Unregistered CommenterValeria Maltoni
David, thank you for your comments. This is exactly the kind of high-level discussion I want to start having on my blog. Not the usual "what feature did Google add today" fare. I have been trying to up my blogging game, if you will, to solicit this kind of discussion.

As it relates to AI, from a historical perspective what you write cannot be disputed. However, Google has enough PhDs, computing power and bandwidth that I would not underestimate their ability to deliver such functionality - incrementally at first. Try adding a recommendations tab to your iGoogle page and you can see where this is going.

Second, re. collaborative filtering, I agree with you that this is promising, but the magic comes into play when you combine social computing with machine computation.

I disagree with the notion of calling RSS feeds news feeds. What about blogs or searches or calendars or all the other things you can do with feeds? News makes it sound narrow. RSS is geeky, agreed. But I haven't seen a viable alternative emerge - other than just calling this all "feeds."
June 19, 2007 | Unregistered CommenterSteve Rubel
Nice post Steve and a very enlightening response by David.

Further to the discussion about RSS feeds as a discovery mechanism, one more argument in favor of not calling them news feeds is the wealth of data just waiting to be discovered inside presentations, documents, non-RSS-enabled websites, podcasts, etc. Services such as Scribd and PodZinger bring these to the surface.

I recently blogged about it here if you would like to know more: http://zaptxt-inc.com/blog/2007/06/14/4-ways-to-discover-hidden-sources-and-influencers-using-rss/

I think another promising service is MyFeedz (www.myfeedz.com) from Adobe. It required some training, but it's been consistently finding me some very good stuff. Bring the RSS feed from MyFeedz into your RSS reader to complement your explicit feed monitoring and that's a pretty good start.

Finally, whilst it will most likely be a larger, trusted firm like Google (be it organically or via acquisition) that eventually gives folks the confidence to share some of their private data (as Loren Feldman so correctly put it in one of his video posts), I do believe there's a flip side to personalization and discovery as well. Just when folks get comfortable using machine-based personalization to find stuff, will they begin to wonder what they might be missing, since they don't understand the logic behind why something was considered worthy of a push (or not)? Most personalization services consider the algorithm to be their secret sauce or their IP, and will be reluctant to share details on the automagic that discovers content (as is the case with Techmeme, Megite, etc., today).

None of this has stopped us from trying out discovery services, but I do believe that if you are an information worker, you will want to know more before you bet the farm on automated discovery. If you can't fully rely on the discovery service you use, you are really back to relying on the explicit web.

Let's see how it shakes out. Looking forward to part II.
June 19, 2007 | Unregistered CommenterSameer
My team was trying to build something along these lines with our original service, PostReach GuestPosts. However, we were not taking the traditional search-centric route. Most search uses popularity as a proxy for relevance (i.e., PageRank). Getting the right content to the right people, where they are already consuming content, is what ad targeting is about. Ad targeting is not about popularity; it is about effectiveness. Ad technology has grown on its own path beyond the original link analysis (branded as context analysis).

At the recent Semantic Technology Conference in SJ there was a great case study about CNET's use of semantic search vs. collaborative filtering. Collaborative filtering gets you more of the head, whereas semantic search gave more relevant results.

Regardless, we changed our model and mission and are now solving another problem: the participation rate of blog readers.
June 21, 2007 | Unregistered Commenterhans
