
Archive for the ‘Web 2.0’ Category

Online Ad Targeting: Fine Grain Targeting vs. Coarse Grain Delivery

Monday, July 21st, 2008

Behavioral ad targeting has been getting a lot of attention in the press lately, especially around the US Congress’ interest in the technology. In general, ad targeting of various shapes and forms is also becoming a busy space for startups, with various types of targeting technologies trying to understand the user’s intent and provide an appropriate advertisement.

When I looked a bit closer into what most of these ad targeting companies do, it turns out that after they have used whatever mechanism (contextual, behavioral, demographic, psychographic etc.) to decide who you are and what interests you, they translate this into one of a very small number of consumer segments, pick an ad for that segment and display it to you. So all that fancy computation upfront to provide a canned ad. Seems kind of a waste. Wouldn’t it make more sense to create a personal, data-driven ad (from one or more advertisers) to leverage that information? That would be the “holy grail” of true 1-1 personalized advertising.
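To make the mismatch concrete, here’s a minimal sketch (the profile attributes and the segment taxonomy are hypothetical, not taken from any real ad network): however many fine-grained signals go in, only one of a handful of canned segments comes out, and the ad is picked per segment.

```python
# Hypothetical illustration: fine-grained targeting collapsing into coarse segments.

# Imagine the targeting engine has inferred dozens of signals about the user...
user_profile = {
    "age": 34,
    "recent_searches": ["trail running shoes", "marathon training plan"],
    "pages_viewed": ["running-blog.example/injury-prevention"],
    "household_income_band": "75k-100k",
}

# ...but ad delivery only supports a handful of canned segments,
# so all that detail is thrown away at the last step.
SEGMENTS = ["sports_enthusiast", "auto_intender", "traveler", "general"]

def assign_segment(profile):
    """Collapse a rich profile into one coarse segment."""
    text = " ".join(profile["recent_searches"] + profile["pages_viewed"])
    if any(word in text for word in ("running", "marathon", "shoes")):
        return "sports_enthusiast"
    return "general"

segment = assign_segment(user_profile)
print(segment)  # every runner, cyclist and gym-goer sees the same canned ad
```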

This growing impedance mismatch between the fine grain targeting ability of ad networks vs. the coarse grain delivery capability of advertisers is going to “short-circuit” the ability of these targeting technologies to show their full potential.

eMail and Human Process Management

Monday, July 14th, 2008

Zvi referred me to an interesting post on ReadWriteWeb, Is Email In Danger? by Alex Iskold, and in many ways the comments were just as interesting as the article. It is clear that email vs. twitter vs. IM vs. wiki is a topic that interests people. Even though those tools overlap in functionality, I’d bet each will find its proper place and there won’t be one winner. It would be interesting to see the best practices that are forming around when people use which tool. Just like Fedex, US Mail and email all coexist comfortably…

Personally I am sure that, at least in a corporate setting, email is not going to be replaced in the foreseeable future. The main reason is that email has become more than just “electronic mail” - it has become the implicit mechanism of choice for managing many (if not most) of the Human Processes in most organizations.

Using email for unstructured, human-centric processes is both its strength and its weakness. Just the fact that email is amenable to so many diverse, unstructured processes (and all without IT support) is a huge benefit; the downside is that email isn’t really optimized for managing those processes (but rather for single messages) - so we get Information Overload in our inbox. Threaded conversations are an interesting innovation, but they don’t solve the problem either.

Think about it - in many companies there are specialty systems for the “standard, heavy-duty” processes (like ERP, CRM), but for the other processes (or, as someone coined them, the “outside SAP” - OSAP - processes) - what does everybody use? eMail! Even if you have a system in place for a specific process - how do you handle exceptions? eMail! How do you work across organizational silos (or across companies)? eMail!

So as I said, I don’t think eMail will be going away any time soon.

Wisdom of Markets

Monday, December 17th, 2007

I was looking at a company that wanted to build a generalized market trading mechanism for anything (which seemed to me to be another name for gambling) and decided to look at intrade, wondering whether a market mechanism could actually be used to accurately predict future events. I looked at some of the most highly traded intrade political markets (over 100K trades, which I guess doesn’t really mean over 100K different people) to see who will be the future president in 2008. According to intrade (on Dec 17th) it will come down to Hillary (with VP candidate Bayh or Obama) vs. Giuliani (with VP Huckabee), and it looks like Hillary is a shoo-in.

One interesting thing I noticed is that there seems to be an internal consistency to these markets, even though the participants are (probably) different.

There is a close correlation between the nominee front-runner charts and the presidential winner charts (even though they are unlinked as markets). It will be interesting to see how these predictions change as we get closer to the election, and whether they will actually predict the future…
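The consistency check itself is just arithmetic. On intrade’s 0–100 price scale a contract’s price is roughly the implied probability in percent, so the “wins the presidency” price should be close to the “wins the nomination” price times the implied chance of winning the general election once nominated. A sketch with made-up numbers, not actual intrade quotes:

```python
# Illustrative consistency check across two intrade-style markets.
# Prices use intrade's 0-100 scale, where price ~= implied probability in percent.
# These numbers are made up for illustration, not actual Dec 2007 quotes.

nomination_price = 70.0   # market: "candidate wins the nomination"
presidency_price = 42.0   # market: "candidate wins the presidency"

p_nomination = nomination_price / 100
p_presidency = presidency_price / 100

# If the two (unlinked) markets are internally consistent, the implied
# conditional probability P(wins general | nominated) must fall between 0 and 1.
p_win_given_nominated = p_presidency / p_nomination

print(round(p_win_given_nominated, 2))  # 0.6
```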

BTW - according to intrade there is about a 50% chance of a US recession next year (below the sentiment in September, but above the low in October), but only a small chance that Israel/US will bomb Iran (way down from September)…

Amazon EC2, S3 – and now SimpleDB

Saturday, December 15th, 2007

I have been playing with Amazon S3 as a remote backup mechanism for my machines. It is well thought out, works well, and is cheap. For many applications it is a “good enough” solution for managed storage.
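For the curious, the local half of such a backup script is simple to sketch. This is a hypothetical illustration, not the tool I actually use: it hashes each file, compares against the manifest from the previous run, and flags only the changed files for upload (the upload itself would be an HTTP PUT to S3’s REST API, stubbed out here).

```python
# Sketch of the local half of an incremental backup to S3: hash each file,
# compare against the manifest from the last run, and upload only what changed.
# The upload itself would be an HTTP PUT to S3's REST API; it is stubbed out here.
import hashlib
import os

def file_md5(path):
    """MD5 of a file's contents (S3 uses MD5 as the ETag for simple PUTs)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files(root, manifest):
    """Return paths under `root` whose content differs from the last manifest,
    updating the manifest in place as it goes."""
    changed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            digest = file_md5(path)
            if manifest.get(path) != digest:
                changed.append(path)
                manifest[path] = digest
    return changed

# for path in changed_files(backup_root, manifest):
#     upload_to_s3(path)  # would PUT the object to a bucket via S3's REST API
```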

Now the friendly folks at Amazon have announced their SimpleDB, which provides the core functionality of a DB - real-time lookup and simple querying of structured data. Looks like yet another “good enough” solution for many web-based businesses.
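SimpleDB’s data model is easy to picture: a domain holds items, each item is a bag of attribute-value pairs (values are strings), and you get lookup by item name plus simple attribute matching. A rough dict-based sketch of the idea - SimpleDB itself is of course called over a web API, and its real query language differs:

```python
# Sketch of SimpleDB's data model: items are bags of attribute-value pairs
# inside a domain, with real-time lookup by item name and simple attribute queries.
# Modeled here with plain dicts for illustration only.

domain = {
    "item-001": {"type": "book", "title": "The Wealth of Nations", "year": "1776"},
    "item-002": {"type": "book", "title": "Tom Sawyer", "year": "1876"},
    "item-003": {"type": "cd", "title": "Greatest Hits"},
}

def get_attributes(item_name):
    """Real-time lookup: fetch all attributes of one item."""
    return domain.get(item_name, {})

def query(**criteria):
    """Simple query: names of items whose attributes match all criteria."""
    return [name for name, attrs in domain.items()
            if all(attrs.get(k) == v for k, v in criteria.items())]

print(query(type="book"))  # ['item-001', 'item-002']
```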

It seems like Amazon is rolling along, trying to become the “data center for everyone else”. Big enterprises are not going to be able to divest themselves of their data centers anytime soon, but small businesses can have the support provided by a data center – with only a fraction of the expense.

Now match this up with a tailored IDE and programming framework to make it even easier to use these services – and you’ll have a killer web application platform (better than Force.com since it doesn’t require the use of a proprietary language – just a specific API).

Adam Smith, Tom Sawyer and Web Semantics

Friday, November 30th, 2007

Adding semantic information to the web has been on the agenda for a number of years (at least since 2001), and is high on the hype cycle. The value of web semantics, once they exist, is clear – real automated digital assistants, search engines that can find what we meant, not just what we asked for. Practically magic.

So why aren’t web semantics evolving as fast as the web itself (though it seems like a new search engine claiming semantic capability is born almost every day)? One key reason is that it is still in the domain of techies - all but meaningless to 95% of regular web users. To really start taking off requires harnessing that 95% of the web, making it useful and profitable for regular web users to generate useful web semantics – for their own benefit (or as Adam Smith put it – “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest.”).

OK, how do we spark this self-interest and get the broader web community providing the semantics? The key is to take advantage of the currency of the web - advertising. Everybody is trying to make money off their sites through advertising. What if you could double or triple advertising revenue by describing what your site is about (i.e., simple semantics) via a procedure that is no harder than what is needed to just get regular advertisements on your site? Even better, what if someone else could do it for you, and you share the additional revenue (Tom Sawyer would be proud)?

So it really is simple – make it so easy to add semantics that anyone can do it, and make it worthwhile ($$$) so that everyone will want to do it. Then sit back and watch web semantics start taking off. It won’t be perfect, but at least it will start to flesh out and start evolving at the same velocity as the web itself.

Some Thoughts on Blogging

Wednesday, November 14th, 2007

I have been blogging for a while now, and like everyone else I used to look at metrics every day; now I look at them every once in a while. What struck me most about traffic (and hopefully readership - since I can only know that users looked at the site, not whether they read it) is that the more you talk about current events, the more traffic you get.

The blips that I saw on traffic were always around my blogging on topics that were just discussed by other sites, or events that just happened - rather than the blogs on general topics (e.g. the blog post on Mashup camp got a lot more traffic than my posts on Integration and M&A).  The traffic blip is of course even more pronounced if you comment or link-back to the main sites that discussed the event themselves.

This probably isn’t earth-shattering news to most bloggers - but how heavily traffic is biased toward current events surprised me.

Web Credibility

Tuesday, October 30th, 2007

I was looking around at some sites and was reminded of some older, but still relevant, work done by the Stanford Persuasive Technology Lab on web credibility. I would have liked to see the research updated to include some guidelines for UGC (User Generated Content) sites - but even so it is still very relevant. There are also a nice set of charts that describe captology here, and web credibility here.

Another reason that I was reminded of the persuasive computing work is that I keep hearing from Israeli VCs the notion of “ease of use” being a key ingredient in the Web 2.0 world, and that you need to make sure that Web 2.0 entrepreneurs understand that. IMHO that is a mistake - ease of use is the minimal bar - without that you don’t get to play… The real need is to make sure that developers and designers understand that the real goal is “Joy of Use” - sure, it has to be easy and intuitive to use, but users also need to have fun using the technology - otherwise you won’t succeed.

Personalized Feeds (or more on Open APIs)

Friday, October 5th, 2007

I just read an interesting study on the problems with existing news RSS feeds from the University of Maryland’s International Center for Media and Public Relations. I think it is a great example of how users can’t depend on the organization that creates the content to provide access to the content in the form or format most useful for them, and why the ability for users to create their own feeds is so valuable. To quote from the study:

“This study found that depending on what users want from a website, they may be very disappointed with that website’s RSS.  Many news consumers go online in the morning to check what happened in the world overnight—who just died, who’s just been indicted, who’s just been elected, how many have been killed in the latest war zone.  And for many of those consumers the quick top five news stories aggregated by Google or Yahoo! are all they want.  But later in the day some of those very same consumers will need to access more and different news for use in their work—they might be tracking news from a region or tracking news on a particular issue.

It is for that latter group of consumers that this RSS study will be most useful.  Essentially, the conclusion of the study is that if a user wants specific news on any subject from any of the 19 news outlets the research team looked at, he or she must still track the news down website by website.”

Bottom line: as long as we depend on publishers as both content providers and access providers, we as consumers of content won’t be able to get what we need in the way we need it - just like with APIs. The only way to solve the problem is to allow users or some unaffiliated community to create the access to content (or API), as opposed to limiting that ability to only the publisher. As web 2.0 paradigms catch on with the masses, turning more and more of us into prosumers, this will become more and more of an issue. Publishers that try to control access will lose out to those that let users tailor the content to their own needs. Publishers need to understand that this benefits both them and the users.
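What a user-built “personal feed” might look like is easy to sketch: fetch several publisher feeds, keep only the items matching the topics you care about, and merge the results. A minimal illustration with inlined, made-up RSS (a real version would fetch each feed URL before parsing):

```python
# Sketch of a user-built "personal feed": merge items from several publisher
# feeds, keeping only those matching the topics the user cares about.
# The RSS below is inlined and made up; a real version would fetch each
# feed URL with urllib before parsing.
import xml.etree.ElementTree as ET

def items_matching(rss_xml, keywords):
    """Parse one RSS 2.0 document and return titles mentioning any keyword."""
    root = ET.fromstring(rss_xml)
    titles = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        if any(k.lower() in title.lower() for k in keywords):
            titles.append(title)
    return titles

feed_a = """<rss version="2.0"><channel>
  <item><title>Election results from the region</title></item>
  <item><title>Local sports roundup</title></item>
</channel></rss>"""

feed_b = """<rss version="2.0"><channel>
  <item><title>Region faces new election dispute</title></item>
</channel></rss>"""

# The user tracks one issue across outlets, instead of going site by site.
personal_feed = []
for feed in (feed_a, feed_b):
    personal_feed.extend(items_matching(feed, ["election"]))

print(personal_feed)
# ['Election results from the region', 'Region faces new election dispute']
```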

I see signs that this is actually starting to happen (in a small way) with the NYTimes and WSJ both announcing personal portals for their users. The jump to personalized feeds isn’t that unthinkable…

Open APIs

Tuesday, September 25th, 2007

Kudos to Google (soon) and Facebook (already) for offering open APIs, empowering the development community to create interesting (and hopefully profitable) applications based on those APIs. Opening the APIs allows the developer community to develop interesting applications and enrich everyone’s user experience. However, there is a basic limitation to the current notion of an open API (unless it is an open source project) – the owner of the API gets to decide for the developers what is opened (i.e. what programmatic access is allowed), and what remains unavailable. Sometimes limitations are created on purpose – limiting what developers have access to for business, security or other reasons. Clearly the owner has the right to limit usage to protect their rights – but limiting access will just stifle creativity – especially if the APIs are too limiting. Also, in many cases the limitations are artificial – the owner just hasn’t had time to develop all the possible APIs, or hasn’t thought through all the use cases (if that is even possible), leading to a limitation that stops somebody from building some really useful new application.
The only way to get around this is to allow the developers to create APIs themselves, or make it possible for anyone to extend and change the APIs and submit it back to the community - not be reliant on the owners to develop it for them. This would lead to a rich evolving set of APIs maintained by the developer community. Until then – open APIs will never be truly open.
And about the owner’s rights - my guess is that this will need to be done contractually rather than programmatically.

Vertical Mashup Platforms

Wednesday, September 12th, 2007

Gartner just put out a report on “Who’s Who in Enterprise Mashup Technologies”, which contains all of the usual enterprise platform companies and all the usual web mashup players. They gave some good, though standard, advice that you should understand the problem before you choose the technology (duh?) - but I thought it was interesting that they didn’t try to define a best-practices architecture, or give some guidance on how to combine technologies or choose between them (see my post below).

One thing that was clear is that all of the Mashup Platforms are trying to be generic - to allow users to build any type of mashup application. As always, being generic means being more abstract - and making it harder for people to easily build a mashup for a specific domain or vertical. This isn’t unusual for platform builders, since by building a generic tool they can capture the broadest audience of users. But I think that they might be making a mistake with respect to Mashup Platforms - the whole idea is to make it easy for anyone to build “situational applications” that solve a specific need for information quickly, and that can be used by non-developers. For me, that means that platforms will have to be tailored to the domain of the user.

I am expecting that in the next wave of Mashup Platforms we’ll start seeing vertically oriented mashup platforms that make it even easier to build a mashup for a specific vertical - from standard verticals like Finance, to more consumer-oriented verticals like advertising.