
Archive for the ‘Israel’ Category

Microhoo - My Thoughts on a Microsoft-Yahoo Merger

Sunday, February 3rd, 2008

Well, it is in the news everywhere: the possibility of a $44B Microsoft/Yahoo merger. Given that I have spent a lot of ink discussing how to manage mergers after they happen, I find it hard to believe that this merger will actually end up as a net positive for either company over time. The companies are just too different. Yahoo has spent the last few years making itself into a media company (though lately they have been talking about getting back to their technical roots), and Microsoft is, in the end, a software and engineering company. My guess is that it will be hard for the merged “Microhoo” to be both a media and a software company at the same time, which will cause enormous tension with respect to management attention and resource allocation. I wouldn’t want to be the one who has to make that merger work…

Another point that has been discussed ad nauseam is whether this will help or hurt the start-up ecosystem. I have attached a table taken from the Israeli government website about recent acquisitions of Israeli companies. Taken in a purely Israeli context, the merger will probably be a net plus. First of all, Yahoo has been a complete non-player in Israel, while Microsoft both has a large presence and has made three acquisitions lately (see the table below). People are right that Microsoft will be busy for a while digesting the acquisition, which will slow its pace. The good news is that it will probably cause other players to pick up the pace of their acquisitions - AOL (which bought Quigo and Yedda, which aren’t even on the list), eBay (which has bought Shopping.com and FraudScience, also not on the list). Maybe even some of the other advertising/internet players - e.g. Google, Amazon, IAC, News Corp. - will start acquiring differentiating technology in Israel, which would more than make up for any slowdown by Microsoft.

[Table: recent Israeli M&A activity]

The Long Road towards Integration – Appendix

Sunday, October 28th, 2007

I was at Journey 2007 (E&Y’s yearly conference for startups and VCs) last week. It was an OK conference, and a good way to catch up with people I haven’t seen for a while. I did sit through one interesting panel on mergers and acquisitions, and heard some additional insights that I would like to add to my “Long Road to Integration” series. They reiterate some of the points I have made before, but I thought it was worth posting them anyway, since the whole panel more or less agreed with them.

The first is that there is no such thing as a merger of equals – the larger company ACQUIREs the smaller company – and make sure you understand that before you go into the transaction. Also, the bigger the size difference between the companies, the greater the chance of a successful outcome.

Even though you may need to let the CEO go, make sure you keep on the 2nd and 3rd level management at the company. They are what keeps things ticking.

Finally, since culture issues are a large culprit in the failure of acquisitions, the acquiring company should appoint a SPOC (single point of contact) to deal with them.

The Long Road towards Integration – Part 4

Sunday, October 21st, 2007

I am sort of surprised that I am back on this subject again, but when I read that Microsoft’s Ballmer plans to buy 20 smaller companies next year (Ballmer: We Can Compete with Google), it drives home for me the importance of being able to integrate well in the aftermath of M&A. My best guess is that those 20 companies will include 1-2 large companies, the rest being small and midsize companies - companies that are “innovating in the marketplace” (a term we used to use at IBM Research). So Microsoft is effectively outsourcing a good portion of its innovation, and placing a big bet on being able to integrate these acquisitions into the fabric of Microsoft. These types of smaller acquisitions seem to be in the cards for IBM and Google too - and I think more and more technology companies will be outsourcing their innovation this way, augmenting internal “organic” growth with external “inorganic” growth. Oracle seems to have gotten this down to an art (though they tend to swallow whales rather than minnows), and even SAP has jumped on the bandwagon. One issue that will clearly come with these acquisitions is how the acquiring company avoids killing the spark of innovation that exists in these smaller companies (that is assuming, of course, that they want to keep the innovation alive, and aren’t just buying a specific technology or existing product).

I had the opportunity the other day to speak with someone who was on the corporate side of an acquisition, and we discussed the thought process at the time of the acquisition, and how that differed from how things turned out afterwards. One thing that struck me was that both sides were fooled because they were (paraphrasing Bernard Shaw) “two companies separated by the same language”. The company being acquired thought they were communicating important information about the acquisition, but it turns out they were using internal shorthand to describe people and situations, which was interpreted completely differently by the other side. This was probably exacerbated by the fact that one side was Israeli and the other American - but it could have happened with any two companies - especially when there is a high impedance mismatch between the two (or in English - the companies are of very different sizes). For example, when one company said a manager “kept the trains running on time”, they meant a clerk who could keep to a schedule - while the other side thought they meant someone who could manage a complex system with all its nuances and make sure it keeps working. Understandably, these kinds of miscommunications caused a lot of faulty decisions to be made during, and right after, the acquisition.

In my experience it takes about 9 to 18 months until the sides really start to understand each other - how the other side works, and how they need to work together. That is assuming that everything goes smoothly. If you try to speed it up too much, you will end up killing the innovation, and you may end up killing any possibility of a successful acquisition.

So what is the bottom line? Assume that you will need to keep the current structure of the acquisition intact for about a year before you can make any drastic structural or strategic changes. See the rest of my recommendations in previous posts - and perhaps hire a consultant who has been there and can help smooth the transition.

“I think I can, I think I can”: Overconfidence and entrepreneurial behavior

Wednesday, October 10th, 2007

I actually “borrowed” the title from an interesting article in the January edition of the Journal of Economic Psychology. Not a journal that I usually read - my interest was triggered by a post on Marc Andreessen’s blog. What first caught my eye was the article’s introduction, which explained that “The strongest cross-national covariate of an individual’s entrepreneurial propensity is shown to be whether the person believes herself to have the sufficient skills, knowledge and ability to start a business. In addition, we find a significant negative correlation between this reported level of entrepreneurial confidence and the approximate survival chances of nascent entrepreneurs across countries.”

So I thought to myself “aha – I finally understand why there are so many high-tech entrepreneurs in Israel” – the national trait of over-confidence is actually causing the Israeli propensity to create startups. This actually fit pretty well with the findings that I mentioned in an earlier post on Age and the Israeli Entrepreneur. Then I looked a bit closer at the numbers in the article.

Turns out the article is about new businesses in general, not just high-tech, and Israel has a relatively low percentage of entrepreneurs who perceive that they have sufficient skills, knowledge and ability to start a business (only 30% of respondents, as opposed to 61% in NZ and 55% in the US - but only 11% in Japan; Israel is in the bottom third of the countries mentioned). So clearly it isn’t a national trait, but one that seems more localized to the technology community. Given that, I think there may be a different trait involved rather than just self-confidence. Since the Israeli technology community is relatively small (and pretty close-knit - many having served together in the Army), I think another factor mentioned in passing in the article may play a larger role in Israel’s technology startup phenomenon - “Knowing other entrepreneurs is also positively associated with start-up propensity.”

Age and the Israeli Entrepreneur

Sunday, September 23rd, 2007

I read with interest a whole set of blog posts about the age of successful entrepreneurs in the US (one of the better ones was by Marc Andreessen; you can find it here: Age and the entrepreneur, part 1: Some data). In my opinion it was a debate over whether youth and enthusiasm trump age and experience in the high-tech startup world. One thing that immediately jumps out at you is that most of the high-tech entrepreneurial superstars were young (e.g. Bill Gates, Larry Page, Sergey Brin).

I was wondering whether anyone had done any real studies on how things work in Israel. Even though the Israeli VC and start-up model is based on the US model, the culture, environment and people are different than in the US. Things work differently here (and I think the Israeli VCs will need to change in order to adapt - but I’ll write more on that in a separate post). For example, most Israeli entrepreneurs go through mandatory army service of three years or more (and many Israeli high-tech companies are based on teams that worked together during their army service). I guess that is why Israelis work better in teams than Americans - and the list of differences goes on and on (I’ll probably write a post on that too).

That leads me to an article I read yesterday in TheMarker (an Israeli business daily) that quotes a study by Dr. Eli Gimon (sp?). I would have put up a link, but I couldn’t find the article on the web - and both the article and the summary I found were only in Hebrew…

I thought it was telling to see what he actually measured: whether a company that started in a high-tech incubator was still around after at least seven years. That was his definition of success. I am not sure any VC would agree with that definition - but it does make sense in an Israeli context. While most US VCs (and Israeli ones too) are looking for the elusive “home run”, Israel produces very few of those. It mostly produces companies with innovative, solid technology - which is why so many Israeli companies are snapped up by overseas companies - they provide technology innovation, depth and skills. These companies get acquired for anywhere between $10M and $200M - where over $100M is rare and high-end. Very different from the US model…

Bottom line - what Dr. Gimon (sp?) found was that the most important ingredient for success for an Israeli start-up is management skill and experience - not the age, sex, schooling or national origin of the founder. Whether they built the company based on their own technology also made a difference.

I imagine these findings are probably very different than in the US…

Israel Chief Scientist Grants - Should Startups Use Them?

Thursday, July 26th, 2007

I was just looking at the new version of Israel’s “Chief Scientist’s Law” (updated June 2005): the Encouragement of Industrial Research and Development Law 5744-1984. For those of you unfamiliar with this government program - it is essentially a loan program for startups (and other manufacturing/technology companies) that is repayable as royalties, or as a settlement on sale of the company. Companies submit proposals to the Office of the Chief Scientist, which decides on the merits whether to provide funding. The law itself is relevant for both manufacturing and IP-based companies - but for most start-ups, the rules regarding IP are the more interesting.

So is it worth it? Seems like easy money - right? Fill in a few forms, talk to a few people and get some serious funding. So what’s the catch?

In general the law tries to be fair - if there is a sale of the company, the government gets a percentage of the sale value relative to the amount it invested, plus interest. More or less like any VC - except the calculation seems to treat the investment as the relative part of 100% of the company ((government_grants/all_investments_in_company)*sales_price) - so what about the founders and employees? Is their share supposed to come only out of the other investors’ part? Well, for that there is clause 19B.j.1: “The Ministers may affixx rules for calculating the Sale Price in a manner that will take account of the shares that have been issued to entrepreneurs and employees otherwise than for cash” - besides the weird English (well, it isn’t an official translation) - it seems to say that the government doesn’t have to actually leave anything for the founders and employees from their part of the proceeds, but they may decide to…
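To make the arithmetic concrete, here is a minimal sketch (in Python) of the calculation as I read it. The function and variable names are mine, and interest is left out for simplicity - this is my reading of the clause, not an official formula:

    def government_share(government_grants, all_investments, sales_price):
        # The government's cut is proportional to its part of the total
        # investment, applied to the full sale price (interest excluded here).
        return (government_grants / all_investments) * sales_price

    # Example: $2M in grants out of $10M total invested, and a $50M sale:
    # the government would claim (2/10) * $50M = $10M, plus interest -
    # and nothing in the base formula carves out the founders' and
    # employees' part, unless the Ministers affix rules per 19B.j.1.
    print(government_share(2_000_000, 10_000_000, 50_000_000))  # 10000000.0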

However, the biggest problem is the uncertainty the law creates around the transfer of IP outside of Israel in the event of an acquisition. It allows for the transfer based on the decision of a committee (described in the text of the law) - which, according to the wording, doesn’t have to allow the transfer of IP (though in most cases it probably will). Not being able to transfer IP abroad could kill an acquisition - or make it less valuable to the acquiring company. Many international corporations (e.g. IBM) require that all of their IP belong to corporate - so if the IP can’t be transferred, the IP remaining in Israel could be an issue - and will at least be a price negotiation point.

So if you are facing a choice between closing the company (or not getting off the ground) and taking Chief Scientist money - take the money - but make sure you know what you are getting into. Like with any transaction - caveat emptor…

Related reading: Export of Technology Developed With Chief Scientist

Patents and Israeli Startups

Friday, July 20th, 2007

Patents aren’t cheap, but they are important. Besides the time and effort, a patent will cost you somewhere between $5K and $15K. As a startup you’ll need to build a patent portfolio that provides you with real value beyond the obvious one - responding to a VC’s query about the IP protection you have, barriers to entry, etc. So how do you go about creating a patent portfolio? Here are some of the considerations you should take into account when deciding what to patent:

  • Freedom of action - the key is making sure that you can build the products you need to be successful, without anyone being able to stop you.

  • Leverage for partnering - allows you to provide unique partnership value that (hopefully) people are willing to pay for. And it is cool to say “patent pending technology”.

  • Block competition - keep others from doing the same thing. But don’t really count on this, since it is usually relatively hard. Given that there is usually more than one way to do things - how do you tell whether a competitor is actually using your IP without a costly trial?

  • Due diligence and M&A - if worst comes to worst, you can sell your IP portfolio. However, this is really a last resort, since patents without skills are usually not considered all that valuable as an acquisition. That said, some key patents can increase your value in an acquisition.

  • Generate revenue (and especially profit) - this is actually a possible, but very difficult, business model to implement (e.g. Qualcomm). Be honest with yourself - what are the chances that someone will pay big bucks for access to your patent portfolio…

The basic steps in creating your patent are:

  • Invention - Discovering something that is unique and valuable and then deciding which parts are worthy of the time and effort of a patent.

  • Competitive Analysis - Should be done by the inventor, rather than the attorney, since the inventors understand the domain better than anyone. You can find helpful resources at http://www.uspto.gov/ and http://www.google.com/patents.

  • Provisional Patent - doesn’t really provide protection, but does allow you to set a date of invention. For the few hundred bucks it costs, it is usually worth it. In your provisional patent you should document as much as you can about the invention. Don’t forget you only have a year to submit the actual patent - don’t wait until the last minute.

  • Write patent  - Expect to spend significant time writing, explaining and reviewing.

  • Submit and wait - and decide where you would like to submit.

  • Modifications - the patent office will probably come back with questions and issues (though not quickly; it can take a couple of years for a patent to be reviewed).


Structured, Semi-Structured and Unstructured Data in Business Applications

Monday, July 16th, 2007

I was discussing these issues again today - so I thought this old paper must still be relevant…
There is a growing consensus that semi-structured and unstructured data sources contain information critical to the business [1, 3] and must be made accessible both for business intelligence and for operational needs. It is also clear that the amount of relevant unstructured business data is growing, and will continue to grow in the foreseeable future. That trend is converging with the “opening” of business data through standardized XML formats and industry-specific XML data standards (e.g. ACORD in insurance, HL7 in healthcare). These two trends are expanding the types of data that need to be handled by BI and integration tools, and are straining their transformation capabilities. This mismatch between existing transformation capabilities and these emerging needs is opening the door for a new type of “universal” data transformation product that will allow transformations to be defined for all classes of data (structured, semi-structured, unstructured), without writing code, and deployed to any software application or platform architecture.

The Problem with Unstructured Data
The terms semi-structured data and unstructured data can mean different things in different contexts. In this article I will stick to a simple definition for both. First, when I use the terms unstructured or semi-structured data, I mean text-based information, not video or sound, which has no explicit meta-data associated with it, but does have implicit meta-data that can be understood by a human (e.g. a purchase order sent by fax has no explicit meta-data, but a human can extract the relevant data items from the document). The difference between semi-structured and unstructured is whether portions of the data have associated meta-data, or there is no meta-data at all. From now on I will use the term unstructured data to designate both.

The problem is that neither unstructured data nor XML is naturally handled by the current generation of BI and integration tools – especially Extract, Transform, Load (ETL) technologies. ETL grew out of the need to create data warehouses from production databases, which means it is geared towards handling large amounts of relational data and very simple data hierarchies. However, in a world that is moving towards XML, instead of being able to assume well-structured data with little or no hierarchy in both the source and target, the source and target will be very deeply hierarchical and will probably have very different hierarchies. It is clear that the next generation of integration tools will need to do a much better job of inherently supporting both unstructured and XML data.

XML as a Common Denominator
By first extracting the information from unstructured data sources into XML format, it is possible to treat integration of unstructured data similarly to integration with XML. Also, structured data has a “natural” XML structure that can be used to describe it (i.e. a simple reflection of the source structure), so using XML as the common denominator for describing both unstructured and structured data makes integration simpler to manage.

Using XML as the syntax for the different data types allows a simple logical flow for combining structured XML and unstructured data (see Figure 1, and the sketch following the list):
1. Extract data from structured sources into a “natural” XML stream.
2. Extract data from unstructured sources into an XML stream.
3. Transform the two streams as needed (cleansing, lookup, etc.).
4. Map the XML streams into the target XML.
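To make the four steps concrete, here is a minimal sketch in Python. Everything in it is illustrative - the element names, the toy regex “parser” and the helper functions are my own inventions, not any particular product’s API:

    import re
    import xml.etree.ElementTree as ET

    # Step 1: a structured source (e.g. a DB row) has a "natural" XML form.
    def row_to_xml(row):
        rec = ET.Element("customer")
        for column, value in row.items():
            ET.SubElement(rec, column).text = str(value)
        return rec

    # Step 2: parse an unstructured line into the same XML vocabulary.
    # The one-line regex stands in for a real parsing step.
    def line_to_xml(line):
        match = re.match(r"(?P<name>.+?),\s*(?P<branch>\w+)", line)
        rec = ET.Element("customer")
        for field, value in match.groupdict().items():
            ET.SubElement(rec, field).text = value
        return rec

    # Steps 3 and 4: transform (trivial cleansing here) and map both
    # streams into the target XML.
    def merge(records):
        target = ET.Element("customers")
        for rec in records:
            for element in rec.iter():
                if element.text:
                    element.text = element.text.strip()  # cleansing
            target.append(rec)
        return target

    structured = row_to_xml({"name": "Acme Corp", "id": "17"})
    unstructured = line_to_xml("Acme Corp , Haifa")
    print(ET.tostring(merge([structured, unstructured]), encoding="unicode"))

In a real integration tool, steps 1 and 2 would be defined visually, without code - which is exactly the gap discussed next.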

This flow is becoming more and more pervasive in large integration projects, hand-in-hand with the expansion of XML and unstructured data use cases. These use cases fall outside the sweet spot of current ETL and Enterprise Application Integration (EAI) integration architectures – the two standard integration platforms in use today. The reason is that both ETL and EAI have difficulty with steps 1 and 4. Step 1 is problematic since there are very few tools on the market that can easily “parse” unstructured data into XML and allow it to be combined with structured data. Step 4 is problematic since current integration tools have underpowered mapping tools that fall apart when hierarchy changes, or other complex mappings, are needed. All of today’s ETL and EAI tools require hand coding to meet these challenges.

Figure 1: A standard flow for combining structured, unstructured and XML information

The Importance of Parsing
Of course, when working with unstructured data, it is intuitive that parsing the data to extract the relevant information is a basic requirement. Hand-coding a parser is difficult, error-prone and tedious work, which is why parsing needs to be a basic part of any integration tool (ETL or EAI). Given its importance, it is surprising that integration tool vendors have only started to address this requirement.
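As a toy illustration of what “parsing unstructured data into XML” means, here is a sketch that pulls fields out of a fax-style purchase order. The document format, field names and regexes are all invented for the example:

    import re
    import xml.etree.ElementTree as ET

    FAX_TEXT = """PURCHASE ORDER no. 4711
    Ship to: Acme Corp, 12 Herzl St, Tel Aviv
    Qty 40  Widget, blue   @ 3.50
    Qty 12  Gadget, small  @ 9.99"""

    def parse_po(text):
        po = ET.Element("purchaseOrder")
        po.set("number", re.search(r"no\.\s*(\d+)", text).group(1))
        ET.SubElement(po, "shipTo").text = re.search(r"Ship to:\s*(.+)", text).group(1)
        # Each line item becomes a <line> element with qty/price attributes.
        for qty, desc, price in re.findall(r"Qty\s+(\d+)\s+(.+?)\s+@\s+([\d.]+)", text):
            line = ET.SubElement(po, "line", qty=qty, unitPrice=price)
            line.text = desc.strip()
        return po

    print(ET.tostring(parse_po(FAX_TEXT), encoding="unicode"))

Every new document layout means new patterns and new edge cases - which is exactly why hand-coded parsers don’t scale, and why parsing belongs inside the tool.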

The Importance of Mapping
The importance of powerful mapping capabilities is less intuitively obvious. However, in an XML world, mapping capability is critical. As XML becomes more pervasive, XML schemas are looking less like structured schemas and are becoming more complex, hierarchically deep and differentiated.

This means that the ability to manipulate and change the structure of data by complex mapping of XML to XML is becoming more and more critical for integration tools. They will need to provide visual, codeless design environments to allow developers and business analysts to address complex mapping, and a runtime that naturally supports it.
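To show what a hierarchy-changing XML-to-XML mapping involves, here is a minimal sketch that regroups an order-centric document into a customer-centric one. The schemas are invented for the example, and a real mapping tool would express this visually rather than in code:

    import xml.etree.ElementTree as ET

    SOURCE = """<orders>
      <order id="1" customer="Acme"><item sku="A1"/><item sku="B2"/></order>
      <order id="2" customer="Acme"><item sku="A1"/></order>
    </orders>"""

    # Map orders/order/item into customers/customer/skus/sku -
    # a change of hierarchy, not just a renaming of fields.
    def remap(source_xml):
        src = ET.fromstring(source_xml)
        target = ET.Element("customers")
        skus_by_customer = {}
        for order in src.findall("order"):
            name = order.get("customer")
            if name not in skus_by_customer:
                cust = ET.SubElement(target, "customer", name=name)
                skus_by_customer[name] = ET.SubElement(cust, "skus")
            for item in order.findall("item"):
                ET.SubElement(skus_by_customer[name], "sku").text = item.get("sku")
        return target

    print(ET.tostring(remap(SOURCE), encoding="unicode"))

Even this toy regrouping needs state (the per-customer lookup) that simple field-to-field mappers can’t express - which is why deep hierarchy changes break today’s tools.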

Unstructured data is needed both by BI and by application integration, and the transformations needed to get the information out of the unstructured source data can be complex. These use cases will push towards the requirement of “transformation reusability” – the ability to define a transformation once (from unstructured to XML, or from XML to XML) and reuse it in various integration platforms and scenarios. This will cause a further blurring of the lines between the ETL and EAI use cases.

Customer data is a simple example use case: take customer information from various sources, merge it, and then feed the result into an XML application that uses the data. In this case structured customer data is extracted from a database (e.g. a central CRM system) and merged with additional data from unstructured sources (e.g. branch information about that customer stored in a spreadsheet), which is then mapped to create a target XML representation. The resulting XML can be used as input to a customer application, to migrate data to a different customer DB, or to create a file to be shipped to a business partner.

Looking Ahead
Given the trends outlined above, there are some pretty safe bets about where integration tools and platforms will be going in the next 12-24 months:
1. Better support for parsing of unstructured data.
2. Enhanced mapping support, with support for business-analyst end-users.
3. Enhanced support for XML use cases.
4. A blurring of the line separating ETL integration products from EAI integration products (especially around XML and unstructured use cases).
5. The introduction of a new class of integration products focused on the XML and unstructured use cases. These “universal” data transformation products will allow transformations to be defined for all classes of data (structured, semi-structured, unstructured), without writing code, and deployed to any software application or platform architecture.

References
[1] Knightsbridge Solutions LLP, “Top 10 Trends in Business Intelligence for 2006”.
[2] ACM Queue, Vol. 3, No. 8, October 2005 - “Dealing with Semi-Structured Data” (the whole issue).
[3] DM Review, February 2003, “The Problem with Unstructured Data” by Robert Blumberg and Shaku Atre.

Open Source and Freeware

Friday, July 13th, 2007

Selling IT to corporations is hard (well, selling to anybody is hard) and requires a lot of resources (especially around presales - POCs, bake-offs, etc.). So a lot of VCs are looking to the open source model for salvation - not Open Source in its purest form (as described in The Cathedral and the Bazaar), but as a way to lower the cost and friction of selling to the enterprise.

The logic behind it is that the techies (especially in larger organizations) will download the software, play with it, and start using it in a limited way. This can be either as part of a project to solve a specific problem (e.g. we need a new document management system), or just something that interests them as part of their job (why pay for an FTP client and server if you can just use FileZilla, or pay for a database if you can use MySQL). So the thinking is that this solves the issues of penetration (the users find the stuff themselves), expensive POCs (the users will create the POC themselves) and the length of the sales cycle.

The second part of the open source equation is that users will become an active and viable community - both recommending and improving the product directly. Linux is usually given as the prototypical example - with a vibrant user community and a large number of developer/contributors. The allure behind this idea, and the feeling that you have more control (you can modify the code yourself, there is no vendor lock-in, and there is a community of developers/contributors), is what differentiates Open Source from mere freeware.

So how does a company make money off an open source product?

1. Sell services - any large organization that uses a product wants support, and will pay for it.

2. Sell add-ons, upgrades, premium versions - once they get used to the product, they will be willing to pay for added functionality.

What doesn’t seem to work is providing a dumbed-down or partial-functionality product to get people “hooked” and then selling them the full version, or leaving out important features.

So should you turn your enterprise software product open source? Before you do, here are a few things to consider:

1. How will the techies find your product? Is it a well-known category (so that when they need to find, say, a CRM system and search for vendors, your product will show up - e.g. SugarCRM)?

2. Do you really have a technological breakthrough - or are you trying to sell an enhanced version of a well-established product category? If you do have a real, viable technical breakthrough - your code is open, and you can be sure that the first people to download your product will be competitors looking for the “secret sauce”.

3. There are a LOT of open source projects out there - take a look at SourceForge; there are at least 100K projects there. You’ll need to put in effort (probably at least 1 or 2 people) to make sure that you stand out from the crowd and start growing a user community.

4. The open source download-to-sale conversion rate is low - somewhere between 1 in 1,000 and 1 in 10,000 - so you have to make sure that you get enough users to be viable (at those rates, closing even 100 deals means attracting somewhere between 100,000 and 1,000,000 downloads).

5. It is a one-way street: you can make your code open source, but it is practically impossible to take back that decision once the code is out in the wild.

6. Choosing a license - the GPL gives you the most control, but many organizations don’t like its restrictions. The Apache license seems to be universally acceptable - but gives you almost no control.

7. You need to decide what you will do with user submissions - and make sure you get the copyright for everything that is submitted.

Mashups and Situational Apps

Saturday, July 7th, 2007

Mashups exist both for prosumers (a term I first heard from Clare Hart at the “Buying & Selling eContent” conference) - high-end consumers and creators of content - and for scripters (my own term, since I am not sure what exactly to call these high-end users - for example, the departmental Excel gurus who create and manage departmental Excel scripts and templates).

The search for tools that empower these domain experts to create applications without programming has been around since at least the 80s (i.e. 4th-generation programming languages), which led to various new forms of application creation - but the only one that has really evolved into a “general use” corporate tool for non-programmers is Excel (though it isn’t really a 4GL). The reasoning behind those tools was that if you put the power to create applications into the hands of the domain expert, you will get better applications, faster. One new evolution of these types of tools is Domain Specific Languages (DSLs), which make programming easier by focusing on a specific domain and building languages tailored to that domain.

So much for the history lesson - but what does that have to do with Mashups and Situational Apps? Well, they both focus on pulling together different data sources and combining them in new ways in order to discover new insights. Mashups seems to be the preferred web term; Situational Apps is a term coined by IBM for the same type of application in a corporate setting.

These types of applications (and application builders) have a lot in common (see the sketch after this list):

1. They all start from a data feed of some sort, either RSS or XML.

2. They focus on ease of use over robustness.

3. They allow users to create applications easily, to solve short-term problems.
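As a minimal sketch of what that means in practice, here is a toy “mashup” in Python that pulls items from two feeds, filters them by category, and emits a new combined feed. The feeds and category names are invented for the example - real tools like Yahoo Pipes do this visually:

    import xml.etree.ElementTree as ET

    # Two toy RSS feeds standing in for real data sources.
    FEED_A = """<rss><channel>
      <item><title>Office opens in Haifa</title><category>news</category></item>
      <item><title>Q2 results</title><category>finance</category></item>
    </channel></rss>"""
    FEED_B = """<rss><channel>
      <item><title>New hires in Haifa</title><category>news</category></item>
    </channel></rss>"""

    # The essence of a mashup: combine several feeds, filter the result,
    # and emit a new feed that the next layer up can consume.
    def mash(feeds, category):
        out = ET.Element("rss")
        channel = ET.SubElement(out, "channel")
        for feed in feeds:
            for item in ET.fromstring(feed).iter("item"):
                if item.findtext("category") == category:
                    channel.append(item)
        return out

    print(ET.tostring(mash([FEED_A, FEED_B], "news"), encoding="unicode"))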

Many of these tools are experimental and in the Alpha or Beta stage, or are research projects of one type or another (QEDWiki, Microsoft Popfly, Yahoo Pipes, Intel MashMaker, Google Mashup Editor). As these tools start maturing, I think we will see a layered architecture emerging, especially for the corporate versions of these tools. Here is how I see the corporate architecture layers evolving:

[Chart: Mashup architecture layers]

I think the layers are pretty self-explanatory, except for the top-most Universal Feed Layer, which is simply an easy way to use the new “mashup” data in other ways (e.g. in other mashups, or on mobile).

If you look at the stack, there are players in all layers (though most of the mashup tools I mentioned above are in the presentation and mashup layers), and the stack as a whole competes very nicely with a lot of current corporate portal tools - but with a much nicer user experience, one that users are already familiar with from the web.

One important issue that is sometimes overlooked is that mashups require feeds - and even though the number of web feeds is growing, there is still a huge lack of appropriate feeds. Since most mashup makers rely on existing feeds, they have a problem when a required feed is not available. Even if the number of available feeds explodes exponentially, there is no way for a site provider to know how people would like to use its feeds - so for mashups to take off, the creation of appropriate filtered feeds is going to take on new importance, and the creation of these feeds is going to be a huge niche. Currently “Dapper” is the only tool that fills all the needs of the “universal feed layer” - site independence, web-based, and an easy-to-use, intuitive interface for prosumers and scripters.