
Archive for July, 2007

Israel Chief Scientist Grants - Should Startups Use Them?

Thursday, July 26th, 2007

I was just looking at the new version of Israel’s “Chief Scientist’s Law” (the new version - updated June 2005), the Encouragement of Industrial Research and Development Law 5744-1984. For those of you unfamiliar with this government program - it is essentially a loan program for startups (and other manufacturing/technology companies) that is repayable as royalties or as a settlement on sale of the company. Companies submit proposals to the Office of the Chief Scientist, which decides on the merits whether to provide funding. The law itself is relevant for both manufacturing and IP-based companies - but for most startups, the rules regarding IP are the more interesting.

So is it worth it? Seems like easy money - right? Fill in a few forms, talk to a few people and get some serious funding. So what’s the catch?

In general the law tries to be fair - if there is a sale of the company, the government gets a percentage of the sale value relative to the amount invested, plus interest. More or less like any VC - except the calculation seems to treat the investment as its relative part of 100% of the company ((government_grants/all_investments_in_company)*sales_price) - so what about the founders and employees? Is their share supposed to come only out of the other investors’ part? Well, for that there is clause 19B.j.1: “The Ministers may affixx rules for calculating the Sale Price in a manner that will take account of the shares that have been issued to entrepreneurs and employees otherwise than for cash” - besides the weird English (well, it isn’t an official translation) - it seems to say that the government doesn’t have to actually leave anything for the founders and employees from their part of the proceeds, but they may decide to…
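To make the arithmetic concrete, here is a toy calculation (not legal advice; all the numbers are hypothetical) of what a naive reading of that formula would give the government on a sale:

```python
# Toy illustration of the repayment formula described above - the state's share
# is its grants relative to all investments in the company, applied to the sale
# price. All figures are hypothetical; interest and any adjustment for
# founder/employee shares under clause 19B.j.1 are left out.

def government_share(grants: float, all_investments: float, sale_price: float) -> float:
    """Naive reading: (government_grants / all_investments_in_company) * sales_price."""
    return (grants / all_investments) * sale_price

grants = 2_000_000          # hypothetical Chief Scientist funding
vc_investment = 8_000_000   # hypothetical cash invested by VCs
sale_price = 50_000_000     # hypothetical acquisition price

share = government_share(grants, grants + vc_investment, sale_price)
print(f"Government share: ${share:,.0f}")   # $10,000,000 on these numbers
```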

However, the biggest problem is the uncertainty that the law creates around the transfer of IP outside of Israel in the event of an acquisition. The law allows for the transfer based on the decision of a committee (described in the text of the law) - which, according to the wording, doesn’t have to allow the transfer of IP (though in most cases it probably will). Not being able to transfer IP abroad could kill an acquisition - or make it less valuable to the acquiring company. Many international corporations (e.g. IBM) require that all of their IP belong to corporate - so if the IP can’t be transferred, the IP remaining in Israel could be an issue - and will at least be a price negotiation point.

So if you are facing a choice between closing the company (or not getting off the ground) and taking Chief Scientist money - take the money - but make sure you know what you are getting into. Like with any transaction - caveat emptor…

Related reading: Export of Technology Developed With Chief Scientist

Walled Gardens on the Web (and elsewhere)

Wednesday, July 25th, 2007

Facebook has been getting a lot of press lately - one discussion item that caught my eye was a number of blogs and discussions around whether Facebook can thrive as a “walled garden” - a closed or exclusive set of information services provided for users (see the Wikipedia entry).

The main issues raised were the viability of a walled garden on the internet and the pluses and minuses of walled gardens - both for the provider and for the consumer (you can find an interesting discussion at http://www.micropersuasion.com/2007/06/walled-gardens-.html). Most of the examples talk about AOL and how it failed as a walled garden, as did the cellular providers that tried to limit WAP access to only certain sites.

I am not sure I actually understand the point - since the whole internet is just sets of walled gardens - how many websites let you use their information freely? Very few have comprehensive (or any) APIs; more have feeds that give you limited access to the information actually available. So how is Facebook any different?

One key difference is that, as opposed to most sites, Facebook has collected your own, personal information (or that of your friends). People want to be able to do with their own information whatever they please. So I think the right analogy isn’t the AOL walled garden approach, but rather something even more “ancient” - the client-server revolution of the 80’s. For years after GUIs and PCs were available it was still very hard to get your own organizational information out of various legacy systems to use in new applications. Even though the information was yours - you couldn’t get at it to use as you liked - either because the vendors couldn’t keep pace with the emerging technologies - or didn’t want to (so they could keep it “hostage”). This gave rise to an imperfect, but usable, technical solution that let people get at their information even though the system didn’t have the capability - a whole new set of “screen scraping” technologies that emulated users to get the desired information out of applications.

So I think that the same will happen here - either the walled gardens will open up or people will figure out how to get at the information some other way.

Patents and Israeli Startups

Friday, July 20th, 2007

Patents aren’t cheap, but they are important. Besides the time and effort, a patent will cost you somewhere between $5K-$15K. As a startup you’ll need to worry about a patent portfolio that provides you with real value beyond the obvious one - responding to a VC’s questions about your IP protection, barriers to entry, etc. So how do you go about creating a patent portfolio? Here are some of the considerations you should take into account when deciding what to patent:

  • Freedom of action - making sure that you can build the products you need to be successful, without anyone being able to stop you.

  • Leverage for partnering - allows you to provide unique partnership value that (hopefully) people are willing to pay for. And it is cool to say “patent pending technology”.

  • Block competition - keep others from doing the same thing. But don’t really count on this, since it is usually relatively hard. Given that there is usually more than one way to do things - how do you tell if a competitor is actually using your IP without a costly trial?

  • Due diligence and M&A - if worst comes to worst, you can sell your IP portfolio. This is really a last resort, since patents without skills are usually not considered all that valuable as an acquisition - but some key patents can increase your value in an acquisition.
  • Generate revenue (and especially profit) - this is actually a possible, but very difficult, business model to implement (e.g. Qualcomm). Be honest with yourself - what are the chances that someone will pay big bucks for access to your patent portfolio…

The basic steps in creating your patent are:

  • Invention - Discovering something that is unique and valuable and then deciding which parts are worthy of the time and effort of a patent.

  • Competitive Analysis - Should be done by the inventor rather than the attorney, since the inventors understand the domain better than anyone. You can find helpful resources at http://www.uspto.gov/ and http://www.google.com/patents.

  • Provisional Patent - doesn’t really provide protection, but it does allow you to set a date of invention. For the few hundred bucks it costs, it is usually worth it. In your provisional patent you should document as much as you can about the invention. Don’t forget you only have a year to submit the actual patent - don’t wait until the last minute.

  • Write patent  - Expect to spend significant time writing, explaining and reviewing.

  • Submit and wait - and decide where you would like to submit.

  • Modifications - the patent office will probably come back with questions and issues (though not quickly; it can take a couple of years for a patent to be reviewed).


The Long Road towards Integration – Part 3

Thursday, July 19th, 2007

Well, I guess you learn something new every day.  

A super-critical requirement is a headquarters SPOC (Single Point of Contact) for any decision concerning the remote location that comes out of HQ and is not day-to-day business as usual. Someone who cares about the overall health and well-being of the lab, even for things that don’t directly affect them or their department. This is especially true if the remote location consists of a conglomeration of smallish groups reporting to different HQ functions. Otherwise, the different HQ departments will try to optimize things for their own group - but these local optimizations can have a huge impact on the overall health of the lab - which won’t be noticed until it is too late (unless there is a SPOC).

For example, two HQ departments could each decide to streamline their organizations and remove some positions - say, one person each in the remote location. Even though each decision on its own makes sense - taken together they could have a very negative effect on the remote location - lowering morale and motivation. Cleaning up the fallout could take months….

So make sure you have a SPOC. Live long and prosper!

Structured, Semi-Structured and Unstructured Data in Business Applications

Monday, July 16th, 2007

I was discussing these issues again today - so I thought this old paper must still be relevant….
 
There is a growing consensus that semi-structured and unstructured data sources contain information critical to the business [1, 3] and must be made accessible both for business intelligence and operational needs. It is also clear that the amount of relevant unstructured business data is growing, and will continue to grow in the foreseeable future. That trend is converging with the “opening” of business data through standardized XML formats and industry-specific XML data standards (e.g. ACORD in insurance, HL7 in healthcare). These two trends are expanding the types of data that need to be handled by BI and integration tools, and are straining their transformation capabilities. This mismatch between existing transformation capabilities and these emerging needs is opening the door for a new type of “universal” data transformation products that will allow transformations to be defined for all classes of data (e.g., structured, semi-structured, unstructured), without writing code, and deployed to any software application or platform architecture.

 The Problem with Unstructured Data
The terms semi-structured data and unstructured data can mean different things in different contexts. In this article I will stick to a simple definition for both. First, when I use the terms unstructured or semi-structured data I mean text-based information, not video or sound, which has no explicit meta-data associated with it, but does have implicit meta-data that can be understood by a human (e.g. a purchase order sent by fax has no explicit meta-data, but a human can extract the relevant data items from the document). The difference between semi-structured and unstructured is whether portions of the data have associated meta-data, or there is no meta-data at all. From now on I will use the term unstructured data to designate both semi-structured and unstructured data.

The problem is that both unstructured data and XML are not naturally handled by the current generation of BI and integration tools – especially Extract, Transform, Load (ETL) technologies. ETL grew out of the need to create data warehouses from production databases, which means that it is geared towards handling large amounts of relational data and very simple data hierarchies. However, in a world that is moving towards XML, instead of being able to assume well-structured data with little or no hierarchy in both the source and target, the source and target will be very deeply hierarchical and will probably have very different hierarchies. It is clear that the next generation of integration tools will need to do a much better job of inherently supporting both unstructured and XML data.

XML as a Common Denominator
 By first extracting the information from unstructured data sources into XML format, it is possible to treat integration of unstructured data similarly to integration with XML. Also, structured data has a “natural” XML structure that can be used to describe it (i.e. a simple reflection of the source structure) so using XML as the common denominator for describing unstructured data and structured data makes integration simpler to manage.

Using XML as the syntax for the different data types allows a simple logical flow for combining structured XML and unstructured data (see Figure 1):
1. extract data from structured sources into a “natural” XML stream,
2. extract data from unstructured sources into an XML stream,
3. transform the two streams as needed (cleansing, lookup etc.)
4. map the XMLs into the target XML.

This flow is becoming more and more pervasive in large integration projects, hand-in-hand with the expansion of XML and unstructured data use-cases. These use cases fall outside the sweet spot of current ETL and Enterprise Application Integration (EAI) integration architectures – the two standard integration platforms in use today. The reason is that both ETL and EAI have difficulty with steps 1 and 4. Step 1 is problematic since there are very few tools on the market that can easily “parse” unstructured data into XML and allow it to be combined with structured data. Step 4 is also problematic since current integration tools have underpowered mapping tools that fall apart when hierarchy changes or other complex mappings are needed. All of today’s ETL and EAI tools require hand coding to meet these challenges.

Figure 1: A standard flow for combining structured, unstructured and XML information

The Importance of Parsing
Of course, when working with unstructured data, it is intuitive that parsing the data to extract the relevant information is a basic requirement. Hand-coding a parser is difficult, error-prone and tedious work, which is why it needs to be a basic part of any integration tool (ETL or EAI). Given its importance, it is surprising that integration tool vendors have only started to address this requirement.
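To make step 1 concrete, here is a minimal sketch of such a parsing step (Python standard library only; the purchase-order layout and field names are invented for illustration). A real integration tool would use reusable, grammar-driven parsers rather than ad-hoc regular expressions, but the goal is the same: turn implicit meta-data into explicit XML.

```python
# Minimal sketch: "parse" a fax-style purchase order (semi-structured text)
# into XML so it can join an integration flow. Layout and field names are
# invented for illustration.
import re
import xml.etree.ElementTree as ET

raw = """Purchase Order 4711
Customer: Acme Corp
Item: Widget  Qty: 250  Unit Price: 3.20"""

order = ET.Element("PurchaseOrder", id=re.search(r"Purchase Order (\d+)", raw).group(1))
ET.SubElement(order, "Customer").text = re.search(r"Customer:\s*(.+)", raw).group(1)

item = ET.SubElement(order, "Item")
m = re.search(r"Item:\s*(\S+)\s+Qty:\s*(\d+)\s+Unit Price:\s*([\d.]+)", raw)
item.set("name", m.group(1))
ET.SubElement(item, "Quantity").text = m.group(2)
ET.SubElement(item, "UnitPrice").text = m.group(3)

print(ET.tostring(order, encoding="unicode"))
# <PurchaseOrder id="4711"><Customer>Acme Corp</Customer>...</PurchaseOrder>
```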

 The Importance of Mapping
 The importance of powerful mapping capabilities is less intuitively obvious. However, in an XML world, mapping capability is critical. As XML is becoming more pervasive, XML schemas are looking less like structured schemas and are becoming more complex, hierarchically deep and differentiated.

This means that the ability to manipulate and change the structure of data by complex mapping of XML to XML is becoming more and more critical for integration tools. They will need to provide visual, codeless design environments to allow developers and business analysts to address complex mapping, and a runtime that naturally supports it.
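As a small illustration of the kind of hierarchy change meant here, the sketch below (element names are my own invention) regroups a flat order list into a country-and-customer hierarchy. A mapping tool would let you express this visually and declaratively; the hand-written code just shows what the transformation has to do.

```python
# Sketch of step 4: remap one XML hierarchy into a different one.
import xml.etree.ElementTree as ET

source = ET.fromstring("""
<Orders>
  <Order id="4711" customer="Acme Corp" country="US">
    <Line sku="W-1" qty="250"/>
    <Line sku="W-2" qty="10"/>
  </Order>
</Orders>""")

# Target groups order lines under customers, under countries - a deeper,
# differently shaped hierarchy than the source.
target = ET.Element("Customers")
by_country = {}
for order in source.findall("Order"):
    code = order.get("country")
    if code not in by_country:
        by_country[code] = ET.SubElement(target, "Country", code=code)
    customer = ET.SubElement(by_country[code], "Customer", name=order.get("customer"))
    for line in order.findall("Line"):
        ET.SubElement(customer, "OrderLine", order=order.get("id"),
                      sku=line.get("sku"), qty=line.get("qty"))

print(ET.tostring(target, encoding="unicode"))
```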

Since unstructured data is needed by both BI and application integration, and the transformations needed to get the information out of the unstructured source data can be complex, these use cases will push towards the requirement of “transformation reusability” – the ability to define a transformation once (from unstructured to XML, or from XML to XML) and reuse it in various integration platforms and scenarios. This will cause a further blurring of the lines between the ETL and EAI use cases.

Customer data is a simple example use case. The example is to take customer information from various sources, merge it and then put the result into an XML application that uses the data. In this case structured customer data is extracted from a database (e.g. a central CRM system) and merged with additional data from unstructured sources (e.g. branch information about that customer stored in a spreadsheet), which is then mapped to create a target XML representation. The resulting XML can be used as input to a customer application, to migrate data to a different customer DB, or to create a file to be shipped to a business partner.
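A rough sketch of that customer example, with the Python standard library standing in for the real systems (the table, spreadsheet columns and element names are all assumptions):

```python
# Merge a CRM row (structured) with branch data from a spreadsheet export
# (semi-structured CSV), then map the result into a target XML document.
import csv
import io
import sqlite3
import xml.etree.ElementTree as ET

# Stand-in for the central CRM system.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id TEXT, name TEXT, segment TEXT)")
db.execute("INSERT INTO customers VALUES ('C-17', 'Acme Corp', 'Enterprise')")

# Stand-in for a branch spreadsheet exported as CSV.
branch_csv = io.StringIO("customer_id,branch,contact\nC-17,Tel Aviv,Dana Levi\n")
branch = {row["customer_id"]: row for row in csv.DictReader(branch_csv)}

root = ET.Element("Customers")
for cid, name, segment in db.execute("SELECT id, name, segment FROM customers"):
    cust = ET.SubElement(root, "Customer", id=cid, segment=segment)
    ET.SubElement(cust, "Name").text = name
    extra = branch.get(cid)
    if extra:  # merge the spreadsheet side when available
        b = ET.SubElement(cust, "Branch", name=extra["branch"])
        ET.SubElement(b, "Contact").text = extra["contact"]

print(ET.tostring(root, encoding="unicode"))
```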

Looking Ahead
 Given the trends outlined above there are some pretty safe bets about where integration tools and platforms will be going in the next 12-24 months:
1. Better support for parsing of unstructured data.
2. Enhanced mapping support, with support for business analyst end-users
3. Enhanced support for XML use cases.
4. A blurring of the line separating ETL integration products from EAI integration products (especially around XML and unstructured use cases)
5. Introduction of a new class of integration products that focus on the XML and unstructured use case. These “universal” data transformation products will allow transformations to be defined for all classes of data (e.g., structured, semi-structured, unstructured), without writing code, and deployed to any software application or platform architecture.

References
[1] Knightsbridge Solutions LLP, “Top 10 Trends in Business Intelligence for 2006.”
[2] ACM Queue, Vol. 3, No. 8 (October 2005), “Dealing with Semi-Structured Data” (the whole issue).
[3] Robert Blumberg and Shaku Atre, “The Problem with Unstructured Data,” DM Review, February 2003.

Open Source and Freeware

Friday, July 13th, 2007

Selling IT to corporations is hard (well, selling to anybody is hard) and requires a lot of resources (especially around presale - POCs, bake-offs, etc.). So a lot of VCs are looking to the open source model for salvation - not Open Source in its purest form (as published in The Cathedral and the Bazaar), but as a way to lower the cost and friction of selling to the enterprise.

The logic behind it is that the techies (especially in larger organizations) will download the software, play with it, and start using it in a limited way. This can be either part of a project to solve a specific problem (e.g. we need a new document management system), or just something that interests them as part of their job (why pay for an FTP client and server if you can just use FileZilla, or pay for a database if you can use MySQL?). So the thinking is that this solves the issues of penetration (the users find the stuff themselves), expensive POCs (the users will create the POC themselves) and the length of the sales cycle.

The second part of the open source equation is that users will become an active and viable community - both recommending and improving the product directly. Linux is usually given as the prototypical example - with a vibrant user community and a large number of developer/contributors. The allure behind this idea, and the feeling that you have more control (you can modify the code yourself, there is no vendor tie-in, and there is a community of developers/contributors), is what differentiates Open Source from just Freeware.

So how does a company make money off an open source product?

1. Sell services - any large organization that uses a product wants support, and will pay for it.

2. Sell add-ons, upgrades, premium versions - once they get used to the product, they will be willing to pay for added functionality.

What doesn’t seem to work is providing a dumbed-down or partial-functionality product to get people “hooked” and then selling them the full version, or leaving out important features.

So should you turn your enterprise software product open source? Before you do, here are a few things to consider:

1. How will the techies find your product? Is it a well-known category, so that when they need to find a CRM system and search for vendors, your product will show up (e.g. SugarCRM)?

2. Do you really have a technological breakthrough - or are you trying to sell an enhanced version of a well-established product category? If you do have a real, viable technical breakthrough - your code is open, and you can be sure that the first people to download your product will be competitors looking for the “secret sauce”.

3. There are a LOT of Open Source projects out there - take a look at SourceForge, which hosts at least 100K projects. You’ll need to put in effort (probably at least 1 or 2 people) to make sure that you stand out from the crowd and start growing a user community.

4. The open source download-to-sale conversion rate is low - somewhere between 1 in 1,000 and 1 in 10,000 - so you have to make sure that you get enough users to be viable.

5. It is a one-way street: you can make your code open source, but it is practically impossible to take back that decision once the code is out in the wild.

6. Choosing a license - GPL gives you the most control, but many organizations don’t like its restrictions. The Apache license seems to be universally acceptable - but gives you almost no control.

7. You need to decide what you will do with user submissions - and make sure you get the copyright for everything that is submitted.

Mashups and Situational Apps

Saturday, July 7th, 2007

Mashups are relevant both for prosumers (a new term that I first heard from Clare Hart at the “Buying & Selling eContent” conference) - high-end consumers and creators of content - and for scripters (my own term, since I am not sure what exactly to call these high-end users - for example, the departmental Excel gurus that create and manage departmental Excel scripts and templates).

The search for tools that empower these domain experts to create applications without programming has been around since at least the 80s (i.e. 4th-generation programming languages) - which led to various new forms of application creation - but the only one that has really evolved into a “general use” corporate tool for non-programmers is Excel (though not really a 4GL). The reasoning behind those tools was that if you put the power to create applications into the hands of the domain expert, you will get better applications, faster. One new evolution of these types of tools is Domain Specific Languages (DSLs), which make programming easier by focusing on a specific domain and building languages that are tailored to that domain.

So much for the history lesson - but what does that have to do with Mashups and Situational Apps? Well, they both focus on pulling together different data sources and combining them in new ways in order to discover new insights. Mashups seems to be the preferred web term; Situational Apps is a term coined by IBM for the same type of application in a corporate setting.

These types of applications (and application builders) have a lot in common:

1. They all start from a data feed of some sort, either RSS or XML (see the sketch after this list).

2. They focus on ease of use over robustness.

3. They allow users to create applications easily to solve short-term problems.
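Here is the promised sketch of what those characteristics add up to (Python standard library only; the feed contents are inline samples so it runs self-contained, whereas a real mashup would fetch them with urllib.request.urlopen): pull items from two RSS feeds, keep only the ones matching a keyword, and merge them into one list.

```python
# Toy mashup: combine two RSS feeds, filter by keyword, merge the results.
import xml.etree.ElementTree as ET

FEED_A = """<rss><channel>
  <item><title>Startup raises funding</title><link>http://example.com/a1</link></item>
  <item><title>Weather update</title><link>http://example.com/a2</link></item>
</channel></rss>"""

FEED_B = """<rss><channel>
  <item><title>Funding for open source tools</title><link>http://example.com/b1</link></item>
</channel></rss>"""

def items(feed_xml):
    """Yield (title, link) pairs from an RSS document."""
    for item in ET.fromstring(feed_xml).iter("item"):
        yield item.findtext("title"), item.findtext("link")

keyword = "funding"
mashup = [(title, link)
          for feed in (FEED_A, FEED_B)
          for title, link in items(feed)
          if keyword in title.lower()]

for title, link in mashup:
    print(f"{title} -> {link}")
```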

Many of these tools are experimental and in the Alpha or Beta stage, or are research projects of one type or another (QEDWiki, Microsoft Popfly, Yahoo Pipes, Intel MashMaker, Google Mashup Editor). As these tools start maturing, I think we will see a layered architecture emerging, especially for the corporate versions of these tools. Here is how I see the corporate architecture layers evolving:

Mashup Layers

I think the layers are pretty self-explanatory, except for the top-most Universal Feed Layer, which is simply an easy way to use the new “mashup” data in other ways (e.g. other mashups, mobile).

If you look at the stack there are players in all layers (though most of the mashup tools I mentioned above are in the presentation and mashup layers), and the stack as a whole competes very nicely with a lot of current corporate portal tools - but with a much nicer user experience - one that users are already familiar with from the web.

One important issue that is sometimes overlooked is that mashups require feeds - and even though the number of web feeds is growing, there is still a huge lack of appropriate feeds. Since most mashup makers rely on existing feeds, they have a problem when a required feed is not available. Even if the number of available feeds explodes exponentially, there is no way for a site provider to know how people would like to use the feeds - so for mashups to take off, the creation of appropriate filtered feeds is going to take on new importance, and the creation of these feeds is going to be a huge niche. Currently “Dapper” is the only tool that fills all the needs of the “universal feed layer” - site independence, web-based, and an easy-to-use, intuitive interface for prosumers and scripters.
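As a crude sketch of what that “universal feed layer” has to do, the toy below scrapes a page that offers no feed and publishes the filtered items as RSS (the HTML and the extraction pattern are invented; a tool like Dapper does this without writing any code).

```python
# Toy "universal feed layer": extract items from a page with no feed,
# filter them, and emit RSS. HTML sample and pattern are invented.
import re
import xml.etree.ElementTree as ET

page = """<html><body>
  <div class="story"><a href="/news/1">Funding round closed</a></div>
  <div class="story"><a href="/news/2">Office party photos</a></div>
</body></html>"""

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Scraped: funding stories"

for href, title in re.findall(r'<a href="([^"]+)">([^<]+)</a>', page):
    if "funding" in title.lower():   # the filtered view the site never offered
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = "http://example.com" + href

print(ET.tostring(rss, encoding="unicode"))
```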