
Archive for the ‘software’ Category

Enterprise Software Startup Valuation

Sunday, February 10th, 2008

Not a blockbuster headline, but I just noticed that Workday, Inc. bought Cape Clear. I had never heard of Workday before; they sell various enterprise applications using a SaaS model. I had heard of Cape Clear - they are (were?) a pretty well known Irish startup in the Enterprise Integration/SOA space, providing Enterprise Service Bus middleware.

Most of the articles I have seen about the acquisition focus on how integration with existing systems is a key capability for SaaS players, and on how the enterprise middleware space is rapidly consolidating. Makes sense, but for me what is more interesting is what this says about enterprise software startups and their valuation. The details of the deal are confidential, but I am guessing that the deal isn't a blockbuster (given the size of the acquirer), and I'd be surprised if it was for more than $50M (maybe much, much less), all stock. Now, according to Joe Drumgoole's blog, about $48M has been invested in Cape Clear over the years - so a $50M exit doesn't leave much for anyone. Here is his list of Cape Clear investments:

  • $2 million in seed funding from ACT in 2000
  • $16 million in Series A funding from Accel and Greylock in 2001
  • $10 million in Series B funding from Accel and Greylock in 2003
  • $5-10 million: a phantom Series C round raised as a set of warrants among existing investors. It was never press released and there is no mention of it on the net.
  • $15 million in a Series D round in the last few weeks (April 2006 - Jacob) with InterWest

Cape Clear seems to have been a "technology" acquisition for Workday - which brings me to my point about enterprise software startup valuations. It is very difficult to become a stand-alone player in enterprise software (especially with all of the consolidation going on), and if you aren't a viable stand-alone enterprise software company, you need to plan for the fact that you will be acquired - probably for the technology. To make sure that a technology acquisition is a viable exit path, you need to make sure your valuation isn't too high in the early stages. Enterprise technology companies seem to sell for $15M-$100M, depending on how strategic they are to the acquirer - but they require a lot of money in the later sales and marketing phases.

So make sure you don't overvalue your company early on, or it will come back to bite you later.

Software as a Service and Hardware Virtualization

Thursday, November 15th, 2007

I have been musing lately about the connection between software delivered as a service and hardware virtualization. For me they are two sides of the same coin (I guess we could just as easily have called them Hardware-as-a-Service and Software Virtualization). The simplest way to implement a SaaS'ified version of an existing application is via virtualization – just run as many instances of the application (or application components) as needed, each in its own virtual machine.

The downside is that this may not be very cost effective. First, you need to be able to easily deploy and manage new instances of the application within your virtual environment (hence VMware's acquisitions of Akimbi and Dunes); second, you need an appropriate pricing model for the various component technologies that make up the application; and third, you need the ability to easily monitor the virtual vs. real machine resources the application requires.

It is not always easy to reconcile software component pricing models with virtualization. Many traditional software vendors charge per instance of their application deployed on a server. So if you want to deploy a DBMS for each instance of the application, the price can be quite prohibitive. It probably depends heavily on the number of users per instance, but for many SaaS applications there are only a few users per instance. You could rewrite the application to use a shared DBMS, having each application instance use a different DB in the DBMS – but rewriting an application is very costly.
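To make the shared-DBMS idea concrete, here is a minimal sketch: one database per tenant inside a single server, selected by a router object, instead of a full DBMS inside every virtual machine. The tenant names, the sqlite backend and the TenantRouter class are all illustrative, not taken from any particular product.

import sqlite3

class TenantRouter:
    # Hands out a connection bound to the calling tenant's own database.
    def __init__(self, data_dir="."):
        self.data_dir = data_dir

    def connection_for(self, tenant_id):
        # One file-backed database per tenant; a production system would more
        # likely use one schema per tenant on a shared database server.
        return sqlite3.connect(f"{self.data_dir}/{tenant_id}.db")

router = TenantRouter()
with router.connection_for("acme") as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS invoices (id INTEGER, total REAL)")
    conn.execute("INSERT INTO invoices VALUES (1, 99.5)")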

Monitoring all those instances isn't easy either. You somehow need to correlate all the virtual instances with the physical resources on the machine. One of the key reasons to virtualize is to use machine resources (especially the CPU) more effectively – which means you want to load as many instances as possible before having to buy a new machine – very different from what is available from today's monitoring tools. A good overview of these issues by Bernd Harzog can be found here.

So what's my point? I think that we'll see SaaS take off when it is really easy to take an existing app and create a SaaS'ified version of it – and that will happen when it is as easy as taking a "virtual version" of the application and deploying it for each "tenant" as needed. We are still missing some pieces of the puzzle for that to happen, but my guess is that we will see it happen in the next couple of years.

Data Integration and Mashups

Saturday, November 10th, 2007

I am attending Mashup Camp and Mashup University here in Dublin (the weather reminds me of a poem a friend of mine wrote about Boston in February - gray, gray, gray, Gray!). IBM was here in force at Mashup University, giving three good presentations (along with live demos) on their mashup stack. They were saying that products based on this stack should be coming out early next year (we'll see, since from my experience it can be very difficult to get a new product out in an emerging area at IBM - since you can't prove that the space/product is valuable enough). They have decided to pull together a whole stack for the enterprise mashup space (the content management layer, the mashup layer and the presentation layer - see my previous post on mashup layers).

One thing that struck me, especially when listening to the IBM QEDWiki and Mashup Hub presentations, is how much this upcoming set of tools for enterprise mashup creation is starting to resemble "traditional" enterprise data integration tools (e.g. Informatica and IBM/Ascential). These new tools allow easy extraction from various data sources (including legacy data like CICS, web data and DBs), and easy wiring of data flows between operator nodes (sort of a bus concept). The end result isn't a DB load as with ETL, but rather a web page to display. There is no real cleansing capability yet, but my guess is that it will come as just another web service that can be called as a node in the flow. So mashups are like the lightweight cousin of ETL - for display rather than bulk-load purposes. It will be interesting to follow how ETL tooling and mashup tooling come together at IBM, especially since both the ETL and mashup tools are part of the Data Integration group at IBM.
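As a rough illustration of this "lightweight ETL" shape - source nodes feeding operator nodes, ending in a rendered page instead of a bulk load - here is a minimal sketch. The feed content, keyword and node names are invented for illustration and are not taken from the IBM (or any other vendor's) tools.

import xml.etree.ElementTree as ET

FEED = """<rss><channel>
  <item><title>Enterprise mashups at IBM</title><link>http://example.com/1</link></item>
  <item><title>Weather in Dublin</title><link>http://example.com/2</link></item>
</channel></rss>"""

def extract_feed(feed_xml):
    # Source node: in a real mashup this would pull the feed over HTTP.
    for item in ET.fromstring(feed_xml).iter("item"):
        yield item.findtext("title"), item.findtext("link")

def filter_node(items, keyword):
    # Operator node: keep only items whose title mentions the keyword.
    return ((title, link) for title, link in items if keyword.lower() in title.lower())

def render_node(items):
    # Sink node: the end result is a web page to display, not a DB load.
    rows = "".join(f'<li><a href="{link}">{title}</a></li>' for title, link in items)
    return f"<ul>{rows}</ul>"

print(render_node(filter_node(extract_feed(FEED), "mashup")))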

Microsoft seems to be taking another route - a more lightweight, desktop-like approach focused on the presentation layer. Popfly is a tool that also lets you wire together data extraction nodes (only web data as far as I could tell, though it could be extended to other data types) and manipulation nodes – as you link the nodes, the output of one node becomes the input of the next, and so on. It seemed very presentation oriented, and I didn't see any Yahoo! Pipes-like functionality or legacy extraction capability.

Serena is presenting tomorrow; it will be interesting to see what direction they have taken.

Subprime Mortgage Crisis and Startups

Monday, October 22nd, 2007

I am not sure why we haven't heard more about the effect of the subprime mortgage crisis on startups and VCs, but it seems clear to me that we will see one. The bad 3Q results (and bad 4Q forecasts) for many financial institutions will have a delayed effect on many later-stage enterprise software startups.

The finance industry is a very large consumer of technology, and in many cases is willing to be an early adopter of interesting technology. Of course, as with any downturn, new initiatives are an easy target, and usually the first to go. Many enterprise software startups pin their hopes on selling their products to large US financial institutions. Those that have already signed deals - congratulations! Those that have deal prospects in the pipeline but haven't signed yet - don't count your chickens, at least until the banks start growing again.

No matter how the larger economic issues play out - the subprime mortgage crisis will be a bad deal for startups.

The Death of Enterprise Software Startups?

Tuesday, October 2nd, 2007

In Israel, it has become close to impossible to get an investment for an enterprise software startup - even more so than in the US. One of the main reasons given is that enterprise software sales are hard and expensive (a lot of high-cost manpower and long sales cycles) - which is true. Everyone is looking at models to get around those issues (e.g. open source, SaaS), but fundamentally it remains an issue.
Not that there aren't problems or opportunities in enterprise software (see The Trouble With Enterprise Software for a nice overview of some of the issues) - there are huge issues with enterprise software, and SOA (Service-Oriented Architecture) is no panacea. So opportunities for technical innovation abound; it is just that most VCs don't believe it is a good investment of time or capital. Since VCs are awfully busy and have more on their plate than they can handle, once this becomes a "rule of thumb" it is hard to get their ear.
I think this will have grave implications for enterprise IT shops (and vendors). In the last few years most large IT vendors have gotten into the habit of "outsourcing" their technical innovation - they buy companies rather than develop the technology in-house. If the VCs stop investing, then in a few years innovation in the enterprise software market will dry up. Given the current state of enterprise software, that can't be a good thing….
I think that things will change - since there is still a lot of money in enterprise software and large vendors need technology, someone will have to provide it to them. Enterprise software companies will probably have a smaller chance at an IPO - but given the relative lack of competition they should have a better shot at M&A. The trick is to have unique, innovative technology that solves a problem for enterprise IT departments – or even better, for the business. I also think the pendulum has swung too far and will swing back in a couple of years - making any investment made now much more valuable in the future.

Walled Gardens on the Web (and elsewhere)

Wednesday, July 25th, 2007

Facebook has been getting a lot of press lately - one discussion item that caught my eye was a number of blogs and discussions around whether Facebook can thrive as a "walled garden", which refers to a closed or exclusive set of information services provided for users (see the Wikipedia entry).

The main issues raised were the viability of a walled garden on the internet, and the pluses and minuses of walled gardens - both for the provider and for the consumer (you can find an interesting discussion at http://www.micropersuasion.com/2007/06/walled-gardens-.html). Most of the examples talk about AOL and how it failed as a walled garden, as did cellular providers that tried to limit WAP access to only certain sites.

I am not sure I actually understand the point - since the whole internet is just sets of walled gardens. How many websites let you use their information freely? Very few have comprehensive (or any) APIs; more have feeds that give you limited access to the information actually available. So how is Facebook any different?

One key difference is that, as opposed to most sites, Facebook has collected your own personal information (or that of your friends). People want to be able to do whatever they please with their own information. So I think the right analogy isn't the AOL walled garden approach, but rather something even more "ancient" - the client-server revolution of the 80's. For years after GUIs and PCs were available, it was still very hard to get your own organizational information out of various legacy systems to use in new applications. Even though the information was yours, you couldn't get at it to use as you liked - either because the vendors couldn't keep pace with the emerging technologies, or didn't want to (so they could keep it "hostage"). This gave rise to an imperfect but usable technical solution that let people get at their information even though the system didn't have the capability - a whole new set of "screen scraping" technologies that emulated users to get the desired information out of applications.

So I think the same will happen here - either the walled gardens will open up, or people will figure out how to get at their information some other way.

Structured, Semi-Structured and Unstructured Data in Business Applications

Monday, July 16th, 2007

I was discussing these issues again today - so I thought this old paper must still be relevant….
 
There is a growing consensus that semi-structured and unstructured data sources contain information critical to the business [1, 3] and must be made accessible both for business intelligence and operational needs. It is also clear that the amount of relevant unstructured business data is growing, and will continue to grow in the foreseeable future. That trend is converging with the "opening" of business data through standardized XML formats and industry-specific XML data standards (e.g. ACORD in insurance, HL7 in healthcare). These two trends are expanding the types of data that need to be handled by BI and integration tools, and are straining their transformation capabilities. This mismatch between existing transformation capabilities and these emerging needs is opening the door for a new type of "universal" data transformation product that will allow transformations to be defined for all classes of data (e.g., structured, semi-structured, unstructured), without writing code, and deployed to any software application or platform architecture.

 The Problem with Unstructured Data
The terms semi-structured data and unstructured data can mean different things in different contexts. In this article I will stick to a simple definition for both. First, when I use the terms unstructured or semi-structured data I mean text-based information, not video or sound, which has no explicit metadata associated with it, but does have implicit metadata that can be understood by a human (e.g. a purchase order sent by fax has no explicit metadata, but a human can extract the relevant data items from the document). The difference between semi-structured and unstructured is whether portions of the data have associated metadata, or there is no metadata at all. From now on I will use the term unstructured data to designate both semi-structured and unstructured data.

The problem is that neither unstructured data nor XML is naturally handled by the current generation of BI and integration tools – especially Extract, Transform, Load (ETL) technologies. ETL grew out of the need to create data warehouses from production databases, which means that it is geared towards handling large amounts of relational data and very simple data hierarchies. However, in a world that is moving towards XML, instead of being able to assume well-structured data with little or no hierarchy in both the source and target, the source and target will be deeply hierarchical and will probably have very different hierarchies. It is clear that the next generation of integration tools will need to do a much better job of inherently supporting both unstructured and XML data.

XML as a Common Denominator
By first extracting the information from unstructured data sources into XML format, it is possible to treat integration of unstructured data similarly to integration with XML. Also, structured data has a "natural" XML structure that can be used to describe it (i.e. a simple reflection of the source structure), so using XML as the common denominator for describing unstructured and structured data makes integration simpler to manage.

Using XML as the syntax for the different data types allows a simple logical flow for combining structured XML and unstructured data (see Figure 1; a rough code sketch follows the list):
1. extract data from structured sources into a "natural" XML stream,
2. extract data from unstructured sources into an XML stream,
3. transform the two streams as needed (cleansing, lookup, etc.), and
4. map the XMLs into the target XML.
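The following is a minimal sketch of the four steps, assuming a sqlite3 table as the structured source and a free-text purchase order as the unstructured source. All table, field and function names are invented for illustration and are not part of the original article.

import re
import sqlite3
import xml.etree.ElementTree as ET

def structured_to_xml(conn):
    # Step 1: reflect relational rows into a "natural" XML stream.
    root = ET.Element("customers")
    for cust_id, name in conn.execute("SELECT id, name FROM customers"):
        cust = ET.SubElement(root, "customer", id=str(cust_id))
        ET.SubElement(cust, "name").text = name
    return root

def unstructured_to_xml(text):
    # Step 2: recover the implicit metadata in free text into an XML stream.
    root = ET.Element("orders")
    for m in re.finditer(r"PO#\s*(\d+)\s+for customer\s+(\d+)", text):
        ET.SubElement(root, "order", po=m.group(1), customer=m.group(2))
    return root

def transform(customers, orders):
    # Step 3: cleanse / enrich the two streams (here, trivially, trim names).
    for name in customers.iter("name"):
        name.text = name.text.strip()
    return customers, orders

def map_to_target(customers, orders):
    # Step 4: map both streams into a differently shaped target XML.
    target = ET.Element("customerOrders")
    for cust in customers.iter("customer"):
        node = ET.SubElement(target, "customer", id=cust.get("id"))
        ET.SubElement(node, "fullName").text = cust.findtext("name")
        for order in orders.iter("order"):
            if order.get("customer") == cust.get("id"):
                ET.SubElement(node, "order", po=order.get("po"))
    return target

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, ' Acme Corp ')")
text = "Received PO# 4711 for customer 1 via fax."
customers, orders = transform(structured_to_xml(conn), unstructured_to_xml(text))
print(ET.tostring(map_to_target(customers, orders), encoding="unicode"))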

This flow is becoming more and more pervasive in large integration projects, hand-in-hand with the expansion of XML and unstructured data use cases. These use cases fall outside the sweet spot of current ETL and Enterprise Application Integration (EAI) architectures – the two standard integration platforms in use today. The reason is that both ETL and EAI have difficulty with steps 2 and 4. Step 2 is problematic since there are very few tools on the market that can easily "parse" unstructured data into XML and allow it to be combined with structured data. Step 4 is problematic since current integration tools have underpowered mapping facilities that fall apart when hierarchy changes, or other complex mappings, are needed. All of today's ETL and EAI tools require hand coding to meet these challenges.

Figure 1: A standard flow for combining structured, unstructured and XML information

The Importance of Parsing
Of course, when working with unstructured data, it is intuitive that parsing the data to extract the relevant information is a basic requirement. Hand-coding a parser is difficult, error-prone and tedious work, which is why parsing needs to be a basic part of any integration tool (ETL or EAI). Given its importance, it is surprising that integration tool vendors have only started to address this requirement.
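As a small illustration of what such parsing involves, the sketch below pulls the implicit fields out of a fax-style purchase order into XML with a few regular expressions. The document layout, field names and patterns are invented for illustration; a real parser would need to cope with far messier input.

import re
import xml.etree.ElementTree as ET

PO_TEXT = """
Purchase Order 10023
Ship to: Acme Corp, 12 Main St
Item: 40 x Widget A @ 3.50
"""

PATTERNS = {
    "number":   r"Purchase Order\s+(\d+)",
    "customer": r"Ship to:\s*([^,]+)",
    "quantity": r"Item:\s*(\d+)\s*x",
}

def parse_purchase_order(text):
    po = ET.Element("purchaseOrder")
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        if match:  # hand-coded parsers break easily when the layout drifts
            ET.SubElement(po, field).text = match.group(1).strip()
    return po

print(ET.tostring(parse_purchase_order(PO_TEXT), encoding="unicode"))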

 The Importance of Mapping
 The importance of powerful mapping capabilities is less intuitively obvious. However, in an XML world, mapping capability is critical. As XML is becoming more pervasive, XML schemas are looking less like structured schemas and are becoming more complex, hierarchically deep and differentiated.

This means that the ability to manipulate and change the structure of data by complex mapping of XML to XML is becoming more and more critical for integration tools. They will need to provide visual, codeless design environments to allow developers and business analysts to address complex mapping, and a runtime that naturally supports it.
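To illustrate the kind of hierarchy-changing XML-to-XML map meant here, the sketch below regroups a flat list of orders under per-customer nodes in a differently shaped target. Both the source and target schemas are invented for illustration.

import xml.etree.ElementTree as ET

SOURCE = ET.fromstring("""
<orders>
  <order id="1" customer="acme" total="100"/>
  <order id="2" customer="acme" total="250"/>
  <order id="3" customer="initech" total="75"/>
</orders>
""")

def map_orders_to_customers(source):
    # Regroup the flat order list into a nested, per-customer hierarchy.
    target = ET.Element("customers")
    by_customer = {}
    for order in source.iter("order"):
        cust = by_customer.get(order.get("customer"))
        if cust is None:
            cust = ET.SubElement(target, "customer", name=order.get("customer"))
            by_customer[order.get("customer")] = cust
        ET.SubElement(cust, "order", id=order.get("id"), total=order.get("total"))
    return target

print(ET.tostring(map_orders_to_customers(SOURCE), encoding="unicode"))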

Because unstructured data is needed by both BI and application integration, and the transformations needed to get the information out of unstructured source data can be complex, these use cases will push toward a requirement for "transformation reusability" – the ability to define a transformation once (from unstructured to XML, or from XML to XML) and reuse it across integration platforms and scenarios. This will cause a further blurring of the lines between the ETL and EAI use cases.

Customer data is a simple example use case: take customer information from various sources, merge it, and put the result into an XML application that uses the data. In this case structured customer data is extracted from a database (e.g. a central CRM system) and merged with additional data from unstructured sources (e.g. branch information about that customer stored in a spreadsheet), which is then mapped to create a target XML representation. The resulting XML can be used as input to a customer application, to migrate data to a different customer DB, or to create a file to be shipped to a business partner.

Looking Ahead
 Given the trends outlined above there are some pretty safe bets about where integration tools and platforms will be going in the next 12-24 months:
1. Better support for parsing of unstructured data.
2. Enhanced mapping support, with support for business analyst end-users.
3. Enhanced support for XML use cases.
4. A blurring of the line separating ETL integration products from EAI integration products (especially around XML and unstructured use cases).
5. Introduction of a new class of integration products that focus on the XML and unstructured use case. These “universal” data transformation products will allow transformations to be defined for all classes of data (e.g., structured, semi-structured, unstructured), without writing code, and deployed to any software application or platform architecture.

References
[1] Knightsbridge Solutions LLP, "Top 10 Trends in Business Intelligence for 2006."
[2] ACM Queue, Vol. 3 No. 8, October 2005, "Dealing with Semi-Structured Data" (the whole issue).
[3] Robert Blumberg and Shaku Atre, "The Problem with Unstructured Data," DM Review, February 2003.

Open Source and Freeware

Friday, July 13th, 2007

Selling IT to corporations is hard (well, selling to anybody is hard) and requires a lot of resources (especially around presales - POCs, bake-offs, etc.). So a lot of VCs are looking to the open source model for salvation - not Open Source in its purest form (as described in The Cathedral and the Bazaar), but as a way to lower the cost and friction of selling to the enterprise.

The logic behind it is that the techies (especially in larger organizations) will download the software, play with it, and start using it in a limited way. This can be either as part of a project to solve a specific problem (e.g. we need a new document management system), or just something that interests them as part of their job (why pay for an FTP client and server if you can just use FileZilla, or pay for a database if you can use MySQL). So the thinking is that this solves the issues of penetration (the users find the software themselves), expensive POCs (the users will create the POC themselves) and the length of the sales cycle.

The second part of the open source equation is that users will become an active and viable community - both recommending and improving the product directly. Linux is usually given as the prototypical example, with a vibrant user community and a large number of developers/contributors. The allure behind this idea, and the feeling that you have more control (you can modify the code yourself, no vendor tie-in, a community of developers/contributors), is what differentiates Open Source from plain freeware.

So how does a company make money off an open source product?

1. Sell services - any large organization that uses a product wants support, and will pay for it.

2. Sell add-ons, upgrades, premium versions - once they get used to the product, they will be willing to pay for added functionality.

What doesn't seem to work is providing a dumbed-down or partial-functionality product to get people "hooked" and then selling them the full version, or leaving out important features.

So should you turn your enterprise software product open source? Before you do, here are a few things to consider:

1. How will the techies find your product? Is it a well-known category (so that when they need to find a CRM system and search for vendors, your product will show up - e.g. SugarCRM)?

2. Do you really have a technological breakthrough - or are you trying to sell an enhanced version of a well-established product category? If you do have a real, viable technical breakthrough - your code is open, and you can be sure that the first people to download your product will be competitors looking for the "secret sauce".

3. There are a LOT of open source projects out there - take a look at SourceForge, there are at least 100K projects there. You'll need to put in effort (probably at least 1 or 2 people) to make sure that you stand out from the crowd and start growing a user community.

4. The open source download-to-sale conversion rate is low - somewhere between 1 in 1,000 and 1 in 10,000 - so you have to make sure you get enough downloads to be viable. At a 1-in-1,000 rate, for example, 100 paying customers means roughly 100,000 downloads.

5. It is a one-way street: you can make your code open source, but it is practically impossible to take back that decision once the code is out in the wild.

6. Choosing a license - the GPL gives you the most control, but many organizations don't like its restrictions. The Apache license seems to be universally acceptable, but gives you almost no control.

7. You need to decide what you will do with user submissions - and make sure you get the copyright for everything that is submitted.

Strategic vs. Viral in Enterprise Software

Friday, June 1st, 2007

I have been thinking lately about the meaning of strategic software in the enterprise. I have had a number of conversations about interesting, useful applications for business, which were usually shrugged off since they aren't "strategic". I think the holy grail of "being strategic" in enterprise software is a mistake, and explains why enterprise software has fallen out of favor with VCs. It is impossible (or at least very, very expensive) to become strategic, and it takes a long time. Strategic, at least in these conversations, means inventing some piece of software (infrastructure or solution) that is so central to the needs of the organization that not having it becomes a critical showstopper. Becoming strategic moves the buying decision to the senior executive level at the customer, rather than going through projects or end-users. It requires a skilled sales force and a longer sales cycle – but has much, much higher revenue per sale.

The Web (especially 2.0) is different. The adoption mechanism of a software solution is viral – users enticing other users to join in the fun. This seems diametrically opposed to the strategic software paradigm – this type of adoption has to be simple enough that end-users can use the software and get value without the need for a centralized decision, an IT department or a skilled sales force. Since users use applications (and not infrastructure), viral adoption happens at the application layer, not the infrastructure layer (while most strategic software is in the infrastructure, not the application, layer). Of course apps drive infrastructure, so this type of software adoption in the enterprise will have a profound effect on enterprise infrastructure and architectures (e.g. let's see how SOAP vs. REST plays out for services). This type of adoption scares the pants off CIOs, who crave centralized control and walled gardens.

This is the crux of the Enterprise 2.0 dilemma for software startups – how to get viral rather than strategic adoption in the enterprise. The best way to do this is by creating network-effect applications that have stand-alone value to enterprise users (of course, initially delivered over the internet as a service). If you have such an application, I'd love to hear about it.