
Archive for the ‘enterprise 2.0’ Category

Bottom Up vs. Top Down Process Understanding, or Another Difference between BPM and HPM

Tuesday, September 16th, 2008

I was at the Gartner BPM conference last week. Walking around the vendor showcase, one thing that struck me was how similar all of the vendors’ offerings seemed to me (with a few exceptions). Sure, some were traditional enterprise software, some were SaaS, some vendors stressed one set of features while others stressed a different set - but to me all the vendors in a given BPM space (document-centric vs. integration-centric) looked pretty similar. I am guessing most people interested in a BPMS feel the same way.

What interested me was how they prescribed the process for creating a BPMS-based application to implement an existing business process. For most, the first step is to create a model describing the process, using a BPMN modeling tool. The model is usually created by a business analyst (usually someone in the IT department) who understands the process. This model is a high-level description of the business process which is used to bridge the gap between the business (they understand the process) and IT (they understand implementation and data). What struck me was how much the methodology reminded me of the “traditional” top-down ways of creating software. Since it is very difficult to automatically create the actual complete, executable production system from the BPMN model, the model serves as a requirements definition for the development phase, which is handled by IT. Any end-user iteration and understanding is around the BPMN model - a very abstract description of the process to be implemented. This is then handed to the IT folks for implementation, with the standard lag of months between requirements and actual system. This will work fine for processes that are rigorously defined, unchanging and complete; it may work for processes that are rigorously defined with a small number of exceptions; and it will completely break down for ad-hoc, unstructured Human Processes. The reason is that these ad-hoc human processes are not well defined, and exceptions are the rule. The only way to approach this is the same way you approach building human-intensive software - iteratively, working intensely with customers on working prototypes, either low-fidelity or high-fidelity. John Gould and Stephen Boies taught me long ago that iterating on the spec (i.e. requirements or model) just doesn’t work. I also learned that if you are implementing an existing process, you want to keep it as familiar as possible to the users, which means letting users continue to use whatever they are used to (or feels natural to them) whenever possible.

This is why I think that existing BPMS vendors won’t do well in the ad-hoc, unstructured Human Process space. It will require a much lighter-weight, flexible (or bottom-up) environment where processes can be easily created, modified and tested in the field, with the turnaround between versions (including the initial version) measured in days (or hours) instead of weeks or months. I personally believe that the more Human Process Management Systems let users remain within familiar user environments (currently email and MS Office tools; Wikis and other tools in the future), the easier it will be to get these systems accepted by the organization and end users.

Amazon EC2, S3 – and now SimpleDB

Saturday, December 15th, 2007

I have been playing with Amazon S3 as a remote backup mechanism for my machines. It is well thought out, works well, and is cheap. For many applications it is a “good enough” solution for managed storage.
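For the curious, backing up to S3 really is just a handful of lines. Here is a minimal sketch using the boto library - the bucket name and file paths are made up, and a real backup script would add error handling and incremental logic:

```python
# Minimal S3 backup sketch using boto; bucket name and paths are hypothetical.
import boto
from boto.s3.key import Key

# Credentials can come from environment variables or a boto config file.
conn = boto.connect_s3()

# A bucket is just a named container for keys.
bucket = conn.create_bucket('my-machine-backups')

# Upload a local archive under a date-stamped key...
key = Key(bucket)
key.key = 'backups/2007-12-15/home.tar.gz'
key.set_contents_from_filename('/tmp/home.tar.gz')

# ...and restoring is just the reverse.
key.get_contents_to_filename('/tmp/home-restored.tar.gz')
```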

Now the friendly folks at Amazon have announced their SimpleDB service, which provides the core functionality of a DB - real-time lookup and simple querying of structured data. It looks like yet another “good enough” solution for many web-based businesses.
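To make “the core functionality of a DB” concrete, here is a toy, in-memory sketch of the kind of data model SimpleDB exposes - domains of items, each holding attribute name/value pairs, queried by attribute. This is just an illustration of the concept, not the actual SimpleDB API:

```python
# Toy in-memory sketch of a SimpleDB-style data model (not the real API).
from collections import defaultdict

class SimpleDomain:
    """A 'domain' holds items; each item is a bag of attribute name/value pairs."""

    def __init__(self, name):
        self.name = name
        self.items = defaultdict(dict)

    def put_attributes(self, item_name, attributes):
        # Store or overwrite attributes for an item: no schema, no types.
        self.items[item_name].update(attributes)

    def select(self, **criteria):
        # "Simple querying": item names whose attributes match all criteria.
        return [name for name, attrs in self.items.items()
                if all(attrs.get(k) == v for k, v in criteria.items())]

# Usage: structured data without a schema, looked up in real time.
customers = SimpleDomain('customers')
customers.put_attributes('cust-1', {'name': 'Acme', 'city': 'Boston'})
customers.put_attributes('cust-2', {'name': 'Initech', 'city': 'Austin'})
print(customers.select(city='Boston'))   # ['cust-1']
```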

It seems like Amazon is rolling along, trying to become the “data center for everyone else”. Big enterprises are not going to be able to divest themselves of their data centers anytime soon, but small businesses can have the support provided by a data center – with only a fraction of the expense.

Now match this up with a tailored IDE and programming framework to make it even easier to use these services – and you’ll have a killer web application platform (better than Force.com since it doesn’t require the use of a proprietary language – just a specific API).

Decision Support, BI, BPM and Human Process Management

Monday, November 26th, 2007

I have been thinking lately about decision support in business settings. Executives and managers make many, many decisions a day about the business – most of them involving other people who need to either be part of the decision-making process, or act on the decisions. Essentially, as an executive, you gather some data, meet with some people, make some decisions and then fire off some emails (or phone calls) - repeat. From my experience, most processes in an organization are of this ad-hoc flavor – and really have no tools (except email) for supporting the end-to-end process (from an ad-hoc set of decisions, through execution and finally to results).

There are various tools that help with the steps – for example, I remember that in the late eighties/early nineties decision support systems (DSS) used to be all the rage. The problem was that executives were unwilling to use the systems, and they morphed into the Business Intelligence (BI) tools that are all the rage today (at least based on the number of acquisitions going on in that space). But both DSS and BI tools address only part of the decision process – gathering and analyzing the data so that an intelligent decision can be arrived at. So those tools help with the “gather some data” part.

Another set of tools are collaboration tools, which can help somewhat with both the “meet some people” and “make some decisions” parts. Other tools like Excel, Word, PowerPoint and email also play an important part in these steps. Most executives I know don’t use the various collaboration tools that are available - they use meetings, secretaries and productivity applications. Maybe they’ll start using Wikis too, but as another productivity tool - not an end-to-end decision support system.

Now, if you believe the Business Process Management vendors, the final step should be to create a process using your easy-to-use BPM design tool, and then have the process execute using your BPM (hey, maybe even BPEL) engine. Yeah, right. BPM tools are heavy-duty tools for the IT department, and are used to string together various IT assets. They don’t support the ad-hoc nature of most business processes, or the heavy (or perhaps exclusive) human interaction needed. Even the emerging area of Human-Centric Business Process Management (as coined by Forrester) doesn’t fit the bill – those tools really don’t support the ad-hoc nature of most processes in an organization.

So where does that leave us? Essentially with meetings, email (sometimes phone calls and faxes) and productivity tools (à la Excel, PowerPoint, Word). That is how most business and business processes are done. I think this is the main cause of email overload in organizations – and until some more natural mechanism for managing these ad-hoc business processes comes along, the overload will only get worse…

An interesting article on email overload from First Monday.

Software as a Service and Hardware Virtualization

Thursday, November 15th, 2007

I have been musing lately about the connection between software delivered as a service and hardware virtualization. For me they are two sides of the same coin (I guess we could just as easily have called it Hardware-as-a-Service and Software Virtualization). The simplest way to implement a SaaS’ified version of an existing application is via virtualization – just run as many instances of the application (or application components) as needed, each in its own virtual machine.

The downside is that this may not be very cost effective. First, you need to be able to easily deploy and manage new instances of the application within your virtual environment (hence VMware’s acquisitions of Akimbi and Dunes), to have an appropriate pricing model for the various component technologies that make up the application, and to be able to easily monitor the virtual vs. real machine resources needed for the application.

It is not always easy to reconcile software licensing models with virtualization. Many traditional software vendors charge per instance of their application deployed on a server. So if you want to deploy a DBMS for each instance of the application, the price can be quite prohibitive. It would probably depend heavily on the number of users per instance, but for many SaaS applications there are only a few users per instance. You could rewrite the application so that it uses a shared DBMS, with each application instance using a different DB in the DBMS – but rewriting an application is very costly.
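As a rough sketch of what the shared-DBMS alternative looks like, here is the tenant-to-database routing idea with SQLite standing in for the shared DBMS (the tenant names and schema are hypothetical); the point is that only the connection routing needs to know about tenants:

```python
# Sketch of shared-DBMS multi-tenancy: one database per tenant inside a shared
# engine, behind a thin routing layer. SQLite stands in for the shared DBMS;
# tenant names and schema are made up for illustration.
import sqlite3

def connection_for_tenant(tenant_id):
    # In a real shared DBMS this would pick a database/schema inside one
    # server (e.g. "app_<tenant>") rather than a separate file.
    return sqlite3.connect(f'app_{tenant_id}.db')

def init_schema(conn):
    conn.execute('CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)')
    conn.commit()

# Each tenant's instance gets its own database, but the DBMS (and its license)
# is shared across all of them.
for tenant in ('acme', 'initech'):
    conn = connection_for_tenant(tenant)
    init_schema(conn)
    conn.execute('INSERT INTO orders (item) VALUES (?)', ('widget',))
    conn.commit()
    conn.close()
```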

Monitoring all those instances isn’t easy either. You somehow need to correlate all the virtual instances with the physical resources on the machine. One of the key reasons to virtualize is to be able to use machine resources (especially the CPU) more effectively – which means you want to load as many instances as possible before having to buy a new machine – very different than what is available from today’s monitoring tools. A good overview of these issues by Bernd Harzog can be found here.

So what’s my point? I think that we’ll see SaaS take off when it is really easy to take an existing app and create a SaaS’ified version of it – and that will happen when it is as easy as taking a “virtual version” of the application and deploying it for each “tenant” as needed. We are still missing some pieces of the puzzle for that to happen, but my guess is that we will see it happen in the next couple of years.

Data Integration and Mashups

Saturday, November 10th, 2007

I am attending Mashup Camp and Mashup University here in Dublin (the weather reminds me of a poem that a friend of mine wrote about Boston in February - gray, gray, gray, Gray!). IBM was here in force at Mashup University, giving three good presentations (along with live demos) on their mashup stack. They were saying that the products based on this stack should be coming out early next year (we’ll see, since from my experience it can be very difficult to get a new product out in an emerging area at IBM, since you can’t prove that the space/product is valuable enough). They have decided to pull together a whole stack for the enterprise mashup space (the content management layer, the mashup layer and the presentation layer - see my previous post on mashup layers).

One thing that struck me, especially when listening to the IBM QEDWiki and Mashup Hub presentations, is how much this upcoming set of tools for enterprise mashup creation is starting to resemble “traditional” enterprise data integration tools (e.g. Informatica and IBM/Ascential). These new tools allow easy extraction from various data sources (including legacy data like CICS, web data and DBs), and easy wiring of data flows between operator nodes (sort of a bus concept). The end result isn’t a DB load as with ETL, but rather a web page to display. There is no real cleansing capability yet, but my guess is that it will be coming, as just another web service that can be called as a node in the flow. So mashups are like the lightweight cousin of ETL - for display rather than bulk-load purposes. It will be interesting to follow and see how ETL tooling and mashup tooling come together at IBM, especially since both the ETL and mashup tools are part of the Data Integration group at IBM.
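To make the “wiring of data flows between operator nodes” concrete, here is a toy sketch of that ETL-like flow - extract nodes feeding a transform node feeding a presentation node that renders HTML instead of loading a database. The sources and fields are made up; the real tools wire these nodes graphically:

```python
# Toy sketch of an ETL-like mashup flow: extract -> transform -> present.
# Data sources and fields are invented; real mashup tools pull from feeds,
# DBs or legacy systems and wire the operator nodes graphically.

def extract_orders():
    # Stand-in for an extract node (e.g. a DB, CICS or web-data source).
    return [{'customer': 'Acme', 'amount': 1200},
            {'customer': 'Initech', 'amount': 300}]

def extract_ratings():
    return [{'customer': 'Acme', 'rating': 'A'},
            {'customer': 'Initech', 'rating': 'B'}]

def join_on_customer(orders, ratings):
    # A transform node: a simple join, the kind of operator an ETL tool has.
    by_name = {r['customer']: r for r in ratings}
    return [dict(o, **by_name.get(o['customer'], {})) for o in orders]

def render_html(rows):
    # The "load" step here is a web page to display, not a bulk DB load.
    cells = ''.join(f"<tr><td>{r['customer']}</td><td>{r['amount']}</td>"
                    f"<td>{r.get('rating', '')}</td></tr>" for r in rows)
    return ("<table><tr><th>Customer</th><th>Amount</th><th>Rating</th></tr>"
            f"{cells}</table>")

print(render_html(join_on_customer(extract_orders(), extract_ratings())))
```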

Microsoft seems to be taking another route - a more lightweight, desktop-like approach focused on the presentation layer. Popfly is a tool that also allows you to wire together data extraction (only web data as far as I could tell, though it could be extended to other data types) and manipulation nodes – as you link the nodes, the output of one node becomes the input of the next, etc. It seemed very presentation-oriented, and I didn’t see any Yahoo! Pipes-like functionality or legacy extraction capability.

Serena is presenting tomorrow; it will be interesting to see what direction they have taken.

Email and Enterprise 2.0

Wednesday, October 24th, 2007

I just read an interesting post on The state of Enterprise 2.0, and it seems like the various technologies that make up Enterprise 2.0 (RSS, blogs, Wikis, mashups, communities) are gaining acceptance and some traction in the enterprise. Not surprising - I think the big losers will be the traditional enterprise portals. At the moment you can’t really find a complete Enterprise 2.0 stack, but it is clear that the writing is on the wall - and the Enterprise/Web 2.0 versions of the stack are much more useful, entertaining and engaging than the standard enterprise portal solution.

As I stated in an earlier post, the Web 2.0 world is starting to penetrate the enterprise, defining new ways to collaborate and raising ease-of-use expectations - things that are not usually at the forefront of existing enterprise portal technology. There was one specific quote in the article that intrigued me: “The biggest impact of this lesson is that these new tools are so different and generally support such different types of knowledge than usually captured, that impact to existing systems seems to be minimal. Interestingly, you might see a decrease in the use of e-mail or ECM when the conversations that formerly happened on those platforms make a more natural home in Enterprise 2.0 platforms” (the emphasis is mine). This got me thinking, since one of the main selling points of Web 2.0 technologies is that they will eliminate (or at least substantially decrease) email usage. I have never seen any numbers to bear out this claim. My gut tells me that the number of emails in enterprises is growing, not shrinking (see “Intel flirts with No Email Fridays” for at least anecdotal corroboration), and I just don’t see why these technologies will change that substantially. Enterprise 2.0 technologies may end up slowing the growth of email a bit, but they are certainly not turning the tide.

My guess is that email is too pervasive, too general, too useful and too simple a tool to ever be replaced. I only wonder whether - as with the “paperless office”, where computers and technology were going to replace the need for paper but instead only seemed to increase its usage - enterprise 2.0 technologies won’t actually generate additional uses for email…

Personalized Feeds (or more on Open APIs)

Friday, October 5th, 2007

I just read an interesting study on the problems with existing news RSS feeds from the University of Maryland’s International Center for Media and Public Relations. I think it is a great example of how users can’t depend on the organization that creates the content to provide access to that content in the form or format most useful for them, and why the ability for users to create their own feeds is so valuable. To quote from the study:

“This study found that depending on what users want from a website, they may be very disappointed with that website’s RSS.  Many news consumers go online in the morning to check what happened in the world overnight—who just died, who’s just been indicted, who’s just been elected, how many have been killed in the latest war zone.  And for many of those consumers the quick top five news stories aggregated by Google or Yahoo! are all they want.  But later in the day some of those very same consumers will need to access more and different news for use in their work—they might be tracking news from a region or tracking news on a particular issue.

It is for that latter group of consumers that this RSS study will be most useful.  Essentially, the conclusion of the study is that if a user wants specific news on any subject from any of the 19 news outlets the research team looked at, he or she must still track the news down website by website.”

Bottom line: as long as we depend on publishers as both content providers and access providers, we as consumers of content won’t be able to get what we need in the way we need it - just like with APIs. The only way to solve the problem is to allow users or some unaffiliated community to create the access to content (or the API), as opposed to limiting that ability to the publisher alone. As Web 2.0 paradigms catch on with the masses, turning more and more of us into prosumers, this will become more and more of an issue. Publishers that try to control access will lose out to those that give users the ability to tailor the content to their own needs. Publishers need to understand that this benefits both them and the users.

I see signs that this is actually starting to happen (in a small way), with the NYTimes and WSJ both announcing personal portals for their users. The jump to personalized feeds isn’t that unthinkable…
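Just to show how small the gap is, here is a sketch of the kind of personalized feed a user (or an unaffiliated community) could build today by aggregating a few outlet feeds and keeping only the items on a topic of interest, using the feedparser library; the feed URLs and keywords are placeholders:

```python
# Sketch of a user-built personalized feed: aggregate several outlets and keep
# only items on a topic of interest. Feed URLs and keywords are placeholders.
import feedparser  # third-party: pip install feedparser

FEEDS = [
    'http://example.com/outlet-one/rss',
    'http://example.com/outlet-two/rss',
]
KEYWORDS = ('darfur', 'sudan')  # the region/issue the user is tracking

def personalized_feed(feed_urls, keywords):
    matches = []
    for url in feed_urls:
        for entry in feedparser.parse(url).entries:
            text = (entry.get('title', '') + ' ' + entry.get('summary', '')).lower()
            if any(k in text for k in keywords):
                matches.append({'title': entry.get('title', ''),
                                'link': entry.get('link', '')})
    return matches

for item in personalized_feed(FEEDS, KEYWORDS):
    print(item['title'], '-', item['link'])
```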

Vertical Mashup Platforms

Wednesday, September 12th, 2007

Gartner just put out a report on “Who’s Who in Enterprise Mashup Technologies”, which contains all of the usual enterprise platform companies and all the usual web mashup players. They gave some good, though standard, advice that you should understand the problem before you choose the technology (duh?) - but I thought it was interesting that they didn’t try to define a best-practices architecture, or give some guidance on how to combine technologies or choose between them (see my post below).

One thing that was clear is that all of the Mashup Platforms are trying to be generic - to allow users to build any type of mashup application. As always, being generic means being more abstract - and making it harder for people to easily build a mashup for a specific domain or vertical. This isn’t unusual for platform builders, since by building a generic tool they can capture the broadest audience of users. But I think that they might be making a mistake with respect to Mashup Platforms - the whole idea is to make it easy for anyone to build “situational applications” that solve a specific need for information quickly, and that can be used by non-developers. For me, that means that platforms will have to be tailored to the domain of the user.

I am expecting that in the next wave of Mashup Platforms we’ll start seeing vertically oriented mashup platforms that make it even easier to build a mashup for a specific vertical - from standard verticals like finance, to more consumer-oriented verticals like advertising.

Open Source and Freeware

Friday, July 13th, 2007

Selling IT to corporations is hard (well, selling to anybody is hard) and requires a lot of resources (especially around presales - POCs, bake-offs, etc.). So a lot of VCs are looking to the open source model for salvation - not Open Source in its purest form (as described in The Cathedral and the Bazaar), but as a way to lower the cost and friction of selling to the enterprise.

The logic behind it is that the techies (especially in larger organizations) will download the software, play with it, and start using it in a limited way. This can be either as part of a project to solve a specific problem (e.g. we need a new document management system), or just something that interests them as part of their job (why pay for an FTP client and server if you can just use FileZilla, or pay for a database if you can use MySQL?). So the thinking is that this solves the issues of penetration (the users find the stuff themselves), expensive POCs (the users will create the POC themselves) and the length of the sales cycle.

The second part of the open source equation is that users will become an active and viable community - both recommending and improving the product directly. Linux is usually given as the prototypical example, with a vibrant user community and a large number of developers/contributors. The allure behind this idea, and the feeling that you have more control (you can modify the code yourself, there is no vendor lock-in, and there is a community of developers/contributors), is what differentiates Open Source from mere freeware.

So how does a company make money off an open source product?

1. Sell services - any large organization that uses a product wants support, and will pay for it.

2. Sell add-ons, upgrades, premium versions - once they get used to the product, they will be willing to pay for added functionality.

What doesn’t seem to work is providing a dumbed-down or partial-functionality product to get people “hooked” and then selling them the full version, or leaving out important features.

So should you turn your enterprise software product open source? Before you do, here are a few things to consider:

1. How will the techies find your product? Is it a well-known category (so that when they need to find a CRM system and search for vendors, your product will show up - e.g. SugarCRM)?

2. Do you really have a technological breakthrough, or are you trying to sell an enhanced version of a well-established product category? If you do have a real, viable technical breakthrough, your code is open, and you can be sure that the first people to download your product will be competitors looking for the “secret sauce”.

3. There are a LOT of open source projects out there - take a look at SourceForge; there are at least 100K projects. You’ll need to put in effort (probably at least 1 or 2 people) to make sure that you stand out from the crowd and start growing a user community.

4. The open source download-to-sale conversion rate is low - somewhere between 1 in 1,000 and 1 in 10,000 - so you have to make sure that you get enough users to be viable.

5. It is a one-way street: you can make your code open source, but it is practically impossible to take back that decision once it is out in the wild.

6. Choosing a license: the GPL gives you the most control, but many organizations don’t like its restrictions. The Apache license seems to be universally acceptable, but gives you almost no control.

7. You need to decide what you will do with user submissions - and make sure you get the copyright for everything that is submitted.

Mashups and Situational Apps

Saturday, July 7th, 2007

Mashups are for both prosumers (a new term that I first heard from Clare Hart at the “Buying & Selling eContent” conference) - high-end consumers and creators of content - and scripters (my own term, since I am not sure what exactly to call these high-end users - for example, the departmental Excel gurus who create and manage departmental Excel scripts and templates).

The search for tools that empower these domain experts to create applications without programming has been around since at least the 80s (e.g. 4th-generation programming languages), and it has led to various new forms of application creation - but the only one that has really evolved into a “general use” corporate tool for non-programmers has been Excel (though it is not really a 4GL). The reasoning behind those tools was that if you put the power to create applications into the hands of the domain expert, you will get better applications, faster. One new evolution of these types of tools is Domain Specific Languages (DSLs), which make programming easier by focusing on a specific domain and building languages that are tailored to that domain.

So much for the history lesson - but what does that have to do with Mashups and Situational Apps? Well, they both focus on pulling together different data sources and combining them in new ways in order to discover new insights. “Mashup” seems to be the preferred web term; “Situational App” is a term coined by IBM for the same type of application in a corporate setting.

These types of applications (and application builders) have a lot in common:

1. They all start from a data feed of some sort, either RSS or XML.

2. They focus on ease of use over robustness.

3. They allow users to easily create applications to solve short-term problems.

Many of these tools are experimental and in the alpha or beta stage, or are research projects of one type or another (QEDWiki, Microsoft Popfly, Yahoo! Pipes, Intel MashMaker, Google Mashup Editor). As these tools start maturing, I think we will see a layered architecture emerging, especially for the corporate versions of these tools. Here is how I see the corporate architecture layers evolving:

[Chart: Mashup Layers]

I think the layers are pretty self-explanatory, except for the top-most Universal Feed Layer, which is simply an easy way to use the new “mashup” data in other ways (e.g. in other mashups, or on mobile).

If you look at the stack, there are players in all layers (though most of the mashup tools I mentioned above are in the presentation and mashup layers), and the stack as a whole competes very nicely with a lot of current corporate portal tools - but with a much nicer user experience, one that users are already familiar with from the web.

One important issue that is sometimes overlooked is that mashups require feeds - and even though the number of web feeds is growing, there is still a huge lack of appropriate feeds. Since most mashup makers rely on existing feeds, they have a problem when a required feed is not available. Even if the number of available feeds explodes exponentially, there is no way for the site provider to know how people would like to use the feeds - so for mashups to take off, the creation of appropriate filtered feeds is going to take on new importance, and the creation of these feeds is going to be a huge niche. Currently “Dapper” is the only tool that fills all the needs of the “universal feed layer” - site independence, a web-based approach, and an easy-to-use, intuitive interface for prosumers and scripters.
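As a rough sketch of what the simplest possible “universal feed layer” does, here is a snippet that takes whatever structured items a mashup (or a Dapper-style scraper) produced and re-publishes them as an RSS feed, using only the Python standard library; the item structure and channel details are made up:

```python
# Sketch of a minimal "universal feed layer": take items a mashup (or a
# Dapper-style scraper) produced and re-publish them as RSS. Item fields and
# channel details are invented for illustration.
import xml.etree.ElementTree as ET

def items_to_rss(channel_title, channel_link, items):
    rss = ET.Element('rss', version='2.0')
    channel = ET.SubElement(rss, 'channel')
    ET.SubElement(channel, 'title').text = channel_title
    ET.SubElement(channel, 'link').text = channel_link
    for item in items:
        node = ET.SubElement(channel, 'item')
        ET.SubElement(node, 'title').text = item['title']
        ET.SubElement(node, 'link').text = item['link']
    return ET.tostring(rss, encoding='unicode')

# Any structured mashup output can now feed other mashups, readers, mobile...
print(items_to_rss('Filtered listings', 'http://example.com/mashup',
                   [{'title': 'Item one', 'link': 'http://example.com/1'}]))
```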