
Archive for December, 2008

Turning an Organization’s Practical Intelligence into Explicit Knowledge

Tuesday, December 30th, 2008

There are three ways to create a system to automate and streamline an existing process in an organization. Two are the common methods used by most organizations today: the first is a standard, packaged application that implements the process, and the second is BPM modelling and implementation tools.

The first (which is easiest technically, but hardest organizationally) is to replace the existing process with a packaged application that implements it (e.g. a CRM system) and use that as the basis for the replacement process. The best way to do this is to be willing to adapt to a version of the standard process that the application supports and keep customizations to a minimum - otherwise the cost in both time and resources can be overwhelming (and failure is a distinct possibility). Of course, this method only works for well-defined standard processes, and for organizations willing to change their processes to suit the system.

A second way is to try to capture the practical intelligence and tacit knowledge of the people participating in a process, create an explicit model of the process, and then optimize that model. The model becomes the basis for building a specialty application to manage that process. A BPM system is the preferable way to do this, since it makes the implementation of the process easier and more standard from an IT perspective. A BPM system also tends to keep the distance between the model and the implementation smaller than if the application were built without a BPM platform.

This method only works for processes where it is justified (and possible) to create an exhaustive model, giving the IT department the detailed requirements it needs to implement the process. Here the discovery and modelling phase is of critical importance: its goal is to leverage people with strong analytical skills to take an organization’s tacit knowledge of the process and turn it into explicit knowledge that can be articulated and programmed. This is a relatively long and difficult requirements process - get it wrong and no one will use the resulting application. It works only for processes that are common enough to justify the expense (of discovery, modelling and implementation), and that can be rigorously defined using analytical techniques.
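To make the idea of an "explicit model" concrete, here is a minimal sketch of the kind of artifact a discovery and modelling phase might produce: named steps, responsible roles, and allowed transitions. All names here are hypothetical, and this is an illustration of the concept rather than any particular BPM product's format.

```python
class ProcessModel:
    """An explicit process model: named steps, owners, and allowed transitions."""

    def __init__(self):
        self.steps = {}           # step name -> responsible role
        self.transitions = set()  # (from_step, to_step) pairs

    def add_step(self, name, role):
        self.steps[name] = role

    def add_transition(self, src, dst):
        # An exhaustive model defines every step before linking them.
        if src not in self.steps or dst not in self.steps:
            raise ValueError("both steps must be defined before linking them")
        self.transitions.add((src, dst))

    def next_steps(self, current):
        return sorted(dst for (src, dst) in self.transitions if src == current)


# A toy purchase-approval process captured as explicit knowledge:
model = ProcessModel()
model.add_step("request", "employee")
model.add_step("approve", "manager")
model.add_step("order", "purchasing")
model.add_transition("request", "approve")
model.add_transition("approve", "order")

print(model.next_steps("request"))  # ['approve']
```

The point of the exercise is that every step, owner, and transition must be articulated up front - which is exactly why the discovery phase is long, and why it only pays off for common, rigorously definable processes.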

ActionBase’s HPM provides a third possibility - appropriate both for processes that can’t justify the expense of a full-blown BPM implementation (or just can’t wait), and as a way to gain insight into the practical intelligence and tacit knowledge of an organization without embarking on widespread, never-ending process discovery and modelling. By using ActionDocs and ActionMail as simple proxies for an existing email- and document-based process, a system for managing any human process can easily be set up. It doesn’t need to be complete; the process can evolve through usage. If someone was left out of the process, they can easily be added by sending them an ActionMail, and if a document was overlooked, it can easily be added as an attachment. So almost immediately the organization gets all the benefits of a managed process - one that evolves as it is used.
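The evolve-through-usage idea can be sketched in a few lines. This is a generic illustration of the concept, not ActionBase's actual API: an ad-hoc process starts incomplete, and adding an overlooked participant or document is just another action in the managed record.

```python
class AdHocProcess:
    """A process that starts incomplete and evolves as it is used."""

    def __init__(self, name, participants):
        self.name = name
        self.participants = set(participants)
        self.documents = []
        self.audit_trail = []  # the managed record the organization gains

    def send_action(self, sender, recipient, subject):
        # An overlooked participant joins simply by receiving an action item.
        self.participants.add(recipient)
        self.audit_trail.append((sender, recipient, subject))

    def attach(self, document):
        # An overlooked document is simply attached after the fact.
        self.documents.append(document)


proc = AdHocProcess("budget-review", {"alice"})
proc.send_action("alice", "bob", "please review section 3")  # bob joins late
proc.attach("budget-draft-v2.docx")
```

No up-front model is required; the audit trail accumulates the process as it actually happens, which is what later makes the core of the process visible.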

Over time, a standard core of the process may emerge, and its model can be viewed using ActionBase’s server. The organization can then decide to implement the process using method 2 (with the expensive discovery and modelling steps already complete), or leave the process in ActionBase.

I’ll discuss this “third way” methodology further in upcoming posts.

Performance Anxiety

Wednesday, December 10th, 2008

I am attending the CMG (Computer Measurement Group) conference – the annual get-together for application/performance professionals. It turns out there is really good technology for production-time performance monitoring at any tier – and it is becoming more and more of a commodity, covered by the big 4 (IBM, CA, BMC, HP – and, if you are interested in the Mainframe, ASG and Compuware).
So if that’s covered – what’s left to worry about? Performance problems have been licked, right? Well, it seems that while single-tier performance issues are less common, multi-tier performance problems are now coming to the forefront – and these are even harder to diagnose and fix. Performance tools will need to shift to the next frontier: production problems caused by the relationships and interdependencies between the different components that make up a complete application, or between the different transactions going through the same tier at the same time. To find the reason a modern, componentized, tiered application is performing poorly, looking at each performance monitor in isolation isn’t good enough – you need to analyze the relationships between the components of the application, and the interdependencies between applications or transactions sharing the same tier. The move towards SOA is only going to exacerbate this need. And once you start looking at relationships and interdependencies, the amount of data needed for root-cause analysis grows exponentially, so there will also be a need for tools that help analyze the mountains of data and pinpoint the information relevant to root-cause analysis.
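One simple form of the relationship analysis described above is correlating each tier's latency with the end-to-end response time, to see which tier's behavior moves with the overall slowdown. The sketch below is purely illustrative - the tier names and latency samples are invented, and real tools would work over far larger data sets.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Per-transaction latency samples (ms) for each tier (invented data).
tiers = {
    "web": [12, 11, 13, 12, 11, 12],
    "app": [40, 42, 41, 39, 40, 41],
    "db":  [30, 95, 33, 88, 31, 90],  # erratic: the likely culprit
}
# End-to-end response time is the sum across tiers for each transaction.
end_to_end = [sum(t) for t in zip(*tiers.values())]

# Rank tiers by how closely their latency tracks the overall slowdown.
ranked = sorted(tiers, key=lambda name: pearson(tiers[name], end_to_end),
                reverse=True)
print(ranked[0])  # the tier whose latency tracks the slowdown most closely
```

Even this toy version hints at the data problem: with N components and their pairwise relationships, the analysis space grows quickly, which is why tooling to sift the mountains of data becomes essential.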
Another issue is tying all of those components back to the business, so that if a problem does occur, its business impact can be understood.
Finally, it is also clear that the Mainframe isn’t dead – it shows up everywhere, either as legacy (CICS+DB) or as a hosting platform for Linux server consolidation.