March 19th, 2009 by Jacob Ukelson
I was reading an interesting discussion on a BPM forum about whether innovation is at odds with process. If you understand process to be a rigidly structured, unchanging prescription of how work gets done, then there certainly is truth to that. The main task of those types of processes is to make sure work is standardized and done the same way every time. Innovation is frowned upon.
On the other hand, if you think of process as including ad-hoc and unstructured business processes - then processes actually help with innovation. If you can gain an understanding of how things actually get done (as opposed to how they are supposed to happen) - then you can use that insight to generate innovation.
Take any structured process (e.g. CRM), and look at the work it generates outside of the system (for example via email). Sometimes the work is really an oddball one-off. But in other cases (especially if it repeats itself) it may be an indication of a new unfulfilled need, or a change in the environment that should be handled. Exactly the kind of input you need to create useful innovation.
I think companies are losing a lot of potential innovation by not capturing and analyzing the exceptions to their mainstream processes - I think they would be surprised by what they learn.
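Capturing those exceptions doesn't have to be elaborate. As a minimal sketch (the function names and sample email subjects are all invented for illustration), you could normalize the subject lines of the email generated outside the system and count the repeats - a subject that keeps recurring is a candidate for an unfulfilled need:

```python
from collections import Counter
import re

def normalize(subject):
    """Collapse reply/forward prefixes and ticket numbers so that
    recurring exception topics group together."""
    s = re.sub(r"^(re|fwd?):\s*", "", subject.lower())
    return re.sub(r"\b\d+\b", "#", s).strip()

def recurring_exceptions(subjects, min_count=3):
    """Return normalized subjects seen at least min_count times -
    candidates for a new, unfulfilled need worth formalizing."""
    counts = Counter(normalize(s) for s in subjects)
    return [(subj, n) for subj, n in counts.most_common() if n >= min_count]

# Hypothetical sample of work that leaked out of a CRM system into email
mail = [
    "Re: manual discount approval for order 1047",
    "manual discount approval for order 2210",
    "Fwd: manual discount approval for order 993",
    "printer jam on floor 3",
]
print(recurring_exceptions(mail))  # the repeated approval request surfaces
```

The one-off (the printer jam) drops out; the repeated manual approval - the kind of signal the post is talking about - floats to the top.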
February 15th, 2009 by Jacob Ukelson
I was thinking about a comment from Dennis Byron where he asks (and answers) “Are there ten times as many unstructured processes in the world as structured processes just as there is ten times as much unstructured data as structured data?”
So I thought I’d try to take this analogy a bit further. Before I do that, I’ll define business process using a modified Wikipedia definition: “A business process or business method is a collection of related activities or tasks that produce a specific service or product (serve a particular goal) for a particular customer or customers.” Wikipedia actually used the term “structured activity” - but I don’t understand what that means, so I left it out. So now on to the different types of processes:
- Unstructured processes - every instance of the process can differ from the others based on the environment, the content and the skills of the people involved. These are always human processes. They may have a framework or guideline driving them, but only as a recommendation.
- Structured processes - a rigorously defined process with an end-to-end model that takes into account all the process instance permutations. No process instance can stray from the process model. Just as with structured data, there is a specific data model associated with the data, and the data cannot stray from that model - if it does, the data is invalid.
- Semi-structured processes - processes in which a portion of the process is structured, with unstructured processes invoked at times (during exceptions, or when the model doesn’t hold).
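To make the distinction concrete, here is a small sketch (the names and steps are illustrative, not from any product) of how the three regimes treat a process instance that deviates from its model:

```python
from enum import Enum

class Kind(Enum):
    STRUCTURED = 1        # instance must follow the model exactly
    SEMI_STRUCTURED = 2   # structured core, ad-hoc steps allowed as well
    UNSTRUCTURED = 3      # the model is at most a recommendation

def conforms(kind, model_steps, instance_steps):
    """Does a recorded process instance count as valid under each regime?"""
    follows_core = instance_steps[:len(model_steps)] == model_steps
    if kind is Kind.STRUCTURED:
        # like structured data: any deviation makes the instance invalid
        return follows_core and len(instance_steps) == len(model_steps)
    if kind is Kind.SEMI_STRUCTURED:
        # the modeled core must hold; extra ad-hoc exception handling is fine
        return follows_core
    return True  # unstructured: every instance may differ

model = ["submit", "review", "approve"]
deviant = ["submit", "review", "approve", "escalate"]
print(conforms(Kind.STRUCTURED, model, deviant))       # rejected
print(conforms(Kind.SEMI_STRUCTURED, model, deviant))  # tolerated
```

The same deviant instance is invalid under the structured regime and perfectly fine under the semi-structured one - which is the crux of the argument that follows.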
While thinking that through, I came to the conclusion that, as opposed to data, there really is no such thing as a truly structured business process once you get people involved (and most business processes require people sooner or later). If you really want an end-to-end model of a business process that works, the best you can hope for is a semi-structured process.
December 30th, 2008 by Jacob Ukelson
There are three ways to create a system to automate and streamline an existing process in an organization. Two are the common methods used by most organizations today - the first is using a standard, packaged application that implements the process, and the second is through BPM modelling and implementation tools.
The first (which is easiest technically, but hardest organizationally) is to replace the existing process with a packaged application that implements it (e.g. a CRM system) and use that as the basis for the replacement process. The best way to do this is to be willing to adapt to a version of the standard process that is supported by the application and keep customizations to a minimum - otherwise the cost in both time and resources can be overwhelming (and failure is a distinct possibility). Of course, this method only works for well-defined standard processes, and for organizations willing to change their processes to suit the system.
A second way is to try to capture the practical intelligence and tacit knowledge of the people participating in a process, create an explicit model of the process, and then optimize that model. The model becomes the basis for building a specialty application to manage that process. A BPM system is the preferable way to do this, since it makes the implementation of the process easier and more standard from an IT perspective. A BPM system also tends to keep the distance between the model and the implementation smaller than if the application were built without a BPM platform. This method only works for processes where it is justified (and possible) to create an exhaustive model of the process, giving the IT department the detailed requirements it needs to implement it. The discovery and modelling phase is of critical importance - get that wrong and no one will use the resulting application. The goal of discovery and modelling is to leverage people with strong analytical skills to take an organization’s tacit knowledge of the process and turn it into explicit knowledge that can be articulated and programmed. This is a relatively long and difficult requirements process, and it works only for processes that are common enough to justify the expense (of discovery, modelling and implementation) and that can be rigorously defined using analytical techniques.
ActionBase’s HPM provides a third possibility - appropriate both for processes that can’t justify the expense of a full-blown BPM implementation (or just can’t wait), and as a way to gain insight into the practical intelligence and tacit knowledge of an organization without embarking on widespread, never-ending process discovery and modelling. By using ActionDocs and ActionMail as simple proxies for an existing email- and document-based process, a system for managing any human process can easily be set up. It doesn’t need to be complete - the process can easily evolve through usage. If someone was left out of the process they can be added by sending them an ActionMail, and if a document was overlooked it can be added as an attachment. So almost immediately the organization gets all the benefits of a managed process that evolves as it is used.
Over time, a standard core of the process may evolve, and its model can be seen using ActionBase’s server. The organization can then decide to implement the process using method 2 (with the expensive discovery and modelling steps already complete), or leave the ActionBase process in place.
I’ll discuss more about this “third way” methodology in following posts.
December 10th, 2008 by Jacob Ukelson
I am attending the CMG (Computer Measurement Group) conference – the annual get-together for application/performance professionals. It turns out there is really good technology for production-time performance monitoring at any tier – and it is becoming more and more of a commodity covered by the big vendors (IBM, CA, BMC, HP – and if you are interested in the mainframe, ASG and Compuware).
So if that’s covered – what’s left to worry about? Performance problems have been licked, right? Well, it seems that while single-tier performance issues are less common, multi-tier performance problems – which are even harder to diagnose and fix – are now coming to the forefront. Performance tools will need to shift to the next frontier: production problems caused by the relationships and interdependencies between the components that make up a complete application, or between the different transactions going through the same tier at the same time. So if you want to find the reason a modern componentized, tiered application is performing poorly, looking at each performance monitor in isolation isn’t good enough – you need to start understanding and analyzing the relationships between components. The move towards SOA is only going to exacerbate this need. And once you start looking at relationships and interdependencies, the amount of data needed for root-cause analysis grows exponentially, so there will also be a need for tools that help analyze the mountains of data and pinpoint the information relevant to root-cause analysis.
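As a toy illustration of relationship-aware analysis (all component names and latency numbers here are invented), you could correlate each component’s latency series with the end-to-end latency of the slow application, instead of eyeballing each monitor on its own:

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length numeric series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def rank_suspects(app_latency, component_latencies):
    """Rank components by how strongly their latency tracks the
    end-to-end latency of the slow application."""
    scores = {name: pearson(app_latency, series)
              for name, series in component_latencies.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Invented numbers: the app slows down whenever the database tier does
app = [100, 110, 300, 120, 290]
components = {
    "web":   [20, 21, 20, 22, 21],
    "db":    [40, 42, 200, 45, 190],
    "queue": [5, 5, 5, 5, 5],
}
print(rank_suspects(app, components))  # the db tier tops the list
```

Real root-cause tooling obviously does far more (topology, causality, transaction tracing), but even this sketch shows why the interesting signal lives in the relationships between monitors, not in any single one – and why the data volume explodes once you correlate everything against everything.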
Another issue is tying all of those components back to the business - so that if a problem does occur, its business impact can be understood.
Finally, it is also clear that the Mainframe isn’t dead – it shows up everywhere, either as legacy (CICS+DB) or as a hosting platform for Linux server consolidation.
November 10th, 2008 by Jacob Ukelson
I was reading the latest Forrester BPM report on eBizQ and found it to be quite an interesting read - especially for the endorsement it seemed to give human process management as a required extension to BPM, without actually naming it as such. That was my only quarrel with the authors - they expect BPM suites to be extended to handle unstructured, ad-hoc, chaotic (their term) human processes. That makes it sound like handling those types of processes is just a small feature, a small extension that BPM suites should add. In my experience that isn’t the case - building a system to manage these types of human processes is no trivial task, and don’t expect BPM vendors to be able to do it - it requires a different type of thinking, especially since to get people to adopt it you need to unseat an entrenched “competitor”: email. Here are some of the quotes that relate directly to human process management:
“in real life, processes change all the time; in fact, our interviews consistently show that processes never stop changing“
“The outcome of a discounting decision may be captured in the BPMS by integrating or embedding a business rules engine, but the way the decision was made — the reason for the discount — is often recorded in an obscure email thread, if at all.”
“But many real-world, people intensive processes are so rife with exceptions that it’s impossible to model all the permutations in a traditional process modeling tool. These ad hoc, chaotic processes are difficult to support even using today’s BPMS tools”
OK - so even for the most structured processes in an organization, the ones that have actually been implemented via a BPMS, even those processes are constantly in flux - which means that users will almost always need to morph and change the process before the IT department can reprogram the system, no matter how good the tools are. So how is this actually handled in the real world? No surprise here - it is done via email. The quotes above make it clear that no matter how well designed the process implementation is, it can’t anticipate every nuance of the process or every new context. There will always be a need for a tool that gives end users the flexibility to handle the ever-changing requirements and demands of real-life business processes without IT involvement, while still allowing for management, monitoring and optimization. Email provides the flexibility, and an HPMS built on top of email provides the rest. If not, BPM initiatives will bring only limited business value.
So in short - even for companies embarking on enterprise BPMS - remember H comes shortly after B, and you’ll need a good HPMS to round out your BPMS.
November 2nd, 2008 by Jacob Ukelson
I was reading various posts about modeling (in the BPMN sense) and it seems to me that many of them confuse the use of modeling tools to codify requirements with the use of modeling tools to actually generate the code needed to execute the process.
I remember that we started thinking about model-driven development over 10 years ago at IBM Research. If you could only let the business analyst model the process - and then generate the actual executable from that model - what a jump in productivity and agility. It even worked with various toy examples. The difficulty is that designing a detailed enough model of a process to generate an actual executable program required the same effort (if not more) and the same set of skills needed to develop the program implementing the process. So in reality the model became a spec or requirements doc for the developers. You could imagine these models being a good way to implement agile programming for BPM - where the analysts and developers use those tools as part of an iterative development process - but alas, most of their usage is closer to waterfall development than iterative development.
Take a look at the real-life BPEL diagram shown in the Drools blog - it made the process description the worst kind of unmanageable spaghetti. Even with all the complexity shown, it isn’t even close to the most complex business process you can find out there - whether a human process or a structured one.
So are BPMN and BPEL a step forward or backward in enterprise application development? I think they are a good way to collect initial, high-level requirements (BPMN) and a machine-readable, system-independent language for business models (BPEL). Let’s not fool ourselves, though: as with any tool for collecting requirements, it ends up being a recommendation to the developers, not a production code generator. That is, unless you limit the process domain being modeled to processes that consist of a small set of simple, well-defined, recurring, easily tailorable task templates - with relatively simple control-flow logic between the tasks.
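Here is a sketch of that narrow, workable case (the template names and the expense example are invented for illustration): a handful of well-defined task templates chained by trivial control flow, where the “model” really is directly executable:

```python
# A process restricted to a few well-defined, recurring task templates.
# Each template is a pure function from the case document to a new one.
TEMPLATES = {
    "approve": lambda doc: {**doc, "approved": doc["amount"] < 1000},
    "notify":  lambda doc: {**doc, "notified": True},
    "archive": lambda doc: {**doc, "archived": True},
}

def run(model, doc):
    """Execute a model: an ordered list of (template name, optional guard).
    With control flow this simple, the model itself is the executable."""
    for name, guard in model:
        if guard is None or guard(doc):
            doc = TEMPLATES[name](doc)
    return doc

expense_model = [
    ("approve", None),
    ("notify",  lambda d: d["approved"]),  # run only if the guard holds
    ("archive", None),
]

print(run(expense_model, {"amount": 500}))
```

The moment the real process stops fitting a small fixed template set with sequential-plus-guard control flow - i.e. the moment humans start improvising - this stops being code generation and goes back to being a requirements document.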
October 26th, 2008 by Jacob Ukelson
I have been looking at the mainframe market (yes, those IT dinosaurs that were supposed to be finished in the 1990s). It turns out that plenty of the beasts are still around. Mainframes still host around 70% of the world’s business critical data. That means that even if you are using your bank’s web front-end, there is a good chance that one of the tiers in the application still resides on a mainframe.
Not only are mainframes alive and well, so is mainframe software. CICS, Cobol and so on are still in use at many, if not most, enterprise data centers - and they won’t be going away anytime soon. In Q3 2008, IBM System z hardware revenue increased 25% year over year, with double-digit revenue growth in all geographies. MIPS (capacity shipped) grew by 49 percent. And that’s just hardware. I couldn’t find any recent data on the mainframe ecosystem, but here is a chart I found for the 2004 server market ecosystem:
$50B - the market may have changed in the last four years, but I am guessing it is still an impressive number. Given the way this technology quietly hangs around even with so many trying to kill the market, I think we should call mainframes cockroaches rather than dinosaurs. An added benefit: mainframes are actually a “greener” alternative to using a plethora of open systems…
The new uses of these existing, legacy systems (e.g. web interfaces, SOA, RSS and ATOM feeds) are putting demands on the systems that they weren’t originally designed for. That, along with the dwindling number of mainframe skills available, leads me to the key question: how are all this legacy infrastructure and these applications going to be managed and maintained…
October 19th, 2008 by Jacob Ukelson
I was reading an article on Gartner’s “four disruptions that will transform the software industry“. While I was reading it, it occurred to me that three of the four disruptors have the same core: there is a new type of user out there, and they are becoming more vocal about having more control over the tools and applications they use. As John and Claire-Marie Karat wrote in our article ”Affordances, Motivation and the Design of User Interfaces”: “There is a paradox in human behavior that is valuable for designers of applications to keep in mind: Everyone wants to be in control, but nobody wants to be controlled.” This basic truth is driving the “Rise in New Technologies and Convergence of Existing Technologies” disruptor, especially around SOA, device portability and mashups. It is also driving the other two disruptors: “Change in Software User and Support Demographics” and “Revolutionary Changes in Software and How it is Consumed”.
I think that everything Gartner says is true - but it isn’t that futuristic, just an extrapolation from the trends we are seeing now in early adopters. As William Gibson wrote, “The future is already here, it’s just not evenly distributed”. What I think they are missing is that software is going to have to evolve to support a new type of work, not just a new type of worker. Most of today’s packaged apps exist to support the highly structured processes of the “old enterprise” - and I put BPM tools in that bucket. The next generation of enterprise software is going to have to provide much better support for knowledge-work processes. Lotus Notes, MS SharePoint and wikis are a start in providing support for collaboration - but not for tacit interactions (or human processes), which include individualized behaviour and social dynamics. Enterprises are going to need tools for the 80% of human-centric business processes that are currently handled through ad-hoc use of email and documents - a Human Process Management System. As you know from my previous posts, HPMSs will be extensions of the way people use email and documents today as their basic framework for tacit interactions (or human processes), with a focus on traceability and flexibility rather than control.
October 16th, 2008 by Jacob Ukelson
I read an interesting blog post by Ross Mayfield today on email overload. It mentioned something that I have been thinking about for a while - handling exceptions in structured processes (especially B2B). It included an interesting pointer to John Seely Brown and John Hagel’s book The Only Sustainable Edge, which argues that most employee time is spent not executing processes but handling exceptions to them. That meshes well with other things I have read and seen from experience.
The key point for me is that as process automation (through BPM and other applications) becomes more pervasive for standard processes, mechanisms to support human handling of exceptions will become more and more important (since that is where employees spend their time) - which means Human Process Management will come to the forefront.