Saturday, July 26, 2008

On Patagonia Dinosaurs and Disruptive Technologies


Saturday night, at home in Haifa. Today I went along with (most of) my family to visit the fossils of some Patagonia Dinosaurs. The exhibition took place in the local technology museum, and since the last time I was there they have eliminated the museum's parking lot, asking visitors to find parking in the city center -- which is not easy even on regular days, and becomes more difficult when many people somehow have the same idea of spending Saturday noon looking at old lizards... Anyway, after two rounds I found a good parking place, making a mental note that elections for the mayor of Haifa are coming soon.

Looking at the fossils, among the many people who were there, I also looked at a poster explaining the various hypotheses about why these animals became extinct.
This got me thinking in two directions -- when will humanity become extinct, and, getting back to technology -- when does a disruptive technology make previous technologies obsolete? In our case -- will we have a second (or third) generation of event processing technologies that will be disruptive to everything that exists today? Well, we can only speculate at this point, but it is a good topic to think about... This can be one of the topics that we'll ask the senior technologist panel about in the EPTS F2F meeting. More - Later.

Thursday, July 24, 2008

On optimization criteria for EP applications


This picture shows optimization of sitting on chairs; I actually know a person who sits on a big ball when he works, claiming it is good for his back. I read with interest Paul Vincent's report on the OMG Real-Time workshop (since I cannot be everywhere, it is good that other people report on what's happening, and Paul is especially good at reporting on conferences). In this meeting there was a discussion about metrics for how to measure event processing applications. We don't have a standard benchmark yet, and I don't believe in a single benchmark that fits all -- rather, in a collection of benchmarks based on a classification of applications. I would like to go deeper into the issue of "runtime performance" mentioned there -- interestingly, "runtime performance" means different things to different people, and indeed different applications have different requirements. If we just look at the metrics of latency and throughput, then we have the following variations of goal functions (this is probably not a complete list):
  • min (average e2e latency)
  • min (max e2e latency)
  • min (variance e2e latency)
  • min (deviation from time constraints)
  • max (input throughput)
  • max (output throughput)

The metrics are not identical -- in latency, there is a difference between minimizing the average latency and minimizing the maximal latency. For example, in Java the maximal latency can suffer from garbage collection that makes it atypically high, while "real-time Java" implementations that smooth the garbage collection minimize the maximal latency, but the price is that the average latency may grow. Throughput can be measured by input or output events, which are not really identical. Each of these goal functions indicates a different kind of optimization, and this is just by looking at the two parameters of throughput and latency...
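
To make these goal functions concrete, here is a minimal sketch in Java (the language mentioned above in the garbage-collection example) of how the underlying statistics might be computed from a recorded list of end-to-end latencies. The class and method names are my own invention for illustration, and reading "time constraints" as a single fixed deadline is just one of several possible interpretations:

    import java.util.List;

    // Illustrative sketch only -- computes the statistics behind the goal
    // functions listed above from measured end-to-end (e2e) latencies.
    public class EpMetrics {

        // min (average e2e latency): the average over all observed events.
        static double averageLatency(List<Double> latenciesMs) {
            double sum = 0;
            for (double l : latenciesMs) sum += l;
            return sum / latenciesMs.size();
        }

        // min (max e2e latency): the worst case, which in plain Java can be
        // inflated by a single garbage-collection pause.
        static double maxLatency(List<Double> latenciesMs) {
            double max = Double.NEGATIVE_INFINITY;
            for (double l : latenciesMs) max = Math.max(max, l);
            return max;
        }

        // min (variance e2e latency): jitter around the average.
        static double latencyVariance(List<Double> latenciesMs) {
            double avg = averageLatency(latenciesMs);
            double sumSq = 0;
            for (double l : latenciesMs) sumSq += (l - avg) * (l - avg);
            return sumSq / latenciesMs.size();
        }

        // min (deviation from time constraints): here assumed to mean total
        // lateness beyond a fixed deadline -- one possible reading.
        static double deadlineDeviation(List<Double> latenciesMs, double deadlineMs) {
            double dev = 0;
            for (double l : latenciesMs) dev += Math.max(0.0, l - deadlineMs);
            return dev;
        }

        // max (throughput): events per second over a measurement window.
        static double throughput(long eventCount, double windowSeconds) {
            return eventCount / windowSeconds;
        }
    }

Calling the throughput computation once with the input-event count and once with the output-event count gives the two throughput variants in the list above -- which, as noted, are not really identical.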

This poses two interesting questions: will there be a partition of the market according to optimization capabilities, or will we be able to build adaptive software that can be tuned to multiple optimization goals? A speculative sketch of what such tunability might look like follows.
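
In Java terms (again, every name here is my own invention, not any vendor's API), the idea might be an engine that accepts its optimization goal as a pluggable parameter instead of hard-wiring one:

    import java.util.Collections;
    import java.util.List;

    // Purely speculative sketch (invented names): the engine is handed a goal
    // function it should maximize, rather than one fixed objective.
    interface GoalFunction {
        double score(List<Double> latenciesMs, long outputEvents, double windowSeconds);
    }

    class Tuning {
        // One possible tuning: trade worst-case latency against output
        // throughput. The 0.7/0.3 weights are arbitrary, illustration only.
        static final GoalFunction LATENCY_VS_THROUGHPUT = new GoalFunction() {
            public double score(List<Double> latenciesMs, long outputEvents,
                                double windowSeconds) {
                return 0.3 * (outputEvents / windowSeconds)
                     - 0.7 * Collections.max(latenciesMs);
            }
        };
    }

Whether an engine can actually re-tune its internals (scheduling, batching, garbage collection strategy) to maximize whatever function it is handed is exactly the open question. More about performance metrics - later.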


Monday, July 21, 2008

On Historic Truth, Archeological Truth and the Hype Cycle



In one of my past blog posts I referred to Agnon, an Israeli novelist who received the Nobel Prize in Literature; today I am citing another well-known essayist, Ahad Ha'am (literally: "one of the people"), who wrote an excellent essay in which he makes a distinction between two types of truth: archeological truth and historical truth.

Archeological truth is more objective, and is determined according to the archeological findings. Historical truth, on the other hand, relates to the perception, the mythos, sometimes the legends around events -- it is what is perceived, regardless of what actually happened.

Why am I writing all of a sudden about Ahad Ha'am? The association came while reading the amusing blog entry by Paul Vincent, entitled: CEP: hype, or the next best thing since sliced bread?

In our context, the archeological truth looks at the reality of the EP market, while the historical truth looks at the perception. Let's look at the two sides of this equation.

First, as people have realized, it is not very easy to determine the reality of the market. There is variance between the figures cited by analysts, and each of them is probably looking at a certain segment of the market. Pure CEP players are typically privately held and don't provide a public account of their sales figures, while the bigger companies that do, do not isolate the sales of their CEP software from their other software. On the other hand, if the CEP software has been critical to closing a larger deal, then the entire larger deal should be attributed to the CEP software, since deals have a binary nature... In some cases the application the CEP software is used for is a marginal application, and in some cases it is a mission-critical application, and these also should not have the same weight. Tim Bass has proposed "reference clients" as a yardstick, by collecting public announcements and press releases, as done last year. I have not been very impressed by the results and their ability to reflect reality; for example, when IBM acquired Aptsoft, it was published that Aptsoft at that point had 19 active customers, most of them from 2005-2007, yet in the reference customers table they have 4 -- a big difference (I heard from other vendors as well that these numbers are far from reflecting their customer base as they see it). It seems that the majority of sales are not reported as public reference customers -- there are various reasons, and maybe I'll return to the issue of reference customers, but the bottom line is that the archeological truth is not easy to obtain.

Let's talk about historical truth now -- is perception easier to measure? We can measure it by VC investment in a certain area, analyst reports, big vendors' attitudes and customers' perception. Let's look at all these parameters:

  • VC: As the EPTS gatekeeper, I am still getting membership applications from startups whose names I have never heard before. I estimate that there are 20-30 companies financed by VCs, and the fact that new ones are emerging is an indication that the perception of the VC community is that there is potential in EP software.
  • Analysts: While Gartner endorsed this area a long time ago, other analysts have joined. A recent quote from Forrester says: “Forrester Research has seen an increasing level of interest in and adoption of event technologies in our recent data on software decision makers. Based on this interest we have significantly increased our ongoing research focus in this area". Again -- the perception is that this is an area that analysts should watch.
  • Big Companies: Big software companies, in many areas, tend to wait and let the smaller companies play out the first generations. In the past year we have seen two of the big software companies, IBM and Oracle, increase their involvement in the EP area, both through acquisitions and through self-investment. The other big software companies, SAP and Microsoft, are showing signs of interest; again, this is an indication of positive perception.
  • Customers: Last but not least, the customers -- we can see interest in several indications: surveys like the ebizQ market pulse, which gave insight about the perception (I have cited it before); the number of customer participants that the Gartner Event Processing Summit succeeds in attracting (the fact that they succeeded in doing it a second time is an indication); OMG constantly holding CEP sessions in its meetings, like the one that Paul Vincent reports about; and general interest among customers. Note that Gartner, in one of the advertisements for its summit, stated that "event processing is the future of software technology".

Thus, on the perception front, it seems that the indications show acceleration and growth.
Alas, one may claim that these are only indications of perception, not of real substance, and are the results of over-hype. There is some truth in event processing being a current hype; however, this is a natural phenomenon, as the hype cycle theory indicates -- new technologies move through up-and-down hype cycles, and EP seems to be on its way up. Hype is also a good phenomenon, since it acts as a catalyst and increases awareness; however, hype cannot replace substance, and my own opinion is that the perception indications are not a result of hype only. Actually, some of them are helping to create the hype.

The interesting question is how the substance of value to the client, and the derived market opportunity, are viewed by the people who have to put money into EP software (writing in blogs is much easier...).

To (try to) get some answers to this question, we plan two panels at the EPTS F2F meeting ("the 4th event processing symposium"): the first will consist of people from the business side of vendors (small and large) to provide their view; the other will be a panel of customers -- two populations that spend money on this -- and we'll pose some interesting questions to them and summarize the answers as a service to the entire community. Enough for today...