Fast Takes

Home page of John McDowall

Name: John McDowall
Location: Redwood City, California, United States

Thursday, April 14, 2005

Open Source to Open Services

Open source has shown that simplicity can be created in the infrastructure, but it still misses its potential: code still has to be designed to be installed and maintained on various platforms that are changing at uneven rates. It should instead be offered as an open service that can be combined with other services to create new services that live in the network. I do not want software; I want solutions to problems. (I still like software and looking at code, but I really do not need it - figuring out how Google Maps works is fun and interesting, but I do not need to know how it works to use it. To integrate with it I should need to know very little, but integration is still too hard.)

Integration is a problem because most software is not designed to be integrated, just as, until the advent of open source, most enterprise software was not designed to be easily maintained, built, and deployed. A huge effort went into (and still goes into) designing software to be deployed on a large number of target platforms. This effort provides zero value to the consumer of the solution.

If the same effort by smart people went into creating simple open service interfaces as has gone into creating simple maintenance and deployment for open source, we would have a very rich ecology of shared services and a very different economic model. Open source is trying (and in many cases succeeding) to move the economic model to a maintenance model - let's take the next logical step and move to a complete utility model that measures economic value by ease of integration and SLA (Service Level Agreement).

The economics are on the side of shared infrastructure and software as a service. Moving to open services would dramatically increase the impact of the open source community and move the value to the SLA of the service not the portability of the service.

More simplicity

After my last post on simplicity I received an email from an old Teknekron/Tibco colleague, Steve Wilhem, who made a couple of great points that should be shared. I have paraphrased his email, so any errors in interpretation are mine.

The first was around refactoring and how to achieve simplicity (specifically in a framework): refactor constantly, and do not expect to get it right the first time but keep evolving it. The other was more subtle and, I thought, more important: do not try to solve all possible problems - leave the system open enough that the last ten percent or so can be done by the user of the system. I think this is key, as this is typically where we add complexity - trying to solve all the weird edge conditions we can possibly dream up. Let's assume our users are smart and give them the hooks to solve the problem in the way they see fit. This moves any complexity into specific applications rather than the infrastructure, where it does not belong. Thanks Steve, great suggestions.
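The "leave the last ten percent to the user" idea can be sketched in code. This is a minimal, hypothetical example (all names invented here): the framework handles the common case itself and exposes a hook for the edge cases, rather than trying to anticipate every one.

```python
def process_orders(orders, on_unhandled=None):
    """Process well-formed orders; delegate anything odd to the caller's hook."""
    results = []
    for order in orders:
        if "sku" in order and "qty" in order:
            # The simple, common path the framework understands.
            results.append((order["sku"], order["qty"]))
        elif on_unhandled is not None:
            # The user-supplied hook decides what a strange order means.
            results.append(on_unhandled(order))
        # Otherwise the framework skips it rather than guessing.
    return results

# The application, not the infrastructure, carries the edge-case complexity.
fixed = process_orders(
    [{"sku": "A1", "qty": 3}, {"note": "rush"}],
    on_unhandled=lambda o: ("UNKNOWN", 0),
)
```

The framework stays simple because the weird cases live in the application, which is exactly where Steve's point says they belong.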

Wednesday, April 13, 2005

Simplicity Rules!

Sam Ruby recently posted a great presentation on, amongst other things, simplicity. Adam Bosworth posted a long article a while ago in a similar vein. It seems that the importance of simplicity is being recognized.

I am not sure that what is being suggested is simplicity of implementation, as in lack of sophistication; rather it is elegance, or simplicity of the solution. The original Mac was a complex piece of engineering, as was the original Palm, but both had simplicity of use.

Open source drives simplicity by rubbing off the parts of a design that have unnecessary complexity. Open source projects that have community value tend to get simpler - as in easier to build, test, and deploy - because no one wants to go through complex steps again and again, using up their most valuable commodity: personal time. To take an example from Sam's talk, try installing and running WebSphere versus JBoss - last time I tried, I gave up on WebSphere after several hours, while I had JBoss downloaded and running in less than 30 minutes. (This is not a recommendation that programming J2EE is a model of simplicity ;-)).

Rapid evolution towards simplicity is usually the result of several smart people driving a solution to a shared problem. Open source by its nature attracts more people, and as it is a meritocracy it gets rid of overly complex solutions pretty quickly. Having development teams organized in a similar fashion grinds out clean solutions faster too.

Making simplicity a goal in integration is key to success. It can be an aspirational goal - integration may never be point and click - but unless the goal is simplicity, rather than solving many imaginary problems, we will never even asymptotically approach it. So simplicity of solution is always the right goal.

Monday, April 11, 2005

Deploying hosted applications - boxes are bad

I have used hosting providers for many years now, and before that worked with rooms full of computers serving various customer applications. This was one of the first places where I saw the huge value in outsourcing and the value that shared infrastructure brought to customers, but I want a lot more than is being provided today.

We have come a long way in managing the infrastructure for hosting applications. However, the hosting model is still tied to the model of boxed CPUs. Blades are great, and racking and stacking has been a great way to create cheap scalable architectures, but we are still a long way from the ideal case. The ideal case is where I agree the high-level architecture with the hosting provider, then push software to the appropriate part of the system architecture and pay for an SLA and usage. Both SLA and usage are complex numbers to work out, but the degrees of freedom they introduce should make it easy to develop innovative and fair pricing models.

We are a long way from this because the software tools we use to develop are also locked into the box model. This is not surprising, as commercial tools are priced by the box. However, open source tools do not have this limitation. Another way to look at this is from the virtual machine: why should the JVM or CLR be tied to a box? Should it not manage resources for the application and demand more resources as demand increases?

Global computing projects have been leveraging this for years, but each application has been custom designed. Is it not time for this to become a standard part of the network?

Tools and Intermediaries

When I think of intermediaries I typically think of additional tools that I have in my toolbox and can use. They are something I can use within my processes but that are run and managed by someone else - in other words, shared infrastructure.

A simple example of an intermediary would be a data transformation service. This service would accept any message in a specific wrapper (SOAP or REST), along with a URL pointing to a definition of the transformation instructions, and then return the result. Other services could be shipping calculators, tax calculators, etc.
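As a sketch of what such a service's core might do: in a real deployment the message would arrive in a SOAP or REST wrapper and the instructions would be fetched from the supplied URL; since the post does not fix a format for those instructions, this hypothetical stand-in uses a simple field-renaming map.

```python
def transform_service(message, instructions):
    """Apply transformation instructions (here: old field name -> new field name)
    to a message and return the result, as the intermediary would."""
    return {instructions.get(key, key): value for key, value in message.items()}

# Stands in for the definition the service would fetch from the URL.
renames = {"cust_nm": "customerName", "amt": "amount"}

result = transform_service({"cust_nm": "Acme", "amt": 42}, renames)
```

The caller never sees how the transformation is done - only the wrapper, the instructions, and the result, which is what makes the service shareable.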

Whether the service returns the message to the sender or forwards it to another destination seems to determine whether the service is defined as an intermediate service or an endpoint. However, the service should not really care where the message goes next; forwarding should simply be a capability of the service (does it support forwarding?) and the destination a decision made by the application developer - where do I want it to go?
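The point that routing is the caller's choice, not the service's identity, can be shown in a few lines. This is an illustrative sketch, not a protocol: the `forward_to` parameter is an invented name for the forwarding capability.

```python
def service(message, forward_to=None):
    """Do the service's work; reply to the sender or forward, as the caller chose."""
    result = message.upper()      # stands in for whatever work the service does
    if forward_to is None:
        return result             # behave as an endpoint: return to the sender
    return forward_to(result)     # behave as an intermediary: pass it along

# Endpoint style: the result comes back.
reply = service("hello")

# Intermediary style: the application developer chains in the next destination.
chained = service("hello", forward_to=lambda m: m + "!")
```

The same service plays both roles; nothing about its work changes, only where the message goes next.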

Discovery of these shared services is a challenge today, as I typically need a complete ecology working together before they are truly useful. The beginning of an ecology does seem to require a common set of semantics and a way of transforming to and from those semantics. This implies a common schema for the ecology, plus transformation services to move in and out of the common semantics. For the ecology to exist there must be sufficient agreement between all parties on the common semantics. A good example of common semantics is in the retail industry, where ARTS has defined a set of XML Schemas under the heading of IXRetail. This provides the common semantics that enable parts of the ecology to communicate.
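The value of a common schema is that each party only maintains one mapping - into and out of the shared semantics - instead of a pairwise translation for every partner. A toy sketch, with field names invented for illustration (the real IXRetail schemas are XML, not dictionaries):

```python
def to_common(record, mapping):
    """Translate a party's local field names into the shared semantics."""
    return {common: record[local] for local, common in mapping.items()}

def from_common(record, mapping):
    """Translate from the shared semantics back into a party's local names."""
    return {local: record[common] for local, common in mapping.items()}

pos_mapping = {"item_id": "sku", "units": "quantity"}   # point-of-sale system
wh_mapping = {"product": "sku", "count": "quantity"}    # warehouse system

# The two systems interoperate through the common schema, never directly.
common = to_common({"item_id": "X9", "units": 5}, pos_mapping)
warehouse_view = from_common(common, wh_mapping)
```

With N parties this needs N mappings rather than N*N pairwise translators, which is what makes ecology-wide agreement on semantics worth the effort.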

Once the semantics are defined and available to the community, shared services can be created. But how individual services interact and work together is still an issue, as there is no really good way to build collaboration between service providers in the loosely coupled framework that is needed to foster collaboration and independent development.

To make interacting intermediaries a viable source of tools, the ability to bootstrap an ecology needs to be available - anyone got ideas?