
Archive for January, 2013

Winds Change

January 25, 2013

I used to fly a hang glider cross country in the 100-mile-long Owens Valley of California. The valley runs north and south, and the typical prevailing wind is from south to north. We would normally launch early in the morning to catch smooth air and to give ourselves enough time to navigate as far up the valley as possible while there was still daylight and safe flying conditions.

One of the first things you learn in hang gliding, especially when flying cross country, is how to read a weather and wind forecast, and how to detect changes in conditions while en route. On a cool summer morning I launched from Walt’s Point at 9,000 feet with my friend Mike, knowing that the wind was blowing north to south, and that if we were going to get in a flight that day we would have to fly south down the valley and just have fun, without expecting any personal distance records.

All was going well, and instead of going for distance we played around taking pictures as we slowly inched our way south until, off in the distance, we saw dust being blown up along the ground, coming at us from the south: a clear sign that a cold front was moving in, likely bringing strong and turbulent wind.

From day one of my hang gliding training, my instructor Joe Greblo drilled into me that when in doubt, you land and sit it out. In fact, his philosophy was that you never change more than one piece of equipment, or alter more than one part of your launch and flight routine, at a time: if a problem arose, you needed to be able to focus all your attention on the one thing you had changed, instead of dealing with the extra layers of complexity that come with changing several things at once. (That turned out to be good advice for making changes in IT and computing, too.)

In my head, my flight plan was to fly south as far as I could and then land safely. When I saw the dust on the ground I knew there would be additional turbulence at altitude, and sure enough, I found a safe landing site and secured my glider before all heck broke loose. Mike and I had discussed over our shortwave radios finding a safe spot to land before the storm hit, but I ignored the radio during my landing, giving the task of landing in the open desert the full attention it deserved.

As the dust storm passed I keyed the microphone to find out where Mike was, only to catch him yelling at the top of his lungs that he was riding the front north, headed for the great flight of the day, up the length of the valley in record time.

When I think back over the transformative “fronts” that have moved through the data processing world over the last 20 years, I am reminded of the times I turned and went with them (data warehousing, cloud computing, big data) and the times I sat them out (search, Internet advertising). Is the in-memory database one of those fronts we need to go with? What other transformations might I be missing that will be obvious in hindsight?

Categories: Uncategorized

Broken Stacks

January 17, 2013

I actively traded futures and equities before the markets became erratic. During that time I developed hundreds of models and tools for portfolio management. I did all my work on a desktop devoted solely to trading, and it sat in storage for the last couple of years while I got involved in other activities. Last week I went to a conference where I met a company that specializes in risk management, and I got the bug to dust off my homegrown portfolio manager, so I fired up the desktop I hadn’t touched in two years to run through it again. And it was BROKEN.

How could that happen, when none of the code had ever changed? The system had a web interface so I could check it while mobile, built on a Windows/Apache/MySQL/PHP (WAMP) stack. All the services were running, and I went through all the logs trying to figure out why PHP couldn’t find MySQL. I spent hours on the process of elimination. What had changed?
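That process of elimination can start with a simple layer-by-layer check of whether each service actually answers on its port, which separates "the service is down" from "something is blocking the connection." A minimal sketch of that kind of check (not my original tooling; the host and port numbers assume a default local WAMP install, with Apache on 80 and MySQL on 3306):

```python
# Probe each layer of a local WAMP stack by attempting a TCP connection.
# If a service process is running but its port is NOT reachable, suspect
# something between the layers (e.g. a firewall or antivirus) rather than
# the service itself.
import socket


def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Default ports are assumptions for a stock local install.
    for name, port in [("Apache", 80), ("MySQL", 3306)]:
        status = "reachable" if port_reachable("127.0.0.1", port) else "NOT reachable"
        print(f"{name} on 127.0.0.1:{port}: {status}")
```

Had I run something like this first, a running MySQL service with an unreachable port would have pointed at an interfering layer much sooner.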

After ranting for a while about how brittle computing stacks are, it occurred to me that the only thing that could have changed was an antivirus program on the machine, which had probably called home for two years’ worth of updates. Sure enough, when I shut the antivirus program down, everything worked just as it had two years earlier.

In an ideal world, the stack would have done the diagnostics for me, like some monolithic brain providing output in clear English and either healing itself or pointing out the likely culprits. The stack approach to computing has served us well, allowing technology layers to be swapped, replaced, and upgraded as needed, without having to replace everything from application through database to operating system when one component gets an upgrade. But it also introduces overhead in communication between the layers, multiple points of failure, and complexity in isolating problems.

Why do I mention it? Because I think the move to in-memory computing is a great step toward eliminating complexity and a big point of failure in the hardware stack. Cutting out all the overhead of programs that have to talk to hard drives and maintain data on them is a huge boost to application performance and data center productivity. It also means eliminating the hard drives themselves, the data center component most prone to mechanical failure.

I’m sure there are people working on integrated software development and deployment environments that will be akin to eliminating hard drives.   I’m not sure what great leaps they will make, but I’m looking forward to it.

Categories: Uncategorized