
Broken Stacks

I actively traded futures and equities before the markets became erratic. During that time I developed hundreds of models and tools for portfolio management. I did all my work on a desktop devoted solely to trading, and it sat in storage for the last couple of years while I got involved in other activities. Last week I went to a conference where I met a company that specializes in risk management, and I got the bug to dust off my homegrown portfolio manager. So I fired up the desktop I hadn't touched in two years to run through it again. And it was BROKEN.

How could that happen, when none of the code had ever changed? The system had a web interface so I could check it while I was mobile, running on a Windows/Apache/MySQL/PHP stack. All the services were running, and I went through all the logs trying to figure out why PHP couldn't find MySQL. I spent hours on the process of elimination. What had changed?
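When a stack breaks like this, one of the fastest ways to narrow the search is to test whether anything can reach the database port at all, independent of PHP. Here is a minimal sketch in Python (MySQL's default port 3306 on localhost is an assumption; your installation may use a different host or port):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe MySQL's default port (3306 is an assumption) on the local machine.
if can_connect("127.0.0.1", 3306):
    print("MySQL port reachable: look elsewhere (credentials, PHP config).")
else:
    print("Connection refused or blocked: suspect a stopped service, "
          "a firewall, or antivirus interception.")
```

A check like this separates "the service isn't listening" from "something in between is blocking the connection," which is exactly the distinction that matters when security software silently interferes.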

After ranting for a while about how brittle computing stacks are, it occurred to me that the only thing that could have changed was an antivirus program on the machine, which had probably called home for two years' worth of updates. Sure enough, when I shut the antivirus program down, everything worked just as it had two years earlier.

In an ideal world, the stack would have run the diagnostics for me, like some monolithic brain providing output in clear English, and either healed itself or pointed out the likely culprits. The stack approach to computing has served us well, allowing technology layers to be swapped, replaced, and upgraded as needed without replacing everything from application through database to operating system when one component gets an upgrade. But it introduces overhead in communication between the layers, multiple points of failure, and complexity in isolating problems.

Why do I mention it? Because I think the move to in-memory computing is a great step toward eliminating complexity and a big point of failure in the hardware stack. Cutting out all the overhead of programs that have to talk to hard drives and maintain data on them is a huge boost to application performance and data center productivity. It also means eliminating the hard drives, the data center component most prone to mechanical failure.

I’m sure there are people working on integrated software development and deployment environments that will be akin to eliminating hard drives.   I’m not sure what great leaps they will make, but I’m looking forward to it.
