Sunday, May 11, 2008

Data Centers, VMware and all that

It has been some time since I was directly involved in data center operations, even indirectly, beyond working with the teams of engineers who now manage them for deployments. But I ran across this post recently, and in the context of carbon footprints, Green IT and all the buzz in that space, it made interesting reading and is a very proper step in the right direction.

Here are a few of my observations on this subject.

VMware: The virtualization of many servers onto a single piece of hardware has been a revolution in terms of CPU efficiency and TCO, and I strongly suspect the low utilization rates in the article linked above have something to do with the number of non-VM servers still running out there.

Case in point: I had a conversation with a friend recently and he described how he had consolidated a whole room of servers onto a rack of 4 blades, with capacity to spare. Amazing!
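To put some (entirely made-up) numbers on that kind of consolidation, here is a quick back-of-the-envelope sketch in Python; the server count, utilization and blade capacity figures are assumptions for illustration, not anything measured:

    import math

    # Rough consolidation estimate: how many blades might replace a room of
    # lightly loaded physical servers. Every figure here is an assumption.
    physical_servers = 40        # assumed size of the old server room
    avg_utilization = 0.10       # assumed average CPU utilization per old server
    blade_capacity = 4.0         # assumed capacity of one blade, in "old server" units
    target_utilization = 0.60    # leave some headroom on the consolidated hosts

    total_load = physical_servers * avg_utilization   # old-server equivalents
    blades_needed = math.ceil(total_load / (blade_capacity * target_utilization))

    print(f"Total load: {total_load:.1f} old-server equivalents")
    print(f"Blades needed: {blades_needed}")

With utilization numbers that low, a whole room collapsing onto a couple of blades is not so surprising.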

One thing I have come to understand, however, is that VMware poses a bit of a problem for application development teams when it comes to performance testing. Since the servers in Test, Alpha, Beta, QA (or whatever you have designated your performance testing environment) are very likely VMs, as they will be in production, it is hard to gauge whether the other applications' activity on the shared test host at the time of the test mirrors what the production VM environment will actually yield.

We seem to have forgone the 'control' aspect of the lab test owing to the great desire to make all servers virtual. I have yet to discern whether this will have a long-term impact, but it does seem to remove some of the validity of load testing results.
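One rough sanity check, if you suspect the shared host is skewing your numbers, is to repeat the identical load run a few times and see how much the results wander between runs. The sketch below assumes you already have per-run response times from whatever load tool you use; the function, the sample figures and the 20% threshold are my own inventions:

    import statistics

    def run_variability(runs):
        """Given a list of runs (each a list of response times in ms),
        report each run's mean and flag large run-to-run drift that may
        indicate contention from other VMs on the shared host."""
        means = [statistics.mean(samples) for samples in runs]
        drift = (max(means) - min(means)) / statistics.mean(means)
        for i, m in enumerate(means, 1):
            print(f"Run {i}: mean {m:.1f} ms")
        print(f"Run-to-run drift: {drift:.0%}")
        if drift > 0.20:   # arbitrary threshold; tune to your environment
            print("Warning: results vary a lot; shared-host interference is a suspect")

    # Example with made-up numbers:
    run_variability([[105, 110, 98], [160, 170, 155], [102, 99, 108]])

Large drift between identical runs does not prove the other VMs on the host are to blame, but it is a good hint that the test environment is no longer the controlled lab it used to be.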

One other aspect of VMs: try calling your application vendor for support. Nine times out of ten they will tell you it is not supported on VMware. Oops!

Data Centers: I have yet to be in a data center that did not require me to wear a jacket, a sweater and sometimes a coat. Why is this? When did microprocessors get so finicky that they could not be expected to work in anything over 50F? I seem to recall that the spec for these processors comfortably allows ambient temperatures around 80F, and often much higher. Think of all the home PCs that operate just fine even when it is 100F outside and a relatively cool 85F inside. Perhaps there is a reliability curve at work here?

Surely raising the data center temperature a few degrees would yield substantial savings in cooling bills?
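For what it is worth, here is a crude way to frame that. The savings-per-degree figure below is purely an assumption I have plugged in to show the arithmetic; substitute whatever your facilities people actually quote:

    # Back-of-the-envelope cooling savings from raising the set point.
    # Every number here is an assumption for illustration only.
    annual_cooling_bill = 500000     # assumed yearly cooling cost in dollars
    savings_per_degree_f = 0.03      # assumed fractional savings per degree F raised
    degrees_raised = 5               # e.g. nudging the set point from 65F to 70F

    savings = annual_cooling_bill * savings_per_degree_f * degrees_raised
    print(f"Estimated annual savings: ${savings:,.0f}")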

The advent of blade servers has undoubtedly raised the ante here, as they pack so many heat-producing components into so little space.

A better solution, of course, would be to direct the cool air where it is needed: straight into the server racks first.

I would like to think that Intel (now powering Cray computers, by the way!) is working on chips that don't throw off all that waste heat, but that will be quite some time coming, and in the meantime our cooling costs are soaring.