Out-Law Analysis | 18 Aug 2009 | 12:04 pm | 5 min. read
The following article was contributed to OUT-LAW by Owen Cole, Technical Director UK & Ireland, F5.
There’s been a lot of hype over the past year surrounding 'green' computing and the drive to lower the impact of IT and data centres on the environment. While we’re all for the concept of green computing and reducing the impact of computing on our environment, we’re also cognisant of the reality that every IT organisation has to worry about the other kind of green: its bottom line.
The good news is that the two kinds of green overlap. Reducing power consumption and management expenses, and increasing the efficiency of existing resources through consolidation and virtualisation, decreases both the impact of devices on the environment and the strain on IT’s increasingly tight budget.
The easiest way to reduce the impact of any device on the bottom line, be it a server or networking equipment, is to reduce the amount of power it requires. Modern servers often draw variable amounts of power based on the processing power in use by applications.
Similarly, some networking equipment and other devices provide the same functionality while drawing varying amounts of power based on their load and configuration. This can be beneficial in reducing the operating cost of the server or device but, like variable bandwidth costs driven by bursts in usage, it also makes it difficult to estimate annual costs and budget appropriately.
Another simple but often overlooked facet is how many BTUs [British Thermal Units] a given device generates. A device that generates fewer BTUs produces less heat, and thus requires less cooling within the data centre.
The cost of cooling a data centre is greater than the cost of heating one, because much of any heating need is met by the BTUs generated by the devices the data centre houses. Reducing these cooling costs can have a significant impact on the operating expenses of any IT organisation.
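The relationship between electrical draw and cooling load can be made concrete. As a rough rule of thumb (an assumption for illustration, not a figure from the article), essentially all power drawn by IT equipment ends up as heat, and one watt of continuous draw corresponds to about 3.412 BTU per hour. A minimal sketch of that conversion:

```python
# Rule-of-thumb conversion (assumption, not from the article):
# 1 W of continuous electrical draw ~= 3.412 BTU/hr of heat
# that the cooling plant must then remove.
WATTS_TO_BTU_PER_HR = 3.412

def heat_output_btu_per_hr(watts: float) -> float:
    """Approximate heat a device adds to the data centre, in BTU/hr."""
    return watts * WATTS_TO_BTU_PER_HR

# A device drawing 1,463 W emits roughly 4,992 BTU/hr of heat.
print(round(heat_output_btu_per_hr(1463)))  # → 4992
```

The hypothetical 1,463 W input illustrates how closely a device's wattage rating predicts its cooling burden: cutting the watts cuts the BTUs almost one for one.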
How much power a given device or server consumes, and how many BTUs it generates, is something over which IT has no direct control. While IT can certainly use such ratings as part of its purchasing decisions, it cannot change the power draw or heat output of a device once it is deployed. It is simply a cost of doing business.
Yet IT can make decisions, both in purchasing and architecture, which reduce power consumption and heat generation by reducing the number of servers and devices that make up its data centre. Consolidation and virtualisation are both ways in which IT can positively impact its bottom line.
Consolidation has been an ‘initiative’ in IT for many years, and it generally revolves around reducing the number of servers deployed in the data centre to support mission-critical applications. Reducing the number of servers, and thus rack density, reduces both power consumption and heat generation.
Yet capacity needs must be balanced with consolidation efforts, and at some point consolidation is no longer possible. As the volume of users and application usage grows, so must the number of servers – and devices such as application delivery controllers – necessary to scale mission-critical applications.
Striking a balance between scalability and controlling costs is difficult, and thus far it has been nearly impossible to avoid the deployment of additional application delivery controllers as a mechanism for scaling out a data centre.
Whether chassis- or appliance-based, these devices have only added to the power consumed and the heat generated within the data centre, raising operational costs.
Solving this problem requires the application delivery controller vendor both to reduce the power consumption and BTU generation of its devices and to provide a way to scale without increasing the number of devices deployed within the data centre.
A single, chassis-based application delivery controller requiring less power and generating fewer BTUs that also scales via a virtualised bladed architecture can address the growing need for capacity without adversely impacting IT’s bottom line, or the environment.
A new breed of chassis-based application delivery controllers, architected to take advantage of virtualisation not only at the server level but also at the chassis and blade level, can provide better performance in a single unit than could previously be obtained with multiple appliance-based solutions or legacy chassis models.
By virtualising blades and CPUs, essentially creating a single, powerful processing matrix, this new breed of chassis-based application delivery controller can scale nearly linearly.
This internal processing scalability means that every last drop of processing power is used, providing a much higher capacity than its legacy ancestors. By using the available processing power more efficiently, performance per watt increases, making each transaction processed by the application delivery controller cost a fraction, in power terms, of what it otherwise would.
Figure 1:

| | Layer 7 CPS | Watts | CPS per watt | BTUs |
| --- | --- | --- | --- | --- |
| New chassis model | 1,260,000 | 1,463 | 862 | 4,991 |
Consider the comparison in Figure 1. Regardless of the cost per kilowatt-hour, there is a significant saving in power when moving from a legacy chassis model to a new, virtualised chassis model. This has a significant positive impact on the environment as well as on the organisational budget.
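To put the wattage figures into budget terms, a minimal annual-cost sketch can help; the tariff of 0.10 per kWh below is an assumed illustrative value, not a figure from the article:

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours of continuous operation

def annual_power_cost(watts: float, price_per_kwh: float) -> float:
    """Estimated yearly electricity cost of a device running 24/7."""
    kwh_per_year = (watts / 1000.0) * HOURS_PER_YEAR
    return kwh_per_year * price_per_kwh

# The new chassis model's 1,463 W draw, at an assumed tariff of
# 0.10 (currency units) per kWh, purely for illustration:
print(round(annual_power_cost(1463, 0.10), 2))  # → 1281.59
```

Because the cost scales linearly with watts, the same formula shows why any reduction in a device's draw flows directly through to the annual electricity bill, whatever the local tariff happens to be.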
Given the higher performance capacity of the new chassis model, fewer devices are needed to meet the growing traffic management and application delivery needs of today’s IT organisations, which lowers the cost of operations as well as management.
The management costs of such a new breed of application delivery controller are inherently lower than those of a traditional application delivery solution, owing to its virtualised architecture, which allows the IT manager to manage the system as a single entity rather than as individual blades in a larger system.
This reduces the amount of management necessary, and in turn reduces the costs associated with managing the device.
This is especially true as capacity is added, as it would require multiple legacy chassis-based devices to match the processing power of a single virtualised chassis-based system. Each added device must be managed, and adds to the amount of power consumed and BTUs generated, making it much more expensive to scale.
Also having an impact are the BTUs generated by each device. There is a definite cost associated with removing, via cooling, the heat these devices generate, so the lower BTU output of the new breed of chassis-based solution is a definite boon for both the environment and the budget.
It’s rare that an environmentally friendly movement such as Green IT results in reduced costs, especially in its early stages. And yet in the case of this new breed of chassis-based application delivery controllers, that’s exactly the result. With decreased management and power consumption costs and increased performance, these new application delivery controllers are both green as in grass and green as in cash.
F5 is exhibiting at Storage Expo, 14th – 15th October 2009, Olympia, London.