Investing in new IT to create value and add other benefits to a business is a challenge. All too often the benefits are never fully realized, as the theoretical promises of the technology are never achieved due to the pragmatic constraints of delivering IT services in a real operational environment.
For instance, virtualization technology, which allows multiple independent virtual computers to run simultaneously on a single physical computer, has been readily adopted by many firms with the objective of increasing hardware utilization, saving costs and improving service through easier deployment. It has been suggested that it could reduce server idle time by 70%, yet statistics show that 44% of the organizations that have deployed server virtualization are unable to say whether or not the deployment has been successful. Achieving the full benefits of virtualization is a harder challenge than many companies first thought.
The primary reason for the gap between expectation and reality is that there is far more to delivering IT services than simply implementing new technology. With virtualization, for example, routine maintenance activities such as server patching become more complex: every month (or even more frequently) vendors such as Microsoft release updates to their operating environments to resolve problems and maintain operational integrity.
These updates typically require servers to be shut down and restarted, a task that becomes more laborious when an additional component (i.e. the virtualization software) is added to the mix.
What can be done to improve the odds of success?
At PA, we’ve developed a simple set of controls that can be implemented to maximize the cost savings derived from virtualization, whilst also ensuring that service quality does not deteriorate.
Build your service catalogue
Before you embark on virtualization it is important to understand what is currently in place, firstly in terms of hardware and software, and then in terms of how these elements combine to support service delivery.
Unfortunately, few organizations have a definitive and centralized inventory of their IT, and where one exists it is often antiquated and considered the realm of the “technical teams”. Information is updated sporadically, and key details can be missing, such as who owns the equipment, what services it supports and who can authorize changes to it. Inventory information should be shared across the entire organization, allowing teams to spot inaccurate information and correct it.
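To make this concrete, a catalogue entry might capture the key details listed above. This is a minimal sketch; the field names and sample data are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical service-catalogue record: the fields mirror the details the
# article says are often missing (owner, supported services, change authority).
@dataclass
class InventoryRecord:
    hostname: str
    owner: str                       # who owns the equipment
    services: list = field(default_factory=list)  # what services it supports
    change_authorizer: str = ""      # who can authorize changes to it
    last_verified: str = ""          # when the record was last confirmed (ISO date)

catalogue = [
    InventoryRecord("web-01", "E-commerce team", ["online store"], "J. Smith", "2024-01-15"),
    InventoryRecord("db-01", "Data team", ["online store", "reporting"], "A. Jones", "2024-01-10"),
]

# Because the catalogue is shared, any team can flag stale entries for review.
stale = [r.hostname for r in catalogue if r.last_verified < "2024-01-12"]
print(stale)  # → ['db-01']
```

Storing the verification date alongside ownership makes "antiquated" records visible to everyone, rather than something only the technical teams can judge.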
Understand the priorities
As part of any inventory, it’s important to understand the business priorities associated with each component. Key elements of any IT infrastructure, such as high-availability servers, may support business-critical systems that cannot be interrupted at certain times, while others are less important and can be updated without disruption to users. Using the operating-system patching example, the less critical components can be patched at a set time each month without any additional planning, whereas updates to business-critical systems require coordinated planning for each upgrade.
Virtualization adds another layer of complexity, as a single unit of hardware can simultaneously support multiple applications. Virtualized hosts often need patches themselves, meaning all services on that hardware must be interrupted at the same time. Logical servers should therefore be grouped appropriately and sited on common underlying hardware according to the applications they support and the users who access them.
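The grouping step can be sketched as follows. The server names and priority labels are hypothetical; the point is simply that each priority band maps to its own pool of physical hosts, so patching a host only interrupts services of the same criticality.

```python
from collections import defaultdict

# Illustrative assignment of logical servers to business-priority groups.
servers = {
    "erp-app": "critical",
    "payroll-db": "critical",
    "intranet": "standard",
    "test-env": "standard",
}

# Group logical servers by priority; each group would then be placed on
# common underlying hardware so host-level patches share a disruption window.
groups = defaultdict(list)
for name, priority in servers.items():
    groups[priority].append(name)

for priority in sorted(groups):
    print(priority, groups[priority])
```

With this layout, restarting the host that carries the "standard" group never touches the business-critical systems.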
Automate or separate
In most cases, if servers are grouped appropriately, with those of higher priority separated for careful management, patching and maintenance activities can be automated. This can enhance delivery efficiency with no reduction in service quality, resulting in significant cost savings.
Looking again at the patching example, patches for critical servers often need to pass exhaustive testing before they can be applied. The patch must then be applied at a specific time, with engineers on standby should anything go wrong. Naturally, this is a lengthy and costly activity. In these cases it is often unwise to virtualize such servers, and if they have already been virtualized it may be necessary to separate them out. However, by understanding the differences in priorities, only some servers are constrained in this way, while the majority can be patched automatically.
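The "automate or separate" rule above can be expressed as a simple routing decision. This is a sketch under assumed names: servers tagged critical are sent for planned, engineer-attended changes, and everything else joins the automated monthly run.

```python
# Hypothetical priority labels; "critical" is the only band that forces
# a manually planned change window in this sketch.
def plan_patching(servers):
    """Split servers into auto-patchable and manually planned groups."""
    auto, manual = [], []
    for name, priority in servers.items():
        (manual if priority == "critical" else auto).append(name)
    return auto, manual

auto, manual = plan_patching({
    "erp-app": "critical",
    "intranet": "standard",
    "test-env": "standard",
})
print(auto)    # → ['intranet', 'test-env']
print(manual)  # → ['erp-app']
```

The cost saving comes from the asymmetry: the expensive, attended process is reserved for the short `manual` list, while the majority is handled without planning effort.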
Finally, once any IT process is in place, it must be governed to ensure it meets expectations, and this governance should take place outside the technical teams. To influence behaviors, clear reporting on progress and results should be provided. In the case of patching, metrics on which servers have been patched are essential, and producing them need not be time-consuming: it is usually a case of making existing statistics available rather than generating new ones.
On a technical level, various management suites can be used to monitor such activities and provide feedback on their success.
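A patch-compliance report of the kind described can be derived from records that already exist, rather than from new data collection. The record format below is an illustrative assumption.

```python
# Hypothetical patch records, e.g. exported from an existing management suite.
records = [
    {"host": "web-01", "patched": True},
    {"host": "db-01", "patched": False},
    {"host": "app-01", "patched": True},
]

# Summarize existing statistics into the governance metrics the article
# describes: overall coverage plus the list of outstanding servers.
patched = sum(1 for r in records if r["patched"])
coverage = 100 * patched / len(records)
unpatched = [r["host"] for r in records if not r["patched"]]

print(f"{coverage:.0f}% patched; outstanding: {unpatched}")
```

A short summary like this is enough for governance outside the technical teams: it shows progress and names the exceptions without exposing any tooling detail.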
Technology such as virtualization has the potential to create business value, but it needs to be implemented with careful consideration of how it will be managed to deliver IT services in the real world. PA’s experience is that all aspects of how services are managed need to be considered. Even seemingly straightforward management tasks, such as patching, need to be thought through clearly. Only then does it become possible to maximize the benefits of new technology.
To speak to one of our experts on virtualization, please contact us now.