ISSN: 1391 - 0531
Sunday November 4, 2007
Vol. 42 - No 23
Financial Times  

IT Organizations: Operating in a world of constant change

By Amer Khan

The nature of business today is that change is the only constant. Organizations, be they public or private entities, are faced with change as a result of reorganization, business expansion, competition, the impact of new technology, mergers and acquisitions, industry or government regulatory controls and a myriad of other factors.

The reality is that any change affecting an organization will have a flow-on effect on the IT organization.

One can say that an organization’s ability to adapt to change is directly related to its IT system’s ability to adapt to those changes. There are many examples of organizations that have suffered considerable harm to their reputations and market values through IT disasters that resulted from poorly implemented systems, and upgrades that went wrong.

From the release of the first commercially available relational database system in 1979, to support for Very Large Database (VLDB) requirements in the late 1990s, to databases for grid computing environments in recent years -- the last 30 years have seen many important innovations with new server architectures emerging to support mission critical systems.

In the past, customers had fewer choices in server architectures, as symmetric multiprocessing (SMP) servers were effectively the de facto standard for UNIX-based applications. Today, however, we are witnessing the emergence of architectures such as blade servers and clustered servers, and of new operating systems such as Linux.

Back then, moving from one vendor’s SMP server to another was relatively simple as benchmarks could be conducted to ensure that the new server would deliver the required performance.

Today, customers looking to migrate from a UNIX SMP architecture to a Linux architecture based on blade servers face a significantly more complex task. The potential for error is higher, and mistakes can lead to decisions with disastrous results.

With the introduction of grid computing, data centres have changed fundamentally in the way they look and operate: instead of silos of disparate resources, organizations now cluster low-cost commodity servers and modular storage arrays into shared pools in a grid.

Databases built for grid environments have enabled organizations to improve user service levels, reduce downtime, and make more efficient use of their IT resources while still increasing the performance, scalability and security of their business applications.

Nevertheless, managing service level objectives continues to be an ongoing challenge. Users expect fast and secure access to business applications 24/7, and IT managers must deliver without increasing costs and resources.

Databases play a key role in ensuring high availability. In the next generation of databases, the ability to run real-time queries on a physical standby system for reporting, to perform online, rolling database upgrades by temporarily converting a physical standby into a logical standby, or to use a snapshot standby to support test environments can all help ensure rapid data recovery in the event of an IT disaster.
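
As a simple illustration of the reporting scenario, the sketch below runs a read-only query against a physical standby instead of the production system. It assumes the python-oracledb driver, and the connection string and table names (STANDBY_DSN, orders) are placeholders rather than anything named in this article; treat it as a sketch of the idea, not a recipe.

# Minimal sketch: offload a reporting query to a read-only physical standby
# while the primary handles transactional work. Assumes the python-oracledb
# driver; STANDBY_DSN and the orders table are illustrative placeholders.
import oracledb

STANDBY_DSN = "standby-host/orclsb"   # placeholder connect string
REPORT_SQL = "SELECT region, SUM(amount) FROM orders GROUP BY region"

def run_report(user, password):
    # Connect to the standby opened read-only; redo shipped from production
    # is still being applied, so the report sees near-real-time data.
    with oracledb.connect(user=user, password=password, dsn=STANDBY_DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(REPORT_SQL)
            return cur.fetchall()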

Application Performance Testing is a Necessity, Not a Luxury

To understand the impact of application performance testing on businesses, let us take a closer look at a key IT issue for organizations in relation to managing change. During the lifespan of any application system, changes are a fact of life, but the complete impact of these changes has to be known before the application goes into production. Common system changes include:

- Moving an updated application from a testing environment to a production environment
- Upgrading or patching the database or operating system
- Changes to the database schema
- Changes to storage or network
- Testing a potential new hardware platform (e.g. comparing UNIX platforms)
- Testing a potential new operating system (e.g. migrating from Windows to Linux)

To give this process some structure, a range of tools has been released to help customers manage it and to test application performance. While such tools make testing easier, many of them require a significant investment of time and effort to gain a functional understanding of the underlying application before test workloads can be generated. The bigger issue, in the vast majority of cases, is that the resulting workloads are largely artificial.

Despite extensive testing and validation, which are both time-consuming and expensive, the success rate has traditionally been low: many issues still go undetected and application performance can be affected, leading to potentially disastrous outcomes.

To help customers deal with application performance testing, the latest release of the industry’s leading database incorporates new features that allow customers to capture a production workload, which can then be “replayed” on a test system to show how the application functions in a new environment.

The key difference in this approach is that all external client requests directed to the database are captured, so it is the real production workload, rather than an artificial one, that is replayed on the test system.

The replay will surface any errors or unexpected results (e.g. a different number of rows returned by a query) through the comprehensive reporting system provided.
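
The general shape of such a capture-and-replay cycle can be sketched as follows. The article does not name the product’s API; the sketch assumes Oracle’s DBMS_WORKLOAD_CAPTURE and DBMS_WORKLOAD_REPLAY packages together with the python-oracledb driver, and the capture names, directory objects and duration are placeholders.

# Hedged sketch of a capture-and-replay cycle. Assumes the
# DBMS_WORKLOAD_CAPTURE / DBMS_WORKLOAD_REPLAY PL/SQL packages, the
# python-oracledb driver and DBA-privileged connections; CAPTURE_DIR and
# REPLAY_DIR are placeholder directory objects.
import oracledb

def capture_production_workload(prod_conn, hours=1):
    cur = prod_conn.cursor()
    # Record all external client requests hitting the production database
    # into CAPTURE_DIR for the given duration (in seconds).
    cur.callproc("DBMS_WORKLOAD_CAPTURE.START_CAPTURE",
                 ["peak_hour_capture", "CAPTURE_DIR", hours * 3600])

def replay_on_test_system(test_conn):
    cur = test_conn.cursor()
    # Preprocess the capture files (already copied to REPLAY_DIR), then
    # replay them against the test system.
    cur.callproc("DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE", ["REPLAY_DIR"])
    cur.callproc("DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY",
                 ["peak_hour_replay", "REPLAY_DIR"])
    cur.callproc("DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY")
    # External replay clients drive the captured requests; START_REPLAY
    # signals them to begin.
    cur.callproc("DBMS_WORKLOAD_REPLAY.START_REPLAY")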

(The writer is Senior Sales Manager, Technology Sales SAGE (West), Oracle Corporation).

 


