IT Automation and Its Relationship With Configuration Testing

Posted by Sam Lee

Why IT Automation Needs Configuration Testing

Companies wishing to leverage their existing IT infrastructure investments to take advantage of the cloud need to carefully evaluate what's involved. Even after much deliberation, once the benefits have been quantified and small-scale pilots and trials have been completed successfully, a wholesale migration to the cloud can still be complicated and fraught with risk.

There are distinct advantages to being in the cloud, such as expected cost savings, faster response times, and better availability of services on a 24/7 basis. By provisioning your computing platform using cloud technologies, you'll be offering Software as a Service (SaaS) to all of your corporate IT users, whether internal or external. Cloud technologies also involve virtualization, one of the main enabling building blocks on the road to always-on, marginal-cost computing. Virtualization has long been employed in traditional data centers to improve resource utilization on hardware platforms, generating other benefits such as high availability and improved performance through load balancing.

Taken together, cloud computing and virtualization can be seen as part of an overall trend in enterprise IT: a drive towards autonomic computing, a vision in which the IT environment manages itself based on perceived user activity and service loads. IBM defines autonomic computing as "the self-managing characteristics of distributed computing resources, adapting to unpredictable changes while hiding intrinsic complexity to operators and users". The aim is to develop computing systems capable of self-management, in order to overcome the rapidly growing complexity of computer systems management and to reduce the barriers to further growth.

Central to all of this is the concept of DevOps: a set of emerging principles, methods, and practices for better communication, collaboration, and coordination between the previously separate disciplines of software development and IT operations. DevOps developed as a bridging discipline out of the collaborative work undertaken between software engineers/application developers and system administrators/infrastructure architects. To meet the goal of rapidly producing software products and services that run seamlessly on top of existing enterprise IT infrastructures, organizations needed a better understanding of the interdependence of the development and operations disciplines, which had previously been treated as completely separate silos of automation.

Conceptually, DevOps can be seen as the intersection of development, technology operations, and quality assurance. Many of the people and ideas involved in DevOps emerged from enterprise systems management and agile infrastructure/operations backgrounds. Companies with very frequent software releases, such as Flickr, built DevOps capability in order to support ten or more software deployments per day. Companies that produce multi-focus or multi-function applications may have even higher daily deployment cycles, which is where mature DevOps capabilities become a must in order to ensure consistent delivery of new software as part of the overall application lifecycle management process.

Adoption of DevOps is driven by factors such as the following:

1. Wide usage of Agile and other rapid development methodologies and processes.

2. Increasing use of data center automation, configuration management, and testing tools, such as Chef and Chef Server, and other open source tools such as Puppet.

3. Ubiquitous cloud infrastructure and virtualized environments from internal and external providers.

4. Ever-increasing demand from business and application unit stakeholders for a faster rate of software releases.

DevOps targets software product delivery, quality testing, feature development, and the control of maintenance releases, in order to improve security and reliability while enabling faster development and deployment cycles. In practice, DevOps is about fostering a collaborative and productive relationship between development and operations teams. It also increases efficiency and reduces the production risks associated with frequent software release changes.

Other disciplines have also evolved to tackle the challenges of IT complexity, configuration richness, and software reuse. ITIL Change and Release Management, a subset of the IT Infrastructure Library, is devoted exclusively to these problems: a best-practices framework for managing complex IT systems. Other frameworks, such as the Enterprise Continuum within The Open Group Architecture Framework (TOGAF), focus on software and architecture reuse.

Central to both frameworks is the configuration management database (CMDB), a repository in which the definitive software and hardware libraries (DSL and DHL) of complete IT systems are recorded. In the desired configuration state, every configuration item in the IT infrastructure, whether hardware or software, has been detailed, its relationships clearly defined, and its changes tracked over time. Deviations from this desired state are known as configuration drift, opening the door to uncontrolled change and release with negative consequences.
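To make configuration drift concrete, here is a minimal Python sketch, using hypothetical hosts and attributes rather than any particular CMDB product, that compares a desired-state record against an observed configuration and flags every deviation:

    # Minimal configuration drift check -- hypothetical data, standard library only.
    # A real CMDB would hold far richer configuration items and relationship data.

    desired_state = {
        "web-01": {"nginx_version": "1.24.0", "min_tls": "TLSv1.2", "open_ports": [80, 443]},
        "db-01": {"postgres_version": "15.4", "open_ports": [5432]},
    }

    observed_state = {
        "web-01": {"nginx_version": "1.24.0", "min_tls": "TLSv1.0", "open_ports": [80, 443, 8080]},
        "db-01": {"postgres_version": "15.4", "open_ports": [5432]},
    }

    def detect_drift(desired, observed):
        """Return (host, attribute, expected, actual) tuples for every deviation."""
        drift = []
        for host, attributes in desired.items():
            actual = observed.get(host, {})
            for key, expected in attributes.items():
                if actual.get(key) != expected:
                    drift.append((host, key, expected, actual.get(key)))
        return drift

    for host, key, expected, actual in detect_drift(desired_state, observed_state):
        print(f"DRIFT on {host}: {key} expected {expected!r}, found {actual!r}")

Each flagged deviation would then feed into the change and release process rather than being patched ad hoc.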

It's easy to see why DevOps has evolved as a response to increased enterprise IT complexity and the need for better controls when software releases are deployed to distributed computing environments. Take a typical web server environment: the intrinsic complexity of an interconnected and heterogeneous web server infrastructure, perhaps hosting hundreds of individual web applications, makes configuration management and review a primary step in testing and deploying each and every application.

Even if we simplify our analysis and restrict ourselves to security and policy management alone, it only takes a single vulnerability to undermine the security of the entire infrastructure. Small, seemingly unimportant issues can quickly escalate into severe risks for other applications on the same server, and the integrity of the whole environment is only as strong as its weakest link. To mitigate these problems, it is of the utmost importance that an in-depth review of configuration and known security issues be performed, with exhaustive testing prior to release.

To preserve the security of the application itself, proper configuration management of the web server infrastructure is necessary. Elements such as the authentication servers, the back-end database servers, and the web server software itself all present possible risks and new vulnerabilities, especially if they haven't been properly secured and reviewed. For example, a web server vulnerability that allows a remote attacker to disclose the source code of the application, a flaw that has arisen a number of times in both web servers and application servers, could compromise the application, as anonymous users can use the information disclosed in the source code to mount attacks against the application or its users.
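As a small illustration of what such a review might automate, the sketch below (Python standard library only, with a hypothetical URL) checks an HTTP response for headers that disclose server software and version information, a common finding in web server configuration reviews:

    # Illustrative information-leakage check for a web server configuration review.
    # The URL is hypothetical; uses only the Python standard library.

    import urllib.request

    def check_server_headers(url):
        """Return warnings for headers that disclose server software details."""
        with urllib.request.urlopen(url, timeout=10) as response:
            headers = dict(response.getheaders())
        findings = []
        for header in ("Server", "X-Powered-By", "X-AspNet-Version"):
            value = headers.get(header)
            if value:
                findings.append(f"{header} header discloses: {value}")
        return findings

    # Hypothetical usage:
    # for finding in check_server_headers("https://intranet.example.com/"):
    #     print("WARNING:", finding)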

Companies are finding that enabling IT automation, with configuration testing of the management infrastructure and unit testing for the infrastructure itself, requires the following steps:

1. The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and the impact this has on its security.

2. All the elements of the infrastructure must be reviewed in order to make sure that they don't hold any known vulnerabilities.

3. A review needs to be made of the administrative tools used to maintain all the different components.

4. The authentication systems, if employed, need to be reviewed in order to ensure that they serve the needs of the application and that they cannot be manipulated by external users to gain unauthorized access.

5. A list of the ports required by the application should be maintained and kept under change control, as shown in the sketch after this list.
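The port baseline in step 5 lends itself to simple automated verification. The sketch below (hypothetical baseline, Python standard library only) compares the ports actually accepting connections on a host against the approved, change-controlled list:

    # Port baseline audit sketch -- the approved list is hypothetical and would
    # normally be pulled from the change-controlled configuration record.

    import socket

    APPROVED_PORTS = {22, 80, 443}
    PORTS_TO_CHECK = sorted(APPROVED_PORTS | {21, 23, 25, 3306, 5432, 8080})

    def is_open(host, port, timeout=0.5):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def audit_ports(host):
        open_ports = {port for port in PORTS_TO_CHECK if is_open(host, port)}
        return open_ports - APPROVED_PORTS, APPROVED_PORTS - open_ports

    unexpected, missing = audit_ports("127.0.0.1")
    print("Unapproved ports open:", sorted(unexpected) or "none")
    print("Approved ports not listening:", sorted(missing) or "none")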

The above are just some of the considerations companies need to assess before porting their existing enterprise IT and legacy architectures onto the cloud. Distributed computing environments are, by their nebulous nature, complex and difficult to manage. Without modern disciplines and processes such as configuration management and testing, coupled with techniques that embody aspects of autonomic computing through varying degrees of IT automation, such as virtualization and systems management frameworks, modern enterprise IT wouldn't have been possible, and the cloud computing paradigm wouldn't exist.

Indeed, it can be argued that the foundation of cloud computing is built on pervasive virtualization. Companies now use virtualized operating systems and applications at every application tier throughout the enterprise. This has brought about the realization of cloud computing benefits through a shared pool of preconfigured, flexible, and fully integrated computing resources. Companies are now able to deliver better IT services at lower cost, more reliably, and more quickly than in traditional data center environments.

Although the benefits of the cloud and virtualization are widely understood, considerable obstacles remain that prevent or slow down implementation. Complexity and the management of component interdependencies remain key challenges.

Modern enterprise IT architectures embody a best-of-breed approach, comprising a large variety of devices, hardware, and software from multiple vendors, slowly built up as requirements change over time. On closer inspection, the environment is often underutilized, sometimes over-utilized, and almost always in a complex and fragmented network configuration. The components are time-consuming and costly to manage, configure, and provision. IT departments ideally want to improve service delivery to their end users or customers, but in practice they spend significant time and money getting the pieces to work together, planning upgrades and enhancements, devising workarounds, and tuning the environment for better performance.

One new approach to managing cloud migrations involves provisioning pre-built and tested IT automation platform components with rigorous configuration testing processes and procedures. These components, usually employing virtualization techniques, provide the following five layers of functionality:

1. Compute: provided by a next-generation data center platform that unifies compute, network, and storage access. Optimized for virtualization and designed with open, industry-standard technologies, which reduce total cost of ownership and improve business agility.

2. Network: software network switches deliver virtual networking services and capabilities to virtual machines, bridging the server, storage, and network management domains within the platform.

3. Storage: best-of-breed storage and virtualization technologies built for availability and scalability, underpinned by Storage Area Network (SAN) technology.

4. Virtualization: all application servers are virtualized, with high availability and dynamic resource scheduling. Provisioning is simplified through customizable templates, and workloads are managed from a centralized console.

5. Management: tools at each layer of the platform provide views of that layer's configuration, resources, and usage, managing the configuration and compliance of the overall platform and simplifying deployment and integration into existing environments.
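One way to keep such a platform honest is to describe the five layers declaratively and test the description before provisioning. The sketch below is purely illustrative: the layer names follow the list above, while the providers and fields are hypothetical.

    # Hypothetical declarative description of the five platform layers, with a
    # simple completeness check a provisioning pipeline might run before deployment.

    REQUIRED_LAYERS = {"compute", "network", "storage", "virtualization", "management"}

    platform_spec = {
        "compute": {"provider": "blade-chassis", "config_tested": True},
        "network": {"provider": "virtual-switch", "config_tested": True},
        "storage": {"provider": "san-array", "config_tested": True},
        "virtualization": {"provider": "hypervisor-cluster", "config_tested": True},
        "management": {"provider": "central-console", "config_tested": False},
    }

    def validate_platform(spec):
        """Flag missing layers and layers that have not passed configuration testing."""
        problems = [f"missing layer: {layer}" for layer in REQUIRED_LAYERS - spec.keys()]
        for layer, details in spec.items():
            if not details.get("config_tested"):
                problems.append(f"layer '{layer}' has not passed configuration testing")
        return problems

    for problem in validate_platform(platform_spec):
        print("BLOCKER:", problem)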

Topics: configuration testing, SaaS, Agile, automation
