Virima Data Center Discovery Best Practices

Executive Overview

If you were to ask CIOs and IT executives “What are your top data center challenges?”, responding quickly and efficiently to shifting business priorities and elevating IT’s value to the enterprise would undoubtedly be at the top of many lists. To that end, forward-thinking IT professionals are looking at initiatives to transform the data center with new technologies and practices.

What’s on the horizon for your IT organization? Industry surveys frequently mention cloud computing, digital transformation, virtual machines, containers, automation, software-defined networking, and security safeguards as common IT projects. Many IT professionals are considering a move to converged infrastructure to help break down a proliferation of silos. Reducing power and cooling requirements by replacing aging equipment with energy-efficient hardware is another common goal.

However, you can’t transform (consolidate, migrate, optimize, virtualize) what you don’t know.

Data center discovery, an accurate inventory of your data center ecosystem and assets, may seem mundane, but it is the essential linchpin to implementing service improvements successfully. Any effort to transform your data center based on incomplete information, without attention to the interdependencies among hardware, software, data, and users, will likely threaten existing business continuity.

Additionally, many organizations have limited IT personnel resources with the expertise (or free time) necessary to conduct a thorough discovery of all IT assets, configurations, relationships and dependencies.

This white paper examines the challenges of embarking on a thorough data center discovery and offers suggestions to overcome the obstacles commonly encountered, minimize disruptions and deliver the best return on IT investment:

  • The Necessary Voyage of Discovery: What Have You Got? How Do The Pieces Fit?
  • Success is in the Details: What You Need to Know (Servers, Storage, Networks, Applications, Services and Middleware, Contracts)
  • Getting the Discovery Job Done

The Necessary Voyage of Discovery

“Doesn’t the IT department already know what hardware and software they have, how it interacts and which users those assets support? After all, IT supports its systems every day.”

It’s a valid question, and the answer, unfortunately, is probably “no.” Or, not in the detail required to successfully complete a data center move, consolidation or cloud migration without running into serious problems.

The discovery phase is the critical first stage of a data center transformation project. In a complex environment, this phase can last months, particularly if performed manually. But businesses are not static: during an ongoing discovery phase, you might add new equipment, decommission old equipment, create new virtual instances, or modify applications. Without care, the discovery deliverable could be outdated before it is complete.

What Have You Got?

Many organizations embark on major data center initiatives without first understanding what assets they have and their relationships to applications, business processes, and internal or external services.

If you lack a comprehensive and up-to-date inventory of your IT assets, overlooking an asset, service, or dependency could lead to the loss of critical services during a project.

Most data centers have evolved piece by piece over the years, with new servers introduced to support newly computerized business functions. Perhaps a merger or acquisition amalgamated heterogeneous hardware and software, and the people most familiar with the acquired systems left the company post-merger. To accommodate new or growing databases, more storage units were added, along with networking components to improve connectivity and bandwidth.

Conducting a visual inspection of hardware and using a spreadsheet checklist proves to be an error-prone exercise, especially when equipment is spread across multiple locations.

And what about all of those virtual assets? Think of all the hypervisors, operating systems, applications, databases, middleware, and other supporting services that can run on a single physical blade server. With all of this to account for, it is no wonder things get overlooked.

How Do The Pieces Fit Together?

Knowing what you have in your data center isn’t enough. You must understand how every piece interrelates and what business processes each supports, essentially “what talks to what.” And if employees with critical knowledge leave your company, that knowledge may go with them unless it has been captured in up-to-date documentation.

Here are some questions to consider: Which storage units, networking gear, and physical or virtual servers support which applications? Who “owns” and uses those applications? What business processes depend on each application? What dependencies exist between applications and technologies?

Without first answering these fundamental questions, changes to IT infrastructure can leave some applications not working at all, because vital components were never incorporated into the new environment or necessary linkages were broken.

Success is in the Details: What You Need to Know

It’s the details that are most likely to trip up a major data center project. One piece of seemingly trivial hardware that isn’t moved over to the new environment can shut down a vital system or, because of interdependencies, multiple systems.

Here is a sample data center checklist:

1. Servers

Servers are the workhorses of your data center. Information collected for every physical server should include: manufacturer and model number; physical attributes (memory, CPU, storage, NICs, HBAs); firmware; location; MAC addresses; IP addresses and VLANs; configuration settings; operating systems or hypervisors; virtual servers running on each physical server; applications or services running on each physical or virtual server; and known interrelationships between servers. Keep in mind that not only are application and database servers important, but so are the crucial infrastructure servers that support “under the hood” functionality.
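
To make this checklist concrete, the attributes above can be captured as a structured record. Below is a minimal sketch in Python; the ServerCI class and its field names are illustrative choices, not a prescribed schema.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class ServerCI:
        """Illustrative configuration item (CI) record for one physical server."""
        manufacturer: str
        model: str
        location: str                     # e.g., site / room / rack / slot
        cpu_count: int
        memory_gb: int
        storage_gb: int
        firmware_version: str
        mac_addresses: List[str] = field(default_factory=list)
        ip_addresses: List[str] = field(default_factory=list)
        vlans: List[int] = field(default_factory=list)
        os_or_hypervisor: Optional[str] = None
        virtual_servers: List[str] = field(default_factory=list)  # VMs hosted here
        applications: List[str] = field(default_factory=list)     # apps/services running here
        related_servers: List[str] = field(default_factory=list)  # known interrelationships

    # Example entry (all values are placeholders):
    web01 = ServerCI(
        manufacturer="ExampleCorp", model="X100", location="ATL-DC1/Rack12/U07",
        cpu_count=2, memory_gb=128, storage_gb=2000, firmware_version="2.4.1",
        mac_addresses=["00:1a:2b:3c:4d:5e"], ip_addresses=["10.0.12.7"],
        vlans=[120], os_or_hypervisor="VMware ESXi",
        virtual_servers=["vm-app01", "vm-db01"], applications=["nginx"],
    )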

2. Storage

The cost of storage has dropped dramatically over the years, which means the cost of storing each gigabyte of critical data is a small fraction of what it once was. The downside is that low cost has encouraged the proliferation of data, little of which is ever deleted.

Critical data storage information that must be documented includes the following (a short sketch follows the list):

  • Where are all of the storage units and what data is stored where? What makes and models are in service?
  • Is the data structured or unstructured?
  • Which applications will be affected if some storage units are shut down for maintenance?
  • How is data backed up? Is all data backed up to the level warranted by its criticality? What, if any, archiving plans are in place?
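
The backup question above lends itself to a simple automated check: compare each unit’s backup tier against the criticality of the data it holds and flag mismatches. A minimal sketch, assuming hypothetical numeric criticality levels and backup tiers:

    from dataclasses import dataclass

    # Hypothetical tiers: higher criticality warrants at least as high a backup tier.
    BACKUP_TIERS = {"none": 0, "weekly": 1, "daily": 2, "continuous": 3}

    @dataclass
    class StorageUnit:
        name: str
        make_model: str
        location: str
        data_criticality: int      # 0 (scratch) .. 3 (mission-critical)
        backup_policy: str         # one of BACKUP_TIERS
        dependent_apps: list

    def backup_gaps(units):
        """Return units whose backup tier falls below their data criticality."""
        return [u for u in units
                if BACKUP_TIERS[u.backup_policy] < u.data_criticality]

    units = [
        StorageUnit("san-01", "ExampleSAN 9000", "ATL-DC1", 3, "daily", ["erp"]),
        StorageUnit("nas-02", "ExampleNAS 200", "ATL-DC1", 1, "weekly", ["wiki"]),
    ]
    for u in backup_gaps(units):
        print(f"{u.name}: criticality {u.data_criticality} but backup '{u.backup_policy}'")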

3. Networks

Understanding your network infrastructure is a prerequisite for ensuring that vital connectivity is not disrupted. A complete inventory, covering makes, models, IP addresses, and installed hardware options, should include routers, switches, firewalls, proxy servers, and network appliances, as well as your WAN topology. A thorough understanding of the role each piece of equipment plays in your network architecture is also critical.
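
As a starting point for such an inventory, even a simple reachability sweep with reverse-DNS lookups can surface devices a spreadsheet missed. A minimal sketch using only the Python standard library and a Unix-style ping command; the subnet is a placeholder, and production discovery tools layer SNMP, ARP, and configuration queries on top of this:

    import ipaddress
    import socket
    import subprocess

    SUBNET = "10.0.12.0/28"   # placeholder; sweep a small range only

    def is_reachable(ip: str) -> bool:
        """One ICMP echo with a 1-second timeout (assumes a Unix-like 'ping')."""
        result = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                                capture_output=True)
        return result.returncode == 0

    def reverse_dns(ip: str) -> str:
        try:
            return socket.gethostbyaddr(ip)[0]
        except OSError:
            return "(no PTR record)"

    for host in ipaddress.ip_network(SUBNET).hosts():
        ip = str(host)
        if is_reachable(ip):
            print(f"{ip:15} {reverse_dns(ip)}")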

Be aware that firewalls, load balancers, proxy servers, VPN terminations, and caching appliances are often categorized together with other network components, which makes them easy to overlook.

Physical networking infrastructure also often supports multiple virtual networks that are kept isolated from one another, often for security reasons. And networks may not stop at your organization’s walls: you should also account for connections to credit card processors and supply chain partners over private networks or VPNs, which may involve third-party connectivity.

4. Applications, Services and Middleware

The discovery phase must catalog applications, services, and middleware, documenting both their functionality and the applications that depend on them. Failure to recognize and accommodate the following components when reorganizing a data center may cause critical applications to fail (a dependency-mapping sketch follows the list):

  • Applications that share information and interact in real time, supported by business, core, and middleware services.
  • Business services, such as a service used to check in an airline passenger, may draw data from several databases and access the functionality of a variety of applications.
  • Core services support business applications, technology infrastructure and/or middleware. Examples include domain name services (DNS), directory services, authentication services, and FTP services. From an end-user perspective, these services operate transparently—as long as they function properly. If a domain name server fails, every application in your organization may be impacted.
  • Middleware services are essential elements that support business operations, including services for queuing and transaction management, transporting data, and coordinating data movement and transaction flows.
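
The DNS scenario above is straightforward to evaluate once dependencies are recorded as a graph: walk the “depends on” edges in reverse to find everything a failing component would take down. A minimal sketch; the component names and edges are illustrative:

    from collections import defaultdict

    # depends_on[x] = components that x requires to function (illustrative data)
    depends_on = {
        "checkin-app":   ["passenger-db", "message-queue", "dns"],
        "booking-app":   ["booking-db", "dns", "auth-service"],
        "auth-service":  ["directory-service", "dns"],
        "message-queue": ["dns"],
    }

    # Invert the edges: required_by[y] = components that directly depend on y
    required_by = defaultdict(set)
    for component, deps in depends_on.items():
        for dep in deps:
            required_by[dep].add(component)

    def impact(failed: str) -> set:
        """Everything directly or transitively affected if 'failed' goes down."""
        affected, stack = set(), [failed]
        while stack:
            for dependent in required_by[stack.pop()]:
                if dependent not in affected:
                    affected.add(dependent)
                    stack.append(dependent)
        return affected

    print(sorted(impact("dns")))
    # -> ['auth-service', 'booking-app', 'checkin-app', 'message-queue']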

5. Contracts

You must track the documentation associated with hardware and software IT assets, including purchase contracts, warranty protection, and maintenance and service agreements. For example, service level agreements (SLAs) often guarantee the performance, reliability, and availability of an IT asset or service.

Remember that a data center transformation might affect a contract. For instance, moving hardware to a new geographic location to centralize operations might impact service agreements, and virtualizing multiple servers onto one larger physical host may affect software license agreements.
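
Tracking these agreements alongside the asset inventory can be as simple as linking contract records to asset IDs and querying them before a move. A minimal sketch; the record fields, dates, and review rule are illustrative assumptions:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Contract:
        contract_id: str
        vendor: str
        kind: str                 # "warranty", "maintenance", "SLA", "license"
        covered_assets: list      # asset IDs from the inventory
        expires: date
        location_bound: bool      # True if terms are tied to a specific site

    contracts = [
        Contract("C-101", "ExampleVendor", "SLA", ["web01"], date(2026, 6, 30), True),
        Contract("C-102", "OtherVendor", "license", ["vm-db01"], date(2027, 1, 15), False),
    ]

    def review_before_move(contracts, assets_moving, move_date):
        """Contracts needing review: they cover a moving asset and are
        site-bound or expire before the move completes."""
        return [c for c in contracts
                if set(c.covered_assets) & assets_moving
                and (c.location_bound or c.expires < move_date)]

    for c in review_before_move(contracts, {"web01"}, date(2026, 9, 1)):
        print(c.contract_id, c.vendor, c.kind)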

Getting the Discovery Job Done

Manual discovery of all IT assets is a time-consuming, cumbersome, error-prone job. And by the time it is completed, assets may have been changed, added or removed. A tool that automates many discovery processes is necessary to make the task both cost-effective and comprehensive.

Many existing tools are good at discovering the technologies in your data center, but may be less efficient, or possibly ineffectual, when it comes to matching those assets with the people and business processes they serve. To achieve this objective, a tool must automate as much of the discovery as possible, while also allowing for the input, mapping, and organization of information gathered from those with “tribal knowledge” of how things really work.

Tools that scan for network and security purposes are common, but typically don’t show the big picture. The data these tools collect must be correlated to obtain a comprehensive view of the enterprise-wide infrastructure, including the interrelationships among components and their users. And because these tools are frequently deployed at the departmental level, the discovery team may have trouble gaining access to them, or may not know they exist at all. Unfortunately, turf wars are not uncommon in data center transformations.
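
Correlating the output of several point tools usually comes down to joining their records on a shared key such as a MAC or IP address. A minimal sketch that merges two hypothetical tool exports into a single view:

    # Hypothetical exports from two departmental tools, both keyed on MAC address.
    network_scan = {
        "00:1a:2b:3c:4d:5e": {"ip": "10.0.12.7", "open_ports": [22, 443]},
    }
    agent_inventory = {
        "00:1a:2b:3c:4d:5e": {"hostname": "web01", "os": "Ubuntu 22.04",
                              "owner": "ecommerce team"},
    }

    def correlate(*sources):
        """Merge per-tool records into one record per MAC address."""
        merged = {}
        for source in sources:
            for mac, record in source.items():
                merged.setdefault(mac, {"mac": mac}).update(record)
        return merged

    for record in correlate(network_scan, agent_inventory).values():
        print(record)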

Tools can also miss dependencies. Scans may occur at the wrong time to catch an occasional communication flow, so it is important to let dependency-mapping scans run over long periods or at varying intervals. And since an enterprise data center does not remain static during or after a discovery project, any tool should refresh its data on an ongoing basis. Likewise, the actual transformation doesn’t take place overnight, so it is critical that operations and support teams are aware of changes as they occur. The initial discovery of assets and relationships should dovetail into a thorough change management program.
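
One way to catch those occasional communication flows is to sample active connections repeatedly over a long window and accumulate every peer ever observed. A minimal sketch using the third-party psutil library (pip install psutil); the interval and sample count are placeholders, and listing all connections may require elevated privileges on some systems:

    import time
    import psutil   # third-party: pip install psutil

    observed = set()        # (local_port, remote_ip, remote_port) pairs ever seen

    SAMPLE_INTERVAL = 60    # seconds between samples (placeholder)
    SAMPLES = 10            # run far longer in practice

    for _ in range(SAMPLES):
        for conn in psutil.net_connections(kind="tcp"):
            if conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
                observed.add((conn.laddr.port, conn.raddr.ip, conn.raddr.port))
        time.sleep(SAMPLE_INTERVAL)

    for local_port, remote_ip, remote_port in sorted(observed):
        print(f"local:{local_port} -> {remote_ip}:{remote_port}")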

Without a highly automated tool, a data center discovery process requires particular expertise and can consume considerable time and resources. This often creates the need for external consultants to perform the work, which can be expensive. And as with any manual, point-in-time discovery, the information is static while the data center remains dynamic.

At the end of the day, IT organizations bear the responsibility of responding swiftly to the organization’s goals and delivering the best return on investment in the process. A comprehensive, automated discovery tool that keeps the IT asset inventory, configuration management database (CMDB), and relationship/dependency map up to date should be one of the lasting deliverables of any data center discovery.

About Virima Technologies

Virima is an innovator of cloud-based and on-premises IT asset and service management (ITAM and ITSM) solutions. Our mission is simple: automate IT operations functions for improved service, security, risk, and compliance management.

Our product, Virima®, automatically discovers IT assets, configurations, relationships, and dependencies, providing an easy linkage to business processes through dynamic visualizations. It also includes project, release, and risk management, as well as six PinkVERIFY™ certified ITIL® service management processes: service asset and configuration management (SACM), change, incident, problem, request, and knowledge management.

The result is an easy-to-use ITAM and ITSM platform that provides unparalleled oversight of the IT ecosystem for management, audit, compliance, and ITIL support functions, helping IT organizations become better aligned and more responsive to dynamic business requirements.

Virima, headquartered in Atlanta, GA, also operates a research & development center in Bangalore, India. Clients in technology, healthcare, professional services, manufacturing, distribution, hospitality/entertainment, and education sectors use Virima.

PinkVERIFY™ is a registered trademark of Pink Elephant, Inc.  ITIL® is a registered trademark of AXELOS Limited.