Data center managers know that their job is much like scaling a mountain – advancement can be gradual, rest stops are often precarious, and indeed, in a continuously changing climate, every move is critical.
Today in the data center, network downtime comes at an increasing cost to an organization – so getting things right from the outset is essential. However, changes within the data center environment are so frequent and fast that data center managers often work reactively rather than adopting a proactive strategy.
In reality, planning for the future of the data center – to address elements such as the best migration path to higher speeds, infrastructure management and scalability, and increased virtualization support – is necessary for data centers to keep pace with the demand driven by massive digitalization, urbanization and the rise of the megacity.
So, what should data center managers prioritize, and where should they focus their efforts, in order to formulate a proactive strategy for the evolution of the data center? There are three key areas that, we believe, should be part of a successful strategy: migration to higher speeds, infrastructure management and scalability, and support for increased virtualization.
We’ve already explored strategic considerations for migration to higher speeds in a previous post. Here we outline key considerations for optimal infrastructure management.
The number and density of servers, switches and devices balloon as compute and storage demands inside the data center grow. The total server installed base in the U.S. is projected to increase by 40% from 2010 to 2020. Furthermore, the any-to-any connectivity offered by leaf-and-spine networks can quickly become complex, driving up costs in the form of longer mean time to repair, higher OpEx, and poorly executed moves, adds and changes. These factors make managing infrastructure within the data center a challenge.
To see success, data center managers should consider both the physical aspect of the cabling as well as the operational connectivity of the network.
The main decision concerning connector technology is whether to deploy parallel or serial optics. Parallel optics, and in particular the proven MPO connector, is often the most cost-effective way to support higher transmission speeds, a trend that will likely continue. The next decision is which MPO form factor to use: 8-fiber, 12-fiber or 24-fiber. All three have important applications, but the 12- and 24-fiber modules are more versatile when it comes to supporting high-speed fiber configurations.
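The trade-off between MPO sizes comes down to how many of a connector's fibers a given link actually lights up. As a back-of-the-envelope illustration (the figures below are standard parallel-optic fiber counts, not taken from this post): an SR4-style link uses 8 fibers (4 transmit plus 4 receive), so it fills an 8-fiber MPO completely but leaves 4 fibers dark in a 12-fiber MPO, while 100GBASE-SR10's 20 fibers fit only in a 24-fiber MPO.

```python
# Back-of-the-envelope fiber utilization for parallel-optic links over MPO trunks.
# Assumed example figures: SR4 links use 8 fibers (4 Tx + 4 Rx); SR10 uses 20.

def utilization(fibers_used: int, mpo_size: int) -> float:
    """Fraction of an MPO connector's fibers carrying traffic."""
    if fibers_used > mpo_size:
        raise ValueError("link does not fit in a single MPO of this size")
    return fibers_used / mpo_size

# One SR4 link (8 fibers) per connector:
print(f"SR4 over  8-fiber MPO: {utilization(8, 8):.0%}")    # 100%
print(f"SR4 over 12-fiber MPO: {utilization(8, 12):.0%}")   # 67%
# 100GBASE-SR10 (20 fibers) needs a 24-fiber MPO:
print(f"SR10 over 24-fiber MPO: {utilization(20, 24):.0%}") # 83%
```

This is one reason the 12- and 24-fiber modules are considered more versatile: the same trunk can be re-carved into different lane groupings as link speeds change, at the cost of some dark fiber.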
As the port count has increased from a few hundred to several thousand, the cabling network has grown into a sprawling mass. This makes moves, additions and changes more frequent and complex, increasing the mean time to repair and making mistakes more likely. For decades, network managers relied on standard rack panels to manage their fiber terminations and connectivity, but these systems weren't designed to support today's high-density optical networks.
Today, data centers and central office facilities are using more optical distribution frames (ODFs). An ODF is typically used as the connection point between the plant network and the physical layer infrastructure inside the facility. ODF solutions tend to integrate fiber splicing, fiber termination, fiber-optic adapters and connectors, and cable connections in a single unit. ODF vendors continue to improve their system designs to accommodate growing fiber counts as fiber density increases, with 144-port/rack unit systems now commonplace. CommScope’s NG4access ODF offers alternating front-and-back port design and can accommodate 288 SC and 576 LC connections, providing the density required in today’s data centers.
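Those density figures translate directly into rack space. A quick capacity-planning sketch (a hypothetical helper, using the 144-ports-per-rack-unit figure mentioned above as the default) shows how many rack units a given fiber count consumes:

```python
import math

# Hypothetical capacity-planning helper: rack units needed to terminate
# a given fiber count at a given per-RU port density (144 ports/RU is the
# commonplace figure cited above; adjust for your hardware).

def rack_units_needed(fiber_count: int, ports_per_ru: int = 144) -> int:
    """Whole rack units required to terminate fiber_count ports."""
    return math.ceil(fiber_count / ports_per_ru)

print(rack_units_needed(576))   # 4 RU for 576 LC terminations at 144/RU
print(rack_units_needed(3000))  # 21 RU for a 3,000-fiber build-out
```

At these densities, a several-thousand-fiber build-out still fits in a handful of frames, which is what makes modern ODFs practical where standard rack panels were not.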
Cable routing systems are also evolving and are being used to guide fiber patch cords and multifiber assemblies between fiber splice enclosures, fiber distribution frames, and fiber-optic terminal devices. Overhead tracks and troughs, once primarily containment systems, have become more important as data center managers rely on routing systems to help organize and protect thousands of individual cables as the maze of cabling multiplies. Systems such as CommScope's FiberGuide optical raceway system employ highly engineered bend management to maximize the optical performance of the fiber. Indeed, dwindling space in the data center is forcing IT staff to be more creative when it comes to routing, and the available configurations for these cable management systems are quickly increasing.
The increased complexity in the physical layer infrastructure is prompting data centers to deploy automated infrastructure management (AIM) solutions. The information within the AIM platform's databases provides crucial insight into the status of each connection in the data center, enabling the data center manager to trace the entire circuit. An AIM solution enables network administrators to streamline the provisioning and monitoring of network connectivity. Administrators also gain an accurate view of what is connected, and where, in the network; reduce downtime through real-time notification of unplanned changes; and produce up-to-date reports on the state of the infrastructure.
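Conceptually, the circuit tracing an AIM database performs can be thought of as walking a map of patched connections from one end of a link to the other. The sketch below is purely illustrative (not the API of any real AIM product, and the port names are invented), assuming each port is patched to at most one other port:

```python
# Illustrative sketch, not a real AIM product's API: model patched
# connections as a port-to-port map and trace a circuit end to end,
# the way an AIM database lets a manager trace the entire circuit.

def trace_circuit(connections: dict[str, str], start: str) -> list[str]:
    """Follow patch connections from a starting port to the far end."""
    path, port = [start], start
    seen = {start}  # guard against accidental loops in the patch data
    while port in connections and connections[port] not in seen:
        port = connections[port]
        seen.add(port)
        path.append(port)
    return path

# Hypothetical example: a server reaching a leaf switch via two panels.
links = {
    "server-42:eth0": "panel-A:port-07",
    "panel-A:port-07": "panel-B:port-19",
    "panel-B:port-19": "leaf-3:port-48",
}
print(trace_circuit(links, "server-42:eth0"))
# ['server-42:eth0', 'panel-A:port-07', 'panel-B:port-19', 'leaf-3:port-48']
```

A real AIM platform layers sensing hardware and change notification on top of this kind of connectivity record, which is what makes the real-time alerts and reports described above possible.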
A full-featured AIM solution such as CommScope's imVision can help ensure that capacity is available when upgrading from duplex to parallel optics, as well as help identify surplus cabling and switch ports available for parallel-to-duplex migration.
To successfully manage an increasingly complex infrastructure, data center managers must look ahead and be proactive – they must plan for the physical cabling as well as the operational connectivity of the network. Considering connector technologies, cable management and routing, and connectivity management solutions can help ensure that the network infrastructure in the data center is able to meet future demand. In such a fast-moving space, it is crucial to be flexible. Those who can pivot quickly to take advantage of new technologies and market opportunities are more likely to see success. This ability to meet change is created from the inside out, starting with the physical layer infrastructure in the data center.
You can learn more about considerations for the modern data center here.