Issues to Consider Ahead of Data Center High-Speed Migration

25 July 2018 | Reading Time: 4 minutes

Futureproofing the Data Center With High-Speed Migration

There are multiple challenges and issues that data center professionals must address today. To maintain success now and in the future, data centers must be prepared to deliver high speeds, support higher server densities, and meet evolving standards.

For data center managers, this means planning ahead, preparing for both short- and long-term change, and charting the most flexible and efficient course for migration to higher speeds.

Of course, this also means adapting to evolving standards, as application standards organizations continue to update their guidelines to keep pace with rapid increases in bandwidth. These standards groups facilitate the evolution to ever-increasing line rates and encourage the development of higher-speed applications that improve the cost-effectiveness of links between data center equipment.

So, as technologies and standards constantly evolve, it’s also important to consider futureproofing and scaling of the physical network infrastructure within the data center.

Issues to Consider Ahead of a Data Center High-Speed Migration

The conversation surrounding migration to higher line rates is complex and continually evolving. Data center managers must make decisions regarding fiber type, modulation, transmission schemes, connector configurations and cost. The best migration path for a given environment depends on a wide range of factors. We’ve previously outlined four factors to consider (40G or 25G, modulation schemes, transceiver technology, and serial or parallel transmission). Below are additional considerations that should be assessed to achieve an effective migration.

Preterminated or field terminated cables?

Requirements to turn up network services quickly have increased the value of, and demand for, preterminated cabling. According to some estimates, the plug-and-play capability of preterminated cabling yields roughly 90% time savings during installation compared to a field-terminated system, and is about 50% faster when it comes to network maintenance. The more fiber connections a network contains, the greater this value becomes. Indeed, factory-terminated systems are the only viable way to achieve the extremely low-loss links required to support high-speed applications.

MPO-based connectivity is the system of choice among preterminated solutions for both singlemode and multimode links, thanks to its high performance, ease of use, deployment speed and cabling density.

Singlemode or multimode?

This is one of the most complex decisions data center managers face: when and where should they deploy singlemode or multimode links?

Pluggable singlemode optics are becoming more affordable, enabling 100G Ethernet to capture a large share of the data center switch port market in hyperscale and enterprise facilities. Even so, it’s important to look beyond transceiver cost alone.

Consideration should be given to the total channel cost, the anticipated growth of the data center and its migration road map. Here are some key issues to address when deciding whether to opt for singlemode or multimode:

  • Link distances: Data centers generally require a large number of network links over relatively short distances, which makes lower-cost multimode attractive. However, multimode should only be used if it can continue to support the speeds the network will require as it evolves. Singlemode, by comparison, is most often used in data center entrance facilities, and its long-distance capability makes it the only choice for links between data centers and metro/wide-area networks.
  • Network topology: The size of the data center and the placement of network equipment (centralized or distributed throughout the data center) determine the number of network links and the distance that the network links must support.
  • Total channel cost: Comparing link costs between network types means assessing the cost of the entire link: transceivers, trunks and patch cords. Costing models can help compare the relative cost of different link types, and the comparison is more accurate where the average channel length is known. It’s also important to remember that although the cost of any link is length-dependent, some links carry a higher cost due to increased fiber count, and this difference needs to be taken into account.
  • Link Speeds: Every data center facility requires its own migration roadmap, based on the anticipated IT needs of the organization and the infrastructure evolution needed to support it. The transmission medium must support the maximum link speed for current and future applications.
  • Channel OpEx: Operational costs should include an evaluation of the personnel, process and vendor relations necessary to support the transmission medium under consideration. To avoid greater risks and costs, it’s crucial to establish the resources and capacity necessary to launch a new transmission medium.
  • Infrastructure life cycle: In order to avoid a costly rip-and-replace scenario, the data center infrastructure should be able to support multiple generations of equipment and technology.
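The total-channel-cost comparison described above can be sketched as a simple costing model. The sketch below is illustrative only: the link types, component prices and trunk rates are hypothetical placeholders, not vendor or market figures.

```python
# Minimal costing-model sketch for comparing total channel cost of link types.
# All prices are hypothetical placeholders in arbitrary cost units.

def channel_cost(transceiver, trunk_per_m, patch_cord, length_m, fiber_count=2):
    """Total link cost: two transceivers, a trunk whose cost scales with both
    length and fiber count, and a patch cord at each end."""
    return 2 * transceiver + trunk_per_m * fiber_count * length_m + 2 * patch_cord

# Hypothetical link types (assumed costs, for illustration only).
links = {
    "100G multimode (parallel, 8 fibers)": dict(transceiver=100, trunk_per_m=1.0,
                                                patch_cord=20, fiber_count=8),
    "100G singlemode (duplex, 2 fibers)":  dict(transceiver=300, trunk_per_m=0.5,
                                                patch_cord=15, fiber_count=2),
}

for length in (30, 100, 300):  # average channel lengths in meters
    print(f"--- {length} m channel ---")
    for name, params in links.items():
        print(f"  {name}: {channel_cost(length_m=length, **params):.0f}")
```

Running a model like this with real quotes makes the crossover point visible: short, high-fiber-count parallel links favor cheaper multimode optics, while longer channels shift the balance toward duplex singlemode despite its costlier transceivers.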

OM4 or OM5 (wideband)?

Which multimode fiber to deploy is another important decision for data center operators: OM3, OM4 or OM5? OM3 laser-optimized fiber and its successor, OM4, are both optimized for VCSEL transceivers operating at 850 nm and use identical connectors. However, OM4 offers lower attenuation and higher bandwidth than OM3, so it can support greater distances and increased throughput.

Thus, the decision is really between OM4 and OM5. CommScope introduced OM5 in 2015, and it was standardized under ANSI/TIA-492AAAE. OM5 enhances the ability of shortwave wavelength division multiplexing (SWDM) to span longer distances and enables data center operators to reduce parallel fiber counts by at least a factor of four: to support 40 Gbps and 100 Gbps links, two OM5 fibers can do the job of eight OM4 fibers. In addition, OM5 supports all legacy multimode applications and is backward-compatible with OM3 and OM4 fiber.
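The factor-of-four reduction can be checked with simple arithmetic. The sketch below assumes 25G per wavelength and four wavelengths per fiber for SWDM4, matching the article's 100G example; it is a back-of-the-envelope check, not a survey of every Ethernet variant.

```python
import math

# Fiber-count arithmetic for the OM5/SWDM example in the text.
# Assumes 25G electrical lanes; SWDM4 carries 4 wavelengths per fiber.

def fibers_needed(link_gbps, lanes_per_fiber_pair, lane_gbps=25):
    """Fibers for a duplex link: each Tx/Rx fiber pair carries
    lanes_per_fiber_pair wavelengths at lane_gbps each."""
    lanes = math.ceil(link_gbps / lane_gbps)
    pairs = math.ceil(lanes / lanes_per_fiber_pair)
    return 2 * pairs  # one fiber per direction per pair

om4_parallel = fibers_needed(100, lanes_per_fiber_pair=1)  # parallel, 1 lane/fiber
om5_swdm = fibers_needed(100, lanes_per_fiber_pair=4)      # SWDM4, 4 lanes/fiber
print(om4_parallel, om5_swdm)  # prints: 8 2
```

With four lanes multiplexed onto each fiber, a 100G channel drops from eight parallel fibers to a single duplex pair, which is the factor-of-four saving cited above.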

Do I need an AIM System?

Automated infrastructure management (AIM) systems can be an important asset during the migration process, as they provide an accurate mapping of the physical layer and all connected devices. AIM systems monitor and document all ports and fibers in use, helping ensure capacity is available when upgrading from duplex to parallel links. AIM can also identify surplus cabling and switch ports and make them available for parallel-to-duplex migration. These capabilities are highlighted in ISO/IEC 18598 and the European standard EN 50667 for AIM, both ratified in 2016.

Preparing Data Centers for the Future

When considering solutions to help you meet increasing demands for speed, consider how they affect the velocity of change and scaling requirements in the data center, as well as the total cost of ownership of the migration scenarios under consideration. We believe that suitably selected ultra-low-loss (ULL) singlemode and multimode fiber trunks and cabling will greatly enhance support for high-speed applications while maintaining the flexibility to support ANSI/TIA-942-B structured cabling designs. Remember that you can draw on various knowledge resources, including CommScope’s, to help you make the right futureproofing decisions.

You can learn more about high-speed migration considerations in the data center here.
