Datacenter Planning: Installation, Maintenance and Expansion

All datacenters, from the smallest corporate facility to the largest hyperscale datacenters run by cloud service providers, have many components in common, and all of those components need to communicate. That means lots of connections, and plenty of opportunities to either excel or fail when it comes to speed, efficiency, reliability, and security, regardless of size.

Let’s dig into each area of opportunity, and the importance of picking the right interconnections for every aspect of your datacenter. As you can imagine, there are a lot of options out there, and choosing the right one isn’t always straightforward!

Engineers design datacenters around some very stringent criteria. Speed is among the most important, since it determines how quickly data can be transmitted; as speed increases, latency drops, and latency determines how well the datacenter performs its role of serving data on request. Internet services are in a competition for the user’s attention. As long as the user gets a response in under 100ms (that’s 1/10th of a second), the interaction feels instantaneous. But that 100ms is the budget for the complete circuit of the process: from user action, to server reaction, to content modification, to user presentation. A 1GigE interface adds a latency of 12µs, 10GigE adds 1.2µs, and 25GigE adds 0.48µs. While the difference between 1.2µs and 0.48µs doesn’t seem like much, interface latency adds up for every single interface a packet crosses, so there’s real advantage to getting those numbers down wherever possible.
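Those per-interface figures line up with the time it takes to serialize a full-size 1500-byte Ethernet frame at each line rate, and they compound across every hop in the path. Here’s a minimal back-of-the-envelope sketch of that arithmetic; the hop count is an assumption chosen purely for illustration.

```python
# Serialization delay of a full-size 1500-byte frame at each line rate,
# accumulated over an assumed number of interfaces in the request/response path.

FRAME_BITS = 1500 * 8  # standard MTU frame; preamble/FCS overhead ignored

link_speeds_bps = {
    "1GigE": 1e9,
    "10GigE": 10e9,
    "25GigE": 25e9,
}

HOPS = 10  # assumed number of interfaces a packet crosses end to end

for name, bps in link_speeds_bps.items():
    per_hop_us = FRAME_BITS / bps * 1e6      # 12 µs, 1.2 µs, 0.48 µs
    total_us = per_hop_us * HOPS
    print(f"{name}: {per_hop_us:.2f} µs per interface, "
          f"{total_us:.1f} µs over {HOPS} interfaces")
```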

Datacenters, especially at hyperscale, consume a great deal of power in pursuit of network performance. The power consumed by servers, network elements, and interconnections generates waste heat, and that heat requires cooling. Datacenter designers assume that removing the generated heat takes at least as much power as the equipment itself, so total power consumption can be 2-3x what the datacenter electronics alone require.
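That ratio is usually expressed as Power Usage Effectiveness (PUE): total facility power divided by IT equipment power. Here is a quick sketch of the math; the IT load and the PUE of 2.0 are illustrative assumptions, not measurements.

```python
# Power-budget sketch using Power Usage Effectiveness (PUE).
# PUE = total facility power / IT equipment power.
# Both figures below are assumed values for illustration only.

it_load_kw = 500   # servers, switches, storage, and transceivers (assumed)
pue = 2.0          # cooling and infrastructure roughly matching the IT load (assumed)

total_facility_kw = it_load_kw * pue
overhead_kw = total_facility_kw - it_load_kw  # cooling, power conversion, lighting

print(f"IT load:        {it_load_kw} kW")
print(f"Facility total: {total_facility_kw:.0f} kW (PUE = {pue})")
print(f"Overhead:       {overhead_kw:.0f} kW for cooling and supporting infrastructure")
```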

While higher speeds generally require more power, careful selection of components can actually reduce power consumption. For any given speed and link length, there is an optimum type of link – fiber or copper – that will provide the best tradeoffs.

Reliability involves careful design of the operational environment for the racks of equipment to maximize uptime and minimize maintenance. It also involves redundancy. Equipment can work around electronic failures when interconnections are redundant, creating a mesh-like network that is more failure-tolerant. In addition to increasing your cumulative bandwidth, networks with a healthy amount of parallel and alternate connection paths are less disrupted by unforeseen failures of equipment in service.
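The payoff from parallel paths is easy to quantify under a simple, admittedly idealized model: if link failures are independent, a path only goes down when every parallel link fails at once. The per-link availability below is an assumed figure for illustration.

```python
# Availability of a path made of N parallel links, assuming equal, independent
# per-link availability. Real networks share failure domains (power, line cards,
# fiber trays), so treat these numbers as an optimistic upper bound.

LINK_AVAILABILITY = 0.999   # assumed: each link is up 99.9% of the time

for n_links in (1, 2, 3):
    path_availability = 1 - (1 - LINK_AVAILABILITY) ** n_links
    downtime_min_per_year = (1 - path_availability) * 365 * 24 * 60
    print(f"{n_links} parallel link(s): availability {path_availability:.9f}, "
          f"~{downtime_min_per_year:.3f} min of downtime per year")
```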

It’s more than the network that demands high availability; the operational infrastructure does too. Engineers run power and environmental facilities with contingencies in mind, which is why it’s rare to see professional networking equipment operating with a single power supply. Modern datacenters use a mixture of AC and DC power, battery backups, and generators to ensure reliable operation. HVAC has a preventative maintenance schedule and the ability to operate at reduced capacity without compromising network reliability.

Datacenters use physical interconnections and carefully developed access procedures to ensure that operations are kept safe and secure. This is true for all datacenters, large and small. Beyond a well-maintained operation, network monitoring systems control access and configuration and alert administrators to any conditions out of the ordinary. Intrusion detection and prevention systems (IDS/IPS) watch traffic through network taps and inline connections to halt unauthorized access in its tracks.

Then there is the need to plan for upgrades. While you don’t necessarily need to plan what your datacenter will look like in ten years, you should predict where your bottlenecks will arise and what you can do to eliminate them when they do. Many datacenters are on an upgrade cycle of 18-24 months: adding storage, upgrading to faster storage, and upgrading server speed, followed by the necessary upgrades to the network and its interconnections. Choosing solutions for today’s needs with a clear path for upgrades is usually the most cost-effective approach.

We’ve discussed the importance of picking the right interconnections for every aspect of your datacenter; now let’s look at the options themselves.

Conductive and fiber optic connections each have their advantages and disadvantages. Twisted pair connections can be custom-made in house without expensive tools, but they are limited in the bandwidth they can carry and the distance they can cover.

Multimode fiber is compatible with short-reach optical transceivers, and its supported distances and bandwidths have made it a historically popular datacenter technology. OM2, OM3, OM4, and OM5 all differ in maximum bandwidth and reach. OM5 is the new kid on the block and is intended to carry multiple wavelengths over a single duplex run, but because of the accessibility of singlemode fiber, OM5’s adoption has been lukewarm.
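For a feel of how the grades differ, here is a small sketch with commonly quoted short-reach figures; these are approximate values rather than guarantees, and the supported distance for any real link depends on the specific optics and fiber used.

```python
# Rough, commonly quoted multimode reach figures (approximate; always check the
# optic's datasheet and the cable's specifications before planning real links).

typical_reach_m = {
    # fiber grade: {short-reach optic: approximate reach in meters}
    "OM3": {"10G-SR": 300, "40G/100G-SR4": 100},
    "OM4": {"10G-SR": 400, "40G/100G-SR4": 150},
}

for grade, reaches in typical_reach_m.items():
    for optic, meters in reaches.items():
        print(f"{grade} with {optic}: roughly {meters} m")
```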

Singlemode fiber is increasingly becoming the infrastructure of modern datacenters wherever detached optical transceivers are used. It offers a great deal of flexibility with the long spans that are possible, and compatibility with passive WDM filters provides options for making better use of fiber where additional runs come at a cost, such as between floors or from facility to facility.

But there’s an in-between option as well! Direct Attach Copper (DAC) cables and Active Optical Cables (AOC) are essentially two transceivers with the connecting media permanently attached. The sacrifice is that these connections are ordered to length, so planning and slack management are a bit more involved. That being said, the benefits can be huge. Attached transceivers are available in lengths from 0.5 meter up to over 100 meters, generally cost less than two transceivers plus the interconnecting cable, and have excellent power utilization. They are also available in speeds ranging from 10Gb up to 400Gb and in most form factors and combinations.

Note: These numbers vary a bit depending on media manufacturing methods.

It’s these interconnections we want to focus on – that’s our business. There are quite a few options to choose from, each with tradeoffs dictated by the design and use of the datacenter. Fortunately, servers, switches, and storage are modular: most data ports use standardized component interfaces that accept pluggable transceivers compatible with a variety of copper and fiber optic cabling options.
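To make those tradeoffs concrete, here is a toy decision helper that picks a link type from length and speed. The function name, thresholds, and categories are illustrative assumptions, not a specification; real selections depend on switch and NIC form factors, optical budgets, and pathway costs.

```python
# A toy decision helper for picking an interconnect type from link length and speed.
# All thresholds below are assumed for illustration, not taken from any standard.

def suggest_interconnect(length_m: float, speed_gbps: int) -> str:
    """Return a rough interconnect suggestion for one link (illustrative only)."""
    if speed_gbps <= 10 and length_m <= 30:
        return "Twisted pair (e.g. Cat6a) – field-terminated, inexpensive"
    if length_m <= 5:
        return "DAC – lowest cost and power within a rack"
    if length_m <= 100:
        return "AOC or multimode fiber with short-reach optics"
    return "Singlemode fiber with detached transceivers (passive WDM an option on long runs)"

# Example: a 3 m top-of-rack link and a 500 m inter-building run, both at 100 Gb/s
print(suggest_interconnect(3, 100))    # suggests DAC
print(suggest_interconnect(500, 100))  # suggests singlemode fiber
```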

We know, the choices can be mind-boggling! That’s why we have knowledgeable applications engineers available to help you optimize and future-proof the design of a new datacenter, or choose the best options to upgrade your current center most efficiently.