How to Calculate Data Center Cooling Needs


When it comes to running electrical equipment, especially the highly complex and powerful equipment found in data centers, heat generation is a normal side-effect that needs to be handled quickly and appropriately to prevent system disruptions and extensive damage to servers. Equipment may suddenly crash and have its lifespan shortened if temperatures are allowed to climb too high.

Humidity is another critical concern data centers need to watch out for. Humidity that is too low can cause electrostatic discharge, while humidity that is too high can lead to condensation on equipment and eventual corrosion. To prevent these critical issues, data centers need to employ powerful and effective cooling systems, such as those provided by the technical experts at C&C Technology.

Related: Data Center Cooling 101: From Start to Finish

Basic Temperature Guidelines to Follow

Temperature guidelines for effective data center operations have been established and periodically updated by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, also referred to as ASHRAE. According to its most recent recommendations, information technology (IT) equipment in data centers should be kept at a temperature between 64 and 81 degrees Fahrenheit (°F), or 18 and 27 degrees Celsius (°C), with a relative humidity (RH) of no more than approximately 60% and a dew point (DP) between -9°C and 15°C.

The document also provides more specific recommendations based on the requirements of different data center IT equipment classes, ranging from class A1 to A4, along with classes B and C. ASHRAE’s previous guidelines recommended narrower temperature ranges, but the organization has widened its suggestions in recent years as data centers have started prioritizing energy-saving techniques. That said, data centers that run a mix of newer and older equipment may run into a few problems, so they need to find humidity and temperature ranges that work for every piece of equipment.

Please refer to this document carefully to learn more about the specific recommendations provided by ASHRAE to ensure your data center’s equipment is operating in the most optimal conditions for effective performance.

How to Calculate The Total Cooling Requirements of Data Centers

Once a data center has determined the optimal temperature range for its equipment, it will need to carefully calculate the general heat output from its systems to determine the required cooling capacity. This process will require an overall estimate of the heat output from all IT equipment within the data center along with other potential heat sources, which will, in turn, indicate how much cooling power the data center should require. These estimates for cooling requirements can be determined by carefully following the six steps explored below.

Related: Data Centers: What They Do & Why [2021]

Step 1: Measuring The Heat Output of Individual Units

The first step of determining a data center’s cooling requirements for its equipment involves measuring the general heat output of said equipment. However, this process can be tricky, as heat (a form of energy) is often expressed using a range of measurements, including calories, tons per day, joules per second, and British thermal units (BTU) per hour. The fact that these measurements are often used together only serves to make the process more confusing. 

At present, there is a movement toward standardizing the measurement of heat output using watts, as BTUs and tons are slowly being phased out of use. That said, you may still have data that relies on other measurements. In that case, you’ll need to convert those figures to a common unit, such as the watt, using the following conversion factors (a short scripted version follows the list):

  • Tons to watts: multiply by 3,530
  • BTU per hour to watts: multiply by 0.293
  • Watts to tons: multiply by 0.000283
  • Watts to BTU per hour: multiply by 3.41
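If you’d rather script these conversions than apply them by hand, the short Python sketch below uses the same factors listed above. The function name and sample values are illustrative assumptions, not part of any particular tool.

```python
# Minimal sketch: convert common heat-output units to watts
# using the conversion factors listed above.

def to_watts(value, unit):
    factors = {
        "watts": 1.0,
        "tons": 3530.0,          # tons of refrigeration -> watts
        "btu_per_hour": 0.293,   # BTU per hour -> watts
    }
    return value * factors[unit]

# Example: a 10-ton cooling unit and a server rated at 1,200 BTU/hr
print(to_watts(10, "tons"))            # 35300.0 W
print(to_watts(1200, "btu_per_hour"))  # about 351.6 W
```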

Because virtually all of the power a piece of IT equipment draws from the AC mains is converted into heat, and the power sent out through data lines is low enough to be negligible, the thermal output of a piece of equipment in watts should be equal to the unit’s overall power consumption. One exception to this rule involves voice-over internet protocol (VoIP) routers, which tend to produce heat outputs lower than their power consumption. The difference shouldn’t be enough to make a significant impact on your calculations, but you can include it as a factor if you want the most precise outcome possible.

Step 2: Measuring the Heat Output of Complete Systems

To calculate the total heat output of a complete system, determine the heat output of each piece of equipment working as part of that system and add them all together. In data centers, this includes the heat output from IT equipment, air conditioning units, power distribution systems, uninterruptible power supply (UPS) systems, and others. The heat output of lighting and people will also need to be determined, though common estimated values can be used.

The overall heat output of power distribution systems and UPS systems consists of a fixed loss plus a loss proportional to the system’s operating power. These losses should be relatively consistent across different equipment brands and models. To estimate them, you’ll also need the data center’s floor area in square feet and the rated power of the electrical system. While air conditioning units and fans create large amounts of heat, that heat is exhausted outdoors and doesn’t need to be added to the thermal load inside the data center.

It’s important to understand that you can rely on quick and easy estimates for this data to determine cooling requirements for the data center and its various server rooms, which is a major advantage. However, you can also take the time to calculate the total directly using the following calculations (a scripted version appears after the list):

  1. Add together the load power of all IT equipment, which should be equal to the heat power of said equipment.
  2. Use the formula (0.04 x Power system rating) + (0.05 x Total IT load power) to determine the heat output of UPS systems with batteries. If your data center uses redundant UPS systems, don’t include their capacity.
  3. Use the formula: 2.0 x floor area in square feet or 21.53 x floor area in square meters to calculate the heat output of data center lighting.
  4. Multiply the maximum number of people present in the data center at a given time by 100 to calculate the heat they produce.
  5. Finally, add up the totals of each of the equations above to determine the total heat source output of the facility.
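To make the arithmetic concrete, here is a minimal Python sketch of the Step 2 estimate using the formulas above. All variable names and sample figures are illustrative assumptions; substitute your own facility’s numbers.

```python
# Illustrative Step 2 estimate using the formulas above.
# All sample figures are assumptions for demonstration only.

it_load_watts = 40_000     # total IT load power (equal to IT heat output)
ups_rating_watts = 60_000  # rated power of the (non-redundant) UPS system
floor_area_sqft = 1_000    # data center floor area
max_people = 4             # maximum number of people present at one time

it_heat = it_load_watts
ups_heat = 0.04 * ups_rating_watts + 0.05 * it_load_watts  # UPS with battery
lighting_heat = 2.0 * floor_area_sqft                      # or 21.53 x area in sq meters
people_heat = 100 * max_people

total_heat_watts = it_heat + ups_heat + lighting_heat + people_heat
print(total_heat_watts)  # 46800 W for these sample inputs
```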

Step 3: Other Essential Heat Sources to Note

Some small data centers may also need to account for heat that enters the building through windows or is conducted in from outside through walls and the roof. If your data center has a large number of windows or exposure to outside heating sources, get in touch with an HVAC consultant to help you determine the maximum possible thermal load of individual rooms or the facility as a whole. Add that amount to the total heat output calculated in Step 2.

Are you looking for a professionally trained service to provide your data center with a range of top-quality cooling solutions to keep your equipment running at peak efficiency? Reach out to the experts at C&C Technology today to learn more about what they can do for you.

Step 4: Determining The Impact of Humidification

As stated previously, data centers need a stable and consistent humidity level, as too much and too little humidity can each cause different types of damage to data center equipment. This can be tricky, because air conditioners typically remove a considerable amount of moisture as condensation, leaving the air less humid. Supplemental humidification is then needed to make up for this lost humidity. However, the equipment necessary to provide this additional humidification also adds to the heat load of the facility, meaning that the capacity of your cooling systems will need to increase. A balance between the two will need to be determined. This is especially true for larger data centers whose large AC systems mix return air from across the room; in that case, AC units may need to be oversized by as much as 30%.

Step 5: Additional Oversizing Requirements 

Cooling systems will also need extra capacity to account for potential load growth and equipment failures within the facility. Cooling equipment may fail, and AC units will also need to be taken offline periodically for cleaning and maintenance; the data center cannot afford to let the temperature climb too drastically when either happens. These needs can be planned for by adding redundant capacity to the data center’s cooling system. The basic rule is to have at least n+1 redundancy, so that the data center has one more unit than strictly needed to serve as a backup.
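As a rough sketch of how n+1 redundancy translates into unit counts, the Python snippet below divides an assumed cooling load by an assumed per-unit capacity and adds one spare. Both figures are hypothetical examples.

```python
import math

# Illustrative n+1 sizing: how many AC units to install for an
# assumed cooling load and per-unit capacity.

cooling_load_kw = 52.0   # total cooling requirement from the earlier steps (assumed)
unit_capacity_kw = 20.0  # nominal capacity of one AC unit (assumed)

units_needed = math.ceil(cooling_load_kw / unit_capacity_kw)  # n
units_installed = units_needed + 1                            # n + 1 redundancy
print(units_needed, units_installed)  # 3 needed, 4 installed for these inputs
```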

Additionally, extra capacity can be added to accommodate potential load growth in the future, as the amount of data that data centers handle is rapidly expanding, along with the overall demand for data storage. By oversizing the cooling capacity of your data center ahead of time, your center should be able to meet these increasing demands more quickly.

Step 6: How to Determine Effective Air Cooling Equipment Sizes

Once all of the cooling requirements for your data center have been carefully calculated through the processes noted above, you should be able to accurately determine the size of the air conditioning system (or systems) your data center will require to function effectively. Again, the factors you need to know include:

  • The heat output and cooling load of all your equipment
  • The total heat output of the data center’s lighting
  • The total heat output of data center personnel
  • (If necessary) the cooling load of the building as a whole
  • Any requirements for oversizing due to humidification balancing
  • Oversizing for the potential of redundancy
  • Oversizing for the potential of future growth

The sum of these figures should give you the required cooling capacity of your data center. However, please note that the required cooling capacity often works out to about 1.3 times the anticipated IT load, plus any redundant capacity calculated, depending on the overall size of your data center and its server rooms.
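The sketch below applies that rule of thumb to an assumed IT load and an assumed redundant unit; treat the numbers purely as placeholders for your own estimates.

```python
# Rule-of-thumb sizing: roughly 1.3 x the anticipated IT load,
# plus redundant capacity. Sample figures are assumptions.

it_load_kw = 40.0         # anticipated IT load
redundant_unit_kw = 20.0  # capacity of one redundant (n+1) AC unit

required_cooling_kw = 1.3 * it_load_kw + redundant_unit_kw
print(required_cooling_kw)  # 72.0 kW for these sample inputs
```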

Related: Data Center Infrastructure: What You Need to Know

How Much Does It Cost To Cool a Data Center?

The cost to cool your data center will vary depending on your location, energy costs, the size of your data center, and the number of racks. But to help you ballpark a loose estimate, the general cost to cool a small to medium size data center is around $1,250 to $2,080 per month. At those rates, a small data center of five to ten racks will cost roughly $15,000 to $25,000 per year to cool.

There are some tips and techniques to reduce the cost of cooling your data center that you should implement when designing or reorganizing your data center. At C&C, we specialize in optimizing your data center infrastructure to reduce your cooling costs. Contact us to learn more about our data center design services.

Simple Ways to Reduce Energy Waste in Your Data Center

Energy Star offers a few common-sense tips you should implement when designing your data center to cut energy waste:

  • Consolidate any lightly used servers to reduce unneeded hardware.
  • Utilize software and other technology to reduce the amount of data stored and to store the data more efficiently.
  • Activate any built-in server management features that can reduce power consumption.
  • Install smart, high-efficiency power distribution units (PDUs) to monitor and manage power usage.
  • Utilize grommets, diffusers, and blanking panels to manage airflow efficiency.
  • Orient your server racks to create hot and cold aisles.
  • Utilize curtain or Plexiglas containment systems to help keep the cool air from mixing with hot air.
  • Consider installing a water-side economizer like a cooling tower.
  • Install in-rack or in-row cooling systems to apply cold air directly to the servers.
  • Install energy-efficient humidification technologies like misters, foggers, and ultrasonic units.
  • Install an air-side economizer if your location allows it.
  • Install Data Center Infrastructure Management (DCIM) sensors and controls to manage cooling capacity and airflow.

How To Calculate the BTU of Other Factors in a Data Center

When calculating the cooling needs for your data center, you also need to calculate the BTU of other factors that affect a data center besides the equipment. Here are some simplified formulas to help you estimate the BTU for some of these more common factors:

  • Area of the Data Center BTU: Width (meters) x Length (meters) x 337
  • North Facing Window BTU: Window Width (meters) x Window Length (meters) x 165
  • South Facing Window BTU: Window Width (meters) x Window Length (meters) x 870
  • Factoring No Blinds on Windows BTU: Window BTU x 1.5
  • Personnel BTU: Number of People in Data Center x 400
  • Lighting BTU: Total Wattage of Lights x 4.25

Totaling These Additional Factors Together

To find the total kW needed to cool these additional factors, you’ll want to add all the factors together to get the overall BTU.

Total kW for Other Factors: Overall BTU / 3412

By factoring in these other elements in your data center, you’ll have a more accurate measurement for determining your cooling needs.
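Here is a brief Python sketch that runs the simplified formulas above for a hypothetical room and converts the total to kW. The dimensions, occupancy, and lighting wattage are made-up example values.

```python
# Illustrative estimate of the additional BTU factors above,
# converted to kW. All dimensions and counts are assumptions.

room_width_m, room_length_m = 10.0, 15.0
window_width_m, window_length_m = 1.5, 1.2   # one north-facing window
people = 3
lighting_watts = 800

area_btu = room_width_m * room_length_m * 337
north_window_btu = window_width_m * window_length_m * 165
personnel_btu = people * 400
lighting_btu = lighting_watts * 4.25

overall_btu = area_btu + north_window_btu + personnel_btu + lighting_btu
total_kw = overall_btu / 3412  # BTU per hour -> kW
print(round(total_kw, 2))      # about 16.25 kW for these sample inputs
```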

Need an efficient cooling system for your data center? Contact us to customize an optimal cooling solution.

Related Link: Environmental Monitoring Systems: Best Practices

Effective Cooling Methods for Data Centers to Consider

Once the cooling needs of your data center have been fully determined, you’ll then have to select what type of cooling method (or methods) you wish to employ. There is a range of methods for you to choose from, and you should understand the benefits, drawbacks, and general capabilities of each before selecting the best system for your particular data center. Some of the primary cooling technologies you can choose from include:

  • Rack-mounted cooling systems: AC units can be installed directly onto a rack to enable precise cooling capabilities.
  • Rack door heat exchangers: These cooling systems are affixed directly to server racks to capture the heat output from the servers and remove it before the air is released back into the data center.
  • Chillers: Chillers remove heat from a circulating liquid, which is then used to cool the air supplied to the servers and keep them effectively cool.
  • Cold aisle containment systems: These airflow management solutions contain the cold supply air by separating it from the hot exhaust air. This setup allows for a more precise level of temperature control in the data center, alongside enhanced cooling system efficiency.
  • Hot aisle containment systems: This containment solution involves capturing hot air from equipment exhaust and running it directly through an AC unit, preventing it from mixing with the general air supply and improving the performance of AC units.
  • In-row cooling: These systems are installed close to the equipment they’re cooling on either the floor or ceiling and can be easily scaled to remove high heat loads quickly.
  • Portable cooling: These units provide flexibility by delivering cooling to whichever areas need it at any given time.
  • Downflow cooling: These systems pull hot air through the top of the unit and release cool air from the bottom of the unit into the data center.
  • Blanking panels: These prevent the recirculation of hot air between system racks.
  • Directional or high-flow floor tiles: These tiles help direct air toward data center equipment to improve efficiency and increase cooling capacity.

Would your data center benefit from the installation of new, top-quality cooling systems to keep your equipment running as effectively as possible throughout the day? Consider reaching out to the experts at C&C Technology today to learn about everything they can do for you.

Last Updated on January 20, 2023 by Josh Mahan
