Data Center Temperature & Humidity Best Practices: A Complete Checklist

Temperature and humidity have a significant impact on your data center infrastructure. High temperatures can cause devices to overheat, while extremely low temperatures can cause mechanical and electrical failures. High humidity can lead to moisture build-up, corrosion, and short circuits, while low humidity can lead to electrostatic discharge. That’s why it’s critical that you monitor the environment in your cabinets and follow data center temperature and humidity best practices.

Data center temperature and humidity guidelines

Each piece of data center equipment—including enterprise servers, storage devices, switches, firewalls, and other appliances—has a recommended temperature and humidity range at which it operates most efficiently. However, you can’t create individual climates for each piece of gear, because it all needs to coexist in the same space. That’s why broader guidelines exist, such as those provided by ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers).

ASHRAE outlines four classes, based on temperature and humidity sensitivity, into which you can organize your data center equipment. Plus, arranging the layout of your data center to account for these classes can help you manage environmental conditions more efficiently (more on that later). The four equipment classes are:

A1: The most sensitive enterprise servers, legacy hardware, and specialty equipment that requires the strictest level of environmental control.

  • Temperature range: 15°C (59°F) to 32°C (89.6°F)
  • Relative humidity range: 20% to 80%

A2: Most modern servers, appliances, storage devices, and personal workstations fall into this class.

  • Temperature range: 10°C (50°F) to 35°C (95°F)
  • Relative humidity range: 20% to 80%

A3: Some newer equipment that’s designed to withstand a broader range of temperatures and humidity.

  • Temperature range: 5°C (41°F) to 40°C (104°F)
  • Relative humidity range: 8% to 85%

A4: Equipment that’s specifically made to operate in extreme environments.

  • Temperature range: 5°C (41°F) to 45°C (113°F)
  • Relative humidity range: 8% to 90%
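
To make these ranges easier to work with, here’s a minimal Python sketch that encodes the four classes as a lookup table and checks a reading against them. The names and structure are illustrative only, not an official ASHRAE tool:

```python
# A minimal sketch (not an official ASHRAE tool): the class ranges above
# expressed as a lookup table, with a helper that checks whether a reading
# falls inside a given class's allowable envelope.

ASHRAE_CLASSES = {
    #      (min °C, max °C, min %RH, max %RH)
    "A1": (15.0, 32.0, 20.0, 80.0),
    "A2": (10.0, 35.0, 20.0, 80.0),
    "A3": (5.0, 40.0, 8.0, 85.0),
    "A4": (5.0, 45.0, 8.0, 90.0),
}

def within_class(equipment_class: str, temp_c: float, rh_percent: float) -> bool:
    """Return True if a temperature/humidity reading fits the class's range."""
    t_min, t_max, rh_min, rh_max = ASHRAE_CLASSES[equipment_class]
    return t_min <= temp_c <= t_max and rh_min <= rh_percent <= rh_max

# A 38 °C reading is out of range for A2 gear but acceptable for A3:
print(within_class("A2", 38.0, 45.0))  # False
print(within_class("A3", 38.0, 45.0))  # True
```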

It’s important to note that even within these recommended ranges, each piece of equipment has a “sweet spot” where it performs best. The key is to balance that performance against the cooling costs and needs of the other devices in the same rack or cabinet.

Data center temperature and humidity best practices: a complete checklist

Now, let’s dig into the data center temperature and humidity best practices to help you achieve these standards.

1. Monitor rack conditions, not just room conditions

The location of your sensors matters a lot. Simply monitoring the ambient temperature and humidity in the room doesn’t give you an accurate picture of the conditions in each rack. Different spots within the room may report different readings depending on the location of the cooling system vents, how hot particular machines are running, and other factors.

Instead, data center temperature and humidity best practices recommend installing multiple sensors in each cabinet. You need to monitor the air that’s flowing into your equipment, so to get the most accurate readings you should place your sensors near the air intake vents (typically at the front of a rack-mount chassis).
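
To illustrate why per-rack readings matter, here’s a short, hypothetical Python sketch (the rack names, sensor values, and room figure are all invented) comparing the hottest intake reading in each rack against a single room-ambient number:

```python
# A hypothetical illustration: rack names, sensor readings, and the room
# figure below are invented. The point is that the hottest intake reading
# per rack, not the room average, is what your equipment actually ingests.

rack_intake_temps_c = {
    "rack-01": [24.1, 25.8, 27.5],  # top / middle / bottom intake sensors
    "rack-02": [22.0, 22.4, 23.1],
}
room_ambient_c = 22.5  # what a single room-level sensor might report

for rack, readings in rack_intake_temps_c.items():
    hottest = max(readings)
    print(f"{rack}: hottest intake {hottest:.1f} °C "
          f"({hottest - room_ambient_c:+.1f} °C vs. room ambient)")
```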

2. Calculate and optimize your power usage effectiveness (PUE)

Power usage effectiveness, or PUE, is a metric that data center infrastructure management (DCIM) engineers track to determine their data center’s energy efficiency. You calculate data center PUE by dividing the total power entering the facility by the power consumed by the IT equipment inside. The higher your PUE number, the less efficiently you’re using your power. According to Uptime Institute’s annual Global Data Center Survey, the average PUE was 1.57 in 2021. You want your PUE to be as close to 1 as possible, but at the bare minimum, you should strive to meet that average.
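
The math itself is simple division. Here’s an illustrative sketch; the function name and sample figures are hypothetical, chosen to land on the 2021 average cited above:

```python
# Illustrative sketch of the PUE formula; the sample figures are
# hypothetical, chosen to reproduce the 2021 industry average cited above.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal value: 1.0)."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,570 kW in total to run 1,000 kW of IT load:
print(pue(1570.0, 1000.0))  # 1.57
```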

Data center HVAC (heating, ventilation, and air conditioning) systems are notorious power hogs. You need to keep temperature and humidity within acceptable limits, but you also need to consider the power costs—both in terms of money and your data center carbon footprint. Keeping an eye on your PUE will help you determine that balance.

3. Design for more efficient cooling

To build on that last point: if your PUE is too high, you should look into more efficient cooling techniques. Data center cooling systems are known as CRACs (computer room air conditioners) or CRAHs (computer room air handlers). CRACs use refrigerants and compressors to cool the air, whereas CRAHs blow air across chilled water. Both systems require a lot of power, but there are ways to increase your cooling efficiency without increasing your energy consumption.

For example, you can strategically arrange your data center equipment to maximize cooling efficiency. In a smaller server room, you could place your highly sensitive, A1-class equipment closest to your cooling system. In large data centers and colocation facilities, the best practice is to create “hot and cold aisles.” That means arranging rows of cabinets back-to-back, so all the hot air venting out the back of your equipment flows to the exhaust vents in one concentrated stream, while the front intakes face a dedicated cold aisle fed by your cooling system.

You should always strive to stay within the temperature and humidity guidelines specified by device manufacturers, and your data center should follow the environmental standards outlined by ASHRAE. These data center temperature and humidity best practices for environmental monitoring, power usage tracking, and efficient cooling will help you meet those standards while saving money and optimizing performance.

4. Monitor the environment in your cabinets

Environmental monitoring sensors collect data on the conditions in your rack so you can ensure that the temperature and humidity are within recommended limits. Some best practices for data center environmental monitoring include:

  • If you’re managing remote data center infrastructure, you should implement remote out-of-band management, which provides a dedicated connection to your environmental sensors even during a network outage.
  • Temperature and humidity aren’t the only data center environmental risks. Your environmental monitoring solution should also include sensors for tampering, smoke, airflow, dust, and particulates.
  • You can’t keep your eyes on your monitoring logs 24/7, so you should set up automatic alerts that notify you if conditions exceed expected thresholds (a minimal sketch of such a check follows this list). To gain even more control, look for an environmental monitoring solution that includes web dashboards with visualizations so you can track conditions over time and spot opportunities for optimization.
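
Here’s that threshold-check sketch in Python. The notify() function is a stand-in for whatever email, SMS, or webhook channel your monitoring solution actually provides, and the example windows are placeholders, not a standard:

```python
# A minimal threshold-check sketch. The notify() function is a stand-in for
# whatever email, SMS, or webhook channel your monitoring solution provides,
# and the example windows below are placeholders, not a standard.

THRESHOLDS = {
    "temp_c": (18.0, 27.0),
    "rh_percent": (40.0, 60.0),
}

def notify(message: str) -> None:
    print(f"ALERT: {message}")  # replace with a real alerting channel

def check_reading(sensor_id: str, reading: dict) -> None:
    """Fire an alert for every metric outside its configured window."""
    for metric, (low, high) in THRESHOLDS.items():
        value = reading[metric]
        if not low <= value <= high:
            notify(f"{sensor_id}: {metric}={value} outside [{low}, {high}]")

check_reading("rack-01-intake-top", {"temp_c": 29.5, "rh_percent": 55.0})
```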

Achieve comprehensive data center temperature and humidity monitoring with Nodegrid

Nodegrid’s line of environmental monitoring sensors gives you a complete picture of the conditions in your rack so you can follow data center temperature and humidity best practices. With sensors for airflow, temperature, humidity, particulates, smoke, and proximity, you can keep a close eye on your physical equipment even from thousands of miles away.

ZPE Cloud provides a cloud-based web portal to monitor and manage your sensors, with analytics and visualizations to help you monitor power usage trends, detect temperature spikes, and more. Plus, when you connect your rack infrastructure to Nodegrid Serial Consoles, you get reliable, secure out-of-band access to your environmental sensors and other data center devices, even during a network outage.

Learn more about data center environmental monitoring

Learn more about Nodegrid’s data center solutions

Need more help achieving data center temperature and humidity best practices with Nodegrid?

Contact ZPE Systems online or call 1-844-4ZPE-SYS.
