What Is a Data Center?

Data centers are the backbone of modern IT infrastructure. Discover the key components that ensure their security and reliability.
By SentinelOne May 22, 2024

Data centers are the backbone of modern computing, housing the servers, storage, and networking equipment that power digital services. Our guide provides an in-depth look at the components, design, and management of data centers.

Learn about the different types of data centers, including on-premises, colocation, and cloud-based facilities, and their unique characteristics. Discover the importance of physical security, environmental controls, and redundancy in ensuring the reliability and availability of data center infrastructure.

Stay informed about the latest trends and best practices in data center management.

Types of Data Centers

Broadly speaking, there are three types of data centers: enterprise (on-premises), cloud, and managed/colocation data centers. Each has its own set of advantages and disadvantages, making it appropriate for different situations.

  • Enterprise (On-Premises) Data Center – This type of data center resides on the premises of the business itself, often run by the IT department. It is typically an arrangement of server computers and associated hardware, often mounted in racks. Smaller enterprises might refer to this as the “server room” or even “server closet.”
  • Colocation Facilities and Managed Data Centers – When an enterprise gets to the point where it no longer has space to house its own data equipment (and/or the HVAC capacity to handle its cooling needs) but still wants full control and ownership of that equipment, a colocation facility can be the answer. Colocation facilities rent out the space and infrastructure needed for another company’s servers. Managed data centers work similarly, but the data center operator owns both the computing hardware and the corresponding infrastructure. In a managed data center arrangement, enterprises lease both the equipment and the infrastructure and rely on the data center company to maintain their operation.
  • Public Cloud Data Center – Server services are performed “in the cloud,” with enterprises paying for data center operations as a service. The physical servers, data storage, and infrastructure are left entirely up to the cloud data center itself. While nominally in the cloud, compute and storage resources ultimately reside at the provider’s physical data center.

Although enterprise data centers are relatively easy to set up and use, as a company grows it might consider a managed data center or colocation facility. Alternatively, a public cloud-based data center has the advantage of near-unlimited scaling, and companies can offload many of the tasks involved in its operation for a fee. However, security and infrastructure are ultimately out of the company’s hands, which can be beneficial or problematic depending on your business context and scale.

Data Center Infrastructure Components (Architecture)

As listed below, data centers use a wide range of infrastructure components that work together to perform data management tasks. These components, along with how they interact and the facility’s physical layout, are referred to as the data center’s architecture. Architecture can also refer to the data center’s performance Tier (I–IV), largely an indicator of its reliability, discussed later in this article.

Servers (Compute)

Servers are the individual computers that make up a data center’s computing capability. In a data center, servers typically come in one of the forms below and must be powerful enough to respond to requests from other computers (clients, in this context) in a timely manner. In a more general sense, a server is any computer that automatically responds to requests from other computers. Any computer can be set up as a server, but general-purpose consumer machines would normally not be up to the task at data center scale.

  • Rack-Mount Servers – Similar in shape to a pizza box, these servers slide into racks. Each features its own computing components, power supply, and networking equipment, and multiple servers are normally stacked top-to-bottom in a rack.
  • Blade Servers – Smaller than rack-mount servers, blade servers are meant for high-density installation. While powerful, physical limitations mean they typically cannot accommodate as many hard drives or memory slots as their rack-mount cousins. Power and cooling are provided by a shared chassis.
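At its core, a server is simply a machine that listens for client requests and answers them. The sketch below illustrates this request/response relationship with a minimal HTTP server from Python’s standard library; the port binding and message text are arbitrary choices for illustration, not part of any real data center deployment.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class EchoHandler(BaseHTTPRequestHandler):
    """Answer every GET request with a short plain-text message."""
    def do_GET(self):
        body = b"hello from the server"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request logging to keep the sketch quiet

# Bind to an ephemeral port (port 0) so the example runs anywhere.
server = HTTPServer(("127.0.0.1", 0), EchoHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" here is just another process making a request.
with urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    reply = resp.read().decode()

print(reply)  # hello from the server
server.shutdown()
```

A production server does the same thing conceptually, but at a scale and speed that demands the dedicated hardware described above.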

Storage Systems

Storage systems for servers come in three main types:

  • Direct-Attached Storage (DAS) – Data storage directly attached to the server (e.g., a computer hard drive is a simple form of DAS).
  • Network-Attached Storage (NAS) – Data storage is provided over a standard Ethernet connection. NAS storage generally resides on its own dedicated server or servers.
  • Storage Area Network (SAN) – Shared data storage using a specialized network and techniques to deliver data with extremely high performance, data protection, scalability, and management.

Networking

Networking devices must be implemented at a scale commensurate with the servers’ capabilities. Unlike a home or small business router setup, a data center will feature a wide array of switches and other data transmission components, which need to be upgraded and maintained on a (potentially) massive scale.

Power Supply and Cable Management

Power must be supplied to each server and storage system in a data center, which at scale can be a massive requirement. Physical cabling will also require significant space, which should be organized and minimized to the greatest extent possible.
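To get a feel for the scale involved, a back-of-the-envelope calculation helps. The figures below (rack count, servers per rack, per-server draw, PUE value) are hypothetical; PUE, Power Usage Effectiveness, is the industry-standard ratio of total facility power to IT equipment power.

```python
# Hypothetical facility: 20 racks, 30 servers per rack, 500 W average draw each.
racks, servers_per_rack, watts_per_server = 20, 30, 500
it_load_kw = racks * servers_per_rack * watts_per_server / 1000  # 300 kW

# PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
# A PUE of 1.5 means cooling, lighting, and power conversion add 50% overhead.
pue = 1.5
facility_kw = it_load_kw * pue

print(f"IT load: {it_load_kw:.0f} kW; facility draw at PUE {pue}: {facility_kw:.0f} kW")
```

Even this modest hypothetical facility draws hundreds of kilowatts continuously, which is why power provisioning dominates data center design.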

Redundancy and Disaster Recovery

System architecture should be set up so that failures or intrusions disrupt operations as little as possible. Capabilities are outlined in the next section in four tiers.

Environmental Controls

Power input to a data center is used in processing and is ultimately turned into heat. While a standard computer’s processor can typically be kept cool by a simple fan, a data center has orders of magnitude more processing capability than a consumer PC and thus needs far more heat dissipation. A data center’s HVAC systems must be designed to deal with this excess heat, and liquid cooling solutions can also be implemented to transport thermal energy.
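Because essentially every watt of electrical power consumed by IT equipment becomes heat, cooling requirements follow directly from the power budget. A rough conversion, using a hypothetical 300 kW IT load and the standard watts-to-BTU and refrigeration-ton factors:

```python
# Essentially all electrical power consumed by IT equipment becomes heat.
# Hypothetical IT load of 300 kW, converted to cooling requirements:
it_load_w = 300_000

BTU_PER_HOUR_PER_WATT = 3.412      # standard conversion factor
BTU_PER_TON_OF_COOLING = 12_000    # 1 refrigeration ton = 12,000 BTU/h

btu_per_hour = it_load_w * BTU_PER_HOUR_PER_WATT
cooling_tons = btu_per_hour / BTU_PER_TON_OF_COOLING
print(f"{btu_per_hour:,.0f} BTU/h, roughly {cooling_tons:.0f} tons of cooling")
```

In other words, a mid-sized data hall can need as much cooling capacity as dozens of homes combined, which is why HVAC design is central to data center architecture.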

Physical Security

Highly rated data centers must be physically secured via devices like cameras, fencing, and entry scanners, as well as human security personnel. Fire suppression would also fall under this umbrella, as would any preparations for natural disasters.

Data Center Ratings and Redundancy Levels

Data centers can be classed into four performance tiers, which largely relate to reliability and security. These tiers can serve as a quick reference for how to judge a data center’s capabilities, as well as a playbook for enhancing your own network. Standards include the following:

  • Tier I – Requires an uninterruptible power supply (UPS) for short power interruptions, a generator for longer-term outages, cooling equipment, and a dedicated area for IT operations. Notably, Tier I facilities need to shut down for scheduled preventive maintenance.
  • Tier II – Adds redundant power and cooling components for enhanced protection against unexpected downtime. Components can be replaced without a shutdown.
  • Tier III – Adds redundant distribution paths for power and cooling capabilities. Does not require shutdown for equipment maintenance or replacement.
  • Tier IV – Features several redundant, independent, and physically isolated systems that act together to ensure that an interruption in one system does not affect another. In theory, a Tier IV system is not susceptible to disruption from an unplanned event, though if one isolated system is down for maintenance, the chance of disruption is increased.

Depending on business needs, an in-house data center may be appropriate, or an offsite Tier III or IV data and compute facility could work better (either for colocation or via managed computing hardware). Alternatively, public cloud-based server solutions may be appropriate, allowing businesses to scale up without dedicating significant technical resources to this transition.

Related Resources

For more on the subject of data management, also see our article on latency. A great data center setup helps minimize latency, while insufficient compute and data transport resources result in a less efficient setup. More than a simple inconvenience, a few seconds lost time after time, day after day, employee after employee, can add up to significantly reduced productivity. For public-facing applications, delays can mean frustrated, or even lost, customers.
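The way small delays compound is simple arithmetic. All of the inputs below (seconds lost per request, request volume, headcount, workdays) are hypothetical, chosen only to show how quickly the total grows:

```python
# Hypothetical: each employee loses 3 seconds to latency on 200 requests a day.
seconds_lost_per_request = 3
requests_per_day = 200
employees = 500
workdays_per_year = 250

hours_lost_per_year = (seconds_lost_per_request * requests_per_day
                       * employees * workdays_per_year) / 3600
print(f"Roughly {hours_lost_per_year:,.0f} employee-hours lost per year")
```

Under these assumptions, a barely noticeable three-second delay adds up to tens of thousands of employee-hours annually, which is the hidden cost an efficient data center setup helps avoid.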

FAQ

How do data centers make money?

In the context of this article, data centers do not make money directly but provide the computing resources that allow enterprises to run efficiently and make (more) money at their core business function. A notable exception is cryptocurrency mining, where computing resources perform calculations that directly generate funds.

What is the difference between a data center and a server?

A server is an individual compute module, while a data center is a dedicated area, building, or buildings that house one or more servers.

Conclusion

Data centers are extremely important for our modern network-centric computing environment as the physical locations where cloud-based computing largely takes place. Data centers house servers, storage, and networking equipment, along with power and cooling infrastructure to support these machines. They enable everything from remote storage for individuals and small businesses to the massive computing needs of online commerce, streaming, and AI services.

Experience the World’s Most Advanced Cybersecurity Platform

See how our intelligent, autonomous cybersecurity platform harnesses the power of data and AI to protect your organization now and into the future.