At Intel, we're unlocking the potential of the cloud. We envision a world where data and services are shared securely—confidently—from person to person across multiple clouds. Where shared resources and cloud infrastructure can be redeployed and reallocated on the fly. Where clouds are client aware, so people who access the cloud have a great experience no matter what device they're using. Intel® Architecture is ideal for the cloud, whether at the system, storage or network level.
The sky really is the limit. Here's how we're making it happen.
Explore the stack
Security up there begins down here.
Security in the cloud begins on the ground with hardware you can trust. That's the concept behind trusted compute pools. With these core elements in place, you can validate the integrity of your cloud-based infrastructure.
Security policies protect data and applications in the cloud, ensuring that your data and workloads touch only known-good systems. Within a trusted compute pool, policies are addressed in four layers.
Hypervisors allow you to build and manage a virtualized IT infrastructure. These important cloud-based tools abstract processor, memory, storage, and networking resources across multiple virtual machines running multiple operating systems and applications.
With trusted hardware as your foundation, you deploy your workloads on known-good pools of servers that have been tested, validated, and proven secure. It's a crucial first step toward securing your cloud.
Intel® Trusted Execution Technology (Intel® TXT)
Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI)
Intel® Virtualization Technology (Intel® VT)
Control your corner of the cloud.
Trusted compute pools article
From the halls of government to the high-rise towers of the corporate world, forward-looking organizations are recognizing the potential of cloud computing models. The cloud is now widely seen as a path to a wide range of business and IT benefits—from dynamic provisioning to meet unpredictable workloads to a more cost-effective approach to the acquisition and use of IT resources.
But that's all the easy part—seeing the benefits. Before your organization can move critical applications to the cloud, you need to overcome well-founded concerns about security risks that arise with cloud deployments. Today's cloud environments face an ever-growing range of security threats, such as hypervisor and firmware attacks and malicious root-kit installations designed to take control of an operating system. The platform itself is now a target.
These new security threats are emerging at a time when the requirements and mandates for data security are higher than ever before. Tighter industry and government regulations, along with well-publicized data security breaches, have raised the bar for data center security to new heights. In this climate, your organization can't move applications and data to the cloud until you have complete confidence in your security strategy.
This need is a key driver for trusted compute pools. Trusted compute pools give you the assurance that the operating systems and virtual machine managers (VMMs) that run on a set of physical servers have been measured and checked against a known, trusted code state.
Trusted compute pools allow you to control more aspects of your cloud deployment, so you get the advantages of the cloud along with many of the secure attributes of a privately owned environment. The trusted pool spans hardware, the virtualization engine, the virtualization management system, and the security reporting system.
“Trusted compute pools allow you to control more aspects of your cloud deployment.”
Along the way, the trusted compute pool creates visibility and transparency for compliance and audit purposes. It gives you the reporting mechanism you need to attest to the security of the cloud environment.
While they are essential for cloud deployments, trusted compute pools aren't the be-all and end-all of cloud security. Rather, they create a hardware-level foundation that supports additional security policies and enables secure multi-tenancy operations. In this sense, trusted compute pools help you achieve the level of trust you need to move high-end applications to the cloud—with all the confidence that comes with a tightly controlled, private data center.
Ultimately, with trusted compute pools you have greater control over your corner of the cloud.
NEXT UP: Security in the cloud begins on the ground.
Security in the cloud begins on the ground.
If your organization is thinking of moving applications and data to the cloud, you're no doubt thinking about a security strategy. But how do you start building your cloud security strategy? In Intel's view, cloud security begins on the ground—with the physical servers on which cloud infrastructure is built.
Why? Because hardware-level security is a lot like the foundation on a house. The structure that rises from the foundation is only as strong as the concrete that it sits on. By deploying your workloads exclusively across a foundation of server pools that have been tested, validated, and determined secure, you take a crucial first step toward securing your cloud.
This is the concept of trusted compute pools. Trusted compute pools give you the ability to establish, log, and communicate the trustworthiness of the servers you're using in the cloud data center. These capabilities create a baseline for security, compliance, and assurance of platform integrity. You know that when the operating systems on your servers are launched, they are running only approved code.
What's more, trusted compute pools allow you to attest to the safety of your computing infrastructure. You can prove that your physical and virtual infrastructure components are trustworthy. This is a critical capability—because if you can't attest to the safety of your computing infrastructure, you can't attest to the security of the data, software, and services running on top of that infrastructure.
“Trusted compute pools give you the ability to establish, log, and communicate the trustworthiness of the servers you're using.”
Trusted compute pools create a hierarchy of trust that is rooted in hardware and that extends to the other components of a secure infrastructure—including virtual machines and the applications that run on them. Higher-level security policies are built on the secure foundation to create a trusted computing environment that gives you many of the security benefits of a privately owned data center along with the benefits of a cloud environment.
One important caveat: When we are talking about trusted compute pools, we are talking about a secure foundation for your trusted compute environment. While this is a crucial first step toward establishing a trusted compute environment, the security of your data and applications also depends on the security of your virtual machines, virtual machine managers, applications, and other exposure points that are above the hardware level. Security solutions at all of these layers work together to create a trusted environment that is ready for your mission-critical applications.
NEXT UP: Building your cloud on technologies of trust.
Building your cloud on technologies of trust.
Trusted compute pools leverage multiple advanced technologies to create a secure hardware foundation for cloud computing. Taken together, these technologies enable increased isolation and safer migration of virtual machines, hardware-assisted protection against launch-time attacks, and faster data encryption and decryption.
Let's walk through some of the most important technologies that enable trusted compute pools.
The foundation for hardware-level security is Intel® Trusted Execution Technology (Intel® TXT). This technology enables an accurate comparison of the critical elements of the launch environment against a known good source. This "Measured Launch Environment" (MLE) provides hardware-based enforcement mechanisms to block the launch of code that does not match approved code. This approved-code approach enhances security by blocking both known and unknown threats. Even if you haven't recognized a new malicious root-kit hypervisor, Intel TXT will block the threat simply because the malware doesn't match the approved code. If the code is unapproved, it doesn't get loaded.
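The approved-code idea can be sketched in a few lines of Python. This is an illustrative model only: the whitelist dictionary and SHA-256 comparison below stand in for TXT's hardware-rooted measurements and TPM-backed launch control policies, and the component names are hypothetical.

```python
import hashlib

# Hypothetical whitelist of known-good launch measurements (e.g., digests of
# approved VMM images). In a real TXT deployment these live in TPM-backed
# launch control policies, not a Python dict.
APPROVED_MEASUREMENTS = {
    "vmm": hashlib.sha256(b"approved-vmm-build-1.2.3").hexdigest(),
}

def measure(image: bytes) -> str:
    """Hash the launch component, standing in for a TXT measurement."""
    return hashlib.sha256(image).hexdigest()

def allow_launch(component: str, image: bytes) -> bool:
    """Allow the launch only if the measurement matches the approved value."""
    return APPROVED_MEASUREMENTS.get(component) == measure(image)

# The approved image launches; anything else, known or unknown, is blocked.
assert allow_launch("vmm", b"approved-vmm-build-1.2.3")
assert not allow_launch("vmm", b"malicious-rootkit-hypervisor")
```

The key property is that the check is a whitelist, not a blacklist: unrecognized code is blocked by default, which is why even a never-before-seen root-kit fails the comparison.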
Similarly, Intel TXT can enable policies that restrict the migration of virtual machines to only trusted platforms within a trusted compute pool. Virtual machines (VMs) that were created on a trusted platform can then migrate freely within the trusted pool. Like travelers at an airport, VMs that have cleared the security check can move freely between gates.
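The airport-security analogy amounts to a simple gate check: both the source and destination hosts must be in the trusted pool. A minimal sketch, with hypothetical host names:

```python
def can_migrate(src_host: str, dest_host: str, trusted_pool: set) -> bool:
    """A VM may move only between hosts that are both in the trusted pool."""
    return src_host in trusted_pool and dest_host in trusted_pool

trusted_pool = {"host-a", "host-b"}          # hosts with verified launches
assert can_migrate("host-a", "host-b", trusted_pool)      # cleared security
assert not can_migrate("host-a", "host-x", trusted_pool)  # untrusted gate
```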
Intel® Virtualization Technology (Intel® VT) is another important component of trusted compute pools. Intel VT increases virtualization software performance with a hardware assist. This performance enhancement allows virtualization to be more viable in a cloud environment. Intel VT also creates memory protections and allows for some VM isolation.
Another technology that complements trusted compute pools is Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI). This technology enhances the performance of data encryption tools—and better performance makes encryption more viable in cloud data centers. In addition, Intel AES-NI helps reduce the risk of side-channel attacks on AES by performing decryption and encryption completely in hardware without the need for software lookup tables.
“Like travelers at an airport, VMs that have cleared the security check can move freely between gates.”
Taken together, these technologies help you create a secure hardware foundation that supports layers of higher-level security policies. These layers make cloud computing feasible—and give your organization the confidence to move applications and data to the cloud.
NEXT UP: Sound policies for controlling your cloud.
Sound policies for controlling your cloud.
When it comes to protecting your data and applications in the cloud, security policies rule the skies. Through security policies, you harden your security infrastructure and control how your workloads are handled, so your data touches only known-good systems. This is where trust originates.
There are many ways to configure the solution stack to get to the policies that drive toward trusted compute pools. To keep things simple, we'll look at a theoretical stack that has four layers.
Hardware layer
At the hardware level, security policies are enabled by Intel® Trusted Execution Technology (Intel® TXT). This technology is designed to harden computing platforms to ward off hypervisor and firmware attacks, malicious root-kit installations, and other threats. Intel TXT uses the processor to initiate a trusted boot and provide assurance of platform integrity.
Intel TXT works in tandem with Trusted Platform Modules (TPMs) that comply with specifications from the Trusted Computing Group. The TPM component stores policies from the hardware manufacturer and the platform owner. In addition, Intel TXT is designed to work with industry-standard encryption tools.
Virtualization layer
The virtualization layer is where the hypervisors live. At this level, policies harden the virtualization infrastructure, following known best practices, such as VMware's security hardening guidelines. These guidelines explain how to securely deploy hypervisors in a production environment.
Virtualization management layer
The virtualization management layer aggregates the platform trust status from the hypervisors running on the host systems. This is accomplished via a virtualization manager, such as VMware vCenter*. The virtualization manager can challenge a host system to find out if it is trustworthy—specifically if it booted up in a known, trusted state, as measured by Intel TXT.
The virtualization management layer provides an application programming interface (API) that allows the next layer, which encompasses security and compliance applications, to gather information on the state of the physical hosts and the hypervisors running on them.
Security application layer
The security application layer encompasses security policy engines such as the HyTrust* Appliance and compliance consoles such as the RSA Archer eGRC* (enterprise governance, risk and compliance) suite. These applications can take the information the virtualization manager has aggregated on platform trust, compare it against expectations, and use it to define and enforce policies or present it for reporting and audit functions.
Say, for example, that a server platform must comply with a company's guidelines for the Federal Information Security Management Act (FISMA), which mandate that a platform hosting a sensitive workload must be trusted. The compliance application verifies whether this is the case and then shows the results in a dashboard view.
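That verification step can be modeled simply. Everything below is a hypothetical stand-in: the host names, the workload records, and the dashboard function illustrate the flow from aggregated trust status to a per-workload compliance verdict, not any vendor's actual API.

```python
# Trust status as a virtualization manager might report it after challenging
# each host's Intel TXT launch measurement.
host_trust = {"host-a": True, "host-b": False}

# Policy: sensitive workloads must run only on trusted hosts.
workloads = [
    {"name": "payroll", "sensitive": True, "host": "host-a"},
    {"name": "test-db", "sensitive": True, "host": "host-b"},
]

def compliance_dashboard(workloads, host_trust):
    """Return a per-workload compliance verdict for the dashboard view."""
    return {
        w["name"]: (not w["sensitive"]) or host_trust.get(w["host"], False)
        for w in workloads
    }

# payroll runs on a trusted host and is compliant; test-db violates the policy.
print(compliance_dashboard(workloads, host_trust))
```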
In this manner, the layers of the solution stack build on each other to create a trusted compute pool. When all the levels of the stack are working together, you can verify the trustworthiness of your cloud environment.
NEXT UP: Rent the cloud, own the key.
Rent the cloud, own the key.
To run high-value mission-critical applications in the cloud, you should ideally have the same level of security in the cloud that you have with privately owned infrastructure—where you own the building and systems, where you lock your own doors, and where you have your own IT people managing everything.
The reality is, it's difficult to achieve that level of trust when you're using someone else's infrastructure and sharing that infrastructure with other tenants. A multi-tenant environment creates new types of risks and new requirements for security.
One approach to addressing these requirements is the creation of trusted compute pools that act as "safe zones" within the multi-tenant data center. Trusted compute pools help you reduce security risks and gain the confidence you need to use the cloud for your mission-critical applications. They essentially allow you to own the key to your rented corner of the cloud.
Trusted compute pools begin with technology that is built into the processor silicon. This hardware-based approach provides strong platform protections and facilitates compliance with policies, regulations, and standards. You wouldn't want to go to the cloud without them.
To make the vision of trusted compute pools a reality in today's data centers, Intel delivers a range of enabling technologies. These include Intel® Trusted Execution Technology (Intel® TXT) to enable an accurate comparison of the critical elements of the launch environment against a known good source—and to block the launch of unapproved code.
Other important foundational elements include Intel® Virtualization Technology (Intel® VT), which increases virtualization software performance, and Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI), which enhances the performance of data encryption tools. These complementary technologies work together to enable the creation of trusted compute pools that help protect your hardware platforms, data, and applications against an ever-growing range of threats.
The cloud security problem, of course, is much larger than the challenges of protecting your hardware from rogue hypervisors, malicious root-kit installations, and other malware. But putting hardware-level protections in place is a critical first step in the process of building a comprehensive cloud security solution.
“Trusted compute pools act as 'safe zones' within the multi-tenant data center.”
When you establish trusted compute pools, you create a sound foundation for a trusted compute environment. This foundation gives you the assurance that your mission-critical applications and data are moving across platforms you know and trust.
Explore the network
One network really should be enough.
Unified networking allows you to converge your network into one fabric that can carry all network traffic. One network. One standardized set of cables, NICs, and switches. One simple way to reduce IT headaches and equipment costs.
At the operating system level, software storage initiators use native protocol stacks to enable storage traffic to move over a 10Gb Ethernet fabric. Protocol processing is handled by the server processors while data-path processing is done by the network adapter. Supported protocols include:
Fibre Channel over Ethernet (FCoE)
Internet Small Computer System Interface (iSCSI)
Network File System (NFS)
Intelligent offloads improve application performance by shifting targeted processing functions from the operating system onto the network controller. Efficiencies are gained with:
LAN and iSCSI offloads
10GbE unified network port
A single Intel® Ethernet server adapter handles multiple types of network traffic, including Ethernet, iSCSI, and FCoE. The adapter improves performance with capabilities that address bottlenecks in virtualized environments and improve quality of service.
LAN and SAN traffic
Both local area network (LAN) and storage area network (SAN) traffic run over the 10Gb Ethernet fabric. Storage traffic can use a dedicated 10GbE network or be merged onto the LAN to create a single converged 10GbE network that supports:
One network fabric
LAN and SAN networks
Unification enables simplification.
Unified networking article
The rise of cloud computing is a bit like that of a space shuttle taking off. When the rocket engine and its propellants fire up, the shuttle lifts slowly off the launch pad and then builds momentum until it streaks into space.
Cloud is now in the momentum-building phase and on the verge of quickly soaring to new heights. There are lots of good reasons for the rapid rise of this new approach to computing. Cloud models are widely seen as one of the keys to increasing IT and business agility, making better use of infrastructure, and cutting costs.
So how do you launch your cloud? An essential first step is to prepare your network for the unique requirements of services running on a multi-tenant shared infrastructure. These requirements include IT simplicity, scalability, interoperability, and manageability. All of these requirements make the case for unified networking based on 10 Gigabit Ethernet (10GbE).
Unified networking over 10GbE simplifies your network environment. It allows you to converge your network to one type of fabric, so you don't have to maintain and manage different technologies for different types of network traffic. You also gain the ability to run storage traffic over a dedicated SAN if that makes the most sense for your organization.
Either way, 10GbE gives you a great deal of scalability. 10GbE enables you to quickly scale up your networking bandwidth to keep pace with the dynamic demands of cloud applications. This rapid scalability helps you avoid I/O bottlenecks and meet your service-level agreements.
“10GbE simplifies your network, allowing you to converge to one type of fabric.”
While that's all part of the goodness of 10GbE, it's important to keep this caveat in mind: Not all 10GbE is the same. Intel is uniquely positioned to deliver a 10GbE solution that scales with the performance and features of Intel® Xeon® processors. And with features like intelligent offloads of targeted processing functions, Intel helps you realize best-in-class performance for your cloud network.
Intel's unified networking solutions are enabled through a combination of standard Intel® Ethernet products along with trusted network protocols integrated and enabled in a broad range of operating systems and hypervisors. This approach makes unified networking capabilities available on every Intel processor-based server, enabling maximum reuse in heterogeneous environments.
Ultimately, the Intel approach to unified networking helps you solve today's overarching cloud networking challenges, and create a launch pad for your private, hybrid, or public cloud.
NEXT UP: The urge to purge: Have you had enough of "too many" and "too much"?
The urge to purge.
In today's data center, networks are a story of "too many" and "too much." That's too many fabrics, too many cables, and too much complexity. Unified networking based on Intel® Ethernet simplifies this story. "Too many" and "too much" become "just right."
“Convergence allows you to standardize your network: same cabling, same NICs, same switches.”
Let's start with the fabrics. It's not uncommon to find an organization that is running three distinctly different networks: a 1GbE management network, a multi-1GbE local area network (LAN), and a Fibre Channel or iSCSI storage area network (SAN).
Unified networking enables cost-effective connectivity to the LAN and the SAN on the same Ethernet fabric. Pick your protocols for your storage traffic. You can use NFS, iSCSI, or Fibre Channel over Ethernet (FCoE) to carry storage traffic over your converged Ethernet network.
You can still have a dedicated network for storage traffic if that works best for your needs. The only difference: That network runs your storage protocols over 10 Gigabit Ethernet (10GbE), the same technology used in your LAN.
When you make this fundamental shift, you can reduce your equipment needs. Convergence of network fabrics allows you to standardize the equipment you use throughout your networking environment: the same cabling, the same NICs, the same switches. You now need just one set of everything, instead of two or three sets.
In a complementary gain, convergence over 10GbE helps you cut your cable numbers. In a 1GbE world, many virtualized servers have 8–10 ports, each of which has its own network cable. In a typical deployment, one 10GbE cable could handle all of that traffic.
This is not a vision of things to come. With Intel® Ethernet, this world of simplified networking is here today. Better still, this is a world based on open standards. The Intel approach to unified networking increases interoperability with common APIs and open-standard technologies.
A few examples of these technologies:
Data Center Bridging (DCB) allows multiple types of traffic to run over an Ethernet wire.
Fibre Channel over Ethernet (FCoE) enables the Fibre Channel protocol used in many SANs to run over the Ethernet standard common in LANs.
Management Component Transport Protocol (MCTP) and Network Controller Sideband Interface (NC-SI) enable server management via the network.
Drawing on these and other open-standard technologies, Intel Ethernet enables the interoperability that allows network convergence and management simplification. And just like that, "too many" and "too much" become "just right."
NEXT UP: Know your limits—then push them with super-elastic 10Gb Ethernet.
Know your limits—then push them.
Let's imagine for a moment a dream highway. In the middle of the night, when traffic is light, the highway is a four-lane road. When the morning rush hour begins and cars flood the road, the highway magically adds several lanes to accommodate the influx of traffic.
This commuter's dream is the way cloud networks must work. The cloud network must be architected to quickly scale up and down to adapt itself to the dynamic and unpredictable demands of applications. This super-elasticity is a fundamental requirement for a successful cloud.
Of course, achieving this level of elasticity is easier said than done. In a cloud environment, virtualization turns a single physical server into multiple virtual machines, each with its own dynamic I/O bandwidth demands. These dynamic and unpredictable demands can overwhelm networks and lead to unacceptable I/O bottlenecks.
The solution to this challenge lies in super-elastic 10 Gigabit Ethernet (10GbE) networks built for cloud traffic. So what does it take to get there? Intel helps you build your 10GbE network today with unique technologies designed to accelerate virtualization and remove I/O bottlenecks, while complementing solutions from leading cloud software providers.
Consider these examples:
The latest Intel® Ethernet servers support Single Root I/O Virtualization (SR-IOV), a standard created by the PCI Special Interest Group. SR-IOV improves network performance for Citrix Xen* Server and Red Hat KVM* by providing dedicated I/O and data isolation between VMs and the network controller. The technology allows you to partition a physical port into multiple virtual I/O ports, each dedicated to a particular virtual machine.
Virtual Machine Device Queues (VMDq) improves network performance and CPU utilization for VMware and Windows Server 2008 Hyper-V* by reducing the sorting overhead of networking traffic. VMDq offloads data-packet sorting from the virtual switch in the virtual machine monitor and instead does this on the network adapter. This innovation helps you avoid the I/O tax that comes with virtualization.
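On Linux, partitioning an SR-IOV-capable port into virtual functions is exposed through the standard `sriov_numvfs` sysfs attribute. The sketch below shows the idea; the helper name is ours, and a real invocation requires root privileges and an SR-IOV-capable adapter.

```python
from pathlib import Path

def enable_sriov_vfs(device: str, num_vfs: int,
                     sysfs_root: str = "/sys/class/net") -> None:
    """Partition a physical port into num_vfs virtual functions via the
    standard Linux sysfs interface. Each VF can then be assigned directly
    to a virtual machine, giving it dedicated I/O."""
    vf_file = Path(sysfs_root) / device / "device" / "sriov_numvfs"
    vf_file.write_text(str(num_vfs))

# Example (requires root and SR-IOV support):
# enable_sriov_vfs("eth0", 4)   # create four virtual functions on eth0
```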
“The cloud must be able to adapt itself. Super-elasticity is a fundamental requirement.”
Technologies like these enable you to build a high-performing, elastic network that helps keep the bottlenecks out of your cloud. It's like that dream highway that adds lanes whenever the traffic gets heavy.
NEXT UP: Manage the ups, downs, and in-betweens of services in the cloud.
Manage ups, downs, and in-betweens.
In an apartment building, different tenants have different Internet requirements. Tenants who transfer a lot of large files or play online games want the fastest Internet connections they can get. Tenants who use the Internet only for e-mail and occasional shopping are probably content to live with slower transfer speeds. To stay competitive, service providers need to tailor their offerings to these diverse needs.
This is the way it is in a cloud environment: different tenants have different service requirements. Some need a lot of bandwidth and the fastest possible throughput times. Others can settle for something less.
If you're operating a cloud environment, either public or private, you need to meet these differing requirements. That means you need to be able to allocate the right level of bandwidth to an application and manage network quality of service (QoS) in a manner that meets your service level agreements (SLAs) with different tenants.
This is where Intel® Ethernet enters the picture. Intel Ethernet incorporates multiple technologies that allow you to tailor service quality to the needs and SLAs of different applications and different cloud tenants.
Here are some of the more important technologies for a well-managed cloud network:
Data Center Bridging (DCB) provides a collection of standards-based end-to-end networking technologies that make Ethernet the unified fabric for multiple types of traffic in the data center.
It enables better traffic prioritization over a single interface, as well as an advanced means of shaping traffic on the network to decrease congestion.
“Intel® Ethernet allows you to tailor your cloud offerings to different tenants.”
Queue Rate Limiting (QRL) assigns a queue to each VM or each tenant in the cloud environment and controls the amount of bandwidth delivered to that user. The Intel approach to QRL is unique: it guarantees that a VM or tenant will get a minimum amount of bandwidth, but it doesn't limit the maximum bandwidth. If there is headroom on the wire, the VM or tenant can use it.
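The minimum-guarantee-plus-headroom behavior can be sketched as follows. The even split of spare capacity is an illustrative assumption for the sketch, not the controller's actual arbitration algorithm.

```python
def allocate_bandwidth(guarantees: dict, link_capacity: float) -> dict:
    """Give each tenant its guaranteed minimum, then share remaining
    headroom among tenants: the minimum is enforced, the maximum is not."""
    total_min = sum(guarantees.values())
    headroom = max(link_capacity - total_min, 0)
    share = headroom / len(guarantees)   # illustrative even split
    return {tenant: g + share for tenant, g in guarantees.items()}

# 10 Gbps link, minimums totaling 6 Gbps: 4 Gbps of headroom is shared,
# so tenant-a can reach 6.0 Gbps and tenant-b 4.0 Gbps.
alloc = allocate_bandwidth({"tenant-a": 4.0, "tenant-b": 2.0}, 10.0)
```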
Traffic Steering sorts traffic per tenant to support rate limiting, QoS, and other management approaches. Traffic Steering is made possible by on-chip flow classification that delineates one tenant from another. This is like the logic in the local Internet provider's box in the apartment building. Everybody's Internet traffic comes to the building in a single pipe, but then gets divided out to each apartment, so all the packets are delivered to the right addresses.
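The apartment-building analogy maps naturally to per-tenant queues. In this sketch, classification by destination MAC address is an illustrative stand-in for the adapter's on-chip flow classification, and the addresses and tenant names are hypothetical.

```python
from collections import defaultdict

def steer(packets, tenant_of):
    """Sort incoming packets into per-tenant queues based on destination
    address, mimicking on-chip flow classification."""
    queues = defaultdict(list)
    for pkt in packets:
        tenant = tenant_of.get(pkt["dst_mac"], "unclassified")
        queues[tenant].append(pkt)
    return queues

tenant_of = {"aa:bb:cc:00:00:01": "tenant-a", "aa:bb:cc:00:00:02": "tenant-b"}
packets = [
    {"dst_mac": "aa:bb:cc:00:00:01", "payload": "p1"},
    {"dst_mac": "aa:bb:cc:00:00:02", "payload": "p2"},
    {"dst_mac": "aa:bb:cc:00:00:01", "payload": "p3"},
]
queues = steer(packets, tenant_of)   # tenant-a gets p1 and p3; tenant-b gets p2
```

Once packets are in per-tenant queues, policies like rate limiting and QoS can be applied to each queue independently.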
Technologies like these enable your organization to manage the ups, downs, and in-betweens of services in the cloud. You can then tailor your cloud offerings to the needs of different internal or external customers, and deliver the right level of service at the right price.
NEXT UP: On the road to the cloud, who do you want in the driver's seat?
On the road to the cloud, who do you want in the driver's seat?
For years, people have talked about 10 Gigabit Ethernet (10GbE) being the future of networking and the foundation of cloud environments. Well, the future is now: 10GbE is here in a big way.
There are many reasons for this fundamental shift. Unified networking based on 10GbE helps you reduce the complexity of your network environment, increase I/O scalability, and better manage network quality of service. This is a story of simplification: One network card. One network connection. Optimum LAN and SAN performance.
So how do you get started down this road? To move to 10GbE, you want to first make sure you have the right driver behind the wheel. You want a driver who can keep you on the right path and help you avoid wrong turns. That's Intel. Intel is uniquely positioned to help your organization make the transition to unified networking and then gain the greatest value from your 10GbE network on an ongoing basis.
Consider these Intel differentiators:
A systems focus—Intel's focus isn't just on the network adapter. Intel works to optimize processors, chipsets, and network adapters to improve performance for the virtualization software that empowers your cloud environment. Intel® Ethernet works in a complementary manner with Intel® Xeon® processors to improve virtualization performance. The performance of Intel Ethernet scales with the performance and features of Intel Xeon processors.
Unique technologies—To move to the cloud you need a network that can move data in a big way. Intel Ethernet helps you get there by accelerating virtualization and removing I/O bottlenecks with unique technologies that complement the solutions from leading cloud software providers.
Open standards—In its approach to networking products, Intel drives open standards. The goal is to deliver maximum reach for Intel silicon with software and other network components.
“The idea behind Intel® Ethernet: It just works.”
A one-stop shop—To deploy a unified network, you shouldn't have to deal with various vendors who each provide a piece of a jigsaw puzzle. And you won't have to with Intel's full product line, including multiple connection types and port configurations on 10GbE as well as 1GbE. Intel provides a one-stop shop for all of your Ethernet needs.
Rock-solid reliability—As you head toward the cloud, you need to have complete confidence in your converged network or your local and storage networks that run over 10GbE. With 30 years of experience in Ethernet networking, and more than 600 million ports shipped, Intel gives you the confidence that your network will be stable and reliable. That's the idea behind Intel Ethernet: It just works.
Put it all together and you have a trusted driver for your journey to a unified, cloud-ready network.
Explore the power
Power is no longer a commodity. In the Open Data Center, it's a form of currency that needs to be monitored, managed, and sometimes capped under preset policies. Such is the "power" of intelligent power management in the cloud.
Monitoring and control
Policy-based power management uses integrated tools to enable power monitoring and control from the chip to the chiller—or from the server microprocessor to the data center cooling system.
To boost efficiency, the power and cooling for a facility should be aligned with power and cooling for IT infrastructure. Managing these systems in an integrated fashion holds the greatest potential for gains in:
Real-time application performance monitoring will allow IT managers to adjust power consumption to meet target SLAs for mission-critical applications. Since IT managers control power allocation, power-capping decisions can be based on the actual needs of applications.
Embedded instrumentation provides power information and power controls over single servers and groups of servers down to the system, processor, and memory level. IT managers control pools of virtualized servers as a single entity, potentially reducing not only server power consumption but also cooling requirements. Technologies include:
The case for policy-based power management.
Power management article
Not many years ago, server power consumption wasn't a big concern for IT administrators. The supply of power was plentiful, and in many cases, power costs were bundled with facility costs. For the most part, no one thought too hard about the amount of power going into servers.
What a difference a few years can make. In today's ever-growing data centers, no one takes power for granted. For starters, we've had too many reminders of the threats to the power supply—including widely publicized accounts of catastrophic natural events, breakdowns in the power grid, and seasonal power shortages.
Consider these examples:
In the wake of the March 2011 earthquake and tsunami, and the loss of the Fukushima Daiichi nuclear power complex, Japan was hit with power restrictions and rolling power blackouts. The available power supply couldn't meet the nation's demands.
In the United States, overextended infrastructure and recurring brownouts and outages have struck California and the Eastern Seaboard, complicating the lives of millions of people.
In Brazil and Costa Rica, power supplies are threatened by seasonal water scarcity for hydro generation, while Chile wrestles with structural energy scarcity and very expensive electricity.
Then consider that in today's data centers, a lot of power is wasted. In a common scenario, server power is over-allocated and rack space is under-populated to cover worst-case loads. This is what happens when data center managers don't have a view into the actual power needs of a server or the tools they need to reclaim wasted power.
All the while, data centers are growing larger and power is becoming a more critical issue. In some cases, data centers have hit the wall—they are out of power and cooling capacity. And as energy costs rise, we've reached the point where some of the world's largest data center operators consider power use as one of the top site-selection issues when building new facilities. The closer you are to a plentiful supply of affordable power, the better off you are.
All of this points to the need for policy-based power management. This forward-looking approach to power management helps your organization use energy more efficiently, trim your electric bills, and manage power in a manner that allows demand to more closely match the available supply.
And the benefits don't stop there: a policy-based approach also allows you to implement power management in terms of elements that are meaningful to the business instead of trying to bend the business to fit your current technology and power supply.
Ultimately, the case for policy-based power management comes down to this: it makes good business sense.
NEXT UP: Using policy-based power management to rein in energy use.
Using policy-based power management to rein in energy use.
Power management article
In today's data centers, power-management policies are like the reins on a horse. They put you in control of an animal—power consumption—that has a tendency to run wild.
When paired with the right hardware, firmware, and software, policies give you control over power use across your data center. You can create rules and map policies into specific actions. You can monitor power consumption, set thresholds for power use, and apply appropriate power limits to individual servers, racks of servers, and large groups of servers.
So how does this work? Policy-based power management is rooted in two key capabilities: monitoring and capping. Power monitoring takes advantage of sensors embedded in servers to track power consumption and gather server temperature measurements in real time.
The other key capability, power capping, fits servers with controllers that allow you to set target power consumption limits for a server in real time. As a next step, higher-level software entities aggregate data across multiple servers to enable you to set up and enforce server group policies for power capping.
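The interplay of per-server monitoring, per-server capping, and group-level policy can be sketched in a few lines. Everything below (the class names, the even-split budgeting rule) is a hypothetical illustration, not an actual Intel API:

```python
# Hypothetical sketch of policy-based power capping across a server group.
# Class names and the even-split policy are illustrative only.

class Server:
    def __init__(self, name, peak_watts):
        self.name = name
        self.peak_watts = peak_watts
        self.cap_watts = None  # no cap until a policy applies one

    def read_power(self):
        # A real system would read embedded sensors (e.g., over IPMI);
        # here we return peak consumption as a stand-in.
        return self.peak_watts

    def set_cap(self, watts):
        # A real system would program the platform's power controller.
        self.cap_watts = watts


def apply_group_cap(servers, group_budget_watts):
    """Enforce a group-level policy by splitting the budget evenly."""
    per_server = group_budget_watts // len(servers)
    for s in servers:
        s.set_cap(per_server)
    return per_server


rack = [Server(f"node{i}", peak_watts=300) for i in range(10)]
cap = apply_group_cap(rack, group_budget_watts=2000)
print(cap)  # each of 10 servers capped at 200 W under a 2000 W rack budget
```

A production policy engine would of course weight caps by workload priority rather than splitting evenly; the point is that group policies are built on top of the per-server monitoring and capping primitives.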
When you apply power capping across your data center, you can save a lot of money on your electric bills. Just how much depends on the range of attainable power capping, which is a function of the server architecture.
For the current generation of servers, the power-capping range might be 30 percent of a server's peak power consumption. So a server that uses 300 watts at peak load might be capped at 200 watts, saving you 100 watts. Multiply 100 watts times thousands of servers and you're talking about operational savings that will make your chief financial officer stand up and take notice.
Dynamic power management takes things a step further. With this approach, policies take advantage of additional degrees of freedom inherent in virtualized cloud data centers as well as the dynamic behaviors supported by advanced platform power management technologies. Power capping levels are allowed to vary over time and become control variables by themselves. All the while, selective equipment shutdowns—a concept known as "server parking"—enable reductions in energy consumption.
Collectively, these advanced power management approaches help you achieve better energy efficiency and power capacity utilization across your data center. In simple terms, you're in the saddle, and you control the reins.
NEXT UP: Want a bigger bang for your power buck? Start at the hardware level.
Want a bigger bang for your power buck? Start at the hardware level.
Power management article
The expression "bigger bang for your buck" refers to getting more value for your dollar. While some accounts trace the expression back to decades-old discussions of military armaments, you might be more likely to hear these words today when people are talking about data center power expenditures. In today's data centers, the name of the game is to get a bigger bang for every power buck.
Policy-based power management helps you work toward this goal by leveraging hardware-level technologies that make it possible to see what's really going on inside a server. More specifically, the foundation for policy-based power management is formed by advanced instrumentation embedded in servers. This instrumentation exposes data on temperature, power states, and memory states to software applications that sit at a higher level.
Intel helps you establish this foundation for more effective power management with a range of technologies built into Intel® Xeon® 5500 and 5600 series processors and Intel® Xeon® E3, E5, and E7 series processors. These technologies include Intel® Intelligent Power Node Manager and Intel® Data Center Manager.
Intel Intelligent Power Node Manager delivers power reporting and power capping functionality for individual servers. Enhanced technology in the Intel Xeon E5 and E7 series processors extends component instrumentation at the platform level, allowing control of, and reporting on, the power consumption of the system, the processors, and the memory subsystem.
As mentioned above, Intel Intelligent Power Node Manager also gives you the ability to limit power at the system, processor, and memory levels—using policies defined by your organization. These capabilities allow you to dynamically throttle system and rack power based on expected workloads.
To extend the gains, Intel Data Center Manager scales the capabilities of Intel Intelligent Power Node Manager to the data center level. It enables fine-grained control of power for servers, racks of servers, and groups of servers. It even allows you to dynamically migrate workloads to optimal servers based on specific power policies with the appropriate hypervisor.
Here's an important caveat: when it comes to policy-based power management, there's no such thing as a one-size-fits-all solution. You need multiple tools and technologies that allow you to capture the right data and put it to work to drive more effective power management—from the server board to the data center environment.
It all begins with technologies that are incorporated into processors and chipsets. That�s the foundation that enables the creation and use of policies that bring you a bigger bang for your power buck.
NEXT UP: Building a bridge to a more efficient data center.
Building a bridge to a more efficient data center.
Power management article
Putting policy-based power management in place is a bit like building a bridge over a creek. First you lay a foundation to support the bridge, and then you put the rest of the structure in place to allow safe passage over the creek. While your goal is to cross the creek, you couldn't do it without the foundation that supports the rest of the bridge structure.
In the case of power management, the foundation is a mix of advanced instrumentation capabilities embedded in servers. This foundation is extended with middleware that allows you to consolidate server information to enable the management of large server groups as a single logical unit—an essential capability in a data center that has thousands of servers.
The rest of the bridge is formed by higher-level applications that integrate and consolidate the data produced at the hardware level. While you ultimately want the management applications, you can't get there without the hardware-level technologies.
Let's look at this in more specific terms. Instrumentation at the hardware level allows higher-level management applications to monitor the power consumption of servers, set power consumption targets, and enable advanced power-management policies. These management activities are made possible by the ability of the platform-level technologies to provide real-time power measurements—in terms of watts, a unit of measure that everyone understands.
These same technologies allow power-management applications to retrieve server-level power consumption data through standard APIs and the widely used Intelligent Platform Management Interface (IPMI). The IPMI protocol spells out the data formats to be used in the exchange of power-management data.
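As a rough illustration of that retrieval path, the sketch below parses the power reading reported by ipmitool's DCMI interface. It assumes ipmitool is installed and the BMC supports DCMI power readings; exact output formatting varies by BMC, so the parser is best-effort, and the example runs against canned output rather than real hardware:

```python
# Hedged sketch: reading server power over IPMI's DCMI extension.
# Assumes ipmitool is available and the BMC supports "dcmi power reading";
# output formatting varies by BMC, so the regex below is best-effort.
import re
import subprocess

def read_power_watts(sample_output=None):
    """Return the instantaneous power reading in watts.

    If sample_output is given, parse it instead of shelling out,
    which is useful when no BMC is at hand.
    """
    if sample_output is None:
        sample_output = subprocess.check_output(
            ["ipmitool", "dcmi", "power", "reading"], text=True
        )
    match = re.search(r"Instantaneous power reading:\s*(\d+)\s*Watts",
                      sample_output)
    if not match:
        raise ValueError("no power reading found in output")
    return int(match.group(1))

# Example with canned output:
sample = "    Instantaneous power reading:                   220 Watts"
print(read_power_watts(sample))  # → 220
```

Higher-level power-management applications do essentially this at scale, pulling per-server readings over IPMI and aggregating them for group-level policies.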
Put it all together and you have a bridge to a more efficient data center.
NEXT UP: Cashing in on policy-based power management.
Cashing in on policy-based power management.
Power management article
When you apply policy-based power management in your data center, the payoff comes in the form of a wide range of business, IT, and environmental benefits. Let's start with the bottom line: a robust set of power-management policies and technologies can help you cut both operational expenditures (OpEx) and capital expenditures (CapEx).
At the OpEx level, you save money by applying policies that limit the amount of power consumed by individual servers or groups of servers. That helps you reduce power consumption across your data center.
How much can you save? Say that each 1U server requires 750 watts of power. If your usage model allows you to cap servers at 450 watts, you save 300 watts per machine. That helps you cut your costs for both power purchases and data center cooling. And chances are you can do this without paying server performance penalties, because many servers don�t use all of the power that has been allocated to them.
At the CapEx level, you cut costs by avoiding the purchase of intelligent power distribution units (PDUs) to gain power monitoring capabilities and by reducing redundancy requirements—for savings of thousands of dollars per rack.
More effective power management can also help you pack more servers into racks, and more racks into your data center, to make better use of your existing infrastructure. According to the Uptime Institute, each PDU kilowatt represents about $10,000 of CapEx, so it makes sense to try to make the best use of your available power capacity.
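Pulling the OpEx and CapEx figures above together: the 750 W allocation, the 450 W cap, and the roughly $10,000-per-kilowatt Uptime Institute estimate come from the text, while the number of servers per rack is an illustrative assumption:

```python
# Combined effect of capping 750 W servers at 450 W across one rack,
# valued at roughly $10,000 of CapEx per PDU kilowatt (Uptime Institute
# estimate cited above). Servers-per-rack is an illustrative assumption.
allocated_watts = 750
cap_watts = 450
servers_per_rack = 40          # illustrative 1U-dense rack

reclaimed_kw = (allocated_watts - cap_watts) * servers_per_rack / 1000
capex_per_kw = 10_000
capex_value = reclaimed_kw * capex_per_kw

print(reclaimed_kw)   # → 12.0 kW reclaimed per rack
print(capex_value)    # → 120000.0 dollars of PDU capacity freed per rack
```

The reclaimed kilowatts don't show up as a check in the mail; they show up as power-distribution capacity you can fill with additional servers instead of new build-out.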
Baidu.com, the largest search engine in China, understands the benefits of making better use of existing infrastructure. It partnered with Intel to conduct a proof of concept (PoC) project that used Intel® Intelligent Power Node Manager and Intel® Data Center Manager to dynamically optimize server performance and power consumption to maximize the server density of a rack.
Key results of the Baidu PoC project:
At the rack level, up to 20 percent additional capacity could be achieved within the same rack-level power envelope when an aggregated optimal power-management policy was applied.
Compared with today's data center operation at Baidu, the use of Intel Intelligent Power Node Manager and Intel Data Center Manager enabled rack densities to increase by more than 40 percent.
And even then, the benefits of policy-based power management don't stop at the bottom line. While this more intelligent approach to power management helps you reduce power consumption, it also helps you reduce your carbon footprint, meet your green goals, and comply with regulatory requirements. Benefits like those are a key part of the payoff for policy-based power management.