Leverage the AWS Sustainability Pillar to Minimize Environmental Impact

Designing and implementing your business workloads in Amazon Web Services (AWS) can be a challenge. Though the marketing pitch makes it sound straightforward to deploy and migrate workloads to the cloud, doing so requires a thorough understanding of several factors: how the cloud environment is configured, the platform’s architecture, which services are available, and how you, as a cloud consumer, can work with them.

Over the years, cloud vendors have added support, documentation, and reference materials to make cloud migration simpler. One example is the AWS Well-Architected Framework (WAF), which helps cloud solution architects, CTOs, developers, and operations teams understand best practices for architecting scalable AWS applications. Following the WAF helps ensure your AWS applications can handle business-critical workloads.

At a high level, the WAF comprises six pillars:

1. Operational excellence
2. Security
3. Reliability
4. Performance efficiency
5. Cost optimization
6. Sustainability

Here, we focus on the sustainability pillar. Explore the rest of the Well-Architected Framework series if you’re interested in learning more about the other WAF pillars.

AWS sustainability pillar

This pillar focuses on the environmental impact of cloud applications, including their overall effect on the economy and their potential impact on society. Customers are encouraged to build more sustainable applications with fewer negative consequences for future generations.

Following the flow and structure of the existing Trend Micro Guide to the Well-Architected Framework, the sections below highlight the six design principles of cloud sustainability.

1. Measure impact. You can’t predict the future effect of your cloud workloads without measuring where they are today. To get an accurate picture, you need to examine the different types of cloud infrastructure you run and decide what can be decommissioned over time to minimize environmental impact.

For example, even energy-efficient servers have a short shelf life and may be deemed inefficient in as little as three to four years. Consider creating a plan to migrate to more energy-economical hardware as it becomes available.

Measuring your application’s resource consumption also helps you spot areas to improve. For example, you may have used a lift-and-shift strategy to move an enterprise application to the cloud before realizing that its average CPU utilization is only 20%. This low utilization rate makes the app a prime candidate for migration to a mix of managed services and serverless functions, so that it only runs when and where you need it.
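
To see where you stand, you can pull utilization metrics programmatically. The following Python sketch (using boto3, with a placeholder region and instance ID) queries Amazon CloudWatch for the average CPU utilization of an EC2 instance over the past two weeks; the time range and period are assumptions you would adjust to your own workload.

import boto3
from datetime import datetime, timedelta

# Placeholder region and instance ID; substitute your own values.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,                 # one datapoint per hour
    Statistics=["Average"],
)

datapoints = response["Datapoints"]
average = sum(d["Average"] for d in datapoints) / max(len(datapoints), 1)
print(f"14-day average CPU utilization: {average:.1f}%")

An instance that consistently averages around 20% is a strong signal to right-size it or move the workload to managed or serverless services.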

Additionally, consider data storage and transfer. Collecting information you don’t need to keep hurts sustainability due to the additional power required to store this superfluous data. You should also avoid sending more data between apps and APIs than necessary. The impact of millions of applications sending a few additional bytes across the internet billions of times daily adds up.
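
One small, practical step is to request only the fields you actually need. The sketch below assumes a hypothetical DynamoDB table named Orders with the attributes shown; it uses a projection expression so that only the required attributes cross the wire instead of the full item.

import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders = dynamodb.Table("Orders")  # hypothetical table name

# Fetch only the attributes this caller needs, not the entire item.
response = orders.get_item(
    Key={"order_id": "1234"},
    ProjectionExpression="order_id, item_count, created_at",
)
print(response.get("Item"))

Multiplied across millions of calls per day, trimming a few unneeded attributes per request meaningfully reduces data transfer.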

After gathering metrics and evaluating possible outcomes, you should set achievable key performance indicators (KPIs) to help you continuously optimize productivity while reducing environmental impact.

2. Establish long-term goals. Once you’ve started measuring your data, it becomes easier to set long-term goals. Sustainability is not a short-term fix, as it requires contributions from organizations across the globe.

Ten years ago, a comparable data center required a more robust cooling system, more electricity, and more physical racks than it does today. Compute efficiency is now much higher, allowing you to fit more computing power into the same amount of space or downsize to a smaller data center that requires less cooling and electricity.

This enables you to set long-term goals for lower resource consumption, even as your computing needs increase. You can start by estimating your mid- and long-term needs for compute power and predicting how they will impact sustainability. According to the science and technology journal Nature, the best strategy for organizations is to move to hyperscale data centers run by large cloud providers, which are more efficient and sustainable.

3. Maximize utilization. After establishing long-term goals for your data center footprint, look for quick wins to maximize utilization. Cloud monitoring and automation can identify resources that are still running but unused and needlessly consuming energy. Analyze your resource use, downsize virtual machines (VMs) where you can, and shut down any machines you aren’t using.
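
Automation can handle the routine cases. The Python sketch below (using boto3) finds running instances tagged as development or test environments and stops them, for example on an evening schedule, so they aren’t consuming energy overnight; the tag key and values are assumptions.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances tagged as non-production; the tag key/values are assumptions.
response = ec2.describe_instances(
    Filters=[
        {"Name": "instance-state-name", "Values": ["running"]},
        {"Name": "tag:Environment", "Values": ["dev", "test"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} idle instances: {instance_ids}")

Run on a schedule (for example, via Amazon EventBridge), a script like this keeps non-production capacity from idling outside working hours.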

Technologies like hypervisors and containers can extend this benefit further by helping you maximize the utilization rate of your VMs and server hardware. Migrating from heavy VMs to more efficient, optimized containerized workloads and microservices often yields 85 to 90% compute efficiency for the underlying server, storage, and networking components. This migration enables you to maximize resource utilization and lighten your footprint.

As a concrete example, a service like Amazon CloudWatch can help you visualize and analyze your cloud workloads, giving you a thorough understanding of how much of your current infrastructure’s capacity you are actually using while helping you identify areas for improvement.
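
As a minimal sketch, you could also let CloudWatch flag right-sizing candidates for you. The alarm below (the instance ID, threshold, and evaluation window are assumptions) fires when an EC2 instance averages under 10% CPU for a full day, prompting a review of whether it should be downsized or retired.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="low-cpu-i-0123456789abcdef0",  # placeholder instance ID
    AlarmDescription="Flags instances averaging under 10% CPU for 24 hours as right-sizing candidates",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=3600,
    EvaluationPeriods=24,
    Threshold=10.0,
    ComparisonOperator="LessThanThreshold",
)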

4. Continuously look for efficiency optimization. Technology is changing rapidly, and this trend will only continue. From a design and architectural perspective, you should avoid designing an architecture that will remain static for years.

Instead, follow the core concepts of the WAF, which allow you to build more flexible solutions. Considering each of its pillars when designing your solution will enable you to build more reliable, efficient, and resilient cloud applications.

For example, when you make changes to address the security pillar, you will likely find an increase in efficiency, because you’ll be forced to continually re-evaluate older, less-efficient parts of your application architecture.

Look for emerging trends and consider how they might help you build more efficient applications. While many development teams resisted the move to containers, those that have embraced containerization have found they can run more workloads on fewer servers, leading to more sustainable applications and reduced costs.

Look for service providers that embrace sustainable innovation. This includes wind-powered and ocean-cooled data centers, some of which even use waste heat from servers to warm local homes.

5. Use shared managed services. Moving your workloads to large-scale managed cloud services can result in major steps toward sustainability. Cloud providers typically achieve economies of scale that most organizations can’t achieve.

Shared cloud services like serverless functions help cloud providers accomplish more computing on fewer machines, which means fewer servers are sitting idle and consuming electricity.
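
A serverless function illustrates the model: compute is allocated only for the duration of each request, so nothing sits idle between invocations. Below is a minimal AWS Lambda handler in Python; the event shape is an assumption for illustration.

import json

def handler(event, context):
    # Runs only when invoked; no server is kept warm or idle on your behalf.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }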

Further, your customers and partners are likely running their services in the cloud as well. When they connect to your cloud resources from within the same data centers, you’ll notice faster, more stable connectivity to your workloads while consuming fewer network resources and reducing the need to duplicate data.

Finally, consider the benefits realized by using other shared services, like serverless databases, that scale down to zero when not used. Unless your company is the size of a cloud provider, it’s unlikely you can build a serverless database in a cost-efficient manner. But in the cloud, you can sign up for a shared, managed database service like Amazon Aurora Serverless or Microsoft Azure SQL Database and realize significant efficiency gains.
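
As a rough sketch of what scaling to zero can look like, the snippet below creates an Aurora Serverless cluster (using boto3) with auto-pause enabled so the database releases its compute capacity entirely after ten minutes of inactivity. The identifiers and credentials are placeholders, details such as engine versions and networking are omitted, and newer Aurora Serverless versions configure scaling somewhat differently; in practice you would manage credentials through a service like AWS Secrets Manager.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="orders-serverless",   # placeholder identifier
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",                    # placeholder credentials only
    MasterUserPassword="change-me-please",
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 8,
        "AutoPause": True,                     # pause compute when the database is idle
        "SecondsUntilAutoPause": 600,
    },
)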

6. Work towards energy reduction. While maximizing server utilization and eliminating unused resources is an excellent start, it’s not necessarily enough. It is important to take a holistic approach to reducing energy consumption and environmental impact.

Consider what happens when a popular mobile app updates to a new version that is incompatible with older devices. Many users feel compelled to upgrade to a new device. Now, imagine the energy impact of all those new devices, from mining, transporting, and refining the raw materials to manufacturing the devices themselves.

Upgrades that add new features but degrade performance should be considered through a holistic lens. Your applications can have a sustainability impact beyond the electricity that powers your servers. Use engineering solutions like device farms to run tests and better understand the expected or actual impact of changes you’d like to make.

While natural energy sources like wind, water, and solar can lower technology’s ecological and energy impact, the most effective method for saving energy is to avoid expending it.

Cloud sustainability best practices

Now that you are familiar with the AWS sustainability pillar, it’s recommended to follow best practices that minimize the sustainability footprint of your cloud workload. This includes maximizing resource utilization, minimizing waste, and optimizing the deployment and power consumption necessary to support your workload.

AWS identifies six areas to focus on:

1. Region selection has a substantial impact on KPIs like performance, cost, and carbon footprint. Effectively enhancing these KPIs relies on choosing regions for your workloads that align with both your business requirements and sustainability objectives.
For example, by positioning your workload near an Amazon renewable energy project or in regions with low published carbon intensity, you can effectively reduce the carbon footprint of your cloud workload.
2. Alignment to demand requires analyzing the consumption patterns of your users and applications interacting with your workloads and resources.
Scaling your infrastructure in accordance with demand and customer requirements can ensure you’re utilizing only the necessary minimum resources. Additionally, strategically positioning resources and removing unused assets can minimize network usage for users and applications.
3. Software and architecture improvements include implementing load-smoothing patterns and maintaining consistently high utilization of deployed resources.
Over time, changing usage may leave certain components idle. That’s why it is recommended to revise patterns and architecture to consolidate under-utilized components and retire those that are no longer necessary.
Understanding the performance characteristics of the workload components that consume the most resources allows you to optimize them and improve efficiency.
Recognizing the devices your customers use to access your services can help you implement patterns that minimize the need for device upgrades.
4. Data management practices should be adopted to minimize the provisioned storage needed for your workload and the associated resource requirements.
Gaining a better understanding of your data, and using storage technologies and configurations that align with its business value and usage patterns, complements your data lifecycle strategies. This helps move data to more efficient, less resource-intensive storage options as requirements decrease.
Regularly removing data that is no longer needed further optimizes resource utilization; a lifecycle sketch follows this list.
5. Hardware and services management practices can be changed to decrease the sustainability impacts of your workload. Look to minimize the quantity of hardware required for provisioning and deployment, and carefully choose the most efficient hardware and services tailored to your specific workload.
6. Process and culture changes to development, test, and deployment practices present opportunities to reduce your sustainability impact. These include:

  • Adopting methods and processes to validate potential improvements, minimizing testing costs, and delivering small improvements.
  • Keeping your workload up-to-date to adopt efficient features, remove issues, and improve the overall efficiency of your workload.
  • Increasing the utilization of resources to develop, test, and build your workloads.
  • Using managed device farms to efficiently test a new feature on a representative set of hardware.
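
As a sketch of the data management practice in point 4, the snippet below applies an Amazon S3 lifecycle configuration (the bucket name, prefix, and retention periods are assumptions) that moves raw data to colder, less resource-intensive storage after 30 days and deletes it after a year.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-logs",   # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-raw-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)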

Conclusion

AWS published the WAF guidelines and principles to help customers architect cloud solutions with reliability, security, performance, cost, and scalability in mind. The framework was recently extended with the sustainability pillar, recognizing the importance of limiting energy consumption. This new pillar emphasizes energy awareness and the environmentally conscious management of AWS data centers. It asks customers to consider the mid- and longer-term energy efficiency of their cloud designs.

In adding the sustainability pillar, the WAF acknowledges that software does not exist in isolation but instead impacts the world around it. By migrating and deploying workloads to the cloud thoughtfully, organizations can help make the world a more sustainable place: energy-friendly, environmentally friendly, and a viable home for generations to come.
