Case study

Optimizing Web Performance with Edge Services

December 29, 2022

Executive Summary

A payment service provider supports the business growth of other companies by simplifying payments made with credit cards, debit cards, wallets, and QR codes, both online and at physical points of sale. Driven by accelerated growth averaging 240 million transactions annually, the company needed scalability and sought to improve the performance, capacity, and availability of its application hosted on AWS, using edge services to provide better customer service.

Challenge

Analyze and redesign the architecture of the current payment solution, which did not follow the AWS Well-Architected Framework.

  • Inbound and outbound traffic to the application layer was causing bottlenecks during peak business hours
  • The solution needed to automatically increase and decrease computing capacity to serve all customers promptly
  • The frontend application had to be hosted as a static website
  • The solution had to handle the static content of both the frontend and the backend
  • The database ran in only one Region, leaving the architecture without fault tolerance

Solution

With the following AWS services, requests could be properly handled, secured, and distributed to end clients with fast response times:

Amazon Route 53

The on-premises DNS server was migrated to Route 53 to handle all domain records, leveraging the 100% availability commitment of the Route 53 service level agreement.
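
As an illustration, migrating a record might look like the following boto3 sketch, which upserts an alias record pointing the application domain at a CloudFront distribution. The hosted zone ID, domain name, and distribution domain are hypothetical placeholders.

```python
import boto3

route53 = boto3.client("route53")

# Upsert an alias A record so the application domain resolves to the
# CloudFront distribution created for the frontend.
route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # hosted zone created during the migration
    ChangeBatch={
        "Comment": "Point the application domain at the CloudFront distribution",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "payments.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        # Fixed hosted zone ID used for all CloudFront alias targets
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": "d111111abcdef8.cloudfront.net",
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)
```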

Amazon CloudFront

CloudFront distributions were configured for the frontend and backend to intercept and validate all web requests. In addition, website performance improved through CloudFront's edge locations and static content delivery.
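
A minimal sketch of a frontend distribution, written with the AWS CDK v2 for Python and assuming an S3 origin for the static assets; the stack and construct IDs are hypothetical.

```python
from aws_cdk import (
    Stack,
    aws_cloudfront as cloudfront,
    aws_cloudfront_origins as origins,
    aws_s3 as s3,
)
from constructs import Construct

class FrontendStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket holding the static website assets (see the Amazon S3 section)
        site_bucket = s3.Bucket(self, "SiteBucket")

        # Distribution that serves the assets from CloudFront edge locations
        cloudfront.Distribution(
            self,
            "FrontendDistribution",
            default_behavior=cloudfront.BehaviorOptions(
                origin=origins.S3Origin(site_bucket),
                viewer_protocol_policy=cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
            ),
            default_root_object="index.html",
        )
```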

Network Load Balancer

A Network Load Balancer distributes inbound traffic to the EKS cluster nodes across different Availability Zones.
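
A minimal boto3 sketch of provisioning such a load balancer across three subnets in different Availability Zones, with cross-zone load balancing enabled; the name and subnet IDs are hypothetical.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing NLB spanning one subnet per Availability Zone
nlb = elbv2.create_load_balancer(
    Name="payments-eks-nlb",
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0aaa1111", "subnet-0bbb2222", "subnet-0ccc3333"],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# Spread traffic evenly across nodes regardless of their Availability Zone
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=nlb_arn,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```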

Amazon S3

S3 stores the static website assets.
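
Publishing the static build output might look like the following sketch; the bucket name and local build directory are hypothetical.

```python
import mimetypes
import pathlib

import boto3

s3 = boto3.client("s3")
build_dir = pathlib.Path("dist")  # local build output of the frontend

# Upload every file with an appropriate Content-Type so CloudFront
# serves it correctly.
for path in build_dir.rglob("*"):
    if path.is_file():
        content_type, _ = mimetypes.guess_type(path.name)
        s3.upload_file(
            str(path),
            "payments-frontend-assets",
            str(path.relative_to(build_dir)),
            ExtraArgs={"ContentType": content_type or "binary/octet-stream"},
        )
```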

Web Application Firewall

AWS WAF was integrated with the CloudFront distributions to protect the frontend and backend from common web attacks and to provide visibility into the requests.
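
A minimal sketch of a web ACL in the CLOUDFRONT scope using the AWS-managed common rule set; the ACL and metric names are hypothetical, and the resulting ACL ARN would then be referenced in the distribution configuration.

```python
import boto3

# Web ACLs for CloudFront must be created in us-east-1 with scope CLOUDFRONT
wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="payments-edge-acl",
    Scope="CLOUDFRONT",
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "AWSManagedCommonRuleSet",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "CommonRuleSet",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "PaymentsEdgeAcl",
    },
)
```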

Multi-Region Database

Amazon Aurora was configured as a global database with two nodes, one serving as the reader endpoint. In case of a Regional event, the database can fail over from Region 1 to Region 2.
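
A minimal boto3 sketch of how an existing cluster might be extended into a global database with a secondary cluster in a second Region; the identifiers, Regions, engine, and account ID are hypothetical placeholders.

```python
import boto3

# Region 1 (hypothetically us-east-1): promote the existing cluster into a
# global cluster.
rds_primary = boto3.client("rds", region_name="us-east-1")
rds_primary.create_global_cluster(
    GlobalClusterIdentifier="payments-global",
    SourceDBClusterIdentifier=(
        "arn:aws:rds:us-east-1:111122223333:cluster:payments-db"
    ),
)

# Region 2 (hypothetically us-west-2): add a read-only secondary cluster that
# can be promoted if Region 1 becomes unavailable.
rds_secondary = boto3.client("rds", region_name="us-west-2")
rds_secondary.create_db_cluster(
    DBClusterIdentifier="payments-db-secondary",
    Engine="aurora-mysql",
    GlobalClusterIdentifier="payments-global",
)
```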

Results

The client's expectations were met, with the following results standing out:

  • A client-server architecture composed of clients, servers, and resources, with requests managed over HTTPS
  • Stateless communication between client and server: client information is not stored between GET requests, and each request is independent of the rest
  • Cacheable data that optimizes interactions between client and server
  • A uniform interface between elements, where information is transferred in a standardized way
  • With Lambda@Edge, logs can be collected from the different CloudFront edge locations (see the sketch after this list)
  • A resilient database
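
For illustration, a Lambda@Edge viewer-request handler that emits request metadata might look like the following; the handler and the fields it logs are an assumption, not the project's actual logging code. Logs are written to CloudWatch Logs in the Region closest to the edge location that served the request.

```python
import json

def handler(event, context):
    """Log basic metadata for each viewer request and pass it through."""
    request = event["Records"][0]["cf"]["request"]
    print(json.dumps({
        "uri": request["uri"],
        "method": request["method"],
        "clientIp": request["clientIp"],
    }))
    # Returning the request unchanged lets CloudFront continue processing it
    return request
```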

Benefits

Superior Performance

The infrastructure is scalable, and the service is stable when user demand increases. The solution is capable of sustaining load peaks through its Edge locations.

Low TCO

The corresponding AWS services, configured according to the client's needs, are aligned with best practices for high availability and performance.

Fully Managed

Amazon CloudFront and Lambda@Edge ease the implementation of the solution because of their simplicity and range of configuration options.

