Amazon Web Services (AWS)


  • 1. Learn the fundamentals of the AWS Cloud:

      • What is Cloud Computing?

      • What is AWS?

      • Types of Cloud Computing

      • Cloud Computing and AWS

      • Overview of AWS fundamentals

      • Core concepts of AWS Fundamentals

        1. Operational excellence: automated operations
          IaC can provision services automatically, using the same tools and processes currently used for code
          Observability collects, analyzes, and acts on metrics to continuously improve operations

        2. Security: Zero Trust
          IAM follows the principle of least privilege (grant only the level of access that is needed)
          AWS network security is designed with Defense in Depth, the idea of layered security: the more techniques and processes there are for detecting threats, the higher the chance of detecting them, which helps stop attackers before they infiltrate the system
          Data encryption is applied both to data in transit between systems and to data within them

        3. Reliability: Blast Radius (the radius of impact)
          Fault Isolation Zones limit the blast radius
          Limits help avoid service disruption

        4. Performance efficiency: treat servers as cattle rather than pets
          Choose the right services and configuration for your performance goals
          Services can be scaled in two ways: vertical and horizontal

        5. Cost optimization:
          The spending model emphasizes OpEx, with techniques such as right-sizing, serverless technology, reservations, and Spot Instances
          Monitor and optimize budgets using services such as Cost Explorer, Tags, and Budgets

      • Overview of AWS

        AWS provides fundamental building blocks that can be assembled quickly to support almost any workload. With AWS you get a set of highly available services designed to work together, so you can build sophisticated, scalable applications.

        You get access to highly durable storage, low-cost compute, high-performance databases, management tools, and more, all with no upfront cost; you pay only for what you use. These services help organizations move faster, lower their IT costs, and scale on demand. AWS is trusted by the largest enterprises and the hottest start-ups to power a wide variety of workloads, including web and mobile applications, game development, data processing and warehousing, storage, and much more.

      • AWS Global Infrastructure

      • AWS terminology

        • Amazon Web Services (AWS): one of the providers of cloud computing, operated by Amazon. AWS offers services in three major groups: IaaS, PaaS, and SaaS. Well-known AWS products include Amazon EC2, AWS Elastic Beanstalk, and Amazon S3.

        • Auto Scaling (AS): a service that automatically adjusts resources as configured, suited to situations where demand for resources spikes sharply at particular times; it makes resource management much more efficient. (The service must be requested as a special case.)

        • Availability Zones: comparable to data centers providing compute resources. If one Availability Zone has a problem, the other Availability Zones are not affected.

        • Cloud Service Provider (CSP): a company that provides cloud computing services, whether PaaS, IaaS, or SaaS.

        • Container: a technology comparable to a package that can hold software, programs, or applications so they can be deployed on any server, reducing the steps needed to install programs and tools.

        • Content Delivery Network (CDN): a large network of servers connected worldwide over the internet, whose job is to deliver data to end users as quickly and efficiently as possible, keeping content available to viewers at all times.

        • Elastic Block Store (EBS): high-performance block storage, used for data with high throughput and transaction-heavy workloads.

        • Elastic Container Service (ECS): a high-performance container management service that scales easily to support Docker containers, letting you run and scale containerized applications on demand.

        • Elastic IP: a static public IP address that users can associate with resources to connect to the internet or exchange data within the cloud, adding flexibility.

        • Object Storage (S3): cloud storage (an object store) built to hold any kind of data for later analysis, accessible from anywhere, whether for websites, mobile apps, or any other data you need to keep.

        • Resource: an asset of a computer system, limited by processing capacity, used to solve the problems that a user's requirements specify.

        • Virtual Private Cloud (VPC): a system that lets users create separate virtual networks for each system and manage them conveniently, which makes network design and the use of cloud resources more secure.

    1. Learn the fundamentals of the AWS Cloud (continued):

      • Job roles in the cloud

        Classic IT Roles:

        • On-Premise Role: Architect

        • System Administrator: responsible for installing, supporting, and maintaining computer systems.

        • Application, Database, Network Administrator

        • Security Administrator is responsible for defending against unauthorized access.

        Cloud Roles:

        Spheres of Responsibility in the AWS Cloud Environment

        Common Duties in the Cloud:

        • Design/validate/expand solution-independent architectures and requirements:
          > Cloud Enterprise Architect - delivering cloud services for the business.

          • Collaborate to Obtain Business Requirements
            'What are business use cases?'
            'We want to build an entertainment site that can scale and has PCI compliance.'
          • Design Solution-Independent Architectures
          • Present Different Models to Business
          • Validate, Refine, and Expand Architectures
          • Manage, Monitor, and Update Architectures as Necessary

        • > Program Manager: ensuring that the cloud is managed appropriately.
          • Manage operational teams
          • Manage and Monitor Cloud Metrics - What's the user experience like?
          • Manage Service Reports

        • > Financial Manager: managing financial controls for the cloud.
          • Perform Cost Coding
          • Distribute Cost to Sales, Marketing, Engineering
          • Know Cost Usage
          • Optimize Cloud Costs

        • Design/validate/expand solution-dependent architectures and requirements:

          Cloud Infrastructure Role > Cloud Infrastructure Architect - designing solution-dependent cloud infrastructure architectures.
          • Develop and Maintain Plans
          • Collaborate with Enterprise Architect, Mobile, IoT, Gaming Specialist

          Application Role  > Cloud Application Architect - designing cloud-optimized applications.
          • Collaborate with Enterprise, Infrastructure Architect
          • Perform Capacity and Scalability Requirements
          • Provide Deep Software Knowledge to Developer
          • Advise on AWS Best Practices to Developer
            'The software architecture should be implemented this way'

        • Build the infrastructure/application:

          Infrastructure > Cloud Operations Engineer - building, monitoring, and managing the cloud infrastructure and shared services.

          • Collaborate with Cloud Infrastructure Architect
          • Ensure That Service Requirements Are Met
          • Management: OS, Patch and Update Management, Manage Templates, Capacity, Virtual Networks, Application Resiliency, Document Changes (V1, V2, V3), Tag and Review Cloud Infrastructure
          • Support: Provide Operations Support for Cloud Services, Perform Performance Tuning, Root Cause Analysis, Respond and Escalate Incidents, Documentation Review/Modification, Backup and Recovery Support, Monitor and Report on Compliance Programs (PCI, ISO27001)

          Application  > Application Developer - application development
          • Manage Application Changes, Code Release ('It's OK to release v3'), Code Deployment, Application Documentation
          • Provide Application Support, Training
          • Develop Application Optimization Techniques

        • Specifying security requirements:
          > Cloud Security Architect
          • Collaborate with Enterprise Architect, Security Operations Engineer
          • Design and Maintain Security Configuration Checklists, Risk Assessment Plans, Corporate Security Policies and Procedures, Incident Response Plans

        • Managing, monitoring, and enforcing security:
          > Security Operations Engineer
          • Implement Corporate Security Policies and Procedures
          • Manage and Enforce Compliance
          • Manage Security Configuration, Identity and Access Management and Integration with Federated Identity Sources
          • Configure Security Groups
          • Perform Vulnerability Testing and Risk Analysis
          • Create Security Assessments and Audit Reports

        • > DevOps Engineer:
          Building and managing/operating fast and scalable workflows, focusing primarily on deploying and configuring daily builds and troubleshooting failed builds.
          • Collaborate with Developer
          • Design and build Automation Solutions
          • Implement Continuous Build, Integration, Deployment, and Infrastructure as Code (Initiate CI Process > Test > Report > Commit >)
          • Review and Recommend Operational Improvements
          • Perform Application Testing and Recovery
          • Develop and Maintain Change Management Processes

        Infrastructure as Code (IaC):
        • Manually Managing Environment: AWS Management Console, APIs, CLI
        • Managing Environment Using Infrastructure as Code - Provides a reusable, maintainable, extensible, and testable infrastructure
          • Deploy Dev, Test, Prod Environment
          • Update Prod Environment

        Why Use Infrastructure as Code?
        A practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration and delivery.
        • Codify Designs
        • Rapidly Iterate on Designs
        • Easy to Maintain
        • Easily add Company Security Best Practices

        Using the DevOps Model to Develop Applications


        The AWS CloudFormation tool uses templates and can be used to deploy infrastructure as code.

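
        As a concrete illustration of the template idea (a sketch, not taken from the course material): a minimal CloudFormation template, built as a Python dict and serialized to JSON, declaring a single S3 bucket. The logical resource name NotesBucket is an arbitrary example.

```python
import json

# Minimal infrastructure-as-code sketch: a CloudFormation template that
# declares one S3 bucket. "NotesBucket" is an arbitrary logical name.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal example template: one S3 bucket",
    "Resources": {
        "NotesBucket": {
            "Type": "AWS::S3::Bucket",
        }
    },
}

# Write the template out; it could then be deployed (assuming a
# configured AWS CLI) with:
#   aws cloudformation deploy --template-file template.json --stack-name demo
print(json.dumps(template, indent=2))
```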

        There are many duties in the cloud. Some duties might not be linked to a specific role. Depending on the business or organization, certain duties might be performed by a role. Duties might also be performed by multiple roles.

        Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration and delivery.

        You must decide where to draw the line between dev and ops.

        https://content.aws.training/wbt/jobrol/en/x1/1.1.0/story_content/external_files/Competencies_for_Cloud_Roles.pdf


    1. Learn the fundamentals of the AWS Cloud
    2. Dive deeper into the fundamentals of the AWS Cloud, including AWS pricing and support and the core AWS services

      • AWS Cloud Practitioner Essentials

      • Cloud computing is the on-demand delivery of IT resources and applications through the internet with pay-as-you-go pricing.

      • The AWS Cloud offers three cloud deployment models: cloud, hybrid, and on-premises.

        • Cloud-based applications are fully deployed in the cloud and do not have any parts that run on premises.

        • A hybrid deployment connects infrastructure and applications between cloud-based resources and existing resources that are not in the cloud, such as on-premises resources. However, a hybrid deployment is not equivalent to an on-premises deployment because it involves resources that are located in the cloud.
          Deploying applications connected to on-premises infrastructure is a sample use case for a hybrid cloud deployment. Cloud computing also has cloud and on-premises (or private cloud) deployment models.

      • AWS Lambda is an AWS service that lets you run code without needing to manage or provision servers.

      • Benefits of cloud computing:

        • Trade upfront expense for variable expense: Not having to invest in technology resources before using them.

        • Stop guessing capacity: Accessing services on-demand to prevent excess or limited capacity.

        • Benefit from massive economies of scale: the aggregated cloud usage of a large number of customers results in lower pay-as-you-go prices, which helps save costs.

        • Go global in minutes: Quickly deploying applications to customers and providing them with low latency.

      • Amazon Elastic Compute Cloud (EC2) Instances Pricing / Billing / Purchasing Options:

        • On-Demand Instances: short workload, predictable pricing

        • Reserved options require a contract commitment of at least 1 year; a 3-year term comes with a larger discount.
          • Reserved Instances: long workloads
          • Convertible: long workloads with flexible instances
          • Scheduled: example - every Friday between 4 and 7 pm

        • Savings Plans are ideal for workloads that involve a consistent amount of compute usage over a 1-year or 3-year term.
          They can reduce compute costs by up to 72% compared to On-Demand costs.

        • Spot Instances are ideal for short workloads with flexible start and end times (for example, a total of 6 months), or workloads that can withstand interruptions or the loss of instances (less reliable).
          They can reduce compute costs by up to 90% compared to On-Demand costs (cheap).
          They do not require contracts or a commitment to a consistent amount of compute usage.

        • Dedicated Hosts run in a virtual private cloud (VPC) on hardware dedicated to a single customer (you book an entire physical server), giving you control over instance placement. They have the highest cost of these options, the rest of which run on shared hardware.
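
        A quick sketch of how these discount ceilings compare, assuming a hypothetical $0.10/hour On-Demand rate (the 72% and 90% figures are the maximums quoted above; the dollar rate is made up, not a real AWS price):

```python
# Compare effective EC2 hourly costs using the discount ceilings noted
# above: up to 72% for Savings Plans, up to 90% for Spot Instances.
# The $0.10/hour On-Demand rate is a made-up illustration.

ON_DEMAND_RATE = 0.10  # hypothetical USD per hour

def effective_rate(on_demand: float, discount_pct: float) -> float:
    """Hourly rate after applying a percentage discount to On-Demand."""
    return on_demand * (1 - discount_pct / 100)

savings_plan = effective_rate(ON_DEMAND_RATE, 72)  # best case for Savings Plans
spot = effective_rate(ON_DEMAND_RATE, 90)          # best case for Spot

hours_per_month = 730  # common approximation of hours in a month
print(f"On-Demand:    ${ON_DEMAND_RATE * hours_per_month:.2f}/month")
print(f"Savings Plan: ${savings_plan * hours_per_month:.2f}/month")
print(f"Spot:         ${spot * hours_per_month:.2f}/month")
```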

      • Amazon EC2 Auto Scaling: automated horizontal scaling that enables you to automatically add or remove Amazon EC2 instances in response to changing application demand.
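
        The scaling decision can be sketched as a simplified target-tracking rule (a toy model, not the actual Amazon EC2 Auto Scaling algorithm): size the fleet so that the average per-instance metric moves toward a target.

```python
import math

def desired_capacity(current: int, metric_value: float, target: float,
                     min_size: int = 1, max_size: int = 10) -> int:
    """Simplified target-tracking rule: scale the fleet proportionally so
    the per-instance metric (e.g. average CPU %) moves toward the target,
    clamped between a minimum and maximum fleet size."""
    raw = math.ceil(current * metric_value / target)
    return max(min_size, min(max_size, raw))

# CPU at 80% against a 50% target on 4 instances -> scale out
print(desired_capacity(4, 80, 50))  # 7
# CPU at 20% against a 50% target -> scale in
print(desired_capacity(4, 20, 50))  # 2
```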

      • Elastic Load Balancing (ELB) is the AWS service that automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances. It helps ensure that no single resource becomes over-utilized or has to carry the full workload on its own.
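
        The idea can be sketched with a toy round-robin dispatcher (plain Python, not the ELB API; the instance IDs are made up):

```python
from itertools import cycle

# Toy round-robin distribution, the core idea behind load balancing:
# each incoming request goes to the next target in rotation, so no single
# instance carries the whole workload. Instance IDs are made up.
targets = cycle(["i-aaa", "i-bbb", "i-ccc"])

for request_id in range(6):
    instance = next(targets)
    print(f"request {request_id} -> {instance}")
```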

      • Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers.

      • Amazon Simple Queue Service (Amazon SQS) is a message queuing service. It enables you to send, store, and receive messages between software components through a queue. It does not use the message subscription and topic model of Amazon SNS.
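
        The contrast between the two models can be sketched in plain Python (no AWS APIs): a topic pushes a copy of each message to every subscriber, while a queue holds each message until a single consumer pulls it.

```python
from collections import defaultdict, deque

# SNS-style pub/sub: publish to a topic, every subscriber gets a copy.
subscribers = defaultdict(list)   # topic name -> list of subscriber inboxes
inbox_a, inbox_b = [], []
subscribers["orders"] += [inbox_a, inbox_b]

def publish(topic: str, message: str) -> None:
    """Deliver a copy of the message to every inbox subscribed to the topic."""
    for inbox in subscribers[topic]:
        inbox.append(message)

publish("orders", "order-123")
print(inbox_a, inbox_b)   # both subscribers received a copy

# SQS-style queuing: messages wait in a queue; each is consumed once.
queue = deque()
queue.append("order-123")
print(queue.popleft())    # exactly one consumer receives the message
```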

      • Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service. Kubernetes is open-source software that enables you to deploy and manage containerized applications at scale.

      • AWS Fargate is a serverless compute engine for containers.

      • AWS Global Infrastructure:

        • A Region is a separate geographical area that contains multiple isolated locations holding AWS resources. A Region consists of two or more Availability Zones. For example, the South America (São Paulo) Region is sa-east-1. It includes three Availability Zones: sa-east-1a, sa-east-1b, and sa-east-1c.

          Selecting a Region:
          • Compliance with data governance and legal requirements
          • Proximity to customers
          • Available services within a Region
          • Pricing

        • An Availability Zone (AZ) is a single data center or group of data centers within a Region, and a fully isolated portion of the AWS global infrastructure.

        • Deploy infrastructure across at least 2 Availability Zones

        • An edge location is a data center that an AWS service uses to perform service-specific operations.

        • Amazon CloudFront is a content delivery service. It uses a network of edge locations to store cached copies of content and deliver it faster to customers all over the world. When content is cached, it is stored locally as a copy. This content might be video files, photos, webpages, and so on.

          An origin is the server from which CloudFront gets files. Examples of CloudFront origins include Amazon Simple Storage Service (Amazon S3) buckets and web servers.

        • AWS Outposts is a service you can use to run and extend AWS infrastructure, services, and tools in your own on-premises data center, in a hybrid cloud approach.

      • Provisioning AWS resources:

        • The AWS Management Console includes wizards and workflows that you can use to complete tasks in AWS services.

        • Software development kits (SDKs) enable you to develop AWS applications in supported programming languages.

        • The AWS Command Line Interface (AWS CLI) is used to automate actions for AWS services and applications through scripts.

        • AWS Elastic Beanstalk

        • AWS CloudFormation

      • Amazon Virtual Private Cloud (Amazon VPC) is a service that enables you to provision an isolated section of the AWS Cloud. In this isolated section, you can launch resources in a virtual network that you define.

      • An internet gateway is used to connect a VPC to the internet.

      • A virtual private gateway enables you to create a VPN connection between the VPC and an internal or private corporate network, such as a company's data center. The connection is private and encrypted, even though it travels over the public internet.

      • AWS Direct Connect can be used to establish a private, dedicated connection between a company's on-premises data center and the AWS VPC.

      • Public subnets contain resources that need to be accessible by the public, such as an online store's website: the section of a VPC that contains the customer- or public-facing resources.

      • Private subnets contain resources that should be accessible only through a private network, such as a database that isolates customers' personal information and order histories.
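
        A sketch of this subnet layout using the standard-library ipaddress module (the 10.0.0.0/16 VPC CIDR and the public/private split are illustrative choices, not AWS defaults):

```python
import ipaddress

# Carve a hypothetical 10.0.0.0/16 VPC CIDR into /24 subnets and assign
# one as a public subnet and one as a private subnet.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))   # 256 possible /24 blocks

public_subnet = subnets[0]    # e.g. web servers reachable from the internet
private_subnet = subnets[1]   # e.g. the customer database

print(public_subnet)    # 10.0.0.0/24
print(private_subnet)   # 10.0.1.0/24

# Membership test: which subnet does a given instance address fall into?
print(ipaddress.ip_address("10.0.1.5") in private_subnet)  # True
```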

    • A company has an application that uses Amazon EC2 instances to run the customer-facing website and Amazon RDS database instances to store customers' personal information. The developer should configure the VPC by placing the Amazon EC2 instances in a public subnet and the Amazon RDS database instances in a private subnet.

    • Network access control lists (ACLs) perform stateless packet filtering. By default, an account's default network ACL allows all inbound and outbound traffic, but you can modify it by adding your own rules.

    • Security groups are stateful. By default, security groups deny all inbound traffic, but you can add custom rules to fit operational and security needs. A security group is a virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.

    • Domain Name System (DNS) resolution is the translation of domain names to IP addresses; DNS acts as a directory that matches domain names to IP addresses.

    • Amazon Route 53 gives you the ability to manage the DNS records for domain names.

    • Instance stores are best for temporary data that is not kept long term. When you stop or terminate an EC2 instance, the data is deleted.

    • Amazon EBS volumes are best for data that requires retention. When you stop or terminate an EC2 instance, the data remains available.

    • Amazon Simple Storage Service (Amazon S3):

      • S3 Standard is a storage class that is ideal for frequently accessed data.

      • The S3 Standard-Infrequent Access (S3 Standard-IA) storage class is ideal for data that is infrequently accessed but requires high availability, i.e. must be immediately available when needed.

      • In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects' access patterns and automatically moves objects between access tiers based on those patterns.

      • S3 Glacier and S3 Glacier Deep Archive are low-cost storage classes that are ideal for data archiving. Retrieval takes minutes to a few hours for S3 Glacier, and within 12 hours for S3 Glacier Deep Archive.

    • Comparing Amazon EBS and Amazon EFS:

      • An Amazon Elastic Block Store (Amazon EBS) volume provides block-level storage that you can use with Amazon EC2 instances.
        It stores data within a single Availability Zone.
        To attach an EBS volume to an Amazon EC2 instance, both must be located within the same Availability Zone.

      • Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services and on-premises resources.
        It stores data in and across multiple Availability Zones; it is a regional service.
        The duplicated storage lets you access data concurrently from all the Availability Zones in the Region where the file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.

    • Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational databases in the AWS Cloud. Scenarios in which you should use it:
      • Using SQL to organize data
      • Storing data in an Amazon Aurora database

    • Amazon Aurora is an enterprise-class relational database.

    • Amazon DynamoDB is a serverless key-value database service. Scenarios in which you should use it:
      • Running a serverless database
      • Storing data in a key-value database
      • Scaling up to 10 trillion requests per day

    • Amazon Redshift is a data warehousing service that you can use to query and analyze data for big data analytics.

    • AWS Database Migration Service (AWS DMS) is a service you can use to migrate relational databases, non-relational databases, and other types of data stores.

    • Amazon DocumentDB is a document database service that supports MongoDB workloads.

    • Amazon Neptune is a graph database service.

    • Amazon Managed Blockchain is a service that you can use to create and manage blockchain networks with open-source frameworks.

    • Amazon ElastiCache is a service that adds caching layers on top of databases to help improve the read times of common requests.

    • Examples of security tasks that are the customer's responsibility:
      • Patching software on Amazon EC2 instances
      • Setting permissions for Amazon S3 objects

    • Examples of security tasks that are AWS's responsibility:
      • Maintaining network infrastructure and servers that run Amazon EC2 instances
      • Implementing physical security controls at data centers

    • AWS Identity and Access Management (IAM) is used to create users that enable people and applications to interact with AWS services and resources. You can assign permissions to users and groups.

    • The AWS account root user is the identity that is established when you first create an AWS account. It can be updated in the AWS Management Console.

    • An IAM policy is a document that grants or denies permissions to AWS services and resources. It can be attached to an IAM group, and applied to IAM users, groups, or roles.

    • When you grant permissions by following the principle of least privilege, you prevent users or roles from having more permissions than they need to perform specific job tasks.
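
      As an illustration of least privilege (the bucket name example-reports is hypothetical), here is a policy document that grants only s3:GetObject on a single bucket, following the standard IAM policy format:

```python
import json

# Least-privilege sketch: allow only reading objects from one bucket.
# "example-reports" is a hypothetical bucket name; Version/Statement
# follow the standard IAM policy document layout.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-reports/*",
        }
    ],
}
print(json.dumps(policy, indent=2))
```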

    • An IAM role is an identity that you can assume to gain temporary access to permissions.

    • Multi-factor authentication (MFA) is an authentication process that provides an extra layer of protection for an AWS account. It can be configured in AWS IAM.

    • Service control policies (SCPs) enable you to centrally control permissions for the accounts in an organization.

    • In AWS Organizations, you can set permissions for the organization root, an individual member account, or an organizational unit (OU) by configuring SCPs.
      You can also consolidate and manage multiple AWS accounts in a central location.

    • AWS Artifact is a service that provides on-demand access to AWS security and compliance reports, and lets you review, accept, and manage select online agreements.

    • As network traffic comes into applications, AWS Shield uses a variety of analysis techniques to detect potential Distributed Denial-of-Service (DDoS) attacks in real time and automatically mitigates them.



    • AWS Key Management Service (AWS KMS) enables you to perform encryption operations through the use of cryptographic keys.

    • Amazon Inspector checks applications for security vulnerabilities and deviations from security best practices.

    • Amazon GuardDuty is a service that provides intelligent threat detection for AWS infrastructure and resources.

    • Amazon CloudWatch is a web service that enables you to:
      • Monitor AWS infrastructure and resources in real time
      • View and access various metrics and graphs to monitor the performance and utilization of the resources that run your applications, from a single dashboard
      • Configure automatic actions and alerts in response to metrics

    • AWS CloudTrail is a web service that enables you to:
      • Track and review details of user activities and API calls that have occurred within your AWS environment
      • Filter logs to assist with operational analysis and troubleshooting
      • Automatically detect unusual account activity

    • AWS Trusted Advisor is a web service that inspects your AWS environment and provides real-time recommendations in line with AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits. The inspections include security checks, such as Amazon S3 buckets with open access permissions.
      Only the Business and Enterprise Support plans include the full set of checks; of the two, the Business Support plan has the lower cost.

    • The AWS Free Tier is a program that consists of three types of offers that allow customers to use AWS services without incurring costs: Always Free; 12 Months Free, offers available to new AWS customers for 12 months following their sign-up date; and Trials.

    • AWS Pricing Calculator enables you to create an estimate for the cost of your use cases on AWS.

    • From the Billing dashboard in the AWS Management Console, you can view details of your AWS bill, such as service costs by Region, month-to-date spend, and more.

    • Consolidated billing combines usage across accounts to receive volume pricing discounts.

    • AWS Budgets enables you to create budgets to plan service usage, service costs, and instance reservations. You can review how much your predicted AWS usage will cost by the end of the month, and set custom alerts that notify you when service usage exceeds (or is forecasted to exceed) the amount you have budgeted.
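
      The "forecasted to exceed" idea can be sketched as a linear projection from the month-to-date run rate (all dollar figures are made-up examples; AWS Budgets' actual forecasting is more sophisticated):

```python
# Project month-end spend from the current run rate and compare it to a
# budget threshold. The $200 budget and $90 spend are made-up examples.

def forecast_month_end(spend_to_date: float, day_of_month: int,
                       days_in_month: int = 30) -> float:
    """Linear projection of month-end spend from the month-to-date run rate."""
    return spend_to_date / day_of_month * days_in_month

budget = 200.00
projected = forecast_month_end(spend_to_date=90.00, day_of_month=10)
print(f"projected month-end spend: ${projected:.2f}")  # $270.00
if projected > budget:
    print("alert: usage is forecasted to exceed the budget")
```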

    • AWS Cost Explorer is a tool that enables you to visualize, understand, and manage your AWS costs and usage over time.

    • AWS Support is a resource that can answer questions about best practices, assist with troubleshooting issues, help you identify ways to optimize your use of AWS services, and so on.

      • A Technical Account Manager (TAM) is available only to AWS customers with an Enterprise Support plan.

    • AWS Marketplace is used to find third-party software that runs on AWS.

    • AWS Cloud Adoption Framework (AWS CAF):

      • The Business Perspective helps to move from a model that separates business and IT strategies into a business model that integrates IT strategy.

      • The People Perspective helps Human Resources (HR) employees prepare their teams for cloud adoption by updating organizational processes and staff skills to include cloud-based competencies.

      • The Governance Perspective helps to identify and implement best practices for IT governance and support business processes with technology.

      • The Platform Perspective helps design, implement, and optimize AWS infrastructure based on business goals and perspectives.

      • The Security Perspective helps structure the selection and implementation of permissions.

      • The Operations Perspective focuses on operating and recovering IT workloads to meet the requirements of business stakeholders.

    • Migration strategies:

      • Rehosting
      • Replatforming involves selectively optimizing aspects of an application to achieve benefits in the cloud without changing its core architecture.

      • Refactoring involves changing how an application is architected and developed, typically by using cloud-native features.

      • Repurchasing involves moving to a different product.
      • Retaining
      • Retiring involves removing an application that is no longer used or that can be turned off.

    • Snowball Edge Storage Optimized is a device that enables you to transfer large amounts of data into and out of AWS. It provides 80 TB of usable HDD storage.

    • AWS Snowmobile is a service that is used for transferring up to 100 PB of data to AWS.

    • Amazon Fraud Detector is a service that enables you to identify potentially fraudulent online activities.

    • Amazon Lex is a service that enables you to build conversational interfaces using voice and text.

    • Amazon SageMaker is a service that enables you to quickly build, train, and deploy machine learning models.

    • Amazon Textract is a machine learning service that automatically extracts text and data from scanned documents.

    • AWS DeepRacer is an autonomous 1/18-scale race car that you can use to test reinforcement learning models.

    • The AWS Well-Architected Framework:

      • The Operational excellence pillar includes the ability to run workloads effectively, gain insights into their operations, and continuously improve supporting processes to deliver business value.

      • The Security pillar includes protecting data, systems, and assets, and using cloud technologies to improve the security of workloads.

      • The Reliability pillar focuses on the ability of a workload to consistently and correctly perform its intended functions.

      • The Performance Efficiency pillar focuses on using computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.

      • The Cost Optimization pillar focuses on the ability to run systems to deliver business value at the lowest price point.

    • Advantages of cloud computing:

      • Trade upfront expense for variable expense: paying for compute time as you use it instead of investing upfront in data centers.

      • Benefit from massive economies of scale: receiving lower pay-as-you-go prices as the result of AWS customers' aggregated usage of services.

      • Stop guessing capacity: scaling infrastructure capacity in and out to meet demand.
      • Increase speed and agility
      • Stop spending money running and maintaining data centers
      • Go global in minutes: deploying an application in multiple Regions around the world.


    • Network Engineer vs. Cloud Engineer: roughly 40% more value added

      study.com/articles/cloud_engineer_vs_network_engineer.html

    • How to become a Cloud Network Engineer - Career FAQ's:
      www.youtube.com/watch?v=znmFD6W3a5w

    • AWS 101: Getting to know AWS as a newbie:
      medium.com/@beagleview/aws-101-มารู้จัก-aws-กันแบบ-newbie-ตอนที่-1-ceb3a9173b48

    • Basic information worth knowing before using AWS:
      dev.classmethod.jp/articles/what_i_learned_with_awssummitonlineasean

    • So how do you choose which services to use? - Here is an example design for a Web Application:
      medium.com/@aglcsupachaipluamjitta/amazon-aws-stack-th-1f763d590309

    • Let's build a Web API with AWS Lambda + API Gateway:
      medium.com/@beagleview/มาลองทำ-webapi-ด้วย-aws-lambda-api-gateway-กัน-path-1-799358559fb8

      medium.com/@beagleview/มาลองทำ-webapi-ด้วย-aws-lambda-api-gateway-กัน-ตอนที่-2-ab708f816a96

    • Getting to know Amazon S3: what is it, and why store data in a Bucket?:
      www.blognone.com/node/101588

    • Let's get started with AWS EC2:
      medium.com/@aglcsupachaipluamjitta/เริ่มต้นใช้งาน-aws-ec2-กันเถอะ-f258fa31fbd0

    • A cool hands-on AWS tutorial: EC2, S3, VPC
      www.youtube.com/playlist?list=PLt-twymrmZ2d25VMRQ_6_tcocYK4DEvUJ

    • Which AWS certificates exist, and what should you know before the exam?:
      blog.cloudhm.co.th/aws-certificate

    • noomnatt.medium.com/เส้นทางสู่-aws-certified-solutions-architect-2019-7c54fe819c3f

    • Techniques for the AWS Solutions Architect exam:
      www.howtoautomate.in.th/tutorial-aws-solution-architecture

    • AWS in Thai:
      www.youtube.com/playlist?list=PLcUq8DDsIcwV36KUZfFzT_rtvXuEwVM_8

    • www.coursera.org/specializations/aws-fundamentals

    • AWS Networking basics:
      nopnithi.medium.com/7d10673923d7

    Lab:

    1. Introduction to AWS Identity and Access Management (IAM):
      • Explored pre-created IAM users and groups
      • Inspected IAM policies as applied to the pre-created groups
      • Followed a real-world scenario, adding users to groups with specific capabilities enabled
      • Located and used the IAM sign-in URL
      • Experimented with the effects of policies on service access

    2. Introduction to Amazon EC2:

    3. Introduction to Amazon Virtual Private Cloud (VPC):

    4. Introduction to Amazon Simple Storage Service (S3):

    AWS Re:Invent 2020 Recap:

    1. New EC2 types:
      1. M5zn
      2. C6gn
      3. R5b
      4. G4ad
      5. D3/D3en

    2. AWS EKS Anywhere provides customers full control of both the Control Plane and Data Plane layers.

    3. Amazon Managed Service for Grafana provides a fully managed service for data visualization across multiple data sources.

    4. AWS DevOps Guru allows customers to:
      • Leverage ML-powered insights into applications and operations
      • Remediate operational issues faster with less manual effort
      • Get accurate operational insights for critical issues that impact applications

    5. Fully managed and serverless are the management types of AWS database services.

    6. Industrial solutions coming soon to the Singapore Region:
      • AWS IoT Core for LoRaWAN
      • AWS Panorama
      • Amazon Monitron
      • Amazon Lookout services

    7. The advantages and features that customers get from Amazon Connect:
      • 100% cloud-based contact center with pay-per-use pricing
      • Deliver omnichannel experiences that are natural, dynamic, and personalized, with AI capabilities
      • Agents can be located virtually anywhere

    Exam readiness:

    1. AWS Customer Service is the billing support resource available at all support levels.

    2. A user can achieve high availability for a web application hosted on AWS by using an Application Load Balancer across multiple Availability Zones in one AWS Region.

    3. A user needs to quickly deploy a non-relational database on AWS. The user does not want to manage the underlying hardware or the database software. Amazon DynamoDB can be used to accomplish this.

    4. An application is receiving SQL injection attacks from multiple external sources. AWS WAF can help automate mitigation against these attacks.

    5. The AWS Global Accelerator service helps improve application performance by reducing latency when accessing content globally.

    6. A company is building a new archiving system on AWS that will store terabytes of data. The company will NOT retrieve the data often. The Amazon S3 Glacier storage class will minimize the cost of the system.

    7. AWS Direct Connect service allows a user to establish a dedicated network connection between a company's on-premises data center and the AWS Cloud.

    8. The loose coupling cloud architecture principle states that systems should reduce interdependence.

    9. Amazon CloudFront content is cached at Edge locations.

    10. Features that AWS Organizations provides include implementing consolidated billing and enforcing governance of AWS accounts.

    11. A company needs 24x7 phone, email, and chat access, with a response time of less than 1 hour if a production system has a service interruption. The Business AWS Support plan meets these requirements at the LOWEST cost.

    12. A company with AWS Enterprise Support needs help understanding its monthly AWS bill and wants to implement billing best practices. AWS Concierge Support team is available to accomplish these goals.

    13. A company is considering a migration from on-premises to the AWS Cloud. The company's IT team needs to offload support of the workload. The IT team should use AWS Managed Services to provision, run, and support the company's infrastructure to accomplish this goal.

    14. A security officer wants a list of any potential vulnerabilities in Amazon EC2 security groups. The officer should use Amazon Inspector.

    15. Management at a large company wants to avoid long-term contracts and is interested in AWS to move from fixed costs to variable costs. Pay-as-you-go pricing is the value proposition of AWS for this company.

    16. Consolidated billing within AWS Organizations can help lower overall monthly expenses by aggregating usage across accounts to qualify for volume pricing discounts.

    17. Backups managed by AWS and automated patching of the database software are benefits of running a database on Amazon RDS compared to an on-premises database.

    18. Closing an AWS account requires the use of the AWS account root user credentials.

    19. Amazon Athena service provides the ability to quickly run one-time queries on data in Amazon S3.

    20. A company would like to host its MySQL databases on AWS and maintain full control over the operating system, database installation, and configuration. The company should use Amazon EC2 to host the databases.

  • 2X
    1. A user has an AWS account with a Business-level AWS Support plan and needs assistance with handling a production service disruption. The user should open a production system down support case.

    2. A company wants to use Amazon EC2 to deploy a global commercial application. The deployment solution should be built with the highest redundancy and fault tolerance. Based on this situation the Amazon EC2 instances should be deployed across multiple Availability Zones in two AWS Regions.

    3. A company is looking for a way to encrypt data stored on Amazon S3. The AWS Key Management Service (AWS KMS) can be used to help accomplish this.

    4. Elasticity architecture concept describes the ability to deploy resources on demand and release resources when they are no longer needed.

    5. Service control policies (SCPs) manage permissions for accounts and organizational units (OUs) within AWS Organizations.

    6. When a user wants to utilize their existing per-socket, per-core, or per-virtual-machine software licenses for a Microsoft Windows server running on AWS, the Dedicated Hosts option is required.


    8. A user can receive help with deploying popular technologies based on AWS best practices, including architecture and deployment instructions in AWS Quick Starts.

    9. AWS CloudFormation can be used to describe infrastructure as code in the AWS Cloud.

    10. When comparing AWS to on-premises Total Cost of Ownership (TCO), data center security costs are included with AWS.

      https://forms.gle/h94VfhsFtMtjrJxf7

    11. AWS CloudTrail service enables risk auditing of an AWS account by tracking and recording user actions and source IP addresses.

    12. Identity and access management is a responsibility of AWS under the AWS shared responsibility model.

    13. A company has performance and regulatory requirements that call for it to run its workload only in its on-premises data center. AWS Outposts and Snowball Edge services should the company use.

    14. A company wants to build a new architecture with AWS services. The company needs to compare service costs at various scales. AWS Pricing Calculator service should the company use to meet the requirement.

    15. AWS Snowball service facilitates transporting 50 GB of data from an on-premises data center to an Amazon S3 bucket without using a network connection.

    16. A company needs to improve the response rate of high-volume queries to its relational database. The company should use Amazon ElastiCache to offload requests from the database and improve overall response times.

    17. Amazon Simple Notification Service (Amazon SNS) uses a combination of publishers and subscribers.

    18. The Amazon EC2 Image Builder service simplifies the creation, maintenance, validation, sharing, and deployment of Linux or Windows Server images for use with Amazon EC2 and on-premises VMs.

    19. According to the AWS shared responsibility model, Updating the guest operating system on Amazon EC2 instances task is the customer's responsibility.

    20. AWS Site-to-Site VPN natively provides an encrypted connection that can be used to move data from on-premises infrastructure to the AWS Cloud.

    Exam Readiness - AWS Get Certified - Cloud Practitioner:

    The Exam: Mechanics:
    • Questions are multiple choice, with both single selection and multiple selection.
    • There is no penalty for guessing; unanswered questions are scored as incorrect.
    • You have 90 minutes to complete the exam

    Exam Strategies:
    1. Read both the question and the answers in full one time through.
    2. Identify the features mentioned in the answers.
    3. Identify text in the question that implies certain AWS features. Example: required IOPS, data retrieval times.
    4. Pay attention to qualifying clauses (e.g., 'in the most cost-effective way')

    Cloud Concepts: Review:
    • With pay-as-you-go pricing, the AWS cloud services platform delivers:
      • Compute power
      • Storage
      • Database services
      • Other resources
    • Regions and Availability Zones are more highly available, fault tolerant, and scalable than traditional data-center infrastructures.
    • AWS supports three different management interfaces to access an account:
      • Web-based AWS Management Console
      • AWS Command Line Interface (AWS CLI)
      • AWS Software Development Kits (SDKs)

    • Amazon CloudWatch - Have complete visibility of cloud resources and applications
    • Elastic Load Balancing + Auto Scaling - Deploy highly available applications that scale with demand
    • AWS Database Services - Run SQL or NoSQL databases without the management overhead
    • AWS CloudFormation - Programmatically deploy repeatable infrastructure

    • AWS is more economical than traditional data centers for applications with varying compute workloads because Amazon EC2 instances can be launched on demand when needed.

    Exam Outline:

    Domain 2: Security:
    1. Define the AWS Shared Responsibility model
    2. Define AWS Cloud security and compliance concepts
    3. Identify AWS access management capabilities
    4. Identify resources for security support

    Security: Review:
    • Security is the highest priority at AWS.
    • The Shared Responsibility Model defines security responsibilities between AWS and the customer.

    • Maintaining physical hardware is AWS's responsibility.

    • A system administrator can add an additional layer of login security to a user's AWS Management Console by enabling Multi-Factor Authentication (MFA).

    Domain 3:
    1. Define methodology of deploying and operating in the AWS Cloud
    2. Define the AWS global infrastructure
    3. Identify the core AWS services
    4. Identify resources for technology support

    • Edge locations / Points of Presence (PoPs) are the components of the AWS global infrastructure that Amazon CloudFront uses to ensure low-latency delivery.

    • Amazon Virtual Private Cloud (Amazon VPC) AWS networking service enables a company to create a virtual network within AWS.

    • AWS CloudTrail service can identify the user that made the API call when an Amazon Elastic Compute Cloud (Amazon EC2) instance is terminated.

    Domain 4:
    1. Compare and contrast the various pricing models for AWS
    2. Recognize the various account structures in relation to AWS billing and pricing
    3. Identify resources available for billing support

    • AWS Marketplace offering enables customers to find, buy, and immediately start using software solutions in their AWS environment.

    • aws.amazon.com/getting-started/hands-on

    • aws.qwiklabs.com

    • aws-labs.net

    • workshops.aws

    • wellarchitectedlabs.com

    • eksworkshop.com

    • ecsworkshop.com

    • containersfromthecouch.com

    • www.appmeshworkshop.com

    • amazon-dynamodb-labs.com

    • awssecworkshops.com

    • sagemaker-workshop.com

    • cdkworkshop.com

    • aws.amazon.com/serverless-workshops

    • learn-to-code.workshop.aws

    • lakeformation.workshop.aws

    • aws.amazon.com/training/self-paced-labs

    • observability.workshop.aws

    4X
    1. Local Zones are the type of AWS infrastructure deployment that puts AWS compute, storage, database, and other select services closer to end users to run latency-sensitive applications.

    2. A company uses Amazon DynamoDB in its AWS Cloud architecture. According to the AWS shared responsibility model, applying appropriate permissions with IAM tools and protecting the data stored in the tables are responsibilities of the company; operating system patching and upgrades are handled by AWS for this managed service.

    3. The Spot Instances pricing model will interrupt a running Amazon EC2 instance if capacity becomes temporarily unavailable.

    4. A company with an AWS Business Support plan wants to identify Amazon EC2 Reserved Instances that are scheduled to expire. The company can use AWS Trusted Advisor to accomplish this goal.

    5. Amazon Lightsail and AWS Batch are AWS compute services.

    6. According to the AWS shared responsibility model, when using Amazon RDS the customer is responsible for scheduling backups and AWS is responsible for performing them.

    7. Server-Side Encryption with S3-managed keys (SSE-S3) and Server-Side Encryption with AWS KMS-managed keys (SSE-KMS) can be used to protect objects at rest in Amazon S3.

    8. A company has a globally distributed user base. The company needs its application to be highly available and have low latency for end users. Multi-Region, active-active architecture approach will most effectively support these requirements.

    9. A company is required to store its data close to its primary users. Global footprint benefit of the AWS Cloud supports this requirement.

    10. When comparing AWS Cloud with on-premises total cost of ownership, physical storage hardware and project management expenses must be considered.

    11. A company wants an in-memory data store that is compatible with open source in the cloud. Amazon ElastiCache service should the company use.

    12. Amazon EC2 and AWS Lambda services offer compute capabilities.

    13. When using Amazon RDS, the customer is responsible for controlling network access through security groups.

    14. A company has existing software licenses that it wants to bring to AWS, but the licensing model requires licensing physical cores. The company can meet this requirement in the AWS Cloud by launching an Amazon EC2 instance on a Dedicated Host.

      https://forms.gle/fcCRdJ42uFnygtPr6

      https://forms.gle/7dJoKSUMTKSr3gLQ6

      https://forms.gle/rjANwjjYeQSAGbvU7

      https://forms.gle/ukfcjAEfx8fm2Nxz5

      https://forms.gle/Gd2RQqvrqE7UDATn8

      https://forms.gle/B6cdA2vxSDwNQVJdA

      https://forms.gle/dQN2xpj3sbqEL7JT9

    1. 'Design for automated recovery from failure' is a well-architected design principle for building cloud applications.

    2. A company wants to use an AWS service to continuously monitor the health of its application endpoints based on proximity to application users. The company also needs to route traffic to healthy Regional endpoints and to improve application availability and performance. AWS Global Accelerator will meet these requirements.

    3. A company uses Amazon EC2 instances in its AWS account for several different workloads. The company needs to perform an analysis to understand the cost of each workload. The MOST operationally efficient way to meet this requirement is to apply cost allocation tags to each workload's resources and review costs in AWS Cost Explorer.

    4. AWS Auto Scaling provides automatic scaling for all resources to power an application from a single unified interface.

    5. A solutions architect needs to maintain a fleet of Amazon EC2 instances so that any impaired instances are replaced with new ones. The solutions architect should use Amazon EC2 Auto Scaling.

    6. The AWS Artifact service provides reports that enable users to assess AWS infrastructure compliance.

    7. AWS Snowball Edge natively supports Amazon EC2 compute instances.

    8. A security engineer wants a single-tenant AWS solution to create, control, and manage their own cryptographic keys to meet regulatory compliance requirements for data security. AWS CloudHSM service should the engineer use.

    9. A company wants to implement an automated security assessment of the security and network accessibility of its Amazon EC2 instances. Amazon Inspector can be used to accomplish this.

    10. An application that runs on Amazon EC2 needs to accommodate a flexible workload that can run or terminate at any time of day. Spot Instances pricing model will accommodate these requirements at the LOWEST cost.

    11. Resource elasticity is the AWS value proposition that describes a user's ability to scale infrastructure based on demand.

    12. AWS Customer Service is the billing support resource available at all support levels.

      https://forms.gle/JCfj2b8Wa9GKUUun6

      https://forms.gle/c5VuiKNWvQygkRfq5

    • Getting started with Cloud on AWS, preparing for the AWS Certified Solutions Architect Associate exam:
      nopnithi.medium.com/fbcca23b7589

    • AWS pricing

    • How does AWS Pricing work?

    • Stephane OR Neal - AWS:
      If you want to pass the exam without getting too deep, go for Stephane; otherwise go with Neal's course
      www.youtube.com/watch?v=QKU8kZ92Ubc

      www.reddit.com/r/AWSCertifications/comments/g0kw75/neal_davis_or_stephane_maarek_for_aws_associate

    Introduction - AWS Certified Solutions Architect Associate SAA-C02:

    What's AWS?:
    • AWS (Amazon Web Services) is a Cloud Provider
    • They provide you with servers and services that you can use on demand and scale easily

    • AWS has revolutionized IT over time
    • AWS powers some of the biggest websites in the world
      • Amazon.com
      • Netflix
    image

    AWS Fundamentals: IAM & EC2:

    AWS Regions:
    • AWS has Regions all around the world
    • Names can be: us-east-1, eu-west-3...
    • A region is a cluster of data centers
    • Most AWS services are region-scoped
    • aws.amazon.com/about-aws/global-infrastructure

    AWS Availability Zones:
    • Each region has many availability zones (usually 3, min is 2, max is 6). Example:
      • ap-southeast-2a
      • ap-southeast-2b
      • ap-southeast-2c
    • Each availability zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity
    • They're separate from each other, so that they're isolated from disasters

    • They're connected with high bandwidth, ultra-low latency networking
    • www.business2community.com/cloud-computing/5-things-you-need-to-know-about-aws-regions-and-availability-zones-02295344

    IAM Introduction:
    • IAM (Identity and Access Management)
    • Your whole AWS security is there:
      • Users: Usually a physical person
      • Groups: Functions (admins, devops) / Teams (engineering, design...)
        Contains users!
      • Roles: Internal usage within AWS resources
    • Root account should never be used (or shared)
    • Users must be created with proper permissions
    • IAM is at the center of AWS
    • Policies are written in JSON (JavaScript Object Notation) Documents
      Defines what each User/Group/Role can and cannot do

    • IAM has a global view
    • Permissions are governed by Policies (JSON)
    • MFA (Multi Factor Authentication) can be setup
    • IAM has predefined 'managed policies'
    • It's best to give users the minimal amount of permissions they need to perform their job (least privilege principle)
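    Least privilege becomes concrete in a policy document. The sketch below writes a minimal read-only S3 policy to a file and syntax-checks it locally; the bucket name is a made-up placeholder, while the structure (Version, Statement, Effect/Action/Resource) follows the standard IAM policy grammar:

```shell
#!/bin/bash
# Minimal least-privilege IAM policy: read-only access to a single bucket.
# "my-example-bucket" is a placeholder, not a real resource.
cat > /tmp/least-privilege-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-example-bucket",
        "arn:aws:s3:::my-example-bucket/*"
      ]
    }
  ]
}
EOF

# Validate that the policy is well-formed JSON before attaching it in IAM
python3 -m json.tool /tmp/least-privilege-policy.json > /dev/null \
  && echo "policy JSON OK"
```

    Any action not explicitly allowed here (e.g., s3:PutObject, or anything on another bucket) is denied by default, which is exactly the least-privilege behavior described above.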

    IAM Federation:
    • Big enterprises usually integrate their own repository of users with IAM
    • This way, one can login into AWS using their company credentials
    • Identity Federation uses the SAML standard (e.g., Active Directory)

    IAM 101 Brain Dump:
    • One IAM User per PHYSICAL PERSON
    • One IAM Role per Application
    • IAM credentials should NEVER BE SHARED
    • Never, ever, ever, ever, write IAM credentials in code. EVER.
    • And even less, NEVER EVER EVER COMMIT YOUR IAM credentials
    • Never use the ROOT account except for initial setup.
    • Never use ROOT IAM Credentials

    What is Amazon EC2?:
    • EC2 is one of the most popular AWS offerings
    • EC2 = Elastic Compute Cloud = Infrastructure as a Service
    • It mainly consists of the capability of:
      • Renting virtual machines (EC2)
      • Storing data on virtual drives (EBS)
      • Distributing load across machines (ELB)
      • Scaling the services using an auto-scaling group (ASG)
    • Knowing EC2 is fundamental to understand how the Cloud works

    How to SSH into your EC2 Instance:
    • SSH is one of the most important functions. It allows you to control a remote machine, all using the command line.
    • ssh -i EC2Tutorial.pem ec2-user@x.229.240.238
    • clear => clear screen

    Introduction to Security Groups:
    • Security Groups are the foundation of network security in AWS
    • They control how traffic is allowed into or out of EC2 Instances.
      Operate at instance level.
    • It is the most fundamental skill to learn to troubleshoot networking issues
    • Only contain allow rules
    • Rules can reference IP addresses or other security groups

    Deeper Dive:
    • Security groups are acting as a 'firewall' on EC2 instances
    • They regulate:
      • Access to Ports
      • Authorized IP range - IPv4 and IPv6
      • Control of inbound network (from other to the instance)
      • Control of outbound network (from the instance to other)

    Good to know:
    • Can be attached to multiple instances
    • Locked down to a region / VPC combination
    • Does live 'outside' the EC2 - if traffic is blocked the EC2 instance won't see it
    • It's good to maintain one separate security group for SSH access
    • If your application:
      • is not accessible (time out), then it's a security group issue
      • gives a 'connection refused' error, then it's an application error or it's not launched
    • All inbound traffic is blocked by default
    • All outbound traffic is authorised by default
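    The timeout-vs-refused rule of thumb can be reproduced locally with bash's built-in /dev/tcp. A reachable host with nothing listening answers immediately with "connection refused" (an application problem), whereas a security group silently drops packets so the client hangs until timeout. This sketch assumes port 1 on localhost is closed:

```shell
#!/bin/bash
# Connection refused = host reachable but nothing listening (application problem).
# Timeout = packets silently dropped (security group problem).
# Port 1 on 127.0.0.1 is assumed to be closed on this machine.
if (exec 3<>/dev/tcp/127.0.0.1/1) 2>/dev/null; then
  result="connected"
else
  result="refused"   # immediate failure: the host actively rejected the connection
fi
echo "$result"
```

    Against an EC2 instance whose security group drops the port, the same connection attempt would instead hang with no response at all, which is why a timeout points at security groups rather than the application.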

    Private vs Public IP (IPv4):
    • Networking has two sorts of IPs. IPv4 and IPv6:
      • IPv4: 2.201.21.51
      • IPv6: 400f:2a11:5656:4:311:900:f32:78d0

    • IPv4 is still the most common format used online.
    • IPv6 is newer and solves problems for the Internet of Things (IoT).

    • IPv4 allows for 3.7 billion different addresses in the public space
    • IPv4: [0-255].[0-255].[0-255].[0-255].
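    The [0-255] x 4 format above implies the size of the IPv4 address space:

```shell
#!/bin/bash
# Each of the 4 octets has 256 possible values (0-255),
# so the total IPv4 space is 256^4 = 2^32 addresses.
# Roughly 3.7 billion of these are usable public addresses
# once reserved/private ranges are excluded.
total=$((256 ** 4))
echo "$total"   # 4294967296
```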

    Fundamental Differences:
    • Public IP:
      • means the machine can be identified on the internet (WWW)
      • Must be unique across the whole web (no two machines can have the same public IP).
      • Can be geo-located easily

    • Private IP:
      • means the machine can only be identified on a private network
      • The IP must be unique across the private network
      • BUT two different private networks (two companies) can have the same IPs.
      • Machines connect to WWW using an internet gateway (a proxy)
      • Only a specified range of IPs can be used as private IP
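    The "specified range" of private IPs comes from RFC 1918: 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16. A small bash check against those ranges:

```shell
#!/bin/bash
# Print "private" if an IPv4 address falls in an RFC 1918 range, else "public".
is_private() {
  local ip="$1" o1 o2
  IFS=. read -r o1 o2 _ _ <<< "$ip"
  if [ "$o1" -eq 10 ] \
     || { [ "$o1" -eq 172 ] && [ "$o2" -ge 16 ] && [ "$o2" -le 31 ]; } \
     || { [ "$o1" -eq 192 ] && [ "$o2" -eq 168 ]; }; then
    echo private
  else
    echo public
  fi
}

is_private 10.0.1.5      # private
is_private 172.31.0.1    # private (the default VPC range uses 172.31.0.0/16)
is_private 8.8.8.8       # public
```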

    Elastic IPs:
    • When an EC2 instance is stopped and then started, its public IP can change.
    • If a fixed public IP is needed for the instance, an Elastic IP is required
    • An Elastic IP is a public IPv4 address that is owned as long as it is not deleted
    • Can attach it to one instance at a time

    • With an Elastic IP address, can mask the failure of an instance or software by rapidly remapping the address to another instance in account.
    • Can only have 5 Elastic IPs in the account (can ask AWS to increase that).
    • Overall, try to avoid using Elastic IP:
      • They often reflect poor architectural decisions
      • Instead, use a random public IP and register a DNS name to it
      • Or, use a Load Balancer and don't use a public IP

    In AWS EC2:
    • By default, EC2 machine comes with:
      • A private IP for the internal AWS Network
      • A public IP, for the WWW.
    • When doing SSH into EC2 machines:
      • Can't use a private IP, because we are not in the same network
      • Can only use the public IP.
    • If machine is stopped and then started,
      the public IP can change

    Launching an Apache Server on EC2:
    • Let's leverage our EC2 instance

    • We'll install an Apache Web Server to display a web page
    • We'll create an index.html that shows the hostname of our machine

    • #!/bin/bash
    • # get admin privileges
    • sudo su
    • # install httpd (Amazon Linux 2 version)
    • yum update -y
    • yum install -y httpd.x86_64
    • systemctl start httpd.service
    • systemctl enable httpd.service
    • curl localhost:80
    • # remember to allow port 80 inbound in the instance's security group

    • echo "Hello World" > /var/www/html/index.html

    • echo "<h1>Hello World from $(hostname -f)</h1>" > /var/www/html/index.html

    • EC2_AVAIL_ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    • echo "<h1>Hello World from $(hostname -f) in AZ $EC2_AVAIL_ZONE</h1>" > /var/www/html/index.html

    1. ap-southeast-1a is an Availability Zone.

    2. Availability Zones are in isolated data centers; this helps guarantee that multiple AZs won't all fail at once (due to a natural disaster, for example).

    3. All of Users, Roles, Policies, and Groups are IAM components.

    4. IAM is a global service (encompasses all regions), IAM Users are NOT defined on a per-region basis.

    5. An IAM user can belong to multiple groups.

    6. When getting started with AWS, a manager wants things to remain simple yet secure, wants managing engineers to be easy, and doesn't want to re-invent the wheel every time someone joins the company: create multiple IAM users and groups, assign policies to groups, and add new users to groups. This is best practice for a big organization.

    7. Never share IAM credentials. If colleagues need access to AWS they'll need their own account.

    8. Pay for an EC2 instance compute component only when it's in 'running' state.

    9. Getting a permission error when trying to SSH into a Linux instance means the key file's permissions are too open; fix them with chmod 0400.

    10. Any timeout errors when trying to connect to an EC2 instance (not just SSH but also HTTP, for example) mean a misconfiguration of security groups.

    11. When a security group is created, Deny all traffic inbound and allow all traffic outbound is the default behavior.

    12. Security group rules can reference IP addresses and CIDR blocks.

    13. EC2 User Data provides startup instructions to EC2 instances.

    EC2 On Demand:
    • Pay for what use:
      • Linux - billing per second, after the first minute
      • All other operating systems (ex: Windows) - billing per hour
    • Has the highest cost but no upfront payment
    • No long term commitment

    • Recommended for short-term and uninterrupted workloads, where the application's behavior can't be predicted.

    EC2 Reserved Instances:
    • Up to 75% discount compared to On-demand
    • Reservation period: 1 year = + discount | 3 years = +++ discount
    • Purchasing options: no upfront | partial upfront = + discount | All upfront = ++ discount
    • Reserve a specific instance type
    • Recommended for steady-state usage applications (think database)

    • Convertible Reserved Instance:
      • can change the EC2 instance type
      • Up to 54% discount

    • Scheduled Reserved Instances:
      • launch within the time window reserved
      • When require a fraction of day / week / month
      • Still commitment over 1 to 3 years

    EC2 Spot Instances:
    • Can get a discount of up to 90% compared to On-demand (the biggest discount)
    • Instances that can be 'lost' at any point in time if the max price is less than the current spot price
    • The MOST cost-efficient instances in AWS
    • Useful for short workloads that are resilient to failure (less reliable)
      • Batch jobs
      • Data analysis
      • Image processing
      • Any distributed workloads
      • Workloads with a flexible start and end time
    • Not suitable for critical jobs or databases

    • Great combo: Reserved Instances for baseline + On-Demand & Spot for peaks

    EC2 Dedicated Hosts:
    • A physical server with EC2 instance capacity fully dedicated to your use
    • Can help address compliance requirements
    • Reduce costs by allowing to use existing server-bound software licenses
    • Allocated to the account for a 3-year reservation period
    • More expensive

    • Useful for software that has a complicated licensing model (BYOL - Bring Your Own License)
    • Or for companies that have strong regulatory or compliance needs

    • Per host billing
    • Visibility of sockets, cores, host ID
    • Affinity between a host and instance
    • Targeted instance placement
    • Add capacity using an allocation request

    EC2 Dedicated Instances:
    • Instances running on hardware that's dedicated
    • May share hardware with other instances in the same account
    • No control over instance placement (can move hardware after Stop / Start)

    • Per instance billing (subject to a $2 per hour per-region fee)

    Common Characteristic Dedicated Instances and Hosts:
    • Enables the use of dedicated physical servers
    • Automatic Instance placement

    Which host / purchasing option is right?:
    • On demand: coming and staying in a resort whenever we like, paying the full price
    • Reserved: like planning ahead; if we plan to stay for a long time, we may get a good discount.
    • Spot instances: the hotel allows people to bid for the empty rooms and the highest bidder keeps the rooms. Can get kicked out at any time
    • Dedicated Hosts: Book an entire building of the resort

    Price Comparison
    Example - m4.large - ap-southeast-1:
    • On-demand: $0.125 per Hour
    • Spot Instance (Spot Price): $0.0311 - $0.1231
    • Spot Block (1 to 6 hours): $0.069 - $0.152
    • Reserved Instance (12 months) - no upfront: $0.078
    • Reserved Instance (12 months) - all upfront: $0.072
    • Reserved Instance (36 months) - no upfront: $0.053
    • Reserved Convertible Instance (12 months) - no upfront: $0.089
    • Reserved Dedicated Instance (12 months) - all upfront: $0.08
    • Dedicated Host: On-demand price
    • Dedicated Host Reservation: Up to 70% off
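    From the m4.large figures above, each option's discount versus on-demand follows directly:

```shell
#!/bin/bash
# Percentage discount vs. the $0.125/hour on-demand rate,
# using the m4.large ap-southeast-1 prices listed above.
discount() {
  awk -v od=0.125 -v p="$1" 'BEGIN { printf "%.1f", (od - p) / od * 100 }'
}

echo "Reserved 12mo no-upfront: $(discount 0.078)%"   # 37.6%
echo "Reserved 36mo no-upfront: $(discount 0.053)%"   # 57.6%
echo "Spot (low end):           $(discount 0.0311)%"  # 75.1%
```

    This is why the rule of thumb says Reserved Instances approach the advertised up-to-75% discount only at the 3-year commitment, while Spot can exceed it.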

    EC2 Spot Instance Requests:
    • Can get a discount of up to 90% compared to On-demand

    • Define max spot price and get the instance while current spot price < max
      • The hourly spot price varies based on offer and capacity
      • If the current spot price > max price, can choose to stop or terminate the instance, with a 2-minute grace period.
    • Other strategy: Spot Block
      • 'block' spot instance during a specified time frame (1 to 6 hours) without interruptions
      • In rare situations, the instance may be reclaimed

    • Used for batch jobs, data analysis, or workloads that are resilient to failures.
    • Not great for critical jobs or databases
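    The pricing rule above can be sketched as a simple comparison; the decision logic below is an illustration of the rule, not the AWS API:

```python
# Minimal sketch of the Spot pricing rule: you keep the instance while the
# current spot price stays at or below your max price.
def spot_action(current_price: float, max_price: float) -> str:
    """Return what happens to a Spot Instance at this price point."""
    if current_price <= max_price:
        return "keep running"
    # Above the max price, AWS interrupts the instance: it is stopped or
    # terminated (per the request settings) after a 2-minute grace period.
    return "interrupt (2-minute warning)"

print(spot_action(0.0311, 0.10))  # keep running
print(spot_action(0.1231, 0.10))  # interrupt (2-minute warning)
```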

    How to terminate Spot Instances?:
    image
    image
    Can only cancel Spot Instance requests that are open, active, or disabled.
    Cancelling a Spot Request does not terminate instances
    Must first cancel a Spot Request, and then terminate the associated Spot Instances


  • Spot Fleets:
    • Set of Spot Instances + (optional) On-Demand Instances
    • Will try to meet the target capacity with price constraints
      • Define possible launch pools: instance type (m5.large), OS, Availability Zone
      • Can have multiple launch pools, so that the fleet can choose
      • Spot Fleet stops launching instances when reaching capacity or max cost
    • Strategies to allocate Spot Instances:
      • lowestPrice: from the pool with the lowest price (cost optimization, short workload)
      • diversified: distributed across all pools (great for availability, long workloads)
      • capacityOptimized: pool with the optimal capacity for the number of instances

    • Allow us to automatically request Spot Instances with the lowest price
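    A sketch of how the three allocation strategies might pick a launch pool; the pool names, prices, and capacities are made-up illustrative values:

```python
# Hypothetical sketch of Spot Fleet allocation strategies choosing among
# launch pools. Pool data (price, spare capacity) is illustrative only.
pools = [
    {"name": "m5.large/ap-southeast-1a", "price": 0.035, "capacity": 120},
    {"name": "m5.large/ap-southeast-1b", "price": 0.031, "capacity": 40},
    {"name": "m5.large/ap-southeast-1c", "price": 0.040, "capacity": 300},
]

def pick_pool(strategy: str):
    if strategy == "lowestPrice":        # cheapest pool (short workloads)
        return min(pools, key=lambda p: p["price"])
    if strategy == "capacityOptimized":  # most spare capacity (fewer interruptions)
        return max(pools, key=lambda p: p["capacity"])
    if strategy == "diversified":        # spread across all pools (availability)
        return pools
    raise ValueError(f"unknown strategy: {strategy}")

print(pick_pool("lowestPrice")["name"])        # the 1b pool (cheapest)
print(pick_pool("capacityOptimized")["name"])  # the 1c pool (most capacity)
```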

    EC2 Instance Types - Main ones:
    • R: applications that need a lot of RAM - in-memory caches
    • C: applications that need good CPU - compute / databases
    • M: applications that are balanced (think 'medium') - general / web app
    • I: applications that need good local I/O (instance storage) - databases
    • G: applications that need a GPU - video rendering / machine learning

    • T2/T3: burstable instances (up to a capacity)
    • T2/T3 - unlimited: unlimited burst

    • Real-world tip: use Easy Amazon EC2 / RDS Instance Comparison
      https://instances.vantage.sh

    Burstable Instances (T2/T3):
    • AWS has the concept of burstable instances (T2/T3 machines)

    • Burst means that overall, the instance has OK CPU performance.
    • When the machine needs to process something unexpected (a spike in load for example), it can burst, and CPU can be VERY good.
    • If the machine bursts, it utilizes 'burst credits'
    • If all the credits are gone, the CPU becomes BAD
    • If the machine stops bursting, credits are accumulated over time

    • Burstable instances can be amazing to handle unexpected traffic, with the assurance that it will be handled correctly
    • If instance consistently runs low on credit, need to move to a different kind of non-burstable instance
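    The credit mechanics described above can be illustrated with a toy model; the earn rate, baseline, and starting balance below are invented numbers, not real T2/T3 figures:

```python
# Toy model of T2/T3 burst credits: credits accrue while CPU usage is at or
# below the baseline and are spent when usage exceeds it. All rates are
# illustrative, not actual AWS credit rates.
def run(loads, credits=60.0, earn_rate=6.0, baseline=20.0):
    """loads: CPU % per hour. Returns the credit balance after each hour."""
    history = []
    for load in loads:
        if load > baseline:
            credits -= (load - baseline)  # bursting spends credits
        else:
            credits += earn_rate          # quiet hours earn credits back
        credits = max(credits, 0.0)       # out of credits => throttled CPU
        history.append(credits)
    return history

# A traffic spike drains the balance; quiet hours slowly rebuild it.
print(run([10, 10, 90, 90, 10, 10]))  # [66.0, 72.0, 2.0, 0.0, 6.0, 12.0]
```

    In this toy run the balance hits zero during the spike: that is the point where a real T2/T3 instance would be throttled, which is the signal to consider a non-burstable instance type.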

    CPU Credits:

    T2/T3 Unlimited:

    What's an AMI?:
    • As seen, AWS comes with base images such as: Ubuntu, Fedora, Red Hat, Windows, etc.
    • These images can be customized at runtime using EC2 User data

    • But what if we could create our own image, ready to go?
    • That's an AMI - an image to use to create instances
    • AMIs can be built for Linux or Windows machines

    Why would use a custom AMI?:
    • Using a custom built AMI can provide the following advantages:
      • Pre-installed packages needed
      • Faster boot time (no need for ec2 user data at boot time)
      • Machine comes configured with monitoring / enterprise software
      • Security concerns - control over the machines in the network
      • Control of maintenance and updates of AMIs over time
      • Active Directory Integration out of the box
      • Installing app ahead of time (for faster deploys when auto-scaling)
      • Using someone else's AMI that is optimized for running an app, DB, etc.
    • AMIs are built for a specific AWS region (!)

    Using Public AMIs:
    • Can leverage AMIs from other people
    • Can also pay for other people's AMI by the hour
      • These people have optimized the software
      • The machine is easy to run and configure
      • Basically rent 'expertise' from the AMI creator

    • AMIs can be found and published on the AWS Marketplace

    • Warning:
      • Do not use an AMI you don't trust!
      • Some AMIs might come with malware or may not be secure for enterprise

    AMI Storage:
    • AMIs take up space, and they live in Amazon S3
    • Amazon S3 is a durable, cheap and resilient storage service where most backups live (but you won't see AMIs in the S3 console)
    • By default, AMIs are private, and locked for account / region
    • Can also make AMIs public and share them with other AWS accounts or sell them on the AMI Marketplace

    AMI Pricing:
    • AMIs live in Amazon S3, so you get charged for the actual space they take in Amazon S3
    • Amazon S3 pricing in AP-SOUTHEAST-1:
      • First 50 TB / Month: $0.025 per GB
      • Next 450 TB / Month: $0.024 per GB
    • Overall it is quite inexpensive to store private AMIs.
    • Make sure to remove the AMIs you don't use


    Cross Account AMI Copy (FAQ + Exam Tip):
    • Can share an AMI with another AWS account.
    • Sharing an AMI does not affect the ownership of the AMI.
    • If you copy an AMI that has been shared with your account, you are the owner of the target AMI in your account.
    • To copy an AMI that was shared with you from another account, the owner of the source AMI must grant you read permissions for the storage that backs the AMI, either the associated EBS snapshot (for an Amazon EBS-backed AMI) or an associated S3 bucket (for an instance store-backed AMI).
    • Limits:
      • You can't copy an encrypted AMI that was shared with you from another account. Instead, if the underlying snapshot and encryption key were shared with you, you can copy the snapshot while re-encrypting it with a key of your own. You own the copied snapshot, and can register it as a new AMI.
      • You can't copy an AMI with an associated billingProduct code that was shared with you from another account. This includes Windows AMIs and AMIs from the AWS Marketplace. To copy a shared AMI with a billingProduct code, launch an EC2 instance in your account using the shared AMI and then create an AMI from the instance.

    Placement Groups:
    • Sometimes want control over the EC2 Instance placement strategy
    • That strategy can be defined using placement groups
    • When create a placement group, specify one of the following strategies for the group:
      • Cluster - clusters instances into a low-latency group in a single Availability Zone
      • Spread - spreads instances across underlying hardware (max 7 instances per group per AZ) - critical applications
      • Partition - spreads instances across many different partitions (which rely on different sets of racks) within an AZ. Scales to 100s of EC2 instances per group (Hadoop, Cassandra, Kafka)


  • Placement Groups:

    Cluster:

    image
    • Pros: Great network (Low latency 10 Gbps bandwidth between instances)
    • Cons: If the rack fails, all instances fail at the same time
    • Use case:
      • Big Data job that needs to complete fast
      • Application that needs extremely low latency and high network throughput

    Spread:

    image
    • Pros:
      • Can span across Availability Zones (AZ)
      • Reduced risk of simultaneous failure
      • EC2 Instances are on different physical hardware
    • Cons: Limited to 7 instances per AZ per placement group
    • Use case:
      • Application that needs to maximize high availability
      • Critical Applications where each instance must be isolated from failure from each other

    Partition:

    image
    • Up to 7 partitions per AZ
    • Can span across multiple AZs in the same region
    • Up to 100s of EC2 instances
    • The instances in a partition do not share racks with the instances in the other partitions
    • A partition failure can affect many EC2 but won't affect other partitions
    • EC2 instances get access to the partition information as metadata
    • Use cases: HDFS, HBase, Cassandra, Kafka

    Elastic Network Interfaces (ENI):
    • Logical component in a VPC that represents a virtual network card
    • The ENI can have the following attributes:
      • Primary private IPv4, one or more secondary IPv4
      • One Elastic IP (IPv4) per private IPv4
      • One Public IPv4
      • One or more security groups
      • A MAC address
    • Can create ENIs independently and attach them on the fly (move them) to EC2 instances for failover
    • Bound to a specific availability zone (AZ)

    EC2 Hibernate:
    • We know we can stop, terminate instances:
      • Stop: the data on disk (EBS) is kept intact in the next start
      • Terminate: any EBS volume (the root volume, by default) that is set up to be destroyed on termination is lost
    • On start, the following happens:
      • First start: the OS boots & the EC2 User Data script is run
      • Following starts: the OS boots up
      • Then application starts, caches get warmed up, and that can take time!
    • Introducing EC2 Hibernate:
      • The in-memory (RAM) state is preserved
      • The instance boot is much faster!
        (the OS is not stopped / restarted)
      • Under the hood: the RAM state is written to a file in the root EBS volume
      • The root EBS volume must be encrypted
    • Use cases:
      • long-running processing
      • saving the RAM state
      • services that take time to initialize
    • Supported instance families - C3, C4, C5, M3, M4, M5, R3, R4, and R5.
    • Instance RAM size - must be less than 150 GB.
    • Instance size - not supported for bare metal instances.
    • AMI: Amazon Linux 2, Linux AMI, Ubuntu... & Windows
    • Root Volume: must be EBS, encrypted, not instance store, and large enough to hold the RAM dump
    • Available for On-Demand and Reserved Instances
    • An instance cannot be hibernated more than 60 days

    EC2 for Solution Architects:
    • EC2 instances are billed by the second, t2.micro is free tier
    • On Linux / Mac we use SSH, on Windows we use Putty
    • SSH is on port 22, lock down the security group to your IP
    • Timeout issues => Security groups issues
    • Permission issues on the SSH key => run 'chmod 0400'
    • Security Groups can reference other Security Groups instead of IP ranges
    • Know the difference between Private, Public and Elastic IP
    • Can customize an EC2 instance at boot time using EC2 User Data
    • The 4 EC2 launch modes:
      • On demand
      • Reserved
      • Spot instances
      • Dedicated Hosts
    • The basic instance types: R, C, M, I, G, T2/T3
    • Can create AMIs to pre-install software on EC2 => faster boot
    • AMI can be copied across regions and accounts
    • EC2 instances can be started in placement groups:
      • Cluster
      • Spread

    • Cluster placement groups place instances next to each other, giving high-performance computing and networking while instances talk to each other, e.g. when performing big data analysis.

    • Plan on running an open-source MongoDB database year-round on EC2? Choose the Reserved Instances launch mode. This saves cost, since you know the instance will be up for a full year.

    • Built and published an AMI in the ap-southeast-2 region, and a colleague in the us-east-1 region cannot see it? An AMI created in a region can only be seen in that region.

    • Launching an EC2 instance in us-east-1 using this Python script snippet:
      ec2.create_instances(ImageId='ami-c34b6f8', MinCount=1, MaxCount=1)
      It works well, so the script is deployed to us-west-1 as well. There, the script fails with an 'ami not found' error, because AMIs are region-locked and the same ID cannot be used across regions.

    • Would like to deploy a database technology whose vendor license bills based on the physical cores and underlying network socket visibility? The Dedicated Hosts launch mode gives visibility into them.

    • Launching an application on EC2 where installing the application takes about 30 minutes, and want to minimize the time for an instance to boot up and serve traffic? Create an AMI after installing the application and launch from that AMI: new instances start with the application already installed, bypassing the installation step.

    • Running a critical workload of three hours per week, on Tuesday? As a solutions architect, choose the Scheduled Reserved Instances launch type to maximize cost savings while ensuring application stability.

    • It's easy to horizontally scale thanks to cloud offerings such as Amazon EC2

    High Availability:
    • Usually goes hand in hand with horizontal scaling
    • Means running application / system in at least 2 data centers (== Availability Zones)
    • The goal is to survive a data center loss
    • Can be passive (for RDS Multi AZ for example)
    • Can be active (for horizontal scaling)

    High Availability & Scalability For EC2:
    • Vertical Scaling: Increase instance size (= scale up / down)
      • From: t2.nano - 0.5G of RAM, 1 vCPU
      • To: u-12tb1.metal - 12.3TB of RAM, 448 vCPUs
    • Horizontal Scaling: Increase number of instances (= scale out / in)
      • Auto Scaling Group
      • Load Balancer
    • High Availability: Run instances for the same application across multi AZ
      • Auto Scaling Group multi AZ
      • Load Balancer multi AZ

    What is load balancing?:
    • Load balancers are servers that forward internet traffic to multiple servers (EC2 Instances) downstream.

    Why use a load balancer?:
    • Spread load across multiple downstream instances
    • Expose a single point of access (DNS) to application
    • Seamlessly handle failures of downstream instances
    • Do regular health checks to instances
    • Provide SSL termination (HTTPS) for websites
    • Enforce stickiness with cookies
    • High availability across zones
    • Separate public traffic from private traffic

    Why use an EC2 Load Balancer?:
    • An ELB (Elastic Load Balancer) is a managed load balancer
      • AWS guarantees that it will be working
      • AWS takes care of upgrades, maintenance, high availability
      • AWS provides only a few configuration knobs
    • It costs less to set up your own load balancer, but it will be a lot more effort.
    • It is integrated with many AWS offering / services

    Health Checks:
    • Are crucial for Load Balancers
    • They enable the load balancer to know if instances it forwards traffic to are available to reply to requests
    • The health check is done on a port and a route (/health is common)
    • If the response is not 200 (OK), then the instance is unhealthy
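    The health-check rule can be sketched in a few lines; the instance IDs and status codes below are illustrative:

```python
# Sketch of the health-check rule above: probe a port + route (e.g. /health)
# and treat only an HTTP 200 response as healthy. Instance IDs and status
# codes here are made-up sample data.
def classify(status_code: int) -> str:
    return "healthy" if status_code == 200 else "unhealthy"

responses = {"i-aaa": 200, "i-bbb": 503, "i-ccc": 200}
healthy = [i for i, code in responses.items() if classify(code) == "healthy"]

# Only healthy instances keep receiving traffic from the load balancer.
print(healthy)  # ['i-aaa', 'i-ccc']
```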

    Types of load balancer on AWS:
    • AWS has 3 kinds of managed Load Balancers
    • Classic Load Balancer (v1 - old generation) - 2009
      • HTTP, HTTPS, TCP
    • Application Load Balancer (v2 - new generation) - 2016
      • HTTP, HTTPS, WebSocket
    • Network Load Balancer (v2 - new generation) - 2017
      • TCP, TLS (secure TCP) & UDP
    • Overall, it is recommended to use the newer / v2 generation load balancers as they provide more features
    • Can setup internal (private) or external (public) ELBs

    Load Balancer Security Groups:
    • Configure the EC2 instances' security group to allow traffic only from the load balancer's security group

    Good to Know:
    • LBs can scale but not instantaneously - contact AWS for a 'warm-up'
    • Troubleshooting:
      • 4xx errors are client induced errors
      • 5xx errors are application induced errors
      • LB Errors 503 means at capacity or no registered target
      • If the LB can't connect to application, check security groups!
    • Monitoring:
      • ELB access logs will log all access requests (so can debug per request)
      • CloudWatch Metrics will give aggregate statistics (ex: connections count)

    Classic Load Balancers (v1):
    • Supports TCP (Layer 4), HTTP & HTTPS (Layer 7)
    • Health checks are TCP or HTTP based
    • Fixed hostname XXX.region.elb.amazonaws.com

    Application Load Balancer (v2):
    • Application load balancer is Layer 7 (HTTP)
    • Load balancing to multiple HTTP applications across machines (target groups) / applications on the same machine (ex: containers)
    • Support for HTTP/2 and WebSocket / redirects (from HTTP to HTTPS for example)
    • Routing tables to different target groups:
      • Routing based on path in URL (example.com/users & example.com/posts)
      • Based on hostname in URL (one.example.com & other.example.com)
      • On Query String, Headers (example.com/users?id=123&order=false)
    • Are a great fit for micro services & container-based application (example: Docker & Amazon ECS)
    • Has a port mapping feature to redirect to a dynamic port in ECS
    • In comparison, we'd need multiple Classic Load Balancer per application
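    A hypothetical sketch of ALB-style rule evaluation (host, path, and query-string conditions); the target-group names are made up:

```python
# Hypothetical sketch of ALB rule evaluation: route a request to a target
# group based on hostname, URL path, or query string. Target-group names
# and rules are illustrative, not a real ALB configuration.
from urllib.parse import urlparse, parse_qs

def route(url: str) -> str:
    parts = urlparse(url)
    if parts.hostname == "one.example.com":
        return "tg-one"                  # host-based rule
    if parts.path.startswith("/users"):
        qs = parse_qs(parts.query)
        if qs.get("order") == ["false"]:
            return "tg-users-unordered"  # query-string rule
        return "tg-users"                # path-based rule
    return "tg-default"                  # fallback target group

print(route("http://example.com/users?id=123&order=false"))  # tg-users-unordered
print(route("http://one.example.com/home"))                  # tg-one
```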

    Target Groups:
    • EC2 instances (can be managed by an Auto Scaling Group) - HTTP
    • ECS tasks (managed by ECS itself) - HTTP
    • Lambda functions - HTTP request is translated into a JSON event
    • IP Address - must be private IPs
    • ALB can route to multiple target groups
    • Health checks are at the target group level

    Good to Know:
    • Fixed hostname (XXX.region.elb.amazonaws.com)
    • The application servers don't see the IP of the client directly
      • The true IP of the client is inserted in the header X-Forwarded-For
      • Can also get Port (X-Forwarded-Port) and proto (X-Forwarded-Proto)
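      Reading the client details from these headers might look like this; the sample IP values are made up:

```python
# Behind an ALB the socket peer is the load balancer itself, so the app
# reads the original client from the X-Forwarded-* headers. Header names
# are the standard ones; the sample values are invented.
headers = {
    "X-Forwarded-For": "203.0.113.7, 10.0.3.25",  # client first, then proxies
    "X-Forwarded-Proto": "https",
    "X-Forwarded-Port": "443",
}

# The left-most entry in X-Forwarded-For is the original client IP.
client_ip = headers["X-Forwarded-For"].split(",")[0].strip()
print(client_ip, headers["X-Forwarded-Proto"], headers["X-Forwarded-Port"])
```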

    Network Load Balancer (v2):
    • NLBs (Layer 4) allow you to:
      • Forward TCP & UDP Based traffic to instances
      • Handle millions of requests per second
      • Less latency ~100 ms (vs 400 ms for ALB)
    • Has one static IP per AZ, and supports assigning Elastic IP (helpful for whitelisting specific IP)
    • Are used for extreme performance, TCP or UDP traffic
    • Not included in the AWS free tier

    Load Balancer Stickiness:
    • It is possible to implement stickiness so that the same client is always redirected to the same instance behind a load balancer
    • This works for Classic Load Balancers & ALBs
    • The 'cookie' used for stickiness has an expiration date control
    • Use case: make sure the user doesn't lose his session data
    • Enabling stickiness may bring imbalance to the load over the backend EC2 instances

    Cross-Zone Load Balancing:
    • With Cross-Zone Load Balancing, each load balancer node distributes traffic evenly across all registered instances in all AZs.
      Example: 2 instances in AZ1, 3 instances in AZ2 => every instance receives 20% of the traffic
    • Otherwise, each node distributes requests only among the instances in its own AZ.
      Example: 2 instances in AZ1, 3 instances in AZ2 => AZ1 instances get 25% each, AZ2 instances get 16.67% each
    • ALB:
      • Always on (can't be disabled)
      • No charges for inter AZ data
    • NLB:
      • Disabled by default
      • Pay charges ($) for inter AZ data if enabled
    • Classic LB:
      • Through Console => Enabled by default
      • CLI/API => Disabled by default
      • No charges for inter AZ data if enabled
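      The traffic-share arithmetic above can be reproduced directly:

```python
# Reproduces the cross-zone traffic-share arithmetic: 2 instances in AZ1,
# 3 in AZ2, with each AZ's load-balancer node receiving 50% of the traffic.
az_instances = {"AZ1": 2, "AZ2": 3}
node_share = 1 / len(az_instances)  # 50% of traffic per LB node

# Cross-zone ON: traffic is spread evenly over ALL registered instances.
total = sum(az_instances.values())
with_cross_zone = {az: 100 / total for az in az_instances}

# Cross-zone OFF: each node splits its 50% among its own AZ's instances.
without_cross_zone = {az: 100 * node_share / n for az, n in az_instances.items()}

print(with_cross_zone)     # 20% per instance everywhere
print(without_cross_zone)  # 25% each in AZ1, ~16.67% each in AZ2
```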

      SSL/TLS - Basics:
      • An SSL Certificate allows traffic between clients and load balancer to be encrypted in transit (in-flight encryption)
      • SSL refers to Secure Sockets Layer, used to encrypt connections
      • TLS refers to Transport Layer Security, which is a newer version

      • Nowadays, TLS certificates are mainly used, but people still refer to them as SSL
      • Public SSL certificates are issued by Certificate Authorities (CA)
      • Comodo, Symantec, GoDaddy, GlobalSign, Digicert, Letsencrypt, etc...
      • SSL certificates have an expiration date (that you set) and must be renewed

      Load Balancer - SSL Certificates:
      • The load balancer uses an X.509 certificate (SSL/TLS server certificate)
      • Can manage certificates using ACM (AWS Certificate Manager)
      • Can alternatively create and upload your own certificates
      • HTTPS listener:
        • Must specify a default certificate
        • Can add an optional list of certs to support multiple domains
        • Clients can use SNI (Server Name Indication) to specify the hostname they reach
        • Ability to specify a security policy to support older versions of SSL / TLS (legacy clients)

      SSL - Server Name Indication:
      • SNI solves the problem of loading multiple SSL certificates onto one web server (to serve multiple websites)
      • It's a 'newer' protocol, and requires the client to indicate the hostname of the target server in the initial SSL handshake
      • The server will then find the correct certificate, or return the default one
        Note:
      • Only works for ALB & NLB (newer generation), CloudFront
      • Does not work for CLB (older gen)

      Elastic Load Balancers - SSL Certificates:
      • Classic Load Balancer (v1: CLB):
        • Support only one SSL certificate
        • Must use multiple CLBs for multiple hostnames with multiple SSL certificates
      • Application Load Balancer (v2) & Network Load Balancer (v2):
        • Supports multiple listeners with multiple SSL certificates
        • Uses Server Name Indication (SNI) to make it work

      ELB - Connection Draining:
      • Feature naming:
        • CLB: Connection Draining
        • Target Group: Deregistration Delay
          (for ALB & NLB)
      • Time to complete 'in-flight requests' while the instance is de-registering or unhealthy
      • Stops sending new requests to the instance which is de-registering
      • Between 1 to 3,600 seconds, default is 300 seconds
      • Can be disabled (set value to 0)
      • Set to a low value if requests are short

      What's an Auto Scaling Group?:
      • In real life, the load on websites and applications can change
      • In the cloud, can create and get rid of servers very quickly
      • The goal of an Auto Scaling Group (ASG) is to:
        • Scale out (add EC2 instances) to match an increased load
        • Scale in (remove EC2 instances) to match a decreased load
        • Ensure there are a minimum and a maximum number of machines running
        • Automatically Register new instances to a load balancer
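      The min/desired/max relationship can be sketched as a simple clamp; the bounds below are illustrative:

```python
# Minimal sketch of the ASG sizing rule: the desired capacity is always
# clamped between the group's minimum and maximum. Bounds are illustrative.
def scale(desired: int, minimum: int = 2, maximum: int = 10) -> int:
    """Return the capacity the group actually runs at."""
    return max(minimum, min(desired, maximum))

print(scale(1))   # scale-in request below min -> runs at 2
print(scale(15))  # scale-out request above max -> capped at 10
print(scale(6))   # within bounds -> runs at 6
```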

      AWS Fundamentals:

      AWS Cloud Technical Essentials:

      • Consider four main aspects when deciding which AWS Region to host applications and workloads: latency, price, service availability, and compliance. Focusing on these factors enables the right choice of AWS Region.

      • Every action taken in AWS is an API call.

      • The AWS Global Infrastructure is nested for high availability and redundancy. AWS Regions are clusters of Availability Zones connected through highly available, redundant high-speed links, and Availability Zones are clusters of data centers that are likewise connected through highly available, redundant high-speed links.

      • There are six benefits of cloud computing. Going global in minutes means applications can easily be deployed in multiple Regions around the world with just a few clicks.

      • With the cloud, no longer have to manage and maintain own hardware in own data centers. Companies like AWS own and maintain these data centers and provide virtualized data center technologies and services to users over the internet.

      • Use an access key (an access key ID and secret access key) to make programmatic requests to AWS. However, do not use the AWS account root user access key: it gives full access to all resources for all AWS services, including billing information, and its permissions cannot be reduced. Therefore, protect the root user access key like credit card numbers or any other sensitive secret. Disable or delete any access keys associated with the root user, and also enable MFA for the root user.

      • Users in a company are authenticated on the corporate network and want to use AWS without having to sign in again. Instead of creating an IAM User for each employee that needs access to the AWS account, use IAM Roles to federate users.

      • A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents that are attached to an IAM identity (user, group of users, or role). The information in a policy statement is contained within a series of elements:
        • Version – Specify the version of the policy language that want to use. As a best practice, use the latest 2012-10-17 version.
        • Statement – Use this main policy element as a container for the following elements. Can include more than one statement in a policy.
        • Sid (Optional) – Include an optional statement ID to differentiate between statements.
        • Effect – Use Allow or Deny to indicate whether the policy allows or denies access.
        • Principal (Required in only some circumstances) – If create a resource-based policy, must indicate the account, user, role, or federated user to which would like to allow or deny access. If creating an IAM permissions policy to attach to a user or role, cannot include this element. The principal is implied as that user or role.
        • Action – Include a list of actions that the policy allows or denies.
        • Resource (Required in only some circumstances) – If create an IAM permissions policy, must specify a list of resources to which the actions apply. If create a resource-based policy, this element is optional. If do not include this element, then the resource to which the action applies is the resource to which the policy is attached.
        • Condition (Optional) – Specify the circumstances under which the policy grants permission.
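      Putting those elements together, a minimal identity-based policy might look like the following; the Sid, actions, and bucket ARN are placeholders, and Principal is omitted because it is implied for identity-based policies:

```python
# A minimal identity-based IAM policy assembled from the elements listed
# above (Version, Statement, Sid, Effect, Action, Resource). The Sid,
# actions, and bucket name are placeholder examples.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadOnlyS3",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

# IAM policies are stored as JSON documents.
print(json.dumps(policy, indent=2))
```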

      • Multi-factor Authentication is an authentication method that requires the user to provide two or more verification factors to gain access to an AWS account.

      • When creating a VPC, have to specify the AWS Region it will reside in, the IP range for the VPC, and the name of the VPC.

      • Route Tables can be attached to VPCs and subnets.

      • A network ACL secures subnets, while a security group is responsible for securing EC2 instances.

      • To allow resources to communicate with the internet, attach an internet gateway to the VPC, create a route to the internet gateway in a route table, and associate that route table with the subnet holding internet-facing resources. Those resources will also need a public IP address.

      • The default configuration of a security group blocks all inbound traffic and allows all outbound traffic.

      • Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give the flexibility to choose the appropriate mix of resources for applications. Each instance type includes one or more instance sizes, allowing to scale resources to the requirements of target workload.

      • When launching an Amazon EC2 instance, must choose the subnet to place the instance into. Subnets reside in one single AZ and cannot span AZs, so EC2 instances also reside in one Availability Zone. Architect for high availability in case one AZ is unreachable or experiencing outages: deploy AWS resources, like Amazon EC2, redundantly across at least two AZs.

      • AWS Fargate is a serverless compute platform for either Amazon ECS or Amazon EKS. With Fargate, AWS manages the compute infrastructure needed to run containers, whereas with Amazon ECS on EC2 you are responsible for managing the underlying EC2 cluster hosting the containers.

      • With serverless on AWS, there is no paying for idling resources; instead, pay only for what is used, and each serverless service charges differently based on usage.

      • AWS Lambda is a great solution for many use cases, but it does not fit all of them. For long-running processes, Lambda is not the best choice, since it has a 15-minute runtime limit.

      • Amazon EC2 provides a great deal of control over the environment an application runs in. Serverless services like AWS Lambda exist to provide convenience, whereas services like Amazon EC2 provide control.

      • Amazon S3 is an object storage service designed for large objects like media files. Because it can store unlimited objects, and each individual object can be up to 5 TB, S3 is an ideal location to host video, photo, or music uploads.

      • Amazon EBS would be ideal for a high-transaction database storage layer; it is considered persistent storage.
        Amazon S3 is not ideal, as it's considered WORM (Write Once, Read Many) storage.
        Amazon EC2 Instance Store is ephemeral storage, and persistence is needed for databases.
        EFS is ideal when multiple servers need access to the same set of files.

      • Amazon Glacier Deep Archive is Amazon S3's lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. It is designed for customers - particularly those in highly regulated industries, such as the Financial Services, Healthcare, and Public Sectors - that retain data sets for 7 to 10 years or longer to meet regulatory compliance requirements.

      • Amazon S3 is a regional service. However, the namespace is shared by all AWS accounts, so the bucket name must be globally unique.

      • Amazon RDS places the DB instance into a subnet, which is bound to one AZ. For high availability, use a Multi-AZ deployment in case one AZ is temporarily unavailable.

      • Amazon DynamoDB allows for a flexible schema, so each item can vary in its attributes outside of the primary and secondary key. This suits datasets with variation within the data, where not every item shares all the same attributes.

      • When using Amazon RDS, you are no longer responsible for the underlying environment the database runs on and can instead focus on optimizing the database, because Amazon RDS has components that are managed by AWS.

      • EC2 Auto Scaling requires specifying three main components:
        • a launch template or a launch configuration as a configuration template for the EC2 instances
        • an EC2 Auto Scaling group that allows to specify minimum, maximum, and desired capacity of instances
        • and scaling policies that allow to configure a group to scale based on the occurrence of specified conditions or on a schedule.

      • ELB automatically scales depending on the traffic. It handles the incoming traffic and sends it to the backend application. ELB also integrates seamlessly with EC2 Auto Scaling: as soon as a new EC2 instance is added to or removed from the EC2 Auto Scaling group, ELB is notified and can begin to direct traffic to the new instance.

      • Instances that are launched by Auto Scaling group are automatically registered with the load balancer. Likewise, instances that are terminated by Auto Scaling group are automatically deregistered from the load balancer.

      • Application Load Balancer is a layer 7 load balancer that routes HTTP and HTTPS traffic, with support for rules - for example, a rule based on the domain of a website.

      • An application can be scaled vertically by adding more power to an existing machine, or horizontally by adding more machines to the pool of resources.

      • AWS calls the elements used to view and analyze metrics, which can be added to a Dashboard, 'widgets'.

      • A metric alarm has the following possible states:
        • OK - The metric or expression is within the defined threshold.
        • ALARM - The metric or expression is outside of the defined threshold.
        • INSUFFICIENT_DATA - The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
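      The three alarm states above can be sketched as a small Python function. This is only a toy model of the state logic; real CloudWatch alarms also consider evaluation periods, datapoint counts, and the configured comparison operator.

      ```python
      from typing import Optional

      def alarm_state(value: Optional[float], threshold: float) -> str:
          """Classify a metric value against a 'greater than' threshold.

          A toy model of the three CloudWatch alarm states; real alarms
          also evaluate periods and datapoints-to-alarm settings.
          """
          if value is None:            # metric not available yet
              return "INSUFFICIENT_DATA"
          if value > threshold:        # outside the defined threshold
              return "ALARM"
          return "OK"                  # within the defined threshold
      ```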

      Addressing Security Risk:

      • Multi-Factor Authentication (MFA) is a security mechanism that adds an extra layer of protection to an AWS account on top of the username/password combination.

      • To let a user read from a DynamoDB table, attach the AmazonDynamoDBReadOnlyAccess policy to their user profile.

      • Gemalto tokens, YubiKey, and Google Authenticator are all valid MFA (Multi-Factor Authentication) options available on AWS.

      • IAM (Identity and Access Management) policy documents are written in JSON format.
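      A minimal policy document can be built and serialized from Python; the Version/Statement/Effect/Action/Resource structure is the standard IAM policy grammar, while the specific actions and the table ARN here are illustrative.

      ```python
      import json

      # Illustrative least-privilege policy: read-only DynamoDB access to one
      # (hypothetical) table. The structure is the standard IAM policy grammar.
      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
                  "Resource": "arn:aws:dynamodb:*:*:table/ExampleTable",
              }
          ],
      }

      document = json.dumps(policy, indent=2)  # the JSON you would paste into IAM
      ```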

      • The Command Line Interface (CLI), Software Development Kits (SDKs), Application Programming Interfaces (APIs), and the AWS Console are all valid options for interacting with an AWS account.

      • AWS managed policies are a type of IAM policy that cannot be updated by you.

      • AD Connector can establish a trusted relationship between corporate Active Directory and AWS.

      • Audit an IAM user's access to AWS accounts and resources by using CloudTrail to look at the API calls and timestamps.

      • Grant AWS Management Console access to a DevOps engineer by creating an IAM user for the engineer and associating the relevant IAM managed policies with that IAM user.

      • Identity Pools provide temporary AWS credentials.

      • Traffic within an Availability Zone, or between Availability Zones in all Regions, is routed over the AWS private global network.

      • A Security Group acts as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.

      • Two types of VPC Endpoints are available: Gateway Endpoints and Interface Endpoints.

      • A VPC, a subnet in a VPC, and a network interface attached to an EC2 instance are AWS resources that can be monitored using VPC Flow Logs.

      • The AWS CloudTrail service keeps a record of who is interacting with the AWS account.

      • Amazon CloudWatch and AWS Config are monitoring and logging services available on AWS.

      • To accomplish threat detection in AWS infrastructure, use the Amazon GuardDuty service.

      • The Security category from Trusted Advisor also exists as a pillar of the Well-Architected Framework.

      • Amazon Inspector is an AWS service with an optional agent that can be deployed to EC2 instances to perform a security assessment.

      • Amazon Relational Database Service is also a valid storage service on AWS.

      • Provisioning the HSM in a VPC is a requirement that must be adhered to in order to deploy AWS CloudHSM.

      • AWS KMS customer master keys are used to encrypt and decrypt data in AWS.

      • Up to 4 KB of data can be encrypted/decrypted directly using a Customer Master Key.

      • The purpose of encrypting data when it is in transit between systems and services is to prevent eavesdropping, unauthorized alterations, and copying.

      • TLS protocol is an industry-standard cryptographic protocol used for encrypting data at the transport layer.

      • Encrypt an existing unencrypted EBS volume by taking a snapshot of the EBS volume and creating a new encrypted volume from that snapshot.

      • Cannot encrypt just a subset of items in a DynamoDB table.

      • When data encryption at rest is enabled for an RDS DB instance, it covers the underlying storage, automated backups, Read Replicas, snapshots, and transaction logs.

      • CORPS: Cost optimization, Operational excellence, Reliability, Performance efficiency, and Security are pillars of the Well-Architected Framework.

      • Amazon Athena supports standard SQL.

      • The Shared Responsibility Model is the name of the model that shows how security is handled by AWS and its customers in the AWS Cloud.

      • Amazon Simple Storage Service (S3) is the AWS service best suited for storing objects.

      • AWS Organizations service can be used to manage multiple AWS Accounts for consolidated billing.

      • Amazon GuardDuty AWS Service supports threat detection by continuously monitoring for malicious or unauthorized behavior.

      • Amazon DynamoDB is NoSQL type of database.

      • A URL entry point for a web service is a customer access endpoint.

      • Amazon Cognito, AWS SSO, and IAM are services that authenticate users to AWS resources using their existing corporate identity credentials.

      The DevOps group led the initial charge into the cloud, but when things break, DevOps teams cannot troubleshoot their own network connectivity without networking teams for support.



      Migrating to the Cloud:

      • The AWS Cloud Adoption Framework helps build a comprehensive approach to a successful cloud migration across an organization and throughout the IT lifecycle.
        Business, People, Governance, Platform, Security, and Operations are the six (6) perspectives presented in the Cloud Adoption Framework.

      • Horizontal scaling gives the ability to add more servers in order to distribute load across resources and maintain operations without overloading any singular resource.

      • Adding more memory to an Amazon Elastic Compute Cloud (Amazon EC2) instance is an example of vertical scaling.

      • In Phase 2: Portfolio Discovery and Planning, analyze the dependencies between applications and begin to think about which migration type is most suitable for each application.

      • The Rehost migration strategy is commonly referred to as 'lift and shift': moving applications or environments into the cloud while making as few changes as possible.

      • Replatform can be referred to as 'lift, tinker, and shift'.

      • The Refactor or Rearchitect migration strategy is typically driven by a strong business need to add features, scale, or increase performance in ways that would otherwise be difficult in the existing environment.

      • Understanding the database is one of the most crucial steps of migrating a database. Things like the size, schema, types of tables, and engine-specific limitations usually need to be regularly discussed and reviewed.

      • The benefits of leveraging AWS SMS to manage server migration are:
        • Automate the migration of on-premises VMware vSphere, Microsoft Hyper-V/SCVMM, and Azure virtual machines to the AWS Cloud
        • Track incremental replications of VMs
        • Schedule the replication of servers

      • AWS Migration Hub service provides a single location to track the progress of application migrations across multiple AWS and partner solutions.

      • Agent-based discovery method deploys the AWS Application Discovery Agent on each of VMs and physical servers to collect static configuration data, detailed time-series system-performance information, inbound and outbound network connections, and processes that are running.

      • Amazon S3 / S3 Glacier command line interface and rsync are examples of unmanaged cloud data migration tools that are easy, one-and-done methods to move data at small scales from on-premises environment into Amazon cloud storage.

      • VPN provides a secure connection between environments for transfer of data.

      • Amazon Route 53 AWS service gives the ability to control the amount of traffic going to multiple DNS endpoints.

      • AWS Schema Conversion Tool service helps to perform schema changes while migrating database.
        Can be used to help convert Stored Procedure code in Oracle and SQL Server to equivalent code in the Amazon Aurora MySQL dialect of SQL.

      • AWS DataSync Agent is a requirement when using AWS DataSync.

      • Avoiding single points of failure is a primary best practice to follow when building and optimizing a migrated environment in AWS.

      • AWS Migration Acceleration Program service provides consulting, support, training, and service credits in order to reduce associated risks with migrating to the cloud.

      • A large majority of AWS services and tools have directly accessible APIs that can be used for creating, configuring, and managing the services you employ.

      • The CloudEndure Migration service converts any application running on a supported operating system to enable full functionality on AWS without compatibility issues.
        It is a Software as a Service (SaaS) migration offering from AWS that allows applications to continue running at the source without downtime or performance impact, and allows non-disruptive tests to validate that the replicated applications work properly in AWS.

      • TSO Logic provides discovery of existing workloads to help identify what they consume in terms of compute, storage, database, and other resources, in order to evaluate the Total Cost of Ownership (TCO) of various applications.

      • 'The 6 R's': 6 Application Migration Strategies:
        1. Re-host
        2. Re-platform
        3. Re-factor / Re-architect
        4. Re-purchase
        5. Retire
        6. Retain

        image
      • AWS Command Line Interface tool can be installed locally or on an instance to provide direct API access for management, building, and optimization tasks within AWS.

      • AWS Competency Program allows companies in the Partner Network to demonstrate and prove their expertise in areas like Migrations.

      • The AWS Snowball service provides a physical device that can be connected directly to a data center network to copy data over the local network; it can hold up to 80 terabytes, and is protected by AWS Key Management Service, which encrypts the data.

      • Cannot use DMS to directly migrate on-premises database to another on-premises database.

      • AWS Direct Connect allows to:
        • Create public virtual interfaces to connect with services like Amazon Simple Storage Service.
        • Create private virtual interfaces to create VPN-like connections across hybrid environment.

      • In regards to security in and between environments while migrating, encryption is beneficial for data both in transit and at rest.

      • There are many Migration Partners available to help to better operate in the cloud, and become more proficient in migrations. These companies have expertise in all phases of the migration process, and can help with implementation, planning, or even training on migration technologies.
        They have built the knowledge up over years of experience working with, and helping other customers migrate to AWS.

      • A 5-phase approach to migrating applications:
        1. Phase 1: Migration Preparation and Business Planning
        2. Phase 2: Portfolio Discovery and Planning
        3. Phase 3 & 4: Designing, Migrating, and Validating Applications
        4. Phase 5: Operate
      image
      Building Serverless Applications:

      Amazon Lex:

      • An 'intent' is a particular goal that the user wants to achieve.

      • A 'slot' is data that the user must provide to fulfill the intent.

      • Amazon Lex uses the Amazon Polly service for text-to-speech.

      Amazon S3:

      • Public Read access needs to be granted on an Amazon S3 bucket for website access.

      • Static websites created with Amazon S3 can be interactive.

      Amazon CloudFront:

      • Is used to create a Content Distribution Network.

      • Amazon CloudFront can retrieve content from your own data center, Amazon S3, and EC2.

      • With Amazon CloudFront in front, the S3 bucket permissions no longer need Public Read access.

      • AWS WAF allows to specify restrictions on access to content based upon IP address.

      Amazon API Gateway:

      • CORS is configured in Amazon API Gateway service.

      IAM:

      • IAM roles provide users and services access to AWS services.

      • IAM Roles are not associated with a specific user or group. Instead, trusted entities assume roles such as an IAM user, an application, or an AWS service like EC2.

      Amazon Lambda:

      • Is an event-driven, serverless computing platform that runs code in response to events and automatically manages the compute resources required by that code.

      Amazon DynamoDB:

      • Is a non-relational, NoSQL type of database solution.

      • A table name and primary key need to be provided when creating a table in DynamoDB.

      ASGs have the following attributes:
      • A launch configuration
        • AMI + Instance Type
        • EC2 User Data
        • EBS Volumes
        • Security Groups
        • SSH Key Pair
      • Min / Max Size / Initial Capacity
      • Network + Subnets Information
      • Load Balancer Information
      • Scaling Policies

      Auto Scaling Alarms:
      • It is possible to scale an ASG based on CloudWatch alarms
      • An Alarm monitors a metric (such as Average CPU)
      • Metrics are computed for the overall ASG instances
      • Based on the alarm:
        • Can create scale-out policies (increase the number of instances)
        • Can create scale-in policies (decrease the number of instances)

      Auto Scaling New Rules:
      • It is now possible to define 'better' auto scaling rules that are directly managed by EC2:
        • Target Average CPU Usage
        • Number of requests on the ELB per instance
        • Average Network In & Out
      • These rules are easier to set up and can make more sense

      Auto Scaling Custom Metric:
      • Can auto scale based on a custom metric (ex: number of connected users):
        1. Send custom metric from application on EC2 to CloudWatch (PutMetric API)
        2. Create CloudWatch alarm to react to low / high values
        3. Use the CloudWatch alarm as the scaling policy for ASG
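      Step 1 above sends the custom metric with the PutMetricData API. The sketch below only builds the request payload as plain data, with a shape that mirrors CloudWatch's PutMetricData parameters; the namespace and dimension values are illustrative, and in a real application the payload would be passed to an AWS SDK client.

      ```python
      from datetime import datetime, timezone

      def connected_users_metric(count: int, asg_name: str) -> dict:
          """Build a PutMetricData-style payload for a custom metric.

          No AWS call is made here; the dict mirrors the parameter shape
          of CloudWatch's PutMetricData API.
          """
          return {
              "Namespace": "MyApp",  # hypothetical custom namespace
              "MetricData": [
                  {
                      "MetricName": "ConnectedUsers",
                      "Dimensions": [{"Name": "AutoScalingGroupName", "Value": asg_name}],
                      "Timestamp": datetime.now(timezone.utc),
                      "Value": float(count),
                      "Unit": "Count",
                  }
              ],
          }
      ```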

      ASG Brain Dump:
      • Scaling policies can be on CPU, Network... and can even be on custom metrics or based on a schedule (if know visitors patterns)
      • ASGs use Launch configurations or Templates (newer)
      • To update an ASG, must provide a new launch configuration / template
      • IAM roles attached to an ASG will get assigned to EC2 instances
      • ASG are free. Pay for the underlying resources being launched
      • Having instances under an ASG means that if they get terminated for whatever reason, the ASG will automatically create new ones as a replacement. Extra safety!
      • ASG can terminate instances marked as unhealthy by an LB (and hence replace them)

      Auto Scaling Groups - Scaling Policies:
      • Target Tracking Scaling:
        • Most simple and easy to set-up
        • Example: Want the average ASG CPU to stay at around 40%
      • Simple / Step Scaling:
        • When a CloudWatch alarm is triggered (example CPU > 70%), then add 2 units
        • When a CloudWatch alarm is triggered (example CPU < 30%), then remove 1
      • Scheduled Actions:
        • Anticipate a scaling based on known usage patterns
        • Example: increase the min capacity to 10 at 5 pm on Fridays
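      The simple/step scaling example above (CPU > 70% adds 2 units, CPU < 30% removes 1) can be sketched as a small decision function. This is a toy model of the policy's arithmetic, not how EC2 Auto Scaling is implemented; the clamping to minimum/maximum capacity reflects ASG behavior.

      ```python
      def step_scaling(current: int, cpu: float, minimum: int, maximum: int) -> int:
          """Toy model of the simple/step scaling example:
          CPU > 70% adds 2 instances, CPU < 30% removes 1,
          clamped to the group's minimum and maximum capacity."""
          if cpu > 70:
              desired = current + 2   # scale out
          elif cpu < 30:
              desired = current - 1   # scale in
          else:
              desired = current       # within band: no change
          return max(minimum, min(maximum, desired))
      ```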

      Scaling Cooldowns:
      • The cooldown period helps to ensure that Auto Scaling group doesn't launch or terminate additional instances before the previous scaling activity takes effect.
      • In addition to default cooldown for Auto Scaling group, can create cooldowns that apply to a specific simple scaling policy
      • A scaling-specific cooldown period overrides the default cooldown period.
      • One common use for scaling-specific cooldowns is with a scale-in policy - a policy that terminates instances based on a specific criteria or metric. Because this policy terminates instances, Amazon EC2 Auto Scaling needs less time to determine whether to terminate additional instances.
      • If the default cooldown period of 300 seconds is too long - can reduce costs by applying a scaling-specific cooldown period of 180 seconds to the scale-in policy.
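      The cooldown logic above reduces to a single time comparison. A minimal sketch, assuming timestamps in seconds (e.g. from time.time()) and the 300-second default cooldown mentioned above:

      ```python
      def scaling_allowed(now: float, last_activity: float, cooldown: float = 300.0) -> bool:
          """Return True once the cooldown period has elapsed since the
          last scaling activity. Times are in seconds; 300 s is the
          default cooldown, which a scaling-specific cooldown overrides."""
          return (now - last_activity) >= cooldown
      ```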
      B-)

      • If application is scaling up and down multiple times each hour, modify the Auto Scaling Groups cool-down timers and the CloudWatch Alarm Period that triggers the scale in

      ASG for Solution Architects:
      • ASG Default Termination Policy (simplified version):
        1. Find the AZ which has the most number of instances
        2. If there are multiple instances in the AZ to choose from, delete the one with the oldest launch configuration
      • An ASG tries to balance the number of instances across AZs by default
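      The simplified default termination policy above can be sketched as follows; instances are modeled as plain dicts, with a "launched" timestamp standing in for the age of the launch configuration (smallest value = oldest). This is an illustration of the two steps, not AWS's actual tie-breaking logic.

      ```python
      from collections import Counter

      def instance_to_terminate(instances: list) -> dict:
          """Simplified ASG default termination policy:
          1) pick the AZ with the most instances,
          2) within it, pick the instance with the oldest launch
             configuration (smallest 'launched' value)."""
          az_counts = Counter(i["az"] for i in instances)
          busiest_az = max(az_counts, key=az_counts.get)  # AZ with most instances
          candidates = [i for i in instances if i["az"] == busiest_az]
          return min(candidates, key=lambda i: i["launched"])
      ```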

      Lifecycle Hooks:
      • By default as soon as an instance is launched in an ASG it's in service.
      • Have the ability to perform extra steps before the instance goes in service (Pending state)
      • Have the ability to perform some actions before the instance is terminated (Terminating state)

      Launch Template vs Launch Configuration:
      • Both:
        • ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that you use to launch EC2 instances (tags, EC2 user-data...)
      • Launch Configuration (legacy):
        • Must be re-created every time
      • Launch Template (newer):
        • Can have multiple versions
        • Create parameters subsets (partial configuration for re-use and inheritance)
        • Provision using both On-Demand and Spot instances (or a mix)
        • Can use T2 unlimited burst feature
        • Recommended by AWS going forward

      • Load Balancers provide a static DNS name that can be used in an application.
        The reason is that AWS wants the load balancer to be accessible through a static endpoint, even if the underlying infrastructure that AWS manages changes

      • Running a website with a load balancer and 10 EC2 instances. Users complain that the website always asks them to re-authenticate when they switch pages. You are puzzled, because it works just fine on your machine and in the dev environment with 1 server. The reason could be that the Load Balancer does not have stickiness enabled.
        Stickiness ensures traffic is sent to the same backend instance for a client. This helps maintain session data.

      • An application is using an Application Load Balancer, and it turns out the application only sees traffic coming from private IPs, which are in fact the load balancer's. Look into the X-Forwarded-For header in the backend to find the true IPs of the clients connected to the website.
        This header is created by the load balancer and passed on to the backend application.
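      X-Forwarded-For is a comma-separated list where the left-most entry is the client as seen by the first proxy (here, the load balancer). A minimal helper to extract it:

      ```python
      def client_ip(x_forwarded_for: str) -> str:
          """Extract the original client IP from an X-Forwarded-For header.
          The header is a comma-separated list; the left-most entry is the
          client address recorded by the first proxy."""
          return x_forwarded_for.split(",")[0].strip()
      ```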

      • Quickly created an ELB and it turns out users are complaining about the fact that sometimes, the servers just don't work. You realize that indeed, servers do crash from time to time. Enable Health Checks to protect users from seeing these crashes.
        Health checks ensure ELB won't send traffic to unhealthy (crashed) instances.

      • Designing a high-performance application that must handle millions of connections with low latency: the best Load Balancer for this is the Network Load Balancer.
        NLBs provide the highest performance if the application needs it.

      • Application Load Balancers handle HTTP, HTTPS, and WebSocket protocols.
        An NLB (Network Load Balancer) supports TCP.

      • The application load balancer can route to different target groups based on Hostname, Request Path, and Source IP.

      • Running at a desired capacity of 3 and a maximum capacity of 3, with alarms set at 60% CPU to scale out the application. The application is now running at 80% capacity. Nothing will happen.
        The capacity of an ASG cannot go over the maximum capacity allocated, even during scale-out events.

      • Have an ASG and an ALB, and setup ASG to get health status of instances thanks to ALB. One instance has just been reported unhealthy. The ASG will terminate the EC2 Instance.
        Because the ASG has been configured to leverage the ALB health checks, unhealthy instances will be terminated.

      • Boss wants to scale ASG based on the number of requests per minute application makes to database. Create a CloudWatch custom metric and build an alarm on this to scale ASG.
        The metric 'request per minute' is not an AWS metric, hence it needs to be a custom metric.

      • Scaling an instance from an r4.large to an r4.4xlarge is called Vertical Scalability.

      • Running an application on an auto scaling group that scales the number of instances in and out is called Horizontal Scalability.

      • You would like to expose a fixed static IP to end users for compliance purposes, so they can write firewall rules that will be stable and approved by regulators. Use a Network Load Balancer.
        A Network Load Balancer exposes a public static IP, whereas an Application or Classic Load Balancer exposes a static DNS name (URL).

      • A web application hosted in EC2 is managed by an ASG and exposed through an Application Load Balancer. The ALB is deployed in a VPC with the CIDR 192.168.0.0/18. To ensure only the ALB can access port 80 on the EC2 instances, open up the EC2 security group on port 80 to the ALB's security group.
        This is the most secure way of ensuring only the ALB can access the EC2 instances. Referencing security groups in rules is an extremely powerful mechanism.

      • An Application Load Balancer is hosting 3 target groups with the hostnames users, api.external, and checkout.example.com. To expose HTTPS traffic for each of these hostnames, use SNI when configuring the ALB SSL certificates.
        SNI (Server Name Indication) is a feature that allows exposing multiple SSL certificates if the client supports it.

      1. A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
        In this scenario an inbound rule is required to allow traffic from any internet client to the web front end on SSL/TLS port 443. The source should therefore be set to 0.0.0.0/0 to allow any inbound traffic.
        To secure the connection from the web front end to the database tier, an outbound rule should be created from the public EC2 security group with a destination of the private EC2 security group.
        The port should be set to 1433, the SQL Server port. The private EC2 security group will also need to allow inbound traffic on 1433 from the public EC2 security group.

        image
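      The two-tier rules above can be modeled as data. The sketch below is a toy evaluation of security-group rules (real security groups are stateful and evaluated by AWS); a rule's source is either a CIDR or a reference to another security group, and the SG ID "sg-web" is hypothetical.

      ```python
      import ipaddress

      def rule_allows(rule: dict, source: str, port: int) -> bool:
          """Toy security-group check for the two-tier design above.
          The rule source is either a CIDR block or another security
          group's ID (the SG-referencing pattern)."""
          if rule["port"] != port:
              return False
          if rule["source"].startswith("sg-"):   # SG-to-SG reference
              return source == rule["source"]
          return ipaddress.ip_address(source) in ipaddress.ip_network(rule["source"])

      web_inbound = {"port": 443, "source": "0.0.0.0/0"}   # any internet client
      db_inbound = {"port": 1433, "source": "sg-web"}      # only the web tier SG
      ```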

      2. Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched as part of EC2-Classic, it will have a public IP and DNS as well as a private IP and DNS.

      3. A company designs a mobile app for its customers to upload photos to a website. The app needs a secure login with multi-factor authentication (MFA). The company wants to limit the initial build time and the maintenance of the solution. A solutions architect should recommend Amazon Cognito Identity with SMS-based MFA to meet these requirements.

      • An ASG spans 2 Availability Zones. AZ-A has 3 EC2 instances and AZ-B has 4 EC2 instances. The ASG is about to go into a scale-in event. AZ-B will terminate the instance with the oldest launch configuration.
        The Default Termination Policy for ASG tries to balance across AZ first, and then delete based on the age of the launch configuration.

      • The Application Load Balancers target groups can be EC2 Instances, IP Addresses, and Lambda Functions.

      • Running an application in 3 AZs, with an Auto Scaling Group and a Classic Load Balancer. The traffic is not evenly distributed amongst all the backend EC2 instances, with some AZs being overloaded. The Cross-Zone Load Balancing feature helps distribute the traffic across all the available EC2 instances.

      • An Application Load Balancer (ALB) is currently routing to two target groups, each routed to based on hostname rules. You have been tasked with enabling HTTPS traffic for each hostname and have loaded the certificates onto the ALB. The Server Name Indication (SNI) ALB feature will help it choose the right certificate for clients.

      • An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, scaling of the Auto Scaling Group is done manually, and you would like to define a scaling policy that keeps the average number of connections per EC2 instance at around 1,000. Use a Target Tracking scaling policy.

      What's an EBS Volume?
      • An EBS (Elastic Block Store) Volume is a network drive you can attach to instances while they run
      • It allows instances to persist data, even after their termination
      • They can only be mounted to one instance at a time (at the CCP level)
      • They are bound to a specific availability zone
      • Analogy: Think of them as a 'network USB stick'
      • Free tier: 30 GB of free EBS storage of type General Purpose (SSD) or Magnetic per month

      EBS Volume:
      • It's a network drive (i.e. not a physical drive):
        • It uses the network to communicate with the instance, which means there might be a bit of latency
        • Can be detached from an EC2 instance and attached to another one quickly
      • It's locked to an Availability Zone (AZ):
        • An EBS Volume in ap-southeast-1a cannot be attached to ap-southeast-1b
        • To move a volume across, first need to snapshot it
      • Have a provisioned capacity (size in GBs, and IOPS):
        • Get billed for all the provisioned capacity
        • Can increase the capacity of the drive over time

      Delete on Termination attribute:
      • Controls the EBS behavior when an EC2 instance terminates. By default:
        • The root EBS volume is deleted (attribute enabled)
        • Any other attached EBS volume is not deleted (attribute disabled)
      • This can be controlled by the AWS console / AWS CLI
      • Use case: preserve root volume when instance is terminated

      EBS Volume Types:
      • Come in 6 types:
        • gp2 / gp3 (SSD): General purpose SSD volume that balances price and performance for a wide variety of workloads
        • io1 / io2 (SSD): Highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
        • st1 (HDD): Low cost HDD volume designed for frequently accessed, throughput-intensive workloads
        • sc1 (HDD): Lowest cost HDD volume designed for less frequently accessed workloads
      • EBS Volumes are characterized in Size | Throughput | IOPS (I/O Ops Per Sec)
      • When in doubt always consult the AWS documentation!
      • Only gp2/gp3 and io1/io2 can be used as boot volumes
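      As an example of how a volume's size and IOPS are related, gp2 baseline performance is commonly documented as 3 IOPS per GiB, floored at 100 and capped at 16,000 IOPS. The figures below are taken from the public EBS documentation; verify against current AWS docs before relying on them.

      ```python
      def gp2_baseline_iops(size_gib: int) -> int:
          """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
          with a 100 IOPS floor and a 16,000 IOPS cap (per the
          public EBS documentation; figures may change over time)."""
          return max(100, min(16_000, 3 * size_gib))
      ```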

      Make an Amazon EBS volume available for use on Linux:
      • [ec2-user ~]$ lsblk
        View available disk devices and their mount points (if applicable)
      • [ec2-user ~]$ sudo file -s /dev/xvdb
        Get information about a specific device; if the output shows "data", there is no file system
      • [ec2-user ~]$ sudo mkfs -t ext4 /dev/xvdb
        Create a file system on the volume
      • [ec2-user ~]$ sudo mkdir /data
        Create a mount point directory for the volume
      • [ec2-user ~]$ sudo mount /dev/xvdb /data
        Mount the volume at the mount point
        To mount an attached volume automatically after reboot:
      • [ec2-user ~]$ sudo cp /etc/fstab /etc/fstab.orig
        Create a backup of the /etc/fstab file that can be used if the file is accidentally destroyed or deleted
      • [ec2-user ~]$ sudo nano /etc/fstab
        Open the /etc/fstab file using any text editor, such as nano or vim
      • /dev/xvdb  /data  ext4  defaults,nofail 0 2
        Add this entry to /etc/fstab to mount the device at the specified mount point
      • [ec2-user ~]$ sudo file -s /dev/xvdb
        Re-check the device; the output should now show the file system created earlier

      1. A team has an application that detects new objects being uploaded into an Amazon S3 bucket. Each upload triggers an AWS Lambda function that writes metadata into an Amazon DynamoDB table and an Amazon RDS for PostgreSQL database. To ensure high availability, the team should enable Multi-AZ on the RDS PostgreSQL database.

      2. After recommending Amazon Redshift to a client as an alternative to paid data warehouses for analyzing his data, the client asks why Redshift is being recommended. The following would be reasonable responses:
        • It has high performance at scale as data and query complexity grows.
        • Prevents reporting and analytic processing from interfering with the performance of OLTP workloads.
        • There is no administrative burden of running your own data warehouse and dealing with setup, durability, monitoring, scaling, and patching.
        Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing to use a wide range of familiar SQL clients.
        Data load speed scales linearly with cluster size, with integrations to Amazon S3, DynamoDB, Elastic MapReduce, Kinesis or any SSH-enabled host.
        Large volumes of structured data to persist and query using standard SQL and existing BI tools.

      3. A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted (delete the files 4 years after object creation). Immediate accessibility is always required, as the files contain business data that is not easy to reproduce. The files are frequently accessed in the first 30 days after object creation but are rarely accessed afterwards. The MOST cost-effective storage solution is to create an S3 bucket lifecycle policy that moves files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation.
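      The lifecycle policy from this scenario can be expressed as data. The dict below follows the general shape of an S3 lifecycle configuration (as used with the SDK/CLI); the rule ID is hypothetical and the 4-year expiration is approximated as 4 × 365 days.

      ```python
      # Lifecycle configuration matching the scenario: transition to
      # Standard-IA 30 days after creation, expire after ~4 years.
      # Shape follows the S3 lifecycle-configuration structure; the rule
      # ID and the day counts are illustrative.
      lifecycle = {
          "Rules": [
              {
                  "ID": "archive-then-delete",
                  "Status": "Enabled",
                  "Filter": {"Prefix": ""},  # apply to all objects
                  "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                  "Expiration": {"Days": 4 * 365},
              }
          ]
      }
      ```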

      4. A solutions architect needs to deploy a Node.js-based web application that is highly available and scales automatically. The marketing team needs to roll back application releases quickly and needs an operational dashboard. The marketing team does not want to manage deployment of operating system patches to the Linux servers. AWS Elastic Beanstalk satisfies these requirements.

      • 5 tips for getting the most value out of AWS while protecting all of your data:
        dl.techtalkthai.com/ttt_veeam_aws_cloud_cost_optimization_2021_whitepaper_01 v1.1.pdf

      AWS Cloud History:
      • 2002: Internally launched
      • 2003: Amazon realized infrastructure was one of its core strengths. Idea to market
      • 2004: Launched publicly with SQS
      • 2006: Re-launched publicly with SQS, S3 & EC2
      • 2007: Launched in Europe
      • Dropbox, airbnb, NETFLIX, NASA, etc.

      AWS Cloud Number Facts:
      • In 2020, AWS had $45.37 billion in annual revenue
      • AWS accounts for 32% of the market in 2020 (Microsoft is 2nd with 20%)
      • Pioneer and Leader of the AWS Cloud Market for the 10th consecutive year
      • Over 1,000,000 active users

      Magic Quadrant for Cloud Infrastructure and Platform Service (CIPS):

      image

      AWS Cloud Use Cases:
      • AWS enables you to build sophisticated, scalable applications
      • Applicable to a diverse set of industries
      • Use cases include:
        • Enterprise IT, Backup & Storage, Big Data analytics
        • Website hosting, Mobile & Social Apps
        • Gaming
      • McDonald's, 21ST CENTURY FOX, ACTIVISION, etc.

      AWS Global Infrastructure:
      • AWS Regions
      • Availability Zones
      • Data Centers
      • Edge Locations / Points of Presence

      How to choose an AWS Region?:
      If you need to launch a new application, where should you do it?
      • Compliance with data governance and legal requirements: data never leaves a region without explicit permission
      • Proximity to customers: reduced latency
      • Available services within a Region: new services and new features aren't available in every Region
      • Pricing: pricing varies region to region and is transparent in the service pricing page

      AWS Points of Presence (Edge Locations):
      • Amazon has 216 Points of Presence (205 Edge Locations & 11 Regional Caches) in 84 cities across 42 countries
      • Content is delivered to end users with lower latency

      Tour of the AWS Console:
      • AWS has Global Services:
        • Identity and Access Management (IAM)
        • Route 53 (DNS service)
        • CloudFront (Content Delivery Network)
        • WAF (Web Application Firewall)
      • Most AWS services are Region-scoped:
        • Amazon EC2 (Infrastructure as a Service)
        • Elastic Beanstalk (Platform as a Service)
        • Lambda (Function as a Service)
        • Rekognition (Software as a Service)
      • Region Table

      IAM:
      Users & Groups:
      • IAM = Identity and Access Management, Global service
      • Root account created by default, shouldn't be used or shared
      • Users are people within the organization, and can be grouped
      • Groups only contain users, not other groups
      • Users don't have to belong to a group, and user can belong to multiple groups

      Permissions:
      • Users or Groups can be assigned JSON documents called policies
      • These policies define the permissions of the users
      • In AWS, apply the least privilege principle: don't give more permissions than a user needs

      Password Policy:
      • Strong password = higher security for account
      • In AWS, you can set up a password policy:
        • Set a minimum password length
        • Require specific character types:
          • including uppercase letters
          • lowercase letters
          • numbers
          • non-alphanumeric characters
        • Allow all IAM users to change their own passwords
        • Require users to change their password after some time (password expiration)
        • Prevent password re-use
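      The character-class rules above can be sketched as a small local validator; the minimum length and the exact rule set are assumptions to tune per account:

```python
import re

# A minimal local sketch of the password-policy checks listed above.
# The minimum length and rule set are assumptions; tune them per account.
def check_password(password: str, min_length: int = 8) -> list:
    """Return the list of policy rules the password violates."""
    violations = []
    if len(password) < min_length:
        violations.append("minimum length")
    if not re.search(r"[A-Z]", password):
        violations.append("uppercase letter")
    if not re.search(r"[a-z]", password):
        violations.append("lowercase letter")
    if not re.search(r"[0-9]", password):
        violations.append("number")
    if not re.search(r"[^A-Za-z0-9]", password):
        violations.append("non-alphanumeric character")
    return violations

print(check_password("Weak1"))        # too short, no symbol
print(check_password("Str0ng!Pass"))  # passes every check -> []
```

      In a real account these same rules are set once via the IAM password-policy settings rather than enforced in application code.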

      1. A solutions architect is tasked with transferring 750 TB of data from a network-attached file system located at a branch office to Amazon S3 Glacier. The solution must avoid saturating the branch office's low-bandwidth internet connection. Order 10 AWS Snowball appliances, select an Amazon S3 bucket as the destination, and create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier; this is the MOST cost-effective solution.
        As the company's internet link is low-bandwidth, uploading directly to Amazon S3 (ready for transition to Glacier) would saturate the link. The best alternative is to use AWS Snowball appliances. A Snowball Edge appliance can hold up to 75 TB of data, so 10 devices would be required to migrate 750 TB of data.
        Snowball moves data into AWS using a hardware device, and the data is then copied into an Amazon S3 bucket. From there, lifecycle policies can transition the S3 objects to Amazon S3 Glacier.
        A Glacier vault cannot be set as the destination; it must be an S3 bucket. A VPC endpoint also cannot be enforced using a bucket policy.
        An AWS Direct Connect connection could be created to migrate the data straight into Amazon Glacier, but this is not the most cost-effective option and takes time to set up.
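      The device-count arithmetic above is simple enough to sketch; the 75 TB per-device capacity is the figure quoted in the note:

```python
import math

# Back-of-the-envelope Snowball device count for the migration above.
# The 75 TB per-device capacity is the figure quoted in the note.
def snowball_devices(total_tb: float, per_device_tb: float = 75) -> int:
    """Round up: a partially filled device is still a whole device."""
    return math.ceil(total_tb / per_device_tb)

print(snowball_devices(750))  # 750 TB -> 10 devices
```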

      2. After reviewing the cost optimization checks in AWS Trusted Advisor, a team finds that it has 10,000 Amazon Elastic Block Store (Amazon EBS) snapshots in its account that are more than 30 days old. The team determines that it needs to implement better governance for the lifecycle of its resources. Use a scheduled event in Amazon EventBridge (Amazon CloudWatch Events) to invoke AWS Step Functions to manage the snapshots, and schedule and run backups in AWS Systems Manager; these are the actions the team should take to automate the lifecycle management of the EBS snapshots with the LEAST effort.

      3. A company is hosting 60 TB of production-level data in an Amazon S3 bucket. A solution architect needs to bring that data on premises for quarterly audit requirements. This export of data must be encrypted while in transit. The company has low network bandwidth in place between AWS and its on-premises data center. The solutions architect should Deploy an AWS Storage Gateway volume gateway on AWS. Enable a 90-day replication window to transfer the data to meet these requirements.

      4. A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team's account. The other company wants to poll the queue without giving up its own account permissions to do so. A solution architect should Create an SQS access policy that provides the other company access to the SQS queue.

      5. A company is backing up on-premises databases to local file server shares using the SMB protocol. The company requires immediate access to 1 week of backup files to meet recovery objectives. Recovery after a week is less likely to occur, and the company can tolerate a delay in accessing those older backup files. A solutions architect should Deploy Amazon FSx for Windows File Server to create a file system with exposed file shares with sufficient storage to hold all the desired backups to meet these requirements with the LEAST operational effort.
      B-)

    • Multi Factor Authentication - MFA:
      • Users have access to the account and can possibly change configurations or delete resources in the AWS account
      • Want to protect Root Accounts and IAM users
      • MFA = password you know + security device you own
      • Alice > Password + MFA => Successful login
      • Main benefit of MFA:
        if a password is stolen or hacked, the account is not compromised

      MFA devices options in AWS:
      • Virtual MFA device: Google Authenticator (phone only), Authy (multi-device), Duo
        Support for multiple tokens on a single device.
      • Universal 2nd Factor (U2F) Security Key: YubiKey by Yubico (3rd party)
        Support for multiple root and IAM users using a single security key
      • Hardware Key Fob MFA Device:
        Provided by Gemalto (3rd party)
      • Hardware Key Fob MFA Device for AWS GovCloud (US):
        Provided by SurePassID (3rd party)

      How can users access AWS?:
      • To access AWS, have three options:
        • AWS Management Console (protected by password + MFA)
        • Command Line Interface (CLI): protected by access keys
        • Software Developer Kit (SDK) - for code: protected by access keys
      • Access Keys are generated through the AWS Console
      • Users manage their own access keys
      • Access Keys are secret, just like a password. Don't share them
      • Access Key ID ~= username
      • Secret Access Key ~= password

      Example Access Keys:
      • Access key ID: AKIAREOC3O54I7ZEOWVC
      • Secret Access Key: VEcVINNDMqR5VnywD/oXQ7YHRmIt7tDcKpATsq6q
      • Remember: don't share access keys

      IAM Roles for Services:
      • Some AWS services will need to perform actions on your behalf
      • To do so, assign permissions to AWS services with IAM Roles
      • Common roles:
        • EC2 Instance Roles
        • Lambda Function Roles
        • Roles for CloudFormation

      IAM Security Tools:
      • IAM Credentials Report (account-level)
        • a report that lists all account's users and the status of their various credentials
      • IAM Access Advisor (user-level)
        • Access advisor shows the service permissions granted to a user and when those services were last accessed.
        • Can use this information to revise policies.

      IAM Guidelines & Best Practices:
      • Don't use the root account except for AWS account setup
      • One physical user = One AWS user
      • Assign users to groups and assign permissions to groups
      • Create a strong password policy
      • Use and enforce the use of Multi Factor Authentication (MFA)
      • Create and use Roles for giving permissions to AWS services
      • Use Access Keys for Programmatic Access (CLI / SDK)
      • Audit permissions of account with the IAM Credentials Report
      • Never share IAM users & Access Keys

      Summary:
      • Users: mapped to a physical user, has a password for the AWS Console
      • Groups: contains users only
      • Policies: JSON documents that outline permissions for users or groups
      • Roles: for EC2 instances or AWS services
      • Security: MFA + password policies
      • Access Keys: access AWS using the CLI or SDK
      • Audit: IAM Credentials Report & IAM Access Advisor

      • An IAM entity that defines a set of permissions for making AWS service requests, that will be used by AWS services is a proper definition of IAM Roles.

      • IAM Credentials Report is an IAM Security Tool. It lists all account's users and the status of their various credentials.
        The other IAM Security Tool is an IAM Access Advisor. It shows the service permissions granted to a user and when those services were last accessed.

      1. A company has a mobile game that reads most of its metadata from an Amazon RDS DB instance. As the game increased in popularity, developers noticed slowdowns related to the game's metadata load times. Performance metrics indicate that simply scaling the database will not help. A solution architect must explore all options that include capabilities for snapshots, replication, and sub-millisecond response times. The solution architect should recommend Add an Amazon ElastiCache for Redis layer in front of the database to solve the issues.

      2. A company has implemented one of its micro-services on AWS Lambda that accesses an Amazon DynamoDB table named Books. A solution architect is designing an IAM policy to be attached to the Lambda function's IAM role, giving it access to put, update, and delete items in the Books table. The IAM policy must prevent the function from performing any other actions on the Books table or any other table. The IAM policy that would fulfill these needs and provide the LEAST privileged access is:
        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "PutUpdateDeleteOnBooks",
              "Effect": "Allow",
              "Action": [
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:DeleteItem"
              ],
              "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/Books"
            }
          ]
        }
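      A policy like this can also be generated programmatically; the helper below is an illustrative sketch (the function name is hypothetical), producing a single-statement document scoped to one table:

```python
import json

# Illustrative helper that generates a least-privilege policy like the one
# above: one statement, an explicit action list, and a single-table Resource.
def least_privilege_table_policy(table_arn: str, actions: list) -> str:
    policy = {
        "Version": "2012-10-17",  # the only currently valid policy version
        "Statement": [
            {
                "Sid": "PutUpdateDeleteOnBooks",
                "Effect": "Allow",
                "Action": [f"dynamodb:{a}" for a in actions],
                "Resource": table_arn,  # scoped to one table, nothing else
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(least_privilege_table_policy(
    "arn:aws:dynamodb:us-west-2:123456789012:table/Books",
    ["PutItem", "UpdateItem", "DeleteItem"],
))
```

      Note the `Version` value is the fixed policy-language version `2012-10-17`, not a date of your choosing.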

      3. A solution architect is designing the cloud architecture for a company that needs to host hundreds of machine learning models for its users. During startup, the models need to load up to 10 GB of data from Amazon S3 into memory, but they do not need disk access. Most of the models are used sporadically, but the users expect all of them to be highly available and accessible with low latency. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind an Application Load Balancer for each model solution meets the requirements and is MOST cost-effective.

      4. A company hosts a popular web application. The web application connects to a database running in a private VPC subnet. The web servers must be accessible only to customers on an SSL connection (Open an HTTPS port on the security group for the web servers and set the source to 0.0.0.0/0). The Amazon RDS for MySQL database must be accessible only from the web servers (Open the MySQL port on the database security group and attach it to the MySQL instance. Set the source to the web server security group). This is the solution a solution architect should design to meet the requirements without impacting applications.

      5. A company has a website deployed on AWS. The database backend is hosted on Amazon RDS for MySQL with a primary instance and five read replicas to support scaling needs. The read replicas should lag no more than 1 second behind the primary instance to support the user experience. As traffic on the website continues to increase, the replicas are falling further behind during periods of peak load, resulting in complaints from users when searches yield inconsistent results. A solution architect needs to reduce the replication lag as much as possible, with minimal changes to the application code or operational requirements. Migrate the database to Amazon Aurora MySQL. Replace the MySQL read replicas with Aurora Replicas and enable Aurora Auto Scaling solution meets these requirements.
      B-)

      • IAM Users:
        • Can belong to multiple groups
        • Don't have to belong to a group
        • Can have policies assigned to them
        • Access AWS using a username and a password

      • Don't use the root user account is an IAM best practice.
        Only use the root account to create the first IAM user, and for a few account and service management tasks. For everyday and administration tasks, use an IAM user with permissions.

      • JSON documents to define Users, Groups or Roles' permissions are IAM Policies.
        An IAM policy is an entity that, when attached to an identity or resource, defines their permissions.

      • Grant least privilege principle should apply regarding IAM Permissions.
        Don't give more permissions than the user needs.

      • Enable Multi-Factor Authentication (MFA) should do to increase root account security.
        It adds a layer of security, so even if password is stolen, lost or hacked, account is not compromised.

      EC2 sizing & configuration options:
      • Operating System (OS): Linux, Windows or Mac OS
      • How much:
        • compute power & cores (CPU)
        • random-access memory (RAM)
        • storage space: Network-attached (EBS & EFS) / hardware (EC2 Instance Store)
      • Network card: speed of the card, Public IP address
      • Firewall rules: security group
      • Bootstrap script (configure at first launch): EC2 User Data

      EC2 instance types: example
      • Instance: t2.micro, vCPU: 1, Mem 1 GiB, Storage: EBS-Only, Network Performance: Low to Moderate - is part of the AWS free tier (up to 750 hours per month)
      • t2.xlarge - 4 vCPU, Mem 16 GiB, Storage: EBS-Only, Network Performance: Moderate
      • c5d.4xlarge - 16 vCPU 32 GiB, Storage: 1 x 400 NVMe SSD, Network Performance: Up to 10 Gbps, EBS Bandwidth: 4.75 Gbps
      • r5.16xlarge - 64 vCPU 512 GiB, EBS-Only, Network Performance: 20 Gbps, EBS Bandwidth: 13.6 Gbps
      • m5.8xlarge - 32 vCPU 128 GiB, EBS-Only, Network 10 Gbps, EBS Bandwidth: 6.8 Gbps
        • m: instance class/family
        • 5: generation (AWS improves them over time)
        • 8xlarge: size within the instance class
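      The family / generation / size naming scheme above can be pulled apart with a small parser; the capability-suffix interpretation (e.g. "d" for NVMe disk, "n" for networking) is an assumption noted in comments:

```python
import re

# Parse an EC2 instance type name into the parts described above.
# The suffix meanings in the comments are assumptions for illustration.
def parse_instance_type(name: str) -> dict:
    match = re.fullmatch(r"([a-z]+)(\d+)([a-z]*)\.(\w+)", name)
    if not match:
        raise ValueError(f"unrecognized instance type: {name}")
    family, generation, attrs, size = match.groups()
    return {
        "family": family,            # m = general purpose, c = compute, ...
        "generation": int(generation),
        "attributes": attrs,         # e.g. "d" (local NVMe), "n" (networking)
        "size": size,                # micro, large, 8xlarge, ...
    }

print(parse_instance_type("m5.8xlarge"))
print(parse_instance_type("c5d.4xlarge"))
```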

      1. A company has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If the messages fail to process, they must be retained so that they do not impact the processing of any remaining messages. Integrate the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue. Configure a dead-letter queue to collect the messages that failed to process solution meets these requirements and is the MOST operationally efficient.

      2. A solutions architect must create a highly available bastion host architecture. The solution needs to be resilient within a single AWS Region and should require only minimal effort to maintain. The solutions architect should Create a Network Load Balancer backed by an Auto Scaling group with instances in multiple Availability Zones as the target to meet these requirements.

      3. A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its AWS resources and record a history of API calls made to these resources. A solution architect should Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.

      4. A solution architect is moving the static content from a public website hosted on Amazon EC2 instances to an Amazon S3 bucket. An Amazon CloudFront distribution will be used to deliver the static assets. The security group used by the EC2 instances restricts access to a limited set of IP ranges. Access to the static content should be similarly restricted. Combination of steps will meet these requirements are:
        Create an origin access identity (OAI) and associate it with the distribution. Change the permissions in the bucket policy so that only the OAI can read the objects.
        Create an AWS WAF web ACL that includes the same IP restrictions that exist in the EC2 security group. Associate this new web ACL with the CloudFront distribution.

      5. An application running on AWS uses an Amazon Aurora Multi-AZ deployment for its database. When evaluating performance metrics, a solution architect discovered that the database reads are causing high I/O and adding latency to the write requests against the database. The solution architect should Create read replica and modify the application to use the appropriate endpoint to separate the read requests from the write requests.
        Aurora Replicas are independent endpoints in an Aurora DB cluster, best used for scaling read operations and increasing availability. Up to 15 Aurora Replicas can be distributed across the Availability Zones that a DB cluster spans within an AWS Region.
        The DB cluster volume is made up of multiple copies of the data for the DB cluster. However, the data in the cluster volume is represented as a single, logical volume to the primary instance and to Aurora Replicas in the DB cluster.

        image

        Aurora Fault Tolerance:
        • Fault tolerance across 3 AZs
        • Single logical volume
        • Aurora Replicas scale-out read requests
        • Up to 15 Aurora Replicas with sub-10ms replica lag
        • Aurora Replicas are independent endpoints
        • Can promote Aurora Replica to be a new primary or create new primary
        • Set priority (tiers) on Aurora Replicas to control order of promotion
        • Can use Auto Scaling to add replicas
        As well as providing scaling for reads, Aurora Replicas are also targets for multi-AZ. In this case the solution architect can update the application to read from the Multi-AZ standby instance.

      6. A company has multiple AWS accounts with applications deployed in the us-west-2 Region. Application logs are stored within Amazon S3 buckets in each account. The company wants to build a centralized log analysis solution that uses a single S3 bucket. Logs must not leave us-west-2, and the company wants to incur minimal operational overhead. Configure S3 replication (Same-Region Replication) from each application S3 bucket to the centralized S3 bucket; this is MOST cost-effective. (A lifecycle policy cannot copy objects to another bucket.)

      7. A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Key must be rotated every year. Server-side encryption with AWS KMS (SSE-KMS) customer master keys (CMKs) with automatic rotation is the MOST operationally efficient.

      8. An application launched on Amazon EC2 instances needs to publish Personally Identifiable Information (PII) about customers using Amazon Simple Notification Service (Amazon SNS). The application is launched in private subnets within an Amazon VPC. Use AWS PrivateLink is the MOST secure way to allow the application to access service endpoints in the same AWS Region.
      B-)

    • EC2 instance types/families:

      • General Purpose:
        • Great for a diversity of workloads such as web servers or code repositories
        • Balance between Compute, Memory, and Networking
        • Mac, T4g, T3, T3a, T2, M6g, M5, M5a, M5n, M5zn, M4, and A1

      • Compute Optimized:
        • Great for compute-intensive tasks that require high performance processors:
          • Batch processing workloads
          • Media trans-coding
          • High Performance web servers / Computing (HPC)
          • Scientific modeling & machine learning
          • Dedicated gaming servers
        • C6g, C6gn, C5, C5a, C5n, and C4

      • Memory optimized:
        • Fast performance for workloads that process large data sets in memory
        • Use cases:
          • High performance, relational/non-relational databases
          • Distributed web scale cache stores
          • In-memory databases optimized for BI (Business Intelligence)
          • Applications performing real-time processing of big unstructured data
        • R6g, R5, R5a, R5b, R5n, R4, X1e, X1, High Memory, and z1d

      • Storage Optimized:
        • Great for storage-intensive tasks that require high, sequential read and write access to large data sets on local storage
        • Use cases:
          • High frequency OnLine Transaction Processing (OLTP) systems
          • Relational & NoSQL databases
          • Cache for in-memory databases (for example, Redis)
          • Data warehousing applications
          • Distributed file systems
        • I3, I3en, D2, D3, D3en, and H1

      • Accelerated computing

      Classic Ports to know:
      • 21 = FTP (File Transfer Protocol) - upload files into a file share
      • 22 = SSH (Secure Shell) - log into a Linux instance
         And SFTP (Secure File Transfer Protocol) - upload files using SSH
      • 80 = HTTP - access unsecured websites
      • 443 = HTTPS - access secured websites
      • 1433 = Microsoft SQL
      • 3389 = RDP (Remote Desktop Protocol) - log into a Windows instance
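      The list above doubles as a lookup table when writing security-group rules; a minimal sketch:

```python
# The classic ports above as a lookup table, handy when deciding which
# port to open in a security-group rule.
CLASSIC_PORTS = {
    21: "FTP",
    22: "SSH / SFTP",
    80: "HTTP",
    443: "HTTPS",
    1433: "Microsoft SQL",
    3389: "RDP",
}

def service_for_port(port: int) -> str:
    """Return the well-known service on a port, or 'unknown'."""
    return CLASSIC_PORTS.get(port, "unknown")

print(service_for_port(22), service_for_port(3389))
```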

      1. A company receives structured and semi-structured data from various sources once every day. A solution architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools. The solution architect should recommend Use Amazon EMR to process data and Amazon Redshift to store data to build the MOST high-performing solution.

      2. The company must optimize its S3 storage costs while maintaining high availability and resiliency of stored assets. An image hosting company uploads its large assets to Amazon S3 Standard buckets. The company uses multipart upload in parallel by using S3 APIs and overwrites if the same object is uploaded again (Configure an S3 Lifecycle policy to clean up expired object delete markers). For the first 30 days after upload the objects will be accessed frequently. The objects will be used less frequently after 30 days but the access patterns for each object will be inconsistent (Move assets to S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days).

      3. A company's legacy application is currently relying on a single-instance Amazon RDS MySQL database without encryption. Due to new compliance requirements, all existing and new data in this database must be encrypted. Take a snapshot of the RDS instance. Create an encrypted copy of the snapshot. And restore the RDS instance from the encrypted snapshot should this be accomplished.

      4. A company designed a stateless two-tier application that uses Amazon EC2 in a single Availability Zone and an Amazon RDS Multi-AZ DB instance. New company management wants to ensure the application is highly available. A solution architect should Configure Amazon Route 53 rules to handle incoming requests and create a multi-AZ Application Load Balancer.

      5. A company's dynamic website is hosted using on-premises servers in the United States. The company is launching its product in Europe and it wants to optimize site loading times for new European users. The site's backend must remain in the United States. The product is being launched in a few days, and an immediate solution is needed. The solution architect recommend Use Amazon CloudFront with a custom origin pointing to the on-premises servers.

      6. A global company plans to track and store information about local allergens in an Amazon DynamoDB table and query this data from its website. The company anticipates that website traffic will fluctuate. The company estimates that the combined read and write capacity units will range from 10 to 10,000 per second, depending on the severity of the conditions for the given day. A solution architect must design a solution that avoids throttling issues and manages capacity efficiently. The solutions architect should Use provisioned capacity mode and a scaling policy in DynamoDB auto scaling to meet MOST cost-effectively.

      7. A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users. A solution architect should recommend Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
        Rate limit:
        For a rate-based rule, enter the maximum number of requests to allow in any five-minute period from an IP address that matches the rule's conditions. The rate limit must be at least 100.
        Can specify a rate limit alone, or a rate limit and conditions. If specify only a rate limit, AWS WAF places the limit on all IP addresses. If specify a rate limit and conditions, AWS WAF places the limit on IP addresses that match the conditions.
        When an IP address reaches the rate limit threshold, AWS WAF applies the assigned action (block or count) as quickly as possible, usually within 30 seconds. Once the action is in place, if five minutes pass with no requests from the IP address, AWS WAF resets the counter to zero.
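      The rate-based rule behavior described above (count requests per IP over a five-minute window, block once over the limit) can be modeled with a toy sliding-window counter; this is only an illustration of the idea, not AWS WAF's actual implementation:

```python
from collections import defaultdict, deque

# Toy sliding-window model of a rate-based rule: block an IP once it
# exceeds `limit` requests within any `window_seconds` span.
# (Illustrative only; AWS WAF's internal mechanics are not public.)
class RateBasedRule:
    def __init__(self, limit: int = 100, window_seconds: int = 300):
        self.limit = limit
        self.window = window_seconds
        self.requests = defaultdict(deque)   # ip -> request timestamps

    def allow(self, ip: str, now: float) -> bool:
        q = self.requests[ip]
        while q and now - q[0] >= self.window:
            q.popleft()                      # drop requests outside the window
        q.append(now)
        return len(q) <= self.limit          # over the limit -> blocked

rule = RateBasedRule(limit=3, window_seconds=300)
results = [rule.allow("203.0.113.7", t) for t in (0, 1, 2, 3)]
print(results)  # fourth request inside the window is blocked
```

      Once five minutes pass with no requests from the IP, the window empties and the IP is allowed again, matching the counter-reset behavior described above.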

      8. A company is deploying an application that processes large quantities of data in parallel. The company plans to use Amazon EC2 instances for the workload. The network architecture must be configurable to provide the lowest possible latency between nodes. Place the EC2 instances in a single Availability Zone and run the EC2 instances in a cluster placement group.

      9. A company is migrating a Linux-based web server group to AWS. The web servers must access files in a shared file store for some content. To meet the migration date, minimal changes can be made. A solution architect should Create an Amazon Elastic File System (Amazon EFS) volume and mount it on all web servers.

      10. A mobile gaming company runs application servers on Amazon EC2 instances. The servers receive updates from players every 15 minutes. The mobile game creates a JSON object of the progress made in the game since the last update, and sends the JSON object an Application Load Balancer. As the mobile game is played, game updates are being lost. The company wants to create a durable way to get the updates in order. A solution architect should recommend Use Amazon simple Queue service (Amazon SQS) FIFO queue to capture the data and EC2 instances to process the messages in the queue to decouple the system.
      B-)

    • EC2 Nitro:
      • Underlying Platform for the next generation of EC2 instances
      • New virtualization technology
      • Allows for better performance:
        • Better networking options (enhanced networking, HPC, IPv6)
        • Higher Speed EBS (Nitro is necessary for 64,000 EBS IOPS - max 32,000 on non-Nitro)
      • Better underlying security
      • Instance types example:
        • Virtualized: A1, C5, C5a, C5ad, C5d, C5n, C6g, C6gd, C6gn, D3, D3en, G4, I3en, Inf1, M5, M5a, M5ad, M5d, M5dn, M5n, ....
        • Bare metal: a1.metal, c5.metal, c5d.metal, c5n.metal, c6g.metal, c6gd.metal...

      Understanding vCPU:
      • Multiple threads can run on one CPU (multithreading)
      • Each thread is represented as a virtual CPU (vCPU)
      • Example: m5.2xlarge: 4 CPU, 2 threads per CPU => 8 vCPU in total
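      The arithmetic above is just cores times threads per core:

```python
# vCPU count = physical cores x threads per core, as in the example above.
def vcpu_count(cores: int, threads_per_core: int = 2) -> int:
    return cores * threads_per_core

print(vcpu_count(4))     # 4 cores x 2 threads = 8 vCPU
print(vcpu_count(4, 1))  # multithreading disabled (HPC) -> 4 vCPU
```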

      Optimizing CPU options:
      • EC2 instances come with a combination of RAM and vCPU
      • But in some cases, may want to change the vCPU options:
        • # of CPU cores: can decrease it (helpful if need high RAM and low number of CPU) - to decrease licensing costs
        • # of threads per core: disable multithreading to have 1 thread per CPU - helpful for high performance computing (HPC) workloads
      • Only specified during instance launch

      Capacity Reservations:
      • Ensure you have EC2 capacity when needed
      • Manual or planned end-date for the reservation
      • No need for 1 or 3-year commitment
      • Capacity access is immediate, get billed as soon as it starts
      • Specify:
        • The Availability Zone in which to reserve the capacity (only one)
        • The number of instances for which to reserve capacity
        • The instance attributes, including the instance type, tenancy, and platform/OS
      • Combine with Reserved Instances and Savings Plans to do cost saving
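      The required attributes listed above map onto the request for EC2's CreateCapacityReservation API; the values below are placeholders, and the parameter names follow that API:

```python
# Request shape for EC2's CreateCapacityReservation API, covering the
# attributes listed above. All values are placeholders.
capacity_reservation_request = {
    "InstanceType": "m5.xlarge",
    "InstancePlatform": "Linux/UNIX",
    "AvailabilityZone": "us-east-1a",  # capacity is reserved in one AZ only
    "InstanceCount": 3,                # number of instances to reserve
    "EndDateType": "limited",          # planned end date ("unlimited" = manual)
    "EndDate": "2025-01-31T00:00:00Z",
}

print(capacity_reservation_request["AvailabilityZone"],
      capacity_reservation_request["InstanceCount"])
```

      Billing starts as soon as the reservation is active, whether or not instances are running in it, which is why pairing it with Reserved Instances or Savings Plans matters for cost.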

      Scalability & High Availability:
      • Scalability means that an application / system can handle greater loads by adapting.
      • There are two kinds of scalability:
        • Vertical Scalability
        • Horizontal Scalability (= elasticity)
      • Scalability is linked but different to High Availability
      • Let's deep dive into the distinction, using a call center as an example

      Vertical Scalability:
      • Means increasing the size of the instance
      • For example, application runs on a t2.micro
      • Scaling that application vertically means running it on a t2.large
      • Is very common for non-distributed systems, such as a database.
      • RDS, ElastiCache are services that can scale vertically.
      • There's usually a limit to how much can vertically scale (hardware limit)

      Horizontal Scalability:
      • Means increasing the number of instances / systems for application
      • Horizontal scaling implies distributed systems.
      • This is very common for web applications / modern applications

      1. A prediction process requires access to a trained model that is stored in an Amazon S3 bucket. The process takes a few seconds to process an image and make a prediction. The process is not overly resource-intensive, does not require any specialized hardware, and takes less than 512 MB of memory to run. AWS Lambda functions are the MOST effective compute solution for this use case.

      2. A solution architect is designing a new workload in which an AWS Lambda function will access an Amazon DynamoDB table. Create an IAM role with the necessary permissions to access the DynamoDB table. Assign the role to the Lambda function is the MOST secure means of granting the Lambda function access to the DynamoDB.

      3. A company is running a multi-tier web application on AWS. The application runs its database tier on Amazon Aurora MySQL. The application and database tiers are in the us-east-1 Region. A database administrator who regularly monitors the Aurora DB cluster finds that an intermittent increase in read traffic is creating high CPU utilization on the read replica and causing increased read latency of the application. A solution architect should Configure Aurora Auto Scaling for the read replica to improve read scalability.

      4. A company hosts a static website on-premises and wants to migrate the website to AWS. The website should load as quickly as possible for users around the world. The company also wants the most cost-effective solution/option. A solution architect should Copy the website content to an Amazon S3 bucket (Cheaper than EC2). Configure the bucket to serve/host static webpage content. To enable good performance for global users should Configure Amazon CloudFront with the S3 bucket as the origin. This will cache the static content around the world closer to users.

      5. A company wants to host a web application on AWS that will communicate to a database within a VPC. The application should be highly available. A solution architect should recommend Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.

      6. A recent analysis of a company's IT expenses highlights the need to reduce backup costs. The company's chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows. A solution architect should recommend Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.
        Tape Gateway:
        • Some companies have backup processes using physical tapes (!)
        • With Tape Gateway, companies use the same processes but in the cloud
        • Virtual Tape Library (VTL) backed by Amazon S3 and Glacier
        • Back up data using existing tape-based processes (and iSCSI interface)
        • Works with leading backup software vendors.
        image
      7. A company is preparing to deploy a data lake on AWS. A solution architect must define the encryption strategy for data at rest in Amazon S3. The company's security policy states:
        • Keys must be rotated every 90 days.
        • Strict separation of duties between key users and key administrators must be implemented.
        • Auditing key usage must be possible.
        The solutions architect should recommend Server-side encryption with AWS KMS managed keys (SSE-KMS) with customer managed customer master keys (CMKs).

      8. A recently acquired company is required to build its own infrastructure on AWS and migrate multiple applications to the cloud within a month. Each application has approximately 50 TB of data to be transferred. After the migration is complete, this company and its parent company will require secure network connectivity with consistent throughput from their data centers to the applications. A solutions architect must ensure one-time data migration and ongoing network connectivity. AWS Snowball for the initial transfer and AWS Direct Connect for ongoing connectivity will meet these requirements.

      9. A user has underutilized on-premises resources. The Elasticity AWS Cloud concept can BEST address this issue.
      B-)

    • EBS Snapshots:
      • Make a backup (snapshot) of EBS volume at a point in time
      • Not necessary to detach volume to do snapshot, but recommended
      • Can copy snapshots across AZ or Region

      AMI Overview:
      • AMI = Amazon Machine Image
      • AMI are a customization of an EC2 instance
        • Add own software, configuration, operating system, monitoring...
        • Faster boot / configuration time because all software is pre-packaged
      • AMI are built for a specific region (and can be copied across regions)
      • Can launch EC2 instances from:
        • A Public AMI: AWS provided
        • Own AMI: make and maintain them yourself
        • An AWS Marketplace AMI: an AMI someone else made (and potentially sells)

      AMI Process (from an EC2 instance):
      • Start an EC2 instance and customize it
      • Stop the instance (for data integrity)
      • Build an AMI - this will also create EBS snapshots
      • Launch instances from other AMIs

      EC2 Instance Store:
      • EBS volumes are network drives with good but 'limited' performance
      • If need a high-performance hardware disk, use EC2 Instance Store

      • Better I/O performance
      • EC2 Instance Store volumes lose their storage if the instance is stopped (ephemeral)
      • Good for buffer / cache / scratch data / temporary content
      • Risk of data loss if hardware fails
      • Backups and Replication are your responsibility

      Local EC2 Instance Store:

      EBS Volume Types Use cases:

      General Purpose SSD:
      • Cost effective storage, low-latency
      • System boot volumes, Virtual desktops, Development and test environments
      • 1 GiB - 16 TiB
      • gp3:
        • Baseline of 3,000 IOPS and throughput of 125 MiB/s
        • Can increase IOPS up to 16,000 and throughput up to 1,000 MiB/s independently
      • gp2:
        • Small gp2 volumes can burst IOPS to 3,000
        • Size of the volume and IOPS are linked, max IOPS is 16,000
        • 3 IOPS per GB, means at 5,334 GB we are at the max IOPS
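      The gp2 size-to-IOPS link above can be sketched as a small helper (the 100-IOPS floor for tiny volumes is an assumption based on AWS documentation; the notes only state the 3 IOPS/GB ratio and the 16,000 cap):

```python
def gp2_baseline_iops(size_gib: int) -> int:
    # 3 IOPS per GiB, assumed 100-IOPS floor, 16,000 IOPS cap
    return min(max(3 * size_gib, 100), 16_000)

print(gp2_baseline_iops(5_334))  # a 5,334 GiB volume is already at the 16,000 cap
```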

      Provisioned IOPS (PIOPS) SSD:
      • Critical business applications with sustained IOPS performance
      • Or applications that need more than 16,000 IOPS
      • Great for databases workloads (sensitive to storage perf and consistency)
      • io1/io2 (4 GiB - 16 TiB):
        • Max PIOPS: 64,000 for Nitro EC2 instances & 32,000 for other
        • Can increase PIOPS independently from storage size
        • io2 have more durability and more IOPS per GiB (at the same price as io1)
      • io2 Block Express (4 GiB - 64 TiB):
        • Sub-millisecond latency
        • Max PIOPS: 256,000 with an IOPS:GiB ratio of 1,000:1
      • Supports EBS Multi-attach

      Hard Disk Drives (HDD):

      EBS Multi-Attach - io1/io2 family:
      • Attach the same EBS volume to multiple EC2 instances in the same AZ
      • Each instance has full read & write permissions to the volume
      • Use case:
        • Achieve higher application availability in clustered Linux applications (ex: Teradata)
        • Applications must manage concurrent write operations
      • Must use a file system that's cluster-aware (not XFS, EXT4, etc...)


      1. A web application runs on Amazon EC2 instances behind an Application Load Balancer. The application allows users to create custom reports of historical weather data. Generating a report can take up to 5 minutes. These long-running requests use many of the available incoming connections, making the system unresponsive to other users. A solution architect can make the system more responsive by using Amazon SQS with AWS Lambda to generate reports.

      2. A solution architect should Update the bucket policy to deny if the PutObject request does not have an x-amz-server-side-encryption header set, to ensure that all objects uploaded to an Amazon S3 bucket are encrypted.
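      One hedged sketch of such a bucket policy, built as a Python dict (the bucket name is a placeholder; the `Null` condition operator denies uploads where the header is absent):

```python
import json

# Deny s3:PutObject when the x-amz-server-side-encryption header is missing
# ("my-bucket" is a placeholder bucket name).
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }],
}

print(json.dumps(bucket_policy, indent=2))
```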

      3. An ecommerce website is deploying its web application as Amazon Elastic Container Service (Amazon ECS) container instances behind an Application Load Balancer (ALB). During periods of high activity, the website slows down and availability is reduced. A solution architect uses Amazon CloudWatch alarms to receive notifications whenever there is an availability issue so they can scale out resources. Company management wants a solution that automatically responds to such events. The architect should set up AWS Auto Scaling to scale out the ECS service when there are timeouts on the ALB, and set up AWS Auto Scaling to scale out the ECS cluster when the CPU or memory reservation is too high.

      4. A company runs a web service on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across two Availability Zones. The company needs a minimum of four instances at all times to meet the required service level agreement (SLA) while keeping costs low. If an Availability Zone fails, the company can remain compliant with the SLA by changing the Auto Scaling group to use eight servers across two Availability Zones.

      5. A manufacturing company wants to implement predictive maintenance on its machinery equipment. The company will install thousands of IoT sensors that will send data to AWS in real time. A solution architect is tasked with implementing a solution that will receive events in an ordered manner for each machinery asset and ensure that data is saved for further processing at a later time. Using Amazon Kinesis Data Streams for real-time events with a partition for each equipment asset, and Amazon Kinesis Data Firehose to save the data to Amazon S3, would be the MOST efficient solution.
        Amazon SQS Introduces FIFO Queues with Exactly-Once Processing and Lower Prices for Standard Queues. Can now use Amazon Simple Queue Service (SQS) for applications that require messages to be processed in a strict sequence and exactly once using First-in, First-out (FIFO) queues. FIFO queues are designed to ensure that the order in which messages are sent and received is strictly preserved and that each message is processed exactly once.
        Amazon SQS is a reliable and highly-scalable managed message queue service for storing messages in transit between application components. FIFO queues complement the existing Amazon SQS standard queues, which offer high throughput, best-effort ordering, and at-least-once delivery. FIFO queues have essentially the same feature as standard queues, but provide the added benefits of supporting ordering and exactly-once processing. FIFO queues provide additional features that help prevent unintentional duplicates from being sent by message producers or from being received by message consumers. Additionally, message groups allow multiple separate ordered message streams within the same queue.
      B-)

    • EBS Encryption:
      • When you create an encrypted EBS volume, you get the following:
        • Data at rest is encrypted inside the volume
        • All the data in flight moving between the instance and the volume is encrypted
        • All snapshots are encrypted
        • All volumes created from the snapshot are encrypted
      • Encryption and decryption are handled transparently (have nothing to do)
      • Encryption has a minimal impact on latency
      • EBS Encryption leverages keys from KMS (AES-256)
      • Copying an unencrypted snapshot allows encryption
      • Snapshots of encrypted volumes are encrypted

      Encryption: encrypt an unencrypted EBS volume:
      • Create an EBS snapshot of the volume
      • Encrypt the EBS snapshot (using copy)
      • Create new EBS volume from the snapshot (the volume will also be encrypted)
      • Now can attach the encrypted volume to the original instance

      EBS RAID Options:
      • EBS is already redundant storage (replicated within an AZ)
      • But what if want to increase IOPS to say 100,000 IOPS?
      • What if want to mirror EBS volumes?
      • Would mount volumes in parallel in RAID settings!
      • RAID is possible as long as OS supports it
      • Some RAID options are: 0, 1, 5 & 6 (RAID 5 & 6 are not recommended for EBS)

      RAID 0 (increase performance):
      • Combining 2 or more volumes and getting the total disk space and I/O
      • But if one disk fails, all the data is lost
      • Use cases would be:
        • An application that needs a lot of IOPS and doesn't need fault-tolerance
        • A database that has replication already built-in
      • Using this, can have a very big disk with a lot of IOPS
      • For example:
        • two 500 GiB Amazon EBS io1 volumes with 4,000 provisioned IOPS each will create a...
        • 1,000 GiB RAID 0 array with an available bandwidth of 8,000 IOPS and 1,000 MB/s of throughput

      RAID 1 (increase fault tolerance):
      • Mirroring a volume to another
      • If one disk fails, logical volume is still working
      • Have to send the data to two EBS volumes at the same time (2 x network)
      • Use case:
        • Applications that need increased volume fault tolerance
        • Applications where you need to service disks
      • For example:
        • two 500 GiB Amazon EBS io1 volumes with 4,000 provisioned IOPS each will create a...
        • 500 GiB RAID 1 array with an available bandwidth of 4,000 IOPS and 500 MB/s of throughput
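      The two worked examples above can be reproduced with a small helper that models the capacity/IOPS arithmetic (a sketch of the math only, not of any AWS API):

```python
def raid_array(mode: int, size_gib: int, iops: int, volumes: int = 2):
    """Approximate usable capacity and IOPS for identical EBS volumes."""
    if mode == 0:      # striping: capacity and I/O add up
        return volumes * size_gib, volumes * iops
    if mode == 1:      # mirroring: usable capacity and I/O of a single volume
        return size_gib, iops
    raise ValueError("only RAID 0 and RAID 1 are sketched here")

assert raid_array(0, 500, 4_000) == (1_000, 8_000)  # the RAID 0 example above
assert raid_array(1, 500, 4_000) == (500, 4_000)    # the RAID 1 example above
```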

      EFS - Elastic File System:
      • Managed NFS (network file system) that can be mounted on many EC2
      • Works with EC2 instances in multi-AZ
      • Highly available, scalable, expensive (3 x gp2), pay per use

      • Use case: content management, web serving, data sharing, WordPress
      • Uses NFSv4.1 protocol
      • Uses security group to control access to EFS
      • Compatible with Linux based AMI (not Windows)
      • Encryption at rest using KMS

      • POSIX file system (~Linux) that has a standard file API
      • File system scales automatically, pay-per-use, no capacity planning!

      Performance & Storage Classes:
      • EFS Scale:
        • 1,000s of concurrent NFS clients, 10 GB+/s throughput
        • Grow to Petabyte-scale network file system, automatically
      • Performance mode (set at EFS creation time):
        • General purpose (default): latency-sensitive use cases (web server, CMS, etc...)
        • Max I/O - higher latency, throughput, highly parallel (big data, media processing)
      • Throughput mode:
        • Bursting (1 TB = 50MiB/s + burst of up to 100MiB/s)
        • Provisioned: set throughput regardless of storage size, ex: 1 GiB/s for 1 TB storage
      • Storage Tiers (lifecycle management feature - move file after N days):
        • Standard: for frequently accessed files
        • Infrequent access (EFS-IA): cost to retrieve files, lower price to store
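      The Bursting throughput figures scale with the data stored; a tiny sketch of that rule (the linear per-TiB scaling beyond 1 TB is an assumption based on AWS documentation, since the note above only gives the 1 TB numbers):

```python
def efs_bursting_throughput(size_tib: float):
    # Assumed linear scaling: 50 MiB/s baseline and up to 100 MiB/s burst per TiB
    baseline_mibps = 50.0 * size_tib
    burst_mibps = 100.0 * size_tib
    return baseline_mibps, burst_mibps

print(efs_bursting_throughput(1))  # (50.0, 100.0), matching the 1 TB example
```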

      Installing the Amazon EFS client on Amazon Linux 2:
      • sudo yum install -y amazon-efs-utils
      • mkdir efs
      • ls
      • sudo mount -t efs -o tls fs-2571d165:/ efs
      The EFS folder will be a shared folder for both EC2 instances

      EBS vs EFS:

      Elastic Block Storage:
      • EBS volumes...:
        • can be attached to only one instance at a time
        • are locked at the Availability Zone (AZ) level
        • gp2: IO increases if the disk size increases
        • io1: can increase IO independently
      • To migrate an EBS volume across AZ:
        • Take a snapshot
        • Restore the snapshot to another AZ
        • EBS backups use IO and shouldn't run them while application is handling a lot of traffic
      • Root EBS Volumes of instances get terminated by default if the EC2 instance gets terminated. (can disable that)

      1. A company sells datasets to customers who do research in artificial intelligence and machine learning (AIML). The datasets are large formatted files that are stored in an Amazon S3 bucket in the us-east-1 Region. The company hosts a web application that the customers use to purchase access to a given dataset. The web application is deployed on multiple Amazon EC2 instances behind an Application Load Balancer. After a purchase is made, customers receive an S3 signed URL that allows access to the files.
        The customers are distributed across North America and Europe. The company wants to reduce the cost that is associated with data transfers and wants to maintain or improve performance. A solution architect should Configure S3 Transfer Acceleration on the existing S3 bucket. Direct customer requests to the S3 Transfer Acceleration endpoint. Continue to use S3 signed URLs for access control.

      2. A company is running a multi-tier ecommerce web application in the AWS Cloud. The application runs on Amazon EC2 Instances with an Amazon RDS MySQL Multi-AZ DB instance. Amazon RDS is configured with the latest generation instance with 2,000 GB of storage in an Amazon EBS General Purpose SSD (gp2) volume. The database performance impacts the application during periods of high demand.
        After analyzing the logs in Amazon CloudWatch Logs, a database administrator finds that the application performance always degrades when the number of read and write IOPS is higher than 6,000. A solution architect should Replace the volume with a Provisioned IOPS (PIOPS) volume to improve the application performance.

      3. A solution architect needs to design a low-latency solution for a static single-page application accessed by users utilizing a custom domain name. The solution must be serverless, encrypted in transit, and cost-effective. Amazon S3 and CloudFront combination of AWS services and features should the solution architect use.

      4. A company purchased Amazon EC2 Partial Upfront Reserved Instances for a 1-year term. A solutions architect wants to analyze how much the daily effective cost is with all possible discounts. The solutions architect must choose the Show amortized costs view in the advanced options of Cost Explorer to get the correct values.
      B-)

    • Elastic File System:
      • Mounting 100s of instances across AZ
      • EFS share website files (WordPress)
      • Only for Linux Instances (POSIX)

      • EFS has a higher price point than EBS
      • Can leverage EFS-IA for cost savings

      • An instance in us-east-1a just got terminated, and the attached EBS volume is now available. A colleague can't attach it to an instance in us-east-1b because EBS volumes are AZ locked.
        EBS Volumes are created for a specific AZ. It is possible to migrate them between different AZs through backup and restore.

      • Have provisioned an 8 TB gp2 EBS volume and are running out of IOPS. Mounting EBS volumes in RAID 0 or changing to an io1 volume type are ways to increase performance.
        gp2 IOPS peaks at 16,000, the equivalent of a 5,334 GB volume.

      • RAID 0 leverages EBS volumes in parallel to linearly increase performance, while accepting greater failure risks.

      • Although EBS is already a replicated solution, the company SysOps advised using RAID 1 mode, which mirrors data and allows the instance to not be affected if an EBS volume entirely fails.

      • Mount an EFS to have the same data accessible as an NFS drive cross-AZ on all EC2 instances.
        EFS is a network file system (NFS) and allows to mount the same file system on EC2 instances that are in different AZ.

      • An Instance Store provides a high-performance cache for an application whose cache mustn't be shared, and where losing the cache upon termination of the instance is acceptable.
        Instance Store provide the best disk performance.

      • An EC2 Instance Store can run a high-performance database that requires 210,000 IOPS for its underlying filesystem.
        It is possible to run a database on EC2. It is also possible to use an instance store, but there are some considerations: the data will be lost if the instance is stopped, though it can be restarted without problems. One can also set up a replication mechanism on another EC2 instance with an instance store to have a standby copy, as well as back-up mechanisms. It's all up to how you want to set up the architecture to validate the requirements.

      AWS RDS Overview:
      • RDS stands for Relational Database Service
      • It's a managed DB service for DBs that use SQL as a query language.
      • It allows to create databases in the cloud that are managed by AWS
        • PostgreSQL
        • MySQL
        • MariaDB
        • Oracle
        • Microsoft SQL Server
        • Aurora (AWS Proprietary database)

      Advantages of using RDS versus deploying a DB on EC2:
      • RDS is a managed service:
        • Automated provisioning, OS patching
        • Continuous backups and restore to specific timestamp (Point in Time Restore)!
        • Monitoring dashboards
        • Read replicas for improved read performance
        • Multi AZ setup for DR (Disaster Recovery)
        • Maintenance windows for upgrades
        • Scaling capability (vertical and horizontal)
        • Storage backed by EBS (gp2 or io1)
      • BUT can't SSH into instances

      1. A company receives data from different sources and implements multiple applications to consume this data. There are many short-running jobs that run only on the weekend. The data arrives in batches rather than throughout the entire weekend. The company needs an environment on AWS to ingest and process this data while maintaining the order of the transactions. Amazon Simple Queue Service (Amazon SQS) with AWS Lambda is the MOST cost-effective solution.

      2. A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in copyright restrictions the chief information officer (CIO) wants to block access for certain countries. Use Amazon CloudFront to serve the application and deny access to blocked countries.
        To 'block access for certain countries', use geo restriction, also known as geo blocking, to prevent users in specific geographic locations from accessing content distributed through a CloudFront web distribution.

      3. A company has three VPCs named Development, Testing, and Production in the us-east-1 Region. The three VPCs need to be connected to an on-premises data center and are designed to be separate to maintain security and prevent any resource sharing. A solution architect needs to find a scalable and secure solution. The solution architect should recommend Create VPC peers from all the VPCs to the Production VPC. Use an AWS Direct Connect connection from the Production VPC back to the data center.

      4. A company runs an internal browser-based application. The application runs on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales up to 20 instances during work hours, but scales down to 2 instances overnight. Staff are complaining that the application is very slow when the day begins, although it runs well by mid-morning. To address the staff complaints and keep costs to a minimum, implement a scheduled action that sets the desired capacity to 20 shortly before the office opens.

      5. A company is rolling out a new web service, but is unsure how many customers the service will attract. However, the company is unwilling to accept any downtime. A solution architect could recommend Amazon RDS to the company to keep downtime to a minimum.

      6. A solution architect needs to design a centralized logging solution for a group of web applications running on Amazon EC2 instances. The solution requires minimal development effort due to budget constraints. The architect should recommend installing and configuring the Amazon CloudWatch Logs agent on the Amazon EC2 instances.

      7. A company has an application workflow that uses an AWS Lambda function to download and decrypt files from Amazon S3. These files are encrypted using AWS Key Management Service Customer Master Keys (AWS KMS CMKs). A solution architect needs to design a solution that will ensure the required permissions are set correctly: grant the decrypt permission for the Lambda IAM role in the KMS key's policy, and create a new IAM role with the kms:Decrypt permission and attach it as the execution role of the Lambda function.

      8. A company has developed a micro-services application. It uses a client-facing API with Amazon API Gateway and multiple internal services hosted on Amazon EC2 instances to process user requests. The API is designed to support unpredictable surges in traffic, but internal services may become overwhelmed and unresponsive for a period of time during surges. A solution architect needs to design a more reliable solution that reduces errors when internal services become unresponsive or unavailable. Should Use Amazon Simple Queue Service (Amazon SQS) to store user requests as they arrive. Change the internal services to retrieve the requests from the queue for processing.

      9. A company stores user data in AWS. The data is used continuously with peak usage during business hours. Access patterns vary, with some data not being used for months at a time. A solution architect must choose a cost-effective solution that maintains the highest level of durability while maintaining high availability. Should use the Amazon S3 Intelligent-Tiering storage solution.
      B-)

    • RDS Backups:
      • Backups are automatically enabled in RDS
      • Automated backups:
        • Daily full backup of the database (during the maintenance window)
        • Transaction logs are backed up by RDS every 5 minutes
        • => ability to restore to any point in time (from oldest backup to 5 minutes ago)
        • 7 days retention (can be increased to 35 days)

      • DB Snapshots:
        • Manually triggered by the user
        • Retention of backup for as long as you want

      Storage Auto Scaling:
      • Helps increase storage on RDS DB instance dynamically
      • When RDS detects it is running out of free database storage, it scales automatically
      • Avoid manually scaling database storage
      • Have to set Maximum Storage Threshold (maximum limit for DB storage)
      • Automatically modify storage if:
        • Free storage is less than 10% of allocated storage
        • Low-storage lasts at least 5 minutes
        • 6 hours have passed since last modification
      • Useful for applications with unpredictable workloads
      • Supports all RDS database engines (MariaDB, MySQL, PostgreSQL, SQL Server, Oracle)
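      The three trigger conditions above can be expressed as a single predicate (a sketch of the documented rule, not an AWS API call):

```python
def should_autoscale_storage(free_gib: float, allocated_gib: float,
                             low_storage_minutes: int,
                             hours_since_last_change: float) -> bool:
    # All three conditions from the list above must hold
    return (free_gib < 0.10 * allocated_gib
            and low_storage_minutes >= 5
            and hours_since_last_change >= 6)

assert should_autoscale_storage(5, 100, 10, 7) is True
assert should_autoscale_storage(20, 100, 10, 7) is False  # plenty of free space
```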

      Read Replicas for read scalability:
      • Up to 5 Read Replicas
      • Within AZ, Cross AZ or Cross Region
      • Replication is ASYNC, so reads are eventually consistent
      • Replicas can be promoted to their own DB
      • Applications must update the connection string to leverage read replicas

      Read Replicas:

      Use Cases:
      • Have a production database that is taking on normal load
      • Want to run a reporting application to run some analytics
      • Create a Read Replica to run the new workload there
      • The production application is unaffected
      • Read replicas are used for SELECT (=read) only kind of statements (not INSERT, UPDATE, DELETE)
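      Since applications must point read traffic at the replica themselves, the routing decision can be sketched like this (the endpoint hostnames are made-up placeholders):

```python
def pick_endpoint(sql: str) -> str:
    # SELECT (read) statements go to the read replica; everything else
    # (INSERT, UPDATE, DELETE) stays on the primary instance
    primary = "prod-db.example.rds.amazonaws.com"
    replica = "prod-db-replica.example.rds.amazonaws.com"
    return replica if sql.lstrip().upper().startswith("SELECT") else primary

assert "replica" in pick_endpoint("SELECT * FROM reports")
assert "replica" not in pick_endpoint("INSERT INTO orders VALUES (1)")
```

      In practice a driver or proxy often handles this split; since replication is asynchronous, reads from the replica are eventually consistent.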

      Network Cost:
      • In AWS there's a network cost when data goes from one AZ to another
      • For RDS Read Replicas within the same region, don't pay that fee

      Multi AZ (Disaster Recovery):
      • SYNC replication
      • One DNS name - automatic app failover to standby
      • Increase availability
      • Failover in case of loss of AZ, network, instance or storage failure
      • No manual intervention in apps
      • Not used for scaling

      • The Read Replicas can be set up as Multi AZ for Disaster Recovery (DR)

      From Single to Multi-AZ:
      • Zero downtime operation (no need to stop the DB)
      • Just click on 'modify' for the database
      • The following happens internally:
        • A snapshot is taken
        • A new DB is restored from the snapshot in a new AZ
        • Synchronization is established between the two databases


      Security - Encryption:
      • At rest encryption:
        • Possibility to encrypt the master & read replicas with AWS KMS - AES-256 encryption
        • Encryption has to be defined at launch time
        • If the master is not encrypted, the read replicas cannot be encrypted
        • Transparent Data Encryption (TDE) available for Oracle and SQL Server

      • In-flight encryption:
        • SSL certificates to encrypt data to RDS in flight
        • Provide SSL options with trust certificate when connecting to database
        • To enforce SSL:
          • PostgreSQL: rds.force_ssl=1 in the AWS RDS Console (Parameter Groups)
          • MySQL: Within the DB:
            GRANT USAGE ON *.* TO 'mysqluser'@'%' REQUIRE SSL;

      1. A company's near-real-time streaming application is running on AWS. As the data is ingested, a job runs on the data and takes 30 minutes to complete. The workload frequently experiences high latency due to large amounts of incoming data. A solution architect needs to design a scalable and serverless solution to enhance performance. The solution architect should Use Amazon Kinesis Data Firehose to ingest the data and Amazon EC2 instances in an Auto Scaling group to process it.

      2. A company that operates a web application on premises is preparing to launch a newer version of the application on AWS. The company needs to route requests to either the AWS-hosted or the on-premises-hosted application based on the URL query string. The on-premises application is not available from the internet, and a VPN connection is established between Amazon VPC and the company's data center. The company wants to use an Application Load Balancer (ALB) for this launch. Should Use two ALBs: one for on premises and one for the AWS resource. Add hosts to the target group of each ALB. Create a software router on an EC2 instance based on the URL query string.

      3. A start-up company has a web application based in the us-east-1 Region with multiple Amazon EC2 instances running behind an Application Load Balancer across multiple Availability Zones. As the company's user base grows in the us-west-1 Region, it needs the solution with low latency and high availability. A solution architect should Provision EC2 instances and configure an Application Load Balancer in us-west-1. Create an accelerator in AWS Global Accelerator that uses an endpoint group that includes the load balancer endpoints in both Regions.

      4. A company hosts its application in the AWS Cloud. The application runs on Amazon EC2 instances behind an Elastic Load Balancer in an Auto Scaling group and with an Amazon DynamoDB table. The company wants to ensure the application can be made available in another AWS Region with minimal downtime. To meet this with the LEAST amount of downtime, a solution architect should Create an AWS CloudFormation template to create EC2 instances and a load balancer to be executed when needed. Configure the DynamoDB table as a global table. And configure DNS failover to point to the new disaster recovery Region's load balancer.

      5. A solution architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solution architect must minimize the costs of storing and retrieving the media files by use S3 Intelligent-Tiering.

      6. A company recently deployed a new auditing system to centralize information about operating system versions, patching, and installed software for Amazon EC2 instances. A solution architect must ensure all instances provisioned through EC2 Auto Scaling groups successfully send reports to the auditing system as soon as they are launched and terminated. Using EC2 Auto Scaling lifecycle hooks to execute a custom script that sends data to the audit system when instances are launched and terminated is the MOST efficient solution.

      7. A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images. Creating an Amazon S3 bucket and hosting the website there is the MOST cost-effective option.
      B-)

    • AWS Cost Calculator Overview:

      Pricing Philosophy - High-volume / low-margin business is in the core DNA:
      • Trade CapEx for variable expense: Pay for what use
      • Economies of scale provide lower costs: 80 price reductions since 2006
      • Pricing model choice to support variable and stable workloads: On-demand / Spot / Reserved Instances / Savings Plans
      • Save more money as you grow bigger: Tiered pricing, Volume discounts, and Custom pricing

      Compute:
      Amazon EC2:
      • Linux | Windows
      • Arm and x86 architectures
      • General purpose and workload optimized
      • Bare metal, disk, networking capabilities
      • Packaged | Custom | Community AMIs
      • Multiple purchase options: On-demand, RI, Spot

      Operating Systems Supported:
      • Windows 2003R2 / 2008 / 2008R2 / 2012 / 2012R2 / 2016 / 2019
      • Amazon Linux
      • Debian
      • SUSE
      • CentOS
      • Red Hat Enterprise Linux
      • Ubuntu
      • Etc.

      Processor and architecture:
      • Intel® Xeon® Scalable (Skylake) processor
      • NVIDIA V100 Tensor Core GPUs
      • AMD EPYC processor
      • Amazon ARM based Cloud Processor
      • FPGAs for custom hardware acceleration
      Right compute for the right application and workload

      Naming Explained - c5n.xlarge:
      • c: Instance family
        • a: ARM
        • c: Compute Intensive
        • d: Dense storage
        • f: FPGA
        • g: GPU (Graphics Intensive)
        • h: HDD (Big Data Optimized)
        • i: high I/O
        • m: Most scenarios (General Purpose)
        • p: Premium GPU (General Purpose GPU)
        • r: Random-access (Memory Optimized)
        • t: Turbo (Burstable performance); a1 also
        • x: eXtra-large (In-memory)
        • z: high frequency (Compute and Memory Intensive)
      • 5: Instance generation
      • n: Attribute
      • xlarge: Instance size
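      The naming scheme above is regular enough to split mechanically. Below is a small sketch (plain Python, no AWS dependency; the family table is a subset of the mnemonics listed above):

```python
import re

# Family mnemonics, taken from the list above (subset, for illustration).
FAMILIES = {"a": "ARM", "c": "Compute Intensive", "m": "General Purpose",
            "r": "Memory Optimized", "t": "Burstable performance"}

def parse_instance_type(name):
    """Split e.g. 'c5n.xlarge' into family, generation, attribute, size."""
    prefix, size = name.split(".")
    m = re.fullmatch(r"([a-z]+)(\d+)([a-z-]*)", prefix)
    family, generation, attribute = m.group(1), int(m.group(2)), m.group(3)
    return {"family": FAMILIES.get(family, family),
            "generation": generation,
            "attribute": attribute,   # e.g. 'n' for enhanced networking
            "size": size}

print(parse_instance_type("c5n.xlarge"))
# {'family': 'Compute Intensive', 'generation': 5, 'attribute': 'n', 'size': 'xlarge'}
```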

      1. Which service?
      2. Which instance type?
        • EC2: General Purpose / Compute / Memory / etc.
        • S3
      3. Purchasing Options:
        • On-Demand: Pay for compute capacity by the second with no long-term commitments
        • Reserved Instances
        • Spot Instances
        • Savings Plans: more flexible than Reserved Instances, which fix the instance type
        To optimize EC2, combine three purchase options!
      4. gp3 vs gp2: gp3 is recommended, with better performance at a lower price
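      As a toy illustration of combining the purchase options above, the sketch below compares hourly rates. The numbers are hypothetical, made up purely for illustration; real prices come from the AWS pricing pages.

```python
# Hypothetical hourly rates for one instance type (illustrative only).
RATES = {"on_demand": 0.17, "reserved_1yr": 0.10, "spot": 0.05}

def monthly_cost(option, hours=730):
    """Cost of running one instance for a month (~730 hours)."""
    return round(RATES[option] * hours, 2)

def blended_cost(baseline_hours, burst_hours):
    """Combine options: steady baseline on RIs, bursty extra capacity on Spot."""
    return round(baseline_hours * RATES["reserved_1yr"]
                 + burst_hours * RATES["spot"], 2)

print(monthly_cost("on_demand"))   # 124.1
print(blended_cost(730, 200))      # 83.0 -- cheaper than on-demand alone
```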

      RDS Encryption Operations:
      • Encrypting RDS backups:
        • Snapshots of:
          • un-encrypted RDS database are un-encrypted
          • encrypted RDS database are encrypted
        • Can copy a snapshot into an encrypted one

      • To encrypt an un-encrypted RDS database:
        • Create a snapshot of the un-encrypted database
        • Copy the snapshot and enable encryption for the snapshot
        • Restore the database from the encrypted snapshot
        • Migrate applications to the new database, and delete the old database
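      The four steps above can be sketched as an ordered list of RDS API calls. The sketch only builds the calls (the names match boto3's RDS client methods) rather than issuing them, so it runs offline; all identifiers are hypothetical.

```python
def encryption_steps(db_id, kms_key_id):
    """Ordered RDS API calls to encrypt an existing unencrypted database."""
    plain_snap = f"{db_id}-plain"
    enc_snap = f"{db_id}-encrypted"
    return [
        # 1. Snapshot the unencrypted database (the snapshot stays unencrypted).
        ("create_db_snapshot",
         {"DBSnapshotIdentifier": plain_snap, "DBInstanceIdentifier": db_id}),
        # 2. Copy the snapshot, enabling encryption on the copy.
        ("copy_db_snapshot",
         {"SourceDBSnapshotIdentifier": plain_snap,
          "TargetDBSnapshotIdentifier": enc_snap, "KmsKeyId": kms_key_id}),
        # 3. Restore a new, encrypted instance from the encrypted snapshot.
        ("restore_db_instance_from_db_snapshot",
         {"DBInstanceIdentifier": f"{db_id}-enc",
          "DBSnapshotIdentifier": enc_snap}),
    ]

print([name for name, _ in encryption_steps("appdb", "alias/app-key")])
```

      Step 4 (migrating applications and deleting the old database) is operational rather than a single API call, so it is left out of the sketch.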

      Security - Network & IAM:
      • Network Security:
        • RDS databases are usually deployed within a private subnet, not in a public one
        • RDS security works by leveraging security groups (the same concept as for EC2 instances) - it controls which IP / security group can communicate with RDS

      • Access Management:
        • IAM policies help control who can manage AWS RDS (through the RDS API)
        • A traditional username and password can be used to log in to the database
        • IAM-based authentication can be used to log in to RDS MySQL & PostgreSQL

      IAM Authentication:
      • IAM database authentication works with MySQL and PostgreSQL
      • No password is needed, just an authentication token obtained through IAM & RDS API calls
      • Auth token has a lifetime of 15 minutes

      • Benefits:
        • Network in/out must be encrypted using SSL
        • IAM to centrally manage users instead of DB
        • Can leverage IAM Roles and EC2 Instance profiles for easy integration

      Security - Summary:
      • Encryption at rest:
        • Can only be enabled when the DB instance is first created
        • Or: unencrypted DB => snapshot => copy the snapshot with encryption => create a DB from the snapshot
      • Your responsibility:
        • Check the inbound Port / IP / security group rules in the DB's security group
        • Create in-database users and permissions, or manage them through IAM
        • Create the database with or without public access
        • Ensure the Parameter group or DB is configured to allow only SSL connections
      • AWS's responsibility:
        • No SSH access
        • No manual database patching
        • No manual OS patching
        • No way to audit the underlying instance

      1. A medical records company is hosting an application on Amazon EC2 instances. The application processes customer data files that are stored on an Amazon EC2 instance. The EC2 instances access Amazon S3 over the internet, but they do not require any other network access. A new requirement mandates that the network traffic for file transfers take a private route and not be sent over the internet. A solution architect should recommend this change to the network architecture: move the EC2 instances to private subnets, create a VPC endpoint for Amazon S3, and link the endpoint to the route table for the private subnets.

      2. A software vendor is deploying a new software-as-a-service (SaaS) solution that will be utilized by many AWS users. The service is hosted in a VPC behind a Network Load Balancer. The software vendor wants to provide access to this service to users with the least amount of administrative overhead and without exposing the service to the public internet. A solution architect should connect the service in the VPC with an AWS PrivateLink endpoint and have users subscribe to the endpoint.

      3. A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solution architect must devise a strategy that maximizes security without increasing operational overhead. The solution architect should deploy a NAT gateway in the public subnets and modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
      B-)

    • Amazon Aurora:
      • Is a proprietary technology from AWS (not open sourced)
      • Postgres and MySQL are both supported as Aurora DB (that means drivers will work as if Aurora was a Postgres or MySQL database)
      • Is 'AWS cloud optimized' and claims a 5x performance improvement over MySQL on RDS and over 3x the performance of Postgres on RDS
      • Storage automatically grows in increments of 10GB, up to 64 TB.
      • Can have 15 replicas while MySQL has 5, and the replication process is faster (sub 10 ms replica lag)
      • Failover is instantaneous. It's HA native.
      • Costs more than RDS (20% more) - but is more efficient

      High Availability and Read Scaling:
      • 6 copies of data across 3 AZs:
        • 4 copies out of 6 needed for writes
        • 3 copies out of 6 needed for reads
        • Self healing with peer-to-peer replication
        • Storage is striped across 100s of volumes
      • One Aurora Instance takes writes (master)
      • Automated failover for master in less than 30 seconds
      • Master + up to 15 Read Replicas serve reads
      • Support for Cross Region Replication
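      The 4-of-6 write and 3-of-6 read quorums above are chosen so that any read set overlaps any write set (4 + 3 > 6), which is what lets a read always observe the latest acknowledged write. A quick check of that arithmetic:

```python
def quorum_ok(copies, write_quorum, read_quorum):
    """True when reads always overlap the latest write, and two writes
    can never both succeed on disjoint copy sets."""
    return (read_quorum + write_quorum > copies      # reads see latest write
            and 2 * write_quorum > copies)           # no conflicting writes

assert quorum_ok(6, 4, 3)                    # Aurora's 4-of-6 / 3-of-6 setup
print("copy losses tolerated for writes:", 6 - 4)   # 2
print("copy losses tolerated for reads:", 6 - 3)    # 3
```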

      DB Cluster:
      • Writer Endpoint: Pointing to the master
      • Reader Endpoint: Connection Load Balancing

      Features:
      • Automatic fail-over
      • Backup and Recovery
      • Isolation and security
      • Industry compliance
      • Push-button scaling
      • Automated Patching with Zero Downtime
      • Advanced Monitoring
      • Routine Maintenance
      • Backtrack: restore data at any point of time without using backups

      Security:
      • Similar to RDS because it uses the same engines
      • Encryption at rest using KMS
      • Automated backups, snapshots and replicas are also encrypted
      • Encryption in flight using SSL (same process as MySQL or Postgres)
      • Possibility to authenticate using IAM token (same method as RDS)
      • You are responsible for protecting the instance with security groups
      • Can't SSH

      Custom Endpoints:
      • Define a subset of Aurora Instances as a Custom Endpoint
      • Example: Run analytical queries on specific replicas
      • The Reader Endpoint is generally not used after defining Custom Endpoints

      Serverless:
      • Automated database instantiation and auto-scaling based on actual usage
      • Good for infrequent, intermittent or unpredictable workloads
      • No capacity planning needed
      • Pay per second, can be more cost-effective

      Multi-Master:
      • For cases where immediate failover of the write node is wanted (HA)
      • Every node does R/W, vs. promoting a Read Replica as the new master

      Global:
      • Aurora Cross Region Read Replicas:
        • Useful for disaster recovery
        • Simple to put in place
      • Global Database (recommended):
        • 1 Primary Region (read / write)
        • Up to 5 secondary (read-only) regions, replication lag is less than 1 second
        • Up to 16 Read Replicas per secondary region
        • Helps for decreasing latency
        • Promoting another region (for disaster recovery) has an RTO of < 1 minute

      Machine Learning:
      • Enables adding ML-based predictions to applications via SQL
      • Simple, optimized, and secure integration between Aurora and AWS ML services
      • Supported services:
        • Amazon SageMaker (use with any ML model)
        • Amazon Comprehend (for sentiment analysis)
      • Don't need to have ML experience
      • Use cases: fraud detection, ads targeting, sentiment analysis, product recommendations

      IAM Policies Structure:
      • {
            "Version": "2012-10-17", // policy language version, always include '2012-10-17'
            "Id": "S3-Account-Permissions", // an identifier for the policy (optional)
            "Statement": [ // one or more individual statements (required)
                {
                    "Sid": "1", // an identifier for the statement (optional)
                    "Effect": "Allow", // whether the statement allows or denies access (Allow, Deny)
                    "Principal": { // account/user/role to which this policy applies
                        "AWS": ["arn:aws:iam::123456789012:root"]
                    },
                    "Action": [ // list of actions this policy allows or denies
                        "s3:GetObject",
                        "s3:PutObject"
                    ],
                    "Resource": ["arn:aws:s3:::mybucket/*"], // list of resources to which the actions apply
                    "Condition": { // for when this policy is in effect (optional)
                        "ForAnyValue:StringEquals": {
                            "aws:CalledVia": [
                                "cloudformation.amazonaws.com"
                            ]
                        }
                    }
                }
            ]
        }
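      With the // annotations stripped (JSON itself has no comments) and the ARN quoted correctly, the policy above parses as valid JSON. A quick sanity check in plain Python:

```python
import json

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Id": "S3-Account-Permissions",
  "Statement": [{
    "Sid": "1",
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam::123456789012:root"]},
    "Action": ["s3:GetObject", "s3:PutObject"],
    "Resource": ["arn:aws:s3:::mybucket/*"],
    "Condition": {
      "ForAnyValue:StringEquals": {
        "aws:CalledVia": ["cloudformation.amazonaws.com"]
      }
    }
  }]
}
""")

# Basic shape checks mirroring the annotations above.
assert policy["Version"] == "2012-10-17"
assert policy["Statement"][0]["Effect"] in ("Allow", "Deny")
print([s["Sid"] for s in policy["Statement"]])   # ['1']
```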

      1. A solution architect is performing a security review of a recently migrated workload. The workload is a web application that consists of Amazon EC2 instances in an Auto Scaling group behind an Application Load Balancer. The solution architect must improve the security posture and minimize the impact of a DDoS attack on resources. The MOST effective solution is to create a custom AWS Lambda function that adds identified attacks to a common vulnerability pool to capture a potential DDoS attack, and to use the identified information to modify a network ACL to block access.

      2. A solution architect needs to ensure that all Amazon Elastic Block Store (Amazon EBS) volumes restored from unencrypted EBS snapshots are encrypted. The solution architect should create a new volume and specify the symmetric customer master key (CMK) to use for encryption.
      B-)

    • Amazon ElastiCache Overview:
      • The same way RDS gives you managed relational databases...
      • ElastiCache gives you managed Redis or Memcached
      • Caches are in-memory databases with really high performance, low latency
      • Helps reduce load off of databases for read intensive workloads
      • Helps make application stateless
      • AWS takes care of OS maintenance / patching, optimizations, setup, configuration, monitoring, failure recovery and backups
      • Using ElastiCache involves heavy application code changes

      Solution Architecture - DB Cache:
      • The application queries ElastiCache first; on a cache miss, it reads from RDS and stores the result in ElastiCache.
      • Helps relieve load on RDS
      • The cache must have an invalidation strategy to make sure only the most current data is kept there.

      User Session Store:
      • A user logs into any instance of the application
      • The application writes the session data into ElastiCache
      • The user hits another instance of application
      • The instance retrieves the data and the user is already logged in

      Redis vs Memcached:
      • REDIS:
        • Multi AZ with Auto-Failover
        • Read Replicas to scale reads and have high availability
        • Data Durability using AOF persistence
        • Backup and restore features
      • MEMCACHED:
        • Multi-node for partitioning of data (sharding)
        • No high availability (replication)
        • Non persistent
        • No backup and restore
        • Multi-threaded architecture

      Cache Security:
      • All caches in ElastiCache:
        • Do not support IAM authentication
        • IAM policies on ElastiCache are only used for AWS API-level security
      • Redis AUTH:
        • Can set a 'password/token' when you create a Redis cluster
        • This is an extra level of security for the cache (on top of security groups)
        • Support SSL in flight encryption
      • Memcached:
        • Supports SASL-based authentication (advanced)

      Patterns:
      • Lazy Loading: all the read data is cached, data can become stale in cache
      • Write Through: add or update data in the cache when it is written to a DB (no stale data)
      • Session Store: store temporary session data in a cache (using TTL features)

      • There are only two hard things in Computer Science: cache invalidation and naming things
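      The Lazy Loading and Write Through patterns above can be sketched with plain dicts standing in for ElastiCache and the database (no real Redis client involved):

```python
cache, db = {}, {"user:1": "alice"}   # stand-ins for ElastiCache and RDS

def get_lazy(key):
    """Lazy Loading: populate the cache only on a miss; data may go stale."""
    if key not in cache:
        cache[key] = db[key]          # cache miss -> read the DB, then cache
    return cache[key]

def put_write_through(key, value):
    """Write Through: update DB and cache together, so reads never go stale."""
    db[key] = value
    cache[key] = value

print(get_lazy("user:1"))             # miss: fetched from db -> 'alice'
put_write_through("user:1", "bob")
print(get_lazy("user:1"))             # hit: served from the cache -> 'bob'
```

      Note how Lazy Loading alone would keep serving 'alice' after a direct DB update, which is exactly the invalidation problem the quote above jokes about.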

      Redis Use Case:
      • Gaming Leaderboards are computationally complex
      • Redis Sorted sets guarantee both uniqueness and element ordering
      • Each time a new element is added, it's ranked in real time and inserted in the correct order
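      The sorted-set behaviour described above (ZADD with unique members, rank queries) can be mimicked with a plain score map, which is enough to show the uniqueness and ordering guarantees without a Redis server:

```python
scores = {}   # member -> score; unique per member, like a Redis sorted set

def zadd(member, score):
    scores[member] = score            # re-adding a member replaces its score

def zrevrange(start, stop):
    """Members ordered by score, highest first (like Redis ZREVRANGE)."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[start:stop + 1]

zadd("alice", 120)
zadd("bob", 300)
zadd("carol", 200)
zadd("alice", 350)                    # alice's score is updated, not duplicated
print(zrevrange(0, 2))                # ['alice', 'bob', 'carol']
```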

      • An RDS database struggles to keep up with the demand of the users of a website. Millions of users mostly read news and don't post news very often. An ElastiCache cluster plus RDS Read Replicas is a well-adapted solution, as both indeed help with scaling reads.

      • With Read Replicas set up on an RDS database, users complain that upon updating their social media posts they do not see the update right away. This is because Read Replicas use asynchronous replication, so users will likely observe only eventual consistency.

      • The RDS Multi-AZ feature does not require changing the SQL connection string.
        Multi-AZ keeps the same connection string regardless of which database is up. Read Replicas, by contrast, must be referenced individually in the application, as each read replica has its own DNS name.

      • Enable Multi-AZ to ensure the Redis cluster will always be available (high availability).

      1. An application must generate a thumbnail and alert the user when an image is uploaded. Write a custom AWS Lambda function to generate the thumbnail and alert the user, using the image upload process as an event source to invoke the Lambda function. The solution architect should create Amazon Simple Notification Service (Amazon SNS) notification topics and subscriptions, using one subscription with the application to generate the thumbnail.

      2. A company has a three-tier, stateless web application. The company's web and application tiers run on Amazon EC2 instances in an Auto Scaling group with an Amazon Elastic Block Store (Amazon EBS) root volume, and the database tier runs on Amazon RDS for PostgreSQL. The company's recovery point objective (RPO) is 2 hours. To enable backups for this environment, a solutions architect should recommend retaining the latest Amazon Machine Images (AMIs) of the web and application tiers, configuring daily Amazon RDS snapshots, and using point-in-time recovery to meet the RPO.

      3. A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solution architect needs to implement a solution to ingest and store the alerts for future analysis.
        The company wants a highly available solution. However, the company needs to minimize costs and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days. The MOST operationally efficient solution is to create an Amazon Simple Queue Service (Amazon SQS) standard queue to ingest the alerts, with the message retention period set to 14 days, and to configure consumers to poll the SQS queue, check the age of each message, and analyze the message data as needed; if a message is 14 days old, the consumer should copy it to an Amazon S3 bucket and delete it from the SQS queue.

      4. A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest. The solution requiring the LEAST amount of change to the infrastructure is to configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

      5. A company is processing data on a daily basis. The results of the operations are stored in an Amazon S3 bucket, analyzed daily for one week, and must then remain immediately accessible for occasional analysis. Configuring a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days is the MOST cost-effective storage alternative to the current configuration.

      6. A company is creating a new application that will store a large amount of data. The data will be analyzed hourly and will be modified by several Amazon EC2 Linux instances that are deployed across multiple Availability Zones. The needed amount of storage space will continue to grow for the next 6 months. A solution architect should recommend storing the data in Amazon S3 Glacier and updating the S3 Glacier vault policy to allow access to the application instances.

      7. An application running on an Amazon EC2 instance needs to access an Amazon DynamoDB table. Both the EC2 instance and the DynamoDB table are in the same AWS account. A solution architect must configure the necessary permissions. Creating an IAM role with the appropriate policy to allow access to the DynamoDB table, and creating an instance profile to assign this IAM role to the EC2 instance, will allow least-privilege access to the DynamoDB table from the EC2 instance.

      8. A company plans to store sensitive user data on Amazon S3. An internal security compliance requirement mandates encryption of the data before sending it to Amazon S3. A solution architect should recommend server-side encryption with customer-provided encryption keys.
      B-)