Cloud Computing

PlAwAnSaI

Administrator
Work Hard. Have Fun. Make History.

Index:

  1. Learn the fundamentals of the AWS Cloud:
    • What is Cloud Computing?
    • What is AWS?
    • Types of Cloud Computing
    • Cloud Computing and AWS
    • Overview of AWS fundamentals
    • Key concepts of AWS Fundamentals

      [Image: The Pillars of the AWS Well-Architected Framework]
      1. Operational excellence: automate operations
        IaC can be used to provision services automatically, using the same tools and processes currently used for code
        Observability is used to collect, analyze, and act on metrics to continuously improve operations
      2. Security: Zero Trust
        IAM based on the principle of least privilege (grant only the level of access that is needed)
        AWS network security uses Defense in Depth, the idea of designing security in multiple layers: the more techniques and processes there are for detecting threats, the higher the chance of detecting them, which helps stop attackers before they infiltrate the system
        Data encryption is applied both to data sent between systems and to data within systems
      3. Reliability: blast radius (the radius of impact)
        Fault isolation zones to limit the blast radius
        Limits to avoid service disruption
      4. Performance efficiency: treat servers like cattle instead of pets
        Choose the appropriate services and configurations based on performance goals
        Services can be scaled in two ways: vertical and horizontal
      5. Cost optimization:
        The spending model emphasizes OpEx, with techniques such as right sizing, serverless technology, reservations, and Spot Instances
        Monitor and optimize the budget using services such as Cost Explorer, Tags, and Budgets (see the CLI sketch below)
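
      As a rough illustration of the tag-based cost monitoring mentioned above, the AWS CLI sketch below groups one month's spend by a hypothetical "project" cost allocation tag; the tag key, the dates, and a configured default CLI profile are assumptions, not part of the original notes:

        # Summarize unblended cost for January 2024, grouped by the "project" cost allocation tag
        aws ce get-cost-and-usage \
          --time-period Start=2024-01-01,End=2024-02-01 \
          --granularity MONTHLY \
          --metrics UnblendedCost \
          --group-by Type=TAG,Key=project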
    • AWS overview

      AWS provides basic building blocks that can be assembled quickly to support almost any kind of workload. With AWS you get a set of highly available services designed to work together, so you can build scalable, sophisticated applications.

      You get access to highly durable storage, low-cost compute, high-performance databases, management tools, and more, all with no upfront cost: you pay only for what you use. These services help organizations move faster, lower IT costs, and scale on demand. AWS is trusted by the largest enterprises and the hottest start-ups to power a wide variety of workloads, including web and mobile applications, game development, data processing and warehousing, storage, and many more.
    • AWS Global Infrastructure
    • AWS terminology
      • Amazon Web Services (AWS): Amazon Web Services, or AWS, is one of the cloud computing providers, from Amazon. AWS offers services in three main groups: IaaS, PaaS, and SaaS. Well-known AWS products include Amazon EC2, AWS Elastic Beanstalk, and Amazon S3.
      • Auto Scaling (AS): a service that can automatically adjust resources as defined, suited to situations where usage needs a very large amount of resources urgently at a particular time; it makes resource management much more efficient. (The service must be requested as a special case.)
      • Availability Zones: comparable to data centers that provide compute resources. If one Availability Zone has a problem, it does not cause other Availability Zones to have problems as well.
      • Cloud Service Provider (CSP): a company that provides cloud computing services, whether PaaS, IaaS, or SaaS.
      • Container: a technology that acts like a package which can hold software, programs, or applications so they can run on any server, reducing the steps needed to install programs or tools.
      • Content Delivery Network (CDN): a large network of servers connected around the world via the Internet, responsible for delivering data to end users as fast as possible, efficiently, and available for viewers to access at all times.
      • Elastic Block Store (EBS): high-performance block storage management, used to store data for throughput- and transaction-intensive workloads.
      • Elastic Container Service (ECS): a high-performance container management service that can scale to support Docker containers easily, allowing you to run and scale containerized applications as needed.
      • Elastic IP: a static IP address, for both private and public IPs, which users can choose to use for connecting to the Internet or sending data within the cloud, providing more flexibility.
      • Object Storage (S3): cloud storage, or an object store, built to store any kind of data that can then be analyzed and accessed from anywhere, whether it is a website, a mobile app, or any other data you need (see the CLI sketch after this list).
      • Resource: a factor or resource related to a computer system that is limited by processing, or related to solving the problem specified in the user's requirements.
      • Virtual Private Cloud (VPC): a system that allows users to create separate virtual networks for each system and manage them conveniently, which makes network design and the use of cloud resources more secure.
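
      As a minimal illustration of the Object Storage (S3) entry above, the AWS CLI sketch below creates a bucket and uploads an object; the bucket name my-example-bucket-12345 and the Region are placeholders and assume the AWS CLI is already configured with credentials:

        # Create a bucket (names are globally unique), upload a file, then list the bucket
        aws s3 mb s3://my-example-bucket-12345 --region ap-southeast-1
        aws s3 cp ./report.csv s3://my-example-bucket-12345/reports/report.csv
        aws s3 ls s3://my-example-bucket-12345/reports/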


  1. Learn the fundamentals of the AWS Cloud (continued):
    • Job roles in the cloud

      Classic IT Roles:
      • On-Premise Role: Architect
      • Role: System Administrator is responsible for installing, supporting, and maintaining computer systems.
      • Application, Database, Network Administrator
      • Security Administrator is responsible for defending against unauthorized access.

      Cloud Roles:

      Spheres of Responsibility in the AWS Cloud Environment


      Common Duties in the Cloud:
      • Design/validate/expand solution-independent architectures and requirements:
        > Cloud Enterprise Architect
        delivering cloud services for the business.
        • Collaborate to Obtain Business Requirements
          'What are business use cases?'
          'We want to build an entertainment site that can scale and has PCI compliance.'
        • Design Solution-Independent Architectures
        • Present Different Models to Business
        • Validate, Refine, and Expand Architectures
        • Manage, Monitor, and Update Architectures as Necessary
      • > Program Manager: responsible for ensuring that the cloud is managed appropriately.
        • Manage operational teams
        • Manage and Monitor Cloud Metrics - What's the user experience like?
        • Manage Service Reports
      • > Financial Manager: managing financial controls for the cloud.
        • Perform Cost Coding
        • Distribute Cost to Sales, Marketing, Engineering
        • Know Cost Usage
        • Optimize Cloud Costs
      • Design/validate/expand solution-dependent architectures and requirements:

        Cloud Infrastructure Role > Cloud Infrastructure Architect - designing solution-dependent cloud infrastructure architectures.
        • Develop and Maintain Plans
        • Collaborate with Enterprise Architect, Mobile, IoT, Gaming Specialist
        Application Role > Cloud Application Architect - designing cloud-optimized applications.
        • Collaborate with Enterprise, Infrastructure Architect
        • Perform Capacity and Scalability Requirements
        • Provide Deep Software Knowledge to Developer
        • Advise on AWS Best Practices to Developer
          'The software architecture should be implemented this way'
      • Build the infrastructure/application:

        Infrastructure > Cloud Operations Engineer - building, monitoring, and managing the cloud infrastructure and shared services.
        • Collaborate with Cloud Infrastructure Architect
        • Ensure That Service Requirements Are Met
        • Management: OS, Patch and Update Management, Manage Templates, Capacity, Virtual Networks, Application Resiliency, Document Changes (V1, V2, V3), Tag and Review Cloud Infrastructure
        • Support: Provide Operations Support for Cloud Services, Perform Performance Tuning, Root Cause Analysis, Respond and Escalate Incidents, Documentation Review/Modification, Backup and Recovery Support, Monitor and Report on Compliance Programs (PCI, ISO27001)
        Application > Application Developer - application development
        • Manage Application Changes, Code Release ('It's OK to release v3'), Code Deployment, Application Documentation
        • Provide Application Support, Training
        • Develop Application Optimization Techniques
      • Specifying security requirements:
        > Cloud Security Architect
        • Collaborate with Enterprise Architect, Security Operations Engineer
        • Design and Maintain Security Configuration Checklists, Risk Assessment Plans, Corporate Security Policies and Procedures, Incident Response Plans
      • Managing, monitoring, and enforcing security:
        > Security Operations Engineer
        • Implement Corporate Security Policies and Procedures
        • Manage and Enforce Compliance
        • Manage Security Configuration, Identity and Access Management and Integration with Federated Identity Sources
        • Configure Security Groups
        • Perform Vulnerability Testing and Risk Analysis
        • Create Security Assessments and Audit Reports
      • > DevOps Engineer:
        Building and managing/operating fast and scalable workflows.
        Focuses primarily on deploying and configuring daily builds and troubleshooting failed builds.
        • Collaborate with Developer
        • Design and build Automation Solutions
        • Implement Continuous Build, Integration, Deployment, and Infrastructure as Code (Initiate CI Process > Test > Report > Commit >)
        • Review and Recommend Operational Improvements
        • Perform Application Testing and Recovery
        • Develop and Maintain Change Management Processes
      Infrastructure as Code (IaC):
      • Manually Managing Environment: AWS Management Console, APIs, CLI
      • Managing Environment Using Infrastructure as Code - Provides a reusable, maintainable, extensible, and testable infrastructure
        • Deploy Dev, Test, Prod Environment
        • Update Prod Environment
      Why Use Infrastructure as Code?
      A practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration and delivery.
      • Codify Designs
      • Rapidly Iterate on Designs
      • Easy to Maintain
      • Easily add Company Security Best Practices
      Using the DevOps Model to Develop Applications



      AWS CloudFormation tool uses templates and can be used to deploy infrastructure as code.
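
      A minimal sketch of that idea, assuming the AWS CLI is configured; the one-resource template (a single S3 bucket) and the stack name are illustrative placeholders:

        # Write a one-resource CloudFormation template, then deploy it as a stack
        printf '%s\n' \
          "AWSTemplateFormatVersion: '2010-09-09'" \
          'Resources:' \
          '  DemoBucket:' \
          '    Type: AWS::S3::Bucket' > template.yaml
        aws cloudformation deploy --template-file template.yaml --stack-name demo-iac-stack
        # Tear down the stack (and the bucket it created) when finished
        aws cloudformation delete-stack --stack-name demo-iac-stack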



      There are many duties in the cloud. Some duties might not be linked to a specific role. Depending on the business or organization, certain duties might be performed by a role. Duties might also be performed by multiple roles.

      Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration and delivery.

      Must decide where to draw the red line between dev and ops.

      https://content.aws.training/wbt/jobrol/en/x1/1.1.0/story_content/external_files/Competencies_for_Cloud_Roles.pdf

  1. Learn the fundamentals of the AWS Cloud
  2. Dive deeper into AWS Cloud fundamentals, including AWS pricing and support and the core AWS services
    • AWS Cloud Practitioner Essentials

    • Cloud computing is On-demand delivery of IT resources and applications through the Internet with pay-as-you-go pricing.
    • The AWS Cloud offers three cloud deployment models: cloud, hybrid, and on-premises.
      • Cloud-based applications are fully deployed in the cloud and do not have any parts that run on premises.
      • A hybrid deployment connects infrastructure and applications between cloud-based resources and existing resources that are not in the cloud, such as on-premises resources. However, a hybrid deployment is not equivalent to an on-premises deployment because it involves resources that are located in the cloud.
        Deploying applications connected to on-premises infrastructure is a sample use case for a hybrid cloud deployment. Cloud computing also has cloud and on-premises (or private cloud) deployment models.
    • AWS Lambda is an AWS service that lets you run code without needing to manage or provision servers.
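
      A rough AWS CLI sketch of that idea follows; the function name, the zip file containing a Python handler, and the pre-existing execution role ARN are all placeholders, not values from these notes:

        # Package a Python handler, create the function, then invoke it
        zip function.zip lambda_function.py
        aws lambda create-function --function-name hello-demo \
          --runtime python3.12 --handler lambda_function.lambda_handler \
          --zip-file fileb://function.zip \
          --role arn:aws:iam::123456789012:role/lambda-basic-execution
        aws lambda invoke --function-name hello-demo response.json && cat response.json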
    • Benefits of cloud computing:
      • Trade upfront expense for variable expense: Not having to invest in technology resources before using them.
      • Stop guessing capacity: Accessing services on-demand to prevent excess or limited capacity.
      • Benefit from massive economies of scale: The scale of cloud computing helps save costs because aggregated usage from a large number of customers results in lower pay-as-you-go prices.
      • Go global in minutes: Quickly deploying applications to customers and providing them with low latency.
    • Amazon Elastic Compute Cloud (EC2) Instances Pricing / Billing / Purchasing Options:
      • On-Demand Instances: short workload, predictable pricing
      • Reserved options require a commitment of a MINIMUM 1-year or 3-year term, with a larger discount for longer terms.
        • Reserved Instances: long workloads
        • Convertible: long workloads with flexible instances
        • Scheduled: example - every Friday between 4 and 7 pm
      • Savings Plans are ideal for workloads that involve a consistent amount of compute usage over a 1-year or 3-year term.
        Can reduce compute costs by up to 72% over On-Demand costs.
      • Spot Instances are ideal for short workloads with flexible start and end times (like for a total of 6 months), or that can withstand interruptions / lose instances (less reliable).
        Can reduce compute costs by up to 90% over On-Demand costs (cheap).
        Do not require contracts or a commitment to a consistent amount of compute usage.
      • Dedicated Hosts run in a virtual private cloud (VPC) on hardware that is dedicated to a single customer (you book an entire physical server), with control over instance placement. They have the highest cost of these options; the others run on shared hardware.
    • Amazon EC2 Auto Scaling: Automated horizontal scaling that enables you to automatically add or remove Amazon EC2 instances in response to changing application demand.
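
      A minimal AWS CLI sketch of automated horizontal scaling, assuming a launch template named web-template and two subnet IDs already exist (all names and IDs here are placeholders):

        # Keep between 1 and 4 instances, starting with 2, spread across two subnets/AZs
        aws autoscaling create-auto-scaling-group \
          --auto-scaling-group-name web-asg \
          --launch-template LaunchTemplateName=web-template \
          --min-size 1 --max-size 4 --desired-capacity 2 \
          --vpc-zone-identifier "subnet-0aaa1111,subnet-0bbb2222"
        # Target tracking: add or remove instances to hold average CPU near 50%
        aws autoscaling put-scaling-policy \
          --auto-scaling-group-name web-asg --policy-name cpu-50 \
          --policy-type TargetTrackingScaling \
          --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":50.0}'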
    • Elastic Load Balancing (ELB) is the AWS service that automatically distributes incoming application traffic across multiple resources, such as Amazon EC2 instances. Helps to ensure that no single resource becomes over-utilized / has to carry the full workload on its own.
    • Amazon Simple Notification Service (Amazon SNS) is a publish/subscribe service. Using Amazon SNS topics, a publisher publishes messages to subscribers.
    • Amazon Simple Queue Service (Amazon SQS) is a message queuing service. It enables you to send, store, and receive messages between software components through a queue. It does not use the message subscription and topic model that is involved with Amazon SNS.
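
      To make the pub/sub vs. queue distinction concrete, here is a small AWS CLI sketch; the topic name, queue name, ARN, URL, and e-mail address are placeholders:

        # SNS: a publisher pushes to a topic, every subscriber gets the message
        aws sns create-topic --name order-events
        aws sns subscribe --topic-arn arn:aws:sns:ap-southeast-1:123456789012:order-events \
          --protocol email --notification-endpoint ops@example.com
        aws sns publish --topic-arn arn:aws:sns:ap-southeast-1:123456789012:order-events \
          --message "Order 1001 created"
        # SQS: producers send to a queue, one consumer receives (and later deletes) each message
        aws sqs create-queue --queue-name order-queue
        aws sqs send-message --queue-url https://sqs.ap-southeast-1.amazonaws.com/123456789012/order-queue \
          --message-body "Order 1001 created"
        aws sqs receive-message --queue-url https://sqs.ap-southeast-1.amazonaws.com/123456789012/order-queue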
    • Amazon Elastic Kubernetes Service (Amazon EKS) is fully managed Kubernetes service. Kubernetes is open-source software that enables to deploy and manage containerized applications at scale.
    • AWS Fargate is a server-less compute engine for containers.
    • AWS Global Infrastructure:
      • A Region is a separate geographical area that contains AWS resources in multiple locations that are isolated from each other. A Region consists of two or more Availability Zones. For example, the South America (São Paulo) Region is sa-east-1. It includes three Availability Zones: sa-east-1a, sa-east-1b, and sa-east-1c.

        Selecting a Region:
        • Compliance with data governance and legal requirements
        • Proximity to customers
        • Available services within a Region
        • Pricing
      • An Availability Zone (AZ) is a single data center or group of data centers within a Region: a fully isolated portion of the AWS global infrastructure.
      • Deploy infrastructure across at least 2 Availability Zones
      • An edge location is a data center that an AWS service uses to perform service-specific operations.
      • Amazon CloudFront is a content delivery service. It uses a network of edge locations to store cached copies of content and deliver content faster to customers all over the world. When content is cached, it is stored locally as a copy. This content might be video files, photos, webpages, and so on.

        An origin is the server from which CloudFront gets files. Examples of CloudFront origins include Amazon Simple Storage Service (Amazon S3) buckets and web servers.
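
        A minimal sketch of putting CloudFront in front of an S3 origin, using the CLI shorthand; the bucket name is a placeholder and the website content is assumed to already be in the bucket:

          # Create a distribution whose origin is an existing S3 bucket
          aws cloudfront create-distribution \
            --origin-domain-name my-example-bucket-12345.s3.amazonaws.com \
            --default-root-object index.html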
      • AWS Outposts is a service that can use to run/extend AWS infrastructure, services, and tools in own on-premises data center in a hybrid cloud approach.
    • Provisioning AWS resources:
      • The AWS Management Console includes wizards and workflows that can use to complete tasks in AWS services.
      • Software development kits (SDKs) enable to develop AWS applications in supported programming languages.
      • The AWS Command Line Interface (AWS CLI) is used to automate actions for AWS services and applications through scripts.
      • AWS Elastic Beanstalk
      • AWS CloudFormation
    • Amazon Virtual Private Cloud (Amazon VPC) is a service that enables you to provision an isolated section of the AWS Cloud. In this isolated section, you can launch resources in a virtual network that you define.
    • Internet gateway is used to connect a VPC to the internet.
    • A Virtual private gateway enables you to create a VPN connection between the VPC and an internal/private corporate network, such as a company's data center. Although the connection travels over the public internet, the traffic is private and encrypted.
    • AWS Direct Connect can be used to establish a private dedicated connection between the company's on-premises data center and the AWS VPC.
    • Public subnets contain resources that need to be accessible by the public, such as an online store's website: the section of a VPC that supports/contains the customer/public-facing website/resources.
    • Private subnets contain resources that should be accessible only through the private network, such as isolated databases that contain customers' personal information and order histories.
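
      A rough AWS CLI sketch of that public/private layout (one VPC, one public and one private subnet, and an internet gateway); the CIDR blocks are examples, and the IDs returned by the earlier commands would be substituted into the later ones:

        aws ec2 create-vpc --cidr-block 10.0.0.0/16                       # returns vpc-xxxx
        aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.1.0/24  # public subnet
        aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.2.0/24  # private subnet
        aws ec2 create-internet-gateway                                   # returns igw-xxxx
        aws ec2 attach-internet-gateway --vpc-id vpc-xxxx --internet-gateway-id igw-xxxx
        # A route to 0.0.0.0/0 through the internet gateway is what makes a subnet "public"
        aws ec2 create-route --route-table-id rtb-xxxx --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx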

  • Company has an application that uses Amazon EC2 instances to run the customer-facing website and Amazon RDS database instances to store customers' personal information. The developer should configure the VPC by Place the Amazon EC2 instances in a public subnet and the Amazon RDS database instances in a private subnet.
  • Network access control lists (ACLs) perform stateless packet filtering. By default, account's default network ACL allows all inbound and outbound traffic, but can modify it by adding own rules.
  • Security groups are stateful. By default, security groups deny all inbound traffic, but can add custom rules to fit operational and security needs. A virtual firewall that controls inbound and outbound traffic for an Amazon EC2 instance.
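
    A small AWS CLI sketch of the stateful security-group model described above; the VPC ID, group ID, and the single HTTPS rule are placeholders/assumptions:

      # Create a security group and allow inbound HTTPS from anywhere; return traffic is allowed automatically
      aws ec2 create-security-group --group-name web-sg --description "Allow HTTPS" --vpc-id vpc-0aaa1111
      aws ec2 authorize-security-group-ingress --group-id sg-0ccc3333 --protocol tcp --port 443 --cidr 0.0.0.0/0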
  • Domain Name System (DNS) resolution is translating (a directory used for matching) domain names to IP addresses.
  • Amazon Route 53 is used/the ability to manage the DNS records for domain names.
  • Instance stores are best for temporary data that is not kept long term. When an EC2 instance is stopped or terminated, the data is deleted.
  • Amazon EBS volumes are best for data that requires retention. When an EC2 instance is stopped or terminated, the data remains available.
  • Amazon Simple Storage Service (Amazon S3):
    • S3 Standard is a storage class that is ideal for frequently accessed data.
    • The S3 Standard-Infrequent Access (S3 Standard-IA) storage class is ideal for data that is infrequently accessed but requires high availability/must be immediately available when needed.
    • In the S3 Intelligent-Tiering storage class, Amazon S3 monitors objects' access patterns and automatically moves objects between a frequent access tier and a lower-cost infrequent access tier.
    • S3 Glacier and S3 Glacier Deep Archive are low-cost storage classes that are ideal for data archiving. Retrieval takes minutes to a few hours, and within 12 hours, respectively.
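
    To show how a storage class is chosen in practice, here is a minimal AWS CLI sketch (bucket and key names are placeholders); the second command is one hedged way to automate the move to S3 Glacier with a lifecycle rule:

      # Upload an object directly into S3 Standard-IA
      aws s3 cp ./backup.zip s3://my-example-bucket-12345/backups/backup.zip --storage-class STANDARD_IA
      # Transition objects under logs/ to S3 Glacier after 90 days
      aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket-12345 \
        --lifecycle-configuration '{"Rules":[{"ID":"archive-logs","Status":"Enabled","Filter":{"Prefix":"logs/"},"Transitions":[{"Days":90,"StorageClass":"GLACIER"}]}]}'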
  • Comparing Amazon EBS and Amazon EFS:
    • An Amazon Elastic Block Store (Amazon EBS) volume is a service that provides block-level storage volumes that you can use with Amazon EC2 instances.
      Stores data within a single Availability Zone.
      To attach to an Amazon EC2 instance, both the Amazon EC2 instance and the EBS volume must reside/be located within the same Availability Zone.
    • Amazon Elastic File System (Amazon EFS) is a scalable file system used with AWS Cloud services and on-premises resources.
      Store data in and across multiple Availability Zones. It is a regional service.
      The duplicate storage enables to access data concurrently from all the Availability Zones in the Region where a file system is located. Additionally, on-premises servers can access Amazon EFS using AWS Direct Connect.
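
    A short AWS CLI sketch of the single-AZ constraint for EBS; the AZ, volume ID, and instance ID are placeholders, and the volume and instance must be in the same Availability Zone:

      # Create a 20 GiB gp3 volume in one AZ, then attach it to an instance in that same AZ
      aws ec2 create-volume --availability-zone ap-southeast-1a --size 20 --volume-type gp3
      aws ec2 attach-volume --volume-id vol-0aaa1111 --instance-id i-0bbb2222 --device /dev/sdf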
  • Amazon Relational Database Service (Amazon RDS) is a service that enables you to run relational databases in the AWS Cloud. Scenarios in which it should be used:
    • Using SQL to organize data
    • Storing data in an Amazon Aurora database
  • Amazon Aurora is an enterprise-class relational database.
  • Amazon DynamoDB is a serverless key-value database service. Scenarios in which it should be used:
    • Running a serverless database
    • Storing data in a key-value database
    • Scaling up to 10 trillion requests per day
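
    A minimal AWS CLI sketch of the key-value model; the table name, key name, and on-demand billing mode are illustrative choices, not from the notes:

      # Serverless table keyed on OrderId, billed per request
      aws dynamodb create-table --table-name Orders \
        --attribute-definitions AttributeName=OrderId,AttributeType=S \
        --key-schema AttributeName=OrderId,KeyType=HASH \
        --billing-mode PAY_PER_REQUEST
      aws dynamodb put-item --table-name Orders --item '{"OrderId":{"S":"1001"},"Status":{"S":"NEW"}}'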
  • Amazon Redshift is a data warehousing service that is used to query and analyze data for big data analytics.
  • AWS Database Migration Service (AWS DMS) is a service that you can use to migrate relational databases, non-relational databases, and other types of data stores.
  • Amazon DocumentDB is a document database service that supports MongoDB workloads.
  • Amazon Neptune is a graph database service.
  • Amazon Managed Blockchain is a service that can use to create and manage blockchain networks with open-source frameworks.
  • Amazon ElastiCache is a service that adds caching layers on top of databases to help improve the read times of common requests.
  • Security responsibilities tasks example of customers:
    • Patching software on Amazon EC2 instances
    • Setting permissions for Amazon S3 objects
  • Security responsibilities tasks example of AWS:
    • Maintaining network infrastructure and servers that run Amazon EC2 instances
    • Implementing physical security controls at data centers
  • AWS Identity and Access Management (IAM) is used to create users, which enable people and applications to interact with AWS services and resources. You can assign permissions to users and groups.
  • The AWS account root user is the identity that is established when you first create an AWS account. It can be updated in the AWS Management Console.
  • An IAM policy is a document that grants or denies permissions to AWS services and resources. Can attach to an IAM group. Can apply to IAM users, groups, or roles.
  • When grant permissions by following the principle of least privilege, prevent users or roles from having more permissions than needed to perform specific job tasks.
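
    A hedged sketch of least privilege in practice: the inline policy below grants a hypothetical user read-only access to a single bucket and nothing else (the user, policy, and bucket names are placeholders):

      # Allow only listing and reading objects in one specific bucket
      aws iam put-user-policy --user-name analyst --policy-name s3-read-reports \
        --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:GetObject","s3:ListBucket"],"Resource":["arn:aws:s3:::example-reports","arn:aws:s3:::example-reports/*"]}]}'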
  • An IAM role is an identity that can assume to gain temporary access to permissions.
  • Multi-factor authentication (MFA) is an authentication process that provides an extra layer of protection for AWS account. Can configure in AWS IAM.
  • Service Control Policies (SCPs) enable to centrally control permissions for the accounts in organization.
  • In AWS Organizations, can apply/set permissions for the organization root, an individual member account, or an Organizational Unit (OU) by configuring SCPs.
    Can Consolidate and manage multiple AWS accounts within a central location.
  • AWS Artifact is a service that provides on-demand access to AWS security and compliance reports, and lets you review, accept, and manage select online agreements.
  • As network traffic comes into applications, AWS Shield uses a variety of analysis techniques to detect potential Distributed Denial-of-Service (DDoS) attacks in real time and automatically mitigates them.

  • AWS Key Management Service (AWS KMS) enables to perform encryption operations through the use of cryptographic keys.
  • Amazon Inspector checks applications for security vulnerabilities and deviations from security best practices.
  • Amazon GuardDuty is a service that provides intelligent threat detection for AWS infrastructure and resources.
  • Amazon CloudWatch is a web service that enables to:
    • Monitor AWS infrastructure and resources in real time
    • Monitor and manage/View/Access various metrics and graphs to monitor the performance and utilization of resources that run applications from a single dashboard.
    • Configure automatic actions and alerts in response to metrics
  • AWS CloudTrail is a web service that enables to:
    • Track/review details for user activities and API requests/calls that have occurred throughout/within AWS infrastructure/environment.
    • Filter logs to assist with operational analysis and troubleshooting
    • Automatically detecting unusual account activity
  • AWS Trusted Advisor is a web service that inspects the AWS environment, provides real-time recommendations, and compares the infrastructure to AWS best practices in five categories: cost optimization, performance, security, fault tolerance, and service limits. The inspection includes security checks, such as Amazon S3 buckets with open access permissions.
    Only the Business and Enterprise Support plans include the full set of checks; of the two, the Business Support plan has the lower cost.
  • The AWS Free Tier is a program that consists of three types of offers that allow customers to use AWS services without incurring costs: Always Free, 12 Months Free (offers available to new AWS customers for 12 months following the AWS sign-up date), and Trials.
  • AWS Pricing Calculator enables to Create an estimate for the cost of use cases on AWS.
  • From the Billing dashboard in the AWS Management Console, can view details on AWS bill, such as service costs by Region, month to date spend, and more.
  • Consolidated billing can Combine usage across accounts to receive volume pricing discounts.
  • AWS Budgets enables to create budgets to plan service usage, service costs, and instance reservations. Can review how much predicted AWS usage will incur in costs by the end of the month. Can set custom alerts that will notify when service usage exceeds (or is forecasted to exceed) the amount that has been budgeted.
  • AWS Cost Explorer is a tool that enables to Visualize, understand, and manage AWS costs and usage over time.
  • AWS Support is a resource that can answer questions about best practices, assist with troubleshooting issues, help to identify ways to optimize use of AWS services, and so on.
    • A Technical Account Manager (TAM) is available only to AWS customers with an Enterprise Support plan.
  • AWS Marketplace is used to find third-party software that runs on AWS.
  • AWS Cloud Adoption Framework (AWS CAF):
    • The Business Perspective helps to move from a model that separates business and IT strategies into a business model that integrates IT strategy.
    • The People Perspective helps Human Resources (HR) employees prepare their teams for cloud adoption by updating organizational processes and staff skills to include cloud-based competencies.
    • The Governance Perspective helps to identify and implement best practices for IT governance and support business processes with technology.
    • The Platform Perspective helps design, implement, and optimize AWS infrastructure based on business goals and perspectives.
    • The Security Perspective helps structure the selection and implementation of permissions.
    • The Operations Perspective focuses on operating and recovering IT workloads to meet the requirements of business stakeholders.
  • Migration strategies:
    • Rehosting
    • Re-platforming involves selectively optimizing aspects of an application to achieve benefits in the cloud without changing the core architecture of the application.
    • Refactoring involves changing how an application is architected and developed, typically by using cloud-native features.
    • Repurchasing involves moving to a different product.
    • Retaining
    • Retiring involves removing an application that is no longer used or that can be turned off.
  • Snowball Edge Storage Optimized is a device that enables to transfer large amounts of data into and out of AWS. It provides 80 TB of usable HDD storage.
  • AWS Snowmobile is a service that is used for transferring up to 100 PB of data to AWS.
  • Amazon Fraud Detector is a service that enables to identify potentially fraudulent online activities.
  • Amazon Lex is a service that enables to build conversational interfaces using voice and text.
  • Amazon SageMaker is a service that enables to quickly build, train, and deploy machine learning models.
  • Amazon Textract is a machine learning service that automatically extracts text and data from scanned documents.
  • AWS DeepRacer is an autonomous 1/18 scale race car that can use to test reinforcement learning models.
  • The AWS Well-Architected Framework:
    • The Operational excellence pillar includes the ability to run workloads effectively, gain insights into their operations, and continuously improve supporting processes to deliver business value.
    • The Security pillar includes protecting data, systems, and assets, and using cloud technologies to improve the security of workloads.
    • The Reliability pillar focuses on the ability of a workload to consistently and correctly perform its intended functions.
    • The Performance Efficiency pillar focuses on using computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
    • The Cost Optimization pillar focuses on the ability to run systems to deliver business value at the lowest price point.

  • Advantages of cloud computing:
    • Trade upfront expense for variable expense: Paying for compute time as use it instead of investing upfront costs in data centers.
    • Benefit from massive economies of scale: Receiving lower pay-as-you-go prices as the result of AWS customers' aggregated usage of services.
    • Stop guessing capacity: Scaling infrastructure capacity in and out to meet demand.
    • Increase speed and agility
    • Stop spending money running and maintaining data centers
    • Go global in minutes: Deploying an application in multiple Regions around the world.
  • Network Engineer vs. Cloud Engineer: ~40% more value add


    study.com/articles/cloud_engineer_vs_network_engineer.html
  • How to become a Cloud Network Engineer - Career FAQ's:
    www.youtube.com/watch?v=znmFD6W3a5w
  • AWS 101: Getting to know AWS for newbies:
    medium.com/@beagleview/aws-101-มารู้จัก-aws-กันแบบ-newbie-ตอนที่-1-ceb3a9173b48

  • So how do you choose which services to use? - Here is an example design for a web application:
    medium.com/@aglcsupachaipluamjitta/amazon-aws-stack-th-1f763d590309
  • Let's build a Web API with AWS Lambda + API Gateway:
    medium.com/@beagleview/มาลองทำ-webapi-ด้วย-aws-lambda-api-gateway-กัน-path-1-799358559fb8

    medium.com/@beagleview/มาลองทำ-webapi-ด้วย-aws-lambda-api-gateway-กัน-ตอนที่-2-ab708f816a96
  • Get to know Amazon S3: what is it, and why store data in a bucket?:
    www.blognone.com/node/101588
  • Let's get started with AWS EC2:
    medium.com/@aglcsupachaipluamjitta/เริ่มต้นใช้งาน-aws-ec2-กันเถอะ-f258fa31fbd0
  • Cool AWS tutorials: EC2, S3, VPC
    www.youtube.com/playlist?list=PLt-twymrmZ2d25VMRQ_6_tcocYK4DEvUJ
  • Which AWS certificates are there, and what should you know before the exam?:
    blog.cloudhm.co.th/aws-certificate
  • noomnatt.medium.com/เส้นทางสู่-aws-certified-solutions-architect-2019-7c54fe819c3f
  • Techniques for the AWS Solutions Architect exam:
    www.howtoautomate.in.th/tutorial-aws-solution-architecture
  • AWS in Thai:
    www.youtube.com/playlist?list=PLcUq8DDsIcwV36KUZfFzT_rtvXuEwVM_8
  • www.coursera.org/specializations/aws-fundamentals
  • AWS Networking basics:
    nopnithi.medium.com/7d10673923d7
Lab:
  1. Introduction to AWS Identity and Access Management (IAM):
    • Explored pre-created IAM users and groups
    • Inspected IAM policies as applied to the pre-created groups
    • Followed a real-world scenario, adding users to groups with specific capabilities enabled
    • Located and used the IAM sign-in URL
    • Experimented with the effects of policies on service access
  2. Introduction to Amazon EC2:
  3. Introduction to Amazon Virtual Private Cloud (VPC):
  4. Introduction to Amazon Simple Storage Service (S3):
AWS Re:Invent 2020 Recap:

  1. New EC2 types:
    1. M5zn
    2. C6gn
    3. R5b
    4. G4ad
    5. D3/D3en
  2. AWS EKS Anywhere provides customers full control of both the Control Plane and the Data Plane layers.
  3. Amazon Managed Service for Grafana provides fully managed service for data visualizations across multiple data sources.
  4. AWS DevOps Guru allows customers to:
    • Leverage ML-powered insights into application and operation
    • Remediate operational issues faster with less manual effort
    • Provide accurate operational insights for critical issues that impact applications
  5. Fully managed and Serverless services are the management types of AWS database services.
  6. Industrial solutions will see in Singapore Region soon:
    • AWS IoT core for LoRaWAN
    • AWS Panorama
    • AWS Monitron
    • AWS Lookout Suite
  7. The advantages and features that the customer get from Amazon Connect:
    • 100% cloud based contact center with pay per use
    • Deliver omnichannel experiences that are natural, dynamic, and personalized, with AI capability
    • Agents can be located virtually anywhere
Exam readiness:
  1. AWS Customer Service is the AWS billing support resource that is available at all support levels.
  2. A user can achieve high availability for a web application hosted on AWS by using an Application Load Balancer across multiple Availability Zones in one AWS Region.
  3. A user needs to quickly deploy a non-relational database on AWS. The user does not want to manage the underlying hardware or the database software. Amazon DynamoDB AWS service can be used to accomplish this.
  4. An application is receiving SQL injection attacks from multiple external resources. AWS WAF service can help automate mitigation against these attacks.
  5. AWS Global Accelerator service helps to improve application performance by reducing latency while accessing content globally.
  6. A company is building a new archiving system on AWS that will store terabytes of data. The company will NOT retrieve the data often. The S3 Glacier storage class will minimize the cost of the system.
  7. AWS Direct Connect service allows a user to establish a dedicated network connection between a company's on-premises data center and the AWS Cloud.
  8. Loose coupling AWS cloud architecture principle states that systems should reduce interdependence.
  9. Amazon CloudFront content is cached at edge locations.
  10. The features that AWS Organizations provides are implementing consolidated billing and enforcing the governance of AWS accounts.
  11. A company needs 24x7 phone, email, and chat access, with a response time of less than 1 hour if a production system has a service interruption. The Business AWS Support plan meets these requirements at the LOWEST cost.
  12. A company with AWS Enterprise Support needs help understanding its monthly AWS bill and wants to implement billing best practices. The AWS Concierge Support team is available to accomplish these goals.
  13. A company is considering a migration from on premises to the AWS Cloud. The company's IT team needs to offload support of the workload. The IT team should use AWS Managed Services to provision, run, and support the company's infrastructure to accomplish this goal.
  14. A security officer wants a list of any potential vulnerabilities in Amazon EC2 security groups. The officer should use Amazon Inspector.
  15. Management at a large company wants to avoid long-term contracts and is interested in AWS to move from fixed costs to variable costs. Pay-as-you-go pricing is the value proposition of AWS for this company.
  16. Consolidated billing within AWS Organizations can help lower overall monthly expenses by aggregating usage across accounts to qualify for volume pricing discounts.
  17. Backups managed by AWS and support for multiple relational database engines are benefits of running a database on Amazon RDS compared to an on-premises database.
  18. Closing an AWS account is a task that requires the use of the AWS account root user credentials.
  19. Amazon Athena service provides the ability to quickly run one-time queries on data in Amazon S3.
  20. A company would like to host its MySQL databases on AWS and maintain full control over the operating system, database installation, and configuration. Amazon EC2 service should the company use to host the databases.
2X
  1. A user has an AWS account with a Business-level AWS Support plan and needs assistance with handling a production service disruption. Action the user should take is Open a business-critical system down support case.
  2. A company wants to use Amazon EC2 to deploy a global commercial application. The deployment solution should be built with the highest redundancy and fault tolerance. Based on this situation the Amazon EC2 instances should be deployed across multiple Availability Zones in two AWS Regions.
  3. A company is looking for a way to encrypt data stored on Amazon S3. AWS Key Management Service (AWS KMS) managed service can be used to help to accomplish this.
  4. Elasticity architecture concept describes the ability to deploy resources on demand and release resources when they are no longer needed.
  5. Service control policies (SCPs) manage permissions for the accounts in an organization; they do not apply to Availability Zones.
  6. When a user wants to utilize their existing per-socket per-core, or per-virtual machine software licenses for a Microsoft Windows server running on AWS, Dedicated Hosts Amazon EC2 instance type is required.
  8. A user can receive help with deploying popular technologies based on AWS best practices, including architecture and deployment instructions in AWS Quick Starts.
  9. AWS CloudFormation can be used to describe infrastructure as code in the AWS Clouds.
  10. When comparing AWS to on-premises Total Cost of Ownership (TCO), data center security costs are included with AWS.

    https://forms.gle/h94VfhsFtMtjrJxf7
  11. AWS CloudTrail service enables risk auditing of an AWS account by tracking and recording user actions and source IP addresses.
  12. Identity and access management is a duty that is a responsibility of AWS under the AWS shared responsibility model.
  13. A company has performance and regulatory requirements that call for it to run its workload only in its on-premises data center. AWS Outposts and Snowball Edge services should the company use.
  14. A company wants to build a new architecture with AWS services. The company needs to compare service costs at various scales. AWS Pricing Calculator service should the company use to meet the requirement.
  15. AWS Snowball service facilitates transporting 50 GB of data from an on-premises data center to an Amazon S3 bucket without using a network connection.
  16. A company needs to improve the response rate of high-volume queries to its relational database. The company should use Amazon ElastiCache to offload requests to the database and improve overall response times.
  17. Amazon Simple Notification Service (Amazon SNS) uses a combination of publishers and subscribers.
  18. Amazon EC2 Image Builder service simplifies the creation, maintenance, validation, sharing and deployment of Linux or Windows Server templates for use with Amazon EC2 and on premises VMs.
  19. According to the AWS shared responsibility model, Updating the guest operating system on Amazon EC2 instances task is the customer's responsibility.
  20. VPC endpoint is the AWS service feature that natively provides an encrypted connection that can be used to move data from on-premises infrastructure to the AWS Cloud.
Exam Readiness - AWS Get Certified - Cloud Practitioner:

The Exam: Mechanics:
  • Questions are multiple choice, with both single selection and multiple selection.
  • There is no penalty for guessing; unanswered questions are scored as incorrect.
  • Have 90 minutes to complete
Exam Strategies:
  1. Read both the question and the answer in full one time through.
  2. Identify the features mentioned in the answers.
  3. Identify text in the question that implies certain AWS features. Example: required IOPS, data retrieval times.
  4. Pay attention to qualifying clauses (e.g., 'in the most cost-effective way,')
Cloud Concepts: Review:
  • With pay-as-you-go pricing, the AWS cloud services platform delivers:
    • Compute power
    • Storage
    • Database services
    • Other resources
  • Regions and Availability Zones are more highly available, fault tolerant, and scalable than traditional data-center infrastructures.
  • AWS supports three different management interfaces to access account:
    • Web-based AWS Management Console
  • Amazon CloudWatch - Have complete visibility of cloud resources and applications
  • Elastic Load Balancing - Application Auto Scaling - Deploy highly available applications that scale with demand
  • AWS Database Services - Run SQL or No-SQL databases without the management overhead
  • AWS CloudFormation - Programmatically deploy repeatable infrastructure
  • AWS is more economical than traditional data centers for applications with varying compute workloads because Amazon EC2 instances can be launched on demand when needed.
Exam Outline:
Domain 2: Security:
  1. Define the AWS Shared Responsibility model
  2. Define AWS Cloud security and compliance concepts
  3. Identify AWS access management capabilities
  4. Identify resources for security support
Security: Review:
  • Security is the highest priority at AWS.
  • The Shared Responsibility Model defines security responsibilities between AWS and the customer.
  • Maintaining physical hardware is AWS's responsibility.
  • A system administrator can add an additional layer of login security to a user's AWS Management Console sign-in by enabling multi-factor authentication (MFA).
Domain 3:
  1. Define methodology of deploying and operating in the AWS Cloud
  2. Define the AWS global infrastructure
  3. Identify the core AWS services
  4. Identify resources for technology support
  • AWS edge locations / points of presence (PoPs) are the components of the AWS global infrastructure that Amazon CloudFront uses to ensure low-latency delivery.
  • Amazon Virtual Private Cloud (Amazon VPC) AWS networking service enables a company to create a virtual network within AWS.
  • AWS CloudTrail service can identify the user that made the API call when an Amazon Elastic Compute Cloud (Amazon EC2) instance is terminated.
Domain 4:
  1. Compare and contrast the various pricing models for AWS
  2. Recognize the various account structures in relation to AWS billing and pricing
  3. Identify resources available for billing support
  • AWS Marketplace offering enables customers to find, buy, and immediately start using software solutions in their AWS environment.
  • aws.amazon.com/getting-started/hands-on
  • aws.qwiklabs.com
  • aws-labs.net
  • workshops.aws
  • wellarchitectedlabs.com
  • eksworkshop.com
  • ecsworkshop.com
  • containersfromthecouch.com
  • www.appmeshworkshop.com
  • amazon-dynamodb-labs.com
  • awssecworkshops.com
  • sagemaker-workshop.com

  • cdkworkshop.com
  • aws.amazon.com/serverless-workshops
  • learn-to-code.workshop.aws
  • lakeformation.workshop.aws
  • aws.amazon.com/training/self-paced-labs
  • observability.workshop.aws
4X
  1. Local Zones are the type of AWS infrastructure deployment that puts AWS compute, storage, database, and other select services closer to end users to run latency-sensitive applications.
  2. A company uses Amazon DynamoDB in its AWS Cloud architecture. According to the AWS shared responsibility model, Operating system patching and upgrades and Application of appropriate permissions with IAM tools are responsibilities of the company.
  3. The Spot Instances pricing model will interrupt a running Amazon EC2 instance if capacity becomes temporarily unavailable.
  4. A company with an AWS Business Support plan wants to identify Amazon EC2 Reserved Instances that are scheduled to expire. The company can use AWS Trusted Advisor to accomplish this goal.
  5. Amazon Lightsail and AWS Batch are AWS compute services.
  6. According to the AWS shared responsibility model, when using Amazon RDS, the customer is responsible for scheduling backups and AWS is responsible for performing them.
  7. Server-Side Encryption with S3 managed encryption keys (SSE-S3) and Server-Side Encryption with AWS KMS managed encryption keys (SSE-KMS) are the types that can be used to protect objects at rest in Amazon S3.
  8. A company has a globally distributed user base. The company needs its application to be highly available and have low latency for end users. Multi-Region, active-active architecture approach will most effectively support these requirements.
  9. A company is required to store its data close to its primary users. Global footprint benefit of the AWS Cloud supports this requirement.
  10. When comparing the AWS Cloud with on-premises total cost of ownership, physical storage hardware and project management expenses must be considered.
  11. A company wants an in-memory data store that is compatible with open source in the cloud. Amazon ElastiCache service should the company use.
  12. Amazon EC2 and AWS Lambda services offer compute capabilities.
  13. When using Amazon RDS, the customer is responsible for controlling network access through security groups.
  14. A company has existing software licenses that it wants to bring to AWS, but the licensing model requires licensing physical cores. The company can meet this requirement in the AWS Cloud by launching an Amazon EC2 instance on a Dedicated Host.

    https://forms.gle/fcCRdJ42uFnygtPr6

    https://forms.gle/7dJoKSUMTKSr3gLQ6

    https://forms.gle/rjANwjjYeQSAGbvU7

    https://forms.gle/ukfcjAEfx8fm2Nxz5

    https://forms.gle/Gd2RQqvrqE7UDATn8

    https://forms.gle/B6cdA2vxSDwNQVJdA

    https://forms.gle/dQN2xpj3sbqEL7JT9
  1. Design for automated recovery from failure guideline is a well-architected design principle for building cloud applications.
  2. A company wants to use an AWS service to continuously monitor the health of its application endpoints based on proximity to application users. The company also needs to route traffic to healthy Regional endpoints and to improve application availability and performance. AWS Global Accelerator will meet these requirements.
  3. A company uses Amazon EC2 instances in its AWS account for several different workloads. The company needs to perform an analysis to understand the cost of each workload. Tagging each workload's resources with cost allocation tags is the MOST operationally efficient way to meet this requirement.
  4. AWS Elastic Beanstalk provides automatic scaling for all resources to power an application from a single unified interface.
  5. A solutions architect needs to maintain a fleet of Amazon EC2 instances so that any impaired instances are replaced with new ones. The solutions architect should use Amazon EC2 Auto Scaling.
  6. AWS Artifact is the service that provides a report that enables users to assess AWS infrastructure compliance.
  7. Amazon EC2 compute instances are natively supported by AWS Snowball Edge.
  8. A security engineer wants a single-tenant AWS solution to create, control, and manage their own cryptographic keys to meet regulatory compliance requirements for data security. The engineer should use AWS CloudHSM.
  9. A company wants to implement an automated security assessment of the security and network accessibility of its Amazon EC2 instances. Amazon Inspector can be used to accomplish this.
  10. An application that runs on Amazon EC2 needs to accommodate a flexible workload that can run or terminate at any time of day. The Spot Instances pricing model will accommodate these requirements at the LOWEST cost.
  11. Resource elasticity is the AWS value proposition that describes a user's ability to scale infrastructure based on demand.
  12. AWS Customer Service billing support resource is available to all support levels.

    https://forms.gle/JCfj2b8Wa9GKUUun6

    https://forms.gle/c5VuiKNWvQygkRfq5
  • Getting started with the cloud on AWS and preparing for the AWS Certified Solutions Architect Associate exam:
    nopnithi.medium.com/fbcca23b7589
  • AWS pricing

  • How does AWS Pricing work?

  • Stephane OR Neal - AWS:
    If you want to pass the exam without going too deep, go for Stephane; otherwise, go with Neal's course
    www.youtube.com/watch?v=QKU8kZ92Ubc

    www.reddit.com/r/AWSCertifications/comments/g0kw75/neal_davis_or_stephane_maarek_for_aws_associate
Introduction - AWS Certified Solution Architect Associate SAA-C02:

What's AWS?:
  • AWS (Amazon Web Services) is a Cloud Provider
  • They Provide you with servers and services that you can use on demand and scale easily
  • AWS has revolutionized IT over time
  • AWS powers some of the biggest websites in the world
    • Amazon.com
    • Netflix
[Image: List of AWS services]
  • Creating an AWS Account:
    https://portal.aws.amazon.com/billing/signup#/start
  • AWS Budget Setup:
    https://console.aws.amazon.com/billing/home#
  • How to read an AWS Bill:
    https://console.aws.amazon.com/billing/home#/bills
AWS Fundamentals: IAM & EC2:

AWS Regions:
  • AWS has Regions all around the world
  • Names can be: us-east-1, eu-west-3...
  • A region is a cluster of data centers
  • Most AWS services are region-scoped
  • aws.amazon.com/about-aws/global-infrastructure
AWS Availability Zones:
  • Each region has many availability zones (usually 3, min is 2, max is 6). Example:
    • ap-southeast-2a
    • ap-southeast-2b
    • ap-southeast-2c
  • Each availability zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity
  • They're separate from each other, so that they're isolated from disasters

  • They're connected with high bandwidth, ultra-low latency networking
  • www.business2community.com/cloud-computing/5-things-you-need-to-know-about-aws-regions-and-availability-zones-02295344
IAM Introduction:
  • IAM (Identity and Access Management)
  • Your whole AWS security is there:
    • Users: Usually a physical person
    • Groups: Functions (admins, devops) / Teams (engineering, design...)
      Contains users!
    • Roles: Internal usage within AWS resources
  • Root account should never be used (and shared)
  • Users must be created with proper permissions
  • IAM is at the center of AWS
  • Policies are written in JSON (JavaScript Object Notation) Documents
    Defines what each User/Group/Role can and cannot do
  • IAM has a global view
  • Permissions are governed by Policies (JSON)
  • MFA (Multi Factor Authentication) can be setup
  • IAM has predefined 'managed policies'
  • It's best to give users the minimal amount of permissions they need to perform their job (least privilege principles)
IAM Federation:
  • Big enterprises usually integrate their own repository of users with IAM
  • This way, one can login into AWS using their company credentials
  • Identity Federation uses the SAML standard (Active Directory)
IAM 101 Brain Dump:
  • One IAM User per PHYSICAL PERSON
  • One IAM Role per Application
  • IAM credentials should NEVER BE SHARED
  • Never, ever, ever, ever, write IAM credentials in code. EVER.
  • And even less, NEVER EVER EVER COMMIT YOUR IAM credentials
  • Never use the ROOT account except for initial setup.
  • Never use ROOT IAM Credentials
What is Amazon EC2?:
  • EC2 is one of the most popular of AWS' offerings
  • EC2 = Elastic Compute Cloud = Infrastructure as a Service
  • It mainly consists of the capability of:
    • Renting virtual machines (EC2)
    • Storing data on virtual drives (EBS)
    • Distributing load across machines (ELB)
    • Scaling the services using an auto-scaling group (ASG)
  • Knowing EC2 is fundamental to understand how the Cloud works
How to SSH into your EC2 Instance:
  • SSH is one of the most important functions. It allows you to control a remote machine, all using the command line.
  • ssh -i EC2Tutorial.pem ec2-user@x.229.240.238
  • clear => clear screen
Introduction to Security Groups:
  • Security Groups are the fundamental of network security in AWS
  • They control how traffic is allowed into or out of EC2 Instances.
    Operate at instance level.
  • It is the most fundamental skill to learn to troubleshoot networking issues
  • Only contain allow rules
  • Rules can reference IP addresses or other security groups
Deeper Dive:
  • Security groups are acting as a 'firewall' on EC2 instances
  • They regulate:
    • Access to Ports
    • Authorized IP range - IPv4 and IPv6
    • Control of inbound network (from other to the instance)
    • Control of outbound network (from the instance to other)
Good to know:
  • Can be attached to multiple instances
  • Locked down to a region / VPC combination
  • Does live 'outside' the EC2 - if traffic is blocked the EC2 instance won't see it
  • It's good to maintain one separate security group for SSH access
  • If your application:
    • is not accessible (time out), then it's a security group issue
    • gives a 'connection refused' error, then it's an application error or it's not launched
  • All inbound traffic is blocked by default
  • All outbound traffic is authorised by default
Private vs Public IP (IPv4):
  • Networking has two sorts of IPs. IPv4 and IPv6:
    • IPv4: 2.201.21.51
    • IPv6: 400f:2a11:5656:4:311:900:f32:78d0
  • IPv4 is still the most common format used online.
  • IPv6 is newer and solves problems for the Internet of Things (IoT).
  • IPv4 allows for 3.7 billion different addresses in the public space
  • IPv4: [0-255].[0-255].[0-255].[0-255].
Fundamental Differences:
  • Public IP:
    • means the machine can be identified on the internet (WWW)
    • Must be unique across the whole web (no two machines can have the same public IP).
    • Can be geo-located easily
  • Private IP:
    • means the machine can only be identified on a private network
    • The IP must be unique across the private network
    • BUT two different private networks (two companies) can have the same IPs.
    • Machines connect to WWW using an internet gateway (a proxy)
    • Only a specified range of IPs can be used as private IP
Elastic IPs:
  • When you stop and then start an EC2 instance, its public IP can change.
  • If you need a fixed public IP for an instance, you need an Elastic IP
  • An Elastic IP is a public IPv4 address you own as long as you don't delete it
  • Can attach it to one instance at a time
  • With an Elastic IP address, can mask the failure of an instance or software by rapidly remapping the address to another instance in account.
  • Can only have 5 Elastic IP in account (can ask AWS to increase that).
  • Overall, try to avoid using Elastic IP:
    • They often reflect poor architectural decisions
    • Instead, use a random public IP and register a DNS name to it
    • Or, use a Load Balancer and don't use a public IP
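
  A minimal AWS CLI sketch of allocating and attaching an Elastic IP; the instance IDs are placeholders:

    aws ec2 allocate-address --domain vpc                 # returns an AllocationId (eipalloc-...)
    aws ec2 associate-address --instance-id i-0bbb2222 --allocation-id eipalloc-0aaa1111
    # Later, remap the same address to a replacement instance instead of changing DNS
    aws ec2 associate-address --instance-id i-0ccc3333 --allocation-id eipalloc-0aaa1111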
In AWS EC2:
  • By default, EC2 machine comes with:
    • A private IP for the internal AWS Network
    • A public IP, for the WWW.
  • When doing SSH into EC2 machines:
    • Can't use a private IP, because we are not in the same network
    • Can only use the public IP.
  • If machine is stopped and then started,
    the public IP can change
Launching an Apache Server on EC2:
  • Let's leverage our EC2 instance
  • We'll install an Apache Web Server to display a web page
  • We'll create an index.html that shows the hostname of our machine
    #!/bin/bash
    # get admin privileges
    sudo su
    # install httpd (Amazon Linux 2 version)
    # yum update -y    (optional: update existing packages first)
    yum install -y httpd.x86_64
    # start the web server now and on every boot
    systemctl start httpd.service
    systemctl enable httpd.service
    # check that the server answers locally
    curl localhost:80
    # allow inbound HTTP (port 80) in the instance's security group, then write the page
    echo "Hello World" > /var/www/html/index.html
    # enrich the page with the hostname, then with the Availability Zone from instance metadata
    echo "Hello World from $(hostname -f)" > /var/www/html/index.html
    EC2_AVAIL_ZONE=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    echo "Hello World from $(hostname -f) in AZ $EC2_AVAIL_ZONE" > /var/www/html/index.html
  1. ap-southeast-1a is an Availability Zone.
  2. Availability Zones are in isolated data centers; this helps guarantee that multiple AZs won't all fail at once (due to a meteorological disaster, for example).
  3. All of Users, Roles, Policies, and Groups are IAM components.
  4. IAM is a global service (encompasses all regions), IAM Users are NOT defined on a per-region basis.
  5. An IAM user can belong to multiple groups.
  6. Getting started with AWS, your manager wants things to remain simple yet secure. He wants the management of engineers to be easy, without re-inventing the wheel every time someone joins the company. Create multiple IAM users and groups, and assign policies to the groups; new users are then simply added to groups. This is the best practice when you have a big organization.
  7. Never share IAM credentials. If colleagues need access to AWS they'll need their own account.
  8. Pay for an EC2 instance compute component only when it's in 'running' state.
  9. Getting a permissions error when trying to SSH into a Linux instance usually means the key file's permissions are too open; fix them with chmod 0400.
  10. Any timeout errors when trying to SSH into EC2 instance (not just in SSH but also HTTP for example) means a misconfiguration of security groups.
  11. When a security group is created, the default behavior is to deny all inbound traffic and allow all outbound traffic.
  12. Security groups can reference IP address and CIDR block.
  13. EC2 User Data provides startup instructions to EC2 instances.
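The Apache script shown earlier can be passed as EC2 User Data so it runs automatically at first boot. A minimal boto3 sketch (the AMI ID and security group ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    user_data = """#!/bin/bash
    yum install -y httpd.x86_64
    systemctl enable --now httpd.service
    echo "Hello World from $(hostname -f)" > /var/www/html/index.html
    """

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # placeholder Amazon Linux 2 AMI
        InstanceType="t2.micro",
        MinCount=1, MaxCount=1,
        UserData=user_data,                   # startup instructions, run once at first boot
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )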
EC2 On Demand:
  • Pay for what use:
    • Linux - billing per second, after the first minute
    • All other operating systems (ex: Windows) - billing per hour
  • Has the highest cost but no upfront payment
  • No long term commitment
  • Recommended for short-term and un-interrupted workloads, where can't predict how the application will behave.
EC2 Reserved Instances:
  • Up to 75% discount compared to On-demand
  • Reservation period: 1 year = + discount | 3 years = +++ discount
  • Purchasing options: no upfront | partial upfront = + discount | All upfront = ++ discount
  • Reserve a specific instance type
  • Recommended for steady-state usage applications (think database)
  • Convertible Reserved Instance:
    • can change the EC2 instance type
    • Up to 54% discount
  • Scheduled Reserved Instances:
    • launch within the time window you reserve
    • When you require a fraction of a day / week / month
    • Still commitment over 1 to 3 years
EC2 Spot Instances:
  • Can get a discount of up to 90% compared to On-demand (the biggest discount)
  • Instances that you can 'lose' at any point in time if your max price is less than the current spot price
  • The MOST cost-efficient instances in AWS
  • Useful for short workloads that are resilient to failure (less reliable)
    • Batch jobs
    • Data analysis
    • Image processing
    • Any distributed workloads
    • Workloads with a flexible start and end time
  • Not suitable for critical jobs or databases
  • Great combo: Reserved Instances for baseline + On-Demand & Spot for peaks
EC2 Dedicated Hosts:
  • A physical server with EC2 instance capacity fully dedicated to use
  • Can help address compliance requirements
  • Reduce costs by allowing to use existing server-bound software licenses
  • Allocated for account for a 3 year period reservation
  • More expensive
  • Useful for software that have complicated licensing model (BYOL - Bring Your Own License)
  • Or for companies that have strong regulatory or compliance needs
  • Per host billing
  • Visibility of sockets, cores, host ID
  • Affinity between a host and instance
  • Targeted instance placement
  • Add capacity using an allocation request
EC2 Dedicated Instances:
  • Instances running on hardware that's dedicated
  • May share hardware with other instances in same account
  • No control over instance placement (can move hardware after Stop / Start)
  • Per instance billing (subject to a $2 per region fee)
Common Characteristic Dedicated Instances and Hosts:
  • Enables the use of dedicated physical servers
  • Automatic Instance placement
Which host / purchasing option is right?:
  • On demand: coming and staying in a resort whenever you like, paying the full price
  • Reserved: like planning ahead; if you plan to stay for a long time, you may get a good discount.
  • Spot instances: the hotel allows people to bid for the empty rooms and the highest bidder keeps the rooms. You can get kicked out at any time
  • Dedicated Hosts: Book an entire building of the resort
Price Comparison
Example - m4.large - ap-southeast-1:
  • On-demand: $0.125 per Hour
  • Spot Instance (Spot Price): $0.0311 - $0.1231
  • Spot Block (1 to 6 hours): $0.069 - $0.152
  • Reserved Instance (12 months) - no upfront: $0.078
  • Reserved Instance (12 months) - all upfront: $0.072
  • Reserved Instance (36 months) - no upfront: $0.053
  • Reserved Convertible Instance (12 months) - no upfront: $0.089
  • Reserved Dedicated Instance (12 months) - all upfront: $0.08
  • Dedicated Host: On-demand price
  • Dedicated Host Reservation: Up to 70% off
EC2 Spot Instance Requests:
  • Can get a discount of up to 90% compared to On-demand
  • Define max spot price and get the instance while current spot price < max
    • The hourly spot price varies based on offer and capacity
    • If the current spot price > max price, you can choose to stop or terminate the instance within a 2-minute grace period.
  • Other strategy: Spot Block
    • 'block' spot instance during a specified time frame (1 to 6 hours) without interruptions
    • In rare situations, the instance may be reclaimed
  • Used for batch jobs, data analysis, or workloads that are resilient to failures.
  • Not great for critical jobs or databases
How to terminate Spot Instances?:
spot_lifecycle.png
spot_request_states.png
Can only cancel Spot Instance requests that are open, active, or disabled.
Cancelling a Spot Request does not terminate instances. You must first cancel the Spot Request, and then terminate the associated Spot Instances.
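In code, the two-step clean-up looks like the following boto3 sketch (the Spot request ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")
    request_id = "sir-0123456789example"   # placeholder Spot Instance request ID

    # 1) cancel the Spot request (only open / active / disabled requests can be cancelled)
    ec2.cancel_spot_instance_requests(SpotInstanceRequestIds=[request_id])

    # 2) cancelling does NOT terminate the instance, so terminate it explicitly
    desc = ec2.describe_spot_instance_requests(SpotInstanceRequestIds=[request_id])
    instance_id = desc["SpotInstanceRequests"][0].get("InstanceId")
    if instance_id:
        ec2.terminate_instances(InstanceIds=[instance_id])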
Spot Fleets:
  • Set of Spot Instances + (optional) On-Demand Instances
  • Will try to meet the target capacity with price constraints
    • Define possible launch pools: instance type (m5.large), OS, Availability Zone
    • Can have multiple launch pools, so that the fleet can choose
    • Spot Fleet stops launching instances when reaching capacity or max cost
  • Strategies to allocate Spot Instances:
    • lowestPrice: from the pool with the lowest price (cost optimization, short workload)
    • diversified: distributed across all pools (great for availability, long workloads)
    • capacityOptimized: pool with the optimal capacity for the number of instances
  • Allow us to automatically request Spot Instances with the lowest price
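A minimal boto3 sketch of a Spot Fleet request with two launch pools and the lowestPrice strategy (the role ARN, AMI and subnet IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    response = ec2.request_spot_fleet(
        SpotFleetRequestConfig={
            "IamFleetRole": "arn:aws:iam::123456789012:role/my-spot-fleet-role",
            "TargetCapacity": 4,
            "AllocationStrategy": "lowestPrice",   # or "diversified" / "capacityOptimized"
            "LaunchSpecifications": [              # two pools the fleet can choose from
                {"ImageId": "ami-0123456789abcdef0", "InstanceType": "m5.large",
                 "SubnetId": "subnet-11111111"},
                {"ImageId": "ami-0123456789abcdef0", "InstanceType": "m5.xlarge",
                 "SubnetId": "subnet-22222222"},
            ],
        },
    )
    print(response["SpotFleetRequestId"])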
EC2 Instance Types - Main ones:
  • R: applications that need a lot of RAM - in-memory caches
  • C: applications that need good CPU - compute / databases
  • M: applications that are balanced (think 'medium') - general / web app
  • I: applications that need good local I/O (instance storage) - databases
  • G: applications that need a GPU - video rendering / machine learning
  • T2/T3: burstable instances (up to a capacity)
  • T2/T3 - unlimited: unlimited burst
  • Real-world tip: use Easy Amazon EC2 / RDS Instance Comparison
    https://instances.vantage.sh
Burstable Instances (T2/T3):
  • AWS has the concept of burstable instances (T2/T3 machines)
  • Burst means that overall, the instance has OK CPU performance.
  • When the machine needs to process something unexpected (a spike in load for example), it can burst, and CPU can be VERY good.
  • If the machine bursts, it utilizes 'burst credits'
  • If all the credits are gone, the CPU becomes BAD
  • If the machine stops bursting, credits are accumulated over time
  • Burstable instances can be amazing to handle unexpected traffic, giving the assurance that it will be handled correctly
  • If instance consistently runs low on credit, need to move to a different kind of non-burstable instance
CPU Credits: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html
T2/T3 Unlimited:
  • Nov 2017: It is possible to have an 'unlimited burst credit balance'
  • Pay extra money if you go over the credit balance, but don't lose in performance
  • Overall, it is a new offering, so be careful: costs could go high if you are not monitoring the health of your instances
  • https://aws.amazon.com/blogs/aws/new-t2-unlimited-going-beyond-the-burst-with-high-performance
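Switching an existing T2/T3 instance to unlimited mode is a single call. A minimal boto3 sketch (the instance ID is a placeholder):

    import boto3

    ec2 = boto3.client("ec2")

    # 'unlimited' lets the instance burst past its credit balance (billed extra);
    # 'standard' switches it back
    ec2.modify_instance_credit_specification(
        InstanceCreditSpecifications=[
            {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "unlimited"}
        ]
    )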
What's an AMI?:
  • As we saw, AWS comes with base images such as: Ubuntu, Fedora, RedHat, Windows, etc.
  • These images can be customized at runtime using EC2 User data
  • But what if you could create your own image, ready to go?
  • That's an AMI - an image to use to create instances
  • AMIs can be built for Linux or Windows machines
Why would use a custom AMI?:
  • Using a custom built AMI can provide the following advantages:
    • Pre-installed packages needed
    • Faster boot time (no need for ec2 user data at boot time)
    • Machine comes configured with monitoring / enterprise software
    • Security concerns - control over the machines in the network
    • Control of maintenance and updates of AMIs over time
    • Active Directory Integration out of the box
    • Installing app ahead of time (for faster deploys when auto-scaling)
    • Using someone else's AMI that is optimized for running an app, DB, etc.
  • AMIs are built for a specific AWS region (!)
Using Public AMIs:
  • Can leverage AMIs from other people
  • Can also pay for other people's AMI by the hour
    • These people have optimized the software
    • The machine is easy to run and configure
    • Basically rent 'expertise' from the AMI creator
  • AMI can be found and published on the Amazon Marketplace
  • Warning:
    • Do not use an AMI you don't trust!
    • Some AMIs might come with malware or may not be secure for enterprise
AMI Storage:
  • AMIs take space and they live in Amazon S3
  • Amazon S3 is a durable, cheap and resilient storage where most of your backups will live (but you won't see them in the S3 console)
  • By default, AMIs are private, and locked for account / region
  • Can also make AMIs public and share them with other AWS accounts or sell them on the AMI Marketplace
AMI Pricing:
  • AMIs live in Amazon S3, so you get charged for the actual space they take in Amazon S3
  • Amazon S3 pricing in AP-SOUTHEAST-1:
    • First 50 TB / Month: $0.025 per GB
    • Next 450 TB / Month: $0.024 per GB
  • Overall it is quite inexpensive to store private AMIs.
  • Make sure to remove the AMIs you don't use
Number of Instance for Free Tier Micro Account

Cross Account AMI Copy (FAQ + Exam Tip):
  • Can share an AMI with another AWS account.
  • Sharing an AMI does not affect the ownership of the AMI.
  • If you copy an AMI that has been shared with your account, you are the owner of the target AMI in your account.
  • To copy an AMI that was shared with you from another account, the owner of the source AMI must grant you read permissions for the storage that backs the AMI, either the associated EBS snapshot (for an Amazon EBS-backed AMI) or an associated S3 bucket (for an instance store-backed AMI).
  • Limits:
    • You can't copy an encrypted AMI that was shared with you from another account. Instead, if the underlying snapshot and encryption key were shared with you, you can copy the snapshot while re-encrypting it with a key of your own. You own the copied snapshot, and can register it as a new AMI.
    • You can't copy an AMI with an associated billingProduct code that was shared with you from another account. This includes Windows AMIs and AMIs from the AWS Marketplace. To copy a shared AMI with a billingProduct code, launch an EC2 instance in your account using the shared AMI and then create an AMI from the instance.
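Because AMIs are region-locked, using the same image in another region means creating it and then copying it. A minimal boto3 sketch (instance ID, AMI name and regions are placeholders):

    import boto3

    # create an AMI from a configured instance in the source region
    source = boto3.client("ec2", region_name="ap-southeast-2")
    image = source.create_image(InstanceId="i-0123456789abcdef0", Name="my-app-ami")

    # copy the AMI into the target region; the copy gets a new AMI ID there
    target = boto3.client("ec2", region_name="us-east-1")
    copy = target.copy_image(
        SourceImageId=image["ImageId"],
        SourceRegion="ap-southeast-2",
        Name="my-app-ami",
    )
    print(copy["ImageId"])   # use this new ID when launching in us-east-1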
Placement Groups:
  • Sometimes want control over the EC2 Instance placement strategy
  • That strategy can be defined using placement groups
  • When create a placement group, specify one of the following strategies for the group:
    • Cluster - clusters instances into a low-latency group in a single Availability Zone
    • Spread - spreads instances across underlying hardware (max 7 instances per group per AZ) - critical applications
    • Partition - spreads instances across many different partitions (which rely on different sets of racks) within an AZ. Scales to 100s of EC2 instances per group (Hadoop, Cassandra, Kafka)
IAM Tutorial: Delegate access to the billing console

Placement Groups:
Cluster:
Img1.jpg

  • Pros: Great network (Low latency 10 Gbps bandwidth between instances)
  • Cons: If the rack fails, all instances fail at the same time
  • Use case:
    • Big Data job that needs to complete fast
    • Application that needs extremely low latency and high network throughput
Spread:
img2.jpg
  • Pros:
    • Can span across Availability Zones (AZ)
    • Reduced risk of simultaneous failure
    • EC2 Instances are on different physical hardware
  • Cons: Limited to 7 instances per AZ per placement group
  • Use case:
    • Application that needs to maximize high availability
    • Critical Applications where each instance must be isolated from failure from each other
Partition:
Img3.jpg
  • Up to 7 partitions per AZ
  • Can span across multiple AZs in the same region
  • Up to 100s of EC2 instances
  • The instances in a partition do not share racks with the instances in the other partitions
  • A partition failure can affect many EC2 but won't affect other partitions
  • EC2 instances get access to the partition information as metadata
  • Use cases: HDFS, HBase, Cassandra, Kafka
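Creating a placement group and launching into it is straightforward. A minimal boto3 sketch for a partition group (the AMI ID and group name are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # strategy can be "cluster", "spread" or "partition"
    ec2.create_placement_group(GroupName="kafka-pg", Strategy="partition", PartitionCount=7)

    # launch an instance into the placement group
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="m5.large",
        MinCount=1, MaxCount=1,
        Placement={"GroupName": "kafka-pg"},
    )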
Elastic Network Interfaces (ENI):
  • Logical component in a VPC that represents a virtual network card
  • The ENI can have the following attributes:
    • Primary private IPv4, one or more secondary IPv4
    • One Elastic IP (IPv4) per private IPv4
    • One Public IPv4
    • One or more security groups
    • A MAC address
  • Can create ENI independently and attach them on the fly (move them) on EC2 instances for failover
  • Bound to a specific availability zone (AZ)
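A minimal boto3 sketch of the failover pattern described above: create an ENI, then attach it to an instance as a secondary interface (subnet, security group and instance IDs are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # create the ENI in a subnet (ENIs are bound to that subnet's AZ)
    eni = ec2.create_network_interface(
        SubnetId="subnet-0123456789abcdef0",
        Groups=["sg-0123456789abcdef0"],
        Description="failover ENI",
    )
    eni_id = eni["NetworkInterface"]["NetworkInterfaceId"]

    # attach it as a secondary interface; detach and re-attach to another
    # instance in the same AZ to move the private IP during failover
    ec2.attach_network_interface(
        NetworkInterfaceId=eni_id,
        InstanceId="i-0123456789abcdef0",
        DeviceIndex=1,
    )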
EC2 Hibernate:
  • We know we can stop, terminate instances:
    • Stop: the data on disk (EBS) is kept intact in the next start
    • Terminate: any EBS volume (root) that is also set up to be destroyed is lost
  • On start, the following happens:
    • First start: the OS boots & the EC2 User Data script is run
    • Following starts: the OS boots up
    • Then application starts, caches get warmed up, and that can take time!
  • Introducing EC2 Hibernate:
    • The in-memory (RAM) state is preserved
    • The instance boot is much faster!
      (the OS is not stopped / restarted)
    • Under the hood: the RAM state is written to a file in the root EBS volume
    • The root EBS volume must be encrypted
  • Use cases:
    • long-running processing
    • saving the RAM state
    • services that take time to initialize
  • Supported instance families - C3, C4, C5, M3, M4, M5, R3, R4, and R5.
  • Instance RAM size - must be less than 150 GB.
  • Instance size - not supported for bare metal instances.
  • AMI: Amazon Linux 2, Linux AMI, Ubuntu... & Windows
  • Root Volume: must be EBS, encrypted, not instance store, and large
  • Available for On-Demand and Reserved Instances
  • An instance cannot be hibernated more than 60 days
EC2 for Solution Architects:
  • EC2 instances are billed by the second, t2.micro is free tier
  • On Linux / Mac we use SSH, on Windows we use Putty
  • SSH is on port 22, lock down the security group to your IP
  • Timeout issues => Security groups issues
  • Permission issues on the SSH key => run 'chmod 0400'
  • Security Groups can reference other Security Groups instead of IP ranges
  • Know the difference between Private, Public and Elastic IP
  • Can customize an EC2 instance at boot time using EC2 User Data
  • The 4 EC2 launch modes:
    • On demand
    • Reserved
    • Spot instances
    • Dedicated Hosts
  • The basic instance types: R, C, M, I, G, T2/T3
  • Can create AMIs to pre-install software on EC2 => faster boot
  • AMI can be copied across regions and accounts
  • EC2 instances can be started in placement groups:
    • Cluster
    • Spread
  • Cluster placement groups place instances next to each other, giving high-performance computing and networking when instances talk to each other, e.g. while performing big data analysis.
  • Plan on running an open-source MongoDB database year-round on EC2? Reserved Instances is the launch mode you should choose. This will allow you to save cost as you know that the instance will be up for a full year.
  • Built and published an AMI in the ap-southeast-2 region, and a colleague in the us-east-1 region cannot see it, because an AMI created for a region can only be seen in that region.
  • Launching an EC2 instance in us-east-1 using this Python script snippet:
    ec2.create_instances(ImageId='ami-c34b6f8', MinCount=1, MaxCount=1)
    It works well, so decide to deploy script in us-west-1 as well. There, the script does not work and fails with 'ami not found' error because AMI is region locked and the same ID cannot be used across regions.
  • Would like to deploy a database technology and the vendor license bills based on the physical cores and underlying network socket visibility. Dedicated Hosts EC2 launch modes allow to get visibility into them.
  • Launching an application on EC2 where the whole process of installing the application takes about 30 minutes, and you would like to minimize the total time for the instance to boot up and be operational to serve traffic: create an AMI after installing the applications and launch from that AMI. This allows you to start more EC2 instances directly from the AMI, bypassing the need to install the application (as it's already installed).
  • Running a critical workload of three hours per week, on Tuesday. As a solutions architect, Scheduled Reserved Instances EC2 Instance Launch Types should choose to maximize the cost savings while ensuring the application stability.
  • It's easy to horizontally scale thanks to cloud offerings such as Amazon EC2
High Availability:
  • Usually goes hand in hand with horizontal scaling
  • Means running application / system in at least 2 data centers (== Availability Zones)
  • The goal is to survive a data center loss
  • Can be passive (for RDS Multi AZ for example)
  • Can be active (for horizontal scaling)
High Availability & Scalability For EC2:
  • Vertical Scaling: Increase instance size (= scale up / down)
    • From: t2.nano - 0.5G of RAM, 1 vCPU
    • To: u-12tb1.metal - 12.3TB of RAM, 448 vCPUs
  • Horizontal Scaling: Increase number of instances (= scale out / in)
    • Auto Scaling Group
    • Load Balancer
  • High Availability: Run instances for the same application across multi AZ
    • Auto Scaling Group multi AZ
    • Load Balancer multi AZ
What is load balancing?:
  • Load balancers are servers that forward internet traffic to multiple servers (EC2 Instances) downstream.
Why use a load balancer?:
  • Spread load across multiple downstream instances
  • Expose a single point of access (DNS) to application
  • Seamlessly handle failures of downstream instances
  • Do regular health checks to instances
  • Provide SSL termination (HTTPS) for websites
  • Enforce stickiness with cookies
  • High availability across zones
  • Separate public traffic from private traffic
Why use an EC2 Load Balancer?:
  • An ELB (Elastic Load Balancer) is a managed load balancer
    • AWS guarantees that it will be working
    • AWS takes care of upgrades, maintenance, high availability
    • AWS provides only a few configuration knobs
  • It costs less to set up your own load balancer but it will be a lot more effort.
  • It is integrated with many AWS offering / services
Health Checks:
  • Are crucial for Load Balancers
  • They enable the load balancer to know if instances it forwards traffic to are available to reply to requests
  • The health check is done on a port and a route (/health is common)
  • If the response is not 200 (OK), then the instance is unhealthy
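Health checks are configured on the target group. A minimal boto3 sketch of a target group that polls /health and expects HTTP 200 (the VPC ID is a placeholder):

    import boto3

    elbv2 = boto3.client("elbv2")

    tg = elbv2.create_target_group(
        Name="web-tg",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        HealthCheckProtocol="HTTP",
        HealthCheckPath="/health",          # the route the load balancer polls
        HealthCheckIntervalSeconds=30,
        HealthyThresholdCount=2,
        UnhealthyThresholdCount=2,
        Matcher={"HttpCode": "200"},        # anything else marks the target unhealthy
    )
    print(tg["TargetGroups"][0]["TargetGroupArn"])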
Types of load balancer on AWS:
  • Has 3 kinds of managed Load Balancers
  • Classic Load Balancer (v1 - old generation) - 2009
    • HTTP, HTTPS, TCP
  • Application Load Balancer (v2 - new generation) - 2016
    • HTTP, HTTPS, WebSocket
  • Network Load Balancer (v2 - new generation) - 2017
    • TCP, TLS (secure TCP) & UDP
  • Overall, it is recommended to use the newer / v2 generation load balancers as they provide more features
  • Can setup internal (private) or external (public) ELBs
Load Balancer Security Groups:
  • Allows only traffic from the load balancer to EC2 instances
Good to Know:
  • LBs can scale but not instantaneously - contact AWS for a 'warm-up'
  • Troubleshooting:
    • 4xx errors are client induced errors
    • 5xx errors are application induced errors
    • LB Errors 503 means at capacity or no registered target
    • If the LB can't connect to application, check security groups!
  • Monitoring:
    • ELB access logs will log all access requests (so can debug per request)
    • CloudWatch Metrics will give aggregate statistics (ex: connections count)
Classic Load Balancers (v1):
  • Supports TCP (Layer 4), HTTP & HTTPS (Layer 7)
  • Health checks are TCP or HTTP based
  • Fixed hostname XXX.region.elb.amazonaws.com
Application Load Balancer (v2):
  • Application load balancer is Layer 7 (HTTP)
  • Load balancing to multiple HTTP applications across machines (target groups) / applications on the same machine (ex: containers)
  • Support for HTTP/2 and WebSocket / redirects (from HTTP to HTTPS for example)
  • Routing tables to different target groups:
    • Routing based on path in URL (example.com/users & example.com/posts)
    • Based on hostname in URL (one.example.com & other.example.com)
    • On Query String, Headers (example.com/users?id=123&order=false)
  • Are a great fit for micro services & container-based application (example: Docker & Amazon ECS)
  • Has a port mapping feature to redirect to a dynamic port in ECS
  • In comparison, we'd need multiple Classic Load Balancers (one per application)
Target Groups:
  • EC2 instances (can be managed by an Auto Scaling Group) - HTTP
  • ECS tasks (managed by ECS itself) - HTTP
  • Lambda functions - HTTP request is translated into a JSON event
  • IP Address - must be private IPs
  • ALB can route to multiple target groups
  • Health checks are at the target group level
Good to Know:
  • Fixed hostname (XXX.region.elb.amazonaws.com)
  • The application servers don't see the IP of the client directly
    • The true IP of the client is inserted in the header X-Forwarded-For
    • Can also get Port (X-Forwarded-Port) and proto (X-Forwarded-Proto)
Network Load Balancer (v2):
  • (Layer 4) allow to:
    • Forward TCP & UDP Based traffic to instances
    • Handle millions of requests per second
    • Less latency ~100 ms (vs 400 ms for ALB)
  • Has one static IP per AZ, and supports assigning Elastic IP (helpful for whitelisting specific IP)
  • Are used for extreme performance, TCP or UDP traffic
  • Not included in the AWS free tier
Load Balancer Stickiness:
  • It is possible to implement stickiness so that the same client is always redirected to the same instance behind a load balancer
  • This works for Classic Load Balancers & ALBs
  • The 'cookie' used for stickiness has an expiration date control
  • Use case: make sure the user doesn't lose his session data
  • Enabling stickiness may bring imbalance to the load over the backend EC2 instances
Cross-Zone Load Balancing:
  • Each load balancer instance distributes evenly across all registered instances in all AZ
    2 instances in AZ1, 3 instances in AZ2: all instances will share traffic 20% for each
  • Otherwise, each ELB node distributes requests only across the registered instances in its own AZ
    2 instances in AZ1, 3 instances in AZ2: each instance in AZ1 gets 25% of the traffic, while each instance in AZ2 gets 16.67%
  • ALB:
    • Always on (can't be disabled)
    • No charges for inter AZ data
  • NLB:
    • Disabled by default
    • Pay charges ($) for inter AZ data if enabled
  • Classic LB:
    • Through Console => Enabled by default
    • CLI/API => Disabled by default
    • No charges for inter AZ data if enabled
SSL/TLS - Basics:
  • An SSL Certificate allows traffic between clients and load balancer to be encrypted in transit (in-flight encryption)
  • SSL refers to Secure Sockets Layer, used to encrypt connections
  • TLS refers to Transport Layer Security, which is a newer version
  • Nowadays, TLS certificates are mainly used, but people still refer to them as SSL
  • Public SSL certificates are issued by Certificate Authorities (CA)
  • Comodo, Symantec, GoDaddy, GlobalSign, Digicert, Letsencrypt, etc...
  • SSL certificates have an expiration date (set) and must be renewed
Load Balancer - SSL Certificates:
  • The load balancer uses an X.509 certificate (SSL/TLS server certificate)
  • Can manage certificates using ACM (AWS Certificate Manager)
  • Can alternatively create and upload your own certificates
  • HTTPS listener:
    • Must specify a default certificate
    • Can add an optional list of certs to support multiple domains
    • Clients can use SNI (Server Name Indication) to specify the hostname they reach
    • Ability to specify a security policy to support older versions of SSL / TLS (legacy clients)
SSL - Server Name Indication:
  • SNI solves the problem of loading multiple SSL certificates onto one web server (to serve multiple websites)
  • It's a 'newer' protocol, and requires the client to indicate the hostname of the target server in the initial SSL handshake
  • The server will then find the correct certificate, or return the default one
    Note:
  • Only works for ALB & NLB (newer generation), CloudFront
  • Does not work for CLB (older gen)
Elastic Load Balancers - SSL Certificates:
  • Classic Load Balancer (v1: CLB):
    • Support only one SSL certificate
    • Must use multiple CLB for multiple hostname with multiple SSL certificates
  • Application Load Balancer (v2) & Network Load Balancer (v2):
    • Supports multiple listeners with multiple SSL certificates
    • Uses Server Name Indication (SNI) to make it work
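A minimal boto3 sketch of an HTTPS listener with a default certificate plus extra certificates that SNI selects per hostname (all ARNs are placeholders):

    import boto3

    elbv2 = boto3.client("elbv2")

    # HTTPS listener on the ALB with a default certificate
    listener = elbv2.create_listener(
        LoadBalancerArn="arn:aws:elasticloadbalancing:region:111122223333:loadbalancer/app/my-alb/abc123",
        Protocol="HTTPS",
        Port=443,
        Certificates=[{"CertificateArn": "arn:aws:acm:region:111122223333:certificate/default-cert"}],
        DefaultActions=[{"Type": "forward",
                         "TargetGroupArn": "arn:aws:elasticloadbalancing:region:111122223333:targetgroup/web-tg/def456"}],
    )

    # additional certificates for other hostnames; the client's SNI picks the right one
    elbv2.add_listener_certificates(
        ListenerArn=listener["Listeners"][0]["ListenerArn"],
        Certificates=[
            {"CertificateArn": "arn:aws:acm:region:111122223333:certificate/api-cert"},
            {"CertificateArn": "arn:aws:acm:region:111122223333:certificate/checkout-cert"},
        ],
    )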
ELB - Connection Draining:
  • Feature naming:
    • CLB: Connection Draining
    • Target Group: Deregistration Delay
      (for ALB & NLB)
  • Time to complete 'in-flight requests' while the instance is de-registering or unhealthy
  • Stops sending new requests to the instance which is de-registering
  • Between 1 to 3,600 seconds, default is 300 seconds
  • Can be disabled (set value to 0)
  • Set to a low value if requests are short
What's an Auto Scaling Group?:
  • In real-life, the load on websites and application can change
  • In the cloud, can create and get rid of servers very quickly
  • The goal of an Auto Scaling Group (ASG) is to:
    • Scale out (add EC2 instances) to match an increased load
    • Scale in (remove EC2 instances) to match a decreased load
    • Ensure have a minimum and a maximum number of machines running
    • Automatically Register new instances to a load balancer
AWS Fundamentals:
AWS Cloud Technical Essentials:

  • Should consider four main aspects when deciding which AWS Region to host applications and workload: latency, price, service availability, and compliance. Focusing on these factors will enable to make the right decision when choosing an AWS Region.
  • Every action you take in AWS is an API call.
  • The AWS Global Infrastructure is nested for high availability and redundancy. AWS Regions are clusters of Availability Zones that are connected through highly available and redundant high-speed links, and Availability Zones are clusters of data centers that are also connected through highly available and redundant high-speed links.
  • There are six benefits of cloud computing. Going global in minutes means can easily deploy applications in multiple Regions around the world with just a few clicks.
  • With the cloud, no longer have to manage and maintain own hardware in own data centers. Companies like AWS own and maintain these data centers and provide virtualized data center technologies and services to users over the internet.
  • Use an access key (an access key ID and secret access key) to make programmatic requests to AWS. However, do not use the AWS account root user access key. The access key for the AWS account root user gives full access to all resources for all AWS services, including billing information. You cannot reduce the permissions associated with the AWS account root user access key. Therefore, protect the root user access key like credit card numbers or any other sensitive secret. You should disable or delete any access keys associated with the root user, and should also enable MFA for the root user.
  • Users in company are authenticated in corporate network and want to be able to use AWS without having to sign in again. Instead of creating an IAM User for each employee that needs access to the AWS account, should use IAM Roles to federate users.
  • A policy is an object in AWS that, when associated with an identity or resource, defines their permissions. AWS evaluates these policies when an IAM principal (user or role) makes a request. Permissions in the policies determine whether the request is allowed or denied. Most policies are stored in AWS as JSON documents that are attached to an IAM identity (user, group of users, or role). The information in a policy statement is contained within a series of elements:
    • Version – Specify the version of the policy language that want to use. As a best practice, use the latest 2012-10-17 version.
    • Statement – Use this main policy element as a container for the following elements. Can include more than one statement in a policy.
    • Sid (Optional) – Include an optional statement ID to differentiate between statements.
    • Effect – Use Allow or Deny to indicate whether the policy allows or denies access.
    • Principal (Required in only some circumstances) – If create a resource-based policy, must indicate the account, user, role, or federated user to which would like to allow or deny access. If creating an IAM permissions policy to attach to a user or role, cannot include this element. The principal is implied as that user or role.
    • Action – Include a list of actions that the policy allows or denies.
    • Resource (Required in only some circumstances) – If create an IAM permissions policy, must specify a list of resources to which the actions apply. If create a resource-based policy, this element is optional. If do not include this element, then the resource to which the action applies is the resource to which the policy is attached.
    • Condition (Optional) – Specify the circumstances under which the policy grants permission.
  • Multi-factor Authentication is an authentication method that requires the user to provide two or more verification factors to gain access to an AWS account.
  • When create a VPC, have to specify the AWS region it will reside in, the IP range for the VPC, as well as the name of the VPC.
  • Route Tables can be attached to VPCs and subnets.
  • A network ACL secures subnets, while a security group is responsible for securing EC2 instances.
  • To allow resources to communicate with the internet, you will need to attach an internet gateway to the VPC, create a route in a route table to the internet gateway, and attach the route table to the subnet with internet-facing resources. You will also need to make sure internet-facing resources have a public IP address.
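The IAM policy elements listed earlier (Version, Statement, Sid, Effect, Action, Resource, Condition) fit together as in this minimal boto3 sketch, which creates a customer-managed policy allowing read-only access to one DynamoDB table (the account ID, table name and policy name are placeholders):

    import json
    import boto3

    iam = boto3.client("iam")

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowReadOnlyOnOneTable",
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:Query", "dynamodb:Scan"],
                "Resource": "arn:aws:dynamodb:ap-southeast-1:123456789012:table/Orders",
            }
        ],
    }

    iam.create_policy(
        PolicyName="OrdersTableReadOnly",
        PolicyDocument=json.dumps(policy_document),
    )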
  • The default configuration of a security group blocks all inbound traffic and allows all outbound traffic.
  • Amazon EC2 provides a wide selection of instance types optimized to fit different use cases. Instance types comprise varying combinations of CPU, memory, storage, and networking capacity and give the flexibility to choose the appropriate mix of resources for applications. Each instance type includes one or more instance sizes, allowing to scale resources to the requirements of target workload.
  • When launching an Amazon EC2 instance, you must choose the subnet to place the instance into. Subnets reside in one single AZ and cannot span AZs, therefore EC2 instances also reside in one Availability Zone. You should architect for high availability in case one AZ is unreachable for any reason or is experiencing outages. To do so, AWS resources like Amazon EC2 should be deployed redundantly across at least two AZs.
  • AWS Fargate is a serverless compute platform for either Amazon ECS or Amazon EKS. When use Fargate, the compute infrastructure needed to run containers is managed by AWS whereas with Amazon ECS on EC2 for the compute platform you are responsible for managing the underlying EC2 cluster hosting containers.
  • With serverless on AWS do not have to pay for idling resources, instead only pay for what use and each serverless service will charge differently based on usage.
  • AWS Lambda is a great solution for many use cases, but it does not fit all use cases. For long running processes, Lambda is not the best choice since it has a 15 minute runtime limit.
  • Amazon EC2 provides with a great deal of control over the environment application runs in, serverless services like AWS Lambda exist to provide convenience whereas services like Amazon EC2 provide control.
  • Amazon S3 is an object storage service designed for large objects like media files. Because can store unlimited objects, and each individual object can be up to 5 TBs, S3 is an ideal location to host video, photo, or music uploads.
  • Amazon EBS would be ideal for a high-transaction database storage layer. Is considered persistent storage.
    Amazon S3 is not ideal, as it's considered WORM (Write Once, Read Many) storage.
    Amazon EC2 Instance Store is ephemeral storage, and persistence is needed for databases.
    EFS is ideal when you have multiple servers that need access to the same set of files.
  • Amazon Glacier Deep Archive is Amazon S3's lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice in a year. It is designed for customers - particularly those in highly regulated industries, such as the Financial Services, Healthcare, and Public Sectors - that retain data sets for 7 to 10 years or longer to meet regulatory compliance requirements.
  • Amazon S3 is a regional service. However, the namespace is shared by all AWS accounts, so the bucket name must be globally unique.
  • When use Amazon RDS, it places the DB instance into a subnet which is bound by one AZ. For high availability reasons, should use a Multi-AZ deployment in case one AZ is temporarily unavailable.
  • Amazon DynamoDB allows for a flexible schema, so each item can have variation in the attributes outside of the primary and secondary key. For a dataset that has variation within the data, as in not every piece of data share all the same attributes.
  • When using Amazon RDS, no longer responsible for the underlying environment the database runs on, instead can focus on optimizing the database. This is because Amazon RDS has components that are managed by AWS.
  • EC2 Auto Scaling requires to specify three main components:
    • a launch template or a launch configuration as a configuration template for the EC2 instances
    • an EC2 Auto Scaling group that allows to specify minimum, maximum, and desired capacity of instances
    • and scaling policies that allow to configure a group to scale based on the occurrence of specified conditions or on a schedule.
  • ELB automatically scales depending on the traffic. It handles the incoming traffic and sends it to backend application. ELB also integrates seamlessly with EC2 Auto Scaling. As soon as a new EC2 instance is added to or removed from the EC2 Auto Scaling group, ELB is notified and can begin to direct traffic to the new instance.
  • Instances that are launched by Auto Scaling group are automatically registered with the load balancer. Likewise, instances that are terminated by Auto Scaling group are automatically deregistered from the load balancer.
  • Application Load Balancer is a layer 7 load balancer that routes HTTP and HTTPs traffic, with support for rules. For example, rule based on the domain of a website.
  • An application can be scaled vertically by adding more power to an existing machine or it can be scaled horizontally by adding more machines to pool of resources.
  • AWS calls the different elements that allow you to view/analyze metrics, and that you can add to a Dashboard, widgets.
  • A metric alarm has the following possible states:
    • OK - The metric or expression is within the defined threshold.
    • ALARM - The metric or expression is outside of the defined threshold.
    • INSUFFICIENT_DATA - The alarm has just started, the metric is not available, or not enough data is available for the metric to determine the alarm state.
Addressing Security Risk:
  • Multi-factor Authentication or MFA security mechanism can add an extra layer of protection to AWS account in addition to a username password combination.
  • If a user wanted to read from a DynamoDB table, the AmazonDynamoDBReadOnlyAccess policy is what you would attach to their user profile.
  • Gemalto token, yubiKey, and Google Authenticator are valid MFA or Multi-factor Authentication options available to use on AWS.
  • JSON is the format an Identity and Access Management policy document is written in.
  • Command Line Interface, Software Development Kit, Application Programming Interface, and AWS Console are valid options for interacting with AWS account.
  • A managed policy is the type of IAM policy that cannot be updated by you.
  • AD Connector can establish a trusted relationship between corporate Active Directory and AWS.
  • Audit an IAM user's access to AWS accounts and resources by using CloudTrail to look at the API calls and timestamps.
  • Grant AWS Management Console access to a DevOps engineer by creating an IAM user for the engineer and associating the relevant IAM managed policies to this IAM user.
  • Identity Pools provide temporary AWS credentials.
  • Traffic within an Availability Zone, or between Availability Zones in all Regions, is routed over the AWS private global network.
  • A Security Group acts as a firewall for associated Amazon EC2 instances, controlling both inbound and outbound traffic at the instance level.
  • Two types of VPC Endpoints are available: Gateway and Interface Endpoint.
  • A VPC, a subnet in a VPC, and a network interface attached to an EC2 instance are AWS resources that can be monitored using VPC Flow Logs.
  • AWS CloudTrail service keeps a record of who is interacting with AWS Account.
  • AWS CloudWatch and Config are monitoring and logging services available on AWS.
  • If you wanted to accomplish threat detection in your AWS infrastructure, AWS GuardDuty is the service you would use.
  • Security section from Trusted Advisor exists under the Well-Architected Framework as a pillar as well.
  • Amazon Inspector AWS Service has an optional agent that can be deployed to EC2 instances to perform a security assessment.
  • Amazon Relational Database Service is also a valid storage service on AWS.
  • Provisioning the HSM in a VPC is the requirement you must adhere to in order to deploy AWS CloudHSM.
  • Customer master AWS KMS keys are used to encrypt and decrypt data in AWS.
  • Up to 4 KB of data can be encrypted/decrypted using a Customer Master Key.
  • The purpose of encrypting data when it is in transit between systems and services is to prevent eavesdropping, unauthorized alterations, and copying.
  • TLS protocol is an industry-standard cryptographic protocol used for encrypting data at the transport layer.
  • Encrypt an existing un-encrypted EBS volume by taking a snapshot of the EBS volume and creating a new encrypted volume from that snapshot.
  • Cannot encrypt just a subset of items in a DynamoDB table.
  • When enabled, data encrypted at rest includes the underlying storage for an RDS DB instance, its automated backups, Read Replicas, snapshots, and transaction logs.
  • CORPS: Cost optimization, Operational excellence, Reliability, Performance efficiency, and Security are the Pillars of the Well-Architected Framework.
  • Amazon Athena supports the SQL language.
  • Shared Responsibility Model is the name of the model that shows how security is handled by AWS and its customers in the AWS Cloud.
  • Amazon Simple Storage Service (S3) is the AWS service best suited for storing objects.
  • AWS Organizations service can be used to manage multiple AWS Accounts for consolidated billing.
  • Amazon GuardDuty AWS Service supports threat detection by continuously monitoring for malicious or unauthorized behavior.
  • Amazon DynamoDB is a NoSQL type of database.
  • A URL entry point for a web service is a customer access endpoint.
  • Amazon Cognito, AWS SSO, and IAM services do authenticate users to access AWS resources using existing credentials on their current corporate identity.
DevOps group led the initial charge in the cloud, but when things break, DevOps teams cannot troubleshoot their own network connectivity without networking teams for support.
https://dev.to/adilshehzad786/aviatrix-certified-engineer-multi-cloud-network-associate-notes-39j9
https://mayankchourasia2.medium.com/how-did-i-passed-the-aviatrix-ace-aviatrix-certified-engineer-multi-cloud-networking-associate-d1315855aa45
This specialization is part of the 100% online Master in Computer Science from University of Illinois at Urbana-Champaign.
https://www.coursera.org/specializations/cloud-computing

Migrating to the Cloud:
  • AWS Cloud Adoption Framework helps build a comprehensive approach to a successful cloud computing migration across the organization, and throughout the IT lifecycle. Business, People, Governance, Platform, Security, and Operations are the six (6) perspectives presented in the Cloud Adoption Framework.
  • Horizontal scaling gives the ability to add more servers in order to distribute load across resources and maintain operations without overloading any singular resource.
  • Adding more memory to an Amazon Elastic Compute Cloud (Amazon EC2) instance is an example of vertical scaling.
  • In Phase 2: Portfolio Discovery and Planning, you analyse the dependencies between applications and begin to think about which migration type is most suitable for each dependency.
  • Rehost migration strategy is commonly referred to as 'lift and shift', where trying to move applications or environments into the cloud while trying to make as few changes as possible.
  • Replatform can be referred to as 'lift, tinker, and shift'.
  • The Refactor or Re-architect migration strategy is typically driven by strong business needs to add features, scale, or increase performance that would otherwise be difficult in an existing environment.
  • Understanding the database is one of the most crucial steps of migrating a database. Things like the size, schema, types of tables, and engine-specific limitations are usually topics that need to be regularly discussed and reviewed.
  • The benefits of leveraging AWS SMS to manage server migration are:
    • Automate the migration of on-premises VMware vSphere, Microsoft Hyper-V/SCVMM, and Azure virtual machines to the AWS Cloud
    • Track incremental replications of VMs
    • Schedule the replication of servers
  • AWS Migration Hub service provides a single location to track the progress of application migrations across multiple AWS and partner solutions.
  • Agent-based discovery method deploys the AWS Application Discovery Agent on each of VMs and physical servers to collect static configuration data, detailed time-series system-performance information, inbound and outbound network connections, and processes that are running.
  • Amazon S3 / S3 Glacier command line interface and rsync are examples of unmanaged cloud data migration tools that are easy, one-and-done methods to move data at small scales from on-premises environment into Amazon cloud storage.
  • VPN provides a secure connection between environments for transfer of data.
  • Amazon Route 53 AWS service gives the ability to control the amount of traffic going to multiple DNS endpoints.
  • AWS Schema Conversion Tool service helps to perform schema changes while migrating database.
    Can be used to help convert Stored Procedure code in Oracle and SQL Server to equivalent code in the Amazon Aurora MySQL dialect of SQL.
  • AWS DataSync Agent is a requirement when using AWS DataSync.
  • Avoid single points of failure is a primary best practice to follow when building and optimizing migrated environment in AWS.
  • AWS Migration Acceleration Program service provides consulting, support, training, and service credits in order to reduce associated risks with migrating to the cloud.
  • A large majority of AWS services and tools have directly accessible APIs that can use for creating, configuring, and managing the services employ.
  • CloudEndure Migration is the AWS service that converts any application running on a supported operating system to enable full functionality on AWS without compatibility issues.
    It is a Software as a Service (SaaS) migration offering from AWS that allows applications to continue running at the source without downtime or performance impact, and allows you to run non-disruptive tests to validate that the replicated applications work properly in AWS.
  • TSO Logic provides discovery of existing workloads to help identify what consume in regards to compute, storage, database, and other resources to help evaluate what Total Cost of Ownership, or TCO, is for various applications.
  • 'The 6 R's': 6 Application Migration Strategies:
    1. Re-host
    2. Re-platform
    3. Re-factor / Re-architect
    4. Re-purchase
    5. Retire
    6. Retain
  • AWS Command Line Interface tool can be installed locally or on an instance to provide direct API access for management, building, and optimization tasks within AWS.
  • AWS Competency Program allows companies in the Partner Network to demonstrate and prove their expertise in areas like Migrations.
  • The AWS Snowball service provides a physical device that can be connected directly to the data center network, can leverage the local network to copy data, can hold up to 80 Terabytes, and is protected by AWS Key Management Service to encrypt data.
  • Cannot use DMS to directly migrate on-premises database to another on-premises database.
  • AWS Direct Connect allows to:
    • Create public virtual interfaces to connect with services like Amazon Simple Storage Service.
    • Create private virtual interfaces to create VPN-like connections across hybrid environment.
  • In regards to security in and between environments while migrating, data in transit and data at rest are the areas where encryption can be beneficial.
  • There are many Migration Partners available to help to better operate in the cloud, and become more proficient in migrations. These companies have expertise in all phases of the migration process, and can help with implementation, planning, or even training on migration technologies.
    They have built the knowledge up over years of experience working with, and helping other customers migrate to AWS.
  • A 5-phase approach to migrating applications:
    1. Phase 1: Migration Preparation and Business Planning
    2. Phase 2: Portfolio Discovery and Planning
    3. Phase 3 & 4: Designing, Migrating, and Validating Applications
    4. Phase 5: Operate
aws-blog-image.png
Building Serverless Applications:
Amazon Lex:

  • An 'intent' is a particular goal that the user wants to achieve.
  • A slot is data that the user must provide to fulfill the intent.
  • Amazon Polly is the service Amazon Lex uses for text-to-speech.
Amazon S3:
  • Public Read access needs to be provided on the Amazon S3 bucket for website access.
  • Static websites created with Amazon S3 can be interactive.
Amazon CloudFront:
  • Is used to create a Content Distribution Network.
  • Amazon CloudFront can retrieve content from your own datacenter, Amazon S3, and EC2.
  • With Amazon CloudFront, the S3 bucket permissions do not need Public Read access.
  • AWS WAF allows you to specify restrictions on access to content based upon IP address.
Amazon API Gateway:
  • CORS is configured in Amazon API Gateway service.
IAM:
  • IAM roles provide users and services access to AWS services.
  • IAM Roles are not associated with a specific user or group. Instead, trusted entities assume roles such as an IAM user, an application, or an AWS service like EC2.
Amazon Lambda:
  • Is an event-driven, serverless computing platform that runs code in response to events and automatically manages the compute resources required by that code.
Amazon DynamoDB:
  • Is a non-relational, NoSQL type of database solution.
  • Table name and Primary key need to be provided when creating a table in DynamoDB.
#####

ASGs have the following attributes:
  • A launch configuration
    • AMI + Instance Type
    • EC2 User Data
    • EBS Volumes
    • Security Groups
    • SSH Key Pair
  • Min / Max Size / Initial Capacity
  • Network + Subnets Information
  • Load Balancer Information
  • Scaling Policies
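A minimal boto3 sketch of creating an ASG from a launch template, spread over two subnets and registered with an ALB target group (the names, subnet IDs and target group ARN are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchTemplate={"LaunchTemplateName": "web-launch-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-11111111,subnet-22222222",   # one subnet per AZ
        TargetGroupARNs=["arn:aws:elasticloadbalancing:region:111122223333:targetgroup/web-tg/abc123"],
        HealthCheckType="ELB",            # let the load balancer health checks drive replacement
        HealthCheckGracePeriod=300,
    )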
Auto Scaling Alarms:
  • It is possible to scale an ASG based on CloudWatch alarms
  • An Alarm monitors a metric (such as Average CPU)
  • Metrics are computed for the overall ASG instances
  • Based on the alarm:
    • Can create scale-out policies (increase the number of instances)
    • Can create scale-in policies (decrease the number of instances)
Auto Scaling New Rules:
  • It is now possible to define 'better' auto scaling rules that are directly managed by EC2:
    • Target Average CPU Usage
    • Number of requests on the ELB per instance
    • Average Network In & Out
  • These rules are easier to set up and can make more sense
Auto Scaling Custom Metric:
  • Can auto scale based on a custom metric (ex: number of connected users):
    1. Send custom metric from application on EC2 to CloudWatch (PutMetric API)
    2. Create CloudWatch alarm to react to low / high values
    3. Use the CloudWatch alarm as the scaling policy for ASG
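The three steps above could look like this boto3 sketch (the namespace, metric name, ASG name and thresholds are placeholders):

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    autoscaling = boto3.client("autoscaling")

    # 1) the application publishes the custom metric (PutMetricData)
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{"MetricName": "ConnectedUsers", "Value": 420, "Unit": "Count"}],
    )

    # 3) a simple scaling policy that adds one instance; created first so the
    #    alarm below can reference its ARN
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="scale-out-on-users",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
    )

    # 2) an alarm on the custom metric that triggers the scaling policy
    cloudwatch.put_metric_alarm(
        AlarmName="HighConnectedUsers",
        Namespace="MyApp",
        MetricName="ConnectedUsers",
        Statistic="Average",
        Period=60,
        EvaluationPeriods=2,
        Threshold=500,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )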
ASG Brain Dump:
  • Scaling policies can be on CPU, Network... and can even be on custom metrics or based on a schedule (if know visitors patterns)
  • ASGs use Launch configurations or Templates (newer)
  • To update an ASG, must provide a new launch configuration / template
  • IAM roles attached to an ASG will get assigned to EC2 instances
  • ASG are free. Pay for the underlying resources being launched
  • Having instances under an ASG means that if they get terminated for whatever reason, the ASG will automatically create new ones as a replacement. Extra safety!
  • ASG can terminate instances marked as unhealthy by an LB (and hence replace them)
Auto Scaling Groups - Scaling Policies:
  • Target Tracking Scaling:
    • Most simple and easy to set-up
    • Example: Want the average ASG CPU to stay at around 40%
  • Simple / Step Scaling:
    • When a CloudWatch alarm is triggered (example CPU > 70%), then add 2 units
    • When a CloudWatch alarm is triggered (example CPU < 30%), then remove 1
  • Scheduled Actions:
    • Anticipate a scaling based on known usage patterns
    • Example: increase the min capacity to 10 at 5 pm on Fridays
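Target tracking needs the least configuration of the three. A minimal boto3 sketch that keeps the average ASG CPU around 40% (the ASG and policy names are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="keep-cpu-at-40",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 40.0,
        },
    )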
Scaling Cooldowns:
  • The cooldown period helps to ensure that Auto Scaling group doesn't launch or terminate additional instances before the previous scaling activity takes effect.
  • In addition to default cooldown for Auto Scaling group, can create cooldowns that apply to a specific simple scaling policy
  • A scaling-specific cooldown period overrides the default cooldown period.
  • One common use for scaling-specific cooldowns is with a scale-in policy - a policy that terminates instances based on a specific criteria or metric. Because this policy terminates instances, Amazon EC2 Auto Scaling needs less time to determine whether to terminate additional instances.
  • If the default cooldown period of 300 seconds is too long - can reduce costs by applying a scaling-specific cooldown period of 180 seconds to the scale-in policy.
  • If application is scaling up and down multiple times each hour, modify the Auto Scaling Groups cool-down timers and the CloudWatch Alarm Period that triggers the scale in
ASG for Solution Architects:
  • ASG Default Termination Policy (simplified version):
    1. Find the AZ which has the most number of instances
    2. If there are multiple instances in the AZ to choose from, delete the one with the oldest launch configuration
  • ASG tries to balance the number of instances across AZs by default
Lifecycle Hooks:
  • By default as soon as an instance is launched in an ASG it's in service.
  • Have the ability to perform extra steps before the instance goes in service (Pending state)
  • Have the ability to perform some actions before the instance is terminated (Terminating state)
Launch Template vs Launch Configuration:
  • Both:
    • ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and the other parameters that use to launch EC2 instances (tags, EC2 user-data...)
  • Launch Configuration (legacy):
    • Must be re-created every time
  • Launch Template (newer):
    • Can have multiple versions
    • Create parameters subsets (partial configuration for re-use and inheritance)
    • Provision using both On-Demand and Spot instances (or a mix)
    • Can use T2 unlimited burst feature
    • Recommended by AWS going forward
  • Load Balancers provide a static DNS name you can use in your application.
    The reason being that AWS wants the load balancer to be accessible using a static endpoint, even if the underlying infrastructure that AWS manages changes
  • Running a website with a load balancer and 10 EC2 instances, users are complaining that the website always asks them to re-authenticate when they switch pages. You are puzzled, because it's working just fine on your machine and in the dev environment with 1 server. The reason could be that the Load Balancer does not have stickiness enabled.
    Stickiness ensures traffic is sent to the same backend instance for a client. This helps maintain session data.
  • The application is using an Application Load Balancer, and it turns out the application only sees traffic coming from private IPs, which are in fact the load balancer's. You should look into the X-Forwarded-For header in the backend to find the true IP of the clients connected to the website.
    This header is created by the load balancer and passed on to the backend application.
  • Quickly created an ELB and it turns out users are complaining about the fact that sometimes, the servers just don't work. You realize that indeed, servers do crash from time to time. Enable Health Checks to protect users from seeing these crashes.
    Health checks ensure ELB won't send traffic to unhealthy (crashed) instances.
  • Designing a high performance application that will require millions of connections to be handled, as well as low latency. The best Load Balancer for this is Network Load Balancer.
    NLB provide the highest performance if application needs it.
  • Application Load Balancers handle HTTP, HTTPS, and Websocket protocols.
    A NLB (Network Load Balancer) support TCP.
  • The application load balancer can route to different target groups based on Hostname, Request Path, and Source IP.
  • Running at desired capacity of 3 and the maximum capacity of 3. Have alarms set at 60% CPU to scale out application. Application is now running at 80% capacity. Nothing will happen.
    The capacity of ASG cannot go over the maximum capacity have allocated during scale out events.
  • You have an ASG and an ALB, and the ASG is set up to get the health status of instances from the ALB. One instance has just been reported unhealthy. The ASG will terminate the EC2 instance.
    Because the ASG has been configured to leverage the ALB health checks, unhealthy instances will be terminated.
  • The boss wants to scale the ASG based on the number of requests per minute the application makes to the database. Create a CloudWatch custom metric and build an alarm on it to scale the ASG (a publishing sketch follows after this list).
    The metric 'requests per minute' is not an AWS-provided metric, hence it needs to be a custom metric.
  • Scaling an instance from an r4.large to an r4.4xlarge is called Vertical Scalability.
  • Running an application on an auto scaling group that scales the number of instances in and out is called Horizontal Scalability.
  • You would like to expose a fixed static IP to end users for compliance purposes, so they can write firewall rules that will be stable and approved by regulators. Use a Network Load Balancer.
    A Network Load Balancer exposes a public static IP, whereas an Application or Classic Load Balancer exposes a static DNS name (URL).
  • A web application hosted on EC2 is managed by an ASG and exposed through an Application Load Balancer. The ALB is deployed in a VPC with the CIDR 192.168.0.0/18. To ensure only the ALB can access port 80 on the EC2 instances, open up the EC2 security group on port 80 to the ALB's security group (a sketch follows after this list).
    This is the most secure way of ensuring only the ALB can access the EC2 instances. Referencing security groups in rules is an extremely powerful pattern.
  • An Application Load Balancer is hosting 3 target groups, with the hostnames users, api.external, and checkout.example.com. You would like to expose HTTPS traffic for each of these hostnames. Use SNI to configure the ALB's SSL certificates to make this work (a sketch follows after this list).
    SNI (Server Name Indication) is a feature that allows exposing multiple SSL certificates, if the client supports it.
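  For the 'requests per minute' item above, a minimal boto3 sketch of publishing a custom CloudWatch metric that an alarm could then use to scale the ASG; the namespace, metric name, and value are illustrative assumptions:

    import boto3

    cloudwatch = boto3.client('cloudwatch')

    # Publish the application-level metric; an alarm on this metric can then
    # trigger an ASG scaling policy.
    cloudwatch.put_metric_data(
        Namespace='MyApp',
        MetricData=[{
            'MetricName': 'DatabaseRequestsPerMinute',
            'Value': 950.0,
            'Unit': 'Count',
        }],
    )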
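  For the ALB-to-EC2 port 80 item above, a minimal boto3 sketch that opens the instances' security group only to the ALB's security group; both group IDs are hypothetical placeholders:

    import boto3

    ec2 = boto3.client('ec2')

    # Allow port 80 only from the ALB's security group (not from a CIDR range),
    # so only the ALB can reach the instances.
    ec2.authorize_security_group_ingress(
        GroupId='sg-0aaaaaaaaaaaaaaaa',   # placeholder: EC2 instances' SG
        IpPermissions=[{
            'IpProtocol': 'tcp',
            'FromPort': 80,
            'ToPort': 80,
            'UserIdGroupPairs': [{'GroupId': 'sg-0bbbbbbbbbbbbbbbb'}],  # placeholder: ALB's SG
        }],
    )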
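  For the SNI item above, a minimal boto3 sketch that attaches an additional certificate to an existing ALB HTTPS listener; the listener and certificate ARNs are truncated placeholders:

    import boto3

    elbv2 = boto3.client('elbv2')

    # With SNI, the ALB picks the right certificate based on the hostname the
    # client requests; extra certificates are attached to the HTTPS listener.
    elbv2.add_listener_certificates(
        ListenerArn='arn:aws:elasticloadbalancing:...:listener/app/...',       # placeholder
        Certificates=[{'CertificateArn': 'arn:aws:acm:...:certificate/...'}],  # placeholder
    )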
  1. A solutions architect is designing a two-tier web application. The application consists of a public-facing web tier hosted on Amazon EC2 in public subnets. The database tier consists of Microsoft SQL Server running on Amazon EC2 in a private subnet. Security is a high priority for the company.
    In this scenario an inbound rule is required to allow traffic from any internet client to the web front end on SSL/TLS port 443. The source should therefore be set to 0.0.0.0/0 to allow any inbound traffic.
    To secure the connection from the web front end to the database tier, an outbound rule should be created from the public EC2 security group with a destination of the private EC2 security group.
    The port should be set to 1433 for Microsoft SQL Server. The private EC2 security group will also need to allow inbound traffic on 1433 from the public EC2 security group.

    174458484_1370045070048380_6486876194270304711_n.jpg

  2. Auto Scaling supports both EC2-Classic and EC2-VPC. When an instance is launched as part of EC2-Classic, it will have a public IP and DNS as well as a private IP and DNS.
  3. A company designs a mobile app for its customers to upload photos to a website. The app needs a secure login with multi-factor authentication (MFA). The company wants to limit the initial build time and the maintenance of the solution. A solutions architect should recommend Amazon Cognito Identity with SMS-based MFA to meet these requirements.

  • An ASG spans 2 Availability Zones. AZ-A has 3 EC2 instances and AZ-B has 4 EC2 instances. The ASG is about to go into a scale-in event. The instance with the oldest launch configuration in AZ-B will be terminated.
    The Default Termination Policy for an ASG tries to balance across AZs first, and then terminates based on the age of the launch configuration.
  • Application Load Balancer target groups can be EC2 Instances, IP Addresses, and Lambda Functions.
  • Running an application in 3 AZs, with an Auto Scaling Group and a Classic Load Balancer. The traffic is not evenly distributed amongst all the backend EC2 instances, with some AZs being overloaded. The Cross-Zone Load Balancing feature will help distribute the traffic across all the available EC2 instances.
  • An Application Load Balancer (ALB) is currently routing to two target groups, each of them routed to based on hostname rules. You have been tasked with enabling HTTPS traffic for each hostname and have loaded the certificates onto the ALB. The Server Name Indication (SNI) ALB feature will help it choose the right certificate for each client.
  • An application is deployed with an Application Load Balancer and an Auto Scaling Group. Currently, the scaling of the Auto Scaling Group is done manually, and you would like to define a scaling policy that keeps the average number of connections to the EC2 instances at around 1,000. Use a Target Tracking scaling policy.
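  A minimal boto3 sketch of a Target Tracking scaling policy. The scenario above tracks connections per instance; this sketch uses the predefined ALB request-count-per-target metric as a stand-in, and the ASG name and resource label are placeholder assumptions:

    import boto3

    autoscaling = boto3.client('autoscaling')

    # Keep the tracked metric around 1,000 per target by scaling the ASG in/out.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName='my-asg',                 # placeholder
        PolicyName='keep-around-1000-per-target',
        PolicyType='TargetTrackingScaling',
        TargetTrackingConfiguration={
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'ALBRequestCountPerTarget',
                'ResourceLabel': 'app/my-alb/abc123/targetgroup/my-tg/def456',  # placeholder
            },
            'TargetValue': 1000.0,
        },
    )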
What's an EBS Volume?
  • An EBS (Elastic Block Store) Volume is a network drive you can attach to instances while they run
  • It allows instances to persist data, even after their termination
  • They can only be mounted to one instance at a time (at the CCP level)
  • They are bound to a specific availability zone
  • Analogy: Think of them as a 'network USB stick'
  • Free tier: 30 GB of free EBS storage of type General Purpose (SSD) or Magnetic per month
EBS Volume:
  • It's a network drive (i.e. not a physical drive):
    • It uses the network to communicate with the instance, which means there might be a bit of latency
    • Can be detached from an EC2 instance and attached to another one quickly
  • It's locked to an Availability Zone (AZ):
    • An EBS Volume in ap-southeast-1a cannot be attached to ap-southeast-1b
    • To move a volume across AZs, first snapshot it (a sketch follows after this list)
  • Have a provisioned capacity (size in GBs, and IOPS):
    • Get billed for all the provisioned capacity
    • Can increase the capacity of the drive over time
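  As noted above, moving an EBS volume across AZs goes through a snapshot; a minimal boto3 sketch with a placeholder volume ID and target AZ:

    import boto3

    ec2 = boto3.client('ec2')

    # Snapshot the source volume, wait for it to complete, then create a new
    # volume from the snapshot in the target AZ.
    snapshot = ec2.create_snapshot(
        VolumeId='vol-0123456789abcdef0',        # placeholder
        Description='Move volume to another AZ',
    )
    ec2.get_waiter('snapshot_completed').wait(SnapshotIds=[snapshot['SnapshotId']])
    ec2.create_volume(
        SnapshotId=snapshot['SnapshotId'],
        AvailabilityZone='ap-southeast-1b',      # target AZ (placeholder)
    )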
Delete on Termination attribute:
  • Controls the EBS behavior when an EC2 instance terminates. By default:
    • The root EBS volume is deleted (attribute enabled)
    • Any other attached EBS volume is not deleted (attribute disabled)
  • This can be controlled by the AWS console / AWS CLI
  • Use case: preserve root volume when instance is terminated
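  A minimal boto3 sketch of the use case above: disabling Delete on Termination for the root device so the root EBS volume survives instance termination; the instance ID and device name are placeholder assumptions:

    import boto3

    ec2 = boto3.client('ec2')

    # Turn off DeleteOnTermination for the root device of a running instance.
    ec2.modify_instance_attribute(
        InstanceId='i-0123456789abcdef0',            # placeholder
        BlockDeviceMappings=[{
            'DeviceName': '/dev/xvda',               # assumed root device name
            'Ebs': {'DeleteOnTermination': False},
        }],
    )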
EBS Volume Types:
  • Come in 6 types:
    • gp2 / gp3 (SSD): General purpose SSD volume that balances price and performance for a wide variety of workloads
    • io1 / io2 (SSD): Highest-performance SSD volume for mission-critical low-latency or high-throughput workloads
    • st1 (HDD): Low cost HDD volume designed for frequently accessed, throughput-intensive workloads
    • sc1 (HDD): Lowest cost HDD volume designed for less frequently accessed workloads
  • EBS Volumes are characterized in Size | Throughput | IOPS (I/O Ops Per Sec)
  • When in doubt always consult the AWS documentation!
  • Only gp2/gp3 and io1/io2 can be used as boot volumes
Make an Amazon EBS volume available for use on Linux:
  • [ec2-user ~]$ lsblk
    Use this command to view available disk devices and their mount points (if applicable)
  • [ec2-user ~]$ sudo file -s /dev/xvdb
    Use this command to get information about a specific device; if the output shows "data", there is no file system on the device
  • [ec2-user ~]$ sudo mkfs -t ext4 /dev/xvdb
    Use this command to create a file system on the volume
  • [ec2-user ~]$ sudo mkdir /data
    Use this command to create a mount point directory for the volume
  • [ec2-user ~]$ sudo mount /dev/xvdb /data
    Use this command to mount the volume at the mount point directory
    To mount an attached volume automatically after reboot:
  • [ec2-user ~]$ sudo cp /etc/fstab /etc/fstab.orig
    Create a backup of the /etc/fstab file that can be used if the file is accidentally destroyed or deleted
  • [ec2-user ~]$ sudo nano /etc/fstab
    Open the /etc/fstab file using any text editor, such as nano or vim
  • /dev/xvdb /data ext4 defaults,nofail 0 2
    Add this entry to /etc/fstab to mount the device at the specified mount point
  • [ec2-user ~]$ sudo file -s /dev/xvdb
    Run the command again to verify that the volume now shows an ext4 file system
  1. A team has an application that detects new objects being uploaded into an Amazon S3 bucket. The upload triggers an AWS Lambda function that writes metadata into an Amazon DynamoDB table and an Amazon RDS for PostgreSQL database. To ensure high availability, the team should enable Multi-AZ on the RDS for PostgreSQL database.
  2. After recommending Amazon Redshift to a client as an alternative to paying for data warehouses to analyze his data, the client asks you to explain why you recommend Redshift. The following would be reasonable responses:
    • It has high performance at scale as data and query complexity grows.
    • It prevents reporting and analytic processing from interfering with the performance of OLTP workloads.
    • You don't have the administrative burden of running your own data warehouse and dealing with setup, durability, monitoring, scaling, and patching.
    Amazon Redshift delivers fast query performance by using columnar storage technology to improve I/O efficiency and parallelizing queries across multiple nodes. Redshift uses standard PostgreSQL JDBC and ODBC drivers, allowing the use of a wide range of familiar SQL clients.
    Data load speed scales linearly with cluster size, with integrations to Amazon S3, DynamoDB, Elastic MapReduce, Kinesis, or any SSH-enabled host.
    Large volumes of structured data can be persisted and queried using standard SQL and existing BI tools.
  3. A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted (delete the files 4 years after object creation). Immediate accessibility is always required, as the files contain business data that is not easy to reproduce. The files are frequently accessed in the first 30 days after object creation but are rarely accessed after the first 30 days. The MOST cost-effective storage solution is to create an S3 bucket lifecycle policy that moves files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days after object creation (a sketch follows after this list).
  4. A solutions architect needs to deploy a Node.js-based web application that is highly available and scales automatically. The marketing team needs to roll back application releases quickly, and they need an operational dashboard. The marketing team does not want to manage the deployment of operating system patches to the Linux servers. AWS Elastic Beanstalk satisfies these requirements.
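  For the lifecycle question above (item 3), a minimal boto3 sketch of a rule that transitions objects to S3 Standard-IA after 30 days and expires them after roughly 4 years; the bucket name is a placeholder, and 1,460 days is used to approximate 4 years:

    import boto3

    s3 = boto3.client('s3')

    s3.put_bucket_lifecycle_configuration(
        Bucket='my-bucket',                        # placeholder
        LifecycleConfiguration={'Rules': [{
            'ID': 'ia-after-30-days-expire-after-4-years',
            'Filter': {'Prefix': ''},              # apply to all objects
            'Status': 'Enabled',
            'Transitions': [{'Days': 30, 'StorageClass': 'STANDARD_IA'}],
            'Expiration': {'Days': 1460},          # ~4 years after creation
        }]},
    )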
  • 5 tips for getting the most value out of AWS while protecting all of your data:
    dl.techtalkthai.com/ttt_veeam_aws_cloud_cost_optimization_2021_whitepaper_01 v1.1.pdf
AWS Cloud History:
  • 2002: Internally launched
  • 2003: Amazon realized its infrastructure was one of its core strengths; idea to market it
  • 2004: Launched publicly with SQS
  • 2006: Re-launched publicly with SQS, S3 & EC2
  • 2007: Launched in Europe
  • Dropbox, airbnb, NETFLIX, NASA, etc.
AWS Cloud Number Facts:
  • In 2020, AWS had $45.37 billion in annual revenue
  • AWS accounts for 32% of the market in 2020 (Microsoft is 2nd with 20%)
  • Pioneer and Leader of the AWS Cloud Market for the 10th consecutive year
  • Over 1,000,000 active users
Magic Quadrant for Cloud Infrastructure and Platform Service (CIPS):
313188941_10226900201103887_3852692496678329642_n.jpg

441742_0001.png


AWS Cloud Use Cases:
  • AWS enables to build sophisticated, scalable applications
  • Applicable to a diverse set of industries
  • Use cases include:
    • Enterprise IT, Backup & Storage, Big Data analytics
    • Website hosting, Mobile & Social Apps
    • Gaming
  • McDonald's, 21ST CENTURY FOX, ACTIVISION, etc.
AWS Global Infrastructure:
  • AWS Regions
  • Availability Zones
  • Data Centers
  • Edge Locations / Points of Presence
How to choose an AWS Region? If you need to launch a new application, where should you do it?
  • Compliance with data governance and legal requirements: data never leaves a region without explicit permission
  • Proximity to customers: reduced latency
  • Available services within a Region: new services and new features aren't available in every Region
  • Pricing: pricing varies region to region and is transparent in the service pricing page
AWS Points of Presence (Edge Locations):
  • Amazon has 216 Points of Presence (205 Edge Locations & 11 Regional Caches) in 84 cities across 42 countries
  • Content is delivered to end users with lower latency
Tour of the AWS Console:
  • AWS has Global Services:
    • Identity and Access Management (IAM)
    • Route 53 (DNS service)
    • CloudFront (Content Delivery Network)
    • WAF (Web Application Firewall)
  • Most AWS services are Region-scoped:
    • Amazon EC2 (Infrastructure as a Service)
    • Elastic Beanstalk (Platform as a Service)
    • Lambda (Function as a Service)
    • Rekognition (Software as a Service)
  • Region Table
IAM:
Users & Groups:
  • IAM = Identity and Access Management, Global service
  • Root account created by default, shouldn't be used or shared
  • Users are people within the organization, and can be grouped
  • Groups only contain users, not other groups
  • Users don't have to belong to a group, and user can belong to multiple groups
Permissions:
  • Users or Groups can be assigned JSON documents called policies
  • These policies define the permissions of the users
  • In AWS apply the least privilege principle: don't give more permissions than a user needs
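  A minimal sketch of such a JSON policy attached inline to a user with boto3, following least privilege (read-only access to a single bucket); the user name, bucket, and policy name are placeholder assumptions:

    import json
    import boto3

    iam = boto3.client('iam')

    # Grant only the S3 read permissions this user actually needs.
    policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Action': ['s3:GetObject', 's3:ListBucket'],
            'Resource': [
                'arn:aws:s3:::example-bucket',       # placeholder bucket
                'arn:aws:s3:::example-bucket/*',
            ],
        }],
    }

    iam.put_user_policy(
        UserName='alice',                            # placeholder user
        PolicyName='s3-read-example-bucket',
        PolicyDocument=json.dumps(policy),
    )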
Password Policy:
  • Strong password = higher security for account
  • In AWS, can setup a password policy:
    • Set a minimum password length
    • Require specific character types:
      • including uppercase letters
      • lowercase letters
      • numbers
      • non-alphanumeric characters
    • Allow all IAM users to change their own passwords
    • Require users to change their password after some time (password expiration)
    • Prevent password re-use
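  A minimal boto3 sketch that applies the password policy options listed above; the specific numbers are illustrative assumptions:

    import boto3

    iam = boto3.client('iam')

    iam.update_account_password_policy(
        MinimumPasswordLength=12,
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        RequireNumbers=True,
        RequireSymbols=True,             # non-alphanumeric characters
        AllowUsersToChangePassword=True,
        MaxPasswordAge=90,               # password expiration, in days
        PasswordReusePrevention=5,       # prevent re-use of the last 5 passwords
    )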
  1. A solutions architect is tasked with transferring 750 TB of data from a network-attached file system located at a branch office to Amazon S3 Glacier. The solution must avoid saturating the branch office's low-bandwidth internet connection. The MOST cost-effective solution is to order 10 AWS Snowball appliances, select an Amazon S3 bucket as the destination, and create a lifecycle policy to transition the S3 objects to Amazon S3 Glacier.
    As the company's internet link is low-bandwidth, uploading directly to Amazon S3 (ready for transition to Glacier) would saturate the link. The best alternative is to use AWS Snowball appliances. A Snowball Edge appliance can hold up to 75 TB of data, so 10 devices would be required to migrate 750 TB of data.
    Snowball moves data into AWS using a hardware device, and the data is then copied into an Amazon S3 bucket. From there, lifecycle policies can transition the S3 objects to Amazon S3 Glacier.
    A Glacier vault cannot be set as the destination; it must be an S3 bucket. A VPC endpoint also can't be enforced using a bucket policy.
    You could create an AWS Direct Connect connection and migrate the data straight into Amazon Glacier, but this is not the most cost-effective option and takes time to set up.
  2. After reviewing the cost optimization checks in AWS Trusted Advisor, a team finds that it has 10,000 Amazon Elastic Block Store (Amazon EBS) snapshots in its account that are more than 30 days old. The team determines that it needs to implement better governance for the lifecycle of its resources. To automate the lifecycle management of the EBS snapshots with the LEAST effort, the team should use a scheduled event in Amazon EventBridge (Amazon CloudWatch Events) to invoke AWS Step Functions to manage the snapshots, and schedule and run backups in AWS Systems Manager.
  3. A company is hosting 60 TB of production-level data in an Amazon S3 bucket. A solutions architect needs to bring that data on premises for quarterly audit requirements. This export of data must be encrypted while in transit. The company has low network bandwidth in place between AWS and its on-premises data center. To meet these requirements, the solutions architect should deploy an AWS Storage Gateway volume gateway on AWS and enable a 90-day replication window to transfer the data.
  4. A development team is collaborating with another company to create an integrated product. The other company needs to access an Amazon Simple Queue Service (Amazon SQS) queue that is contained in the development team's account. The other company wants to poll the queue without giving up its own account permissions to do so. A solutions architect should create an SQS access policy that provides the other company access to the SQS queue (a sketch follows after this list).
  5. A company is backing up on-premises databases to local file server shares using the SMB protocol. The company requires immediate access to 1 week of backup files to meet recovery objectives. Recovery after a week is less likely to occur, and the company can tolerate a delay in accessing those older backup files. To meet these requirements with the LEAST operational effort, a solutions architect should deploy Amazon FSx for Windows File Server to create a file system with exposed file shares and sufficient storage to hold all the desired backups.
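  For the cross-account SQS question above (item 4), a minimal boto3 sketch of a queue access policy that lets another account's principals poll the queue; the account IDs, region, queue name, and URL are all placeholders:

    import json
    import boto3

    sqs = boto3.client('sqs')

    # Allow the other account (placeholder 111122223333) to consume messages.
    policy = {
        'Version': '2012-10-17',
        'Statement': [{
            'Effect': 'Allow',
            'Principal': {'AWS': 'arn:aws:iam::111122223333:root'},
            'Action': ['sqs:ReceiveMessage', 'sqs:DeleteMessage'],
            'Resource': 'arn:aws:sqs:ap-southeast-1:444455556666:shared-queue',
        }],
    }

    sqs.set_queue_attributes(
        QueueUrl='https://sqs.ap-southeast-1.amazonaws.com/444455556666/shared-queue',
        Attributes={'Policy': json.dumps(policy)},
    )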