Cloud Computing


  • AWS Systems Manager requires an IAM role for the EC2 instances that it manages so that it can perform actions on their behalf. This IAM role is delivered to an instance through an instance profile. If an instance is not managed by Systems Manager, one likely reason is that the instance does not have an instance profile, or that the instance profile does not have the permissions Systems Manager needs to manage the instance.
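As an illustrative sketch, the role behind an instance profile needs a trust policy that lets EC2 assume it; attaching the AWS managed policy AmazonSSMManagedInstanceCore to the role then grants the baseline permissions Systems Manager needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The role is wrapped in an instance profile and attached to the instance.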

  • A company has a new requirement stating that all resources in AWS must be tagged according to a set policy.
    AWS Config should be used to enforce the policy and to continually identify any resources that are not in compliance with it.
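As a hedged sketch, the AWS Config managed rule REQUIRED_TAGS can check for mandated tag keys; the CostCenter and Environment keys below are hypothetical examples:

```yaml
TagComplianceRule:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: required-tags-check
    Source:
      Owner: AWS
      SourceIdentifier: REQUIRED_TAGS   # AWS managed rule
    InputParameters:
      tag1Key: CostCenter    # hypothetical mandatory tag keys
      tag2Key: Environment
```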


  • A company's static website hosted on Amazon S3 was launched recently and is being used by tens of thousands of users. Subsequently, website users are experiencing 503 Service Unavailable errors.
    These errors occur because the request rate to Amazon S3 is too high.

  • A SysOps Admin needs to receive an email whenever critical production Amazon EC2 instances reach 80% CPU utilization.
    This can be achieved by creating an Amazon CloudWatch alarm and configuring an Amazon SNS notification.
    CloudWatch Events is used for state changes, not metric breaches.
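A minimal CloudFormation sketch of such an alarm; the instance ID is a placeholder and AlertTopic is an assumed SNS topic (with an email subscription) defined elsewhere in the stack:

```yaml
CpuAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmDescription: Email when a production instance reaches 80% CPU
    Namespace: AWS/EC2
    MetricName: CPUUtilization
    Dimensions:
      - Name: InstanceId
        Value: i-0123456789abcdef0   # placeholder instance ID
    Statistic: Average
    Period: 300
    EvaluationPeriods: 1
    Threshold: 80
    ComparisonOperator: GreaterThanOrEqualToThreshold
    AlarmActions:
      - !Ref AlertTopic              # assumed SNS topic with an email subscription
```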

  • A SysOps admin is responsible for managing a company's cloud infrastructure with AWS CloudFormation. The SysOps admin needs to create a single resource that consists of multiple AWS services. The resource must support creation and deletion through the CloudFormation console.
    To meet these requirements the SysOps admin should create a CloudFormation custom resource (Custom::MyCustomType).
    Custom resources let you write custom provisioning logic in templates that AWS CloudFormation runs whenever you create, update (if the custom resource changed), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types; you can include those resources by using custom resources.
    That way you can still manage all related resources in a single stack.
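A minimal sketch of a custom resource in a template; Custom::MyCustomType is the arbitrary type name from the note, and ProvisioningFunction is an assumed Lambda function (defined elsewhere in the stack) that handles the create/update/delete events:

```yaml
MyCustomResource:
  Type: Custom::MyCustomType
  Properties:
    ServiceToken: !GetAtt ProvisioningFunction.Arn  # Lambda that receives create/update/delete events
    AppName: example-app                            # arbitrary properties passed to the function
```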

  • A SysOps Admin manages a fleet of Amazon EC2 instances running a distribution of Linux. The OSs are patched on a schedule using AWS Systems Manager Patch Manager. Users of the application have complained about poor response times while the systems are being patched.
    To ensure patches are deployed automatically with MINIMAL customer impact, configure the maintenance window to patch 10% of the instances in the patch group at a time.

  • A global gaming company is preparing to launch a new game on AWS. The game runs in multiple AWS Regions on a fleet of Amazon EC2 instances. The instances are in an Auto Scaling group behind an Application Load Balancer (ALB) in each Region. The company plans to use Amazon Route 53 for DNS services. The DNS configuration must direct users to the Region that is closest to them and must provide automated failover.
    To configure Route 53 to meet these requirements, a SysOps admin should create Amazon CloudWatch alarms that monitor the health of the ALB in each Region and configure Route 53 DNS failover by using a health check that monitors the alarms. The admin should also configure Route 53 geoproximity routing, specifying the Regions that are used for the infrastructure.
    Monitoring the health of the EC2 instances is not sufficient to provide failover as the EC2 instances are in an Auto Scaling group and instances can be added or removed dynamically.
    Monitoring the private IP address of an EC2 instance is not sufficient to determine the health of the infrastructure, as the instance may still be running but the application or service on the instance may be unhealthy.
    Simple routing does not take into account geographic proximity.

  • A company has a workload that sends log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A SysOps admin needs to monitor the p90 statistic of this field over time.
    To meet this requirement, the SysOps admin should create a metric filter on the log data.
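As an illustration, assuming the workload writes JSON log events with a hypothetical latencyMs field, a metric filter can publish that field as a CloudWatch metric, which can then be graphed (or alarmed on) with the p90 statistic:

```yaml
LatencyMetricFilter:
  Type: AWS::Logs::MetricFilter
  Properties:
    LogGroupName: /app/production          # hypothetical log group
    FilterPattern: '{ $.latencyMs = * }'   # assumes JSON log events with a latencyMs field
    MetricTransformations:
      - MetricNamespace: App
        MetricName: LatencyMs
        MetricValue: "$.latencyMs"         # publish the field's value as the metric value
```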

  • A SysOps admin must configure a resilient tier of Amazon EC2 instances for a High Performance Computing (HPC) application. The HPC application requires minimum latency between nodes.
    To meet these requirements, the SysOps admin should place the EC2 instances in an Auto Scaling group within a single subnet and launch the EC2 instances into a cluster placement group.

  • A company has a web application with a DB tier that consists of an Amazon EC2 instance running MySQL. A SysOps admin needs to minimize potential data loss and the time required to recover in the event of a DB failure.
    The MOST operationally efficient solution that meets these requirements is to use Amazon Data Lifecycle Manager (DLM) to take a snapshot of the Amazon Elastic Block Store (EBS) volume every hour. In the event of an EC2 instance failure, restore the EBS volume from a snapshot.

  • If the devs are provided with full admin access, the only way to ensure compliance with the corporate policy is to use AWS Organizations Service Control Policies (SCPs) to restrict the API actions relating to the specific restricted services.

  • A company has set up an IPSec tunnel between its AWS environment and its on-premises data center. The tunnel is reporting as UP, but the Amazon EC2 instances are not able to ping any on-premises resources.
    To resolve this issue, a SysOps admin should create a new inbound rule on the EC2 instances' security groups to allow ICMP traffic from the on-premises CIDR.
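A sketch of such an ingress rule in CloudFormation; the security group reference and the on-premises CIDR are placeholders:

```yaml
IcmpFromOnPrem:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref AppSecurityGroup   # assumed security group of the EC2 instances
    IpProtocol: icmp
    FromPort: -1                     # -1/-1 = all ICMP types and codes
    ToPort: -1
    CidrIp: 10.0.0.0/16              # placeholder on-premises CIDR
```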

  • If users report high latency and connection instability for the application in another Region, we can create an accelerator in AWS Global Accelerator and update the DNS record to improve availability and performance. Global Accelerator provides static IP addresses that serve as a fixed entry point to applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and AZs.
    It always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, the user's location, and the policies that you configure.
    CloudFront cannot be used for the SFTP protocol.

  • A company monitors its account activity using AWS CloudTrail and is concerned that some log files are being tampered with after the logs have been delivered to the account's Amazon S3 bucket.
    Moving forward, the SysOps Admin can confirm that the log files have not been modified after delivery to the S3 bucket by enabling log file integrity validation and using digest files to verify the hash value of the log files.

  • A company uses several large Chef recipes to automate the configuration of Virtual Machines (VMs) in its data center. A SysOps admin is migrating this workload to Amazon EC2 instances on AWS and must run the existing Chef recipes.
    The solution that will meet these requirements MOST cost-effectively is to set up AWS OpsWorks for Chef Automate, migrate the existing recipes, and modify the EC2 instance user data to connect to Chef.

  • A company needs to ensure strict adherence to a budget for 25 applications deployed on AWS. Separate teams are responsible for storage, compute, and DB costs. A SysOps admin must implement an automated solution to alert each team when their projected spend will exceed a quarterly amount that has been set by the finance department. The solution cannot incur additional compute, storage, or DB costs.
    The solution that will meet these requirements is to use AWS Budgets to create a cost budget for each team, filtering by the services they own. Specify the budget amount defined by the finance department along with a forecasted cost threshold, and enter the appropriate email recipients for each budget.


  • The Compute Savings Plans offer flexibility and can apply to usage across any AWS region, any AWS compute service (including AWS Fargate, not only EC2 like the EC2 Instance Savings Plans), and across different instance families.


  • A company runs an encrypted Amazon RDS for Oracle DB instance. The company wants to make regular backups available in another AWS Region.
    The MOST operationally efficient solution that meets these requirements is to modify the DB instance and enable cross-Region automated backups.

  • A company is creating a new application that will run in a hybrid environment. The application processes data that must be secured, and the developers require encryption in transit across shared networks and encryption at rest.
    To meet these requirements a SysOps Admin should configure an AWS Virtual Private Network (VPN) connection between the on-premises data center and AWS. It will encrypt data over the shared, hybrid network connection, which ensures encryption in transit; if you don't have a certificate, you can create a pre-shared key. In addition,
    use AWS KMS to manage encryption keys that can be used for data encryption. In this case the keys would then be used outside of KMS to actually encrypt the data.

  • A SysOps admin noticed that the cache hit ratio for an Amazon CloudFront distribution is less than 10%.
    The configuration changes that will increase the cache hit ratio for the distribution are to increase the CloudFront Time To Live (TTL) settings in the Cache Behavior settings and to ensure that only required cookies, query strings, and headers are forwarded in the Cache Behavior settings. By default, each file automatically expires after 24 hours.

  • A company plans to use Amazon Route 53 to enable HA for a website running on-premises. The website consists of an active and a passive server. Route 53 must be configured to route traffic to the primary active server if the associated health check returns a 2xx status code. All other traffic should be directed to the secondary passive server.
    A SysOps Admin needs to configure the record type and health check. The website runs on-premises, so an Alias record cannot be used, as Alias records only apply to AWS resources. Therefore, an A record should be used for each server. The health check must evaluate HTTP status codes and therefore should be an HTTP health check.

  • A company wants to reduce costs for jobs that can be completed at any time. The jobs currently run using multiple Amazon EC2 On-Demand Instances and take slightly less than 2 hours to complete. If a job fails for any reason it must be restarted from the beginning.
    The solution that will meet these requirements MOST cost-effectively is to submit a request for Spot Instances with a defined duration for the jobs. Note that Spot Instances with a defined duration (also known as Spot blocks) are no longer available to new customers as of July 1, 2021; for new workloads that are not interruption tolerant, only On-Demand Instances are suitable.

  • An Amazon S3 bucket holds sensitive data. A SysOps Admin has been tasked with monitoring all object upload and download activity relating to the bucket. Monitoring must include tracking the AWS account of the caller, the IAM user or role of the caller, the time of the API call, and the IP address of the API caller.
    To meet the requirements, the SysOps Admin should enable data event logging in AWS CloudTrail.
    Data events provide visibility into the resource operations performed on or within a resource. These are also known as data plane operations. Data events are often high-volume activities.
    The following two data types are recorded:
    • Amazon S3 object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations).
    • AWS Lambda function execution activity (the Invoke API).
Data events are disabled by default when you create a trail. To record CloudTrail data events, you must explicitly add to the trail the supported resources or resource types for which you want to collect activity.
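As a sketch, S3 object-level data events for a hypothetical bucket can be enabled by passing an event selector document like the following to the aws cloudtrail put-event-selectors command:

```json
[
  {
    "ReadWriteType": "All",
    "IncludeManagementEvents": true,
    "DataResources": [
      {
        "Type": "AWS::S3::Object",
        "Values": ["arn:aws:s3:::example-sensitive-bucket/"]
      }
    ]
  }
]
```

The trailing slash on the bucket ARN selects all objects in the bucket.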
  • A company's SysOps admin must ensure that all Amazon EC2 Windows instances that are launched in an AWS account have a third-party agent installed. The third-party agent has an .msi package. The company uses AWS Systems Manager for patching, and the Windows instances are tagged appropriately. The third-party agent requires periodic updates as new versions are released. The SysOps admin must deploy these updates automatically.
    The steps that will meet these requirements with the LEAST operational effort are:
    • Create a Systems Manager Distributor package for the third-party agent. Make sure that Systems Manager Inventory is configured. If Systems Manager Inventory is not configured, set up a new inventory for instances based on the appropriate tag value for Windows.
    • Create a Systems Manager OpsItem with the tag value for Windows. Attach the Systems Manager Distributor package to the OpsItem. Create a maintenance window that is specific to the package deployment and configure it to cover 24 hours a day.
  • When using multiple accounts within a Region it is important to understand that the name of the Availability Zone (AZ) in each account may map to a different underlying AZ. For instance, us-east-1a may map to a different AZ in one account vs another.
    To identify the location of resources relative to accounts, you must use the AZ ID (zoneId), which is a unique and consistent identifier for an AZ. For example, use1-az1 is an AZ ID in the us-east-1 Region and refers to the same location in every AWS account.
    This information can be obtained in a few different ways, including running the DescribeAvailabilityZones API operation or the DescribeSubnets API operation.

  • A SysOps admin launches an Amazon EC2 Linux instance in a public subnet. When the instance is running, the SysOps admin obtains the public IP address and attempts to remotely connect to the instance multiple times. However, the SysOps admin always receives a timeout error.
    The action that will allow the SysOps admin to remotely connect to the instance is to modify the instance security group to allow inbound SSH traffic from the SysOps admin's IP address.

  • A company has a mobile app that uses Amazon S3 to store images. The images are popular for a week, and then the number of access requests decreases over time. The images must be Highly Available (HA) and must be immediately accessible upon request. A SysOps admin must reduce S3 storage costs for the company.
    The solution that will meet these requirements MOST cost-effectively is to create an S3 Lifecycle policy to transition the images to S3 Standard-Infrequent Access (Standard-IA) after 7 days.

  • A manufacturing company uses an Amazon RDS DB instance to store inventory of all stock items. The company maintains several AWS Lambda functions that interact with the DB to add, update, and delete items. The Lambda functions use hardcoded credentials to connect to the DB. A SysOps admin must ensure that the DB credentials are never stored in plaintext and that the password is rotated every 30 days.
    The solution that will meet these requirements in the MOST operationally efficient manner is to use AWS Secrets Manager to store credentials for the DB. Create a Secrets Manager secret and select the DB so that Secrets Manager will use a Lambda function to update the DB password automatically. Specify an automatic rotation schedule of 30 days. Update each Lambda function to access the DB password from Secrets Manager.


  • A company runs an application that uses Amazon EC2 instances behind an ALB. Customers access the application using a custom DNS domain name. Reports have been received about errors when connecting to the application using the DNS name.
    The admin found that the load balancer target group shows no healthy instances.
    If the load balancer target group shows no healthy instances, check the target group's health check configuration to understand why. The application will not work until the instances are healthy and accessible via the load balancer.

  • A company is managing multiple AWS accounts in AWS Organizations. The company is reviewing internal security of its AWS environment. The company's security admin has their own AWS account and wants to review the VPC configuration of dev AWS accounts.
    The solution that will meet these requirements in the MOST secure manner is to create an IAM policy in each dev account that has read-only access to VPC resources, assign the policy to a cross-account IAM role, and ask the security admin to assume the role from their account.

  • A company hosts a web application on Amazon EC2 instances behind an ALB. The instances are in an Amazon EC2 Auto Scaling group. The application is accessed with a public URL. A SysOps admin needs to implement a monitoring solution that checks the availability of the application and follows the same routes and actions as a customer. The SysOps admin must receive a notification if less than 95% of the monitoring runs find no errors.
    The solution that will meet these requirements is to create an Amazon CloudWatch Synthetics canary with a script that follows customer routes, and schedule the canary to run on a recurring schedule. Create a CloudWatch alarm that publishes a message to an Amazon Simple Notification Service (SNS) topic when the SuccessPercent metric is less than 95%.
    You can use Amazon CloudWatch Synthetics to create canaries: configurable scripts that run on a schedule to monitor endpoints and APIs. Canaries follow the same routes and perform the same actions as a customer.

  • With HTTP_STR_MATCH, Amazon Route 53 tries to establish a TCP connection. If successful, Route 53 submits an HTTP request and searches the first 5,120 bytes of the response body for the string that you specify in SearchString.
    If the response body contains the value that you specify in SearchString, Route 53 considers the endpoint healthy. If not, or if the endpoint doesn't respond, Route 53 considers the endpoint unhealthy. The search string must appear entirely within the first 5,120 bytes of the response body.
    The search string does not need to be HTML encoded. The resource path should include a leading forward slash (/); if you don't include it, Route 53 just adds it in anyway. There is no minimum length for the search string.
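A sketch of an HTTP_STR_MATCH health check configuration that could be passed to aws route53 create-health-check --health-check-config; the domain, path, and search string are hypothetical:

```json
{
  "Type": "HTTP_STR_MATCH",
  "FullyQualifiedDomainName": "www.example.com",
  "Port": 80,
  "ResourcePath": "/health",
  "SearchString": "OK",
  "RequestInterval": 30,
  "FailureThreshold": 3
}
```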

  • A company recently deployed MySQL on an Amazon EC2 instance with a default boot volume. The company intends to restore a 1.75 TB DB. A SysOps admin needs to provision the correct Amazon EBS volume. The DB will require read performance of up to 10,000 IOPS and is not expected to grow in size.
    The solution that will provide the required performance at the LOWEST cost is to deploy a 2 TB General Purpose SSD (gp3) volume and set the IOPS to 10,000.

A deep dive into using DevOps and CI/CD on AWS, a new trend Thai organizations can't afford to miss!:
Before DevOps existed, how did engineers work?:
  • One person had to handle the builds and tests; at 10 - 20 runs a day, that person had time for almost nothing else.
  • Humans make mistakes as a matter of course; any given deploy might skip a step.
What is DevOps?:
  • It is culture, automation, and platform design aimed at working more efficiently, managing the development life cycle, and meeting business needs, so that software can be released much faster.
So what is CI/CD?:
  • It is a process under the DevOps umbrella, made up of two parts: Continuous Integration (CI) and Continuous Delivery (CD).

  • CI: once a code change is finished, automation helps with the build and with running tests. We write a pipeline, for example a gitlab Pipeline, as a script describing what to build with and which test jobs to run.
What does CI give us?:
  • Automated tests
  • Faster feedback and earlier visibility of bugs

  • CD determines which Environment (Env) to deploy to, Test or Production, and how the various settings are configured.
And what does CD give us?:
  • Automated deployment of changes to the various targets
  • Feedback from users
  • Support for A/B testing

  • Continuous Delivery means the deploy step is manual
  • Continuous Deployment means the deploy step is automated
What are the benefits of DevOps?:
  • Faster, more frequent releases, with updates in near real time
  • Reliable deployments that can be repeated any number of times without mistakes
  • Improved customer experience, since work can be delivered quickly
  • Improved collaboration between the Dev and Operations teams
  • More time left over to learn other things
The Software Development Life Cycle (SDLC) has 6 steps:
  1. Plan
  2. Define
  3. Design
  4. Build
  5. Test
  6. Deploy
DevOps makes it possible to deploy each Sprint (Agile) straight to the customer:
  • Agile helps close the gap between developers and customers
  • DevOps tools help close the gap between Dev(eloper)s and Op(eration)s
  • The business team can sell with confidence that what is delivered to the customer will be free of mistakes, and the team gains credibility
When do we need DevOps?:
  • When we want to release software or updates on time
  • When we want stable software and environments
  • When we want to find problems before they affect end users
  • When we want to increase continuous delivery
AWS DevOps tools:

  • AWS Cloud9 is an IDE that lets us develop our applications right away, with no need to download IDE software, tools, or packages onto our own machine.
    • It covers everything from writing, running, and debugging code to editing it.
    • It is on-demand: no charge when it is not in use.
    • It supports many languages, including Python, Java, Node.js, .NET, etc.


  • AWS CodeCommit is a repository for storing code and controlling software versions.
    • Similar to GitHub, but private.
    • Organized into branches, e.g. separate code for Production, Development, UAT, etc.
    • Integrates with AWS IAM to restrict usage permissions; status can be viewed in CloudWatch, or events routed onward for further processing; encryption can be added with AWS KMS; etc.
  • AWS CodeBuild is then used to build (compile) and test: is the code correct, are there syntax errors? If there are, feedback comes back so they can be fixed first.
    • Also on-demand.
    • Can be monitored through CloudWatch.
    • Integrates with Jenkins, the various Git services, etc.
  • AWS CodeGuru checks and optimizes code and gives recommendations, with machine learning analyzing the code we write.

    • Provides intelligent recommendations, drawing on a knowledge base built from previously built code and other best practices.
    • Beyond errors, it also looks at optimization, e.g. whether code is written in an unnecessarily long way, and returns that as a recommendation.
    • It also evaluates efficiency and recommends ways to make the code more efficient.
  • AWS CodeDeploy helps install our code to various destinations, on AWS itself, e.g. EC2, Fargate (containers), Lambda (serverless), or even on-premises servers. It can deploy v2 to a single machine first, then expand to half the fleet (3 of 6 machines), then deploy to every machine.
    • Supports Blue/Green Deployment: Blue = existing version, Green = new version.


Mitigate downtime risks with blue/green deployment and save up to 80% in time and effort to set up and run Amazon ECS and Fargate applications:
  • AWS CodePipeline orchestrates code build, test, & deploy, making automation seamless and more efficient.
    • It can pull source from AWS CodeCommit, GitHub, Amazon S3, or Amazon ECR (container repository).
    • It can deliver to AWS CodeDeploy, EC2, Elastic Beanstalk, OpsWorks Stacks, ECS (Blue/Green), Fargate, CloudFormation, and Lambda.
  • AWS X-Ray & Amazon CloudWatch are there to monitor status all the way from source through build to deploy.
    • CloudWatch covers performance, utilization, health, outages, and errors, with an infrastructure focus, and can see CPU & memory down to the container level.
    • X-Ray looks deeper, at the application level, fairly end to end: what the frontend links to, App 1, App 2, the DB, etc.


  • AWS CloudFormation is Infrastructure as Code (IaC): no need to sit clicking through the UI page by page.
    • Used to write code that creates, configures, and deploys infrastructure components.
    • It can describe everything: network, compute, DBs, security, management tools, etc.
    • Want identical environments in multiple Regions? Use it to deploy them.
    • It doubles as documentation: reading the code shows which services are in use.
    • It automates both creation and deletion.
    • Stacks can be created separately per version, so you can add or remove whichever version you want.
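The CloudFormation points above can be illustrated with a minimal template sketch; the bucket name is a hypothetical placeholder, and S3 bucket names must be globally unique:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack that creates one S3 bucket
Resources:
  LogsBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: example-logs-bucket-123456   # placeholder; must be globally unique
Outputs:
  BucketArn:
    Value: !GetAtt LogsBucket.Arn              # reading the template documents the stack
```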
  • The AWS Cloud Development Kit (CDK) provides tools that make developing our code more convenient.
    • No need to develop code from scratch: pre-configured constructs are there to pull in and use.
    • You can then develop a custom version to share so others can reuse it, extend it, or use it in the next project.
  • AWS Systems Manager Patch Manager can be used for both on-premises servers and EC2 instances. Systems Manager can be configured for hybrid environments that include on-premises servers, edge devices, and VMs that are configured for AWS Systems Manager, including VMs in other cloud environments.
    There are several steps that must be taken to configure on-premises nodes to be managed by AWS Systems Manager.


  • A company is running several dev projects. Devs are assigned to a single project but move between projects frequently. Each project team requires access to different AWS resources.
    A Solutions Architect should create a customer managed policy document for each project that requires access to AWS resources, specifying control of the resources that belong to the project. Attach the project-specific policy document to an IAM group. Change the group membership when devs change projects, and update the policy document when the set of resources changes.
AWS Organizations:
  • Can group accounts into Organizational Units (OUs)
  • Service Control Policies (SCPs) can control tagging and the available API actions
  • Create accounts programmatically using the Organizations API
  • Enable AWS SSO using an on-premises directory
  • Receive a consolidated bill
  • Enable CloudTrail in the management account and apply it to member accounts

  • SCPs control the maximum available permissions
  • Tag policy applied to enforce tag standardization

SCP Strategies and Inheritance:

Deny List Strategy:
  • The FullAWSAccess SCP is attached to every OU and account
  • Explicitly allows all permissions to flow down from the root
  • Can explicitly override with a deny in an SCP
  • This is the default setup
  • An explicit deny overrides any kind of allow
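As an example of the deny list strategy, an SCP like the following denies member accounts the ability to leave the organization, while the attached FullAWSAccess SCP continues to allow everything else:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeavingOrg",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```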

Allow List Strategy:
  • The FullAWSAccess SCP is removed from every OU and account
  • To allow a permission, SCPs with allow statements must be added to the account and every OU above it including root
  • Every SCP in the hierarchy must explicitly allow the APIs you want to use
  • An explicit allow overrides an implicit deny

AWS Control Tower:
  • Directory source can be SSO, SAML 2.0 IdP, or Microsoft AD
  • Control Tower creates a landing zone: a well-architected multi-account baseline based on best practices
  • Guardrails are used for governance and compliance:
    • Preventive guardrails are based on SCPs and disallow API actions using SCPs
    • Detective guardrails are implemented using AWS Config rules and Lambda functions and monitor and govern compliance
  • The root user in the management account can perform actions that guardrails would disallow

How IAM Works:
  • IAM Principals must be authenticated to send requests (with a few exceptions)
  • A principal is a person or application that can make a request for an action or operation on an AWS resource
  • Request context:
    • Actions / operations
    • Resources
    • Principal
    • Environment data
    • Resource data
  • AWS determines whether to authorize the request (allow/deny)
Users, Groups, Roles and Policies:
  • Policies define the permissions for the identities or resources they are associated with
  • The user gains the permissions applied to the group through the policy
  • Identity-based policies can be applied to users, groups, and roles
  • Roles are used for delegation and are assumed

  • A company is migrating an application into AWS. The DB layer consists of a 25 TB MySQL DB in the on-premises data center. There is a 50 Mbps internet connection and an IPSec VPN connection to the Amazon VPC. The company plans to go live on AWS within 2 weeks. The migration schedule with the LEAST downtime is:
    The internet connection is too slow. The best approach is to use a DB export using DB-native tools and import the exported data to AWS using AWS Snowball. The data can be loaded into a newly launched Amazon Aurora MySQL DB instance. Native SQL replication can then be used to synchronize from the on-premises DB to the RDS Aurora instance using the VPN.
    Once the Aurora DB instance is fully synchronized, the DNS entry can be changed to point to the Aurora DB instance; the application will then use the Aurora DB instance and replication can be stopped.

  • The Recovery Time Objective (RTO) defines how quickly a service must be restored and a Recovery Point Objective (RPO) defines how much data it is acceptable to lose. For example, an RTO of 30 minutes means the service must be running again within half an hour and an RPO of 5 minutes means no more than 5 minutes' worth of data can be lost.
    Application tiers use Amazon EC2 instances and are stateless. The data tier consists of a 30 TB Amazon Aurora DB. To achieve this example, a hot standby of the EC2 instances is required. With a hot standby, a minimum number of application/web servers should be running, and they can be scaled out as needed.
    For the data tier, an Amazon Aurora cross-Region Replica is the best way to ensure that less than 5 minutes of data is lost. You can promote an Aurora Read Replica to a standalone DB cluster; this would be performed in the event of a disaster affecting the source DB cluster.


IAM Users:
  • Email used for signup.
  • The root user has full permissions. It's a best practice to avoid using the root user account + enable MFA.
  • Up to 5,000 individual user accounts can be created. Users have no permissions by default.
  • Friendly name: Andrea, Amazon Resource Name: arn:aws:iam::736259363490:user/Andrea
  • Authentication via username/password for console or access keys for API/CLI.

IAM Groups:
  • Groups are collections of users. Users can be members of up to 10 groups.
  • The main reason to use groups is to apply permissions to users using policies.
  • The user gains the permissions applied to the group through the policy.

IAM Roles:
  • An IAM role is an IAM identity that has specific permissions.
  • Roles are assumed by users, applications, and services.
  • Once assumed, the identity 'becomes' the role and gains the role's permissions.

IAM Policies:
  • Policies are documents that define permissions and are written in JSON.
  • All permissions are implicitly denied by default.
  • Identity-based policies can be applied to users, groups, and roles.
    These are JSON permissions policy documents that control what actions an identity can perform, on which resources, and under what conditions.
    • Inline policies have a 1-1 relationship with the user, group, or role.
    • AWS managed are created and managed by AWS; customer managed are created and managed by you.
    • Managed policies are standalone policies that can be attached to multiple users, groups, or roles.
    • Identity-based policies are permissions policies.
  • Resource-based policies are JSON policy documents that apply/attach to resources such as an S3 buckets or DynamoDB tables.
    • Grant the specified principal (Paul) permission to perform specific actions on the resource.
    • A role's trust policy is an example of a resource-based policy.
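A sketch of a resource-based policy: an S3 bucket policy that grants the principal Paul permission to read objects (the account ID and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:user/Paul" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```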

Role-Based Access Control (RBAC):
  • Users are assigned permissions through policies attached to groups.
  • Groups are organized by job function.
  • Best practice is to grant the minimum permissions required to perform the job.

  • API - kms:GenerateDataKey - returns a unique symmetric data key for use outside of AWS KMS. This operation returns a plaintext copy of the data key and a copy that is encrypted under a symmetric encryption KMS key that you specify. The bytes in the plaintext key are random; they are not related to the caller or the KMS key. You can use the plaintext key to encrypt data outside of AWS KMS and store the encrypted data key with the encrypted data.

  • AWS WAF is a Web Application Firewall that helps protect web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. It gives control over how traffic reaches applications by enabling you to create security rules that block common attack patterns and rules that filter out specific traffic patterns you define.
    You can deploy AWS WAF on Amazon CloudFront as part of a CDN solution, on the ALB that fronts web servers or origin servers running on EC2, or on Amazon API Gateway for APIs.
    AWS WAF - How it Works


    To block specific countries, can create a WAF geo match statement listing the countries to block, and to allow traffic from the IPs of the remote development team, can create a WAF IP set statement that specifies the IP addresses to allow through.
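    A sketch of how the two statements combine in a single WAF v2 rule (country codes, the IP set ARN, and the rule name are placeholders, not from the original notes):

    ```json
    {
      "Name": "BlockCountriesExceptDevTeam",
      "Priority": 0,
      "Statement": {
        "AndStatement": {
          "Statements": [
            { "GeoMatchStatement": { "CountryCodes": ["CN", "RU"] } },
            {
              "NotStatement": {
                "Statement": {
                  "IPSetReferenceStatement": {
                    "ARN": "arn:aws:wafv2:us-east-1:123456789012:regional/ipset/dev-team/example-id"
                  }
                }
              }
            }
          ]
        }
      },
      "Action": { "Block": {} },
      "VisibilityConfig": {
        "SampledRequestsEnabled": true,
        "CloudWatchMetricsEnabled": true,
        "MetricName": "BlockCountriesExceptDevTeam"
      }
    }
    ```

    The rule blocks requests that come from the listed countries unless the source IP is in the development team's IP set.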

  • If using an Elastic Load Balancer (ELB) with EC2, it is recommended to configure the security group associated with the application servers to only allow incoming traffic on port 80 from the ELB (by using the security group associated with the ELB as the source). This stops direct incoming traffic on port 80 from any other source.
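    A minimal CloudFormation sketch of this pattern, assuming hypothetical parameter names (VpcId, LoadBalancerSecurityGroup) for the VPC and the ELB's security group:

    ```json
    {
      "AppServerSecurityGroup": {
        "Type": "AWS::EC2::SecurityGroup",
        "Properties": {
          "GroupDescription": "Allow HTTP only from the load balancer",
          "VpcId": { "Ref": "VpcId" },
          "SecurityGroupIngress": [
            {
              "IpProtocol": "tcp",
              "FromPort": 80,
              "ToPort": 80,
              "SourceSecurityGroupId": { "Ref": "LoadBalancerSecurityGroup" }
            }
          ]
        }
      }
    }
    ```

    Using the ELB's security group ID as the source (rather than a CIDR range) means the rule automatically tracks the load balancer's addresses.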

  • Amazon S3 Transfer Acceleration enables fast, easy, and secure transfer of files over long distances between a client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
    Transfer Acceleration is a good solution for the following use cases:
    • There are customers that upload to a centralized bucket from all over the world.
    • Transfer gigabytes to terabytes of data on a regular basis across continents.
    • Unable to utilize all of the available bandwidth over the internet when uploading to Amazon S3.
    Multipart upload transfers parts of the file in parallel and can speed up performance. This should definitely be built into the application code. Multipart upload also handles the failure of any parts gracefully, allowing for those parts to be retransmitted.
    Transfer Acceleration in combination with multipart upload will offer significant speed improvements when uploading data.
    For performance improvements when downloading data, CloudFront can be used.
  • When you want to give one or more consumer VPCs unidirectional access to a certain service or group of instances in the service provider VPC, use AWS PrivateLink. A connection to the service in the service provider VPC can only be started by clients in the consumer VPC.
    AWS PrivateLink establishes a private connection between a Virtual Private Cloud (VPC) and AWS services. Those services can be hosted anywhere: in your own account, in a different account, or in a different VPC.
    The connection doesn't require an internet gateway, NAT gateway, or any other form of networking connection. The data flows entirely over a private link, which means communication happens over internal IP addresses.

  • In Amazon S3, can grant users in another AWS account (Account B) granular cross-account access to objects owned by Account A.
    1. Create an S3 bucket in Account A.
    2. Create an IAM role or user in Account B.
    3. Give the IAM role in Account B permission to access objects from a specific bucket:
                  "Effect": "Allow",
                  "Action": [
                  "Resource": "arn:aws:s3:::AccountABucketName/*"
    4. Configure the bucket policy for Account A to grant permissions to the IAM role or user that you created in Account B. Use this bucket policy to grant a user permission to access objects in a bucket owned by Account A:
                  "Effect": "Allow",
                  "Principal": {
                    "AWS": "arn:aws:iam::AccountB:user/AccountBUserName"
                  "Action": [
                  "Resource": [
  • For object encryption at rest, can set the default encryption behavior on an Amazon S3 bucket so that all objects are encrypted when they are stored in the bucket. The objects are encrypted using Server-Side Encryption with either Amazon S3-managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) keys.
    To deny unencrypted objects, a condition on 's3:x-amz-server-side-encryption' can be added to the bucket policy, which allows only encrypted object uploads and can also restrict uploads to a specific KMS key.
    Amazon CloudFront can use a 301 response code to redirect HTTP requests to HTTPS, allowing only secured traffic.
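    A sketch of a bucket policy statement that denies unencrypted uploads (the bucket name is a placeholder; change "aws:kms" to "AES256" to require SSE-S3 instead):

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "DenyUnencryptedUploads",
          "Effect": "Deny",
          "Principal": "*",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::example-bucket/*",
          "Condition": {
            "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }
          }
        }
      ]
    }
    ```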
  • An external ID prevents another AWS customer from gaining unintended access to your account (the "confused deputy" problem).
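    A sketch of a role trust policy that requires an external ID when the role is assumed (the partner account ID and the external ID value are placeholders):

    ```json
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": { "AWS": "arn:aws:iam::999999999999:root" },
          "Action": "sts:AssumeRole",
          "Condition": { "StringEquals": { "sts:ExternalId": "Unique12345" } }
        }
      ]
    }
    ```

    The third party must pass the same ExternalId value in its AssumeRole call, so a request made on behalf of a different customer (who would have a different external ID) is rejected.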



Job function policies: AWS managed policies for job functions are designed to closely align to common job functions in the IT industry:
  • Administrator
  • Billing
  • Database administrator
  • Data scientist
  • Developer power user
  • Network administrator
  • Security auditor
  • Support user
  • System administrator
  • View-only user

Attribute-Based Access Control (ABAC):
  • Tags are a way of assigning metadata to resources using key/value pairs.
  • Permissions are granted to resources when the tag matches a certain value.
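A minimal sketch of the ABAC idea in plain Python (the tag keys and values are hypothetical, and this is a simplified model, not an AWS API): access is allowed only when the principal's tag value matches the resource's tag value.

```python
def abac_allows(principal_tags: dict, resource_tags: dict, key: str) -> bool:
    """Allow only when both sides carry the tag key and the values match."""
    return (
        key in principal_tags
        and key in resource_tags
        and principal_tags[key] == resource_tags[key]
    )

# A principal tagged Project=alpha can reach resources tagged Project=alpha...
print(abac_allows({"Project": "alpha"}, {"Project": "alpha"}, "Project"))  # True
# ...but not resources tagged with a different project.
print(abac_allows({"Project": "alpha"}, {"Project": "beta"}, "Project"))   # False
```

The advantage over RBAC is that new resources are covered automatically as long as they are tagged correctly; no policy update is needed.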

Permissions Boundaries:
  • Permissions boundaries are attached to users and roles.
  • The permissions boundary sets the maximum permissions that the entity can have.
  • Ensures that any users an entity creates have the same or fewer permissions, so an entity cannot gain more privileges by creating and logging in as other users.

Policy Evaluation Logic:


Steps for Authorizing Requests to AWS:
  1. Authentication - AWS authenticates the principal that makes the request
    Request context:
    • Actions - the actions or operations the principal wants to perform
    • Resources - The AWS resource object upon which actions are performed
    • Principal - The user, role, federated user, or application that sent the request
    • Environment data - Information about the IP address, user agent, SSL status, or time of day
    • Resource data - Data related to the resource that is being requested
  2. Processing the request context
  3. Evaluating all policies within the account
  4. Determining whether a request is allowed or denied

Types of Policy:
  • Identity-based policies - attached to users, groups, or roles
  • Resource-based policies - attached to a resource; define permissions for a principal accessing the resource
  • IAM permissions boundaries - set the maximum permissions an identity-based policy can grant an IAM entity
  • AWS Organizations SCP - specify the maximum permissions for an organization or OU
  • Session policies - used with AssumeRole* API actions

Determination Rules:
  1. By default, all requests are implicitly denied (though the root user has full access)
  2. An explicit allow in an identity-based or resource-based policy overrides this default
  3. If a permissions boundary, Organizations SCP, or session policy is present, it might override the allow with an implicit deny
  4. An explicit deny in any policy overrides any allows
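The determination rules above can be sketched as a small function (a simplified model for intuition, not the real IAM evaluation engine; action names are illustrative):

```python
def evaluate(identity_allows, resource_allows, explicit_denies,
             boundary_allows=None):
    """Build a decision function over action strings from sets of allows/denies."""
    def decision(action):
        # Rule 4: an explicit deny in any policy overrides everything.
        if action in explicit_denies:
            return "Deny"
        # Rule 2: an explicit allow in an identity- or resource-based policy.
        allowed = action in identity_allows or action in resource_allows
        # Rule 3: a permissions boundary (or SCP) can cap that allow.
        if allowed and boundary_allows is not None:
            allowed = action in boundary_allows
        # Rule 1: otherwise the implicit default is deny.
        return "Allow" if allowed else "Deny"
    return decision

check = evaluate(identity_allows={"s3:GetObject"},
                 resource_allows=set(),
                 explicit_denies={"s3:DeleteObject"},
                 boundary_allows={"s3:GetObject", "s3:PutObject"})
print(check("s3:GetObject"))     # Allow (explicit allow, within the boundary)
print(check("s3:PutObject"))     # Deny (boundary alone grants nothing)
print(check("s3:DeleteObject"))  # Deny (explicit deny wins)
```

Note that the boundary only filters allows; it never grants access by itself, which is why "s3:PutObject" is denied despite appearing in the boundary.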

  • A 403 error is an access denied error. Therefore, there must be an issue relating to authorization to access the S3 bucket. Steps to check:
    1. Bucket policy
    2. IAM roles
    3. The most likely cause of the ECS issue is that the ECS task execution role has been changed and the tasks no longer have the permissions to access S3.
      According to best practice, the permissions that containers (tasks) require should be specified in task execution roles, not in container instance IAM roles. As the cluster has been set up according to best practice, the permission to S3 should not be specified in the container instance IAM role.
  • To deploy an application in two AWS Regions that will be used simultaneously, the objects in the two S3 buckets must remain synchronized with each other. The steps are:
    1. Create an S3 Multi-Region Access Point. Change the application to refer to the Multi-Region Access Point.
    2. Configure two-way S3 Cross-Region Replication (CRR) between the two S3 buckets.
    3. Enable S3 Versioning for each S3 bucket.
  • AWS Direct Connect (DX) makes it easy to establish a dedicated connection from an on-premises network to Amazon VPC. Using AWS DX, can establish private connectivity between AWS and a data center, office, or colocated environment. This private connection can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
    The design that would provide the MOST resilient connectivity between AWS and the on-premises data center is to install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.
    This provides physical separation and redundancy for the DX connections and is preferable to using the same carrier, which could result in sharing the same physical pathways. The virtual private gateway has built-in redundancy, so sharing a VGW is acceptable.


  • To make the monitoring process more reliable for troubleshooting any future events due to traffic spikes:
    • If using an Aurora MySQL DB cluster, can configure it to publish general, slow query, audit, and error log data to a log group in Amazon CloudWatch Logs.
      With CloudWatch Logs, can perform real-time analysis of the log data and use CloudWatch to create alarms and view metrics. CloudWatch Logs can also store log records in highly durable storage. To publish logs to CloudWatch Logs, the respective log types must be enabled. Error logs are enabled by default, but the other types of logs must be enabled explicitly.
    • Can collect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent. The unified CloudWatch agent enables collecting internal system-level metrics from EC2 instances across operating systems. The metrics can include in-guest metrics, in addition to the standard metrics for EC2 instances. Can collect logs from EC2 instances and on-premises servers running either Linux or Windows Server.
    • Can use the X-Ray SDK to trace incoming HTTP requests.
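    A sketch of a unified CloudWatch agent configuration collecting in-guest metrics and an application log file (the file path, log group name, and metric choices are placeholders):

    ```json
    {
      "metrics": {
        "metrics_collected": {
          "mem": { "measurement": ["mem_used_percent"] },
          "disk": { "measurement": ["used_percent"], "resources": ["*"] }
        }
      },
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "/var/log/app/app.log",
                "log_group_name": "app-logs",
                "log_stream_name": "{instance_id}"
              }
            ]
          }
        }
      }
    }
    ```

    Memory and disk usage are in-guest metrics the agent adds on top of the standard EC2 metrics, which is why CloudWatch alone cannot report them.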
  • A security group acts as a virtual firewall that controls the traffic for one or more instances. When launching an instance, can specify one or more security groups; otherwise, AWS uses the default security group. Can add rules to each security group that allow traffic to or from its associated instances. Can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group. To decide whether to allow traffic to reach an instance, AWS evaluates all the rules from all the security groups that are associated with the instance.
    The following are the default rules for a default security group:
    • Allow inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group.
    • Allows all outbound traffic.
    The following are the default rules for a security group that you create:
    • Allows no inbound traffic; can change this to reflect the type of inbound traffic that should reach the associated instances.
    • Allows all outbound traffic; this can also be changed.
  • Use AWS DX gateway to connect VPCs. Associate an AWS DX gateway with either of the following gateways:
    • A transit gateway when there are multiple VPCs in the same Region
    • A virtual private gateway
    Can also use a virtual private gateway to extend a Local Zone. This configuration allows the VPC associated with the Local Zone to connect to a DX gateway. The DX gateway connects to a DX location in a Region. The on-premises Data Center (DC) has a DX connection to the DX location.
    A DX gateway can also be used to add a redundant DX connection in the same Region, and it can provide connectivity to other Regions through the same pair of DX connections as the company expands into other Regions.


IAM Policy Structure:
  • An IAM policy is a JSON document that consists of one or more statements
        "Effect":"effect", > The effect element can be Allow or Deny
        "Action":"action", > The Action element is the specific API action for which are granting or denying permission
        "Resource":"arn", > The Resource element specifies the resource that's affected by the action
        "Condition":{ > The Condition element is optional and can be used to control when your policy is in effect
  • "Action": "*", > The Administrator Access policy uses wildcards (*) to allow all actions
    "Resource": "*" > on all resources

  • "Action": ["ec2:TerminateInstances"], > The specific API action is defined
    "Condition": {
    .."NotIpAddress": { > The effect is to deny the API action if the IP address is not in the specified range

  • "Principal": { > Can tell this is a resource-based policy as it has a principal element defined
    .."AWS": "*"
    "Action": [
    .."elasticfilesystem:ClientWrite" > The policy grants read and write access to an EFS file systems to all IAM principals ("AWS": "*")
    "Condition": {
    .."Bool": {
    ...."aws:SecureTransport": "true" > Additionally, the policy condition element requires that SSL/TLS encryption is used

  • "Condition": {"StringLike": {"s3: prefix": ["${aws:username}/*"]}} > A variable is used for the s3: prefix that is replaced with the user's friendly name

  • Amazon RDS Read Replicas provide enhanced performance and durability for RDS DB instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy DB workloads. Can create one or more replicas of a given source DB Instance and serve high-volume application read traffic from multiple copies of data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.
    Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engine's native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.
    Can also promote a read replica if the source DB instance fails, and can set up a read replica with its own standby instance in a different AZ. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments.