Cloud Computing


  • AWS Systems Manager requires an IAM role for the EC2 instances that it manages, so it can perform actions on their behalf. This IAM role is referred to as an instance profile. If an instance is not managed by Systems Manager, one likely reason is that the instance does not have an instance profile, or the instance profile does not have the necessary permissions to allow Systems Manager to manage the instance.

  • A company has a new requirement stating that all resources in AWS must be tagged according to a set policy.
    The AWS Config service should be used to enforce the policy and continually identify all resources that are not in compliance with it.


  • A company's static website hosted on Amazon S3 was launched recently and is being used by tens of thousands of users. Subsequently, website users are experiencing 503 service unavailable errors.
    These errors occur because the request rate to Amazon S3 is too high.

  • A SysOps Admin needs to receive an email whenever critical, production Amazon EC2 instances reach 80% CPU utilization.
    This can be achieved by creating an Amazon CloudWatch alarm and configuring an Amazon SNS notification.
    CloudWatch Events is used for state changes, not metric breaches.
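
The alarm described above can be sketched as the parameters one would pass to CloudWatch's PutMetricAlarm API (e.g. boto3's `put_metric_alarm(**alarm)`). The instance ID and SNS topic ARN are placeholders:

```python
# Sketch of a CloudWatch alarm that notifies an SNS topic (which emails
# subscribers) when CPU utilization reaches 80%. Placeholder identifiers.
alarm = {
    "AlarmName": "prod-ec2-high-cpu",
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    "Statistic": "Average",
    "Period": 300,                 # evaluate 5-minute averages
    "EvaluationPeriods": 1,
    "Threshold": 80.0,             # alarm at 80% CPU utilization
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
}
```

The SNS topic's email subscription is what actually delivers the notification.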

  • A SysOps admin is responsible for managing a company's cloud infrastructure with AWS CloudFormation. The SysOps admin needs to create a single resource that consists of multiple AWS services. The resource must support creation and deletion through the CloudFormation console.
    To meet these requirements the SysOps admin should create a CloudFormation custom resource (Custom::MyCustomType).
    Custom resources enable you to write custom provisioning logic in templates that AWS CloudFormation runs whenever you create, update (if the custom resource changed), or delete stacks. For example, you might want to include resources that aren't available as AWS CloudFormation resource types. You can include those resources by using custom resources.
    That way you can still manage all related resources in a single stack.
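
A minimal, hypothetical template fragment for the custom resource described above. The Lambda ARN and property names are placeholders; the `ServiceToken` points at the function that implements the provisioning logic:

```python
import json

# Custom resources are declared like any other resource; CloudFormation
# invokes the ServiceToken target on stack create, update, and delete.
template = {
    "Resources": {
        "MyResource": {
            "Type": "Custom::MyCustomType",
            "Properties": {
                # ARN of the Lambda that performs the provisioning logic
                "ServiceToken": "arn:aws:lambda:us-east-1:123456789012:function:my-provisioner",
                "SomeInput": "example-value",   # passed through to the function
            },
        }
    }
}
print(json.dumps(template, indent=2))
```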

  • A SysOps Admin manages a fleet of Amazon EC2 instances running a distribution of Linux. The OSs are patched on a schedule using AWS Systems Manager Patch Manager. Users of the application have complained about poor response times when the systems are being patched.
    To ensure patches are deployed automatically with MINIMAL customer impact, configure the maintenance window to patch 10% of the instances in the patch group at a time.

  • A global gaming company is preparing to launch a new game on AWS. The game runs in multiple AWS Regions on a fleet of Amazon EC2 instances. The instances are in an Auto Scaling group behind Application Load Balancer (ALB) in each Region. The company plans to use Amazon Route 53 for DNS services. The DNS configuration must direct users to the Region that is closest to them and must provide automated failover.
    To configure Route 53 to meet these requirements, a SysOps admin should create Amazon CloudWatch alarms that monitor the health of the ALB in each Region, configure Route 53 DNS failover by using a health check that monitors the alarms, and configure Route 53 geoproximity routing, specifying the Regions that are used for the infrastructure.
    Monitoring the health of the EC2 instances is not sufficient to provide failover as the EC2 instances are in an Auto Scaling group and instances can be added or removed dynamically.
    Monitoring the private IP address of an EC2 instance is not sufficient to determine the health of the infrastructure, as the instance may still be running but the application or service on the instance may be unhealthy.
    Simple routing does not take into account geographic proximity.

  • A company has a workload that is sending log data to Amazon CloudWatch Logs. One of the fields includes a measure of application latency. A SysOps admin needs to monitor the p90 statistic of this field over time.
    To meet this requirement the SysOps admin should create a metric filter on the log data, then monitor the p90 statistic of the resulting metric.
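
The metric filter can be sketched as the parameters for CloudWatch Logs' PutMetricFilter API (boto3's `put_metric_filter`). The log group, field name, and namespace are assumptions for illustration; once the metric exists, p90 is simply selected as the statistic when graphing or alarming:

```python
# Assumes JSON log lines like: {"path": "/api", "latency_ms": 123}
metric_filter = {
    "logGroupName": "/app/production",        # placeholder log group
    "filterName": "latency-extract",
    "filterPattern": "{ $.latency_ms = * }",  # match any record with the field
    "metricTransformations": [{
        "metricName": "AppLatency",
        "metricNamespace": "MyApp",
        "metricValue": "$.latency_ms",        # publish the field's value as the metric
    }],
}
```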

  • A SysOps admin must configure a resilient tier of Amazon EC2 instances for a High Performance Computing (HPC) application. The HPC application requires minimum latency between nodes.
    To meet these requirements the SysOps admin should place the EC2 instances in an Auto Scaling group within a single subnet and launch the EC2 instances into a cluster placement group.

  • A company has a web application with a DB tier that consists of an Amazon EC2 instance that runs MySQL. A SysOps admin needs to minimize potential data loss and the time that is required to recover in the event of a DB failure.
    The MOST operationally efficient solution that meets these requirements is to use Amazon Data Lifecycle Manager (DLM) to take a snapshot of the Amazon Elastic Block Store (EBS) volume every hour. In the event of an EC2 instance failure, restore the EBS volume from a snapshot.

  • If the developers are provided with full admin access, then the only way to ensure compliance with the corporate policy is to use AWS Organizations Service Control Policies (SCPs) to restrict the API actions relating to use of the specific restricted services.

  • A company has set up an IPSec tunnel between its AWS environment and its on-premises data center. The tunnel is reporting as UP, but the Amazon EC2 instances are not able to ping any on-premises resources.
    To resolve this issue a SysOps admin should create a new inbound rule on the EC2 instances' security groups to allow ICMP traffic from the on-premises CIDR.
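
The fix can be sketched as the parameters for EC2's AuthorizeSecurityGroupIngress API (boto3's `authorize_security_group_ingress(**rule)`). The group ID and on-premises CIDR are placeholders:

```python
# Allow all ICMP types/codes (FromPort/ToPort of -1) from the on-prem range,
# so ping (ICMP echo) reaches the instances.
rule = {
    "GroupId": "sg-0123456789abcdef0",
    "IpPermissions": [{
        "IpProtocol": "icmp",
        "FromPort": -1,   # -1/-1 = all ICMP types and codes
        "ToPort": -1,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "on-premises"}],
    }],
}
```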

  • If users have reported high latency and connection instability for the application in another Region, we can create an accelerator in AWS Global Accelerator and update the DNS record to improve availability and performance. Global Accelerator provides static IP addresses that act as a fixed entry point to applications and eliminate the complexity of managing specific IP addresses for different AWS Regions and AZs.
    It always routes user traffic to the optimal endpoint based on performance, reacting instantly to changes in application health, the user's location, and policies that you configure.
    CloudFront cannot be used for the SFTP protocol.

  • A company monitors its account activity using AWS CloudTrail, and is concerned that some log files are being tampered with after the logs have been delivered to the account's Amazon S3 bucket.
    Moving forward, the SysOps Admin can confirm that the log files have not been modified after being delivered to the S3 bucket by enabling log file integrity validation and using digest files to verify the hash value of the log files.
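
A simplified sketch of what integrity validation checks: each digest file records the SHA-256 hash of every delivered log file, so a tampered log no longer matches its recorded hash. Real validation (`aws cloudtrail validate-logs`) also verifies the digest file's own signature; this only illustrates the hash comparison:

```python
import hashlib

def matches_digest(log_bytes: bytes, recorded_sha256_hex: str) -> bool:
    # Compare the log file's current hash to the value stored in the digest file
    return hashlib.sha256(log_bytes).hexdigest() == recorded_sha256_hex

original = b'{"Records": []}'
recorded = hashlib.sha256(original).hexdigest()   # value CloudTrail stored at delivery
assert matches_digest(original, recorded)                      # untouched log passes
assert not matches_digest(b'{"Records": ["tampered"]}', recorded)  # modified log fails
```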

  • A company uses several large Chef recipes to automate the configuration of Virtual Machines (VMs) in its data center. A SysOps admin is migrating this workload to Amazon EC2 instances on AWS and must run the existing Chef recipes.
    The solution that will meet these requirements MOST cost-effectively is to set up AWS OpsWorks for Chef Automate, migrate the existing recipes, and modify the EC2 instance user data to connect to Chef.

  • A company needs to ensure strict adherence to a budget for 25 applications deployed on AWS. Separate teams are responsible for storage, compute, and DB costs. A SysOps admin must implement an automated solution to alert each team when their projected spend will exceed a quarterly amount that has been set by the finance department. The solution cannot incur additional compute, storage, or DB costs.
    The solution that will meet these requirements is to use AWS Budgets to create a cost budget for each team, filtering by the services each team owns. Specify the budget amount defined by the finance department along with a forecasted cost threshold. Enter the appropriate email recipients for each budget.


  • Compute Savings Plans offer flexibility and can apply to usage across any AWS Region, any AWS compute service (including AWS Fargate, not only EC2 like the EC2 Instance Savings Plans), and across different instance families.


  • A company runs an encrypted Amazon RDS for Oracle DB instance. The company wants to make regular backups available in another AWS Region.
    The MOST operationally efficient solution that meets these requirements is to modify the DB instance and enable cross-Region automated backups.

  • A company is creating a new application that will run in a hybrid environment. The application processes data that must be secured and the developers require encryption in-transit across shared networks and encryption at rest.
    To meet these requirements a SysOps Admin should configure an AWS Virtual Private Network (VPN) connection between the on-premises data center and AWS. It will encrypt data over the shared, hybrid network connection, ensuring encryption in-transit; if you don't have a certificate you can create a pre-shared key. And
    use AWS KMS to manage encryption keys that can be used for data encryption. In this case the keys would then be used outside of KMS to actually encrypt the data.

  • A SysOps admin noticed that the cache hit ratio for an Amazon CloudFront distribution is less than 10%.
    The configuration changes that will increase the cache hit ratio for the distribution are: increase the CloudFront Time To Live (TTL) settings in the Cache Behavior Settings, and ensure that only required cookies, query strings, and headers are forwarded in the Cache Behavior Settings. By default, each file automatically expires after 24 hours.

  • A company plans to use Amazon Route 53 to enable HA for a website running on-premises. The website consists of an active and a passive server. Route 53 must be configured to route traffic to the primary active server if the associated health check returns a 2xx status code. All other traffic should be directed to the secondary passive server.
    A SysOps Admin needs to configure the record type and health check. The website runs on-premises and therefore an Alias record cannot be used, as Alias records only target AWS resources. Therefore, an A record should be used for each server. The health check must evaluate HTTP status codes and therefore should be an HTTP health check.
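
The failover setup can be sketched as the two A records one would submit via Route 53's ChangeResourceRecordSets API. Domain name, IPs, and health check ID are placeholders; Route 53 serves the SECONDARY record only while the primary's health check is failing:

```python
records = [
    {"Name": "www.example.com", "Type": "A",
     "SetIdentifier": "primary", "Failover": "PRIMARY",
     "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.10"}],
     "HealthCheckId": "hc-primary"},          # HTTP health check on the active server
    {"Name": "www.example.com", "Type": "A",
     "SetIdentifier": "secondary", "Failover": "SECONDARY",
     "TTL": 60, "ResourceRecords": [{"Value": "203.0.113.20"}]},
]
```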

  • A company wants to reduce costs for jobs that can be completed at any time. The jobs currently run by using multiple Amazon EC2 On-Demand Instances and take slightly less than 2 hours to complete. If a job fails for any reason it must be restarted from the beginning.
    The solution that will meet these requirements MOST cost-effectively is to submit a request for Spot Instances with a defined duration for the jobs. Note that defined-duration Spot Instances, also known as Spot blocks, are no longer available to new customers as of July 1, 2021; use On-Demand Instances for workloads that are not interruption tolerant.

  • An Amazon S3 bucket holds sensitive data. A SysOps Admin has been tasked with monitoring all object upload and download activity relating to the bucket. Monitoring must include tracking the AWS account of the caller, the IAM user or role of the caller, the time of the API call, and the IP address of the API caller.
    To meet the requirements the SysOps Admin should enable data event logging in AWS CloudTrail.
    Data events provide visibility into the resource operations performed on or within a resource. These are also known as data plane operations. Data events are often high-volume activities.
    The following two data types are recorded:
    • Amazon S3 object-level API activity (for example, GetObject, DeleteObject, and PutObject API operations).
    • AWS Lambda function execution activity (the Invoke API).
    Data events are disabled by default when you create a trail. To record CloudTrail data events, you must explicitly add the supported resources or resource types for which you want to collect activity to a trail.
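
The event selector that enables S3 object-level logging can be sketched as the parameters for CloudTrail's PutEventSelectors API (boto3's `put_event_selectors`). The bucket name is a placeholder:

```python
# Record both read (GetObject) and write (PutObject, DeleteObject) data
# events for every object in one bucket; the trailing slash in the ARN
# means "all objects in this bucket".
event_selectors = [{
    "ReadWriteType": "All",
    "IncludeManagementEvents": True,
    "DataResources": [{
        "Type": "AWS::S3::Object",
        "Values": ["arn:aws:s3:::sensitive-bucket/"],
    }],
}]
```
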
  • A company's SysOps admin must ensure that all Amazon EC2 Windows instances that are launched in an AWS account have a third-party agent installed. The third-party agent has an .msi package. The company uses AWS Systems Manager for patching, and the Windows instances are tagged appropriately. The third-party agent requires periodic updates as new versions are released. The SysOps admin must deploy these updates automatically.
    The steps that will meet these requirements with the LEAST operational effort are:
    • Create a Systems Manager Distributor package for the third-party agent. Make sure that Systems Manager Inventory is configured. If Systems Manager Inventory is not configured, set up a new inventory for instances that is based on the appropriate tag value for Windows.
    • Create a Systems Manager OpsItem with the tag value for Windows. Attach the Systems Manager Distributor package to the OpsItem. Create a maintenance window that is specific to the package deployment. Configure the maintenance window to cover 24 hours a day.
  • When using multiple accounts within a Region it is important to understand that the name of the Availability Zone (AZ) in each account may map to a different underlying AZ. For instance, us-east-1a may map to a different AZ in one account vs another.
    To identify the location of resources relative to accounts, you must use the AZ ID (zoneId), which is a unique and consistent identifier for an AZ. For example, use1-az1 is an AZ ID for the us-east-1 Region and it is the same location in every AWS account.
    This information can be obtained in a few different ways, including running the DescribeAvailabilityZones API operation or the DescribeSubnets API operation.

  • A SysOps admin launches an Amazon EC2 Linux instance in a public subnet. When the instance is running, the SysOps admin obtains the public IP address and attempts to remotely connect to the instance multiple times. However, the SysOps admin always receives a timeout error.
    The action that will allow the SysOps admin to remotely connect to the instance is to modify the instance security group to allow inbound SSH traffic from the SysOps admin's IP address.

  • A company has a mobile app that uses Amazon S3 to store images. The images are popular for a week, and then the number of access requests decreases over time. The images must be Highly Available (HA) and must be immediately accessible upon request. A SysOps admin must reduce S3 storage costs for the company.
    The solution that will meet these requirements MOST cost-effectively is to create an S3 Lifecycle policy to transition the images to S3 Standard-Infrequent Access (Standard-IA) after 7 days.
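
The lifecycle rule can be sketched as the configuration one would submit via S3's PutBucketLifecycleConfiguration API (boto3's `put_bucket_lifecycle_configuration`). The rule ID and prefix are placeholders:

```python
# Transition objects under images/ to Standard-IA 7 days after creation.
lifecycle = {
    "Rules": [{
        "ID": "images-to-ia",
        "Status": "Enabled",
        "Filter": {"Prefix": "images/"},
        "Transitions": [{"Days": 7, "StorageClass": "STANDARD_IA"}],
    }]
}
```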

  • A manufacturing company uses an Amazon RDS DB instance to store inventory of all stock items. The company maintains several AWS Lambda functions that interact with the DB to add, update, and delete items. The Lambda functions use hardcoded credentials to connect to the DB. A SysOps admin must ensure that the DB credentials are never stored in plaintext and that the password is rotated every 30 days.
    The solution that will meet these requirements in the MOST operationally efficient manner is to use AWS Secrets Manager to store credentials for the DB. Create a Secrets Manager secret, and select the DB so that Secrets Manager will use a Lambda function to update the DB password automatically. Specify an automatic rotation schedule of 30 days. Update each Lambda function to access the DB password from Secrets Manager.


  • A company runs an application that uses Amazon EC2 instances behind an ALB. Customers access the application using a custom DNS domain name. Reports have been received about errors when connecting to the application using the DNS name.
    The admin found that the load balancer target group shows no healthy instances.
    If the load balancer target group shows no healthy instances, check the target group health check configuration to understand why. The application will not work until the instances are healthy and accessible via the load balancer.

  • A company is managing multiple AWS accounts in AWS Organizations. The company is reviewing internal security of its AWS environment. The company's security admin has their own AWS account and wants to review the VPC configuration of dev AWS accounts.
    The solution that will meet these requirements in the MOST secure manner is to create an IAM policy in each dev account that has read-only access to VPC resources. Assign the policy to a cross-account IAM role. Ask the security admin to assume the role from their account.

  • A company hosts a web application on Amazon EC2 instances behind an ALB. The instances are in an Amazon EC2 Auto Scaling group. The application is accessed with a public URL. A SysOps admin needs to implement a monitoring solution that checks the availability of the application and follows the same routes and actions as a customer. The SysOps admin must receive a notification if less than 95% of the monitoring runs find no errors.
    The solution that will meet these requirements is to create an Amazon CloudWatch Synthetics canary with a script that follows customer routes. Schedule the canary to run on a recurring schedule. Create a CloudWatch alarm that publishes a message to an Amazon Simple Notification Service (SNS) topic when the SuccessPercent metric is less than 95%.
    You can use Amazon CloudWatch Synthetics to create canaries, configurable scripts that run on a schedule, to monitor endpoints and APIs. Canaries follow the same routes and perform the same actions as a customer.

  • With HTTP_STR_MATCH, Amazon Route 53 tries to establish a TCP connection. If successful, Route 53 submits an HTTP request and searches the first 5,120 bytes of the response body for the string that you specify in SearchString.
    If the response body contains the value that you specify in SearchString, Route 53 considers the endpoint healthy. If not, or if the endpoint doesn't respond, Route 53 considers the endpoint unhealthy. The search string must appear entirely within the first 5,120 bytes of the response body.
    The search string does not need to be HTML encoded. The resource path should begin with a forward slash (/); if you don't include it, Route 53 adds it anyway. And there is no minimum length for the search string.
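
A sketch of the HealthCheckConfig one would pass to Route 53's CreateHealthCheck API. The domain, path, and search string are placeholders for illustration:

```python
health_check = {
    "Type": "HTTP_STR_MATCH",
    "FullyQualifiedDomainName": "www.example.com",
    "Port": 80,
    "ResourcePath": "/health",
    "SearchString": "OK",        # must appear in the first 5,120 bytes of the body
    "RequestInterval": 30,       # seconds between checks
    "FailureThreshold": 3,       # consecutive failures before marked unhealthy
}
```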

  • A company recently deployed MySQL on an Amazon EC2 instance with a default boot volume. The company intends to restore a 1.75 TB DB. A SysOps admin needs to provision the correct Amazon EBS volume. The DB will require read performance of up to 10,000 IOPS and is not expected to grow in size.
    The solution that will provide the required performance at the LOWEST cost is to deploy a 2 TB General Purpose SSD (gp3) volume and set the IOPS to 10,000.

A deep dive into using DevOps and CI/CD on AWS, a new trend that Thai organizations can't afford to miss!:
Before DevOps existed, how did engineers work?:
  • Someone had to manually build and test; with 10-20 builds a day, that person could barely do anything else.
  • With 'humans', mistakes are normal; each deployment might miss a step or two.
What is DevOps?:
  • It is culture, automation, and platform design aimed at improving working efficiency, managing the development life cycle, and meeting various business needs so that software can be released much faster.
And what is CI/CD?:
  • It is a process under DevOps consisting of 2 parts: Continuous Integration (CI) and Continuous Delivery (CD).

  • CI: after code changes are finished, we bring in automation to help with building and running tests. We write a pipeline, for example a GitLab pipeline, as a script describing what to build with and which test jobs to run.
What does CI give us?:
  • Automated tests
  • Feedback of all kinds, and seeing bugs sooner

  • CD looks at which environment (Env) to deploy to, Test or Production, and how the various settings can be configured.
And what does CD give us?:
  • Automated deployment of changes to various targets
  • Feedback from users
  • Support for A/B testing

  • Continuous Delivery means the deployment step is manual.
  • Continuous Deployment means the deployment step is automated.
What are the benefits of DevOps?:
  • Faster, more frequent releases with real-time updates
  • Deployment reliability: deploy any number of times without mistakes
  • Improved customer experience, since work can be delivered quickly
  • Improved collaboration between the Dev and Operations teams
  • More time left over to learn other things
Software Development Life Cycle (SDLC) มี 6 ขั้นตอน:
  1. Plan
  2. Define
  3. Design
  4. Build
  5. Test
  6. Deploy
DevOps enables deploying each Sprint (Agile) straight to the customer:
  • Agile helps reduce the gap between developers and customers
  • DevOps tools reduce the gap between Dev(eloper) and Op(eration)s
  • The business team can sell with confidence that what is delivered to customers will be error-free, and the team is credible
When do we need DevOps?:
  • We want to release software or updates on time
  • We want stable software and environments
  • We want to find problems before they affect end users
  • We want to increase continuous delivery
AWS DevOps tools:

  • AWS Cloud9 is an IDE that lets us develop our applications directly, without downloading IDE software, tools, or packages onto our own machine.
    • Used for everything from writing, running, and debugging code to editing it
    • On-demand; no charge when not in use
    • Supports many languages, including Python, Java, Node.js, .NET, etc.


  • AWS CodeCommit is a repository for storing code and controlling software versions.
    • Similar to GitHub, but private
    • Organized as branches; code may be split into Production, Development, UAT, etc.
    • Integrates with AWS IAM to restrict usage permissions; status can be viewed from CloudWatch or events forwarded for further processing; encryption can be added with AWS KMS, etc.
  • AWS CodeBuild is then used to build (compile) and test whether the code is correct and whether the syntax has errors; if so, feedback comes back so it can be fixed first.
    • Also on-demand
    • Can be monitored through CloudWatch
    • Integrates with Jenkins, various Git tools, etc.
  • AWS CodeGuru checks and optimizes code and provides recommendations, with machine learning analyzing the code we write.

    • Offers intelligent recommendations drawn from a knowledge base of previous code builds and other best practices
    • Beyond errors, it also looks at optimization, e.g. whether a piece of code is written too verbosely, and returns recommendations
    • Also reviews efficiency and recommends ways to improve it
  • AWS CodeDeploy helps install our code to various destinations on AWS, such as EC2, Fargate (containers), Lambda (serverless), or even on-premises servers. You can deploy v2 to a single machine first, then to half (3 of 6 machines), then to all machines.
    • Supports blue/green deployment: Blue = existing version, Green = new version


  • AWS CodePipeline orchestrates code build, test, and deploy, making automation seamless and more efficient.
    • Can pull source from AWS CodeCommit, GitHub, Amazon S3, or Amazon ECR (container repository)
    • Can deploy to AWS CodeDeploy, EC2, Elastic Beanstalk, OpsWorks Stacks, ECS (blue/green), Fargate, CloudFormation, and Lambda
  • AWS X-Ray & Amazon CloudWatch are used to monitor status throughout, from source all the way to deployment.
    • CloudWatch covers performance, utilization, health, downtime, and errors, focusing on infrastructure; CPU & memory visibility down to the container level
    • X-Ray goes deeper, to the application level, fairly end to end: what the frontend links to, App 1, App 2, the DB, etc.


  • AWS CloudFormation is Infrastructure as Code (IaC): no need to click through UI pages one at a time.
    • Used to write code that creates, configures, and deploys infrastructure components
    • Can define anything: network, compute, DBs, security, management tools, etc.
    • Want identical environments across multiple Regions? Use it to deploy them
    • Serves as documentation: reading the code shows which services are in use
    • Automates both creation and deletion
    • Stacks can be created per version, so you can add or remove whichever version you want
  • The AWS Cloud Development Kit (CDK) provides tools to make developing our code more convenient.
    • No need to develop code from scratch; pre-configured constructs can be pulled in and used
    • You can then develop a custom version to share with others to reuse, extend, or use in the next project
  • AWS Systems Manager Patch Manager can be used for both on-premises servers and EC2 instances. Systems Manager can be configured for hybrid environments that include on-premises servers, edge devices, and VMs that are configured for AWS Systems Manager, including VMs in other cloud environments.
    There are several steps that must be taken to configure on-premises nodes to be managed by AWS Systems Manager.


  • A company is running several dev projects. Devs are assigned to a single project but move between projects frequently. Each project team requires access to different AWS resources.
    A Solutions Architect should create a customer managed policy document for each project that requires access to AWS resources, specifying control of the resources that belong to the project. Attach the project-specific policy document to an IAM group. Change the group membership when devs change projects. Update the policy document when the set of resources changes.
AWS Organizations:
  • Can group accounts into Organizational Units (OUs)
  • Service Control Policies (SCPs) can control tagging and the available API actions
  • Create accounts programmatically using the Organizations API
  • Enable AWS SSO using on-prem directory
  • Receive a consolidated bill
  • Enable CloudTrail in management account and apply to members

  • SCPs control the maximum available permissions
  • Tag policy applied to enforce tag standardization

SCP Strategies and Inheritance:

Deny List Strategy:
  • The FullAWSAccess SCP is attached to every OU and account
  • Explicitly allows all permissions to flow down from the root
  • Can explicitly override with a deny in an SCP
  • This is the default setup
  • An explicit deny overrides any kind of allow
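
An example deny-list SCP, illustrating the strategy above: FullAWSAccess stays attached, and this policy's explicit Deny overrides the inherited Allow. The denied action is just an illustrative choice:

```python
# SCPs use standard IAM policy JSON; an explicit Deny here cannot be
# overridden by any Allow lower in the hierarchy.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyLeavingOrg",
        "Effect": "Deny",
        "Action": "organizations:LeaveOrganization",
        "Resource": "*",
    }],
}
```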

Allow List Strategy:
  • The FullAWSAccess SCP is removed from every OU and account
  • To allow a permission, SCPs with allow statements must be added to the account and every OU above it including root
  • Every SCP in the hierarchy must explicitly allow the APIs you want to use
  • An explicit allow overrides an implicit deny

AWS Control Tower is a tool for creating a landing zone automatically:
  • Directory source can be SSO, SAML 2.0 IdP, or Microsoft AD
  • Control Tower creates a landing zone, a well-architected multi-account baseline based on best practices
  • This is known as a landing zone (a design with multiple AWS accounts, each with a specific function, supporting workload accounts so they can be created within a systematic framework; in short, it follows the multi-account strategy concept to divide responsibilities and define governance, using AWS Organizations as the core for creating and grouping accounts, along with designing the network, defining ingress/egress paths, and collecting logs; credit: Anuwat)
  • Guardrails are used for governance and compliance:
    • Preventive guardrails are based on SCPs and disallow API actions using SCPs
    • Detective guardrails are implemented using AWS Config rules and Lambda functions and monitor and govern compliance
  • The root user in the management account can perform actions that guardrails would disallow

How IAM Works:
  • IAM Principals must be authenticated to send requests (with a few exceptions)
  • A principal is a person or application that can make a request for an action or operation on an AWS resource
  • Request context:
    • Actions / operations
    • Resources
    • Principal
    • Environment data
    • Resource data
  • AWS determines whether to authorize the request (allow/deny)
Users, Groups, Roles and Policies:
  • Policies define the permissions for the identities or resources they are associated with
  • The user gains the permissions applied to the group through the policy
  • Identity-based policies can be applied to users, groups, and roles
  • Roles are used for delegation and are assumed

  • A company is migrating an application into AWS. The DB layer consists of a 25 TB MySQL DB in the on-premises data center. There is a 50 Mbps internet connection and an IPSec VPN connection to the Amazon VPC. The company plans to go live on AWS within 2 weeks. The migration schedule with the LEAST downtime is:
    The internet connection is too slow. The best approach is to use a DB export using DB-native tools and import the exported data to AWS using AWS Snowball. The data can be loaded into a newly launched Amazon Aurora MySQL DB instance. Native SQL replication can then be used to synchronize from the on-premises DB to the RDS Aurora instance using the VPN.
    Once the Aurora DB instance is fully synchronized, the DNS entry can be changed to point to the Aurora DB instance and the application will start to use the Aurora DB instance and stop replication.
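
A back-of-the-envelope check on why the internet connection above is too slow, assuming a decimal 25 TB and a fully utilized 50 Mbps link:

```python
# 25 TB over a 50 Mbps link takes well over a month, far beyond the
# 2-week go-live window, which is why Snowball plus ongoing replication fits.
db_bits = 25 * 10**12 * 8           # 25 TB in bits
link_bps = 50 * 10**6               # 50 Mbps
days = db_bits / link_bps / 86400   # 86,400 seconds per day
print(round(days, 1))               # ≈ 46.3 days, ignoring protocol overhead
```
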


IAM Users:
  • Email used for signup.
  • The root user has full permissions. It's a best practice to avoid using the root user account + enable MFA.
  • Up to 5,000 individual user accounts can be created. Users have no permissions by default.
  • Friendly name: Andrea; Amazon Resource Name: arn:aws:iam::736259363490:user/Andrea
  • Authentication via username/password for console or access keys for API/CLI.

IAM Groups:
  • Groups are collections of users. Users can be members of up to 10 groups.
  • The main reason to use groups is to apply permissions to users using policies.
  • The user gains the permissions applied to the group through the policy.

IAM Roles:
  • An IAM role is an IAM identity that has specific permissions.
  • Roles are assumed by users, applications, and services.
  • Once assumed, the identity 'becomes' the role and gains the role's permissions.

IAM Policies:
  • Policies are documents that define permissions and are written in JSON.
  • All permissions are implicitly denied by default.
  • Identity-based policies can be applied to users, groups, and roles.
    These are JSON permissions policy documents that control what actions an identity can perform, on which resources, and under what conditions.
    • Inline policies have a 1-1 relationship with the user, group, or role.
    • AWS managed are created and managed by AWS; customer managed are created and managed by you.
    • Managed policies are standalone policies that can be attached to multiple users, groups, or roles.
    • A permissions policy.
  • Resource-based policies are JSON policy documents that apply/attach to resources such as S3 buckets or DynamoDB tables.
    • Grant the specified principal (Paul) permission to perform specific actions on the resource.
    • A trust policy.
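
An example resource-based policy on an S3 bucket granting the principal (Paul) read access; the account ID and bucket name are placeholders:

```python
# A bucket policy names the Principal explicitly, unlike an
# identity-based policy, which is implied by whoever it is attached to.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/Paul"},
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-bucket/*",   # all objects in the bucket
    }],
}
```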

Role-Based Access Control (RBAC):
  • Users are assigned permissions through policies attached to groups.
  • Groups are organized by job function.
  • Best practice is to grant the minimum permissions required to perform the job.

  • API - kms:GenerateDataKey - returns a unique symmetric data key for use outside of AWS KMS. This operation returns a plaintext copy of the data key and a copy that is encrypted under a symmetric encryption KMS key that you specify. The bytes in the plaintext key are random; they are not related to the caller or the KMS key. You can use the plaintext key to encrypt data outside of AWS KMS and store the encrypted data key with the encrypted data.
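
A toy illustration of the envelope-encryption pattern around GenerateDataKey. The XOR "wrap" is NOT real encryption; it only stands in for KMS encrypting the data key under the KMS key. Real code would call `kms.generate_data_key(KeyId=..., KeySpec="AES_256")` and encrypt the data itself with a proper cipher such as AES-GCM:

```python
import os

def toy_wrap(key: bytes, kek: bytes) -> bytes:
    # Stand-in for KMS wrapping/unwrapping; XOR is its own inverse.
    return bytes(a ^ b for a, b in zip(key, kek))

kek = os.urandom(32)            # stands in for the KMS key material
plaintext_key = os.urandom(32)  # what GenerateDataKey returns as Plaintext
encrypted_key = toy_wrap(plaintext_key, kek)  # corresponds to CiphertextBlob

# Store encrypted_key alongside the encrypted data; discard plaintext_key.
# To decrypt later, ask KMS to unwrap (here: XOR again) and reuse the key.
assert toy_wrap(encrypted_key, kek) == plaintext_key
```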

  • AWS WAF is a Web Application Firewall that helps protect web applications or APIs against common web exploits that may affect availability, compromise security, or consume excessive resources. It gives you control over how traffic reaches applications by enabling you to create security rules that block common attack patterns and rules that filter out specific traffic patterns you define.
    You can deploy AWS WAF on Amazon CloudFront as part of a CDN solution, on the ALB that fronts web servers or origin servers running on EC2, or on Amazon API Gateway for APIs.
    AWS WAF - How it Works


    To block specific countries, create a WAF geo match statement listing the countries to block; to allow traffic from the remote development team's IPs, create a WAF IP set statement that specifies the IP addresses to allow through.
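The geo match half of that combination can be sketched as a WAF (wafv2) rule; the rule name and country codes are illustrative placeholders (the IP set allow rule would be a separate rule referencing an IP set ARN):

```json
{
  "Name": "BlockListedCountries",
  "Priority": 0,
  "Statement": {
    "GeoMatchStatement": { "CountryCodes": ["XX", "YY"] }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "BlockListedCountries"
  }
}
```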

  • If using an Elastic Load Balancer (ELB) with EC2, it is recommended to configure the security group associated with the application servers to only allow incoming traffic on port 80 from the ELB (by using the security group associated with the ELB as the source). This stops direct incoming traffic on port 80 from any other source.
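As an untested sketch (both security group IDs are hypothetical), this recommendation could be applied with the AWS CLI:

```shell
# Allow port 80 into the app servers only from the ELB's security group
# (sg-0123456789abcdef0 = app servers' SG, sg-0fedcba9876543210 = ELB's SG; hypothetical IDs)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --source-group sg-0fedcba9876543210
```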

  • Amazon S3 Transfer Acceleration enables fast, easy, and secure transfer of files over long distances between a client and an S3 bucket. Transfer Acceleration takes advantage of Amazon CloudFront's globally distributed edge locations. As the data arrives at an edge location, data is routed to Amazon S3 over an optimized network path.
    Transfer Acceleration is a good solution for the following use cases:
    • There are customers that upload to a centralized bucket from all over the world.
    • Transfer gigabytes to terabytes of data on a regular basis across continents.
    • Unable to utilize all of the available bandwidth over the Internet when uploading to Amazon S3.
Multipart upload transfers parts of the file in parallel and can speed up performance. This should definitely be built into the application code. Multipart upload also handles the failure of any part gracefully, allowing those parts to be retransmitted.
Transfer Acceleration in combination with multipart upload offers significant speed improvements when uploading data.
For downloading data, CloudFront can offer performance improvements.
  • When you want to give one or more consumer VPCs unidirectional access to a certain service or group of instances in the service provider VPC, use AWS PrivateLink. A connection to the service in the service provider VPC can only be initiated by clients in the consumer VPC.
    AWS PrivateLink establishes a private connection between a Virtual Private Cloud (VPC) and AWS services. Those services can be hosted anywhere: in your own account, a different account, or a different VPC.
    The connection doesn't require an Internet gateway, NAT gateway, or any other form of network connection. The data flows entirely over a private link, which means communication happens over internal IP addresses.

  • In Amazon S3, you can grant users in another AWS account (Account B) granular cross-account access to objects owned by Account A.
    1. Create an S3 bucket in Account A.
    2. Create an IAM role or user in Account B.
    3. Give the IAM role in Account B permission to access objects from a specific bucket:
                  "Effect": "Allow",
                  "Action": [
                  "Resource": "arn:aws:s3:::AccountABucketName/*"
    4. Configure the bucket policy for Account A to grant permissions to the IAM role or user that was created in Account B. Use this bucket policy to grant a user permission to access objects in a bucket owned by Account A:
                  "Effect": "Allow",
                  "Principal": {
                    "AWS": "arn:aws:iam::AccountB:user/AccountBUserName"
                  "Action": [
                  "Resource": [
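Assembled into a complete document, the bucket policy from step 4 might look like the following; the Version field, the s3:GetObject/s3:ListBucket actions, and the bucket-level ARN are common additions and are assumptions, not values from the source:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::AccountB:user/AccountBUserName" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::AccountABucketName",
        "arn:aws:s3:::AccountABucketName/*"
      ]
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the object ARNs (`/*`), which is why both Resource entries appear.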
  • For object encryption at rest, can set the default encryption behavior on an Amazon S3 bucket so that all objects are encrypted when they are stored in the bucket. The objects are encrypted using Server-Side Encryption with either Amazon S3-managed keys (SSE-S3) or AWS Key Management Service (AWS KMS) keys.
    To deny unencrypted objects, a condition on 's3:x-amz-server-side-encryption' can be added, which allows only encrypted object uploads and can also restrict uploads to a specific KMS key.
    Amazon CloudFront can use a 301 response code to redirect HTTP requests to HTTPS and allow only secured traffic.
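A hedged sketch of such a deny statement; the bucket name is a placeholder, and the condition value assumes SSE-KMS is the required encryption type:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-bucket/*",
      "Condition": {
        "StringNotEquals": { "s3:x-amz-server-side-encryption": "aws:kms" }
      }
    }
  ]
}
```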
  • An external ID prevents another AWS customer from using a third party's role to access your account (the confused deputy problem):

Last edited:


Job function policies: AWS managed policies for job functions are designed to closely align to common job functions in the IT industry:
  • Administrator
  • Billing
  • Database administrator
  • Data scientist
  • Developer power user
  • Network administrator
  • Security auditor
  • Support user
  • System administrator
  • View-only user

Attribute-Based Access Control (ABAC):
  • Tags are a way of assigning metadata to resources using key/value pairs.
  • Permissions are granted to resources when the tag matches a certain value.

Permissions Boundaries:
  • Permissions boundaries are attached to users and roles.
  • The permissions boundary sets the maximum permissions that the entity can have.
  • Ensures that users you create have the same or fewer permissions, and do not gain more privileges when logging in as other users.

Policy Evaluation Logic:


Steps for Authorizing Requests to AWS:
  1. Authentication - AWS authenticates the principal that makes the request
    Request context:
    • Actions - the actions or operations the principal wants to perform
    • Resources - The AWS resource object upon which actions are performed
    • Principal - The user, role, federated user, or application that sent the request
    • Environment data - Information about the IP address, user agent, SSL status, or time of day
    • Resource data - Data related to the resource that is being requested
  2. Processing the request context
  3. Evaluating all policies within the account
  4. Determining whether a request is allowed or denied

Types of Policy:
  • Identity-based policies - attached to users, groups, or roles
  • Resource-based policies - attached to a resource; define permissions for a principal accessing the resource
  • IAM permissions boundaries - set the maximum permissions an identity-based policy can grant an IAM entity
  • AWS Organizations SCP - specify the maximum permissions for an organization or OU
  • Session policies - used with AssumeRole* API actions

Determination Rules:
  1. By default, all requests are implicitly denied (though the root user has full access)
  2. An explicit allow in an identity-based or resource-based policy overrides this default
  3. If a permissions boundary, Organizations SCP, or session policy is present, it might override the allow with an implicit deny
  4. An explicit deny in any policy overrides any allows

  • A 403 error is an access denied error. Therefore, there must be an issue relating to authorization to access the S3 bucket. Steps to check:
    1. Bucket policy
    2. IAM roles
    3. The most likely cause of the ECS issue is that the ECS task execution role has been changed and the tasks no longer have the permissions to access S3.
      According to best practice, the permissions that containers (tasks) require should be specified in task execution roles, not in container instance IAM roles. As the cluster has been set up according to best practice, the permission to S3 should not be specified in the container instance IAM role.
  • To deploy the application in two AWS Regions that will be used simultaneously, the objects in the two S3 buckets must remain synchronized with each other. The steps are:
    1. Create an S3 Multi-Region Access Point. Change the application to refer to the Multi-Region Access Point.
    2. Configure two-way S3 Cross-Region Replication (CRR) between the two S3 buckets.
    3. Enable S3 Versioning for each S3 bucket.
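Step 2 (two-way CRR) can be sketched as a replication configuration applied to each bucket, pointing at the other; the role ARN, account ID, and destination bucket name are placeholders:

```json
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::bucket-in-region-b" }
    }
  ]
}
```

A mirrored configuration on the second bucket (with the first bucket as destination) makes the replication two-way; S3 Versioning must already be enabled on both buckets.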
  • AWS Direct Connect (DX) makes it easy to establish a dedicated connection from an on-premises network to Amazon VPC. Using AWS DX, you can establish private connectivity between AWS and a data center, office, or colocated environment. This private connection can reduce network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
    The design that provides the MOST resilient connectivity between AWS and the on-premises data center is to install a second Direct Connect connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.
    This provides physical separation and redundancy for the DX connections and is preferable to using the same carrier, which could result in sharing the same physical pathways. The virtual private gateway has built-in redundancy, so sharing a VGW is acceptable.


  • To make the monitoring process more reliable to troubleshoot any future events due to traffic spikes:
    • If using an Aurora MySQL DB cluster, it can be configured to publish general, slow query, audit, and error log data to a log group in Amazon CloudWatch Logs.
      With CloudWatch Logs, real-time analysis of the log data can be performed, and CloudWatch can be used to create alarms and view metrics. CloudWatch Logs can also store log records in highly durable storage. To publish logs to CloudWatch Logs, the respective logs must be enabled. Error logs are enabled by default, but the other types of logs must be enabled explicitly.
    • Metrics and logs can be collected from Amazon EC2 instances and on-premises servers with the CloudWatch agent. The unified CloudWatch agent enables collecting internal system-level metrics from EC2 instances across operating systems. The metrics can include in-guest metrics, in addition to the standard metrics for EC2 instances. Logs can be collected from EC2 instances and on-premises servers running either Linux or Windows Server.
    • The X-Ray SDK can be used to trace incoming HTTP requests.
  • A security group acts as a virtual firewall that controls the traffic for one or more instances. When launching an instance, you can specify one or more security groups; otherwise, AWS uses the default security group. Rules can be added to each security group that allow traffic to or from its associated instances. The rules for a security group can be modified at any time; the new rules are automatically applied to all instances that are associated with the security group. To decide whether to allow traffic to reach an instance, AWS evaluates all the rules from all the security groups that are associated with the instance.
    The following are the default rules for a default security group:
    • Allows inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group.
    • Allows all outbound traffic.
The following are the default rules for a security group that you create:
  • Allows no inbound traffic; this can be changed to reflect the type of inbound traffic that should reach the associated instances.
  • Allows all outbound traffic; this can also be changed.
  • Use an AWS DX gateway to connect VPCs. Associate an AWS DX gateway with either of the following gateways:
    • A transit gateway when there are multiple VPCs in the same Region
    • A virtual private gateway
A virtual private gateway can also be used to extend a Local Zone. This configuration allows the VPC associated with the Local Zone to connect to a DX gateway. The DX gateway connects to a DX location in a Region. The on-premises Data Center (DC) has a DX connection to the DX location.
A DX gateway can be used to add a redundant DX connection in the same Region. It can also provide connectivity to other Regions through the same pair of DX connections as the company expands into other Regions.


IAM Policy Structure:
  • An IAM policy is a JSON document that consists of one or more statements
        "Effect":"effect", > The effect element can be Allow or Deny
        "Action":"action", > The Action element is the specific API action for which are granting or denying permission
        "Resource":"arn", > The Resource element specifies the resource that's affected by the action
        "Condition":{ > The Condition element is optional and can be used to control when your policy is in effect
  • "Action": "*", > The Administrator Access policy uses wildcards (*) to allow all actions
    "Resource": "*" > on all resources

  • "Action": ["ec2:TerminateInstances"], > The specific API action is defined
    "Condition": {
    .."NotIpAddress": { > The effect is to deny the API action if the IP address is not in the specified range

  • "Principal": { > Can tell this is a resource-based policy as it has a principal element defined
    .."AWS": "*"
    "Action": [
    .."elasticfilesystem:ClientWrite" > The policy grants read and write access to an EFS file systems to all IAM principals ("AWS": "*")
    "Condition": {
    .."Bool": {
    ...."aws:SecureTransport": "true" > Additionally, the policy condition element requires that SSL/TLS encryption is used

  • "Condition": {"StringLike": {"s3: prefix": ["${aws:username}/*"]}} > A variable is used for the s3: prefix that is replaced with the user's friendly name
    "Effect": "Allow", > The actions are allowed
    "Resource": ["arn:aws:s3::mybucket/${aws:username}/*"] > only within the user's folder within the bucket

  • Amazon RDS Read Replicas provide enhanced performance and durability for RDS DB instances. They make it easy to elastically scale out beyond the capacity constraints of a single DB instance for read-heavy DB workloads. One or more replicas of a given source DB instance can be created to serve high-volume application read traffic from multiple copies of the data, thereby increasing aggregate read throughput. Read replicas can also be promoted when needed to become standalone DB instances. Read replicas are available in Amazon RDS for MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server as well as Amazon Aurora.
    Amazon RDS creates a second DB instance using a snapshot of the source DB instance. It then uses the engine's native asynchronous replication to update the read replica whenever there is a change to the source DB instance. The read replica operates as a DB instance that allows only read-only connections; applications can connect to a read replica just as they would to any DB instance. Amazon RDS replicates all databases in the source DB instance.
    A read replica can also be promoted if the source DB instance fails, and a read replica can be set up with its own standby instance in a different AZ. This functionality complements the synchronous replication, automatic failure detection, and failover provided with Multi-AZ deployments.

  • The Recovery Time Objective (RTO) defines how quickly a service must be restored and a Recovery Point Objective (RPO) defines how much data it is acceptable to lose. For example, an RTO of 30 minutes means the service must be running again within half an hour and an RPO of 5 minutes means no more than 5 minutes' worth of data can be lost.
    Application tiers use Amazon EC2 instances and are stateless. The data tier consists of a 30TB Amazon Aurora DB. To achieve this example, a hot standby of the EC2 instances is required. With a hot standby, a minimum number of application/web servers is kept running and can be scaled out as needed.
    For the data tier, an Amazon Aurora cross-Region Replica is the best way to ensure that <5 mins of data is lost. An Aurora Read Replica can be promoted to a standalone DB cluster, which would be performed in the event of a disaster affecting the source DB cluster.

  • It's not possible to change or modify the IP address range of an existing VPC or subnet. However, you can do one of the following:
    • Add an additional IPv4 CIDR block as a secondary CIDR to the VPC.
    • Create a new VPC with the preferred CIDR block and then migrate the resources from the old VPC to the new VPC.
      For example, there are 2 AZs with 2 subnets and, to add another AZ:
      1. Update the Auto Scaling group to use the AZ2 subnet only.
      2. Delete and re-create the AZ1 subnet using half the previous address space.
      3. Adjust the Auto Scaling group to also use the new AZ1 subnet.
      4. When the instances are healthy, adjust the Auto Scaling group to use the AZ1 subnet only.
      5. Remove the current AZ2 subnet, and create a new AZ2 subnet using the second half of the address space from the original AZ1 subnet.
      6. Create a new AZ3 subnet using half the original AZ2 subnet address space, then update the Auto Scaling group to target all three new subnets.
  • Amazon Aurora Global DB is designed for globally distributed applications, allowing a single Amazon Aurora DB to span multiple AWS regions. It replicates data with no impact on DB performance, enables fast local reads with low latency in each region, and provides disaster recovery from region-wide outages.

    This solution will meet the RTO of less than 5 mins and RPO of less than 1 min.
    A read replica cannot be used as a writable DB. It is possible to promote a read replica, but this may take more than 5 minutes.

  • To improve the reliability of the application by addressing the issues of scalability, availability, and performance:
    • Containerizing the application will make it easier to deploy and manage on AWS.
    • Migrating the application to an Amazon ECS cluster will allow the application to run on a fully managed container orchestration service.
    • Using the AWS Fargate launch type for the tasks that host the application will enable the application to run on a serverless compute engine that is automatically provisioned and scaled by AWS.


    • Creating an Amazon EFS file system for the static content will provide a scalable and shared storage solution that can be accessed by multiple containers. Mounting the EFS file system to each container will eliminate the need to copy the static content to each EBS volume and ensure that the content is always up to date.
    • Configuring AWS Application Auto Scaling on the ECS cluster will enable the application to scale up and down based on demand or a predefined schedule. Setting the ECS service as a target for the ALB will distribute the incoming requests across multiple tasks in the ECS cluster and improve the availability and fault tolerance of the application.
    • Migrating the DB to Amazon Aurora MySQL Serverless v2 with a reader DB instance will provide a fully managed, compatible, and scalable relational DB service that can handle high throughput and concurrent connections. Using a reader DB instance will offload some of the read load from the primary DB instance and improve the performance of the DB.


AWS Academy:
  1. [Lab01] Create an EC2 instance & connect via SSH using PuTTY.

  2. A computer program is A set of instructions that tells the system how to perform a specific task.
    • The motherboard in a computer system connects all the components.

    • Motherboard and Network card are a physical part of a computer system.

    • Random Access Memory (RAM) holds data temporarily and can be accessed quickly.

    • iOS and Amazon Linux 2 are examples of operating systems.
  3. Web server receives HyperText Transfer Protocol (HTTP) requests from clients and uses HTTP to send pages or resources back to the requester.
    • The benefits of virtualization are Multiple Virtual Machines (VMs) can be run on a single physical computer and VMs can reduce the amount of wasted computing resources from underutilized servers.

    • Software Development Life Cycle (SDLC): Plan > Analyze > Design > Develop > Test > Implement > Maintain.

    • MySQL and Oracle are examples of a DB Management System (DBMS).

    • Amazon EC2 provides the ability to host VMs.
  4. Job titles for development team roles: Project manager, Analyst, Quality assurance, Software developer, & DB administrator.
    • Software developer role writes code and runs initial tests to confirm that the product works.

    • Quality assurance engineer role maintains a list of different types of tests and runs all tests to verify that each product release works.

    • The DB administrator is responsible to Configure access to and secure the data.

    • Project manager is responsible to Assign tasks to team members and develop a project plan.
  5. Cloud computing delivers Information Technology (IT) resources on demand.
    • A Software as a Service (SaaS) example in cloud computing is one where the third-party vendor manages the procurement of resources and the application. The developer is responsible for the application content.

    • A software developer that focused on deploying code that runs on the AWS Cloud should use Platform as a Service (PaaS).

    • Infrastructure as a Service (IaaS) gives the user more control over IT resources, such as access to networking features, computers (either virtual or on dedicated hardware), and data storage space.

    • Hybrid cloud has an on-premises data center and also runs a portion of its infrastructure in the cloud.

    • Benefits of trading fixed expense for variable expense are paying only when computing resources are consumed, and paying only for how much is consumed.

    • The concept of massive economies of scale in terms of cloud computing mean savings can be passed on to customers when more customers use AWS.

    • An advantage of cloud computing is Increase speed and agility.

    • A benefit of developing in a cloud environment is The time needed to access computing resources is reduced.

    • The phrase 'go global in minutes' means users can deploy their solution in multiple AWS Regions around the world with a few clicks.
  6. SaaS gives a user the ability to immediately use a service without having to run or manage any resources.
    • AWS is a secure cloud services provider that offers many services to help businesses scale and grow.

    • AWS provides Compute, Storage services, etc.

    • Customers can create and manage resources in the AWS Cloud by AWS Management Console, AWS CLI, and AWS Software Development Kits (SDKs).

    • AWS documentation provide Tutorials and projects.
  7. AWS does not charge for inbound data transfer or for data transfer between services in the same AWS Region.
    • Reserved Instances are available in three options: All Upfront Reserved Instance, Partial Upfront Reserved Instance, and No Upfront payments Reserved Instance.

    • The AWS Pricing Calculator is a tool that helps Estimate AWS monthly service costs, Identify opportunities for cost reduction, Gives customers the ability to create estimates for their AWS use cases, and Use templates to model solutions to compare services and deployment models.

    • Fundamental drivers of cost with AWS are Compute, Storage, and Data transfer.

    • AWS Pricing Calculator can also be used to create estimates, share an estimate by a unique link, and revisit any estimates directly through a browser.
  8. Scalability capability is the AWS Global Infrastructure designed and built to deliver.
    • An AZ is One or more data centers that are built with fault isolation.

    • An AWS Region is A geographical area in the world.

    • Each Region is made up of two or more AZs.

    • Amazon CloudFront is A global Content Delivery Network (CDN) that delivers content to end users with reduced latency.

    • AWS storage services include Amazon S3, EBS, Elastic File System (EFS), and S3 Glacier.

    • AWS Identity and Access Management (IAM) use to create new users for their AWS account and assign permissions to these users.

    • AWS categories: AWS Cost Management; Compute; Containers; DB; Management and Governance; Networking and Content Delivery; Security, Identity, and Compliance; and Storage.

    • Amazon EC2 is a compute service that makes it possible for users to create VMs in the cloud.

    • Amazon VPC can use to provision a logically isolated section of the AWS Cloud to establish a virtual network.
  9. According to the AWS shared responsibility model, AWS is responsible for Security of the cloud.
    • In addition to security, the AWS shared responsibility model also addresses Compliance.

    • An example of a customer responsibility is ensuring that users have entered a user ID and password before they use an application.

    • IaaS requires the customer to be more involved in managing infrastructure security.

    • SaaS requires AWS to manage infrastructure security in its totality.
  10. Object key property uniquely identifies an object in an Amazon S3 bucket.
    • Amazon S3 designed to provide 11 9s (99.999999999) of durability.

    • S3 Glacier Deep Archive is the lowest-cost Amazon S3 solution for long-term storage, with retrieval options ranging from 12 to 48 hours.

    • Examples that do not incur a cost when using Amazon S3 include transferring data:
      into Amazon S3, out of Amazon S3 into Amazon CloudFront, and data larger than 5 MB into Amazon S3.

    • Amazon S3 can be used for static web hosting and Bucket names are universal, and they must be unique across all existing bucket names.
  11. [Lab02] Introduction to Amazon EC2:
    1. Launching EC2 instance
    2. Monitor Instance
    3. Update Security Group and Access the Web Server
    4. Resize Instance: Instance Type and EBS Volume
    5. Test Termination Protection
  12. Amazon EC2 provides VMs that run on AWS.
    • A dev wants to run an application that loads a large amount of data into memory on an Amazon EC2 instance, and should choose a Memory optimized instance type.

    • The data on an instance store volume persists even if the instance is rebooted. However, the data does not persist if the instance is stopped, hibernated, or terminated.

    • Security groups support adding rules that allow traffic from specified sources and allow all outbound traffic by default.

    • To lower costs of an application that is fault tolerant and tolerates interruptions, can use Spot Instances pricing model.
  1. .
  2. .
  3. .
  4. .
  5. [Lab03] Introduction to an Amazon Linux Amazon Machine Image (AMI):
    1. Use SSH to connect to an Amazon Linux EC2 instance
    2. Explore the Linux man pages
  6. Linux operating system is open source.
    • Kernel allocates the Linux memory that is used to run applications.

    • A Linux daemon is a program that provides a service and runs in the background.


  1. .
  2. .
  3. .
  4. .
  5. .
    • man hostname command enables to review the manual page for the hostname command.

    • Kernel, Complementary tools, and applications are main components of a Linux distribution.
  6. [Lab04] Linux Command Line:
    1. Run familiar commands:
      1. whoami > display current username / account that are logged in.
      2. hostname -s > display a shortened version of computer’s host name.
      3. uptime -p > display the uptime of the system in an easily readable format.
      4. who -H -a > display information about the users who are logged in and some additional information.
      5. TZ=America/New_York date
        TZ=America/Los_Angeles date > identify the date and time of alternate locations in the world.
      6. cal -j > show the day of the year (e.g., 14 Oct 2023 = 287th day of the year)
      7. cal -s
        cal -m > display alternate views of the calendar (s = from Sunday, m = from Monday)
      8. id ec2-user > see unique ID and group information about specific user.
    2. Improve workflow through history and search:
      1. history > see a list of all of the commands that were used.
      2. ctrl+r > bring up a reverse history search.
      3. !! > rerun the most recent command.
  7. Bash is the name of the default Linux shell.
    • TAB keyboard key automatically completes commands in the default Linux shell.

    • UP arrow keyboard key retrieves the last command that entered in the default Linux shell.
  8. [Lab05] Managing Users and Groups:
    1. sudo useradd arosalez
      sudo passwd arosalez > Create User
      sudo cat /etc/passwd | cut -d: -f1 > validate
    2. sudo groupadd Sales > Create Group
      cat /etc/group > verify
    3. sudo usermod -a -G Sales arosalez > add the user to the group
      cat /etc/group > verify
    4. su arosalez > Log in using the new user
      [ec2-user@..] sudo cat /var/log/secure > a sudo attempt for a non-permitted action was logged here
  9. Passwords are set with the passwd command and Users can reset their own passwords.
    • A standard user can Control any file that the user owns and access any files that the user has permissions for.

    • su command gives full administrative privileges and allows to switch to the root user's environment.

    • usermod command allows to add a user to a group.

    • A best practice for using the root account is Always log in as a standard user and switch the user to the root account only when must elevate permissions.
  10. [Lab06] Editing Files:
    1. run the Vim tutorial
      Learning VIM while playing a game:
    2. edit a file in Vim
      dd > delete the entire line
      u > undo the last change
      :w > save changes without quitting
    3. edit a file in nano
  11. Vim is the default text editor for virtually all Linux distributions.
    • Press ESC keyboard key to exit the Insert mode and return to the Command mode.

    • Use :wq command to save the file and exit the editor.

    • Use :q! command to exit the editor without saving the file.

    • gedit Linux text editor has a Graphical User Interface
  12. [Lab07] Working with the File System:
    1. Create a Folder Structure
    2. Delete and reorganize folders
  13. Linux File names are case sensitive and Extensions are optional.
    • /home directory contains a user's personal files by default.

    • pwd command displays the absolute path to the user's current location in the file system.

    • cd command changes the current working directory to a different directory.

    • ls -la /var/log > see a list of all files in the /var/log directory in long format.
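The pwd, cd, and ls commands above can be sketched together; the folder names are illustrative:

```shell
# Build a small folder tree under a temporary directory, then navigate it
demo_dir=$(mktemp -d)
mkdir -p "$demo_dir/CompanyA/reports"
cd "$demo_dir/CompanyA/reports"
pwd                  # prints the absolute path of the current directory
cd ..                # move up one level to CompanyA
ls -la .             # long listing, including hidden entries
current=$(pwd)       # capture where we ended up
```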
  14. [Lab08] Working with Files:
    .tar.gz is more efficient than .zip
    1. Create a backup:
      tar -c(sv)pzf backup.CompanyA.tar.gz (-P) CompanyA > back up the entire CompanyA folder structure recursively
    2. Log the backup:
      echo "25 Aug 25 2021, 16:59, backup.CompanyA.tar.gz" | sudo tee SharedFolders/backups.csv > the tee command use to write information both in the terminal and in a file.
    3. tar -xvzf filename.tar.gz > unzip tar.gz
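A minimal sketch of the backup-and-restore cycle with tar (folder and file names are illustrative):

```shell
# Create a folder with one file, archive it, then extract into a new location
workdir=$(mktemp -d)
mkdir -p "$workdir/CompanyA/Finance"
echo "q1 data" > "$workdir/CompanyA/Finance/q1.csv"
cd "$workdir"
tar -czpf backup.CompanyA.tar.gz CompanyA     # create a gzip-compressed archive, preserving permissions
mkdir restore
tar -xzf backup.CompanyA.tar.gz -C restore    # extract into a different directory
```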
  15. The find command can search by: Owner, File name, File size, and File modification date.
    • cksum command can be used to check whether a downloaded file is corrupted.

    • diff command can use to compare and display the outputs of two different files.

    • Symbolic links point to a file name (a hard link), instead of the actual file data.

    • tar -cvf tarball.tar command enables a user to create a tarball.
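A short sketch of find, cksum, and diff together (file names and contents are illustrative):

```shell
# Two identical files: find one by name, compare checksums, then diff them
workdir=$(mktemp -d)
printf 'hello\n' > "$workdir/a.txt"
printf 'hello\n' > "$workdir/b.txt"
found=$(find "$workdir" -name 'a.txt')            # search by file name
sum_a=$(cksum "$workdir/a.txt" | cut -d' ' -f1)   # CRC checksum of file a
sum_b=$(cksum "$workdir/b.txt" | cut -d' ' -f1)   # CRC checksum of file b
diff "$workdir/a.txt" "$workdir/b.txt"            # prints nothing when files match
same=$?                                           # diff exits 0 for identical files
```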
  16. [Lab09] Managing File Permissions:
    1. Change file and folder ownership
    2. Change permission modes
    3. Assign permissions
  17. chmod command allows the user to set permissions for files and directories.
    • chmod 757 filename is an example of Absolute mode.
      chmod g+w filename is an example of Symbolic mode.

    • ls -l command displays the permissions for files and directories.

    • chown command is used to change the user or group of a file or directory.

    • Least privilege means giving the least number of users the least amount of file access first, and granting more permissions only when the user has a need.
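Absolute and symbolic chmod modes can be sketched as follows (the file name is illustrative; stat -c assumes GNU coreutils, as on Amazon Linux):

```shell
# Set permissions first in absolute mode, then adjust in symbolic mode
workdir=$(mktemp -d)
touch "$workdir/notes.txt"
chmod 640 "$workdir/notes.txt"      # absolute mode: rw-r-----
chmod g+w "$workdir/notes.txt"      # symbolic mode: add group write -> rw-rw----
mode=$(stat -c '%a' "$workdir/notes.txt")
ls -l "$workdir/notes.txt"          # long listing shows the permission string
```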
  18. [Lab10] Managing Processes:
    1. Create a log file named processes.csv from ps -aux and omit any processes that contain root user:
      sudo ps -aux | grep -v root | sudo tee SharedFolders/processes.csv
    2. List the processes using the top command
    3. Create a Cron Job:
      sudo crontab -e
      0 * * * * cd /home/ec2-user/companyA/ && ls -la $(find .) | sed -e 's/..csv/#####.csv/g' > /home/ec2-user/companyA/SharedFolders/filteredAudit.csv => run every hour at minute 0 and add ##### to the csv files
      sudo crontab -l > to validate
  19. ps -ef | grep <process_name> can use to retrieve the Process ID (PID).
    • ps and pidof commands list the PIDs for the running processes on a Linux host.

    • crontab and at commands enable the administrator to schedule automatic tasks.

    • The -9 (SIGKILL) signal will immediately stop a process with NO graceful exit.
      kill -9 <PID>
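A small sketch of SIGKILL's effect on a background process (the sleep duration is arbitrary):

```shell
# Start a long-running background process, then stop it immediately with SIGKILL
sleep 60 &
pid=$!
kill -9 "$pid"             # SIGKILL: no graceful exit, cannot be trapped by the process
wait "$pid" 2>/dev/null    # reap the child; exit status for a signaled process is 128+9
status=$?
```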

    • df command can use to check disk space on a Linux host.
  20. [Lab11] Managing Services - Monitoring:
    1. sudo systemctl status httpd.service > Check the Status of the httpd Service
    2. Monitoring a Linux EC2 instance:
      ./ & top > simulates a heavy workload on the EC2 instance
      #! /bin/sh
      # set -x
      stress --cpu 8 --io 4 --vm 2 --vm-bytes 128M --timeout 360s
      By default, every EC2 instance is monitored by CloudWatch; its dashboard displays several metrics, such as CPU Utilization, DiskReadBytes, DiskReadOps, DiskWriteBytes, DiskWriteOps, and NetworkIn, in graphs.
  21. AWS customers can use Amazon CloudWatch to monitor AWS services and resources.
    • systemctl list-units --type=service --state=active command lists all active services.

    • vmstat command displays the usage of virtual memory.
      lshw > List hardware
      fdisk > List and modify partitions on the hard drive.

    • The df -h command displays the amount of free disk space in human-readable units (MB/GB).
      df -a > displays all file systems
      df -T > displays the file system type

    • free command monitors how much physical memory is available.
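    • A quick sketch of the df variants above (exact output columns depend on the distribution):

```shell
df -h .              # human-readable free space for the current file system
df -aT | head -n 3   # -a: all file systems, -T: include the Type column
```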
Last edited:


A short ad break for the AWS Partner course:


Sale (Business):

  • Examples demonstrating increased staff productivity after migrating to AWS:
    • Infrastructure staff eliminates time spent on hardware installation and maintenance.
    • Server admins can manage more VMs after migration.
  • Customers' cloud efficiency will improve through various optimizations, even with increased cloud usage over time.

  • Common objections to cloud adoption are:
    • Loss of control or visibility
    • Increased cost
    • Skills gap
  • The greatest business benefits when modernizing:
    • Increased efficiency of developers
    • Increased business agility
    • Improved Return On Investment (ROI) and reduction of Total Cost of Ownership (TCO)
  • The primary teams involved in co-selling with AWS are:
    • AWS Partner team
    • AWS Marketplace team
    • AWS Sales team
  • Co-selling with AWS is a sales motion where AWS Partners and AWS work together on a customer opportunity.

  • The best practices when engaging AWS teams are:
    • Demonstrate solution alignment to customer objectives.
    • Own the opportunity and communicate opportunity status often.
    • Articulate unique value, such as proven solutions and industry expertise.
  • Operational resilience contributes to business value on AWS through:
    • Increased availability
    • Reduced unplanned outages
    • Improved security
  • The AWS Cloud Value Framework pillars that tend to drive the most substantial business value for customers over time are Operational resilience, Business agility, and Staff productivity.

  • In AWS Sales, Greenfield refers to a customer that is in the early phases of AWS adoption.

  • A Partner-originated and led opportunity ready to be submitted to the APN Customer Engagement (ACE) Program when:
    • It is an active opportunity with a target close date in the future.
    • There is a clear project description with how your company's solution(s) addresses customer requirements.
    • There is customer consent to share the opportunity details with AWS.
  • The Cost savings pillar of the AWS Cloud Value Framework is often the initial focus for customers considering a cloud migration.

  • AWS security standards and compliance certifications should be discussed when a customer has a concern about data security.

  • Business agility contributes to business value on AWS through increased experimentation and getting products and features to market faster.

  • Cloud computing is On-demand delivery of IT resources over the internet with pay-as-you-go pricing.

  • Security and compliance of the cloud and in the cloud are a responsibility shared between AWS and the customer.

  • Government, Education, Nonprofit, and Healthcare segments are considered Public Sector by AWS Sales.

  • Customers can realize substantial business value by using AWS services. The four pillars of the AWS Cloud Value Framework are Cost savings, staff productivity, operational resilience, and business agility.

  • The AWS cost optimization topic becomes an opportunity for discussion when a customer is concerned about the cost of the cloud.

  • When referring to vendor lock-in, switching costs are usually the underlying concern.
  • Amazon EBS is a Block storage for Amazon EC2 instances.

  • Security Group is a Virtual firewall providing security at the instance level.

  • Amazon EC2 Auto Scaling is a Service maintaining the availability of resources by increasing or decreasing capacity.

  • An AZ is an isolated location within an AWS Region, designed to facilitate high availability.

  • Replatform is migrating an on-premises application to the cloud while making targeted cloud optimizations.

  • Horizontal scaling is Adding more compute resources to an application, instead of more power to compute resources.

  • Rightsizing instances is Reviewing deployed resources, seeking opportunities to downsize instance types.

  • The Well-Architected Framework is A critical resource to help design solutions following best practices.

  • Proof Of Concept (POC) is A small-scale, practical example of the proposed solution that will run the customer's app.

  • Whiteboard markers are something you should remember to bring to a customer meeting.

  • Connect, Condense, Continue is A technique for handling customer objections.

  • Research is something you should do before a discovery meeting.

  • Minimum Viable Product (MVP) is A functional product or solution with just enough features to satisfy requirements.

  • Serverless is Services providing functionality without configuring backend components.

  • The AWS Well-Architected Tool is used to review the state of current workloads and compare them to AWS architectural best practices.

  • Containers are packages with application code, configuration, and dependencies in a single object.

  • APN Partner Central provides APN Partners with the tools and content they need to grow their businesses on AWS.

  • Services Path: Deliver consulting, professional, managed, and value-add resale services.

  • Hardware Path: Develop devices that work with AWS.

  • AWS Training and Certification helps learners deepen their AWS knowledge and skills.
3x - AWS:
  1. Basics you should know before getting started:
  2. 101 self-study edition: What is AWS:
  3. Knowledge intensive review, Lesson 1: AWS Certified Cloud Practitioner:
    AWS Backup:
  4. How to create an on-demand EC2 snapshot:
  5. How to schedule EC2 snapshots:
  6. How to restore from an EC2 snapshot:
    AWS Simple Systems Manager (SSM):
  7. How to run commands from the Management Console without logging in to Windows:
  8. Creating SSM:
  9. AWS IAM: Using Switch Role:
    Amazon EFS:
  10. How to create a file system and connect it to EC2:
  11. Configuring the WordPress content folder:
  12. Configuring MySQL (MariaDB):
    Amazon SES:
  13. Basic setup for use:
  14. Basic SMTP configuration:
  15. Sending SMTP email through WordPress:


  1. .
  2. .
  3. [Lab12] Software Management:
    1. Update Linux machine:
      sudo yum -y check-update > To query repositories for available updates
      sudo yum update --security > To apply security-related updates
      sudo yum -y upgrade > To update packages
      -y > answers yes automatically instead of prompting
    2. Roll back a package:
      Use sudo yum history list > to list what has been installed and updated
      sudo yum history info <ID> > shows the begin time, begin rpmdb, end time, end rpmdb, user, return-code, and command line.
      sudo yum -y history undo <ID> > Rolling back to the ID in the history list
    3. Install the AWS CLI on Linux, create an access key, and configure it for use:
      python3 --version > To verify that Python is installed
      pip3 --version > To see if the pip package manager is already installed
  4. Pre-compiled code, Documentation, and Installation instructions are part of a package.
    • The yum update command updates a Red Hat Package Manager (RPM) package.

    • The yum list installed | grep command enables a user to display specific packages.

    • Online vendor site, Internal server, and Local hard disk drive locations can be used for storing repositories.

    • wget is a common utility for downloading files from a server.
  5. [Lab13] Managing Log Files:
    • Review secure log files:
      sudo less /var/log/secure > shows where the user was trying to access from (IP address), whether they failed authentication, and which port was used.
      sudo lastlog > To view the last login times of all the users on the machine
  6. In Linux distributions, log files are normally stored in /var/log.
    • Syslog Emergency (emerg) level shows the highest level of severity for an event.

    • Concerns with an increase in the size of a log file: file systems on a disk can run out of space, and an end user's personal information might have been stored in the log file.

    • The head command displays the first 10 lines of a file, and tail displays the last 10 lines.

    • lastlog command enables the admin to show the users who logged in recently.
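    • The head and tail defaults can be checked with a throwaway file:

```shell
seq 1 20 > nums.txt     # 20 numbered lines
head nums.txt | wc -l   # 10: head shows the first 10 lines by default
tail nums.txt | wc -l   # 10: tail shows the last 10 lines by default
tail -n 3 nums.txt      # -n overrides the default line count
```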
  7. [Lab14] Working with Commands:
    1. hostname | tee file1.txt > The tee command outputs the hostname to the screen (in the shell) and the designated file, which is file1.txt.
    2. sort test.csv > sorts the list, by default in alphabetical or numerical order.
    3. cut -d ',' -f 1 cities.csv > cut sections from lines of text by character, use the -d (delimiter) option, and the -f (field) option. cut: must specify a list of bytes, characters, or fields.
    4. sed 's/,/./' test.csv > replaces the first comma (,) on each line with a period (.).
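    • The four commands above can be tried on a tiny sample file (the city data and file names are made up):

```shell
printf 'Oslo,Norway\nLima,Peru\n' > cities.csv
sort cities.csv                       # alphabetical: the Lima line sorts first
cut -d ',' -f 1 cities.csv            # -d sets the delimiter, -f picks field 1 (the city)
sed 's/,/./' cities.csv               # first comma on each line becomes a period
echo "report ready" | tee file1.txt   # tee prints to the shell AND writes file1.txt
```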
  8. Bash metacharacters - wildcard:
    * (star) > Any number of any character
    ? (hook) > Any one character.
    ; > chain commands together > cd .. ; rm *.csv ; ls *.csv > three commands
    • grep fail /var/log/secure.log > finds whether the log file contains the word 'fail'.

    • sed command is A non-interactive text editor.

    • > and >> are output redirectors that redirect the standard output.
      command >> info.txt => appends the result to the existing file.
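    • A short sketch of the metacharacters and redirectors (throwaway file names):

```shell
touch aa.csv ab.csv notes.txt
ls a*.csv                 # * matches any number of characters
ls a?.csv                 # ? matches exactly one character
echo first > info.txt     # > creates or overwrites the file
echo second >> info.txt   # >> appends to the existing file
cat info.txt              # two lines: first, second
```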
  9. [Lab15] The Bash Shell:
    1. alias backup='tar -cvzf ' > Create an alias for a backup operation
      -c > create a new archive.
      -v > is the verbose option to display what is put into the archive.
      -z > compresses the archive into gzip format (.gz).
      -f > archives the files (tar can also archive devices).
      backup backup_companyA.tar.gz CompanyA > use the backup alias to back up the CompanyA folder
    2. PATH=$PATH:/home/ec2-user/CompanyA/bin > add the /home/ec2-user/CompanyA/bin folder to the PATH variable so that its commands can be run from any path
      echo $PATH > to verify
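    • The PATH change can be verified with a hypothetical script dropped into a local bin folder (greet is a made-up command name):

```shell
mkdir -p bin
printf '#!/bin/bash\necho hello\n' > bin/greet   # hypothetical one-line script
chmod 755 bin/greet
PATH=$PATH:$PWD/bin   # append the folder to PATH for this session
greet                 # found via PATH lookup; prints hello
echo "$PATH"          # verify the folder is listed
```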
  10. A variable is a value that's substituted into the script or command.
    • $HOME > HOME is the variable.

    • echo can display information as the Bash script runs.

    • To change or add aliases permanently, edit the .bashrc configuration file.

    • Bash use # character to ignore all input that follows it.
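    • A two-line illustration of variables, echo, and the comment character:

```shell
# Everything after '#' is ignored by Bash
greeting="Hello"          # a variable holds a value that is substituted later
name="world"
echo "$greeting, $name"   # echo displays information as the script runs
```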


  1. [Lab16] Bash Shell Scripts:
    1. touch > create a generic shell script called
    2. Code:
      #!/bin/bash > make a bash script executable
      DAY="$(date +%Y_%m_%d)" > create a variable for the current date
      BACKUP="/home/$USER/backups/$DAY-backup-CompanyA.tar.gz" > create a variable for the backup file for the day
      tar -csvpzf $BACKUP /home/$USER/CompanyA
    3. sudo chmod 755 > change the file permissions to make it executable
    4. ./ > run
      This type of script can be scheduled via cron to create a daily backup of the folder. Other commands can also be used to copy this archive to other servers.
  2. A Bash shell script is used to automate a repetitive task and ensure that the task runs correctly and consistently.
    • sum=$(($2 + $4)) > $2 and $4 are the arguments

    • exit command causes a Bash script to stop running and exit the shell.

    • if - else conditional statement defines two courses of action.
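    • The $2/$4 arguments example can be wrapped in a tiny script (sum.sh is a made-up name):

```shell
cat > sum.sh <<'EOF'
#!/bin/bash
# Add the 2nd and 4th command-line arguments
sum=$(($2 + $4))
echo "$sum"
exit 0   # exit stops the script and leaves the shell
EOF
chmod 755 sum.sh
./sum.sh x 3 y 4   # $2 is 3, $4 is 4; prints 7
```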
  3. [Lab17] Bash Shell Scripting:
    • Code:
      #!/bin/bash
      # Define your name and the directory where you want to create the files
      # (placeholder values; the lab elided the actual assignments)
      yourName="yourName"
      directory="$HOME/CompanyA/files"
      max_number=0
      # Check if the directory exists, create it if not
      if [ ! -d "$directory" ]; then
        mkdir -p "$directory"
      fi
      # Find the maximum number in existing file names
      for file in "$directory/$yourName"*; do
        if [[ -f "$file" ]]; then
          number=$(echo "$file" | grep -oE '[0-9]+' | tail -n1)
          if [[ $number -gt $max_number ]]; then
            max_number=$number
          fi
        fi
      done
      # Create the next batch of 25 files with increasing numbers
      # File names are <yourName><number>: <yourName><number+1>, <yourName><number+2>, and so on
      for ((i = max_number + 1; i <= max_number + 25; i++)); do
        filename="$directory/$yourName$i"
        touch "$filename"   # creates 25 empty (0 KB) files
        echo "Created $filename"
      done
      echo "Created 25 files with numbers from $((max_number + 1)) to $((max_number + 25))"
  4. A computer network is a collection of computing devices that are logically connected to communicate and share resources.
    • In the client-server computing model, The server responds to a request from the client.

    • On Layer 2 of the Open Systems Interconnection (OSI) model, data is considered a frame.

    • A Media Access Control (MAC) address is a unique physical identifier that's assigned by the manufacturer. The Network Interface Card (NIC) uses it to identify data about the sender and the receiver.

    • Switch transmits incoming data to the receiving device by using only MAC addresses.
  5. A Local Area Network (LAN) connects nodes and hosts within a geographically limited area. A Wide Area Network (WAN) connects multiple LANs to create a more expansive network that can cover large geographical areas, such as cities.
    • Isolated network > Amazon VPC: Route tables, Internet gateway
      Network segment > Subnet
      Firewall > Security Group and Network Access Control List (NACL)
      Server > EC2 instance

    • Star-bus topology is the most common hybrid topology used today.

    • A connectionless protocol sends a message to the destination without ensuring that the destination is available.

    • A connection-oriented protocol creates a session between the sender and the receiver.
  6. A static IP address should be assigned to a device that others often use like A printer.
    • The network's broadcast address is the last IP address of the IP address range.

    • Private IP addresses are used to separate a network from the internet and from other networks.

    • There are 32 bits in an IPv4 address.

    • When a port number is combined with an IP address, it uniquely identifies a source or a destination for data communication.
  7. Amazon VPC enables to create a private network in the AWS Cloud.
    • An IP address range must be specified when creating a VPC.

    • A database benefits from being in a private subnet.

    • A route table determines where network traffic is directed within the VPC.

    • A security group acts as a stateful firewall that blocks all inbound traffic by default.
  8. Creating subnets helps to reduce network traffic and divides a network into smaller, more efficient subnets.
    • IP address Class A provides the most hosts.

    • /31 Classless Inter-Domain Routing (CIDR) notation defines the smallest range of IP addresses.

    • A /24 CIDR block has 256 IP addresses.

    • A subnet mask defines which section of an IP address identifies the network and which section of an IP address identifies the hosts.
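    • The address counts follow from 2^(32 - prefix length), which shell arithmetic can confirm:

```shell
# Addresses in an IPv4 CIDR block = 2^(32 - prefix length)
prefix=24
echo $(( 1 << (32 - prefix) ))   # 256 addresses in a /24
prefix=31
echo $(( 1 << (32 - prefix) ))   # 2 addresses: the smallest range
```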
  9. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are used to secure web applications.
    • HTTP is a client-server protocol whose default port number is 80.

    • A TCP handshake includes Synchronize (SYN), SYN/ACK, and Acknowledge (ACK) messages.

    • tcpdump use to analyze packets to view TCP/IP and other packet information that is being transmitted or received over the network.

    • Remote Desktop Protocol (RDP) is popular with remote support technicians and is a proprietary Microsoft protocol.
  10. Devices communicate with cloud services by using various technologies and protocols. Examples include: Wi-Fi and broadband internet, Broadband / Narrow-band cellular data, Long-Range Wide Area Network (LoRaWAN), Proprietary Radio Frequency (RF) communications.
    • AWS IoT Core Communication protocols include MQTT, Secure Hypertext Transfer Protocol (HTTPS), MQTT over WebSocket Secure (WSS), and LoRaWAN.

    • The goal of the Internet of Things (IoT) is Make devices that can connect to the internet to report data, be monitored, or be remotely controlled.

    • Solutions facilitate enterprise mobility for businesses that support remote-working options are Bring Your Own Device (BYOD) and Mobile Device Management (MDM).

    • The Amazon WorkSpaces service supports remote work and remote training by providing virtual Microsoft Windows and Linux desktops that can be accessed anywhere and from any device.


  1. [Lab18] Internet Protocols - Public and Private IP addresses:
    1. OSI Model > AWS infrastructure
      Layer 7: Application (how the end user sees it) > Application
      6: Presentation (translator between layers) > Web / application Servers
      5: Session (session establishment, security) > EC2 instances
      4: Transport (TCP, flow control) > Security group, NACL
      3: Network (Packets which contain IP addresses) & 2: Data Link (Frames which contain physical MAC addresses) > Route Tables, IGW, Subnets
      1: Physical (cables, physical transmission bits and volts) > Regions, AZs
    2. The instance can access the internet because it has a public IPv4 address, which is shown on the Networking tab.
  2. [Lab19] Static and Dynamic Addresses:
    1. An EC2 public IP is dynamic: it changes when the instance stops and restarts. AWS has a solution that allocates a persistent public IP address to an EC2 instance, called an Elastic IP (EIP).
    2. Network and Security > Elastic IPs > Allocate Elastic IP address.
  3. [Lab20] Create Subnets and Allocate IP addresses in an Amazon VPC:
    1. A VPC is like a data center, but in the cloud. It is logically isolated from other virtual networks and can be used to spin up AWS resources.
    2. An instance needs a public IP address for it to communicate outside the VPC. The VPC needs networking resources, such as an internet gateway and a route table, for the instance to reach the internet.
    3. Code:
  4. [Lab21] Creating Networking Resources in an Amazon VPC:
    1. Creating the VPC
    2. Creating Subnets > VPC and more no need to do these:
      1. Create Route Table
      2. Create Internet Gateway and attach Internet Gateway
      3. Add route to route table and associate subnet to route table
      4. Creating a Network ACL
    3. Editing a Security Group
    4. Launch EC2 instance
    5. Use ping to test internet connectivity > ping
  5. [Lab22] Internet Protocol Troubleshooting Commands:
    1. Layer 3 (network): The ping and traceroute commands
      ping -c 5 > -c stands for count, and 5 is how many requests to send
    2. Layer 4 (transport): The netstat and telnet commands
      netstat -tp: See the active connections
      netstat -ntlp: See the listening ports
    3. Layer 7 (application): The curl command
  6. [Lab23] Troubleshooting a Web server Issue:
    1. sudo systemctl status httpd.service > check the status of the httpd service
    2. EC2$ ping -c 5 > If the instance can reach the internet, the internet gateway and route table are working
    3. Apache is a server that commonly serves over HTTP/S ports; check that the security group allows the needed ports.
    4. http://<PUBLIC IP OF INSTANCE> > Confirm that the Apache HTTP server is working.
  7. [Lab24] Build VPC and Launch a Web Server:
    1. Allocate Elastic IP address
    2. Like the Lab21
    3. Edit a Main route table to Private Route Table by set to the NAT gateway
    4. Associate the private subnets with the Private Route Table; the public route table needs no edits ( to IGW). Delete the other route tables.
    5. Create a security group by enable HTTP access in inbound rules.
    6. Launch a web server instance > Choose AMI, Instance Type, Public Subnet 2 (1 for NAT GW), Security Group from 5, User data:
      # Install Apache Web Server and PHP
      yum install -y httpd mysql php
      # Download Lab files
      unzip -d /var/www/html/
      # Turn on web server
      chkconfig httpd on
      service httpd start
  1. .
  2. .
  3. .
  4. .
  5. .
  6. [Lab25] Network Hardening Using Amazon Inspector and AWS Systems Manager:
    1. View EC2 instances and add tags - applied tags for the BastionServer instance, allows the security scan to find and scan this instance.
    2. Configure and run Amazon Inspector - choose EC2 tag for target.
    3. Analyze Amazon Inspector findings - it will show the affected instance, description, & recommendation provided.
      In this lab, TCP port 23 (associated with Telnet) and TCP port 22 (associated with SSH) are reachable from the internet.
    4. Update security groups - can click the security group link in the Recommendation section
    5. Replace BastionServer with Systems Manager - provides quick and secure access to EC2 instances through an interactive one-click browser-based shell
  7. [Lab26] Systems Hardening with Patch Manager via AWS Systems Manager:
    1. Select patch baselines
  8. [Lab27] Data Protection Using Encryption:
    1. Create a symmetric AWS KMS key and gave ownership of that key to the IAM role.
    2. Configure the File Server instance - configured the AWS credentials file (which provides the ability to use the AWS KMS key) and
      installed the AWS Encryption CLI so that encryption commands can be run.
    3. Encrypt plaintext data into ciphertext by running the --encrypt command and decrypted the ciphertext back into the original data.
  9. [Lab28] Introduction to AWS IAM:
    1. Strengthen password requirements by creating a custom password policy. The various password options make passwords more difficult to crack.
    2. Explore users, the policies attached to the user groups, and the differences between user groups and their permissions.
    3. Add users to the user groups.
    4. Sign in and test user permissions: user-1 was able to view S3 buckets but unable to view EC2 instances.
      user-2 was able to view S3 buckets and EC2 instances but unable to perform the stop-instance action.
      user-3 was able to view EC2 instances and perform the stop-instance action.
  10. [Lab29] Malware Protection Using an AWS Network Firewall:
    1. Confirm Reachability - download malicious files.
    2. Inspect the network firewall and update the firewall policy to forward all packets for stateful rule inspection.
    3. Create a stateful network firewall rule group that uses Suricata rules.
    4. Once the rule group is attached to the network firewall, it blocks the malicious actor's files hosted within the websites.
    5. Validate the solution - can't download recent malicious files.


  1. [Lab30] Monitor an EC2 Instance:
    1. Configure an Amazon SNS topic and create a subscription to the topic by using an email address. This topic is now able to send alerts.
    2. Create a CloudWatch alarm that enters the In alarm state when CPU utilization exceeds the 60 percent threshold.
    3. Test the CloudWatch alarm by running a command that loads the EC2 instance CPU to 100% for 400s. This set off the alarm, which sent an email.
    4. Create a CloudWatch dashboard > Create dashboard > Line > Metrics > EC2 > Per-Instance Metrics > Metric name: CPUUtilization
  2. In the Confidentiality, Integrity, and Availability (CIA) triad, confidentiality ensures that resources are accessed only by authorized users.
    • Integrity security perspective is concerned with ensuring that data sent over a network is not tampered with.

    • Distributed Denial of Service (DDoS) threat floods a resource with multiple requests from multiple systems.

    • Security lifecycle phases: Prevention > Detection > Response > Analysis

    • Acceptable Use Policy document is an example of an administrative security control.
  3. Prevention tasks include Identifying assets to be protected, assessing asset vulnerability, and Implementing countermeasures.
    • AWS Systems Manager can create an inventory of the Amazon EC2 instances in an AWS account.

    • A layered security prevention strategy protects valuable assets with multiple layers of different types of security measures.

    • Prevention measures are implemented through network and systems hardening, data security controls, and identity management.

    • The principle of least privilege in identity management is Granting users access to only the resources that they are authorized to access.
  4. Port scanning exposes protocols and services on a network.
    • A network administrator should disable or secure ICMP & SNMP protocols to protect against network discovery attacks.

    • A firewall is used to filter inbound and outbound packets on a network.

    • Network segmentation helps to improve network traffic performance and allows administrators to apply different security controls to different parts of a network.

    • AWS security group is A firewall that protects Amazon EC2 instances.
  5. The primary goal of systems hardening is to minimize security risks by reducing the set of vulnerabilities that are exposed by a system.
    • A security baseline is A starting point for determining what to secure and how to secure a system.

    • Applying patches regularly helps to harden a system.

    • Physical security contributes to system hardening by restricting physical access to facilities through biometrics, CCTV, and so on.

    • AWS Trusted Advisor evaluates account and provides recommendations to help follow AWS best practices.
  6. Cryptography is the discipline that embodies the principles and techniques for providing data security.
    • In symmetric encryption, the key is a shared secret between the sender and the receiver.

    • The goal of encryption is Confidentiality.

    • SSL & TLS use the hybrid encryption method as a mode of encryption.

    • The recipient of the message possesses the private key to decrypt an asymmetrically encrypted message.
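    • A shared-secret round trip can be sketched with the openssl CLI (assumes OpenSSL 1.1.1+ for the -pbkdf2 flag; the passphrase and file names are made up):

```shell
echo "top secret" > plain.txt
# Encrypt with a shared secret (symmetric: the same key works both ways)
openssl enc -aes-256-cbc -pbkdf2 -pass pass:sharedkey -in plain.txt -out cipher.bin
# Decrypt with the same shared secret
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:sharedkey -in cipher.bin
```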
  7. Certificate Authorities (CAs) sign and issue certificates to entities and manage trusts and relationships.
    • A digital certificate is an electronic credential that is used to represent the identity of an individual, computer, or other entity.

    • Private and public keys are used with SSL and TLS to establish a secure connection between a client and a server.

    • A company's site is giving users an error that the CA is invalid. The cause of this issue could be The CA certificate has expired.

    • When using Public Key Infrastructure (PKI) principles, trust between two parties is achieved through the exchange of public keys that validate the identities of the parties.
  8. The goal of authorization in identity management is Determining a user's permissions.
    • Fingerprint reader is an authentication factor for something that you are.

    • Dictionary attack uses a predefined list of words as passwords to attempt to log in to a system.

    • AWS IAM uses the JSON format to define authorization rules.

    • A form of Single Sign-On (SSO) in which one account is used for multiple services is Federated identities.
  9. The goal of identity management is Administer users and their access permissions.
    • During a typical login Authentication and Authorization steps are performed.

    • Multi-Factor Authentication (MFA) requires more than one authentication factor.

    • The actions that may or may not be performed by a user can be defined in a policy document attached to the user.

    • A best practice for identity management is Implement password policies.
  10. Malware infects a system through different methods, including Untrusted websites, Removable devices, and Emails.
    • A countermeasure against malware is Scan systems regularly.

    • A best practice when using antivirus software is Update virus definition files regularly.

    • Can install an Intrusion Detection System (IDS) on a server and network.

    • Amazon GuardDuty uses different logs, including AWS CloudTrail event logs, VPC flow logs, and DNS logs, to analyze and detect threats.


  1. AWS CloudTrail is A web service that uses log files to record AWS events.
    • The key benefits of AWS CloudTrail are providing visibility by recording user and resource activities, and simplifying compliance audits by recording activities and events into log files.

    • When a user performs a service request, AWS CloudTrail events capture the user who performed the request and the IP address where the request originated.

    • Best practices for working with AWS CloudTrail:
      • Aggregate log files to a single Amazon S3 bucket.
      • Turn on log file integrity validation for CloudTrail.
      • Ensure that CloudTrail is enabled across AWS globally.
      • Restrict access to CloudTrail S3 buckets.
      • Integrate with Amazon CloudWatch.
    • Integrating AWS CloudTrail with Amazon CloudWatch considered a good practice because CloudWatch monitors and reacts to events recorded in CloudTrail log files.
  2. AWS Config is used for Compliance auditing, Security analysis, Resource change tracking, and Troubleshooting.
    • Use AWS Config to:
      • Retrieve an inventory of AWS resources.
      • Send notification when a configuration change has occurred in an AWS resource.
      • Discover new and deleted resources.
      • Record configuration changes continuously.
    • Customers can use AWS Lambda to define custom rules for AWS Config.

    • How AWS Config works:
      1. A configuration change occurs in AWS resources.
      2. AWS Config records/logs, normalizes, and stores the changes to an Amazon S3 bucket.
      3. AWS Config automatically evaluates the changes against defined AWS configuration rules.
      4. AWS Config sends configuration and rule compliance change notifications to an Amazon SNS topic.
  3. Business Continuity Plan (BCP) and Disaster Recovery Plan (DRP) are used to minimize the impact of unplanned downtime.
    • The purpose of a BCP is to define how to run the business in a reduced capacity.

    • The purpose of a DRP is to define how to restore business functionality quickly after a disaster occurs.

    • The purpose of the Recovery Point Objective (RPO) is to focus on recovering data only; it represents how much data loss a business can tolerate.

    • The Pilot light disaster recovery is a minimal version of an environment running in the cloud.
  4. After a security breach, the Analysis phase of the security lifecycle reviews what happened.
    • During Root Cause Analysis (RCA), establish a timeline starting with baseline operations and ending with the problem's occurrence.

    • The benefit of monitoring and logging is It provides the data that is used for problem identification.

    • Questions the company should ask during the analysis are: How did the breach happen? How could the breach have been prevented?

    • When developing a monitoring policy, ask the following questions:
      • Who performs the monitoring?
      • Which resources are monitored?
      • How closely should you monitor?
      • How often should you monitor?
      • Do you outsource monitoring?
      • Who watches the watchers?
  5. Trusted Advisor provides best practices or checks in five categories: Cost optimization, Performance, Security, Fault tolerance, and Service limits.
    • AWS Trusted Advisor checks are available for free for:
      • MFA on the root account
      • AWS IAM use.
      • Security group - Specific ports unrestricted
      • S3 bucket permissions
      • Amazon EBS public snapshots
      • Amazon RDS public snapshots
    • An AWS Trusted Advisor red check status (exclamation point (!)) means action is recommended.

    • The service limit threshold that causes a warning from the service limits check is 80 percent.

    • AWS Trusted Advisor raises a Security Groups - Unrestricted Access red check status (exclamation point (!)). You should add rules that give access only to authorized IP addresses to the identified security group.
  6. Email address is used to authenticate the AWS account root user in the AWS Management Console.
    • An AWS account root user has complete access to all AWS services and resources.

    • As a best practice when creating an AWS account, immediately activate the AWS CloudTrail logging service.

    • A best practice when creating a user in an AWS account is Require MFA for access.

    • A best practice when setting up AWS account to receive billing reports is Create or select an Amazon S3 bucket and set up AWS Cost and Usage Reports.
  7. Security compliance ensures that security controls meet regulatory and contractual requirements.
    • Government or laws regulations compliance violation can result in Civil, criminal, or financial penalties.

    • The customer and AWS share the responsibility for satisfying compliance standards.

    • Goals of the AWS business risk management program are:
      • Identifying and remediating risks
      • Maintaining a register of known risks
      • Creating and maintaining security policies
      • Providing security training to AWS employees
      • Performing application security reviews
    • AWS shares security information by:
      • Publishing in technical papers and website content
      • Making available through the AWS Artifact Portal
      • Certificates, reports, and other documentation provided directly to AWS customers under Non-Disclosure Agreements (NDAs)
  8. Customers can use AWS account teams and advisories and bulletins to make their applications more secure.
    • The AWS Auditor Learning Path and account teams can be used to support compliance for customers' applications.

    • A security engineer who discovers a vulnerability in an AWS service should report the findings through AWS advisories and bulletins.

    • AWS Enterprise Support offers:
      • Support 24/7 through phone, chat, or email
      • A dedicated AWS Technical Account Manager (TAM)
      • The response time is less than 15 minutes for business-critical outages
    • AWS account teams serve as a first point of contact to help guide customers through deployment and implementation.

    • [Lab31] Creating a Hello, World Program:
      1. Accessing the AWS Cloud9 IDE
      2. Creating Python exercise file > File > New From Template > Python File > delete the sample code provided > File > Save As...
      3. Accessing the terminal session > + > New Terminal
      4. Introducing Python > python(2/3) --version
      5. Writing first Python program:
        > File > Save > Run (Play)
  9. [Lab32] Working with Numeric Data Types:
    1. Using the Python shell > python3 > Adding (+), Subtracting (-), Multiplying (*), Dividing (/) > Exiting the Python shell > quit()
    2. Introducing the int data type:
  10. [Lab33] Working with the String Data Type:


  1. [Lab34] Working with Lists, Tuples, and Dictionaries:
  2. [Lab35] Categorizing Values:
  3. [Lab36] Working with Composite Data Types:
  4. [Lab37] Working with Conditionals:
  5. [Lab38] Working with Loops:
  6. [Lab39] Creating a Git Repository:
    1. Creating a GitHub account >
    2. Creating a repository > + > New repository > Give Repository name > Public/Private > Create repository > try to upload files
    3. Downloading a repository > <> Code > Download ZIP
  7. A computer program is A text file with instructions for the computer that are written in a programming language.
    • In a compiled language, the entire program is first translated into machine code before it is run.

    • The practice of writing software iteratively is Write a little > Test it > Write a little more > Test it.

    • Version control helps with managing updates and coordinating access to source code.

    • Git is a version control tool.
  8. [Lab40] Preparing to Analyze Insulin with Python:
    1. Go to
      > copy the origin sequence to a text file.
    2. Manually separate the amino acids into files.
  9. Python, Ruby, JavaScript are examples of Interpreted languages.
    • The benefits of using an Integrated Development Environment (IDE) instead of a text editor to write code are IDEs highlight incorrect syntax and suggest fixes for issues.

    • The declaration example of an integer variable in Python is 'daysinWeek = 7'.

    • In Python, 'for' is used for loop control flow mechanisms.
      Conditional control flow statements are 'if' and 'elif'.
  10. [Lab41] Working with the String Sequence and Numeric Weight of Insulin in Python:
  11. Python variable name must start with a letter or the underscore character.
    • Immutable data types in Python are int, float, tuple, complex, string (str), frozenset, and bytes.

    • A string identifier can be declared, for example, as msg = "I'm a message" or msg = "I'm a " + "message".

    • print(2*3 + 3**2) → 6 + 3² = 6 + 9 = 15.

    • Cannot concatenate a string and an integer.
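    • The last point can be seen directly in the Python shell; converting with str() (or using an f-string) is the usual fix (variable names here are made up for illustration):

```python
count = 7
# "Days: " + count  # would raise TypeError: can only concatenate str (not "int") to str
message = "Days: " + str(count)   # explicit conversion works
formatted = f"Days: {count}"      # f-strings convert automatically
print(message)    # Days: 7
print(formatted)  # Days: 7
```
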
  12. [Lab42] Calculating the Net Charge of Insulin by Using Python Lists and Loops:
  13. A conditional statement example is If the customer has already made more than five purchases, grant them a 5 percent discount.
    • Code (the empty if branch is filled in; branches are tested in order, so age > 18 matches first):
      def printCategory(age):
        if age > 18:
          print('Adult')
        elif age > 65:
          print('Senior Citizen')
      printCategory(70)  # prints 'Adult', not 'Senior Citizen'

    • The Python while loop runs as long as a condition is true and can run indefinitely.

    • Python lists can contain multiple data types (such as string, int, or float).

    • Flow-control constructs that represent conditional statements is if/(elif/)else.
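    • The printCategory example above prints 'Adult' for 70 because if/elif branches are tested in order and age > 18 matches first. Testing the narrower condition first gives the intended result; this sketch keeps the same hypothetical function name but returns the category so it is easy to check:

```python
def printCategory(age):
    # Test the narrower condition first so it is not shadowed by age > 18.
    if age > 65:
        return 'Senior Citizen'
    elif age > 18:
        return 'Adult'
    else:
        return 'Minor'

print(printCategory(70))  # Senior Citizen
print(printCategory(30))  # Adult
print(printCategory(10))  # Minor
```
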
  14. [Lab43] Using Functions to Implement a Caesar Cipher:
  15. A Python developer avoids repeating the same sequence of code several times by writing custom functions.
    • A fruitful function is A function that returns something.

    • The definition of a function where price is of type float, country is of type string, and the returned VAT is of type float is 'def calculateVAT(price, country)'.

    • input() is a built-in Python function that displays a prompt and reads a value that the user enters.

    • The main reason for creating functions is Reusability.
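    • A fruitful version of the calculateVAT signature mentioned above might look like the following; the country codes and rates are hypothetical, purely for illustration:

```python
# Hypothetical VAT rates per country code -- illustration only.
VAT_RATES = {"DE": 0.19, "FR": 0.20, "IE": 0.23}

def calculateVAT(price, country):
    """Fruitful function: returns the VAT amount as a float."""
    rate = VAT_RATES.get(country, 0.0)  # unknown countries get a 0.0 rate
    return price * rate

print(calculateVAT(100.0, "DE"))  # 19.0
```

Because it returns a value instead of only printing, the result can be reused in further calculations — the main reason for creating functions.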
  16. [Lab44] Creating File Handlers and Modules for Retrieving Information about Insulin:
  17. A Python module is A set of functions that are grouped together.
    • The json.loads function can parse a JavaScript Object Notation (JSON) string and convert it to structured data in Python.

    • The JSON library is commonly used to transform data that will be sent or received over a network.

    • pip is A Python package manager.

    • The purpose of exception handling is To detect and manage errors so that a program does not end unexpectedly.
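    • The json.loads and exception-handling points above can be combined in a few lines (a sketch; the payloads are made up):

```python
import json

payload = '{"service": "ec2", "running": 3}'  # made-up, valid JSON
broken = '{"service": "ec2", "running": }'    # malformed on purpose

def parse(text):
    """Parse JSON, handling the error so the program does not end unexpectedly."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as err:
        print(f"Could not parse payload: {err}")
        return None

data = parse(payload)
print(data["running"])  # 3
parse(broken)           # prints the error message instead of crashing
```
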
  18. [Lab45] Introducing System Administration with Python:
    • is equivalent to the deprecated os.system() function in Python V3.

    • A goal of system administration is Ensure system stability.

    • 'apt-get upgrade' updates the packages in the current operating system.

    • A common task for System Administrators (SysAdmins) is Installing new hardware or software.

    • Adding a user code example (the loop now updates confirm so it can end):
      def new_user():
        confirm = "N"
        while confirm != "Y":
          username = str(input("Enter the name of the user:"))
          confirm = input("Use the username '" + username + "'? (Y/N)")
  19. [Lab46] Using the Debugger:
    1. Choose Toggle Debugger on the right-hand side > choose the gutter to the left of the line number to add a breakpoint > add watch expressions > Run
    2. Run in Debug Mode > Run again > Step Over on the right-hand side > blue arrow to go to the next breakpoint
  20. [Lab47] Debugging the Caesar Cipher Program:


  1. An efficient way to find and fix errors in a running application is Use a debugging tool for dynamic analysis.
    • The advantages of performing static code analysis instead of dynamic analysis are Bugs can be detected early in the development process and the exact location of code issues can be identified.

    • The purpose of assertions is to raise errors under certain conditions at runtime.
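
    • A one-line illustration of the assertion point (the function and its rule are hypothetical):

```python
def apply_discount(price, discount):
    """Discount must stay between 0 and 1; the assert raises at runtime otherwise."""
    assert 0 <= discount <= 1, "discount must be a fraction between 0 and 1"
    return price * (1 - discount)

print(apply_discount(100, 0.2))  # 80.0
# apply_discount(100, 25)  # AssertionError: discount must be a fraction between 0 and 1
```
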

    • Unit tests cover the smallest testable parts of an application.

    • Integration tests verify that the different parts of the software work together when combined.

    • In system testing, the complete and integrated application is tested to determine whether the software meets specified requirements.

    • Acceptance testing is formalized testing that considers user and business needs, and whether the software is acceptable to the end user.
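    • A minimal unit test for the smallest testable part, using Python's built-in unittest module (the function under test is hypothetical):

```python
import unittest

def add(a, b):
    """Function under test (hypothetical)."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_add_integers(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_strings(self):
        self.assertEqual(add("uni", "t"), "unit")

# Run the suite programmatically (equivalent to `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```
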
  2. .
  3. .
  4. DevOps Tool:
    1. ManageEngine: details ...bf443d81e4001ab91112/5c85fc18ca10e8001be783f8
    2. Nagios:
    3. WhatsUp Gold: details ...3e428ba27c001a66d0b1/5c85fc18ca10e8001be783f8
    4. AWS Code Deploy:
    5. Jenkins:
    6. Electric Cloud:
  6. Orchestration & Automation:
    • Python Script
    • Provisioning
    • Unit test
    • Increase productivity


      Only Orchestration:
    • Management
    • Code
    • Process Coordination
    • Infrastructure
    • Eliminate repetition
    • User-defined function
    • Increase reliability
    • Version Control (VC)
    • Decrease IT cost
    • Thread creation
    • Decrease friction among teams
    • PyCharm
    • Workflow

      Only Automation:
    • Single task
    • Hashicorp Configuration Language (HCL)
    • Terraform
  7. DevOps is A set of practices and a culture that help automate and monitor software development.
    • Adhocratic culture is associated with entrepreneurship and innovation.

    • (Image: CI/CD pipeline diagram)

    • Build and Test of the SDLC can be automated.

    • Build automation: A DevOps engineer is setting up a process that regularly scans a Git repository and triggers a compilation if it detects a new code commit.
  8. In a well-organized project, Teams will work together in the same way.
    • The pylint tool checks for errors and ensures that Python code is well formatted.

    • Configuration management means using tools to track code changes and to roll back if necessary.

    • After a developer makes a few changes to the code, the next step should be to push the changes to the repository.

    • Version Control (VC) software tool is used for configuration management.
  9. .
  10. .
  11. [Lab48] Python Scripting Exercise:
  1. .
  2. .
  3. .
  4. .
  5. .
  6. .
  7. .
  8. [Lab49] Database Table Operations:
    1. Connect to the Command Host > Compute > EC2 > Instances > Instance > Connect > Session Manager > Connect
    2. Create a database and a table > mysql -u root --password='re:St@rt!9' > SHOW DATABASES; > CREATE DATABASE world;
    3. Delete a database and tables > DROP TABLE; > DROP DATABASE world; > SHOW TABLES;
  9. [Lab50] Insert, Update, and Delete Data in a Database:
    1. Connect to a database > sudo su > cd /home/ec2-user/
    2. Insert data into a table > INSERT INTO VALUES ('IRL','Ireland','Europe','British Islands',70273.00,1921,3775100,76.8,...
    3. Update rows in a table > UPDATE SET Population = 100, SurfaceArea = 100; > SELECT * FROM;
    4. Delete ALL rows from a table > DELETE FROM;
    5. Import data using an SQL file > quit; > mysql -u root --password='re:St@rt!9' < /home/ec2-user/world.sql > use world;
  10. [Lab51] Selecting Data from a Database:
    1. SELECT COUNT(*) FROM; (count all rows in the table) ORDER BY (ascending) Population DESC (descending);
    2. SELECT Name, Capital, Region, SurfaceArea AS "Surface Area", Population from WHERE Population > 50000000 AND Region = "Southern Europe";
      Query specific columns, rename SurfaceArea to "Surface Area" with AS, and filter with the conditions Population > 50 million AND a specific Region.


  1. [Lab52] Performing a Conditional Search:
    • Return the sum of the surface area and sum of the population of North America:
      SELECT SUM(SurfaceArea) as "N. America Surface Area", SUM(Population) as "N. America Population" FROM WHERE Region = "North America";
  2. [Lab53] Working with Functions:
    1. SELECT Name,Region FROM WHERE lower(Region) LIKE "%micro%";
      lower() makes the search case-insensitive; LIKE matches patterns that are not exact.
    2. SELECT Name, substring_index(Region, "/", 1) as "Region Name 1",substring_index(region, "/", -1) as "Region Name 2" FROM WHERE Region = "Micronesia/Caribbean";
      Split a string where a / occurs, 1 for first left side, -1 for first right side.
  3. [Lab54] Organizing Data:
    • SELECT Region, Name, Population, RANK() OVER(partition by Region ORDER BY Population desc) as 'Ranked' FROM order by Region, Ranked;
      Rank by population descending in each region.
    • [Lab55] Build DB Server and Interact with DB Using an App:
      1. Create a Security Group for the RDS DB Instance > Networking & Content Delivery > VPC > Security Groups > Create security group
      2. Create a DB Subnet Group > Database > RDS > Subnet groups > Create DB Subnet Group > Choose how many AZ > Subnets
      3. Create an Amazon RDS DB Instance > Database > Create database > Choose DB engine > Set DB name, user, pass > Instance type
      4. Interact with Database
  4. [Lab56] Introduction to Amazon Aurora:
    1. Create an Aurora instance > Database > RDS > Databases > Create database > Standard create > Aurora > Don't create Replica
  5. .
  6. .
  7. .

  8. 29x:
  9. A database is A collection of data that is organized into tables.
    • Tables and Columns database attributes accurately represent elements of a relational database.

    • The primary features of a NoSQL database are use data models that are not based on relational tables and have a flexible schema.

    • Types of anomaly are Insertion, Update, and Deletion.

    • DataBase as a Service (DBaaS) reduces the cost of installing and maintaining the servers.
  10. Atomicity database transaction property ensures that changes are successfully completed all at once or not at all.
    • The daily duties of DB admin include all of the following:
      • Design, implement, administer, and monitor data in DB systems.
      • Use different SQL commands to interact with tables in DBs.
      • Ensure consistency, quality, and security of the DB.
    • A DB transaction is the propagation of one or more changes that are performed on a DB.

    • Isolation in the term ACID compliance represent The ability to concurrently process multiple transactions so that one transaction does not affect other transactions.

    • A transaction on a DB is successful if the transaction is committed.
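    • Atomicity can be demonstrated with Python's built-in sqlite3: either the whole transaction commits, or the rollback leaves the table untouched. A sketch with a made-up accounts table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    # Transfer 30 from alice to bob -- two statements, one transaction.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    raise RuntimeError("simulated failure before the second update")
    conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
    conn.commit()
except RuntimeError:
    conn.rollback()  # atomicity: the half-finished transfer is undone

balance = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(balance)  # 100 -- unchanged, all-or-nothing
```
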
  11. The purpose of Data Definition Language (DDL) statements in Structured Query Language (SQL) is used to create and modify the structure of a database.
    • The purpose of Data Manipulation Language (DML) statements in SQL is used to view, add, change, or delete data in a table.

    • The function of the SELECT statement in SQL is Retrieve data from a table.

    • A foreign key is a reference to a column in another table.

    • Foreign key values must match an existing primary key value.
  12. The most common use of a Comma-Separated Values (CSV) file is To import data into or export data out of databases and spreadsheets.
    • The function of a NULL value in SQL is used to represent a missing value.

    • A CSV file is validated before it is imported into a database by confirming that the structure of the data in the file matches the number of columns in the table and the type of data in each column.

    • The purpose of the CONCAT function in SQL is to combine strings from several columns and put them together in one value.

    • Ways to use the INSERT INTO statement are add new data to a database and a new record to a table.
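    • The CSV-validation and INSERT INTO points above, sketched with the standard library (the in-memory table and the data are made up):

```python
import csv
import io
import sqlite3

csv_text = "Code,Name,Population\nIRL,Ireland,3775100\nNLD,Netherlands,15864000\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE country (Code TEXT, Name TEXT, Population INTEGER)")

for row in csv.DictReader(io.StringIO(csv_text)):
    # Validate: each row must match the table's columns and types.
    assert set(row) == {"Code", "Name", "Population"}
    conn.execute("INSERT INTO country VALUES (?, ?, ?)",
                 (row["Code"], row["Name"], int(row["Population"])))

count = conn.execute("SELECT COUNT(*) FROM country").fetchone()[0]
print(count)  # 2
```
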
  13. The purpose of a WHERE clause in a SELECT statement is to request only specific rows from a table.
    • FROM clause is required to complete a SELECT statement.

    • The WHERE clause represents in the SELECT-FROM-WHERE statement is Condition.

    • An asterisk (*) in an SQL means Select all columns.

    • A single-line comment within a query starts with -- and helps others understand the query.
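
    • The SELECT-FROM-WHERE shape above, run against a tiny in-memory table via Python's sqlite3 (table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (Name TEXT, CountryCode TEXT, Population INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?, ?)",
                 [("Cork", "IRL", 210000),
                  ("Dublin", "IRL", 554000),
                  ("Calgary", "CAN", 1239000)])

query = """
SELECT Name, Population     -- specific columns, not *
FROM city
WHERE CountryCode = 'IRL'   -- the condition selects only matching rows
ORDER BY Population DESC
"""
rows = conn.execute(query).fetchall()
print(rows)  # [('Dublin', 554000), ('Cork', 210000)]
```
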
  14. BETWEEN operator selects values in a given range.
    • LIKE operator determines whether a specific character string matches a specified pattern.

    • LIKE also compares a value to similar values, as in this query for names starting with 'c':
      SELECT ID, Name, CountryCode
      FROM city
      WHERE Name LIKE 'c%'

    • The purpose of using Aliases in SQL statements is To create temporary names for columns.

    • NULL value is tested for by the IS NULL condition in SQL.
  15. COUNT( ) function calculates the number of rows in a table.
    • SELECT DISTINCT district, countrycode FROM city; -- displays only the unique combinations of values of district and countrycode.

    • LTRIM( ) string function removes the leading space on entries in a column.

    • SUM SQL function calculates the sum of a set of values or the sum of an expression.

    • DISTINCT keyword is commonly used with individual columns to ensure that the retrieved column has unique values.
  16. To aggregate data from different rows in a table, the data must be correlated into specified columns with GROUP BY.
    • HAVING operator can be used to filter query results after applying a GROUP BY clause.

    • ORDER BY column_name DESC -- get results in descending order.

    • ORDER BY 3 -- The query output will be sorted by the third column in the SELECT clause of the SQL statement.

    • HAVING SUM(sales) > 5000 -- limit the output of a query to only those customers with sales that are more than 5,000 units.
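    • The GROUP BY / HAVING / ORDER BY points above in one runnable sketch (in-memory table and data are made up; the 5,000-unit threshold follows the example in the notes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (customer TEXT, units INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("acme", 3000), ("acme", 4000), ("zenith", 1000)])

rows = conn.execute("""
    SELECT customer, SUM(units) AS total
    FROM sales
    GROUP BY customer           -- aggregate per customer
    HAVING SUM(units) > 5000    -- filter AFTER grouping
    ORDER BY total DESC         -- descending order
""").fetchall()
print(rows)  # [('acme', 7000)]
```
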
  17. UNION operator can be used to combine the results of two or more SELECT statements into a single set.
    • table_name.column_name represents the correct way to create a qualified column name.

    • INNER JOIN is used to display the matching values from both tables, only where the joined columns match.

    • LEFT JOIN is used to retrieve all rows from the first table and all matching rows from the second table.

    • AS statement is used to create an alias so that the table name is not repeated twice in a self JOIN query.
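    • The INNER JOIN vs. LEFT JOIN difference and qualified column names, in a runnable sketch (tables and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE country (Code TEXT PRIMARY KEY, Name TEXT);
    CREATE TABLE city (Name TEXT, CountryCode TEXT REFERENCES country(Code));
    INSERT INTO country VALUES ('IRL', 'Ireland'), ('CAN', 'Canada');
    INSERT INTO city VALUES ('Dublin', 'IRL');
""")

# INNER JOIN: only matching rows (Canada has no city row, so it is dropped).
inner = conn.execute("""
    SELECT country.Name, city.Name      -- qualified column names
    FROM country
    INNER JOIN city ON city.CountryCode = country.Code
""").fetchall()
print(inner)  # [('Ireland', 'Dublin')]

# LEFT JOIN: all rows from the first table, NULL (None) where no match exists.
left = conn.execute("""
    SELECT country.Name, city.Name
    FROM country
    LEFT JOIN city ON city.CountryCode = country.Code
    ORDER BY country.Code
""").fetchall()
print(left)  # [('Canada', None), ('Ireland', 'Dublin')]
```
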
  18. Periodic backups can be automatically performed by Amazon Relational Database Service (Amazon RDS) or manually performed by a user.
    • When creating a DB instance in Amazon RDS must Choose the DB engine first.

    • A best practice for high availability in an Amazon RDS DB instance is Deploying the DB instance in multiple AZs (multi-AZ deployment).

    • One of the reasons for using Amazon RDS instead of other DB solutions is Complex transactions or complex queries.

    • An Amazon Aurora DB cluster consists of one or more DB instances and a cluster volume that manages the data for those DB instances.
  19. Amazon DynamoDB is NoSQL database.
    • Amazon DynamoDB achieves high availability and scalability across Regions by using a collection of multiple replica tables.

    • The concept of partitioning in Amazon DynamoDB is the allocation of storage for a table.

    • Amazon DynamoDB partition keys uniquely identify each item in the table.

    • There are two ways to retrieve data from a DynamoDB table: Query and Scan. Query is generally more efficient for retrieving specific items based on primary key attributes.

  20. A benefit of automating systems operations tasks is Reduced cost of infrastructure because of resource reuse.
    • One of a systems operations tasks is Design the DB used by an application.


    • An AWS IAM policy for an Amazon S3 bucket can be created as a resource-based policy.

    • Best practices for AWS IAM are Use IAM roles to provide cross-account access and Delegate administrative functions according to the principle of least privilege.

    • From AWS CLI:
      aws ec2 stop-instances --instance-id i-1234567890abcdef0 --output json
      stop-instances: specifies the operation to be performed.
  1. Automation document of AWS Systems Manager can take a snapshot of an Amazon EC2 instance.
    • AWS Systems Manager Parameter Store capability can store configuration data.

    • Session Manager can connect to an instance directly from the AWS Management Console.

    • JSON and YAML markup languages are supported in AWS CloudFormation.

    • AWS OpsWorks service is based on the Chef and Puppet automation platforms.

  2. To create a blue/green deployment infrastructure by using Amazon Route 53 to gradually phase out the blue environment should use Weighted routing policy.
    • Step scaling policy uses an Amazon CloudWatch alarm and varies the scaling response based on the size of the alarm breach.

    • Amazon CloudFront is a Content Delivery Network (CDN).

    • Application Load Balancer (ALB) should use to distribute requests to a website running on Amazon EC2 instance in a VPC.

    • An ALB use Listener to checks for connection requests from clients.
  3. Customers can use AWS Fargate to run containers without having to manage servers.
    • A customer should use Amazon ECR to store, manage, and deploy Docker container images.

    • Tasks perform the work in a workflow when AWS Step Functions is running.

    • Customers can use AWS Lambda to build and run applications without provisioning or managing servers.

    • POST REST request method creates a resource.
  4. Amazon Aurora is compatible with MySQL and PostgreSQL.
    • Use AWS DMS to automate data transformation from data center to Amazon RDS for MySQL when the data is migrated.

    • Amazon Redshift is a fully managed data warehouse that a customer can use to run complex analytic queries.

    • A DB user is just responsible for Application optimization when they use a managed DB service on AWS.

    • Amazon DynamoDB and Neptune support NoSQL applications.
  5. A NAT gateway in a VPC allows a private subnet to connect to the internet.
    • VPC Flow Logs are used for troubleshooting network connectivity issues inside a VPC and for capturing information about the IP traffic going to and from the VPC.

    • Can add Allow and deny rules to the rule tables of a network ACL.

    • Security groups have an inbound rules table and outbound rules table. The default allows all traffic between resources that are assigned to the same security group.

    • Subnet is a logical network segment in a VPC that can only exist in a single AZ.
  6. Amazon EC2 Instance store provides temporary storage for an Amazon EC2 instance.
    • Amazon S3 Intelligent-Tiering should be used when the pattern of accessing data is unknown or changing.

    • Amazon EBS Provisioned IOPS SSD (io2) provides the highest performance for frequent read/write operations.

    • Amazon EBS can be attached to an Amazon EC2 instance.

    • A standard retrieval of data from Amazon S3 Glacier takes 3-5 hours.

    • AWS Well-Architected Framework documents provide a set of foundational questions that customers can use to understand if their architecture aligns with cloud best practices.

    • Reliability AWS Well-Architected Framework pillar provides details about how to recover from failure and mitigate disruption.

    • A key design principle when designing a framework in the cloud is Test systems at production scale.

    • Amazon EC2 instance can replace a traditional server.

    • Business and technical are the primary perspectives in the AWS Cloud Adoption Framework (AWS CAF).
  7. Amazon CloudWatch alarms monitors a metric and sends an alert when the metric changes.
    • Amazon S3 can be used with AWS CloudTrail to store logs.

    • Events parameter indicates a change in the AWS environment when creating an event in Amazon CloudWatch Events.

    • A name-value pair that is used for identifying metrics in Amazon CloudWatch is Dimension.

    • AWS CloudTrail logs can be queried with Amazon Athena.
  8. AWS Organizations provides consolidated billing and account management capabilities for multiple accounts.
    • AWS Config can be used to define and enforce required tags.

    • AWS Budgets can be used to view costs across linked accounts and to monitor spending on a daily and monthly basis.

    • To reduce cost for AWS services could Use a stopinator script.

    • SCP allows or denies access to AWS services for individual accounts, or for groups of accounts in an AWS Organizations OU.
  9. AWS CloudFormation can be used to build a script-like template that represents a stack of AWS resources.
    • A systems administrator uses a configuration management solution To automate configuration tasks and make them repeatable.

    • The purpose of a wait condition in an AWS CloudFormation template is To coordinate the creation of template resources with other external configuration actions.

    • To store an AWS CloudFormation template in an Amazon S3 bucket, The administrator must have permissions to access the bucket.

    • An AMI is anchored at the Region level.
  10. .
  11. .
  12. .
  13. .
  14. A federal contractor located in Seattle is expanding their cloud presence to Germany, which has strict rules on data that originated in Germany leaving the region, known as data sovereignty. They know that they will have auditors auditing both regions.
    1. The selection criteria that should be prioritized for this customer's expansion needs are data governance and legal requirements.

    2. Services would be considered global are Amazon CloudFront, IAM, and Route 53.

      A financial company is working with AWS for their new website. Administrators update images and related content on a monthly basis. They have noticed that the website's images take a while to load, especially for some of the global customers.
    3. Amazon CloudFront would be beneficial for reducing the latency on image downloads.

    4. The images that are used for the website should be stored on Amazon S3.

    5. The AWS CLI Table output format is the most readable for a person, and should be selected by the SysOps administrator to show the customer the results of the image search.

    6. The AWS CLI '--dry-run' option should be used by the SysOps administrator to check all the permissions that are needed to run the required action on the image archive before the action is executed.

      A startup tech has just created a mobile app that allows a customer to place a lunch order for delivery with local restaurants through a third party courier. They want to provide the customer and the restaurant with a notification that the delivery has been completed, as the building is secure and all food orders are left in the lobby.
    7. AWS Lambda and Amazon SNS could be used to provide the startup's customer with a notification of the food delivery.

    8. Network ACLs would secure the mobile app's network so that traffic is not allowed into the database that stores the startup's customers' personal information.

    9. The restaurant (AWS customer) has the responsibility to secure the restaurant's financial data, which is stored when an order is placed.

    10. The IP address provides the metadata service for all the instances associated with the mobile app.

    11. When provisioning an EC2 instance, user data runs during initialization.

    12. To reduce the hardware footprint, selecting an On-Demand Amazon EC2 instance would be a best practice.

    13. To get new hardware for the Amazon EC2 instance, stop and then start the instance.


    1. .
    2. .
    3. .
    4. WebSockets would support the warehouse goal of having customers receiving real time messages without requiring the customers to make additional requests to the server.

    5. A warehouse that promises 48 hour delivery should use Active-active failover configuration of Amazon Route 53 so that it is available for the majority of the time, even if there are unhealthy instances.

    6. Amazon Route 53 provide:
      • Completes health checks of the resource to ensure it can be reached.
      • Registers domain names.
      • Routes traffic to the website based on the domain name that is entered.
    7. Amazon DynamoDB and ElastiCache services can be used to store session-related information off from an Amazon EC2 instance so that the instance remains stateless.

    8. (Image: serverless vs. containers comparison)

    9. Amazon EC2 Spot instance should be used with AWS Lambda for the most economic benefit, while maintaining security and resiliency.

    10. Amazon Kinesis could be added to the serverless design to analyze data that can be customized for future applications.

    11. Two IAM policies support AWS Lambda authentication.
  • A company operates quick-service restaurants. The restaurants follow a predictable model with high sales traffic for 4 hours daily. Sales traffic is lower outside of those peak hours.
    The point of sale and management platform is deployed in the AWS Cloud and has a backend that is based on Amazon DynamoDB. The DB table uses provisioned throughput mode with 100,000 RCUs and 80,000 WCUs to match known peak resource consumption.
    The company wants to reduce its DynamoDB cost and minimize the operational overhead for the IT staff.
    The MOST cost-effective solution is to enable DynamoDB auto scaling for the table.
    There are compelling reasons to use DynamoDB auto scaling with actively changing traffic. Auto scaling responds quickly and simplifies capacity management, which lowers costs by scaling the table's provisioned capacity and reduces operational overhead.

  • A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record for the API. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region. The solution that meets these requirements is: Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.

  • A company that manufactures smart vehicles uses a custom app to collect vehicle data. The vehicles use the MQTT protocol to connect to the app. The company processes the data in 5-minute intervals. The company then copies vehicle telematics data to on-premises storage. Custom apps analyze this data to detect anomalies.
    The number of vehicles that send data grows constantly. Newer vehicles generate high volumes of data. The on-premises storage solution is not able to scale for peak traffic, which results in data loss. The company must modernize the solution and migrate the solution to AWS to resolve the scaling challenges. The solution will meet these requirements with the LEAST operational overhead is Use AWS IoT Core to receive the vehicle data. Configure rules to route data to an Amazon Kinesis Data Firehose delivery stream that stores the data in Amazon S3. Create an Amazon Kinesis Data Analytics app that reads from the delivery stream to detect anomalies.
    Using AWS IoT Core to receive the vehicle data will enable connecting the smart vehicles to the cloud using the MQTT protocol.
    AWS IoT Core is a platform for connecting devices to AWS Services and other devices, securing data and interactions, processing and acting upon device data, and enabling apps to interact with devices even when they are offline.
    Configuring rules to route data to an Amazon Kinesis Data Firehose delivery stream that stores the data in Amazon S3 will enable processing and storing the vehicle data in a scalable and reliable way.
    Amazon Kinesis Data Firehose is a fully managed service that delivers real-time streaming data to destinations such as Amazon S3. Creating an Amazon Kinesis Data Analytics app that reads from the delivery stream to detect anomalies will enable analyzing the vehicle data using SQL queries or Apache Flink apps. Amazon Kinesis Data Analytics is a fully managed service that enables to process and analyze streaming data using SQL or Java.
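    A minimal sketch of the IoT Core routing rule, assuming the vehicles publish to a topic such as `vehicles/+/telematics` and that the delivery stream and IAM role already exist (the topic, stream name, and role ARN are placeholders). This is the payload passed to `aws iot create-topic-rule --topic-rule-payload`:

```json
{
  "sql": "SELECT * FROM 'vehicles/+/telematics'",
  "awsIotSqlVersion": "2016-03-23",
  "ruleDisabled": false,
  "actions": [
    {
      "firehose": {
        "deliveryStreamName": "vehicle-telemetry",
        "roleArn": "arn:aws:iam::123456789012:role/iot-firehose-role",
        "separator": "\n"
      }
    }
  ]
}
```

    The newline separator keeps one JSON record per line in the S3 objects, which is convenient for downstream SQL analysis.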

  • A company needs to create and manage multiple AWS accounts for a number of departments from a central location. The security team requires read-only access to all accounts from its own AWS account. The company is using AWS Organizations and has created an account for the security team. A solutions architect meets these requirements as follows: Use the OrganizationAccountAccessRole IAM role to create a new IAM role with read-only access in each member account. Establish a trust relationship between the IAM role in each member account and the security account. Ask the security team to use the IAM role to gain access.
    When you create a member account using the AWS Organizations console, AWS Organizations automatically creates an IAM role named OrganizationAccountAccessRole in that account. You use OrganizationAccountAccessRole in each member account to create a read-only role, and the security team then assumes that read-only role from its own account.
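    Illustratively, the read-only role created in each member account (for example, with the AWS-managed ReadOnlyAccess policy attached) would carry a trust policy like this sketch, where the account ID is a placeholder for the security team's account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<security-account-id>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

    The security team can then call `sts:AssumeRole` on that role from its own account to gain read-only access to the member account.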

  • Access S3 from Private EC2 instance using VPC Endpoint:
    Create a VPC > Services > Networking & Content Delivery > VPC > Your VPCs > Create VPC > VPC only > Name: MyVPC > CIDR/26

    By default, instances launched in a VPC cannot communicate with the Internet > Create and attach an Internet Gateway to the custom VPC

    Create a Public and Private Subnet > Subnets > Create subnet > ID: MyVPC > Name: Public subnet > AZ: -1a > Private subnet AZ: -1b /27

    Configure the Public subnet to enable auto-assign public IPv4 address > Public subnet > Actions > Edit subnet settings > Enable auto-...

    Create a Route Table for the Public subnet > Route Tables > Create route table > Name: PublicRouteTable & PrivateRouteTable > ...

    Create security groups > 2 groups > one for the Bastion server with SSH, HTTP, & HTTPS rules; one for the Endpoint instance with only an SSH rule from the Bastion

    Create a Bastion Host (Publicly accessible EC2 Instance) > Compute > EC2 > Public Subnet, Auto-assign public IP: Enable, Bastion-SG

    Create an Endpoint instance (Privately accessible EC2 instance) > Private Subnet, Auto-assign public IP: Disable, Endpoint-SG

    SSH into the Endpoint instance (privately accessible) through the Bastion host > vi WhizKey.pem > copy in the content of the key-pair file created earlier

    Create a VPC endpoint for S3, attach it to the Private subnet's Route table > VPC > Endpoints > ...

    List all the S3 buckets and their objects
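    The S3 endpoint step above can be sketched as a CloudFormation fragment (logical names, the Region in the service name, and the references are illustrative). Note that S3 uses a Gateway endpoint, which attaches to the private subnet's route table rather than to the subnet itself:

```yaml
Resources:
  S3GatewayEndpoint:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Gateway
      VpcId: !Ref MyVPC                         # the custom VPC created above
      ServiceName: com.amazonaws.us-east-1.s3   # Region is illustrative
      RouteTableIds:
        - !Ref PrivateRouteTable                # private subnet's route table
```

    From the Endpoint instance, `aws s3 ls` (and `aws s3 ls s3://<bucket-name>`) should then succeed without traversing the Internet, provided the instance has credentials with S3 permissions.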

  • HPC clusters with tightly coupled workloads require inter-node communication that is high-performance and low-latency. Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run applications requiring high levels of inter-node communication at scale on AWS.
    In addition to using EFAs with supported EC2 instances, AWS recommends launching the instances into a single AZ to ensure that the latency between nodes is low.
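    Both recommendations can be sketched as a CloudFormation fragment: a cluster placement group packs the nodes close together in a single AZ, and the launch template requests an EFA network interface. The instance type shown is one example of an EFA-capable type; the AMI and subnet IDs are placeholders:

```yaml
Resources:
  HpcPlacementGroup:
    Type: AWS::EC2::PlacementGroup
    Properties:
      Strategy: cluster              # low-latency placement within one AZ
  HpcLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        InstanceType: c5n.18xlarge   # example EFA-capable instance type
        ImageId: <ami-id>
        Placement:
          GroupName: !Ref HpcPlacementGroup
        NetworkInterfaces:
          - DeviceIndex: 0
            InterfaceType: efa       # request an Elastic Fabric Adapter
            SubnetId: <subnet-id>    # single subnet keeps nodes in one AZ
```

    Launching all cluster nodes from this template keeps them in the same placement group and AZ, which is what the latency recommendation above calls for.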