Cloud Computing


  • 'docker ps -a' shows the status of all containers, including stopped ones
    • Up = Running
  • To kill the container use 'docker kill [NAMES]'


  • Task roles are the best-practice way of providing permissions to containers running on ECS.

  • ECS Fargate mode should be used if you want as little admin overhead as possible.

  • ECS Service is used to configure scaling and HA for containers.

  • 3 cluster modes are available within ECS:
    • Network Only (Fargate)
    • EC2 Linux + Networking
    • EC2 Windows + Networking
  • Docker is the only container platform supported by Amazon ECS at this time.

  • Container images are stored in a container registry.

  • The advantages/benefits of containers are that they are fast to start up, portable, and lightweight.
Bootstrapping WordPress Installation:

  • A Dev has been asked to build a real-time dashboard web application to visualize the key prefixes and storage size of objects in Amazon S3 buckets. Amazon DynamoDB will be used to store the Amazon S3 metadata. The optimal and MOST cost-effective design to keep the real-time dashboard up to date with the state of the objects in the Amazon S3 buckets is: use an Amazon CloudWatch Events rule backed by an AWS Lambda function. Issue an Amazon S3 API call to get a list of all Amazon S3 objects and persist the metadata within DynamoDB. Have the web application poll the DynamoDB table to reflect changes.

  • An on-premises application is implemented using a Linux, Apache, MySQL, and PHP (LAMP) stack. The Dev wants to run this application in AWS. Amazon EC2 and Amazon Aurora can be used to run this stack.

  • A dev is building a backend system for the long-term storage of information from an inventory management system. The information needs to be stored so that other teams can build tools to report and analyze the data. To achieve the FASTEST running time the dev should Create an AWS Lambda function that writes to Amazon S3 synchronously. Set the inventory system to retry failed requests.

  • A Dev is working on an application that handles 10MB documents that contain highly-sensitive data. The application will use AWS KMS to perform client-side encryption. The Dev should invoke the GenerateDataKey API to retrieve the plaintext version of the data encryption key and use it to encrypt the data.
    GenerateDataKey API: Generates a unique data key. This operation returns a plaintext copy of the data key and a copy that is encrypted under a Customer Master Key (CMK) that you specify. You can use the plaintext key to encrypt data outside of KMS and store the encrypted data key with the encrypted data.

  • A dev is using AWS CodeDeploy to deploy an application running on Amazon EC2. The dev wants to change the file permissions for a specific deployment file. To meet this requirement, the dev should use the AfterInstall lifecycle event.

  • An application ingests a large number of small messages and stores them in a database. The application uses AWS Lambda. A dev team is making changes to the application's processing logic. In testing, it is taking more than 15 mins to process each message. The team is concerned the current backend may time out. To ensure each message is processed in the MOST scalable way, the backend should add the messages to an Amazon SQS queue and use an Amazon EC2 instance to poll the queue and process messages as they arrive.

  • A dev has discovered that an application responsible for processing messages in an Amazon SQS queue is routinely falling behind. The application is capable of processing multiple messages in one execution, but is only receiving one message at a time. To increase the number of messages the application receives, the dev should call the ReceiveMessage API for the queue and set MaxNumberOfMessages to a value greater than the default of 1 (up to 10). MaxNumberOfMessages is a ReceiveMessage parameter, not a ChangeMessageVisibility parameter.

  • A dev is writing an AWS Lambda function. The dev wants to log key events that occur during the Lambda function and include a unique identifier to associate the events with a specific function invocation. To accomplish this, the dev should obtain the request identifier from the Lambda context object and architect the application to write logs to the console.

  • A dev is trying to monitor an application's status by running a cron job that returns 1 if the service is up and 0 if the service is down. The dev created code that uses an AWS CLI put-metric-alarm command to publish the custom metrics to Amazon CloudWatch and create an alarm. However, the dev is unable to create an alarm as the custom metrics do not appear in the CloudWatch console. The cause of this issue is that the dev needs to use the put-metric-data command to publish the metric values; put-metric-alarm only creates the alarm.

  • A company runs an e-commerce website that uses Amazon DynamoDB where pricing for items is dynamically updated in real time. At any given time, multiple updates may occur simultaneously for pricing information on a particular product. This is causing the original editor's changes to be overwritten without a proper review process. To prevent this overwriting, the dev should use the DynamoDB conditional writes option.

  • Company C provides an online image recognition service and utilizes SQS to decouple system components for scalability. The SQS consumers poll the imaging queue as often as possible to keep end-to-end throughput as high as possible. However, Company C is realizing that polling in tight loops is burning CPU cycles and increasing costs with empty responses. Company C can reduce the number of empty responses by setting the imaging queue's ReceiveMessageWaitTimeSeconds attribute to 20 seconds, enabling long polling.

  • A dev has created a REST API using Amazon API Gateway. The dev wants to log who and how each caller accesses the API. The dev also wants to control how long the logs are kept. To meet these requirements, the dev should enable API Gateway access logging to Amazon CloudWatch Logs and control how long the logs are kept with a CloudWatch Logs retention policy (API Gateway itself has no log retention settings).

  • Company D is currently hosting their corporate site in an Amazon S3 bucket with Static Website Hosting enabled. Currently, when visitors go to the site, the index.html page is returned. Company D would now like a new page, welcome.html, to be returned when a visitor enters the site URL in the browser. The steps that will allow Company D to meet this requirement are:
    • Upload an html page named welcome.html to their S3 bucket
    • Set the Index Document property to welcome.html.
  • A company is launching an ecommerce website and will host the static data in Amazon S3. The company expects approximately 1,000 Transactions Per Second (TPS) for GET and PUT requests in total. Logging must be enabled to track all requests and must be retained for auditing purposes. The MOST cost-effective solution is to enable Amazon S3 server access logging on the bucket and create a lifecycle policy to expire the log data in 90 days (CloudTrail data events at ~1,000 TPS would be far more expensive, and bucket-level logging would not capture all requests).



Normally, when you launch an EC2 instance, its physical location is selected by AWS, placing it on whatever EC2 host makes the most sense within the AZ it's launched in. Placement groups allow you to influence placement, ensuring that instances are either physically close together or kept apart.

A Cluster Placement Group packs instances close together inside an AZ:

A Spread Placement group is a group of instances that are each placed on distinct racks, with each rack having its own network and power source. A maximum of 7 instances per AZ per group is allowed. It provides a Highly Available (HA) infrastructure on distinct underlying hardware. A spread placement group can span multiple AZs but can't span multiple AWS Regions:



  • An application takes 40 seconds to process instructions received in an Amazon SQS message. Assuming the SQS queue is configured with the default VisibilityTimeout value, the BEST way to ensure that no other instances can retrieve a message that has already been processed or is currently being processed is to use the ChangeMessageVisibility API to increase the VisibilityTimeout, then use the DeleteMessage API to delete the message.

  • A Dev needs to deploy an application running on AWS Fargate using Amazon ECS. The application has environment variables that must be passed to a container for the application to initialize. The environment variables should be passed to the container by defining an array that includes the environment variables under the environment parameter within the task definition.

  • A company is developing a web application that allows its employees to upload a profile picture to a private Amazon S3 bucket. There is no size limit for the profile pictures, which should be displayed every time an employee logs in. For security reasons, the pictures cannot be publicly accessible. A viable long-term solution for this scenario is to save the picture's S3 key in an Amazon DynamoDB table and create an Amazon S3 VPC endpoint to allow the employees to download pictures once they log in.

  • A dev creates an Amazon S3 bucket to store project status files that are uploaded hourly. The dev also creates an AWS Lambda function that will be used to process the project status files. To invoke the function with the LEAST amount of AWS infrastructure, the dev should create an S3 event notification to invoke the function when a new object is created in the S3 bucket.

  • A dev from AnyCompany's AWS account needs access to the Example Corp. AWS account. AnyCompany uses an identity provider that is compatible with OpenID Connect. The MOST secure way for Example Corp to allow dev access is to create a cross-account IAM role with the required permissions and let the dev assume it via web identity federation with the OpenID Connect provider; creating an IAM user and handing out long-lived access keys would be the least secure option.

  • A dev is designing a distributed application built using a microservices architecture spanning multiple AWS accounts. The company's operations team wants to analyze and debug application issues from a centralized account. The dev can meet these requirements by using the AWS X-Ray agent with role assumption to publish data into the centralized account.

  • The Dev for a retail company must integrate a fraud detection solution into the order processing solution. The fraud detection solution takes between ten and thirty minutes to verify an order. At peak, the web site can receive one hundred orders per minute. The most scalable method to add the fraud detection solution to the order processing pipeline is to add all new orders to an SQS queue and configure an Auto Scaling group that uses the queue depth metric as its unit of scale to launch a dynamically sized fleet of EC2 instances, spanning multiple AZs, with the fraud detection solution installed on them to pull orders from this queue and update each order with a pass or fail status.

  • An on-premises application makes repeated calls to store files to Amazon S3. As usage of the application has increased, "LimitExceeded" errors are being logged. To fix this error, the dev should implement exponential backoff in the application.
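Exponential backoff, as in the answer above, can be sketched as a small retry helper. This is a minimal illustration (function and parameter names are my own, and the `sleep`/`rng` hooks exist only so the logic is testable without real delays):

```python
import random

def call_with_backoff(operation, max_attempts=5, base_delay=0.1, sleep=None, rng=None):
    """Retry `operation` on exception, doubling the delay each attempt
    and adding jitter so many clients don't retry in lockstep."""
    sleep = sleep or (lambda s: None)   # injectable for testing
    rng = rng or random.random
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt)
            sleep(delay + rng() * delay)  # base delay plus up to 100% jitter

# Example: an operation that fails twice (like S3 returning LimitExceeded),
# then succeeds once the request rate has backed off.
attempts = {"n": 0}
def flaky_put():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("LimitExceeded")
    return "ok"

delays = []
result = call_with_backoff(flaky_put, sleep=delays.append, rng=lambda: 0.0)
# delays grow geometrically between attempts
```

The AWS SDKs implement this pattern for you by default; a hand-rolled version like this is only needed for raw HTTP clients.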

  • A dev is working on an AWS Lambda function that accesses Amazon DynamoDB. The Lambda function must retrieve an item and update some of its attributes or create the item if it does not exist. The Lambda function has access to the primary key. To achieve this functionality the dev should request:
    dynamodb:UpdateItem (UpdateItem updates only the supplied attributes and creates the item if it does not exist; PutItem would replace the entire item)

  • A company wants to make sure that only one user from its Admin group has the permanent right to delete an Amazon EC2 resource. There should be no changes in the existing policy under the Admin group. To meet these requirements, a dev should use an inline policy attached to that single user.

  • A Dev wants to debug an application by searching and filtering log data. The application logs are stored in Amazon CloudWatch Logs. The Dev creates a new metric filter to count exceptions in the application logs. However, no results are returned from the logs. The reason that no filtered results are being returned is that CloudWatch Logs only publishes metric data for events that happen after the filter is created.

  • A supplier is writing a new RESTful API for customers to query the status of orders. The customers requested the API endpoint. The application designs that meet the requirements are: Amazon API Gateway; AWS Lambda and Amazon S3; Amazon CloudFront.

  • An application on AWS is using third-party APIs. The Dev needs to monitor API errors in the code, and wants to receive notifications if failures go above a set threshold value. The Dev can achieve these requirements by publishing a custom metric to Amazon CloudWatch and using Amazon SNS for notification.

  • Company B has an S3 bucket containing premier content that they intend to make available to only paid subscribers of their website. The S3 bucket currently has default permissions of all objects being private to prevent inadvertent exposure of the premier content to non-paying website visitors. Company B can provide only paid subscribers the ability to download a premier content file in the S3 bucket by generating a pre-signed object URL for the premier content file when a paid subscriber requests a download.

  • The format of structured notification messages sent by Amazon SNS is a JSON object containing MessageId, UnsubscribeURL, Subject, Message, and other values.
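A subscriber typically parses that JSON envelope and then parses the inner Message string separately. The field names below follow the documented SNS notification format; the values (ARNs, IDs, the order payload) are made up for illustration:

```python
import json

# A representative SNS notification body (values are illustrative).
raw = json.dumps({
    "Type": "Notification",
    "MessageId": "11111111-2222-3333-4444-555555555555",
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:MyTopic",
    "Subject": "Order shipped",
    "Message": "{\"orderId\": 42}",
    "Timestamp": "2024-01-01T00:00:00.000Z",
    "UnsubscribeURL": "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe"
})

notification = json.loads(raw)
# Message is itself a string; if the publisher sent JSON, decode it again.
payload = json.loads(notification["Message"])
```

Note the double decode: forgetting that Message is a string, not a nested object, is a common subscriber bug.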

  • A dev has built an application using Amazon Cognito for authentication and authorization. After a user is successfully logged in to the application, the application creates a user record in an Amazon DynamoDB table. The correct flow to authenticate the user and create a record in the DynamoDB table is: authenticate and get a token from an Amazon Cognito identity pool, then use the token to access DynamoDB.


  • Enhanced networking provides Higher Packets Per Second (PPS), consistent low latency, and high throughput.

  • Cluster placement group:
    • should be used when you need the best performance within EC2
    • Only one AZ can be used
  • Spread placement group
    • is ideal when you need the best levels of resilience
    • a maximum of 7 instances per AZ is allowed
  • If you run a large application which uses hundreds of EC2 instances and it needs exposure to physical location for performance and availability reasons, you should use a Partition placement group.

  • Permissions can be provided to an application running in EC2, following best practices, by using an instance profile and an IAM role.

  • There is no per-instance charge for EC2 instances running on a Dedicated Host; you pay for the host itself, which is dedicated to you.

  • The EC2 user-data feature allows you to provide commands that the instance will run at startup.

  • Commands specified in user-data are executed once, when the instance is provisioned.
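In CloudFormation, user-data is supplied through the instance's UserData property. A minimal sketch (resource name and the packages installed are illustrative):

```yaml
# Sketch: passing a first-boot script via user-data. The script runs once,
# at provisioning time, as root.
Instance:
  Type: 'AWS::EC2::Instance'
  Properties:
    InstanceType: 't2.micro'
    UserData:
      Fn::Base64: |
        #!/bin/bash -xe
        yum -y update
        yum -y install httpd
        systemctl enable --now httpd
```

Fn::Base64 is required because EC2 expects the user-data blob base64-encoded.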


  • nonportable.yaml:
        Resources:
          Bucket:                                # logical resource names here are illustrative
            Type: 'AWS::S3::Bucket'
            Properties:
              BucketName: 'dogpics1337'          # hardcoded bucket names must be globally unique
          Instance:
            Type: 'AWS::EC2::Instance'
            Properties:
              KeyName: 'A4L'
              InstanceType: 't2.micro'
              ImageId: 'ami-0c802847a7dd848c0'   # AMI ID valid only in the Singapore region
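One common way to make a template like this portable is to resolve the AMI from the region's public SSM parameter instead of hardcoding a regional AMI ID. A sketch (the SSM parameter path is the public Amazon Linux 2 one; the resource name is illustrative):

```yaml
Parameters:
  LatestAmiId:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
Resources:
  Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      InstanceType: 't2.micro'
      ImageId: !Ref LatestAmiId   # resolved per-region at stack creation
```

Dropping the hardcoded BucketName (letting CloudFormation generate one) removes the other portability problem.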









  • A Dev is going to deploy an AWS Lambda function that requires significant CPU utilization. The approach that will MINIMIZE the average runtime of the function is to deploy the function with its memory allocation set to the maximum amount (Lambda CPU scales with allocated memory).

  • A Dev is trying to make API calls using the SDK. The IAM user credentials used by the application require multi-factor authentication for all API calls. The method the Dev should use to call the multi-factor authentication protected API is GetSessionToken: pass the MFA device serial number and token code, then use the returned temporary credentials.

  • A Dev has created a software package to be deployed on multiple EC2 instances using IAM roles. The actions that could be performed to verify IAM access to get records from Amazon Kinesis Data Streams are: use the AWS CLI to retrieve the IAM group, and validate the IAM role policy with the IAM policy simulator.

  • A Dev needs temporary access to resources in a second account. The MOST secure way to achieve this is to create a cross-account access role and use the sts:AssumeRole API to get short-lived credentials.

  • An application stores images in an S3 bucket. Amazon S3 event notifications are used to trigger a Lambda function that resizes the images. Processing each image takes less than a second. AWS Lambda will handle the additional traffic by scaling out to execute the requests concurrently.

  • An application reads data from an Amazon DynamoDB table. Several times a day, for a period of 15 seconds, the application receives multiple ProvisionedThroughputExceededException errors. This exception should be handled by retrying the failed read requests with exponential backoff.

  • An application uploads photos to an Amazon S3 bucket. Each photo that is uploaded to the S3 bucket must be resized to a thumbnail image by the application. Each thumbnail image is uploaded with a new name in the same S3 bucket. The service a dev can configure to directly process each single S3 event for each S3 object upload is AWS Lambda.

  • When a Dev tries to run an AWS CodeBuild project, it raises an error because the length of all environment variables exceeds the limit for the combined maximum number of characters. The recommended solution is to use AWS Systems Manager Parameter Store to store the large number of environment variables.

  • A dev is working on a web application that runs on Amazon Elastic Container Service (Amazon ECS) and uses an Amazon DynamoDB table to store data. The application performs a large number of read requests against a small set of the table data. The dev can improve the performance of these requests by:
    • Create an Amazon ElastiCache cluster. Configure the application to cache data in the cluster.
    • Increase the read capacity of the DynamoDB table.
  • A company has a three-tier application that is deployed in Amazon ECS. The application is using an Amazon RDS for MySQL DB instance. The application performs more database reads than writes. During times of peak usage, the application's performance degrades. When this performance degradation occurs, the DB instance's ReadLatency metric in Amazon CloudWatch increases suddenly. A dev should modify the application to improve performance by using Amazon ElastiCache to cache query results.

  • A Dev team currently supports an application that uses an in-memory store to save accumulated game results. Individual results are stored in a database. As part of migrating to AWS, the team needs to use automatic scaling. The team knows this will yield inconsistent results. The team should store these accumulated game results in Amazon ElastiCache to BEST allow for consistent results without impacting performance.

  • A dev registered an AWS Lambda function as a target for an Application Load Balancer (ALB) using a CLI command. However, the Lambda function is not being invoked when the client sends requests through the ALB. The Lambda function is not being invoked because the permissions for Elastic Load Balancing to invoke the Lambda function are missing (the console adds them automatically; the CLI does not).

  • DynamoDB uses optimistic concurrency control and conditional writes for consistency.

  • A company is developing an application that will be accessed through the Amazon API Gateway REST API. Registered users should be the only ones who can access certain resources of this API. The token being used should expire automatically and needs to be refreshed periodically. A dev can meet these requirements by creating an Amazon Cognito user pool, configuring the Cognito authorizer in API Gateway, and using the identity or access token.

  • A dev is building an application that reads 90 items of data each second from an Amazon DynamoDB table. Each item is 3 KB in size. The table is configured to use eventually consistent reads. The read capacity units the dev should provision for the table: each 3 KB item rounds up to one 4 KB read unit, so 1 unit x 90 items = 90, halved for eventually consistent reads = 45 RCUs.
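The RCU arithmetic above generalizes to a one-line helper (the function name is my own):

```python
import math

def read_capacity_units(item_size_kb, reads_per_second, eventually_consistent=True):
    # One RCU = one strongly consistent read of up to 4 KB per second;
    # eventually consistent reads cost half as much.
    units_per_read = math.ceil(item_size_kb / 4)
    rcu = units_per_read * reads_per_second
    return rcu / 2 if eventually_consistent else rcu

# 90 reads/sec of 3 KB items, eventually consistent:
# ceil(3/4) = 1 unit per read -> 1 x 90 / 2 = 45 RCUs
rcu = read_capacity_units(3, 90)
```

Note the rounding happens per item, so a 5 KB item costs 2 units even though 5/4 is only 1.25.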


  • nonportable.json
  • portable-stage1.json
  • portable-stage2.json
  • portable-stage3.json












  • 1_userdata.yaml
  • 2_userdata with signal.yaml
  • 3_cfninit with signal.yaml
  • 4_cfninit with signal and cfnhup.yaml

  • When a Simple Queue Service (SQS) message triggers a task that takes 5 minutes to complete, the process that will result in successful processing of the message and remove it from the queue, while minimizing the chances of duplicate processing, is: retrieve the message with an increased visibility timeout, process the message, then delete the message from the queue.
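The receive-process-delete loop above can be sketched with an injected queue client. This is a minimal illustration: the `queue` object and its `receive`/`delete` methods are stand-ins for boto3's `receive_message` (with `VisibilityTimeout`) and `delete_message`, and the fake queue exists only to exercise the loop:

```python
def drain_queue(queue, handler, visibility_timeout=360):
    # Receive with a visibility timeout longer than the 5-minute task,
    # so no other consumer sees the message while it is being processed.
    processed = 0
    while True:
        message = queue.receive(visibility_timeout=visibility_timeout)
        if message is None:
            break
        handler(message["body"])     # the long-running task
        queue.delete(message["id"])  # delete only after success, so a crash
        processed += 1               # mid-task lets SQS redeliver the message
    return processed

class FakeQueue:
    """In-memory stand-in for an SQS queue, for local testing."""
    def __init__(self, bodies):
        self._messages = [{"id": i, "body": b} for i, b in enumerate(bodies)]
    def receive(self, visibility_timeout):
        return self._messages[0] if self._messages else None
    def delete(self, message_id):
        self._messages = [m for m in self._messages if m["id"] != message_id]

handled = []
count = drain_queue(FakeQueue(["job-a", "job-b"]), handled.append)
```

Deleting only after the handler returns is the whole trick: SQS guarantees at-least-once delivery, and the visibility timeout is what keeps "at least once" close to "exactly once".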

  • A Dev is asked to implement a caching layer in front of Amazon RDS. Cached content is expensive to regenerate in case of service failure. The implementation that would work while maintaining maximum uptime is Amazon ElastiCache for Redis in Cluster Mode (replication means cached content survives a node failure).

  • A company is developing a serverless ecommerce web application. The application needs to make coordinated, all-or-nothing changes to multiple items in the company's inventory table in Amazon DynamoDB. The solution that will meet these requirements is to use the TransactWriteItems operation to group the changes and update the items in the table.

  • A company has a web application in an Amazon Elastic Container Service (Amazon ECS) cluster running hundreds of secure services in AWS Fargate containers. The services are in target groups routed by an Application Load Balancer (ALB). Application users log in to the website anonymously, but they must be authenticated using any OpenID Connect protocol-compatible identity provider (IdP) to access the secure services. The authentication approach that would meet these requirements with the LEAST amount of effort is to configure the services to use Amazon Cognito.

  • The programming languages that have an officially supported AWS SDK include PHP and Java.

  • A team of Devs must migrate an application running inside an AWS Elastic Beanstalk environment from a Classic Load Balancer to an Application Load Balancer. The steps that should be taken to accomplish the task using the AWS Management Console are:
    1. Create a new environment with the same configurations except for the load balancer type.
    2. Deploy the same application version as used in the original environment.
    3. Run the swap-environment-cnames action.
By default, Elastic Beanstalk creates an Application Load Balancer for your environment when you enable load balancing with the Elastic Beanstalk console or the EB CLI. It configures the load balancer to listen for HTTP traffic on port 80 and forward this traffic to instances on the same port. You can choose the type of load balancer that your environment uses only during environment creation. Later, you can change settings to manage the behavior of your running environment's load balancer, but you can't change its type.
  • An application trying to upload a 6 GB file to Simple Storage Service receives a 'Proposed upload exceeds the maximum allowed object size.' error message. A possible solution for this is to use the multipart upload API for this object (a single PUT is limited to 5 GB).

  • A Dev is writing an imaging microservice on AWS Lambda. The service is dependent on several libraries that are not available in the Lambda runtime environment. The strategy the Dev should follow to create the Lambda deployment package is to create a ZIP file containing the source code and all of the dependent libraries (installing libraries at runtime would add cold-start latency and assumes network access).

  • A company is running an application on AWS Elastic Beanstalk in a single-instance environment. The company's deployments must avoid any downtime. The deployment option that will meet this requirement is Immutable.

  • A dev is building a static, client-side rendered website that is powered by ReactJS. The code has no server-side generated components and does not need to run any programming languages on the server. However, the code serves static HTML, CSS, and JavaScript to the client on each request. The dev's solution to host the website must maximize performance and cost-effectiveness. The combination of AWS services the dev should use to meet these requirements is Amazon CloudFront and Amazon S3.

  • A dev team decides to adopt a Continuous Integration/Continuous Delivery (CI/CD) process using AWS CodePipeline and AWS CodeCommit for a new application. However, management wants a person to review and approve the code before it is deployed to production. The dev team can add a manual approver to the CI/CD pipeline by adding an approval action to the pipeline, configured to publish to an Amazon SNS topic when approval is required. The pipeline execution will stop and wait for an approval.

  • While developing an application that runs on Amazon EC2 in an Amazon VPC, a Dev identifies the need for centralized storage of application-level logs. The AWS service that can be used to securely store these logs is Amazon CloudWatch Logs.

  • Custom libraries should be utilized in AWS Lambda by installing the libraries locally and including them in the deployment package (or packaging them as a Lambda layer).

  • Given the following AWS CloudFormation template:
        Description: Creates a new Amazon S3 bucket for shared content. Uses a random bucket name to avoid conflicts.
        Resources:
          ContentBucket:
            Type: AWS::S3::Bucket
        Outputs:
          ContentBucketName:
            Value: !Ref ContentBucket
    The MOST efficient way to reference the new Amazon S3 bucket from another AWS CloudFormation template is to add an Export declaration to the Outputs section of the original template and use Fn::ImportValue in the other templates.
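The Export/ImportValue pattern looks roughly like this (the export name is illustrative; exports must be unique per account and region):

```yaml
# Outputs section of the original template:
Outputs:
  ContentBucketName:
    Value: !Ref ContentBucket
    Export:
      Name: SharedContent-BucketName

# Any other template in the same account and region can then reference it,
# e.g. in a resource property:
#   BucketName: !ImportValue SharedContent-BucketName
```

CloudFormation also blocks deletion of the exporting stack while any other stack imports the value, which is part of why this is safer than copy-pasting names.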
Last edited:


  • template1.yaml
  • template2.yaml

  • basicS3bucket.yaml
  • customresource.yaml

  • CloudFormation Custom Resources are used to extend its functionality or integrate it with other systems.

  • When designing a system using CloudFormation which has two distinct parts, infrastructure (a VPC, subnets, gateways, and configuration) and multiple application instances, you should design this using Stack Exports/Imports (cross-stack references).

  • CloudFormation Stack Roles allow identities to deploy infrastructure in a controlled way, beyond their usual permissions.

  • CloudFormation Intrinsic Functions are often used to improve portability and make a template able to adjust itself based on where it's applied.

  • CloudFormation cfn-hup allows EC2 instances to update their configuration if a stack changes.

  • CloudFormation cfn-signal allows an instance to tell CloudFormation when it has finished bootstrapping and configuration.
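cfn-signal is typically paired with a CreationPolicy on the resource. A sketch (the resource name and timeout are illustrative): the stack waits for one success signal before marking the instance CREATE_COMPLETE, or rolls back if the timeout expires.

```yaml
Instance:
  Type: 'AWS::EC2::Instance'
  CreationPolicy:
    ResourceSignal:
      Count: 1
      Timeout: PT15M        # wait up to 15 minutes for the signal
  Properties:
    InstanceType: 't2.micro'
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        # ... bootstrapping work ...
        /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} \
          --resource Instance --region ${AWS::Region}
```

Passing `-e $?` forwards the bootstrap script's exit code, so a failed install signals FAILURE and triggers rollback instead of a silently broken instance.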

  • CloudFormation Change Sets allow it to be integrated into an organisation's change management processes.

  • If you have two stacks which are always applied together (e.g. a VPC stack and an App stack), you should use Nested Stacks.

  • If you need to deploy infrastructure to multiple regions and accounts, you should use CloudFormation StackSets.

  • CloudFormation DependsOn allows you to influence the order in which CFN creates resources.
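A minimal DependsOn sketch (resource names and the AMI ID are placeholders): without it, CFN creates resources in parallel unless a !Ref or !GetAtt already implies an order.

```yaml
Resources:
  ContentBucket:
    Type: 'AWS::S3::Bucket'
  AppInstance:
    Type: 'AWS::EC2::Instance'
    DependsOn: ContentBucket          # wait for the bucket before launching
    Properties:
      InstanceType: 't2.micro'
      ImageId: 'ami-12345example'     # placeholder AMI ID
```

Deletion honors the same edge in reverse: the instance is deleted before the bucket.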







  • A dev is writing an application to analyze the traffic to a fleet of Amazon EC2 instances. The EC2 instances run behind a public Application Load Balancer (ALB). An HTTP server runs on each of the EC2 instances, logging all requests to a log file.
    The dev wants to capture the client public IP addresses. The dev analyzes the log files and notices only the IP address of the ALB. To capture the client public IP addresses in the log file, the dev must configure the HTTP server to log the X-Forwarded-For request header, which the ALB populates with the original client IP.

  • A dev is creating AWS CloudFormation templates to manage an application's deployment in Amazon Elastic Container Service (Amazon ECS) through AWS CodeDeploy. The dev wants to automatically deploy new versions of the application to a percentage of users before the new version becomes available for all users. The dev should manage the deployment of the new version by using a CodeDeploy canary deployment configuration (for example, CodeDeployDefault.ECSCanary10Percent5Minutes) so traffic shifts to the new version in stages.

  • A dev is preparing a deployment package using AWS CloudFormation. The package consists of two separate templates: one for the infrastructure and one for the application. The application has to be inside the VPC that is created from the infrastructure template. The application stack can refer to the VPC by exporting the VPC ID in the Outputs section of the infrastructure template and using the Fn::ImportValue function in the application template (Ref cannot cross stack boundaries).

  • A dev must allow guest users without logins to access an Amazon Cognito-enabled site to view files stored within an Amazon S3 bucket. The dev should meet these requirements by creating a new identity pool, enabling access to unauthenticated identities, and granting access to the AWS resources (unauthenticated guest access is an identity pool feature, not a user pool feature).

  • In a multi-container Docker environment in AWS Elastic Beanstalk, an Amazon ECS task definition is required to configure the container instances in the environment.

  • A front-end web application is using Amazon Cognito user pools to handle the user authentication flow. A dev is integrating Amazon DynamoDB into the application using the AWS SDK for JavaScript. The dev can securely call the API without exposing the access or secret keys by using an Amazon Cognito identity pool to exchange the user pool token for temporary, scoped AWS credentials; credentials should never be hardcoded in client-side code.

  • An organization is using Amazon CloudFront to ensure that its users experience low-latency access to its web application. The organization has identified a need to encrypt all traffic between users and CloudFront, and all traffic between CloudFront and the web application. These requirements can be met by setting the Viewer Protocol Policy to "HTTPS Only" (or "Redirect HTTP to HTTPS") and setting the Origin Protocol Policy to "HTTPS Only"; AWS KMS encrypts data at rest, not traffic in transit.

  • A company wants to implement authentication for its new REST service using Amazon API Gateway. To authenticate the calls, each request must include HTTP headers with a client ID and user ID. These credentials must be compared to authentication data in an Amazon DynamoDB table. What the company MUST do to implement this authentication in API Gateway is implement an AWS Lambda authorizer that references the DynamoDB authentication table.

  • A company has an AWS Lambda function that runs hourly, reads log files that are stored in Amazon S3, and forwards alerts to Amazon Simple Notification Service (Amazon SNS) topics based on content. A dev wants to add a custom metric to the Lambda function to track the number of alerts of each type for each run. The dev needs to log this information in Amazon CloudWatch in a metric under the namespace Lambda/AlertCounts. The dev should modify the Lambda function to meet this requirement with the LEAST operational overhead by adding a call to the PutMetricData API operation, passing the alert counts in the MetricData member with the namespace "Lambda/AlertCounts" (PutMetricAlarm creates alarms, not metrics).

  • A company uses a third-party tool to build, bundle, and package its applications on-premises and store them locally. The company uses Amazon EC2 instances to run its front-end applications. An application can be deployed from the source control system onto the EC2 instances by uploading the bundle to an Amazon S3 bucket and specifying the S3 location when doing a deployment using AWS CodeDeploy.

  • A company needs to ingest terabytes of data each hour from thousands of sources that are delivered almost continually throughout the day. The volume of messages generated varies over the course of the day.
    Messages must be delivered in real time for fraud detection and live operational dashboards. The approach that will meet these requirements is to use Amazon Kinesis Data Streams with the Kinesis Client Library to ingest and deliver messages.

  • An application is running on a cluster of Amazon EC2 instances. While trying to read objects stored within a single Amazon S3 bucket that are encrypted with server-side encryption with AWS KMS managed keys (SSE-KMS), the application receives the following error:
    Service : AWSKMS: Status Code: 400: Code : ThrottlingException
    The combination of steps that should be taken to prevent this failure is:
    • Contact AWS Support to request an AWS KMS rate limit increase.
    • Perform error retries with exponential backoff in the application code (a larger key size does not change the request rate limit).









How to Install Dig on Windows:




  • A meteorological system monitors 600 temperature gauges, obtaining temperature samples every minute and saving each sample to a DynamoDB table. Each sample involves writing 1 KB of data and the writes are evenly distributed over time.
    600 writes / 60 secs = 10 writes/sec. All writes are 1 KB (write throughput is rounded up in 1 KB increments; it is read throughput that uses 4 KB increments).
    10 x 1 = 10 write capacity units.
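The write-side arithmetic mirrors the RCU case but with a 1 KB unit size (helper name is my own):

```python
import math

def write_capacity_units(item_size_kb, writes_per_second):
    # One WCU = one write of up to 1 KB per second; larger items consume
    # one extra unit per additional 1 KB started.
    return math.ceil(item_size_kb) * writes_per_second

# 600 gauges writing 1 KB once a minute = 600 / 60 = 10 writes/sec
# -> 10 x 1 = 10 WCUs
wcu = write_capacity_units(1, 600 // 60)
```

As with reads, the rounding is per item: a 1.5 KB sample would cost 2 WCUs per write, doubling the provisioned capacity needed.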

  • A dev has written an application that runs on Amazon EC2 instances and generates a value every minute. The Dev wants to monitor and graph the values generated over time without logging in to the instance each time. The approach the Dev should use to achieve this goal is to publish each generated value as a custom metric to Amazon CloudWatch using the available AWS SDKs.

  • A three-tier application hosted on AWS uses Amazon RDS for MySQL as its database. A dev must ensure the database credentials are stored and accessed securely. The MOST secure way for the dev to achieve this is to store the credentials in AWS Secrets Manager and retrieve them at runtime; credentials should never be committed to a Git repository.

  • A company experienced partial downtime during the last deployment of a new application. AWS Elastic Beanstalk split the environment's Amazon EC2 instances into batches and deployed a new version one batch at a time after taking them out of service. Therefore, full capacity was not maintained during deployment. The dev plans to release a new version of the application, and is looking for a policy that will maintain full capacity and minimize the impact of a failed deployment. The deployment policy the dev should use is Rolling with an Additional Batch.

  • A dev is testing a Docker-based application that uses the AWS SDK to interact with Amazon DynamoDB. In the local development environment, the application has used IAM access keys. The application is now ready for deployment onto an ECS cluster. In production, the application should authenticate with AWS services by using an ECS task IAM role configured for the application.

  • A company processes incoming documents from an Amazon S3 bucket. Users upload documents to the S3 bucket using a web user interface. Upon receiving files in S3, an AWS Lambda function is invoked to process the files, but the Lambda function times out intermittently. If the Lambda function is configured with the default settings, when there is a timeout exception the S3 event is discarded after the event is retried twice.

  • A dev wants to insert a record into an Amazon DynamoDB table as soon as a new file is added to an Amazon S3 bucket. The necessary step is to configure an S3 event notification to invoke a Lambda function that inserts records into DynamoDB.
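A minimal sketch of the Lambda side of this setup, assuming an s3:ObjectCreated event notification on the bucket. The item shape is hypothetical, and the DynamoDB Table resource is injected as `table` so the handler can be exercised locally; in a real function it would be created once with boto3 at module scope.

```python
import urllib.parse

def lambda_handler(event, context, table=None):
    """For each S3 record in the event, insert one item into DynamoDB."""
    items = []
    for record in event.get("Records", []):
        obj = record["s3"]["object"]
        item = {
            "objectKey": urllib.parse.unquote_plus(obj["key"]),  # keys arrive URL-encoded
            "bucket": record["s3"]["bucket"]["name"],
            "size": obj.get("size", 0),
        }
        if table is not None:
            table.put_item(Item=item)  # one DynamoDB record per new file
        items.append(item)
    return items
```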

  • A dev is building an application on Amazon EC2. The dev encountered an "Access Denied" error on some of the API calls to AWS services while testing. The dev needs to modify permissions that have already been given to the instance. These requirements can be met with minimal changes and minimum downtime by updating the attached IAM role, adding the needed permissions.

  • A dev receives the following error message when trying to launch or terminate an Amazon EC2 instance using a boto script:
    boto.exception.BotoServerError: BotoServerError: 503 Service Unavailable
    <?xml version="1.0" encoding="UTF-8"?>
    <Response><Errors><Error><Message>Request limit exceeded.</Message></Error></Errors><RequestID>c0eefd95-64c4-5812-c839-edff0c7a7dfe</RequestID></Response>
    To correct this error the dev should implement an exponential backoff algorithm and retry the requests, reducing the rate of API calls (upgrading the CLI does not raise EC2 request limits).

  • A company hosts a monolithic application on Amazon EC2 instances. The company starts converting some features of the application to a serverless architecture by using Amazon API Gateway and AWS Lambda. After the migration, some users report problems with payment processing. Upon inspection, a dev discovers that the Lambda function that calls the external payment API is taking longer than expected. Therefore, the API Gateway requests are timing out. To resolve this issue in the serverless architecture the dev should use Amazon Simple Queue Service (Amazon SQS) with API Gateway and the Lambda function to call the payment API asynchronously.

  • A Dev has an application that can upload tens of thousands of objects per second to Amazon S3 in parallel within a single AWS account. As part of new requirements, data stored in S3 must use Server Side Encryption with AWS KMS (SSE-KMS). After making this change, performance of the application is slower. The MOST likely cause of the application latency is that the AWS KMS API request limit is lower than needed to achieve the desired performance.

  • A Dev wants to make the log data of an application running on an EC2 instance available to systems administrators. To enable monitoring of this data in Amazon CloudWatch, install the Amazon CloudWatch Logs agent on the EC2 instance that the application is running on.

  • A dev has written an Amazon Kinesis Data Streams application. As usage grows and traffic increases over time, the application regularly receives ProvisionedThroughputExceededException error messages. To resolve the error the dev should increase the:
    • delay between the GetRecords call and the PutRecords call.
    • number of shards in the data stream.
  • A dev is building a WebSocket API using Amazon API Gateway. The payload sent to this API is JSON that includes an action key. This key can have three different values: create, update, and remove. The dev must integrate with different routes based on the value of the action key of the incoming JSON payload. The dev can accomplish this task with the LEAST amount of configuration by setting the value of the route selection expression to $request.body.action.
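With boto3's apigatewayv2 client, the route selection expression is set when the WebSocket API is created. A hedged sketch (the API name is hypothetical, and the client is passed in so the call can be inspected without AWS access):

```python
def create_ws_api(apigw):
    """Create a WebSocket API that routes on the 'action' key of the JSON body.
    `apigw` is a boto3 'apigatewayv2' client (injected for testability)."""
    return apigw.create_api(
        Name="orders-ws",  # hypothetical name
        ProtocolType="WEBSOCKET",
        # API Gateway evaluates this expression against each incoming payload
        # and uses the result ("create" / "update" / "remove") as the route key.
        RouteSelectionExpression="$request.body.action",
    )
```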



  • A company is using AWS CodePipeline to deliver one of its applications. The delivery pipeline is triggered by changes to the master branch of an AWS CodeCommit repository and uses AWS CodeBuild to implement the test and build stages of the process and AWS CodeDeploy to deploy the application. The pipeline has been operating successfully for several months and there have been no modifications. Following a recent change to the application's source code, AWS CodeDeploy has not deployed the updated application as expected. The possible causes are:
    • The change was not made in the master branch of the AWS CodeCommit repository.
    • One of the earlier stages in the pipeline failed and the pipeline has terminated.
  • The default region chosen when making an API call with an AWS SDK is us-east-1.
    This applies only when using a client builder to access AWS services. AWS clients created by using the client constructor will not automatically determine the region from the environment and will instead use the default SDK region (us-east-1).

  • An Amazon DynamoDB table uses a Global Secondary Index (GSI) to support read queries. The primary table is write-heavy, whereas the GSI is used for read operations. Looking at Amazon CloudWatch metrics, the Dev notices that write operations to the primary table are throttled frequently under heavy write activity. However, write capacity units to the primary table are available and not fully consumed. The table is being throttled because the GSI's write capacity units are under-provisioned.

  • An ecommerce application is using Amazon Simple Notification Service (Amazon SNS) with an AWS Lambda subscription to save all new orders into an Amazon DynamoDB table. The company wants to record all the orders that are more than a certain amount of money in a separate table. The company wants to avoid changes to the processes that post orders to Amazon SNS or to the current Lambda function that saves the orders to the DynamoDB table. A dev can implement this feature with the LEAST change to the existing application by creating a second Lambda function that saves the qualifying orders to the separate table, and subscribing it to the SNS topic with a subscription filter policy on the order amount.

  • An application is running on an EC2 instance. The Dev wants to store an application metric in Amazon CloudWatch. The best practice for implementing this requirement is to use the CloudWatch PutMetricData API call to submit a custom metric to CloudWatch, launching the EC2 instance with the IAM role required to permit the API call.

  • After launching an instance intended to serve as a Network Address Translation (NAT) device in a public subnet, the route tables are modified so that the NAT device is the target of internet-bound traffic from the private subnet. When an instance in the private subnet tries to make an outbound connection to the internet, it is unsuccessful. The issue can be resolved by disabling the Source/Destination Check attribute on the NAT instance.

  • A dev wants to use React to build a web and mobile application. The application will be hosted on AWS. The application must authenticate users and then allow users to store and retrieve files that they own. The dev wants to use Facebook for authentication. What will MOST accelerate the development and deployment of this application on AWS is the AWS Amplify CLI.

  • An attempt to store an object in the US-STANDARD region in Amazon S3 receives a confirmation that it has been successfully stored, yet an immediate follow-up API call to read the object reports that it does not exist. This is because US-STANDARD historically used eventual consistency, so it could take time for an object to become readable in a bucket. (Since December 2020, S3 provides strong read-after-write consistency.)

  • A dev is refactoring a monolithic application. The application takes a POST request and performs several operations. Some of the operations are in parallel while others run sequentially. These operations have been refactored into individual AWS Lambda functions. The POST request will be processed by Amazon API Gateway. The dev should invoke the Lambda functions in the required sequence by using AWS Step Functions: model the sequential and parallel operations as a state machine and have API Gateway start an execution of it.

  • Services that incur no additional cost on the AWS platform are Auto Scaling and CloudFormation (you pay only for the underlying resources they manage).

  • An application under development is required to store hundreds of video files. The data must be encrypted within the application prior to storage, with a unique key for each video file. The Dev should code the application to use the KMS GenerateDataKey API to get a data key, encrypt the data with the data key, and store the encrypted data key alongside the encrypted data.
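The envelope-encryption flow can be sketched as follows. The XOR "cipher" is a stdlib-only placeholder for a real symmetric cipher such as AES-GCM, and the KMS client is injected so the pattern runs without AWS; this illustrates the pattern, it is not production crypto:

```python
def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Placeholder symmetric cipher for illustration only -- use AES-GCM in practice."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encrypt_video(kms, key_id: str, plaintext: bytes) -> dict:
    """Envelope encryption: request a fresh data key per file, encrypt locally,
    and keep only the *encrypted* copy of the data key next to the data."""
    resp = kms.generate_data_key(KeyId=key_id, KeySpec="AES_256")
    ciphertext = xor_cipher(resp["Plaintext"], plaintext)
    # KMS Decrypt can later unwrap CiphertextBlob to recover the data key.
    return {"ciphertext": ciphertext, "encrypted_data_key": resp["CiphertextBlob"]}
```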

  • A company has an application that logs all information to Amazon S3. Whenever there is a new log file, an AWS Lambda function is invoked to process the log files. The code works, gathering all of the necessary information. However, when checking the Lambda function logs, duplicate entries with the same request ID are found. The duplicate entries are caused by the Lambda function failing and the Lambda service retrying the invocation after a delay.

  • A company is building a compute-intensive application that will run on a fleet of Amazon EC2 instances. The application uses attached Amazon EBS disks for storing data. The application will process sensitive information and all the data must be encrypted. To ensure the data is encrypted on disk without impacting performance, a dev should configure the Amazon EC2 instance fleet to use encrypted EBS volumes for storing data.

  • A dev team wants to immediately build and deploy an application whenever there is a change to the source code. Approaches that could be used to trigger the deployment are to store the source code in:
    • a versioned Amazon S3 bucket, configuring AWS CodePipeline to start whenever a file in the bucket changes
    • an AWS CodeCommit repository, configuring AWS CodePipeline to start whenever a change is committed to the repository
  • Providing AWS consulting services for a company developing a new mobile application that will leverage Amazon SNS Mobile Push for push notifications. In order to send direct notification messages to individual devices, each device registration identifier or token needs to be registered with SNS; however, the devs are not sure of the best way to do this. They should call the CreatePlatformEndpoint API to register the device tokens.

  • A dev at a company writes an AWS CloudFormation template. The template refers to subnets that were created by a separate AWS CloudFormation template that the company's network team wrote. When the dev attempts to launch the stack for the first time, the launch fails. Template coding mistakes that could have caused this failure are: the dev's template does not use the Fn::ImportValue intrinsic function to refer to the subnets, and the network team's template does not export the subnets in its Outputs section.

  • A company wants to migrate an existing web application to AWS. The application consists of two web servers and a MySQL database. The company wants the application to automatically scale in response to demand. The company also wants to reduce its operational overhead for database backups and maintenance. The company needs the ability to deploy multiple versions of the application concurrently. The MOST operationally efficient solution that meets these requirements is Deploy the application to AWS Elastic Beanstalk. Migrate the database to an Amazon RDS Multi-AZ DB instance.

  • A dev must modify an Alexa skill backed by an AWS Lambda function to access an Amazon DynamoDB table in a second account. A role in the second account has been created with permissions to access the table. The table should be accessed by modifying the Lambda function's execution role permissions so the function can assume the new role.










  • Route 53 routing policies can be used to:
    • Weighted - distribute load over record sets in a controlled way
    • Failover - implement simple high availability
  • An A record of the Alias type is generally used to point at AWS resources

  • Private hosted zones are available only within one or more VPCs

  • The Route 53 health checks feature can help improve the availability of a service to customers

  • CloudFront is a global CDN capable of caching static & dynamic content

  • An Origin Access Identity (OAI) and bucket policies are used together to ensure S3 buckets can only be accessed via CloudFront

  • When using AWS Certificate Manager (ACM), certificates must be created in the same region as the service that uses them

  • To use an ACM certificate with CloudFront, it must be created in the us-east-1 region

  • ACM FULLY supports the CloudFront and API Gateway services

  • For a TCP-based application used globally, to improve network performance for global users, the service that supports this requirement is AWS Global Accelerator.

  • A Dev is using the AWS CLI, but when running list commands on a large number of resources it times out. The time-out can be avoided by using pagination.
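With the AWS SDKs the same idea looks like this (a sketch using boto3's paginator interface; the client is injected so the loop can be tested offline, and the operation name is just an example):

```python
def list_all_functions(lam):
    """Page through a potentially long listing instead of one huge call.
    `lam` is a boto3 Lambda client; get_paginator handles the continuation
    token (NextMarker) for you, page by page."""
    names = []
    for page in lam.get_paginator("list_functions").paginate():
        names.extend(f["FunctionName"] for f in page["Functions"])
    return names
```

On the CLI, the equivalent controls are `--page-size`, `--max-items`, and `--starting-token`.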

  • A company wants to implement Continuous Integration (CI) for its workloads on AWS. The company wants to trigger unit tests in its pipeline for commits to its code repository, and wants to be notified of failure events in the pipeline. These requirements can be met by storing the source code in AWS CodeCommit, creating a CodePipeline to automate unit testing, and using Amazon CloudWatch to trigger notification of failure events.

  • A dev has built a market application that stores pricing data in Amazon DynamoDB with Amazon ElastiCache in front. The prices of items in the market change frequently. Sellers have begun complaining that, after they update the price of an item, the price does not actually change in the product listing. This issue could be caused by the cache not being invalidated when the price of an item is changed.

  • A company is developing a new online game that will run on top of Amazon ECS. Four distinct Amazon ECS services will be part of the architecture, each requiring specific permissions to various AWS services. The company wants to optimize the use of the underlying Amazon EC2 instances by bin packing the containers based on memory reservation. The configuration that would allow the Dev team to meet these requirements MOST securely is to create four distinct IAM roles, each containing the required permissions for the associated ECS service, then configure each ECS task definition to reference the associated IAM role.

  • An application is designed to use Amazon SQS to manage messages from many independent senders. Each sender's messages must be processed in the order they are received. The SQS feature the Dev should implement is a FIFO queue, with each sender configured with a unique MessageGroupId.

  • Amazon ElastiCache, DynamoDB, and Simple Storage Service (S3) are key/value stores.

  • A dev is trying to get data from an Amazon DynamoDB table called demoman-table. The dev configured the AWS CLI to use a specific IAM user's credentials and executed the following command:
    aws dynamodb get-item --table-name demoman-table --key '{"id": {"N": "1993"}}'
    The command returned errors and no rows were returned. The MOST likely cause of these issues is that the IAM user needs an associated policy with read access to demoman-table.

  • An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours. Managers run reports for ranges of names working in their office. One query is "Return all Items in this office for names starting with A through E". Table configuration will result in the lowest impact on provisioned throughput for this query is Configure the table to have a range index on the name attribute, and a hash index on the office identifier.
    Partition key and sort key - Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. DynamoDB uses the partition key value as input to an internal hash function. The output from the hash function determines the partition (physical storage internal to DynamoDB) in which the item will be stored. All items with the same partition key value are stored together, in sorted order by sort key value.

  • When a message is retrieved from a queue in Amazon SQS, the length of time for which the message is inaccessible to other consumers is, by default, 30 seconds.
    Visibility timeout: minimum 0 seconds, maximum 12 hours.
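A sketch of the consume-process-delete loop, with the visibility timeout raised from the 30-second default; the queue URL and handler are assumptions, and the SQS client is injected so the flow can be tested without AWS:

```python
def process_one(sqs, queue_url, handler):
    """Receive at most one message, hand it to `handler`, then delete it.
    While the message is in flight, the visibility timeout hides it from
    every other consumer; here we override the 30-second default to 60 s."""
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=1,
        VisibilityTimeout=60,
    )
    messages = resp.get("Messages", [])
    if not messages:
        return False
    msg = messages[0]
    handler(msg["Body"])  # must finish within the visibility timeout
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
    return True
```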

  • An application is expected to process many files. Each file takes four minutes to process in each AWS Lambda invocation. The Lambda function does not return any important data. The fastest way to process all the files is to make asynchronous (Event) Lambda invocations and process the files in parallel.
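A fan-out sketch, assuming a function named process-file and a hypothetical payload shape; the Lambda client is injected so the loop can be tested offline:

```python
import json

def fan_out(lam, function_name, file_keys):
    """Fire one asynchronous invocation per file. With InvocationType='Event',
    invoke() queues the request and returns at once (HTTP 202), so the files
    are processed in parallel by separate Lambda executions."""
    for key in file_keys:
        lam.invoke(
            FunctionName=function_name,
            InvocationType="Event",
            Payload=json.dumps({"file": key}),
        )
```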

  • A dev must extend an existing application that is based on the AWS Serverless Application Model (AWS SAM). The dev has used the AWS SAM CLI to create the project. The project contains several AWS Lambda functions. To redeploy the AWS SAM application, the dev must use the sam package and sam deploy commands (sam init only scaffolds a new project).

  • A Dev is writing transactions into a DynamoDB table called "SystemUpdates" that has 5 write capacity units. The highest read throughput is strongly consistent reads of 5 read capacity units reading items that are 4 KB in size (5 × 4 KB = 20 KB/sec).

  • A company recently migrated its web, application, and NoSQL database tiers to AWS. The company is using Auto Scaling to scale the web and application tiers. More than 95% of the Amazon DynamoDB requests are repeated read requests. The DynamoDB tier can be scaled to cache these repeated requests by using Amazon DynamoDB Accelerator (DAX).

  • An application is using a custom library to make HTTP calls directly to AWS service endpoints. The application is experiencing transient errors that are causing processes to stop when each error is first encountered. A request has been made to make the application more resilient by adding error retries and exponential backoff. A developer should implement the changes with MINIMAL custom code by Use an AWS SDK and set retry-specific configurations.

  • Amazon EBS-backed instances can be stopped and restarted.

  • A company needs a fully-managed source control service that will work in AWS. The service must ensure that revision control synchronizes multiple distributed repositories by exchanging sets of changes peer-to-peer. All users need to work productively even when not connected to a network. The AWS CodeCommit source control service should be used: it is managed Git, and Git is distributed and works offline.


  • A Dev team would like to migrate their existing application code from a GitHub repository to AWS CodeCommit. Before they can migrate a cloned repository to CodeCommit over HTTPS, a set of Git credentials generated from IAM needs to be created.

  • A dev is creating an AWS Lambda function that generates a new file each time it runs. Each new file must be checked into an AWS CodeCommit repository hosted in the same AWS account. The dev should accomplish this by using an AWS SDK in the Lambda function to instantiate a CodeCommit client, then invoking the put_file operation to add the new file to the repository.

  • A Dev accesses AWS CodeCommit over SSH. The SSH keys configured to access AWS CodeCommit are tied to a user with the following permissions:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [ ... ],
          "Resource": "*"
        }
      ]
    }
    The Dev needs to create/delete branches. Based on the principle of least privilege, the specific IAM permissions that need to be added are "codecommit:CreateBranch" and "codecommit:DeleteBranch".

  • A company has implemented AWS CodePipeline to automate its release pipelines. The dev team is writing an AWS Lambda function that will send notifications for state changes of each of the actions in the stages. The step that must be taken to associate the Lambda function with the event source is to create an Amazon CloudWatch Events (EventBridge) rule that uses CodePipeline as the event source, with the Lambda function as the target.

  • A dev team is creating a new application designed to run on AWS. While the test and production environments will run on Amazon EC2 instances, the devs will each run their own environment on their laptops. The simplest and MOST secure way to access AWS services from the local development machines is to assume an IAM role and execute API calls using the role's temporary credentials.

  • A dev is working on a serverless application that uses Amazon API Gateway, AWS Lambda functions written in Python, and Amazon DynamoDB. The combination of steps the dev should take so that the Lambda functions can be debugged in the event of application failures is: ensure that the execution role for the Lambda function has access to write to Amazon CloudWatch Logs, and use the Amazon CloudWatch metric for Lambda errors to create a CloudWatch alarm.

  • An application processes, in real time, millions of events that are received through an API. The service that could be used to allow multiple consumers to process the data concurrently and MOST cost-effectively is Amazon Kinesis Data Streams.

  • An organization is storing large files in Amazon S3, and is writing a web application to display metadata about the files to end-users. Based on the metadata, a user selects an object to download. The organization needs a mechanism to index the files and provide single-digit-millisecond-latency retrieval for the metadata. The AWS service that should be used to accomplish this is Amazon DynamoDB.
    Amazon DynamoDB is a fast and flexible NoSQL database service for all applications that need consistent, single-digit millisecond latency at any scale. It is a fully managed database and supports both document and key-value data models. Its flexible data model and reliable performance make it a great fit for mobile, web, gaming, ad-tech, Internet of Things (IoT), and many other applications.

  • A Dev wants to enable AWS X-Ray for a secure application that runs in an Amazon ECS environment. Steps will enable X-Ray are:
    • Create a Docker image that runs the X-Ray daemon.
    • Add instrumentation to the application code for X-Ray.
    • Configure and use an IAM role for tasks.
  • A company is adding items to an Amazon DynamoDB table from an AWS Lambda function that is written in Python. A dev needs to implement a solution that inserts records in the DynamoDB table and performs automatic retry when the insert fails. Solution meets these requirements with MINIMUM code changes is Use the AWS Software Development Kit (SDK) for Python (boto3) to call the PutItem operation.

Application Modernization with AWS:

Enterprise Investment Trend on IT Infrastructure:


Why customers move to AWS:
  • Shift from lump-sum spending (CapEx) to installments or a monthly bill (OpEx)
  • Pay only for actual hourly usage (pay as you grow) / easy to scale
  • Greater speed and agility, with services ready to use in as little as 5 minutes
  • Expand globally (scale globally)
  • Reduce the IT department's workload
  • Environmentally friendly
AWS customers in Thailand:
  • SCB abacus, SanSiRi, ANanDa, CPF, KBTG, PrukSa
  • ไทยวิวัฒน์, SCG, AIS, SET, King PoWer, CAT, TisCo, dtac, CounTer SerVice
  • BBTV, trueMart, CenTral Group, Jamsai
  • SingHa ESTate, MaJor, TV Direct, ascend
  • meb, true money, Eventpass, Snocko, 2C2P
  • WiseSight, KMITL, bitkub, QueQ, acommerce
  • wongnai, omise, AMPOS, Sellsuki
  • eko, FlowAcCount, eatigo, OpenDurian, sunday, Pomelo
Broad and Deep Functionality - Periodic Table of Amazon Web Services:

Application migration strategies:

Know your applications and understand your options:
  • Shrink your resource footprint: Retire, SaaS
  • Move to AWS: lift and shift (re-host) - use AWS services without modifying code yet
  • Modernize on AWS: re-factor (re-architect), re-platform
Why re-host?:
  • The team gradually learns how to work on the cloud
  • Accelerates the path to the cloud
  • Presents right-sizing opportunities
    • Cost savings
  • Few obstacles, and only a little extra knowledge required
  • Buys time to design a new architecture
Benefits and limitations of re-hosting:
  • Shorter time to value
  • Operational flexibility: redundancy is possible, but you still build it yourself
  • Cost savings, though less than with serverless, and horizontal scaling is limited
  • Increased productivity through simpler IT resource provisioning
  • Avoids the expense of buying new hardware every 5 years
  • Business agility
Benefits and limitations of re-architecting:
  • Operating costs: reduce operations work with managed services
  • Productivity up 30-50%, freeing focus for business outcomes
  • Cost savings: no hardware purchases every 5 years, no maintenance or management fees; automatic scaling, pay only for what you use
  • Operational flexibility with AWS managed services; redundancy comes built in (highly available: HA)
  • Faster, because releases become more frequent and predictable
  • Initially expensive and time-consuming due to the migration to microservices, but worth it in the long run

Head to head, microservices vs. monolith - which suits our company better?:
  • APIs are the front door of microservices, so each dev team must coordinate carefully up front; AWS uses CloudTrail to collect the logs
Serverless is an operating model that spans many types of services:
  • Compute: AWS Lambda, AWS Fargate
  • Data stores: Amazon S3, Amazon Aurora Serverless, Amazon DynamoDB
  • Integration: Amazon API Gateway, Amazon SQS, Amazon SNS, AWS Step Functions, AWS AppSync

Cr: SiS :cool:


  • A Dev team wants to instrument their code to provide more detailed information to AWS X-Ray than simple outgoing and incoming requests. This will generate large amounts of data, so the Dev team wants to implement indexing so they can filter the data. To achieve this, the Dev team should add annotations to the segment document in their code.

  • An application writes items to an Amazon DynamoDB table. As the application scales to thousands of instances, calls to the DynamoDB API generate occasional ThrottlingException errors. The application is coded in a language incompatible with the AWS SDK. The error should be handled by adding exponential backoff to the application logic.
    The SDKs automatically implement exponential backoff. If not using the AWS SDKs, add your own backoff logic to the application code.
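A minimal backoff sketch for code that cannot use an AWS SDK; ThrottledError stands in for whatever throttling error your HTTP layer raises, and the delay parameters are illustrative:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for the service's throttling exception."""

def call_with_backoff(operation, max_attempts=5, base=0.05, cap=2.0):
    """Retry `operation`, sleeping base * 2**attempt (with full jitter,
    capped) between attempts -- the backoff the AWS SDKs do for you."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # exhausted the retry budget
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```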

  • An application needs to encrypt data that is written to Amazon S3, where the keys are managed in an on-premises data center and the encryption is handled by S3. The type of encryption that should be used is server-side encryption with customer-provided keys (SSE-C); S3-managed keys would not let the company manage the keys on-premises.

  • A Lambda function processes data before sending it to a downstream service. Each piece of data is approximately 1 MB in size. After a security audit, the function is now required to encrypt the data before sending it downstream. Because the KMS Encrypt API accepts at most 4 KB of data, the required approach is to call GenerateDataKey and use the returned data key to encrypt the 1 MB payload locally (envelope encryption).

  • A company runs an ad-supported photo-sharing website using S3 to serve photos to visitors of the site. At some point it finds that other sites have been linking to the photos on its site, causing loss to the business. An effective method to mitigate this is to remove public read access and use signed URLs with expiry dates.

  • A gaming application stores scores for players in an Amazon DynamoDB table that has four attributes: user_id, user_name, user_score, and user_rank. Users are allowed to update only their names. A user is authenticated by web identity federation. The set of conditions that should be added to the policy attached to the role for the DynamoDB PutItem API call restricts the leading (partition) key to the federated user's ID and the writable attributes to the user's name:
    "Condition": {
      "ForAllValues:StringEquals": {
        "dynamodb:LeadingKeys": [ ... ],
        "dynamodb:Attributes": [ ... ]
      }
    }

  • An application uses Amazon Kinesis Data Streams to ingest and process large streams of data records in real time. Amazon EC2 instances consume and process the data from the shards of the Kinesis data stream by using Amazon Kinesis Client Library (KCL). The application handles the failure scenarios and does not require standby workers. The application reports that a specific shard is receiving more data than expected. To adapt to the changes in the rate of data flow, the 'hot' shard is resharded. Assuming that the initial number of shards in the Kinesis data stream is 4, and after resharding the number of shards increased to 6, the maximum number of EC2 instances that can be deployed to process data from all the shards is 6.
    Typically, when you use the KCL, you should ensure that the number of instances does not exceed the number of shards (except for failure standby purposes). Each shard is processed by exactly one KCL worker and has exactly one corresponding record processor, so you never need multiple instances to process one shard. However, one worker can process any number of shards, so it's fine if the number of shards exceeds the number of instances.

  • Features that can be used to restrict access to data in S3 are S3 bucket policies and ACLs on the bucket or the object.

  • A Dev has implemented a Lambda function that needs to add new customers to an RDS database and that is expected to run hundreds of times per hour. The Lambda function is configured to use 512 MB of RAM and is based on the following pseudocode:

    def lambda_handler(event, context):
        db = database.connect()
        db.statement('INSERT INTO Customers (CustomerName) VALUES (')

    After testing the Lambda function, the Dev notices that the Lambda execution time is much longer than expected. To improve performance, the Dev should move the database connect and close statements out of the handler and place the connection in the global scope.
    Lambda best practices - Take advantage of execution context reuse to improve the performance of the function. Make sure any externalized configuration or dependencies that the code retrieves are stored and referenced locally after initial execution. Limit the re-initialization of variables/objects on every invocation; instead use static initialization/constructors, global/static variables, and singletons. Keep alive and reuse connections (HTTP, database, etc.) that were established during a previous invocation.
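The execution-context reuse described above can be sketched like this; `database.connect` from the pseudocode is replaced by an injected `connect` function (a stand-in for your driver, e.g. pymysql.connect) so the caching behavior is testable without a database:

```python
_db = None  # lives in module (global) scope, survives warm invocations

def get_connection(connect):
    """Return a cached connection, creating it only on a cold start."""
    global _db
    if _db is None:
        _db = connect()
    return _db

def lambda_handler(event, context, connect):
    db = get_connection(connect)  # no reconnect on warm invocations
    # hypothetical driver call; parameterized to avoid SQL injection
    db.execute("INSERT INTO Customers (CustomerName) VALUES (%s)", (event["name"],))
```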

Observability, i.e. monitoring, also matters:
  • Logs: Amazon CloudWatch Logs
  • Metrics: Amazon CloudWatch metrics
  • Traces: AWS X-Ray (Application Performance Monitoring: APM)
The effectiveness of using CloudWatch & X-Ray together:
  • Start by collecting the logs, statuses, and metrics that occur
  • These values can be monitored from dashboards, either automatic or custom, so you know what action to take next
  • Then analyze them to plan workload deployment, or drive Auto Scaling by using a metric as the trigger
  • Analyze: Application Insights, X-Ray Analytics, Container Insights, Logs Insights, Application Insights for .NET & SQL
AWS X-Ray tracing:

Monolith vs Microservice development lifecycle:


What AWS DevOps tools help with:
  • Moving to microservices; "two-pizza teams" - splitting into small teams, each small enough to be fed by two pizzas (about 10 people, no more; the more people, the more friction)
  • Fewer mistakes and less manual process by automating everything
  • Standardized tools that integrate with each other without problems
  • Templates and governance policies
  • Infrastructure as Code (IaC)
AWS Developer Tools for CI/CD:


AWS Cloud9 for development on the cloud: a cloud IDE for writing, running, and debugging code:
  • Code in the browser
  • Start new projects quickly
  • Code together in real time
  • Build serverless applications with ease
  • Direct terminal access
AWS CodeCommit:
  • A secure, highly scalable managed source control service - a private Git repository
  • Works with existing Git tools
  • Integrates with AWS services such as IAM, CloudWatch Events, AWS KMS, and Amazon SNS
  • No hardware to manage
  • Highly available (HA) (backed by S3)
AWS CodeBuild:
  • A fully managed build service that compiles source code, runs tests, and produces software packages
  • Scales continuously and processes multiple builds concurrently
  • No servers to manage
  • Pay by the minute only for the compute resources used
  • Monitor builds with CloudWatch Events
AWS CodeDeploy:
  • Automatically deploys code to any instance, including Lambda
  • Handles the complexity of updating applications
  • Avoids downtime during application deployment
  • Rolls back automatically if an error is detected
  • Deploys to Amazon EC2, Amazon ECS, AWS Lambda, or on-premises servers
Cr: SiS :cool:


  • A dev wants to build an application that will allow new users to register and create new user accounts. The application must also allow users with social media accounts to log in using their social media credentials. Amazon Cognito user pools can be used to meet these requirements.

  • A Dev must trigger an AWS Lambda function based on item lifecycle activity in an Amazon DynamoDB table. The Dev can create the solution by enabling a DynamoDB stream and triggering the Lambda function synchronously from the stream.
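A minimal sketch of the Lambda side, assuming the documented DynamoDB Streams record layout (`Records`, `eventName`, `dynamodb.Keys`); the handler logic here is illustrative:

```python
def lambda_handler(event, context):
    # Each record describes one item-lifecycle change from the stream:
    # eventName is INSERT, MODIFY, or REMOVE.
    changes = []
    for record in event.get("Records", []):
        changes.append({
            "action": record["eventName"],
            "keys": record["dynamodb"]["Keys"],
        })
    return changes
```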

  • EC2 instances are launched from Amazon Machine Images (AMIs). A given public AMI can only be used to launch EC2 instances in the same AWS region as the AMI is stored.
AWS Certified SysOps Administrator - Associate:





  • A user has created a VPC with public and private subnets using the VPC wizard. The VPC associates the main route table with the private subnet and a custom route table with the public subnet.
    A Virtual Private Cloud (VPC) is a virtual network dedicated to the user's AWS account. A user can create a subnet within a VPC and launch instances inside that subnet. If the user has created a public and a private subnet, the instances in the public subnet can receive inbound traffic directly from the internet, whereas the instances in the private subnet cannot. If these subnets are created with the wizard, AWS will also create a NAT instance. The VPC has an implied router, and the VPC wizard updates the main route table used with the private subnet, creates a custom route table, and associates it with the public subnet.

  • The Accounting department would like to receive billing updates more than once a month, in a format that can easily be viewed with a spreadsheet application. This request can be fulfilled by setting AWS Cost and Usage Reports to publish bills daily to an Amazon S3 bucket in CSV format.

  • A company in a highly regulated industry has just migrated an Amazon EC2 based application to AWS. For compliance reasons, all network traffic data between the servers must be captured and retained. The solution that will accomplish this with the LEAST amount of effort is to set up flow logs at the VPC level and configure Amazon S3 as the destination.

  • A database runs on an EC2 instance, with the data stored on Elastic Block Store (EBS) for persistence. At times throughout the day, there is large variance in the response times of database queries. Looking into the instance with the iostat command shows a lot of wait time on the disk volume that the database's data is stored on. While maintaining the current persistence of the data, the performance of the database's storage can be improved by moving the database to an EBS-optimized instance and using Provisioned IOPS EBS.

  • A company issued SSL certificates to its users, and needs to ensure the private keys that are used to sign the certificates are encrypted. The company needs to be able to store the private keys and perform cryptographic signing operations in a secure environment. Service should be used to meet these requirements is AWS CloudHSM.
    Import an Existing Private Key: a private key and a corresponding SSL/TLS certificate used for HTTPS on a web server may already exist. If so, that key can be imported into an HSM by doing the following:
    To import an existing private key into an HSM:
    1. Connect to Amazon EC2 client instance. If necessary, copy existing private key and certificate to the instance.
    2. Run the command to start the AWS CloudHSM client.
  • Pricing is per instance-second or per instance-hour consumed for EC2 instances.
    In AWS, you pay only for what you use. EC2 pricing depends on the instance type and the operating system of the AMI. For example, Spot, Reserved, and On-Demand instances can be billed per second, while Dedicated instances are billed per hour. Linux instances can be billed per second, but Microsoft Windows instances are billed per hour.
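The per-second vs per-hour distinction can be sketched numerically. The 60-second minimum for per-second billing is an assumption based on AWS's published Linux billing rules, and the function names and rates are illustrative:

```python
import math

def linux_cost(seconds, hourly_rate):
    # Per-second billing with a 60-second minimum per running period.
    billed_seconds = max(seconds, 60)
    return billed_seconds * hourly_rate / 3600

def windows_cost(seconds, hourly_rate):
    # Per-hour billing: partial hours are rounded up to a full hour.
    return math.ceil(seconds / 3600) * hourly_rate
```

For a 90-second run at $3.60/hour, per-second billing charges 90 seconds' worth, while per-hour billing charges a full hour.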

  • In Amazon CloudFront, if Logging is set to On, the access logs are stored in the Amazon S3 bucket that was chosen for CloudFront to store access logs in.
    If logging is enabled, CloudFront records information about each end-user request for an object and stores the files in the specified Amazon S3 bucket.

  • A user has configured ELB with two EBS backed instances. The user has stopped the instances for 1 week to save costs. The user restarts the instances after 1 week. The instances will automatically get registered with ELB.

  • A server with a 500GB Amazon EBS data volume. The volume is 80% full. Need to back up the volume at regular intervals and be able to re-create the volume in a new Availability Zone in the shortest time possible. All applications using the volume can be paused for a period of a few minutes with no discernible user impact. Backup method will best fulfill requirements is Take periodic snapshots of the EBS volume.
    EBS volumes can only be attached to EC2 instances within the same Availability Zone.

  • A security policy allows instances in the Production and Development accounts to write application logs to an Amazon S3 bucket belonging to the Security team's account. Only the Security team should be allowed to delete logs from the S3 bucket.
    Using the 'myAppRole' EC2 role, the production and development teams report that the application servers are not able to write to the S3 bucket.
    Production Account: 11
    Dev Account: 22
    Security Account: 33
      "Version": "2013-11-18"
      "Statement": [ [
        "Effect": "Allow",
        "Principal": [ {
          "AWS": [
            "arn: aws:iam: : 11: role/myAppRole"
            "arn: aws:iam: : 22: role/myAppRole"
        "Action": [
          "s3: *"
        "Resource": [
        "Condition" {
          "StringNotLike": {
            "aws: userID": [
    To allow the application logs to be written to the S3 bucket should Update the Action for the Allow policy from "s3.*" to "s3: PutObject".

  • A user has set an alarm for CPU utilization > 50%. Due to an internal process, the current CPU utilization will be 80% for 6 hours. The user can ensure that the CloudWatch alarm does not perform any action by disabling the alarm actions using the DisableAlarmActions API or the mon-disable-alarm-actions command.
    The user can enable the CloudWatch alarm using the EnableAlarmActions API or mon-enable-alarm-actions command.

  • An organization has developed a new memory-intensive application that is deployed to a large Amazon EC2 Linux fleet. There is concern about potential memory exhaustion, so the Development team wants to monitor memory usage by using Amazon CloudWatch. The MOST efficient way to accomplish this goal is Monitor memory by using a script within the instance, and send it to CloudWatch as a custom metric.
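A sketch of such a script's core logic: it parses /proc/meminfo-style text and computes the used-memory percentage that the script would publish via PutMetricData. The function name and the reliance on the MemAvailable field are assumptions:

```python
def mem_used_percent(meminfo_text):
    # Parse "/proc/meminfo"-style "Key:   value kB" lines.
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key.strip()] = int(rest.split()[0])  # value in kB
    total = fields["MemTotal"]
    available = fields["MemAvailable"]
    # Used memory as a percentage: the value sent as the custom metric.
    return round(100.0 * (total - available) / total, 1)
```

The script would run on a schedule (e.g. cron) and push the returned value to CloudWatch as a custom metric.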















  • A sysops admin created an AWS Lambda function within a VPC with no access to the Internet. The Lambda function pulls messages from an Amazon SQS queue and stores them in an Amazon RDS instance in the same VPC. After executing the Lambda function, the data is not showing up on the RDS instance. The possible causes for this are:
    • A VPC endpoint has not been created for Amazon SQS
    • The RDS security group is not allowing connections from the Lambda function.
The inbound security group for RDS needs to allow the Lambda function's attached ENI to access MySQL/RDS on port 3306.
AWS PrivateLink (interface endpoints) enables private access to services such as Amazon SQS without an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection.
  • It is not possible to change the size of a VPC once it has been created; once the user has created a VPC, its CIDR cannot be changed. The user has to terminate all the instances, delete the subnets, and then delete the VPC. Then create a new VPC with a larger size and launch instances into the newly created VPC and subnets.

  • In regard to AWS CloudFormation, to pass values to a template at runtime, use parameters; they can be dereferenced in the Resources and Outputs sections of the template.
    Optional parameters are listed in the Parameters section.
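A minimal illustrative template showing a parameter declared in Parameters and dereferenced with Ref in Resources; the parameter name, default value, and AMI ID are placeholders:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "InstanceTypeParam": {
      "Type": "String",
      "Default": "t2.micro",
      "Description": "EC2 instance type supplied at stack creation time"
    }
  },
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "InstanceType": { "Ref": "InstanceTypeParam" },
        "ImageId": "ami-12345678"
      }
    }
  }
}
```

Passing a different value for InstanceTypeParam at stack creation time changes the launched instance type without editing the template.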

  • A user has a refrigerator plant and measures the temperature of the plant every 15 minutes. If the user wants to send the data to CloudWatch to view it visually, the user needs to use the AWS CLI or API to upload the data.
    AWS CloudWatch supports custom metrics. The user can always capture custom data and upload it to CloudWatch using the CLI or APIs. While sending the data, the user has to include the metric name, namespace, and timestamp as part of the request.

  • A SysOps admin is evaluating Amazon Route 53 DNS options to address concerns about high availability for an on-premises website. The website consists of two servers: a primary active server and a secondary passive server. Route 53 should route traffic to the primary server if the associated health check returns 2xx or 3xx HTTP codes; all other traffic should be directed to the secondary passive server. The failover record type, set ID, and routing policy have been set appropriately for both servers. The next step to configure Route 53 is to create an alias record for each server with Evaluate Target Health set to Yes, and associate the records with the Route 53 HTTP health check.
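The routing decision described above reduces to simple status-code logic; a sketch with illustrative function names:

```python
def is_healthy(status_code):
    # Route 53 HTTP health checks treat 2xx and 3xx responses as healthy.
    return 200 <= status_code < 400

def route_target(primary_status_code):
    # Failover routing: primary while its health check passes,
    # otherwise the secondary passive server.
    return "primary" if is_healthy(primary_status_code) else "secondary"
```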

  • A company has a new requirement stating that all resources in AWS must be tagged according to a set policy. To enforce and continually identify all resources that are not in compliance with the policy should use AWS Config.
    AWS Config is a service that enables you to assess, audit, and evaluate the configurations of AWS resources. Config continuously monitors and records AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. With Config, you can review changes in configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine overall compliance against the configurations specified in internal guidelines. This simplifies compliance auditing, security analysis, change management, and operational troubleshooting.

  • A sysops admin is managing a VPC network consisting of public and private subnets. Instances in the private subnets access the Internet through a NAT gateway. A recent AWS bill shows that the NAT gateway charges have doubled. The admin wants to identify which instances are creating the most network traffic. This can be accomplished by enabling flow logs on the NAT gateway's elastic network interface and using Amazon CloudWatch Logs Insights to filter the data based on the source IP addresses.

  • A SysOps Admin must monitor a fleet of Amazon EC2 Linux instances with the constraint that no agent be installed. The SysOps Admin chooses Amazon CloudWatch as the monitoring tool. Given the constraints, the CPU Utilization, Disk Read Operations, and Network Packets In metrics can be measured.

  • Multiple load balancers can be configured with an Auto Scaling group. Auto Scaling integrates with Elastic Load Balancing, enabling one or more load balancers to be attached to an existing Auto Scaling group. After attaching the load balancer, it automatically registers the instances in the group and distributes incoming traffic across the instances.

  • AWS CloudWatch can be accessed from the Amazon CloudWatch Console, CloudWatch API, AWS CLI and AWS SDKs.

  • In Amazon S3, RRS stands for Reduced Redundancy Storage. RRS stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but it does not replicate objects as many times as Amazon S3 standard storage. In addition, RRS is designed to sustain the loss of data in a single facility.

  • When the user account has reached the maximum number of EC2 instances, it will not be allowed to launch an instance; AWS will throw an 'InstanceLimitExceeded' error. For other reasons, such as 'AMI is missing part', 'Corrupt Snapshot', or 'Volume limit has reached', it will launch an EC2 instance and then terminate it.

  • A user is trying to setup a recurring Auto Scaling process.
    The user has setup one process to scale up every day at 8 am and scale down at 7 pm.
    The user is trying to setup another recurring process which scales up on the 1st of every month at 8 am and scales down the same day at 7 pm.
    Auto Scaling will throw an error since there is a conflict in the schedule of two separate Auto Scaling Processes.
    Auto Scaling based on a schedule allows the user to scale the application in response to predictable load changes. The user can also configure the recurring schedule action which will follow the Linux cron format. As per Auto Scaling, a scheduled action must have a unique time value. If the user attempts to schedule an activity at a time when another existing activity is already scheduled, the call will be rejected with an error message noting the conflict.
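The conflict can be modeled: on the 1st of the month, the daily and monthly actions would fire at the same times, which Auto Scaling rejects because each scheduled action must have a unique time value. A toy check (all names and the simplification to whole hours are illustrative):

```python
from datetime import date

def fire_hours(day):
    # Hours at which each recurring action fires on a given day:
    # daily scale-up at 08:00 / scale-down at 19:00, plus the monthly
    # pair on the 1st of the month.
    hours = [8, 19]
    if day.day == 1:
        hours += [8, 19]
    return hours

def has_conflict(day):
    # A scheduled action must have a unique time value; duplicate
    # hours on the same day model the rejected configuration.
    hours = fire_hours(day)
    return len(hours) != len(set(hours))
```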












  • Code:
      "Id": "IPAllowPolicy",
      "Statement": [
          "Sid": "IPAllow",
          "Action": "s3:*",
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::mybucket/*",
          "Condition": {
            "IpAddress": {
              "aws:SourceIp": ""
            "NotIpAddress": {
              "aws:SourceIp": ""
          "Principal": {
            "AWS": [
    This S3 bucket policy Denies the server with the IP address full access to the "mybucket" bucket.
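The Condition block's effect can be sketched: a request IP must match aws:SourceIp under IpAddress and must NOT match it under NotIpAddress for the Allow to apply. The CIDR values below are made up, since the policy above elides them:

```python
import ipaddress

def allow_applies(source_ip, allowed_cidr, denied_cidr):
    # Models IpAddress (must match) combined with NotIpAddress
    # (must not match) on aws:SourceIp.
    ip = ipaddress.ip_address(source_ip)
    return (ip in ipaddress.ip_network(allowed_cidr)
            and ip not in ipaddress.ip_network(denied_cidr))
```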

  • A company operates a secure website running on an Amazon EC2 instance behind a Classic Load Balancer. An SSL certificate from AWS Certificate Manager is deployed on the load balancer. The company's Marketing team has determined that too many customers using older browsers are experiencing issues with the website, and has asked a SysOps Admin to fix this. The Admin should update the SSL negotiation configuration of the load balancer by creating a custom security policy, ensuring the appropriate ciphers are enabled so that the web application can support those browsers.
    Update the SSL negotiation configuration of the Classic Load Balancer: Elastic Load Balancing provides security policies with predefined SSL negotiation configurations used to negotiate SSL connections between clients and the load balancer. If using the HTTPS/SSL protocol for a listener, one of the predefined security policies or a custom security policy can be used.
    If an HTTPS/SSL listener is created without associating a security policy, Elastic Load Balancing associates the default predefined security policy, ELBSecurityPolicy-2016-08, with the load balancer.
    For an existing load balancer with an SSL negotiation configuration that does not use the latest protocols and ciphers, it is recommended to update the load balancer to use ELBSecurityPolicy-2016-08. If preferred, a custom configuration can be created; it is strongly recommended to test new security policies before updating the load balancer configuration.
    When the SSL negotiation configuration for an HTTPS/SSL listener is updated, the change does not affect requests that were received by a load balancer node and are pending routing to a healthy instance, but the updated configuration will be used with new requests that are received.

  • A user has enabled session stickiness with Elastic Load Balancer (ELB). The user does not want ELB to manage the cookie; instead he wants the application to manage the cookie. When the server instance, which is bound to a cookie, crashes, The session will not be sticky until a new cookie is inserted.
    With ELB, if the admin has enabled a sticky session with application controlled stickiness, the load balancer uses a special cookie generated by the application to associate the session with the original server which handles the request. ELB follows the lifetime of the application-generated cookie corresponding to the cookie name specified in the ELB policy configuration. The load balancer only inserts a new stickiness cookie if the application response includes a new application cookie. The load balancer stickiness cookie does not update with each request. If the application cookie is explicitly removed or expires, the session stops being sticky until a new application cookie is issued.

  • The user can disable the connection draining feature on an existing Elastic Load Balancer (ELB) from EC2 -> ELB console or from CLI.
    The ELB connection draining feature causes the load balancer to stop sending new requests to the back-end instances when the instances are deregistering or become unhealthy, while ensuring that inflight requests continue to be served. The user can enable or disable connection draining from the AWS EC2 console -> ELB or using CLI.

  • A SysOps Admin discovers the organization's tape archival system is no longer functioning in its on-premises data center. To create a virtual tape interface to replace the physical tape system can be used AWS Storage Gateway.

  • By default detailed monitoring is enabled for Auto Scaling. CloudWatch is used to monitor AWS as well as the custom services. It provides either basic or detailed monitoring for the supported AWS products. In basic monitoring, a service sends data points to CloudWatch every five minutes, while in detailed monitoring a service sends data points to CloudWatch every minute. To enable detailed instance monitoring for a new Auto Scaling group, the user does not need to take any extra steps. When the user creates an Auto Scaling launch config as the first step for creating an Auto Scaling group, each launch configuration contains a flag named InstanceMonitoring.Enabled. The default value of this flag is true. Thus, the user does not need to set this flag if he wants detailed monitoring.

  • When creating an Auto Scaling group whose instances need to insert a custom metric into CloudWatch, the best way to authenticate the CloudWatch PUT request is to create an IAM role with the PutMetricData permission and modify the Auto Scaling launch configuration to launch instances with that role.
    Creating an IAM role is always the best practice for giving EC2 instances permission to interact with other AWS services.

  • When an EC2 instance that is backed by an S3-based AMI is terminated, Data is automatically deleted on the root volume.

  • A SysOps Admin supports a legacy application that is hardcoded to a service hostname. The application has recently been moved to AWS. The external DNS records are managed by a third-party provider. The Admin has set up an internal domain for the application and configured the record using Amazon Route 53. To have instances in the same account resolve to the Route 53 record instead of the provider's, the MOST efficient way is to ensure that Domain Name System (DNS) resolution is enabled on the VPC.
    Using DNS with VPC:
    DNS is a standard by which names used on the Internet are resolved to their corresponding IP addresses. A DNS hostname is a name that uniquely and absolutely names a computer; it's composed of a host name and a domain name. DNS servers resolve DNS hostnames to their corresponding IP addresses.
    Public IPv4 addresses enable communication over the Internet, while private IPv4 addresses enable communication within the network of the instance (either EC2-Classic or a VPC).
    AWS provides an Amazon DNS server. To use your own DNS server, create a new set of DHCP options for the VPC.

  • Amazon Elastic Compute Cloud (EC2) and Amazon Elastic Load Balancing (ELB) services should be implemented in multiple Availability Zones for high availability solutions.


  • A company wants to reduce costs on jobs that can be completed at any time. The jobs are currently run using multiple On-Demand Instances, and the jobs take just under 2 hours to complete. If a job fails for any reason, it can be restarted from the beginning. The MOST cost-effective method based on these requirements is Submit a request for a Spot block to be used for job execution.

  • When the user has launched an EC2 instance from an instance store backed AMI and the admin team wants to create an AMI from it, the user needs to set up the AWS AMI tools or the API tools first. Once the tools are set up, the user will need the AWS account ID, the AWS access and secret access keys, and an X.509 certificate with private key credentials.

  • With the threat of ransomware viruses encrypting and holding company data hostage, to protect an Amazon S3 bucket taken action should be Enable Amazon S3 versioning on the bucket.
    With versioning, the idea is that existing versions of data are immutable. Since they cannot change, any modification is going to result in a new version.
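A toy model of the immutability argument: every put creates a new version rather than overwriting, so earlier versions survive a malicious overwrite. The class and method names are illustrative, not an S3 API:

```python
class VersionedBucket:
    # Minimal stand-in for an S3 bucket with versioning enabled.
    def __init__(self):
        self._versions = {}

    def put(self, key, body):
        # A write never mutates older data; it appends a new version.
        self._versions.setdefault(key, []).append(body)

    def get(self, key, version=-1):
        # Default: latest version; older versions stay recoverable.
        return self._versions[key][version]
```

Even after a ransomware-style overwrite of an object, version 0 still holds the original data.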

  • The encryption context is additional authenticated information AWS KMS uses to check for data integrity. When specified for the encryption operation, it must also be specified in the decryption operation or decryption will fail. AWS CodeCommit uses the AWS CodeCommit repository ID for the encryption context. The repository ID can be found by using the get-repository command or by viewing repository details in the AWS CodeCommit console. Search for the AWS CodeCommit repository ID in AWS CloudTrail logs to understand which encryption operations were taken on which key in AWS KMS to encrypt or decrypt data in the AWS CodeCommit repository.

  • A SysOps Admin at an ecommerce company discovers that several 404 errors are being sent to one IP address every minute. The Admin suspects a bot is collecting information about products listed on the company's website. To block this suspected malicious activity should be used AWS WAF.

  • A company has an application that is running on an EC2 instance in one Availability Zone. A SysOps Admin has been tasked with making the application highly available. The Admin created a launch configuration from the running EC2 instance and properly configured a load balancer. To make the application highly available, the Admin should create an Auto Scaling group using the launch configuration across at least 3 Availability Zones with a minimum size of 2, desired capacity of 2, and a maximum of 2. This assures the availability of at least ONE node in case a second node fails.
    An Auto Scaling group could also be created across at least 2 AZs with a minimum size of 1, desired capacity of 1, and a maximum size of 1. This assures availability, but the website will go down while a new node is provisioned in case of failure.

  • Established a Virtual Private Cloud (VPC) peering relationship between VPC 1 and VPC 2. VPC 1 has routes to VPC2, yet hosts in VPC 1 cannot connect to hosts in VPC 2. Possible cause is The subnet route table in VPC2 does not have routes to VPC 1.
    Both VPCs' route tables need to be configured to communicate with each other. The default network ACL allows all traffic by default.

  • Security Groups in VPC operate at the instance level, providing a way to control the incoming and outgoing instance traffic. In contrast, network ACLs operate at the subnet level, providing a way to control the traffic that flows through the subnets of VPC.

  • Can create an S3 bucket accessible only by a certain Identity and Access Management (IAM) user using policies in a CloudFormation template. All these resources can be created using a CloudFormation template.
    With AWS IAM, can create IAM users to control who has access to which resources in AWS account. Can use IAM with AWS CloudFormation to control what AWS CloudFormation actions users can perform, such as view stack templates, create, or delete stacks. In addition to AWS CloudFormation actions, can manage what AWS services and resources are available to each user.

  • A user has launched an EBS backed EC2 instance in the US-EAST-1a region. The user stopped the instance and started it back after 20 days. AWS throws up an 'InsufficientInstanceCapacity' error. The possible reason for this can be AWS does not have sufficient capacity in that availability zone.
    When the user gets an 'InsufficientInstanceCapacity' error while launching or starting an EC2 instance, it means that AWS does not currently have enough available capacity to service the request. If the user is requesting a large number of instances, there might not be enough server capacity to host them. The user can either try again later, specify a smaller number of instances, or change the Availability Zone if launching a fresh instance.
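One common way to "try again later" is a retry with exponential backoff; a hypothetical helper where RuntimeError stands in for the API's InsufficientInstanceCapacity error:

```python
import time

def launch_with_retry(launch_fn, attempts=4, base_delay=1.0):
    # Retry a capacity-constrained launch with exponential backoff.
    # launch_fn raises RuntimeError to model InsufficientInstanceCapacity
    # and returns instance ids on success.
    for attempt in range(attempts):
        try:
            return launch_fn()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # capacity never became available
            time.sleep(base_delay * (2 ** attempt))
```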

  • Amazon CloudFront is a Content Delivery Network (CDN) service. It integrates with other Amazon Web Services to give developers and businesses an easy way to distribute content to end users with low latency, high data transfer speeds, and no minimum usage commitments.

  • A company wants to reduce costs across the entire company after discovering that several AWS accounts were using unauthorized services and incurring extremely high costs. To reduce costs by controlling access to AWS services for all AWS accounts, the company should enable the AWS Budgets service.

  • A user has created a queue named 'myqueue' in the US-East region with AWS SQS. The user's AWS account ID is 123456789012. If the user wants to perform some action on this queue, the queue URL he should use is https://sqs.us-east-1.amazonaws.com/123456789012/myqueue
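SQS queue URLs follow a fixed format, so the URL can be composed from the pieces given in the question; the helper name is illustrative:

```python
def sqs_queue_url(region, account_id, queue_name):
    # SQS queue URLs have the form:
    # https://sqs.<region>.amazonaws.com/<account-id>/<queue-name>
    return f"https://sqs.{region}.amazonaws.com/{account_id}/{queue_name}"
```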

  • A company is deploying a web service to Amazon EC2 instances behind an Elastic Load Balancer. All resources will be defined and created in a single AWS CloudFormation stack using a template. The creation of each EC2 instance will not be considered complete until an initialization script has been run successfully on the EC2 instance. The Elastic Load Balancer cannot be created until all EC2 instances have been created. The CloudFormation resource that will coordinate the Elastic Load Balancer creation in the CloudFormation stack template is an AWS::CloudFormation::WaitCondition, which holds the load balancer's creation until the EC2 instances signal that their initialization scripts have completed.

  • When the AWS Cloud infrastructure experiences an event that may impact an organization, AWS Personal Health Dashboard service can be used to see which of the organization's resources are affected.

  • An application running on Amazon EC2 needs login credentials to access a database. The login credentials are stored in AWS Systems Manager Parameter Store as secure string parameters. The MOST secure way to grant the application access to the credentials is Create an IAM policy for the application and grant the policy permission to read the Systems Manager parameters.

  • An organization wants to move to Cloud. They are looking for a secure encrypted database storage option. AWS EBS encryption functionalities helps them to achieve this.
    AWS EBS supports encryption of the volume while creating new volumes. It also supports creating volumes from existing snapshots, provided the snapshots were created from encrypted volumes. The data at rest, the I/O, as well as all the snapshots of EBS will be encrypted. The encryption occurs on the servers that host the EC2 instances, providing encryption of data as it moves between the EC2 instances and EBS storage. EBS encryption is based on the AES-256 cryptographic algorithm, which is the industry standard.

  • A company's website went down for several hours. The root cause was a full disk on one of the company's Amazon EC2 instances. To prevent this from happening in this future the SysOps Admin should Use the Amazon CloudWatch agent on the EC2 instances to collect disk metrics. Create a CloudWatch alarm to notify the Admin when disk space is running low.


AWS Solution Architect Interview Questions:

  1. What is AWS?
    • AWS stands for Amazon Web Service; it is a collection of remote computing services also known as a cloud computing platform.
    • It's a comprehensive cloud computing platform from Amazon that offers flexible, reliable, scalable, easy-to-use, and cost-effective cloud computing solutions.
    • Enables users to access on-demand computing services like databases, storage, virtual cloud servers, etc.
    • It works on a 'pay-as-you-go' model, meaning there is no need to pay upfront for the services it offers.
  2. What are the key Product Categories of AWS?
    • Analytics/Applications: SQS, SWF, SNS, Kinesis
    • Compute: EC2, Elastic Beanstalk, Volumes, Snapshots, AMI, Lambda
    • Database: RDS, DynamoDB, ElastiCache, Redshift, Multi AZ, Read Replicas
    • HA Architecture: Load Balancers, Auto scaling, CloudFormation
    • Networking and Content Delivery: Route 53, VPC, Cloudfront
    • Security, Identity, and Compliance: Identity Access Management (IAM), Groups, Users, Role, Permission
    • Storage: S3, Buckets, Pricing Tiers, Cross Region Replication, EFS, Glacier, Snow Ball, Storage Gateway
  3. What are the key components of AWS?
    • Elastic Compute Cloud (EC2): On-demand computing resource for hosting applications.
    • Route 53: DNS web service.
    • Simple Storage Service (S3): A widely used object storage service in AWS.
    • Elastic Block Store (EBS): Allows storing constant volumes of data.
    • CloudWatch: Monitors the critical areas of AWS and can set alarms for troubleshooting.
    • Simple Email Service (SES): Send emails with the help of regular SMTP or by restful API call.
  4. Define and explain the three basic types of cloud services and the AWS products that are built based on them?
    • Computing includes EC2, Elastic Beanstalk, Lambda, Auto-Scaling, and Lightsail.
    • Storage includes S3, Glacier, Elastic Block Storage (EBS), Elastic File System.
    • Networking includes VPC, Amazon CloudFront, Route53.
  5. What is the difference in Availability Zone, Region, and Edge Locations?
    • AWS Regions:
      is a kind of framework in which all the available services are provided.
      is a geographical location with a collection of availability zones mapped to physical data centres in that region.
      Example: for an EC2 instance, storage, or a DB, a region is needed in which to build the services.
    • Availability Zone (AZ):
      is a facility that can be somewhere in a country or in a city, it's a logical data center in a region.
      Each zone in a region has redundant and separate power, networking and connectivity to reduce the likelihood of two zones failing simultaneously.
      can consist of several data centres, but if they are close together, they are counted as 1 Availability Zone.
    • Edge Locations:
      are the locations where end user services are provided.
      are the endpoints for AWS used for caching content.
      are more numerous than regions; currently, there are over 150 edge locations.
  6. What is S3?
    • S3 stands for Simple Storage Service.
    • Allows storing any volume of data and retrieving data at any time.
    • Reduces costs significantly by eliminating the need for upfront infrastructure investment.
    • Offers effective scalability, data availability, data protection, and performance.
    • Using this service, can uncover insights from the stored data by analyzing with various analytical tools such as Big Data analytics, Machine Learning (ML), and Artificial Intelligence (AI).
  7. What is AMI?
    • AMI stands for Amazon Machine Image.
    • It provides the necessary information to launch an instance.
    • A single AMI can launch multiple instances with the same configuration, whereas different AMIs are required to launch instances with different configurations.
  8. Define what is auto-scaling?
    • Auto-scaling is a function that provisions and launches new instances whenever there is demand.
    • It automatically increases or decreases resource capacity in line with demand.
    • AWS Auto Scaling monitors applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost.
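The scaling behavior described above can be sketched as a simple threshold policy. This is an illustrative model, not the AWS Auto Scaling API; the thresholds, min/max bounds, and one-instance step are assumptions:

```python
# Illustrative sketch of threshold-based auto scaling (not the AWS API).
# Thresholds, min/max bounds, and the one-instance step are assumptions.

def desired_capacity(current, cpu_percent, scale_out_at=70, scale_in_at=30,
                     minimum=1, maximum=10):
    """Return the new instance count for a simple threshold policy."""
    if cpu_percent > scale_out_at:
        return min(current + 1, maximum)   # scale out under heavy load
    if cpu_percent < scale_in_at:
        return max(current - 1, minimum)   # scale in when mostly idle
    return current                         # steady demand: do nothing
```

For example, `desired_capacity(2, 85)` scales out to 3 instances, while `desired_capacity(2, 10)` scales in to 1.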
  9. Define what is Amazon VPC?
    • Amazon Virtual Private Cloud provides a logically isolated virtual network over which the user has full control.
    • Using this service, a VPC can be designed end-to-end, from resource placement and connectivity to security.
    • Amazon EC2 instances and Amazon Relational Database Service (RDS) instances can be added as needed.
    • Communication with other VPCs, regions, and availability zones in the cloud can also be defined.
  10. What are the steps involved in a CloudFormation Solution?
    1. Create or use an existing CloudFormation template using JSON or YAML format.
    2. Save the code in an S3 bucket, which serves as a repository for the code.
    3. Use AWS CloudFormation to call the bucket and create a stack from the template.
    4. CloudFormation reads the file and understands the services that are called, their order, the relationship between the services, and provisions the services one after the other.
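Step 1 above can look like the following minimal template, authored here as a Python dict and serialized to JSON (the body that would be saved to the S3 bucket in step 2). The logical ID `NotesBucket` and the bucket name are illustrative, not from the notes:

```python
import json

# A minimal CloudFormation template sketch: one S3 bucket resource.
# Logical ID and bucket name are made-up examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Minimal stack: one S3 bucket",
    "Resources": {
        "NotesBucket": {                     # logical ID used inside the stack
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-notes-bucket"},
        }
    },
}

template_json = json.dumps(template, indent=2)  # body you would upload to S3
```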
  11. How do you upgrade or downgrade a system with near-zero downtime?
    Use the following migration steps:
    1. Open EC2 console
    2. Choose Operating System AMI
    3. Launch an instance with the new instance type
    4. Install all the updates.
    5. Install applications.
    6. Test the instance to see if it's working
    7. Once it's deployed in place of the old instance, the system has been upgraded or downgraded with near-zero downtime.
  12. What is the difference between Amazon S3 and EC2?
    • EC2:
      A cloud web service used for hosting applications.
      A huge computing machine that can run either Linux or Windows and can handle applications like PHP, Python, or any database.
    • S3:
      A data storage system where any amount of data can be stored.
      S3 has a REST interface and uses secure HMAC-SHA1 authentication keys.
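The HMAC-SHA1 authentication mentioned above (the legacy S3 Signature Version 2 style) can be sketched as follows. This is a simplified illustration: real requests build a longer StringToSign, and the secret key and string here are made-up examples:

```python
import base64
import hashlib
import hmac

# Simplified sketch of legacy S3-style request signing (Signature V2 idea):
# HMAC-SHA1 over the request's StringToSign, Base64-encoded.
# The key and StringToSign below are made-up examples.

def sign_v2(secret_key, string_to_sign):
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

signature = sign_v2("demo-secret", "GET\n\n\n1175139620\n/demo-bucket/photo.jpg")
```

The resulting signature would accompany the request so the server, holding the same secret, can recompute and compare it.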
  13. What is Amazon SQS?
    • A fully managed message queuing service. Using this service, can send, receive and store any quantity of messages between the applications.
    • This service helps to reduce complexity and eliminate administrative overhead.
    • It provides high protection to messages through the encryption method and delivers them to destinations without losing any message.
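The store-until-consumed behavior above can be sketched with a toy in-memory queue. This is not the real SQS API; it illustrates one core SQS concept not detailed in the notes, the visibility timeout: a received message is hidden from other consumers until it is deleted or the timeout expires:

```python
import time

# Toy in-memory queue (not the SQS API) illustrating visibility timeout.

class ToyQueue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self._messages = {}          # id -> (body, invisible_until)
        self._next_id = 0

    def send(self, body):
        self._next_id += 1
        self._messages[self._next_id] = (body, 0.0)
        return self._next_id

    def receive(self, now=None):
        now = time.time() if now is None else now
        for msg_id, (body, invisible_until) in self._messages.items():
            if invisible_until <= now:
                # hide the message from other consumers for the timeout
                self._messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        self._messages.pop(msg_id, None)   # ack: remove permanently
```

If the consumer crashes before calling `delete`, the message reappears after the timeout, so it is not lost.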


  • A SysOps Admin found that a newly-deployed Amazon EC2 application server is unable to connect to an existing Amazon RDS database. After enabling VPC Flow Logs and confirming that the flow log is active on the console, the log group cannot be located in Amazon CloudWatch. The MOST likely reasons for this situation are:
    • The Admin has waited less than ten minutes for the log group to be created in CloudWatch.
    • No relevant traffic has been sent since the VPC Flow Logs were created.
After creating a flow log, it can take several minutes to begin collecting and publishing data to the chosen destinations. Flow logs do not capture real-time log streams for network interfaces.
  • A company's customers are reporting increased latency while accessing static web content from Amazon S3. A SysOps Admin observed a very high rate of read operations on a particular S3 bucket. To minimize latency by reducing load on the S3 bucket should Create an Amazon CloudFront distribution with the S3 bucket as the origin.

  • A SysOps Admin is maintaining an application running on Amazon EBS-backed Amazon EC2 instances in an Amazon EC2 Auto Scaling group. The application is set to automatically terminate unhealthy instances. The Admin wants to preserve application logs from these instances for future analysis. To accomplish this, the instances should be configured to stream application logs to Amazon CloudWatch Logs (for example, with the CloudWatch agent); VPC Flow Logs capture only network traffic metadata, not application logs.

Alibaba Cloud Associate (ACA):
  • Using a cloud computing service is simple and straightforward. One can choose the instance with desired specification, finish payment and then use it right away. Moreover, the underlying physical machines are managed by cloud service providers and transparent to users.

  • A snapshot is a copy of data on a disk at a certain point in time.

  • Multiple lower-configuration I/O-optimized ECS instances can be used with Server Load Balancer to deliver a high-availability architecture.

  • ECS stands for Elastic Compute Service.

  • Alibaba Cloud does not support intranet communication between products that are not in the same region. This means ECS instances and other products in different regions, such as ApsaraDB for RDS and OSS instances, cannot communicate with each other over the intranet.

  • If running an online ticket booking service with relatively fixed traffic, the Subscription (prepaid) charging mode is more cost-effective than Pay-As-You-Go.

  • The 'Elastic' feature of the ECS product refers to elastic Computing, Storage, and Network.

  • Website has high volume of traffic and sudden spikes for a very short time. In this scenario, Auto Scaling can manage traffic peak efficiently and maintain a consistent user experience.

  • Website has oscillating traffic peaks that are difficult to predict in advance. In this scenario, it is recommended to use SLB and Auto Scaling together with ECS.

  • Server Load Balancer is a ready-to-use service that seamlessly integrates with ECS to manage varying traffic levels without manual intervention.

  • When using Alibaba Cloud SLB, different weights can be set for backend ECS instances. The higher the weight of a backend ECS instance, the more load is assigned to it; the actual share each instance receives depends on the weight settings of all ECS instances.
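The weighted distribution described above can be sketched as a proportional split of requests across backends. Instance names and weights are illustrative, and real SLB scheduling also considers health and algorithm choice:

```python
# Sketch of weight-proportional load distribution (illustrative, not SLB).

def distribute(total_requests, weights):
    """Split a request count across backends proportionally to weight."""
    total_weight = sum(weights.values())
    return {name: total_requests * w // total_weight
            for name, w in weights.items()}

# ecs-a has twice the weight of the others, so it gets half the traffic
shares = distribute(300, {"ecs-a": 100, "ecs-b": 50, "ecs-c": 50})
```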

  • Server Load Balancer can help protect against DDoS attacks.

  • Every user can create more than one OSS bucket; a bucket name cannot be changed after creation; every bucket must have a globally unique name; and there is no limit on the number of objects inside one bucket.

  • A newly launched social portal for exchanging automotive information is based on PHP, including 10GB of images and some JavaScript files. Currently one single ECS instance is used to host all the application content as well as a MySQL database. With the growing number of users, the portal is getting slower to load images or respond to user requests. Moreover, an additional 50GB of image content will be uploaded to the portal every week. The OSS + ECS combination of Alibaba Cloud services can help resolve the storage and performance issues at the same time.

  • The methods supported by OSS to create a bucket are the OSS API/SDK and the cloud console; a bucket cannot be created through ECS instance filesystem directory operations.

  • A large shopping mall deploys a new video surveillance system. The five-floor building in which the mall is located installed 35 HD cameras to monitor the major exits. The surveillance system is deployed on an ECS instance, which has four 2 TB data disks to store video data. However, video data grows rapidly and the engineers find that the ECS will run out of storage within 2 weeks. In this case, OSS product is thought to be the best solution for addressing the storage challenge while ensuring quick access to historical video data when needed.

  • Alibaba Cloud's Object Storage Service (OSS) can be used to store a massive amount of files; there is no maximum number of files in a single bucket.

  • Alibaba Cloud CDN can directly accelerate the access to the files stored in OSS and reduce OSS traffic costs.

  • Auto Scaling can automatically remove unhealthy ECS instances from the scaling group (this feature is called Elastic Self-Health). After removal, an equivalent number of new ECS instances is automatically created and added back to the scaling group; the user does not have to add them manually.

  • OSS can be used along with ECS to store static images and videos, reducing storage fees.

  • ECS disks can be used jointly or separately to meet the requirements of different application scenarios. ECS disks are categorized into ephemeral SSD disks and cloud disks. Compared with ephemeral SSD disks, cloud disks are more reliable: they use a distributed file system that keeps three redundant copies to provide block-level data storage for ECS instances, ensuring 99.9999999% data reliability.

  • The system disk size of an ECS instance can't be changed without changing the instance configuration (instance type).

  • Some applications may encounter large traffic fluctuations within a short period. When ECS is used with Auto Scaling, the number of ECS instances is automatically adjusted based on traffic.

  • A company uses Alibaba Cloud SLB and Auto Scaling at the same time, hoping this combination can help save O&M cost and provide a stable and reliable system. However, because they do not have relevant experience, the company's engineers listed some precautions based on their understanding and submitted the list for advice. As an Alibaba Cloud expert, one should tell them that SLB instances must have Health Check enabled, or they cannot be used together with Auto Scaling.

  • The session persistence feature of Server Load Balancer forwards the access requests from a single user to the same ECS instance within a certain period to ensure session continuity.
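One common way to implement the stickiness described above is to hash a stable client identifier onto a backend, so the same user keeps landing on the same instance. This is a toy sketch, not SLB's actual mechanism (which uses cookies or source IP); instance names are illustrative:

```python
import hashlib

# Toy sticky-session sketch: a stable client id always maps to the
# same backend while the backend list is unchanged.

def pick_backend(client_id, backends):
    digest = hashlib.md5(client_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

backends = ["ecs-1", "ecs-2", "ecs-3"]
first = pick_backend("user-42", backends)
```

Repeated calls with the same `client_id` always return the same backend, which is exactly the session-continuity property.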

  • Alibaba Cloud OSS is a massive, highly available, secure and cost-effective storage service. One of the special characteristics of OSS is its superior data reliability, thanks to its underlying backup technology and policies. Customers still have to safeguard their own data; OSS provides various backup interfaces to facilitate offsite backups.

  • OSS Data Reliability is not less than 99.9999999%. Data is backed up automatically with multiple redundant copies.


  • Not every file upload method provided by the Alibaba Cloud OSS service supports resuming from a breakpoint.
    If uploading small files (<1 MB), the PUT method can finish the upload in one HTTP request.
    For very large files, multipart upload should be used to gain the ability to resume from a breakpoint.
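The multipart idea above can be sketched as follows (not the real OSS SDK): split the payload into fixed-size parts, record which parts completed, and after an interruption upload only the missing ones. The tiny part size is purely for illustration:

```python
# Sketch of multipart upload with resume-from-breakpoint (not the OSS SDK).

PART_SIZE = 4  # bytes; deliberately tiny for the example

def split_parts(data, part_size=PART_SIZE):
    """Split a payload into fixed-size parts."""
    return [data[i:i + part_size] for i in range(0, len(data), part_size)]

def resume_upload(parts, completed):
    """Return the part numbers still to upload after an interruption."""
    return [n for n in range(len(parts)) if n not in completed]

parts = split_parts(b"0123456789")                 # 3 parts: 4 + 4 + 2 bytes
remaining = resume_upload(parts, completed={0, 2}) # only part 1 is left
```

A single PUT, by contrast, would have to restart the whole 10 bytes from scratch.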

  • RDS read-only instances are used to reduce the read pressure on the RDS primary instance.

  • SQL injection is a common application-layer attack, usually carried out by crafting special input parameters and passing them to web applications to steal or sabotage application data. The database is the target of SQL injection.

  • A company that wants to use an Alibaba Cloud service to replace its self-built database should use ApsaraDB for RDS.

  • RDS does not support Oracle databases.

  • RDS:
    • Features a high availability of 99.95%, while a self-built DB requires implementing data protection, primary-standby replication, and RAID all by yourself.
    • Provides automatic backup, while a self-built DB requires preparing storage space for backup copies and regularly verifying that these copies can be restored.
    • Supports quick deployment and elastic scaling.
    • Requires no O&M, while a self-built DB requires a dedicated DBA for maintenance, which results in high HR cost.
  • Auto Scaling is a management service that can automatically adjust elastic computing resources based on business needs and policies. Alibaba Cloud Auto Scaling supports Elastic Scale-Out, Elastic Scale-In, and Elastic Self-Health.

  • A company builds a music download website based on OSS and ECS, and users can download mp3 files after registering on the website. Recently, the public network traffic to OSS has doubled but the increase in registered users is less than 10%. After in-depth analysis, engineers find that many download requests come from search engines rather than the website itself. To address this issue, access sources should be limited by configuring the 'Anti-leech settings' of the OSS bucket attributes.
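Anti-leech settings of the kind described above are typically a Referer whitelist check. The following is a toy sketch of that logic, not OSS's implementation; the allowed domain is illustrative:

```python
# Toy sketch of a Referer-based anti-leech check (illustrative, not OSS).

ALLOWED_REFERERS = ("https://music.example.com",)

def allow_download(referer, allow_empty=False):
    """Permit the download only if the Referer matches the whitelist."""
    if not referer:
        # OSS lets the bucket owner choose whether empty Referers pass
        return allow_empty
    return referer.startswith(ALLOWED_REFERERS)
```

Requests arriving via a search engine carry a foreign Referer and are rejected, which is how the hotlinked traffic is cut.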

  • Multiple databases can be created in one RDS instance.


  • By default, Amazon EC2 metric data is automatically sent to CloudWatch in 5-minute periods. However, detailed monitoring can be enabled on an Amazon EC2 instance, which sends data to CloudWatch in 1-minute periods.
  • AWS Auto Scaling and Simple Notification Service (SNS) work in conjunction with CloudWatch. CloudWatch can send alerts to the AS policy or to the SNS end points.
  • Auto Scaling can perform Scheduled Actions, replace unhealthy instances, balance across Availability Zones, and terminate instances.
  • Amazon Simple Workflow Service (SWF) provides the glue needed by applications to coordinate several tasks. These tasks are handled by several instances, with SWF coordinating aspects like the dependencies between them.
  • A security audit revealed that the security groups in a VPC have ports 22 and 3389 open to all, introducing a possible threat that instances can be stopped or configurations can be modified. A SysOps Admin needs to automate remediation. To meet these requirements, the SysOps Admin should define an AWS Config rule and remediation action with AWS Systems Manager Automation documents.

AWS Discovery Day - Strategies and Tools to Perform Large-Scale Migrations:

Drivers pushing each organization to migrate to the Cloud:
  1. Agility: staff become 30 - 70% more productive because the Cloud Provider takes over part of the work
  2. Mergers and acquisitions: consolidating Data Centers
  3. Wanting to adopt new Technology and innovation to gain an advantage over competitors
  4. Reducing expenses and costs > increasing profit

Obstacles to moving to the Cloud:
  1. Just bought a new set of Servers
  2. Fear of upfront costs
  3. Doubts from Stakeholders
  4. Applications are highly interconnected, making them hard to move
  5. Fear of Downtime during the Migration
  6. Lack of Cloud knowledge

Factors for a successful move to the Cloud:
  1. It benefits the business, and organizational leaders drive it
  2. Make members of the organization see the benefits of the Cloud
  3. Update organizational policies to align
  4. Migrate small things first > repeat > measure > improve
  5. Cloud Center of Excellence (CCoE) - build a Cloud advisory Team
  6. Publicize and teach correct Cloud usage, including the caveats
  7. Trial usage > repeat > measure > improve

Migration Process:
  1. Prepare for the Migration & plan the business to support the Migration
  2. Survey which Services and Applications exist, and plan the move
  3. Perform the Migration; design new Applications to run on the Cloud from the start
  4. Operate, Monitoring

Stages of adoption:


  1. Project stage: evaluate whether the Cloud is a suitable option
  2. Foundation: lay out Security Compliance, build a Framework with expert consultation, run Cloud training, move non-critical Applications
  3. Migration: commit to the Cloud for the long term, define IT roles (CCoE), move the remaining Applications to the Cloud
  4. Reinvention: design new Projects to run on the Cloud from the start

Migration Readiness and Assessment (MRA) - assessing readiness for a Cloud Migration:
  • 1-day Workshop
  • Jointly answer about 69 questions based on the AWS Cloud Adoption Framework (CAF)

CAF is a framework that helps ensure every perspective of a Cloud Migration is covered:
  1. The business perspective
  2. People
  3. Management
  4. Platform: what it looks like, how it is designed
  5. Security
  6. Operations

Why do an MRA?:

  1. Reveals where the organization currently stands
  2. What capabilities it has
  3. Raises awareness of existing weaknesses
  4. Produces an action plan to build capability
For the Partner:
  1. Learn about the customer
  2. Mutual understanding between customer and Partner
  3. Understand the politics within the organization
  4. Learn how the customer works
  5. Leads to better recommendations
  1. Fewer obstacles
  2. A method for developing and improving our organization

What are the steps of an MRA?:
  1. Determine interest and sufficient eligibility for the Migration Acceleration Program (MAP)
    MAP is a comprehensive program, already run with many companies, that enables a correct and fast move to the Cloud
  2. 1-day Workshop/Meeting
  3. Discuss using the MRA Survey
  4. Analyze the results and plan the next steps


Migration Readiness & Planning (MRP) - preparation and planning:

Discovery & Planning:
  1. Look into the Datacenter to see which Applications are running
  2. Group Apps with similar usage together
  3. Design which Services will be used once on the Cloud
  4. Prepare to migrate

Discovery - Portfolio data gathering:
  1. Application: who owns it, how critical it is, how it behaves, what the Stack or Infrastructure looks like
  2. Server: Physical/virtual, which OS version, what Spec (CPU, RAM, Disk), current utilization
  3. Network: what Routing exists and how, how things are connected, which Firewall Rules are set
  4. Storage: which types, total capacity, how much is used

  1. Apps, Servers, Connections: how many there are and which Service they will move to
  2. Determines the required Spec (Right-size target)
  3. Naming each Service allows grouping by owner
Cr: Trainocate :cool:


7-R Application Migration Strategies:



  1. Relocate: move as-is; this is the fastest. E.g., workloads on VMware move to VMware Cloud on AWS, or Containers
  2. Rehosting (Lift & Shift): move the App, either Automated or Manual
  3. Replatforming: adopt some Managed Services, e.g., RDS, Elastic Beanstalk
  4. Repurchasing: switch to another SaaS instead, e.g., Workday; CRM > Salesforce
  5. Refactoring: Migrate from Monolithic to Microservices or Serverless; worthwhile, but takes a long time
  6. Retain: better not to move, if migrating would cause too many problems
  7. Retire: wait to decommission
Cr: Trainocate

Alibaba Cloud Professional (ACP) Cloud Computing:

  • When an Alibaba Cloud VPC is created, a VRouter and a route table are created automatically.
    • Each VRouter may have multiple route tables.
    • This route table cannot be deleted.
    • The routing entries of this route table cannot be modified manually.
  • Alibaba Cloud SLB can distribute user requests to backend ECS instances regardless of their specifications.
    When configuring SLB, the backend ECS instances to which it should forward traffic can be specified. These ECS instances can have different specifications and can be located in different availability zones within the same region.

  • Administrator (root) privileges are required to manually install Alibaba Cloud Security Center on the server.

  • Alibaba Cloud Elastic Compute Service (ECS) instances in different Security Groups are not necessarily unable to communicate with each other; communication is possible if the security group rules authorize it.

  • Alibaba Cloud Content Delivery Network (CDN) is a distributed network that is built and overlaid on the bearer network. It is composed of edge node server clusters distributed across different regions, and replaces the traditional data transmission mode centered on Web servers. When using Alibaba Cloud CDN, a user's request first reaches an edge node, which then fetches data from the origin site by means of back-to-origin. The admin can still obtain the visitor's real IP on the origin site.
    • The visitor's real IP is saved in the 'X-Forwarded-For' header of the HTTP protocol. It can be directly obtained in the user-defined log format of Apache and Nginx.
    • In Windows, if IIS is used: after installing the 'F5XForwardedFor' extension module, the visitor's real IP can be seen in the log.
  • Company A constructed a sales management platform using three ECS instances. One of the instances runs MySQL and is used as the database server. The other two instances are used as Web servers. After some time, the number of employees in Company A dramatically increased, leading to higher sales volumes. At the same time, the platform's response speed is gradually decreasing.
    According to the report from CloudMonitor, the average CPU utilization rate of the two Web servers exceeds 70%, and the database load reaches 75%. To cope with the issue and optimize performance, company A can:
    • Incorporate Server Load Balancer (SLB) and add additional ECS instances to relieve the load on existing ECS instances and
    • Replace the self-built MySQL database with ApsaraDB for RDS to obtain better database performance, and utilize RDS read-only instances to handle read-only requests.
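The read-only-instance idea above amounts to read/write splitting: writes go to the RDS primary, reads are spread across replicas. A minimal sketch, with illustrative endpoint names (real applications usually do this in a driver or proxy layer):

```python
import random

# Sketch of read/write splitting across an RDS primary and read replicas.
# Endpoint names are made-up examples.

PRIMARY = "rds-primary.example.internal"
READ_REPLICAS = ["rds-ro-1.example.internal", "rds-ro-2.example.internal"]

def route(sql):
    """Pick an endpoint: SELECTs go to a replica, everything else to primary."""
    if sql.lstrip().upper().startswith("SELECT"):
        return random.choice(READ_REPLICAS)   # spread read pressure
    return PRIMARY                            # writes must hit the primary
```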
  • Many cloud computing service providers allow users to activate/create a cloud service through an Open API, such as HTTP, RESTful, and Web Service interfaces.

  • To build a secure and isolated network environment on Alibaba Cloud, while designing the network topology and specifying intranet IP addresses or CIDR blocks in this network environment as needed, choose Virtual Private Cloud (VPC).

  • Different Alibaba Cloud VPCs are completely isolated from each other. By default, the VPCs cannot communicate with each other over Intranet, but can establish VPN connections via the Internet to achieve interconnection between VPCs.

  • All RDS for MySQL backups are full backups.

  • Auto Scaling can automatically adjust the number of ECS instances based on user-defined scaling rules to meet service needs. If a user cannot predict service changes or does not have enough history data, he/she can still use dynamic scaling mode to automatically add/remove ECS instances based on certain CloudMonitor performance metrics (such as the CPU utilization rate).

  • Many Alibaba Cloud services provide highly reliable data storage. For example, OSS promises that its data reliability is no less than 99.9999999%. This high data reliability is achieved by keeping multiple redundant copies in a distributed storage system, not solely by RAID 0+1 redundancy technology.

  • When using Alibaba Cloud SLB, users can enable the health check function. If a backend ECS instance A is running abnormally, SLB will isolate it and forward the requests to other ECS instances, and when the backend ECS instance A is back to normal, SLB will again forward requests to it.
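The isolate-and-rejoin behavior above can be sketched as follows. The threshold of three consecutive failed probes is an illustrative assumption, not SLB's fixed value:

```python
# Toy sketch of health-check isolation: a backend whose last few probes
# all failed is removed from rotation; it rejoins once a probe passes.

FAIL_THRESHOLD = 3  # consecutive failures before isolation (assumed)

def healthy_backends(probe_history):
    """probe_history: {backend: list of probe results, True = pass}."""
    alive = []
    for backend, results in probe_history.items():
        recent = results[-FAIL_THRESHOLD:]
        # isolate only if the last FAIL_THRESHOLD probes all failed
        if not (len(recent) == FAIL_THRESHOLD and not any(recent)):
            alive.append(backend)
    return alive
```

A backend that fails three probes and then passes one is immediately eligible for traffic again, matching the "back to normal" behavior in the note.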

  • When creating an ECS instance in an Alibaba Cloud VPC, a VSwitch must be specified for that instance at the same time; otherwise, the ECS instance cannot be created.

  • RDS provides whitelist access policies. Permitted IP addresses and IP network segments can be set to effectively prevent hackers from attacking the server through port scanning.

  • When using Alibaba Cloud SLB to forward layer-7 (HTTP) service requests, SLB replaces the IP address in the HTTP header when forwarding requests. Therefore the source IP address seen on the backend ECS instance is the IP address of the SLB instead of the client's real IP address.

  • Build a Relational Database Using RDS
    How to achieve database synchronization
    Data Backup and Recovery Using RDS

  • Alibaba Cloud ECS instances can be chosen in multiple different regions based on the distribution of customer communities, which improves cross-region disaster tolerance of the business while meeting customers' access speed demands.
    • The region of an ECS instance cannot be changed after purchase.
    • It is advised to select the same region if the purchased ECS instance needs to be used in combination with other Alibaba Cloud products and kept interconnected via the intranet.
    • There can be multiple zones within the same region.
  • A challenging characteristic of the game industry is the unpredictable and fluctuating business traffic. For example, today one ECS instance is enough to handle all business demands, but tomorrow ten ECS instances may be required. In these scenarios, it is difficult to predict the number of ECS instances required - with too few ECS instances it is hard to cope with business peaks, whereas preparing too many ECS instances incurs unnecessary cost for unused or under-utilized instances.
    On Alibaba Cloud, the best combination of products to cope with these game scenarios is Server Load Balancer + ECS + ApsaraDB for RDS + Auto Scaling.

  • An e-commerce website uses the combination of Alibaba Cloud Server Load Balancer instance and backend ECS instance for its architecture. A user initiates a query for a product, and the request returns the product description and image.
    If want to forward image requests to a specific image service for processing, and text requests to a specific text service for processing, should use Layer-7 Server Load Balancer service.


Let's build a Social Media App on AWS:

  1. First, create a Private Network with Virtual Private Cloud (VPC) for security

  2. A Web Server is needed: Elastic Compute Cloud (EC2) with Elastic Block Storage (EBS) and a Public IP so Users can Access it (Front-end)
    Next, for a Login system, running Code, or anything else, another EC2+EBS set can be added as an Application Server, serving as the Back-end

  3. To get a Three-tier Architecture with a Relational Database, add RDS

  4. The Cloud provides Auto Horizontal Scaling via Auto Scaling Groups (ASG), so the number of machines can grow and shrink with actual usage, solving bottlenecks

  5. Now there are multiple IPs, so it's time to distribute the Incoming Traffic Load across the Web Servers with an Elastic Load Balancer (ELB)

  6. Looking good, but wait: Users still have to remember the Web's IP; human language is easier, so use Route53 as the Domain Name System (DNS) and that's settled

  7. Over time, as people/Connections increase, the Relational DB starts to struggle, so a NoSQL DB is needed to help store Connection Information: DynamoDB

AWS Lift & Shift Migration:

AWS Application Migration Service (MGN) is AWS's Lift & Shift offering, available through the AWS Management Console. It is a flexible, reliable, and automated Lift & Shift solution that helps reduce the complexity, time, and cost of moving Applications to AWS, whether from Physical, Virtual, or Cloud Servers, without Compatibility problems, performance disruption, or long Cutover Windows. It can migrate Applications and databases from source Infrastructure running supported operating systems, including common databases such as Oracle and SQL Server, business-critical Applications such as SAP, and in-house developed Applications.

On the left is the source Environment, which can be Physical Servers, Virtual Servers, or a Cloud system. In this example, the source Environment has two Servers, with 2 Disks attached to the upper Server and 3 Disks attached to the lower Server. On the right is the AWS Region the Servers will be migrated to. In this example, the Subnets are already defined.
First, install the AWS Replication Agent on the source Servers. The Agent can be installed without a Reboot. During installation, the Agent connects to the AWS MGN API with a Handshake encrypted with TLS 1.3, registers itself with the Service, and automatically provisions the Staging Area Subnet resources.

  1. Create an IAM user
  2. Install the AWS Replication Agent
  3. Configure the launch settings
  4. Launch a test instance

  • Measuring the success of organizational change:

AWSome Day Online Conference:
  1. Attendance Review : Module 1:
  2. Module 2:
  3. Module 3:
  4. Module 4:
  5. Module 5:
  6. Summary of the re:Invent 2020 exhibition:
  7. Getting to know Amazon SageMaker Studio, the basics:
  8. How to Deploy a Flask Application on AWS using Elastic Beanstalk:
  9. Let's Deploy Node JS on AWS Lambda with a ZIP File:
    Amazon CloudFront:
  10. The Edge Location in Thailand is now live:
  11. How to display a Website built from EC2:
  12. Setting passwords with Functions:
  13. How to change the Cache settings for each File extension:
    How To Static Website:
  14. Introduction before getting started:
  15. Storing Website data on Amazon S3 to be served through Amazon CloudFront:
  16. Easily change the Domain Name (Website name) with Amazon Route 53 and AWS Certificate Manager (ACM), including how to create SSL:
  17. Setting up Free SSL with DNS in CloudFront using ACM with Route 53:
  18. How to Clear CloudFront Cache Files:
    Amazon EC2:
  19. Knowledge cram session Lesson 2: Compute in the cloud:
  20. How to install Amazon Linux and connect to the Server with the PuTTy Program:
  21. How to associate an Elastic IP (EIP):
  22. How to fix Security Group connection problems:
  23. How to set the Time Zone in Amazon Linux 2:
  24. How to create Swap Memory in Amazon Linux 2:
  25. How to Install PHP 8.0 and Apache in Amazon Linux 2:
  26. How to Upload Files with WinSCP to the Website Server:
  27. How to add Memory:
  28. How to add Storage:
  29. Reset the Administrator Password on Windows Server 2019 with AWS EC2 Rescue:
  30. Enabling Termination Protection:
  31. Installing MySQL (MariaDB) and creating a Database in Amazon Linux 2:
  32. Installing WordPress in Amazon Linux 2:
  33. Let's connect Lambda with API Gateway:
  • Reference architecture of a Serverless Data Analytics Pipeline on AWS:


ACP Cloud Computing:

  • Mike is an architect at a social networking website which has a small user base during the initial startup phase. The images uploaded by each registered customer are directly stored on an Alibaba Cloud ECS instance. However, the user base rapidly expanded recently and there are now 3.5 TB of stored images, with the web servers scaling from the original one ECS instance to five. The performance issue has been addressed, but images stored on the ECS instances aren't available for reading or writing across the ECS instances.
    Alibaba Cloud's OSS product is very suitable for solving this problem.

  • When a customer uses the Alibaba Cloud OSS service and finds a large amount of Internet downstream traffic, he/she can use the Alibaba Cloud CDN service to reduce the traffic cost.
    This is because the Internet traffic cost of CDN is lower than that of OSS; moreover, the back-to-origin traffic cost from CDN to OSS is also lower than users accessing OSS directly.

  • ECS is the Alibaba Cloud product that can work with Server Guard.

  • Many combination samples of Alibaba Cloud CDN with other cloud products and business scenarios are provided for reference on the Alibaba Cloud official website.
    Alibaba Cloud CDN is a suitable option for the following scenarios:
    • Purely static sites with more than 100,000 page views daily on Alibaba Cloud virtual host.
    • Images, HTML, CSS and JS files on a medium-sized e-commerce website.
    • News portal websites with more than 30 million page views daily and users all around the country.
  • MapReduce is a programming model for parallel operations on large-scale datasets. It combines well with cloud computing to handle massive data calculations.
    The design objectives of MapReduce are: easy to program, easy to scale, and highly fault tolerant.
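The model above is easiest to see in the classic word-count example, sketched here as plain Python: map emits (word, 1) pairs, shuffle groups them by key, and reduce sums each group:

```python
from collections import defaultdict

# Classic MapReduce word count, sketched in plain Python.

def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word, 1)          # emit a (key, value) pair per word

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)    # group values by key
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

counts = reduce_phase(shuffle(map_phase(["big data big", "cloud data"])))
```

In a real framework the map and reduce calls run in parallel across many machines; only the shuffle needs coordination, which is what makes the model easy to scale.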

  • A video company uses a Server Load Balancer to distribute user access requests to 20 ECS instances with the same configuration, which respond to user requests. The Spring Festival is approaching, and the business volume during the holiday is expected to double based on past experience. Alibaba Cloud Auto Scaling can be used to cope with the elastic changes of resources. Since the changes in business volume can be accurately predicted, a variety of scaling modes can be chosen for implementation. The solutions are:
    • Scheduled task: Increase the number of ECS instances to 40 on the first day of the holiday, and reduce the number to 20 after the holiday.
    • CloudMonitor alerting task: Increase the number of ECS instances dynamically when resources run low by monitoring the CPU and load among others and reduce the number of ECS instances when resources go idle.
    • Fixed quantity mode: Set the minimal number of instances in a scaling group to 40 from the first day of the holiday, and tune down the parameter to 20 after the holiday.
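The scheduled-task mode above amounts to a simple capacity function of the calendar. The sketch below mirrors the scenario's numbers (20 baseline, 40 during the holiday); the dates are hypothetical placeholders, and the function is illustrative, not an Auto Scaling API call.

```python
# Sketch of the scheduled-task scaling mode: double capacity for the
# holiday window, then return to the baseline. Dates are hypothetical.
from datetime import date

BASELINE = 20
HOLIDAY_CAPACITY = 40
HOLIDAY_START = date(2024, 2, 10)  # hypothetical first day of the holiday
HOLIDAY_END = date(2024, 2, 17)    # hypothetical last day of the holiday

def desired_instance_count(today):
    """Return the ECS instance count the scaling group should hold today."""
    if HOLIDAY_START <= today <= HOLIDAY_END:
        return HOLIDAY_CAPACITY
    return BASELINE
```

The alerting and fixed-quantity modes differ only in the trigger: the first replaces the date check with a metric threshold, the second with a manually adjusted minimum.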
  • Alibaba Cloud OSS is a cloud storage service that features massive capacity, outstanding security, low cost, and high reliability. To store files in OSS, you must first create a bucket, then upload and manage files in that bucket.
    • Data transfers to buckets are made over SSL and can be encrypted.
    • Each user can have multiple buckets
    • Bucket names must be globally unique throughout the OSS and cannot be changed once created.
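Since bucket names are fixed once created, it is worth validating them up front. To my understanding, OSS bucket names may contain only lowercase letters, digits, and hyphens, must start and end with a lowercase letter or digit, and must be 3 to 63 characters long; verify the current rules in the official OSS documentation before relying on this sketch.

```python
# Sketch of an OSS-style bucket-name check (rules as understood above;
# confirm against the official OSS naming documentation).
import re

BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name):
    """Return True if the name satisfies the assumed OSS naming rules."""
    return bool(BUCKET_NAME_RE.fullmatch(name))
```

Note this checks the format only; global uniqueness can be confirmed solely by the OSS service at creation time.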
  • Alibaba Cloud Anti-DDoS Basic can defend against SYN, UDP, ACK, and ICMP flood attacks.

  • Cloud computing services face security threats that affect their availability, integrity, and confidentiality. Large-scale DDoS and ChallengeCollapsar (CC) attacks directly affect the availability of cloud computing services, while webshell implantation affects their confidentiality and integrity.

  • By default, Alibaba Cloud RDS for MySQL listens on port 3306, the standard port for MySQL databases. If connecting to the database from a remote machine, you will need to ensure that your firewall allows traffic through this port.

  • RDS features high availability of 99.95% as a fully managed database service, while a self-built database requires you to implement data protection, primary-standby replication, and RAID all by yourself.

  • RDS provides automatic backups, while a self-built database requires you to prepare storage space for backup copies and regularly verify that these copies can be restored.

  • RDS requires no Operations & Maintenance (O&M), while a self-built database requires a dedicated Database Administrator (DBA) for maintenance, which results in high HR costs.

  • Company B runs a mobile App store. In its venture stage, all software packages were stored on the data disks of its ECS instances.
    Starting from last month, the company launched a number of campaigns, resulting in a 200-fold increase in download traffic, and the user coverage extended from a single country to the whole globe.
    To save storage and bandwidth costs while improving user download speed, Company B should select the Alibaba Cloud OSS + CDN services.

  • Compared with a traditional manufacturer's CDN, Alibaba Cloud CDN is stable, fast, cost-saving, and easy to use. When it comes to cost saving:
    • Elastic resource scaling is charged only for the resources actually used, and can achieve cross-carrier, cross-region, and network-wide coverage.
    • Use first, pay later. It provides different billing types to satisfy different business needs.
    • The service automatically responds to site traffic spikes and makes proper adjustments without user intervention, reducing the pressure on the origin site.
    • Back-to-source traffic fees are not charged between CDN and an ECS instance. However, it is important to note that the specific pricing and fees for using CDN may vary depending on the CDN product and pricing plan you choose.
  • When using Auto Scaling, you can see two types of ECS instances in a scaling group: instances automatically created by Auto Scaling according to the scaling configuration and scaling rules, and instances that are manually created and added to the group. Both types may be removed from the scaling group by Auto Scaling, but only the automatically created instances will be stopped and released. Manually added ECS instances are not subject to the specification restrictions defined in the scaling configuration.
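The removal semantics above can be made concrete with a tiny data model: both kinds of instances can leave the group, but only the automatically created ones are released. This is an illustrative sketch, not the Auto Scaling service's actual data structures.

```python
# Sketch of scaling-group removal semantics: auto-created instances are
# stopped and released on removal; manually added ones only leave the group.
from dataclasses import dataclass

@dataclass
class EcsInstance:
    instance_id: str
    auto_created: bool   # True: created by Auto Scaling; False: manually added
    released: bool = False

def remove_from_group(group, instance):
    """Remove an instance from the scaling group, releasing it if auto-created."""
    group.remove(instance)           # both kinds can be removed from the group
    if instance.auto_created:
        instance.released = True     # only auto-created instances are released
```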

  • An Alibaba Cloud RDS read-only instance has its data synchronized from the master instance. Backup policy configuration and manual backups can be performed on the read-only instance from the RDS management console.

  • An organization has configured custom metric uploads with CloudWatch and has given its employees permission to upload data using the CLI as well as the SDK. The user can track the calls made to CloudWatch by using CloudTrail to monitor the API calls.
    AWS CloudTrail is a web service that allows the user to monitor the calls made to the Amazon CloudWatch API for the organization's account, including calls made by the AWS Management Console, CLI, and other services. When CloudTrail logging is turned on, CloudTrail writes log files into the Amazon S3 bucket specified during the CloudTrail configuration.

  • An AWS snapshot is a point-in-time backup of an EBS volume. When the snapshot command is executed, it captures the current state of the data written on the drive and takes a backup.
    For a better and consistent snapshot of the root EBS volume, AWS recommends stopping the instance. For additional volumes it is recommended to unmount the device. Snapshots are asynchronous and incremental.
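"Incremental" here means that after the first full snapshot, each subsequent snapshot stores only the blocks that changed since the previous one. The sketch below is a simplified model of that idea, not the actual EBS implementation.

```python
# Simplified model of incremental snapshots: the first snapshot copies every
# block; later snapshots record only blocks that differ from the prior state.

def take_snapshot(volume_blocks, last_state):
    """volume_blocks: current {block_id: data}; last_state: full block state
    at the previous snapshot, or None if this is the first snapshot."""
    if last_state is None:
        return dict(volume_blocks)          # first snapshot: full copy
    return {
        block_id: data
        for block_id, data in volume_blocks.items()
        if last_state.get(block_id) != data  # keep only changed blocks
    }
```

Restoring a volume then means replaying the full snapshot plus each increment in order, which is why individual snapshots stay small even for large volumes.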

  • A company needs to monitor the read and write IOPS metrics for their AWS MySQL RDS instance and send real-time alerts to their operations team. Amazon CloudWatch and Amazon Simple Notification Service (SNS) can accomplish this.
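The core of that setup is a CloudWatch alarm on the RDS ReadIOPS/WriteIOPS metrics whose alarm action publishes to an SNS topic. The sketch below simulates the alarm-evaluation logic locally (threshold breached for N consecutive periods); the threshold and period count are hypothetical, and real alarms are created via the CloudWatch API, not this function.

```python
# Simplified sketch of CloudWatch-style alarm evaluation for a ReadIOPS
# metric: fire when the last N datapoints all exceed the threshold.
# Threshold and evaluation-period values are hypothetical.

READ_IOPS_THRESHOLD = 1000
EVALUATION_PERIODS = 3

def alarm_state(datapoints):
    """Return 'ALARM' if the most recent EVALUATION_PERIODS datapoints
    all breach the threshold; otherwise 'OK'."""
    recent = datapoints[-EVALUATION_PERIODS:]
    if len(recent) == EVALUATION_PERIODS and all(
        value > READ_IOPS_THRESHOLD for value in recent
    ):
        return "ALARM"
    return "OK"
```

In AWS, entering the ALARM state triggers the alarm's action list, where an SNS topic ARN delivers the real-time notification (e.g. email or SMS) to the operations team.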