Cloud Computing



  • Deployed an instance running a web server in a subnet of a VPC. Attempting to connect to the instance over the internet via HTTP in a browser results in a connection timeout. Troubleshoot by:
    • Verifying that the VPC has an Internet Gateway and that the default route points to the Internet Gateway
    • Verifying that the Security Group allows inbound access on port 80
    • Verifying that the Network ACL allows inbound access on port 80
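The checklist above can be expressed as a small diagnostic routine. This is an illustrative sketch, not an AWS API: the VPC, Security Group, and Network ACL dictionaries are invented stand-ins for what you would actually fetch with the AWS CLI or an SDK.

```python
# Hypothetical sketch of the HTTP-timeout checklist; all field names
# are invented, not real AWS SDK attributes.

def diagnose_http_timeout(vpc, security_group, network_acl):
    """Return a list of likely causes for an HTTP connection timeout."""
    problems = []
    if not vpc.get("internet_gateway"):
        problems.append("VPC has no Internet Gateway attached")
    if vpc.get("default_route_target") != "igw":
        problems.append("Default route (0.0.0.0/0) does not point to the IGW")
    if 80 not in security_group.get("inbound_ports", []):
        problems.append("Security Group does not allow inbound port 80")
    if 80 not in network_acl.get("inbound_ports", []):
        problems.append("Network ACL does not allow inbound port 80")
    return problems

# Example: SG and NACL allow port 80, but the route table is wrong.
vpc = {"internet_gateway": True, "default_route_target": "local"}
sg = {"inbound_ports": [80, 443]}
nacl = {"inbound_ports": [80]}
print(diagnose_http_timeout(vpc, sg, nacl))
```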
  • Examples of actions that can be controlled with an IAM policy:
    • Configuring a VPC's Security Groups
    • Creating an RDS for Oracle database
    • Creating an Amazon S3 bucket
  • To create a group of Amazon EC2 instances in an application-tier subnet that accepts traffic from web-tier instances over HTTP only (the instances are in different subnets, and the web tier shares one Security Group), associate each application-tier instance with a Security Group that allows inbound HTTP traffic from the web-tier Security Group
  • A batch job must run every Sunday night, finishes in under 90 minutes, and cannot be rescheduled: use the Scheduled Instance EC2 payment option
  • Asked to make a PDF file publicly available on the web; the file will be downloaded by customers through their browsers millions of times. Store the file in S3 Standard
S3 Websites:
  • S3 can host static websites and have them accessible on the www
  • The website URL will be: http://bucket-name.s3-website-aws-region.amazonaws.com (or http://bucket-name.s3-website.aws-region.amazonaws.com, depending on the region)
  • If you get a 403 (Forbidden) error, make sure the bucket policy allows public reads!
  1. Has two applications: a sender application that sends messages with payloads to be processed and a processing application intended to receive the messages with payloads. The company wants to implement an AWS service to handle messages between the two applications. The sender application can send about 1,000 messages each hour. The messages may take up to 2 days to be processed. If messages fail to process, they must be retained so that they do not impact the processing of the remaining messages. Integrating the sender and processor applications with an Amazon Simple Queue Service (Amazon SQS) queue and configuring a dead-letter queue to collect the messages that fail to process is the MOST operationally efficient solution.
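The SQS dead-letter-queue pattern in item 1 can be sketched in miniature. Real SQS moves a message to the DLQ after maxReceiveCount failed receives (the redrive policy); the in-memory queue below only imitates that behavior and is not the SQS API.

```python
# Toy model of the SQS + dead-letter-queue pattern (in-memory imitation).
from collections import deque

class QueueWithDLQ:
    def __init__(self, max_receive_count=3):
        self.main = deque()
        self.dlq = deque()
        self.max_receive_count = max_receive_count
        self.receive_counts = {}

    def send(self, message):
        self.main.append(message)

    def process(self, handler):
        """Drain the queue; failed messages retry, then land in the DLQ."""
        while self.main:
            msg = self.main.popleft()
            self.receive_counts[msg] = self.receive_counts.get(msg, 0) + 1
            try:
                handler(msg)
            except Exception:
                if self.receive_counts[msg] >= self.max_receive_count:
                    self.dlq.append(msg)   # retained, does not block others
                else:
                    self.main.append(msg)  # retried later

def handler(message):
    if message == "poison":
        raise ValueError("cannot process this payload")

q = QueueWithDLQ(max_receive_count=2)
for m in ["ok-1", "poison", "ok-2"]:
    q.send(m)
q.process(handler)
print(list(q.dlq))  # ['poison'] — the failing message is retained in the DLQ
```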
  2. Recently signed a contract with an AWS Managed Service Provider (MSP) Partner for help with an application migration initiative. A solutions architect needs to share an Amazon Machine Image (AMI) from an existing AWS account with the MSP Partner's AWS account. The AMI is backed by Amazon Elastic Block Store (Amazon EBS) and uses a customer managed Customer Master Key (CMK) to encrypt EBS volume snapshots. The MOST secure way for the solutions architect to share the AMI with the MSP Partner's AWS account is to modify the AMI's launchPermission property to share the AMI only with the MSP Partner's AWS account, and modify the CMK's key policy to allow the MSP Partner's AWS account to use the key.
  3. Needs a storage solution for an application that runs on a High Performance Computing (HPC) cluster. The cluster is hosted on AWS Fargate for Amazon Elastic Container Service (Amazon ECS). The company needs a mountable file system that provides concurrent access to files while delivering hundreds of GBps of throughput at sub-millisecond latency. Should create an Amazon FSx for Lustre file system for the application data and create an IAM role that allows Fargate to access the FSx for Lustre file system.
  4. Has an ecommerce application that stores data in an on-premises SQL database. The company has decided to migrate this database to AWS. However, as part of the migration, the company wants a way to attain sub-millisecond responses to common read requests. A solutions architect knows that the increase in speed is paramount and that a small percentage of stale data returned in database reads is acceptable. The solutions architect should recommend building a database cache using Amazon ElastiCache.
  5. Has an application that calls AWS Lambda functions. A recent code review found database credentials stored in the source code. The database credentials need to be removed from the Lambda source code, then securely stored and rotated on an ongoing basis to meet security policy requirements. A solutions architect should recommend storing the password in AWS Secrets Manager and associating the Lambda function with a role that can retrieve the password from Secrets Manager given its secret ID.
  6. A social media company is building a feature for its website. The feature will give users the ability to upload photos. The company expects significant increases in demand during large events and must ensure that the website can handle the upload traffic from users. Generating Amazon S3 presigned URLs in the application and uploading files directly from the user's browser into an S3 bucket is the MOST scalable approach.
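Item 6's presigned URLs can be illustrated conceptually. Real S3 presigned URLs are produced by the SDK using AWS Signature Version 4; the toy version below only shows the core idea, that a secret-keyed signature over the object key and expiry lets a browser upload without holding credentials. Every name and URL here is invented.

```python
# Conceptual sketch of presigned URLs (NOT the real S3 SigV4 scheme).
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical; never shipped to the browser

def presign(key, expires_in=3600, now=None):
    expiry = int(now if now is not None else time.time()) + expires_in
    payload = f"{key}:{expiry}".encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://bucket.example/{key}?expires={expiry}&sig={sig}"

def verify(key, expiry, sig, now=None):
    current = int(now if now is not None else time.time())
    payload = f"{key}:{expiry}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return current < int(expiry) and hmac.compare_digest(sig, expected)

url = presign("photos/cat.jpg", expires_in=60, now=1000)
sig = url.split("sig=")[1]
print(verify("photos/cat.jpg", 1060, sig, now=1030))  # True (before expiry)
print(verify("photos/cat.jpg", 1060, sig, now=2000))  # False (expired)
```

The application server never proxies the upload bytes; it only hands out short-lived signed URLs, which is what makes the pattern scale.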
  7. Hosts historical weather records in Amazon S3. The records are downloaded from the company's website by way of a URL that resolves to a domain name. Users all over the world access this content through subscriptions. A third-party provider hosts the company's root domain name, but the company recently migrated some of its services to Amazon Route 53. The company wants to consolidate contracts, reduce latency for users, and reduce costs related to serving the application to subscribers. Should create an A record in a Route 53 hosted zone for the application, create a Route 53 traffic policy for the web application with a geolocation rule, and configure health checks to check the health of the endpoint and route DNS queries to other endpoints if an endpoint is unhealthy.
  • Teaches both theory and practice, not just how to pass the exam
  • Exam-focused: Stéphane Maarek's courses on Udemy
  • A Cloud Guru is not recommended
  • For free learning, use YouTube
  1. Microsoft licensing - Volume, CAL, SPLA explained simply, understandable in one read:
  2. Windows Server licensing made simple in 5 minutes:
  3. Choosing a SQL Server license the easy way, saving hundreds of thousands!!:
  4. How do Cloud Servers and Virtual Private Servers (VPS) differ?:
  5. What is DNS, and why does it matter to your systems?:
  6. How does Backup differ from Disaster Recovery and High Availability?:
  7. Drones in the 4.0 era and capabilities that may surprise you:
  8. What is cloudDR?:
  9. What is an SLA?:
  10. What is High Availability (HA)?:
  11. AWS vs domestic cloud: hidden costs you might not expect:
  12. What is a Data Center? Tier standards you may not know about!!:
  13. The evolution of IT Infrastructure and Hyper-Converged Infrastructure (HCI):



  1. 3 easy ways to migrate servers to the cloud!:
  2. Terminal Server / Remote Desktop Services (RDS) vs Virtual Desktop Infrastructure (VDI): what's the difference?:
  3. What is the 3 Copies - 2 Media - 1 Offsite backup rule, and is it really necessary?:
  4. Augmented Reality (AR) and Virtual Reality (VR): what are they, and how do they differ?:
  5. IT career paths today:

  6. Data, Capacity, & Software Cost Calculator - a calculator for IT people:
  7. What are Remote Desktop CALs for, and do you need to buy them?:
  8. What are Recovery Point & Time Objectives (RPTO)? Things to know before backing up or building a DR site!:
  9. Ever wondered how Veeam Backup differs from Replication?:
  10. Acronis vs. Veeam: which backup option is better?:
  11. What does a Systems Engineer do?:
  12. Corporate vs. home internet: how to choose?:

    Connect multi-cloud faster and more easily with ECX and VMware SD-WAN:
  13. File-sharing as a Service and its hidden features that support working from home:
  14. Stop the headache of paying for AWS by credit card, with Local Billing:
  15. VMware vSphere Version 7 Key Features:
  16. Structured Query Language (SQL) vs NoSQL: what are they, and which workloads suit each?:
  17. The cloud is not the same everywhere! Where exactly are the differences?:
  18. AWS vs Azure: how do they differ, and which should you choose?:
  19. Meet the 3 giants of the public cloud!:

  20. What is a Data Center? What does Tier 4 look like in Thailand?:
  21. CPU vs GPU: how do they differ, and how should you choose?:
  22. What is MALicious softWARE (Malware), and how do you defend against attacks?:
  23. 3 types of incremental backup every system administrator should know!:
  24. What are containers, how do they differ from VMs, and why are they so popular?:
  25. What is Kubernetes, and why do so many organizations use it today?:
  26. Hybrid and multi-cloud: new options that benefit businesses:
  27. How the cloud helps in the Covid-19 era:
  28. Edge computing: tiny processing systems that are not as small as you think:
  29. Serverless: a modern alternative for using cloud computing:
  30. On-cloud vs on-premises: which is better?:
  31. DEVelopment OPerationS (DevOps) Engineer vs Site Reliability Engineer (SRE): how do they differ?:
  32. What is a Cloud Server?:
  33. How does cloud technology benefit our daily lives?:
  34. How to buy a server that is worth the money!:
  35. Why does VMware use Open Virtualization Format (OVF) and Open Virtual Appliance (OVA) for import/export?:
  36. How to choose an Amazon EC2 payment option that is economical and worthwhile:
  37. What is CI/CD, and how much easier does it make developers' work?:

  38. AWS Container Services:
    • AWS Containers Immersion Day:
    • ECS Workshop for AWS Fargate:
    • ECS Workshop:
    • EKS Workshop:
  39. Elasticsearch - Logstash - Kibana 4 (ELK Stack) Setup Tutorial:
    • How can I connect to my Amazon EC2 instance if I lost my SSH key pair after its initial launch?:
    • wget
  40. AWS Dev Day: Hands-on EKS workshop for K8s security and observability
AWS Base Camp:
  1. An introduction to the AWS Cloud
  2. Getting started on the AWS Cloud
  3. Architecting systems on the AWS Cloud
  4. Securing applications on the cloud
  5. AWS pricing, AWS Support, and architecting on AWS
  6. Installing a web server on Amazon Web Services (AWS)
  7. Installing and using WinSCP + PuTTY with Amazon Web Services (AWS)
  8. Installing Learning Locker on Amazon Web Services (AWS)
  9. Using a Static IP + Dynamic DNS on Amazon EC2
  10. Getting started with EC2, a great service from Amazon (Getting Started with AWS)
  11. Cloud Transformation - changing the organization's mindset for the Digital Thailand era
  12. AWS Summit Singapore Opening Keynote 2018
  13. AWS Security Best Practices
  14. Sunday Insurance: The 1st Full-Stack InsurTech in Southeast Asia Revolutionizing the Industry
  15. Concepts and design of on-premises and AWS VPC environments
  16. Modern application development with container services on the AWS Cloud
  17. AWS Quick Start - Intro
  18. AWS Quick Start - Purpose-built databases - choosing the right tool for each job
  19. AWS Quick Start - Simple cloud backup and disaster recovery on AWS
  20. AWS Quick Start - Building a serverless web application - to support your first 10 million users



Introduction to Cloud Computing:
What is Cloud Computing:
  • The delivery of computing services over the internet by using a pay-as-you-go pricing model.
  • It is a way to rent compute power and storage from someone else's datacenter.
  • When done using them, give them back. You're billed only for what you use.
  • Microsoft Azure Fundamentals

Every business, whatever its type, has its own needs and requirements. This is where cloud computing, being flexible and cost-efficient, offers an answer for every kind of business, whether a startup, a small business, or a large enterprise.
Cloud computing helps a business run smoothly and gives a worthwhile return on investment: it is scalable, highly flexible, keeps pace with technology, is reliable, has global coverage, and is highly secure. This means more time to focus on the important parts of the business and less time spent managing technology.
Cloud services are highly flexible, letting you freely choose how applications are deployed. The cloud deployment model you choose depends on budget, security level, scale, and maintenance requirements.
  • Flexibility is one benefit of using cloud services
  • Suppose there are two types of applications:
    • The first is a legacy application that requires specialized mainframe hardware
    • The other is a new application that can run on commodity hardware
    A hybrid cloud service is the best fit
  • Platform as a Service (PaaS) is suitable for application development when the focus should be on building, testing, and deployment, without worrying about managing the underlying hardware or software
Microsoft covers more services than other cloud providers, with more than 54 regions distributed worldwide. This infrastructure lets applications be deployed closer to users around the world. Azure also has special regions to support government workloads and applications that must be deployed in China, ensuring data security and data residency that meet customers' compliance and disaster recovery requirements.
  • Deploying an app at the region level is the smallest scope
  • If Azure datacenters must have power, cooling, and networking that are independent from the rest of the region, choose a region that supports Availability Zones
  • Application availability is the overall time the system is running and usable
Whether an individual, a small business, a large enterprise, or a student, a subscription is required to use Azure. Typically you can start with a Free subscription to try out Azure; when the trial period expires, convert the Free subscription to Pay-As-You-Go.



AWS API Gateway Overview:
Example: Building a Serverless API:
AWS API Gateway:
  • AWS Lambda + API Gateway: No infrastructure to manage
  • Support for the WebSocket Protocol
  • Handle API versioning (v1, v2...)
  • Handle different environments (dev, test, prod...)
  • Handle security (Authentication and Authorization)
  • Create API keys, handle request throttling
  • Swagger / Open API import to quickly define APIs
  • Transform and validate requests and responses
  • Generate SDK and API specifications
  • Cache API responses
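The request throttling listed above is commonly implemented as a token bucket (API Gateway's rate and burst settings behave this way). A minimal sketch, not AWS's actual implementation; the numbers are invented for illustration:

```python
# Token-bucket throttling sketch: `rate` tokens refill per second,
# bursts up to `burst` requests are allowed, then callers are rejected.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens refilled per second
        self.capacity = burst   # maximum burst size
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # Refill tokens based on elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False            # the caller would get HTTP 429

bucket = TokenBucket(rate=1, burst=2)
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])  # [True, True, False, True]
```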
Integrations High Level:
  • Lambda Function:
    • Invoke Lambda function
    • Easy way to expose REST API backed by AWS Lambda
  • HTTP:
    • Expose HTTP endpoints in the backend
    • Example: internal HTTP API on-premises, Application Load Balancer...
    • Why? Add rate limiting, caching, user authentications, API keys, etc...
  • AWS Service:
    • Expose any AWS API through the API Gateway
    • Example: start an AWS Step Function workflow, post a message to SQS
    • Why? Add authentication, deploy publicly, rate control...
Endpoint Types:
  • Edge-Optimized (default): For global clients
    • Requests are routed through the CloudFront Edge locations (improves latency)
    • The API Gateway still lives in only one region
  • Regional:
    • For clients within the same region
    • Could manually combine with CloudFront (more control over the caching strategies and the distribution)
  • Private:
    • Can only be accessed from VPC using an interface VPC endpoint (ENI)
    • Use a resource policy to define access
    1. VPC > Your VPCs > Create VPC
    2. VPC > Subnets > Create subnet
    3. VPC > Route tables > Create route table
      VPC > Route tables > PrivateRouteTable > Actions > Edit subnet associations
    4. VPC > Internet gateways > Create internet gateway
      VPC > Elastic IP addresses > Allocate Elastic IP address
AWS Migration Workshop by SIS Thailand:
  • An Azure Subscription is a logical unit of Azure services that is linked to an Azure account
  • Azure has services that are free to use once you have an Azure Subscription
  • Azure billing runs in monthly cycles per Azure Subscription, based on usage
  1. Can create policy groups using Azure AD.
  2. Can join Android devices to Azure AD.
  3. Can join Windows devices to AD.
  4. The pay-as-you-go model is entirely OpEx transactions.
  5. A reserved VM involves an upfront payment, so it is classed as CapEx, not OpEx.
  1. Company intends to subscribe to an Azure support plan. The support plan must allow new support requests to be opened. Basic, Developer, Standard, and Professional Direct are support plans that allow this.
  2. Company has datacenters in Los Angeles and New York and a Microsoft Azure subscription. The two datacenters are configured as geo-clustered sites for site resiliency. Need to recommend an Azure storage redundancy option. The data storage requirements are as follows: Data
    • Must be stored on multiple nodes.
    • Must be stored on nodes in separate geographic locations.
    • Can be read from the secondary location as well as from the primary location.
    Azure Read-Access Geo-Redundant Storage (RA-GRS) should be recommended.
    RA-GRS provides higher read availability for the storage account by providing read-only access to the data replicated to the secondary location. Once this feature is enabled, the secondary location may be used to achieve higher availability in the event the data is not available in the primary region. This is an opt-in feature that requires the storage account to be geo-replicated.
    • Locally-Redundant Storage (LRS): primary region, 3 copies within 1 datacenter.
    • Zone-Redundant Storage (ZRS): primary region, a separate copy in each of 3 Availability Zones within 3 separate datacenters.
    • (RA-)GRS: primary region, 3 copies in 1 datacenter - geo-replication - secondary region, 3 copies in 1 datacenter.
    • (RA-)GZRS: primary region, 3 Availability Zones within 3 separate datacenters - geo-replication - secondary region, 3 copies in 1 datacenter.

  3. Company's Azure subscription includes a Basic support plan. They would like to request an assessment of an Azure environment's design from Microsoft. This is, however, not supported by the existing plan. To enable this functionality while keeping expenses to a minimum, recommend that the company subscribe to the Professional Direct support plan, which includes onboarding services, service reviews, and Azure Advisor consultations.
  4. Tasked with deploying Azure Virtual Machines (VMs) for the company. Need to make use of the appropriate cloud deployment model: Infrastructure as a Service (IaaS).
  5. Developers have created 10 web applications that must be hosted on Azure. The web tier plan must meet the following requirements:
    • The web apps will use custom domains. (Basic, Shared, and Standard)
    • Web apps each require 10 GB of storage. (Basic & Standard)
    • The web apps must each run on dedicated compute instances. (Basic 3 instances max, Standard 10)
    • Load balancing between instances must be included. (Standard & Above)
    • Costs must be minimized.
    Should use Standard web tier plan.
  6. Planning to migrate a company to Azure. Each of the company's numerous divisions will have an administrator in place to manage the Azure resources used by the respective division. The Azure deployment must allow Azure to be segmented for the divisions while keeping administrative effort to a minimum. Plan to make use of an Azure Active Directory (AD) directory.
  7. Azure web tier plan storage / disk space:
    • Free = 1 GB
    • Shared = 1 GB
    • Basic = 10 GB
    • Standard = 50 GB
    • Premium = 250 GB
    • Isolated = 1 TB


  1. AI fits with Azure Machine Learning (ML) Studio (classic), a drag-and-drop tool that can be used to build, test, and deploy predictive analytics solutions.

    Azure Cosmos DB is a fully managed NoSQL database for modern app development. Single-digit millisecond response times and automatic, instant scalability guarantee speed at any scale.
  2. Company's Active Directory forest includes thousands of user accounts. All network resources will be migrated to Azure, and thereafter the on-premises data center will be retired. A strategy that reduces the effect on users once the planned migration is complete: sync all the Active Directory user accounts to Azure Active Directory (Azure AD).

  3. Azure API Management (APIM) Service is a way to create and manage custom APIs for existing backend services.

    Azure Resource Manager (ARM) is a tool that automates deployments to the Azure cloud.
    An ARM template can automate deployments and uses the practice of Infrastructure as Code (IaC). In code, define the infrastructure that needs to be deployed. The infrastructure code becomes part of the project. Just like application code, store the infrastructure code in a source repository and version it. Anyone on the team can run the code and deploy similar environments.

  4. Azure VM High Availability (HA) SLA:
    • 95%: Single VM using Standard HDD Managed Disks for Operating System and Data Disks
    • 99.5%: Single VM using Standard SSD Managed Disks for OS and Data Disks
    • 99.9%: Single VM using Premium SSD or Ultra Disk for all OS and Data Disks
    • 99.95%: 2 VMs
    • 99.99%: 2 VMs in 2 Availability Zones (AZs)
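The SLA figures above are contractual guarantees, but the probability math for independent failures shows why a second VM helps so much: the service is down only when both VMs are down at the same time. A back-of-the-envelope sketch:

```python
# Composite availability assuming independent VM failures.
def composite_sla(single_vm_sla, vm_count):
    downtime_probability = (1 - single_vm_sla) ** vm_count
    return 1 - downtime_probability

# Two VMs at 99.9% each: downtime only when both fail at once.
print(round(composite_sla(0.999, 2), 6))  # ~0.999999, i.e. "six nines"
```

This is why the contractual SLA jumps from 99.9% for a single Premium-SSD VM to 99.95%+ once two or more VMs are deployed.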

  5. Company's developers intend to deploy a large number of custom virtual machines on a weekly basis, removing the VMs during the same week they were deployed. 60% of the VMs have Windows Server 2016 installed, while the other 40% have Ubuntu Linux installed. The administrative effort needed for this process is reduced by employing Azure DevTest Labs.
    Azure DevTest Labs:
    • Quickly provision development and test environments
    • Minimize waste with quotas and policies
    • Set automated shutdowns to minimize costs
    • Build Windows and Linux environments
  6. Company has VMs hosted in Microsoft Azure. The VMs are located in a single Azure virtual network named VNet1. The company has users that work remotely. The remote workers require access to the VMs on VNet1. Provide access for the remote workers by configuring a Point-to-Site (P2S) VPN.
    A P2S VPN gateway connection lets you create a secure connection (over OpenVPN, IKEv2, or SSTP) to a virtual network from an individual client computer in a remote location, such as a conference or home.
  7. To automate server deployment to Azure, there is, however, some concern that administrative credentials could be uncovered during this process. During the deployment, the administrative credentials are encrypted using an Azure Key Vault.
  8. Azure Government can only be used by United States government entities and their contractors and partners.
    Azure Government is a cloud environment specifically built to meet the compliance and security requirements of the US government. This mission-critical cloud delivers breakthrough innovation to US government customers and their partners. Only US federal, state, local, and tribal governments and their partners, including the Department of Defense, have access to this dedicated instance. It applies to government at any level.
  9. Company has an Azure Active Directory (Azure AD) environment. Users occasionally connect to Azure AD via the internet. Users who connect to Azure AD from an unidentified IP address are automatically prompted to change their passwords by Azure AD Identity Protection.
    Identity Protection identifies risks of many types, including:
    • Anonymous IP address use
    • Atypical travel
    • Malware linked IP address
    • Unfamiliar sign-in properties
    • Leaked credentials
    • Password spray
  10. Planning a strategy to deploy numerous web servers and database servers to Azure. This strategy should allow the connection types between the web servers and database servers to be controlled by Network Security Groups (NSGs).
    An Azure NSG can filter network traffic to and from Azure resources in an Azure virtual network. It contains security rules that allow or deny inbound network traffic to, or outbound network traffic from, several types of Azure resources. For each rule, you can specify the source and destination, port, and protocol.
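NSG behavior can be modeled in a few lines: rules are evaluated in priority order (lower number first), the first match wins, and unmatched inbound traffic hits a default deny rule. The field names and rule values below are invented for illustration; real NSG rules also match direction, protocol, and address prefixes.

```python
# Toy model of NSG rule evaluation (priority order, first match wins).
def evaluate(rules, port, source):
    for rule in sorted(rules, key=lambda r: r["priority"]):
        if port in rule["ports"] and rule["source"] in ("*", source):
            return rule["action"]
    return "Deny"  # NSGs end with a default DenyAllInbound rule

# Let web-tier subnets reach SQL Server (port 1433); deny everyone else.
rules = [
    {"priority": 100, "ports": {1433}, "source": "web-subnet", "action": "Allow"},
    {"priority": 200, "ports": {1433}, "source": "*", "action": "Deny"},
]
print(evaluate(rules, 1433, "web-subnet"))  # Allow
print(evaluate(rules, 1433, "internet"))    # Deny
```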
  11. A PaaS solution does not provide access to the operating system. The Azure Web Apps service provides an environment to host web applications. Behind the scenes, the web apps are hosted on virtual machines running IIS; however, there is no direct access to the VM, the OS, or IIS.
    A PaaS solution that hosts web apps in Azure does provide:
    • The ability to scale the platform automatically. This is known as autoscaling. Autoscaling means adding more load balanced VMs to host the web apps.
    • A framework that developers can build upon to develop or customize cloud-based applications. PaaS development tools can cut the time it takes to code new apps with pre-coded application components built into the platform, such as workflow, directory services, security features, search and so on.
  12. Traditionally, IT expenses have been considered a Capital Expenditure (CapEx). Today, with the move to the cloud and the pay-as-you-go model, organizations have the ability to stretch their budgets and are shifting their IT CapEx costs to Operating Expenditures (OpEx) instead. This flexibility, in accounting terms, is now an option due to the 'as a Service' model of purchasing software, cloud storage and other IT related resources.

    Two VMs using the same size could have different disk configurations. Therefore, the monthly costs could be different.

    When an Azure VM is stopped, you don't pay for the VM. However, you still pay for the storage costs associated with the VM. The most common storage costs are for the disks attached to the VMs. There are also other storage costs associated with a VM, such as storage for diagnostic data and VM backups.


  1. When implementing a Software as a Service (SaaS) solution, you're responsible for configuring the SaaS solution. Everything else is managed by the cloud provider.
    • SaaS requires the least amount of management. The cloud provider is responsible for managing everything, and the end user just uses the software.
    • SaaS allows users to connect to and use cloud-based apps over the internet. Common examples are email, calendaring, and office tools (such as Microsoft Office 365).
    • SaaS provides a complete software solution that you purchase on a pay-as-you-go basis from a cloud service provider. You rent the use of an app for your organization, and users connect to it over the internet, usually with a web browser. All of the underlying infrastructure, middleware, app software, and app data are located in the service provider's data center. The service provider manages the hardware and software and, with the appropriate service agreement, will ensure the availability and security of the app and data as well.
  2. Fault tolerance is the ability of a system to continue to function in the event of a failure of some of its components. Could have servers that are replicated across datacenters.
    Availability zones:
    • expand the level of control you have to maintain the availability of the applications and data on VMs.
    • unique physical locations within an Azure region.
    • Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking.
    • To ensure resiliency, there are a minimum of three separate zones in all enabled regions.
    • The physical separation within a region protects applications and data from datacenter failures.
    • Azure offers an industry-best 99.99% VM uptime SLA. By architecting solutions to use replicated VMs in zones, applications and data (managed disks) can be protected from the loss of a datacenter. If one zone is compromised, replicated apps and data are instantly available in another zone.
  3. Cloud Deployment Models:
    1. A private cloud is hosted in your own datacenter. Therefore, you cannot close your datacenter if you are using a private cloud. You create a cloud environment in your own datacenter and provide self-service access to compute resources for users in the organization. This offers a simulation of a public cloud to your users, but you remain completely responsible for the purchase and maintenance of the hardware and software services you provide.
    2. A public cloud is hosted externally, for example, in Microsoft Azure. An organization that hosts its infrastructure in a public cloud can close its data center, since it no longer requires one. Public cloud is the most common deployment model. There is no local hardware to manage or keep up to date; everything runs on the cloud provider's hardware.
      1. You get pay-as-you-go (metered) pricing: pay only for what you use, with no CapEx costs.
      2. Self-service management. You are responsible for the deployment and configuration of cloud resources such as VMs or websites. The underlying hardware that hosts the cloud resources is managed by the cloud provider.
      3. The underlying hardware is shared, so multiple customers may have cloud resources hosted on the same physical hardware.
      4. Connections to the public cloud are secure.
      5. Storage is not limited. You can have as much storage as you like.
  4. When planning to migrate a public website to Azure, must plan to pay monthly usage costs. This is because Azure uses the pay-as-you-go model.
  5. Examples of Azure solutions:
    • IaaS: VMs, Microsoft SQL Server, DNS server installed on a VM
      IaaS is the most flexible category of cloud services. It aims to give complete control over the hardware that runs applications (IT infrastructure: servers and VMs, storage, networks, and OS). Instead of buying hardware, with IaaS you just rent it.
    • PaaS: Azure App Service, Backup, Cosmos DB, Files, logic app, Storage, SQL databases, web app, etc.
      Azure SQL Database is a fully managed PaaS database engine that handles most of the database management functions, such as upgrading, patching, backups, and monitoring, without user involvement. It always runs on the latest stable version of SQL Server.
      A database engine and patched OS with 99.99% availability are built in, enabling focus on the domain-specific database administration and optimization activities that are critical for the business.
    • SaaS: Microsoft Intune
  6. Elasticity is the ability to provide additional compute resources when needed and to reduce them when not needed, in order to reduce costs. An example is autoscaling.
    Elastic computing is the ability to quickly expand or decrease compute, memory, and storage resources to meet changing demands without worrying about capacity planning and engineering for peak usage. Typically controlled by system monitoring tools, elastic computing matches the amount of resources allocated to the amount of resources actually needed without disrupting operations. With cloud elasticity, a company avoids paying for unused capacity or idle resources and doesn't have to worry about investing in the purchase or maintenance of additional resources and equipment.
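The autoscaling decision described above reduces to a simple rule: add instances under heavy load, remove them when load drops. A minimal sketch with invented thresholds, not any cloud provider's actual policy engine:

```python
# Threshold-based autoscaling decision (elasticity in miniature).
def autoscale(instance_count, avg_cpu, min_instances=1, max_instances=10):
    if avg_cpu > 0.75 and instance_count < max_instances:
        return instance_count + 1   # scale out under heavy load
    if avg_cpu < 0.25 and instance_count > min_instances:
        return instance_count - 1   # scale in to cut costs
    return instance_count           # load is in the comfortable band

print(autoscale(2, 0.90))  # 3
print(autoscale(3, 0.10))  # 2
print(autoscale(1, 0.10))  # 1 (never below the minimum)
```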

  7. Azure App Service is used to build, deploy, and scale web apps. It is a PaaS offering that lets you create web and mobile apps for any platform or device and connect to data anywhere, in the cloud or on-premises. App Service includes the web and mobile capabilities that were previously delivered separately as Azure Websites and Azure Mobile Services.

  8. A hybrid cloud is a combination of a private and public cloud.
    CapEx is the spending of money up-front for infrastructure such as new servers.
    With a hybrid cloud, you can continue to use the on-premises servers while adding new servers in the public cloud (Azure, for example). This minimizes CapEx costs, since you are not paying for new servers as you would if you deployed them on-premises.
    A complete migration of 100 on-premises servers to the public cloud would involve a lot of OpEx (the cost of migrating all the servers).
    You could start with a public cloud and then combine it with an on-premises infrastructure to implement a hybrid cloud.
  9. A company extending the capacity of its internal network by using the public cloud is very common. When more capacity is needed, rather than paying for new on-premises infrastructure, you can configure a cloud environment and connect the on-premises network to it using a VPN.
  10. Can give anyone with an account in Azure AD access to the cloud resources.
    There are many authentication scenarios but a common one is to replicate on-premises AD accounts to Azure AD and provide access to the Azure AD accounts.
    Another commonly used authentication method is 'Federation' where authentication for access to cloud resources is passed to another authentication provider such as an on-premises AD.
  11. The public cloud is a shared entity whereby multiple corporations each use a portion of the resources in the cloud. The hardware resources (servers, infrastructure, etc.) are managed by the cloud provider. Multiple companies create resources such as VMs and virtual networks on the hardware resources.
    Microsoft Azure, Amazon Web Services, and Google Cloud are three examples of public cloud services.
    The Microsoft Azure cloud is owned by Microsoft. Amazon and Google own their hardware too. The tenants are the customers who use the public cloud services.
    You pay for a cloud subscription and create accounts for users to access cloud resources. No one can access the resources until user accounts are created and the appropriate access permissions are granted.


  1. An Azure web app that queries an on-premises Microsoft SQL server is an example of a hybrid cloud.
  2. One of the major changes faced when moving from on-premises infrastructure to the public cloud is the switch from CapEx (buying hardware) to OpEx (paying for services as they are used). This switch also requires more careful management of costs. The benefit of the cloud is that can fundamentally and positively affect the cost of a service in use by merely shutting down or resizing it when it's not needed.
  3. Fault tolerance is the ability of a service to remain available after a failure of one of the components of the service. For example, a service running on multiple servers can withstand the failure of one of the servers.

    Disaster recovery is the recovery of a service after a failure. For example, restoring a VM from backup after a VM failure.
  4. Dynamic scalability is the ability for compute resources to be added to a service when the service is under heavy load. For example, in a VM scale set, additional instances of the VM are added when the existing VMs are under heavy load.

    Latency is the time it takes a service to respond to requests. For example, the time it takes for a web page to be returned from a web server. Low latency means low response time, which means a quicker response.
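Latency is simply elapsed time around an operation, which can be measured with the standard library. The `work()` function below is a stand-in assumption for a real request such as fetching a web page:

```python
# Minimal sketch of measuring latency (response time); work() simulates
# a 50 ms operation standing in for a real web request.
import time

def work():
    time.sleep(0.05)  # simulate a 50 ms operation

start = time.perf_counter()
work()
latency = time.perf_counter() - start
print(f"latency: {latency * 1000:.0f} ms")  # lower latency = quicker response
```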
  5. To implement a hybrid cloud model, a company does not need to already have an internal network. Could start with a public cloud and then combine that with an on-premises infrastructure to implement a hybrid cloud. A private cloud can also be hosted at a third-party data center.
  6. A PaaS solution provides additional memory to apps by changing pricing tiers and can automatically scale the number of instances.
  7. Azure VMs run on Hyper-V; the physical servers are owned and managed by Microsoft. Customers have no access to the physical servers; Microsoft manages the replacement of failed server hardware and the security of the physical servers. As an Azure customer, it remains the customer's responsibility to:
    • Back up application data
    • Update the server OS
    • Manage permissions to shared documents
  8. Paying for electricity for own datacenter is an ongoing running cost, so it is classed as OpEx, not CapEx.
    Deploying own datacenter is an example of CapEx because need to purchase all the infrastructure upfront before can use it.
  9. Physical servers can be deployed in private and hybrid clouds.
  10. With a public cloud, there is no CapEx on server hardware, etc. Only pay for the cloud resources that are used.
    A private cloud exists on premises, so have complete control over security.
    A hybrid cloud is a mix of public cloud and on-premises resources. Therefore, have a choice to use either.
  11. To create a hybrid cloud, must deploy resources to a public and private cloud.
    Private clouds can be and most commonly are connected to the Internet.
  12. Company plans to deploy several custom applications to Azure. The applications will provide invoicing services to the customers of the company. Each application will have several prerequisite applications and services installed. IaaS could be a deployment solution for all the applications.
  13. Azure Site Recovery provides disaster recovery for VMs. As an organization need to adopt a Business Continuity and Disaster Recovery (BCDR) strategy that keeps data safe, and apps and workload online, when planned and unplanned outages occur.
  14. CapEx:
    • Building a data center infrastructure
    • Software purchased as a one-off purchase
    OpEx:
    • Staff salaries
    • Leasing software
  15. Users can run most SaaS apps directly from their web browser without needing to download and install any software, although some apps require plugins.

    With IaaS, must install the software that want to use.
  16. Cannot create a resource group inside of another resource group.

    Each resource can exist in only one resource group.

    A resource group can contain resources from multiple Azure regions. The resource group only contains metadata about the resources it contains.
  17. Cloud computing:
    • provides elastic scalability.
    • leverages virtualization to provide services to multiple customers simultaneously.
  18. Define strategy is the first stage in the Microsoft Cloud Adoption Framework for Azure.

  19. A resource group is a logical container for Azure resources. Resource groups make the management of Azure resources easier. Can allow a user to manage all resources in the resource group, such as VMs, websites, and subnets. Permissions applied to the resource group apply to all resources contained in the resource group.
  20. Availability Zones (AZs) expand the level of control available to maintain the availability of the applications and data on VMs. An AZ is a physically separate zone within an Azure region. There are a minimum of three AZs per supported Azure region.
    Each AZ has a distinct power source, network, and cooling. By architecting solutions to use replicated VMs in zones, can protect apps and data from the loss of a datacenter. If one zone is compromised, then replicated apps and data are instantly available in another zone.
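The value of zone replication can be shown with back-of-the-envelope probability arithmetic. The 99.9% per-zone figure below is an illustrative assumption for the sketch, not Azure's published SLA, and zone failures are assumed independent:

```python
# Illustration: if each zone is independently available 99.9% of the
# time, a zone-replicated service is down only when ALL zones are down.
single_zone = 0.999          # assumed per-zone availability (not an SLA)
zones = 3

combined = 1 - (1 - single_zone) ** zones
print(f"combined availability: {combined:.9f}")  # ~0.999999999
```

In other words, replicating across three zones turns a one-in-a-thousand outage probability into roughly one in a billion, under these assumptions.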
  21. Azure Data Warehouse (now known as Azure Synapse Analytics) is a PaaS offering from Microsoft. As with all PaaS services from Microsoft, SQL Data Warehouse offers an availability SLA of 99.9% because it has high availability (HA) features built into the platform.
  22. By deploying the VMs to two or more regions, are deploying the VMs to multiple datacenters. This will ensure that the services running on the VMs are available if a single data center fails.
    Azure operates in multiple datacenters around the world. These datacenters are grouped into geographic regions, giving flexibility in choosing where to build applications.
    Create Azure resources in defined geographic regions like 'West US', 'North Europe', or 'Southeast Asia'. Can review the list of regions and their locations.
    Within each region, multiple datacenters exist to provide for redundancy and availability.
  23. A resource can interact with resources in other resource groups.

    Deleting the resource group will remove the resource group as well as all the resources in that resource group. This can be useful for the management of resources. For example, a VM has several components (the VM itself, virtual disks, network adapter, etc.). By placing the VM in its own resource group, can delete the VM along with all its associated components by deleting the resource group.
    Another example is when creating a test environment. Could place the entire test environment (Network components, VMs, etc.) in one resource group. Can then delete the entire test environment by deleting the resource group.

    Resources from multiple different regions can be placed in a resource group.
  24. Can use Power BI to analyze and visualize data stored in Azure Data Lake and Synapse Analytics.
    Azure Data Lake includes all of the capabilities required to make it easy for developers, data scientists and analysts to store data of any size and shape and at any speed, and do all types of processing and analytics across platforms and languages. It removes the complexities of ingesting and storing all data while making it faster to get up and running with batch, streaming, and interactive analytics. It also integrates seamlessly with operational stores and data warehouses so that can extend current data applications.


  1. The Azure portal is a web-based management interface where can view and manage all Azure resources in one unified hub, including web apps, databases, VMs, virtual networks, storage, and Visual Studio team projects. The URL is
  2. Regions > Zones > Datacenters > Availability Sets (Rack)
  3. A Local Network Gateway is an object in Azure that represents the on-premises VPN device/location. Give the site a name by which Azure can refer to it, then specify the IP address of the on-premises VPN device to which a connection will be created. Also specify the IP address prefixes that will be routed through the VPN gateway to the VPN device. The address prefixes specified are the prefixes located on the on-premises network. If the on-premises network changes or the public IP address of the VPN device needs to change, can easily update the values later.
    A Virtual Network Gateway is the VPN object at the Azure end of the VPN. A 'connection' is what connects the Local Network Gateway and the Virtual Network Gateway to bring up the VPN.
  4. When create a resource group, specify which location to create the resource group in. However, when create a VM and place it in the resource group, the VM can still be in a different location (different datacenter). Therefore, creating multiple resource groups, even if they are in separate datacenters does not ensure that the services running on the VMs are available if a single data center fails.
  5. Azure VM scale sets
    • let create and manage a group of load balanced VMs. The number of VM instances can automatically increase or decrease in response to demand or a defined schedule.
    • provide high availability to applications, and allow to centrally manage, configure, and update many VMs.
    • VMs in a scale set can be deployed across multiple update and fault domains to maximize availability and resilience to outages due to data center outages, and planned or unplanned maintenance events.
  6. Azure AD tenant can have multiple subscriptions but an Azure subscription can only be associated with one Azure AD tenant.

    Can change the Azure AD tenant to which an Azure subscription is associated.

    If the subscription expires, lose access to all the resources associated with the subscription. However, the Azure AD directory remains in Azure. Can associate and manage the directory using a different Azure subscription.
  7. Having the ability to manage compliance across multiple subscriptions is the definition of management groups.
    If organization has many subscriptions, may need a way to efficiently manage access, policies, and compliance for those subscriptions. Azure management groups provide a level of scope above subscriptions.
    Need Azure Policies to manage compliance of Azure Resources, but only Management Groups provides a simple way (or 'the ability') to do it across multiple subscriptions.


  8. An Azure subscription is a container for Azure resources. It is also a boundary for permissions to resources and for billing. Charged monthly for all resources in a subscription. A single Azure tenant (Azure AD) can contain multiple Azure subscriptions.
    A resource group is a container that holds related resources for an Azure solution. Can include all the resources for the solution, or only those resources that want to manage as a group.
    To enable each department administrator to manage the Azure resources used by that department, will need to create a separate subscription per department. Can then assign each department administrator as an administrator for the subscription to enable them to manage all resources in that subscription.
  9. Can use a single Microsoft account to manage multiple subscriptions. Can create an additional subscription for account in the Azure portal. May want an additional subscription to avoid hitting subscription limits, to create separate environments for security, or to isolate data for compliance reasons.

    Cannot merge two subscriptions into a single subscription. However, can move some Azure resources from one subscription to another. Can also transfer ownership of a subscription and change the billing type for a subscription.

    A company can have multiple subscriptions and store resources in the different subscriptions. However, a resource instance can exist in only one subscription.
  10. Can move a VM and its associated resources to a different subscription by using the Azure portal.
    Moving between subscriptions can be handy if originally created a VM in a personal subscription and now want to move it to company's subscription to continue work. Do not need to start the VM in order to move it and it should continue to run during the move.
  11. To implement a solution that enables the client computers on on-premises network to communicate to the Azure VMs, need to configure a VPN to connect the on-premises network to the Azure VM.
    The Azure VPN device is known as a Virtual Network Gateway. It needs to be located in a dedicated subnet in the Azure virtual network. This dedicated subnet is known as a gateway subnet and must be named 'GatewaySubnet'.
    A virtual network is also required. However, as already have VMs deployed in an Azure, can assume that the virtual network is already in place.
  12. Many Azure resources have quota limits. The purpose is to help control Azure costs. However, it is common to require an increase to the default quota.
    Can request a quota limit increase by opening a support request. Select 'Service and subscription limits (quotas)', for the Issue type, select subscription and the service want to increase the quota for.
  13. Can assign service administrators and co-administrators in the Azure Portal but there can only be one account administrator.

    Need an Azure AD account to manage a subscription, not a Microsoft account.
    An account is created in the Azure AD when create the subscription. Further accounts can be created in the Azure AD to manage the subscription.

    Resource groups are logical containers for Azure resources. Subscriptions contain resource groups.
  14. Not all Azure regions support AZs.

    AZs can be used with many Azure services, VMs, etc.

    AZs are unique physical locations within a single Azure region.
  15. Page blobs are the backbone of the virtual disk platform for Azure IaaS. Both Azure OS and data disks are implemented as virtual disks where data is durably persisted in the Azure Storage platform and then delivered to the VMs for maximum performance. Azure Disks are persisted in Hyper-V VHD format and stored as page blobs in Azure Storage.
  16. Networks in Azure are known as virtual networks. A virtual network can have multiple IP address spaces and multiple subnets. Azure automatically routes traffic between different subnets within a virtual network.
    The only way to separate XServer from the other servers in networking terms is to place the server in a different virtual network to the other servers.


  1. Azure Files is Microsoft's easy-to-use cloud file system. Azure file shares can be seamlessly used in Windows and Windows Server.
    To use with Windows, must either mount it, which means assigning it a drive letter or mount point path, or access it via its Universal Naming Convention (UNC) path.
    Unlike other Server Message Block (SMB) shares may have interacted with, such as those hosted on a Windows Server, Linux Samba server, or Network Attached Storage (NAS) device, Azure file shares do not currently support Kerberos authentication with AD or Azure AD (AAD) identity.
    Instead, must access Azure file share with the storage account key for the storage account containing Azure file share. A storage account key is an administrator key for a storage account, including administrator permissions to all files and folders within the file share accessing, and for all file shares and other storage resources (blobs, queues, tables, etc) contained within storage account.
  2. Azure Cosmos DB is Microsoft's globally distributed, multi-model database service. With a click of a button, it enables elastic and independent scaling of throughput and storage across any number of Azure regions worldwide.
    Is a great way to store unstructured and JSON data. Combined with Azure Functions, makes storing data quick and easy with much less code than required for storing data in a relational DB.
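The document model the note refers to can be illustrated with plain JSON and the standard library. The order records below are invented sample data showing why schemaless items suit unstructured data — two items in the same collection can carry different fields:

```python
# Conceptual sketch of the document (JSON) model: items are schemaless,
# so records can differ in shape without any schema migration, unlike
# rows in a relational table.
import json

orders = [
    {"id": "1", "customer": "Contoso", "total": 120.0},
    {"id": "2", "customer": "Fabrikam", "total": 80.0,
     "giftWrap": True},             # extra field, no schema change needed
]

serialized = json.dumps(orders)     # store as-is, no ORM mapping code
restored = json.loads(serialized)
print(restored[1]["giftWrap"])      # True
```

This round-trip with no mapping layer is the "much less code" advantage over storing the same data relationally.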
  3. The first thing create in Azure is a subscription. Can think of an Azure subscription as an 'Azure account'. Get billed per subscription.
    A subscription is an agreement with Microsoft to use one or more Microsoft cloud platforms or services, for which charges accrue based on either a per-user license fee or on cloud-based resource consumption.
    • Microsoft's SaaS-based cloud offering (O365, Intune/EMS, and Dynamics 365) charge per-user license fees.
    • Microsoft's PaaS and IaaS cloud offering (Azure) charge based on cloud resource consumption.
    Can also use a trial subscription, but the subscription expires after a specific amount of time or when consumption charges reach a specified limit. Can convert a trial subscription to a paid subscription.
    Organizations can have multiple subscriptions for Microsoft's cloud offerings.
  4. Azure resources deployed to a single resource group can be located in different regions. The resource group only contains metadata about the resources it contains. When creating a resource group, need to provide a location for that resource group. The resource group stores metadata about the resources. When specify a location for the resource group, specifying where that metadata is stored. For compliance reasons, may need to ensure that data is stored in a particular region.

    Tags for Resources are not inherited by default from their Resource Group

    A resource group can be used to scope access control for administrative actions. By default, permissions set at the resource group level are inherited by the resources in the resource group.
  5. Azure storage offers different access tiers: hot, cool, and archive.
    The archive access tier has the lowest storage cost. But it has higher data retrieval costs compared to the hot and cool tiers. Data in the archive tier can take several hours to retrieve.
    While a Binary Large OBject (BLOB) is in archive storage, the blob data is offline and can't be read, overwritten, or modified. To read or download a blob in archive, must first rehydrate it to an online tier.
    Example usage scenarios for the archive access tier include:
    • Long-term backup, secondary backup, and archival datasets
    • Original (raw) data that must be preserved, even after it has been processed into final usable form.
    • Compliance and archival data that needs to be stored for a long time and is hardly ever accessed.
  6. Azure Event Hub is a big data streaming platform and event ingestion service. It can receive and process millions of events per second. Data sent to an event hub can be transformed and stored by using any real-time analytics provider or batching/storage adapters.
    Can be used to ingest, buffer, store, and process streams in real time to get actionable insights. Uses a partitioned consumer model, enabling multiple applications to process the stream concurrently and letting control the speed of processing.
    Can be used to capture data in near-real time in an Azure Blob storage or Azure Data Lake Storage for long-term retention or micro-batch processing.
  7. To correlate events from multiple resources into a centralized repository, use Azure Monitor. Log data collected by Azure Monitor is stored in a Log Analytics workspace, which is based on Azure Data Explorer. It collects telemetry from a variety of sources and uses the Kusto query language used by Data Explorer to retrieve and analyze data.
  8. There are different replication options available with a storage account. The 'minimum' replication option is Locally Redundant Storage (LRS). With LRS, data is replicated synchronously three times within the primary region.
  9. Data is not backed up automatically to another Azure Data Center although it can be depending on the replication option configured for the account. LRS is the default which maintains three copies of the data in the data center.
    Geo-redundant storage (GRS) has cross-regional replication to protect against regional outages. Data is replicated synchronously three times in the primary region, then replicated asynchronously to the secondary region.

    The current storage limit is 2 PB for US and Europe, and 500 TB for all other regions (including the UK) with no limit on the number of files.
  10. Regions that support AZs support both Windows Server and Linux VMs.

    AZs is a high-availability offering that protects applications and data from datacenter failures. Zone-redundant services replicate applications and data across AZs to protect from single-points-of-failure.
  11. North America has several Azure regions, including West US, Central US, South Central US, East US, and Canada East.
  12. A region is a set of datacenters deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.

    Outbound data transfer is charged at the normal rate and inbound data transfer is free.
  13. Azure Service Health provides a personalized view of the health of the Azure services and regions in use. This is the best place to look for service-impacting communications about outages, planned maintenance activities, and other health advisories because the authenticated Service Health experience knows which services and resources are currently in use.
  14. IoT Hub: A managed service that provides bidirectional communication between IoT devices and Azure

    IoT Central: A fully managed SaaS solution to connect, monitor, and manage IoT devices at scale

    Azure Sphere: A software and hardware solution that provides communication and security features for IoT devices
  15. A Windows Virtual Desktop:
    • Session host can run Windows 10 Enterprise multi-session / Windows 10 Enterprise, Windows 7 Enterprise, Windows Server 2019, 2016, 2012 R2.
    • Supports limiting the number of simultaneous user connections by entering the maximum number of users to be load-balanced to a single session host.
    • Supports desktop and app virtualization.
  16. The Azure Total Cost of Ownership (TCO) calculator can calculate cost savings due to reduced electricity consumption as a result of migrating on-premises Microsoft SQL servers to Azure.


  1. Microsoft Azure currently has 58 regions worldwide. Regions are divided into AZs.
  2. To use Azure AD credentials to sign in to a computer that runs Windows 10, the computer must be joined to Azure AD.

    Azure AD groups support dynamic membership rules.
  3. Azure automatically routes traffic between subnets in a virtual network. Therefore, all VMs in a virtual network can connect to the other VMs in the same virtual network. Even if the VMs are on separate subnets within the virtual network, they can still communicate with each other. To ensure that a VM cannot connect to the other VMs, the VM must be deployed to a separate virtual network.
  4. Azure Synapse Analytics: A fully managed data warehouse that has integral security at every level of scale at no extra cost.
  5. Azure Cosmos DB: A globally distributed DB that supports NoSQL.

    Azure HDInsight: Managed Apache Hadoop clusters in the cloud that enable to process massive amounts of data.
  6. Only the hot and cool access tiers can be set at the account level. The archive access tier can only be set at the blob level.
  7. The Hot access tier is recommended for data that is accessed and modified frequently. Usage scenarios include Data that is:
    • in active use or is expected to be read from and written to frequently
    • staged for processing and eventual migration to the cool access tier
    The Cool access tier is recommended for short-term backup and disaster recovery. Usage scenarios include:
    • Older data not used frequently but expected to be available immediately when accessed
    • Large data sets that need to be stored cost effectively, while more data is being gathered for future processing
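The tier guidance above boils down to a small decision rule. The numeric threshold in this helper is an assumption made up for the sketch — Azure does not define tiers by reads per month — but the branch order mirrors the scenarios listed:

```python
# Illustrative tier chooser distilled from the usage scenarios above;
# the reads-per-month threshold is an assumption, not an Azure rule.
def suggest_access_tier(reads_per_month: int, needs_instant_access: bool) -> str:
    if reads_per_month >= 10:
        return "hot"       # accessed and modified frequently
    if needs_instant_access:
        return "cool"      # infrequent, but must be available immediately
    return "archive"       # rarely accessed; hours-long rehydration is OK

print(suggest_access_tier(50, True))    # hot
print(suggest_access_tier(1, True))     # cool
print(suggest_access_tier(0, False))    # archive
```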
  8. Need to purchase a third-party virtual security appliance that will deploy to an Azure subscription. Should use Azure Marketplace.
  9. Executes code: Azure Functions allows implementing a system's logic in readily available blocks of code called 'functions'. Different functions can run anytime there is a need to respond to critical events.

    Azure Logic Apps
    • can have multiple stateful and stateless workflows.
    • is a cloud-based platform for creating and running automated workflows that integrate apps, data, services, and systems.
  10. Azure Functions:
    • provides the platform for serverless code / computing functionalities.
    • is a serverless compute service that lets run event-triggered code without having to explicitly provision or manage infrastructure.
    Azure Databricks is:
    • a big data analytics service for machine learning.
    • an Apache Spark-based analytics platform. The platform consists of several components including 'MLlib'. MLlib is a Machine Learning library consisting of common learning algorithms and utilities, including classification, regression, clustering, collaborative filtering, and dimensionality reduction, as well as underlying optimization primitives.
  11. Azure Application Insights:
    • detects and diagnoses anomalies in web apps.
    • a feature of Azure Monitor, is an extensible Application Performance Management (APM) service for developers and DevOps professionals. Use it to monitor live applications. It will automatically detect performance anomalies, and includes powerful analytics tools to help diagnose issues and to understand what users actually do with app.
    Azure App Service:
    • hosts web apps.
    • is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. Can develop in favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.
  12. A team of developers plans to deploy, and then remove, 50 customized VMs each week. Thirty of the VMs run Windows Server 2016 and 20 of the VMs run Ubuntu Linux. Azure DevTest Labs will minimize the administrative effort required to deploy and remove the VMs.
    DevTest Labs creates labs consisting of pre-configured bases or ARM templates. By using DevTest Labs, can test the latest versions of applications by doing the following tasks:
    • Quickly provision Windows and Linux environments by using reusable templates and artifacts.
    • Easily integrate deployment pipeline with DevTest Labs to provision on-demand environments.
    • Scale up load testing by provisioning multiple test agents and create pre-provisioned environments for training and demos.
  13. Can access Azure Cloud Shell in the Azure portal by clicking the >_ icon.


    Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources. It provides the flexibility of choosing the shell experience that best suits the way work, either Bash or PowerShell.
    Cloud Shell enables access to a browser-based command-line experience built with Azure management tasks in mind.
  14. For Windows, the Azure CLI is installed via an MSI, which gives access to the CLI through the Windows Command Prompt (CMD) or PowerShell.
  15. On the Help + support blade, there is a Service Health option. If click Service Health, a new blade opens. The Service Health blade contains the Planned Maintenance link, which opens a blade where can view a list of planned maintenance events that can affect the availability of an Azure subscription.
  16. Azure DevOps is Microsoft's primary software development and deployment platform. DevOps influences the application lifecycle throughout its plan, develop, deliver, and operate phases. An integrated solution for the deployment of code.

    Azure Advisor is a personalized cloud consultant that helps follow best practices to optimize Azure deployments. It analyzes resource configuration and usage telemetry and then recommends solutions that can help improve the cost effectiveness, performance, high availability, and security of Azure resources. A tool that provides guidance and recommendations to improve an Azure environment.

    Azure Cognitive Services are APIs, SDKs, and services available to help developers build intelligent applications without having direct AI or data science skills or knowledge. Enable developers to easily add cognitive features into their applications. The goal is to help developers create applications that can see, hear, speak, understand, and even begin to reason. The catalog of services can be categorized into five main pillars - Vision, Speech, Language, Web Search, and Decision. A simplified tool to build intelligent AI applications.

    Azure Application Insights detects and diagnoses anomalies in web apps. A feature of Azure Monitor, is an extensible APM service for developers and DevOps professionals. Use it to monitor live (web) applications. It will automatically detect performance anomalies, and includes powerful analytics tools to help diagnose issues and to understand what users actually do with app.
  17. Azure SQL Database is a relational DB service. A managed SQL Server DB in Azure. The SQL Server is managed by Microsoft; just have access to the DB.


  1. Azure SQL Data Warehouse (SQL DW) or Synapse Analytics is a cloud-based PaaS offering from Microsoft. It is a large-scale, distributed, Massively Parallel Processing (MPP) relational DB technology in the same class of competitors as Amazon Redshift or Snowflake. An important component of the Modern Data Warehouse multi-platform architecture. Because it is an MPP system with a shared-nothing architecture across distributions, it is meant for large-scale analytical workloads which can take advantage of parallelism.

    Can process big data jobs in seconds with Azure Data Lake Analytics. Can process petabytes of data for diverse workload categories such as querying, ETL, analytics, machine learning, machine translation, image processing, and sentiment analysis by leveraging existing libraries written in .NET languages, R or Python.

    Apache Hadoop was the original open-source framework for distributed processing and analysis of big data sets on clusters. The Hadoop ecosystem includes related software and utilities, including Apache Hive, HBase, Spark, Kafka, and many others.
    Azure HDInsight is a fully managed, full-spectrum, open-source analytics service in the cloud for enterprises. The Apache Hadoop cluster type in Azure HDInsight allows to use HDFS, YARN resource management, and a simple MapReduce programming model to process and analyze batch data in parallel.
  2. Azure Advisor displays security recommendations. Provides a consistent, consolidated view of recommendations for all Azure resources. It integrates with Azure Security Center to bring security recommendations. Can get security recommendations from the Security tab on the Advisor dashboard. Examples of recommendations include restricting access to VMs by configuring Network Security Groups, enabling storage encryption, and installing vulnerability assessment solutions. Helps optimize and reduce overall Azure spend by identifying idle and underutilized resources. Get cost recommendations from the Cost tab.
  3. Azure Machine Learning designer lets visually connect datasets and modules on an interactive canvas to create machine learning models.
  4. Composite SLAs involve multiple services supporting an application, each with differing levels of availability. For example, consider an App Service web app that writes to Azure SQL DB. At the time of this writing, these Azure services have the following SLAs:
    • App Service web apps = 99.95%
    • SQL DB = 99.99%
    If either service fails, the whole application fails. The probability of each service failing is independent, so the composite SLA for this application is 99.95% x 99.99% = 99.94%. That's lower than the individual SLAs, which isn't surprising because an application that relies on multiple services has more potential failure points.
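    The composite SLA arithmetic above can be sketched in a few lines (a minimal illustration; the SLA figures are the ones quoted above):

    ```python
    # Composite SLA for serially dependent services: multiply the individual SLAs,
    # because the app is only up when every service it depends on is up.
    def composite_sla(*slas):
        result = 1.0
        for sla in slas:
            result *= sla
        return result

    app_service = 0.9995  # App Service web apps SLA (99.95%)
    sql_db = 0.9999       # SQL DB SLA (99.99%)

    print(f"{composite_sla(app_service, sql_db):.4%}")  # 99.9400%
    ```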
  5. The Azure portal is the web-based portal for managing Azure.
    Azure Cloud Shell is a web-based command line for managing Azure, accessed from the Azure portal.
    Being web-based, can use the Azure portal and Cloud Shell on an iPhone.
  6. Internet of Things (IoT) Hub provides data from millions of sensors. A managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between IoT application and the devices it manages. Can use to build IoT solutions with reliable and secure communications between millions of IoT devices and a cloud-hosted solution backend. Can connect virtually any device to IoT Hub.
    There are two storage services IoT Hub can route messages to - Azure Blob Storage and Azure Data Lake Storage Gen2 (ADLS Gen2) accounts. Azure Data Lake Storage accounts are hierarchical-namespace-enabled storage accounts built on top of blob storage. Both of these use blobs for their storage.
  7. The basic advantage of cloud computing is shifting high Capital Expenditure (CapEx) requirements to optimal Pay-As-You-Go model which is Operational Expenditure (OpEx).
  1. Can install the Azure PowerShell module locally on Windows, macOS, and Linux. It can also be used from a browser through Azure Cloud Shell or inside a Docker container.
  2. PowerApps lets quickly build business applications with little or no code. It is not used to create Azure VMs.
    PowerApps Portals allow organizations to create websites which can be shared with users external to their organization either anonymously or through the login provider of their choice like LinkedIn, Microsoft Account, other commercial login providers.
  3. The Azure portal is a web-based, unified console that provides an alternative to command-line tools. With the Azure portal, can manage Azure subscription using a Graphical User Interface (GUI). Can build, manage, and monitor everything from simple web apps to complex cloud deployments. Create custom dashboards for an organized view of resources. Configure accessibility options for an optimal experience.
  4. Azure Monitor cannot send alert emails to an Azure AD security group, only to individual Azure AD users.
  5. Azure Monitor:
    • is used to monitor the health of Azure services.
    • maximizes the availability and performance of applications and services by delivering a comprehensive solution for collecting, analyzing, and acting on telemetry from cloud and on-premises environments. It helps understand how applications are performing and proactively identifies issues affecting them and the resources they depend on.
    • uses Target Resource, which is the scope and signals available for alerting. A target can be any Azure resource. Example targets: a VM, a storage account, a VM scale set, a Log Analytics workspace, or an Application Insights resource.
  6. Azure Repos is a set of version control tools that can use to manage code.

    Azure DevTest Labs creates labs consisting of pre-configured bases or Azure Resource Manager templates. These have all the necessary tools and software that can use to create environments.
  7. Azure Monitor should be used from the Azure portal to view service failure notifications that can affect the availability of a VM.
  8. A PowerShell script is a file that contains PowerShell cmdlets and code. Needs to be run in PowerShell or Azure Cloud Shell.
    PowerShell can now also be installed on Linux.

  9. Azure Service Health consists of three components: Azure Status, Azure Service Health, and Azure Resource Health.
    To view the health of all other services available in Azure, would use the Azure Status component of Azure Service Health. The Azure Status page informs about service outages in Azure. The page is a global view of the health of all Azure services across all Azure regions.

    The best way to use Service Health is to set up Service Health alerts to notify via preferred communication channels when service issues, planned maintenance, or other changes may affect the Azure services and regions in use.

    Can use Resource Health to view the health of a VM. However, it cannot be used to prevent a service failure from affecting the VM.
    Azure Resource Health provides information about the health of individual cloud resources, such as a specific VM instance.

  10. A computer that has PowerShell Core but doesn't have the Azure CLI installed needs the Azure PowerShell module in addition to PowerShell to run Azure commands, such as New-AzVM.


  1. Can browse available VM images in the Azure Marketplace.
    Azure Marketplace provides access and information on solutions and services available from Microsoft and their partners. Customers can discover, try, or buy cloud software solutions built on or for Azure. The catalog of 8,000+ listings provides Azure building blocks, such as VMs, APIs, Azure apps, Solution Templates and managed applications, SaaS apps, containers, and consulting services.

  2. Security Center helps prevent, detect, and respond to threats with increased visibility into and control over the security of Azure resources. It periodically analyzes the security state of Azure resources. When Security Center identifies potential security vulnerabilities, it creates recommendations. The recommendations guide through the process of configuring the controls needed.

  3. With Azure Cloud Shell, can create VMs using Bash or PowerShell.

  4. Azure Logic Apps is a cloud service that helps schedule, automate, and orchestrate tasks, business processes, and workflows when need to integrate apps, data, systems, and services across enterprises or organizations. Simplifies how to design and build scalable solutions for app, data, system integration, Enterprise Application Integration (EAI), and Business-to-Business (B2B) communication, whether in the cloud, on premises, or both.
    For example, here are just a few workloads can automate with logic apps:
    • Process and route orders across on-premises systems and cloud services.
    • Send email notifications with O365 when events happen in various systems, apps, and services.
    • Move uploaded files from an SFTP or FTP server to Azure Storage.
    • Monitor tweets for a specific subject, analyze the sentiment, and create alerts or tasks for items that need review.
  5. Users are located worldwide and will be downloading large video files. The video playback experience would be improved if they can download the video from servers in the same region as the users. Can achieve this by using a Content Delivery Network (CDN).
    CDN is a distributed network of servers that can efficiently deliver web content to users. CDNs store cached content on edge servers in Point-of-Presence (PoP) locations that are close to end users, to minimize latency.
    Azure CDN offers developers a global solution for rapidly delivering high-bandwidth content to users by caching their content at strategically placed physical nodes across the world. Can also accelerate dynamic content, which cannot be cached, by leveraging various network optimizations using CDN PoPs. For example, route optimization to bypass Border Gateway Protocol (BGP).
    The benefits of using Azure CDN to deliver web site assets include:
    • Better performance and improved user experience for end users, especially when using applications in which multiple round-trips are required to load content.
    • Large scaling to better handle instantaneous high loads, such as the start of a product launch event.
    • Distribution of user requests and serving of content directly from edge servers so that less traffic is sent to the origin server.
  6. Azure Advisor can generate a list of VMs that are NOT protected by Azure Backup. Can view a list of VMs that are protected by Azure Backup by viewing the Protected Items in the Azure Recovery Services Vault.

  7. If the security recommendations are implemented, the company's secure score will increase.
    There is no requirement to implement the security recommendations provided by Azure Advisor. The recommendations are just that, 'recommendations'. They are not 'requirements'.

  8. Azure Monitor can be used to automatically send an alert if an administrator stops an Azure VM.

  9. Azure Machine Learning uses past trainings to provide predictions that have high probability.
    Machine learning is a data science technique that allows computers to use existing data to forecast future behaviors, outcomes, and trends. By using machine learning, computers learn without being explicitly programmed.
    Forecasts or predictions from machine learning can make apps and devices smarter. For example, when shopping online, machine learning helps recommend other products a customer might want based on what they bought.

  10. The Azure portal offers three ways to create a VM, using the:
    1. graphical portal.
    2. Azure Cloud Shell using Bash.
    3. Azure Cloud Shell using PowerShell.
  11. The command can be run:
    1. in the Azure Cloud Shell (both PowerShell and Bash).
      The Azure Cloud Shell is a free interactive shell. It has common Azure tools preinstalled and configured for use with the account. Can also launch Cloud Shell in a separate browser tab by going to
    2. from PowerShell or the command prompt if the Azure CLI is installed.
  12. Previously, the Azure CLI (or x-plat CLI) was the only option for managing Azure subscriptions and resources from the command-line on Linux and macOS. Now with the open-source and cross-platform release of PowerShell, all Azure resources can be managed from Windows, Linux, and macOS using the Azure CLI, the Azure portal, and Azure PowerShell cmdlets.
    The Azure portal runs in a web browser so can be used in either OS.

  13. Microsoft Compliance Manager is a feature in the Microsoft 365 compliance center that helps manage organization's compliance requirements with greater ease and convenience. Compliance Manager can help throughout compliance journey, from taking inventory of data protection risks to managing the complexities of implementing controls, staying current with regulations and certifications, and reporting to auditors.

  14. Azure Resource Manager templates provide a common platform for deploying objects to a cloud infrastructure and for implementing consistency across the Azure environment.
    Azure policies are used to define rules for what can be deployed and how it should be deployed. Whilst this can help in ensuring consistency, Azure policies do not provide the common platform for deploying objects to a cloud infrastructure.

  15. Azure Bot Services provides a digital online assistant that provides speech support.
    Bots provide an experience that feels less like using a computer and more like dealing with a person - or at least an intelligent robot. They can be used to shift simple, repetitive tasks, such as taking a dinner reservation or gathering profile information, on to automated systems that may no longer require direct human intervention. Users converse with a bot using text, interactive cards, and speech. A bot interaction can be a quick question and answer, or it can be a sophisticated conversation that intelligently provides access to services.

  16. Azure VMs provide OS virtualization. One of several types of on-demand, scalable computing resources that Azure offers. Typically, choose a VM when need more control over the computing environment than the other choices offer.

    Azure Container Instances provide portable environments for virtualized applications. Offers the fastest and simplest way to run a container in Azure, without having to manage any VMs and without having to adopt a higher-level service. Can start containers in Azure in seconds, without the need to provision and manage VMs.
    Containers are becoming the preferred way to package, deploy, and manage cloud applications. Offer significant startup benefits over VMs.


  1. To implement IaC for Azure solutions, use ARM templates. The template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for the project. The template uses declarative syntax, which lets state what intend to deploy without having to write the sequence of programming commands to create it. In the template, specify the resources to deploy and the properties for those resources.
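    As a sketch of that declarative style, here is a minimal ARM-template-shaped JSON document built in Python (the storage account name, API version, and SKU are illustrative assumptions, not from the source):

    ```python
    import json

    # Minimal ARM template sketch: declares WHAT to deploy (one storage account),
    # not the sequence of commands to create it.
    template = {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [
            {
                "type": "Microsoft.Storage/storageAccounts",
                "apiVersion": "2021-04-01",   # assumed API version
                "name": "examplestorage001",  # hypothetical name
                "location": "[resourceGroup().location]",
                "sku": {"name": "Standard_LRS"},
                "kind": "StorageV2",
            }
        ],
    }

    print(json.dumps(template, indent=2))
    ```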

  2. Can use Azure Cost Management to:
    • view costs associated with management / resource groups.
    • view the usage of VMs during the last three months.
  3. The VMs can be moved to another subscription; however, there might be an impact on related resources and a need to reconfigure them.

  4. To monitor threats by using sensors, would use Azure Advanced Threat Protection (ATP).
    Azure ATP is a cloud-based security solution that leverages on-premises AD signals to identify, detect, and investigate advanced threats, compromised identities, and malicious insider actions directed at organization.
    Sensors are software packages installed on servers to upload information to Azure ATP.

  5. To enforce MFA based on a condition, would use Azure AD Identity Protection.
    Azure AD Identity Protection helps manage the roll-out of Azure Multi-Factor Authentication (MFA) registration by configuring a Conditional Access policy to require MFA registration no matter what modern authentication app users are signing in to.

  6. Solutions for controlling network access in Azure are:
    1. Azure firewall
    2. NSG works like a firewall. Can attach an NSG to a virtual network and/or individual subnets within the virtual network. Can also attach an NSG to a network interface assigned to a VM. Can use multiple NSG within a virtual network to restrict traffic between resources such as VMs and subnets.
      To make the VM accessible over HTTP, need to add a rule to the NSG to allow the connection to the VM on port 80.
  7. The Just-In-Time (JIT) VM access feature in Azure Security Center / Microsoft Defender for Cloud allows to lock down inbound traffic to Azure VMs. This reduces exposure to attacks while providing easy access when need to connect to a VM.

  8. Can associate zero or one NSG with each virtual network subnet and network interface in a VM. The same NSG can be associated to as many subnets and network interfaces as needed.

  9. Can restrict traffic to multiple virtual networks with a single Azure Firewall.
    Azure Firewall is a managed, cloud-based network security service that protects Azure Virtual Network resources. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. Uses a static public IP address for virtual network resources allowing outside firewalls to identify traffic originating from virtual network.
    Can centrally create, enforce, and log application and network connectivity policies across subscriptions and virtual networks.

  10. Key Vault is designed to store configuration secrets for server apps. It's not intended for storing data belonging to app's users, and it shouldn't be used in the client-side part of an app.
    Azure Key Vault is a secure store for storing various types of sensitive information. If store the administrative credentials in the Key Vault, there is no need to store the administrative credentials as plain text in the deployment scripts. All information stored in the Key Vault is encrypted.
    Can be used to securely store and tightly control access to tokens, passwords, certificates, API keys, and other secrets.
    Secrets and keys are safeguarded by Azure, using industry-standard algorithms, key lengths, and Hardware Security Modules (HSMs). The HSMs used are Federal Information Processing Standards (FIPS) 140-2 Level 2 validated.
    Access to a key vault requires proper authentication and authorization before a caller (user or application) can get access. Authentication establishes the identity of the caller, while authorization determines the operations that they are allowed to perform.

  11. When create a VM, the default setting is to create a NSG attached to the network interface assigned to a VM. To allow connections to TCP port 8080 on the VM, need to add a rule to the NSG.
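    A sketch of what such an NSG rule looks like, using the securityRules shape from ARM templates (the rule name and priority value are assumptions for illustration):

    ```python
    # NSG security rule (ARM securityRules shape) allowing inbound TCP 8080.
    allow_8080_rule = {
        "name": "Allow-TCP-8080",  # hypothetical rule name
        "properties": {
            "priority": 310,       # assumed; lower number = higher priority (100-4096)
            "direction": "Inbound",
            "access": "Allow",
            "protocol": "Tcp",
            "sourceAddressPrefix": "*",
            "sourcePortRange": "*",
            "destinationAddressPrefix": "*",
            "destinationPortRange": "8080",
        },
    }

    print(allow_8080_rule["properties"]["destinationPortRange"])  # 8080
    ```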

  12. Can create custom Azure roles to control access to resources.

    A user account can be assigned to multiple Azure roles.

  13. A resource group can have the Owner role assigned to multiple users.

  14. DDoS is a form of attack on a network resource. A DDoS protection plan is used to protect against DDoS attacks.

  15. Microsoft (Azure) Sentinel is a scalable, cloud-native, Security Information Event Management (SIEM) and Security Orchestration Automated Response (SOAR) solution. Deliver intelligent security analytics and threat intelligence across the enterprise, providing a single solution for alert detection, threat visibility, proactive hunting, and threat response. Comes with a number of connectors for Microsoft solutions, available out of the box and providing real-time integration, including Microsoft 365 Defender (formerly Microsoft Threat Protection) solutions, and Microsoft 365 sources, including Office 365, Azure AD, Microsoft Defender for Identity (formerly Azure ATP), and Microsoft Cloud App Security, and more.

  16. The Azure Firewall service complements NSG functionality. Together, they provide better 'defense-in-depth' network security. NSGs provide distributed network layer traffic filtering to limit traffic to resources within virtual networks in each subscription. Azure Firewall is a fully stateful, centralized network firewall as-a-service, which provides network- and application-level protection across different subscriptions and virtual networks. Provides inbound protection for non-HTTP/S protocols (for example, RDP, SSH, FTP), outbound network-level protection for all ports and protocols, and application-level protection for outbound HTTP/S.

    The Web Application Firewall (WAF) is a feature of Application Gateway that provides centralized inbound protection of web applications from common exploits and vulnerabilities.
  • Code:ที่ใช้บน-public-cloud-ของ-kbtg-58bb151812bb
  • Code:และ-kubernetes-คืออะไร-ทำไมคนถึงพูดกันเยอะจังนะ-part-1-fe318ba5b36
  • Rancher:


We'll try out AWS Certified Developer - Associate on ProEn Any Cloud. Ubuntu starts at ฿50/month (if running 24 hours a day).
They offer a 14-day free trial, and also support lecturers/students with an extra ฿5,000 if you sign up with @your_university; just contact the admin.
  • Legacy on-premises hardware is failing > Agility - spin up new marketing campaigns, social and progressive applications (IoT, Big Data, etc.)
  • AWS/Azure Pilots are messy and not best practice
  • Performance issues > Fast performance for all field workers
  • Lack of HA and Scalability - hardware is expensive > Low cost & scalable base infrastructure
  • Staff skills/capabilities struggle, little automation > Automation - low base staffing costs
  • Global expansion concerns - cost for new infrastructure > Able to deploy into new regions quickly when required
  • C:\Users\pws_a>aws --version
  • >aws configure --profile plawansai
    AWS Access Key ID [None]: AKIAU63FGO7J7PATYW
    AWS Secret Access Key [None]: m9wHfwfo3j/ebp/Pta7LAL4NkJi+VSofxCjTif
    Default region name [None]: ap-southeast-1
    Default output format [None]:
  • aws s3 ls --profile plawansai

Get to know the Network Physical Layer, both wired (LAN/UTP, STP, Optical Fiber Cable/OFC) and wireless, all the way to Transceiver (SFP) modules.



The stress command lets us load-test the CPU, Memory, IO, and HDD workload on Linux.
  • $ sudo yum install stress -y
  • stress > run without arguments to verify the application is installed
  • stress -c 2


  • High-Availability (HA) - Minimise any outages
  • Fault-Tolerance (FT) - Operate Through Faults
  • Disaster Recovery (DR) - Used when these don't work

  • An AMI has Public Access, Owner only, and Specific AWS Accounts permission options.

  • EC2 is an example of IaaS service model.

  • An AWS Public Service is located in the AWS Public Zone; anyone can connect, but permissions are required to access the service.

  • An AWS Private Service is located in a VPC, accessible from the VPC it is located in, and accessible from other VPCs or on-premises networks as long as private networking is configured.

  • Simple Storage Service (S3) is an AWS Public Service, an object storage system, and Buckets can store an unlimited amount of data.

  • A CloudFormation Logical Resource is a resource defined in a CloudFormation template.

  • A CloudFormation Physical Resource is the actual resource created when a CloudFormation stack is created from the template.

  • High Availability (HA) is a system which maximises uptime.

  • A Fault Tolerant (FT) system is a system which allows failure and can continue operating without disruption.


  • The DNS Root Servers are managed by 12 large organisations.

  • The DNS Root Zone is managed by IANA.

  • DNS A Record Type converts a HOST into an IPv4 Address.

  • DNS NS Record type is how the root zone delegates control of .org to the .org registry.

  • Registry type of organisation maintains the zones for a Top-Level Domain (TLD) (e.g. .ORG).

  • Registrar type of organisation has relationships with the .org TLD zone manager allowing domain registration.

  • The number of subnets in a default VPC equals the number of AZs in the region the VPC is located in.

  • The IP CIDR of a default VPC is 172.31.0.0/16.
  • A limit to the number of IAM users in an AWS Account is 5,000 per account.

  • Features of IAM groups are Admin groupings of IAM Users and Can hold Identity Permissions.

  • Within AWS policies, an explicit Deny always takes priority.

  • Permissions and Trust Policy are assigned to an IAM Role.

  • IAM Roles can be assumed and When assumed - temporary credentials are generated.

  • AWS Organizations features are Consolidated billing, AWS Account restrictions using SCPs, and Account organisation via Organizational Units (OUs).

  • Functionality provided by CloudTrail is Account wide Auditing and API Logging.

  • Can restrict what the Account Root User can do using SCPs if AWS Organizations is used, but not in the management account.

  • Role Switching is Assuming a role in another AWS account to access that account via the console UI.

  • Valid IAM Policy types are AWS Managed, Customer Managed, and Inline Policies.


  • static_website_hosting
  • s3_versioning
  • object_encryption
  • replication
  • presigned_url

  • SSE-S3 encryption is AES256.
    AWS perform the encryption operations and handle key creation & management.
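  A hedged sketch of requesting SSE-S3 on upload: these are the parameters that would be passed to boto3's `s3_client.put_object(**put_kwargs)`; the bucket and key names are hypothetical and no network call is made here.

    ```python
    # Request parameters for an S3 PUT using SSE-S3 (AES256).
    # AWS creates and manages the encryption keys; the caller only sets the header.
    put_kwargs = {
        "Bucket": "example-bucket",        # hypothetical bucket
        "Key": "report.pdf",               # hypothetical key
        "Body": b"...file bytes...",
        "ServerSideEncryption": "AES256",  # SSE-S3
    }

    print(put_kwargs["ServerSideEncryption"])  # AES256
    ```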

  • S3 One Zone-IA is the most cost-effective storage class, suitable for data which is easily replaced.

  • Glacier is the cheapest S3 storage class for important data which needs to be retained for long periods and is rarely accessed.


  • Required steps to allow an S3 bucket to operate as a website:
    • Upload web files
    • Set index and error documents
    • Enable static web hosting
    • Disable block public access settings
    • Add a bucket policy
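  The 'Add a bucket policy' step above needs a policy that allows public reads; a minimal sketch (the bucket name is a placeholder):

    ```python
    import json

    # Bucket policy granting anonymous read access to every object in the bucket,
    # as required for S3 static website hosting.
    def public_read_policy(bucket_name):
        return {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "PublicReadGetObject",
                    "Effect": "Allow",
                    "Principal": "*",
                    "Action": "s3:GetObject",
                    "Resource": f"arn:aws:s3:::{bucket_name}/*",
                }
            ],
        }

    print(json.dumps(public_read_policy("example-website-bucket"), indent=2))
    ```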


  • Intelligent-Tiering Object class in S3 is ideal for uncertain access and low admin overhead.

  • S3 Lifecycle policies allow objects' storage classes to be changed and objects to be deleted automatically.

  • The default limit of the number of S3 buckets in an AWS account is 100.

  • An object in S3 can be large (max object size = 5 TB), and there is no limit on the number of objects in a bucket.

  • S3 Versioning is required to be enabled to allow Cross-Region Replication (CRR).

  • Resource Policies can be used to grant external accounts access to an S3 bucket.

  • SSE-KMS encryption allows for role separation where an S3 Full Admin might not be able to decrypt objects.

  • SSE-C encryption is where AWS perform encryption operations but DON'T hold any keys.

  • When an object is deleted in a bucket with versioning enabled, A delete marker is added.


  • The Managed NAT Gateway charges a fee for every hour that it’s running.
  • In DynamoDB, if create a table and request 10 units of write capacity and 200 units of read capacity of provisioned throughput, would be charged in US East (Northern Virginia) Region for $0.00065 x 10 + $0.00013 x 200 = $0.03 per hour.
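  The hourly-cost arithmetic above, reproduced as a quick check (prices as quoted in the note):

    ```python
    # DynamoDB provisioned-throughput hourly cost, US East prices as quoted above.
    write_units, read_units = 10, 200
    write_price, read_price = 0.00065, 0.00013  # $ per unit-hour

    hourly_cost = write_price * write_units + read_price * read_units
    print(f"${hourly_cost:.4f} per hour")  # $0.0325, i.e. about $0.03
    ```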

  • In DynamoDB, SSDs help achieve design goals of predictable low-latency response times for storing and accessing data at any scale. The high I/O performance of SSDs also enables to serve high-scale request workloads cost efficiently, and to pass this efficiency along in low request pricing.

  • A user is planning to make a mobile game which can be played online or offline and will be hosted on EC2. The user wants to ensure that if someone breaks the highest score or they achieve some milestone they can inform all their colleagues through email. The AWS service that helps achieve this goal is Amazon Simple Email Service (SES).
    Amazon SES is a highly scalable and cost-effective email-sending service for businesses and developers. It integrates with other AWS services, making it easy to send emails from applications that are hosted on AWS.

  • In DynamoDB, to get a detailed listing of secondary indexes on a table, can use the DescribeTable action. It returns information about the table, including the current status of the table, when it was created, the primary key schema, and any indexes on the table.

  • An EC2 instance, once terminated, may be available in the AWS console for a while after termination. The user can find the details about the termination from the description tab under the label State transition reason. If the instance is still running, there will be no reason listed. If the user has explicitly stopped or terminated the instance, the reason will be 'User initiated shutdown'.

  • AWS RDS is a managed database service offered by AWS, which makes it easy to set up, operate, and scale a relational database for structured data in the cloud.

  • In Amazon SNS, there is the ability to send notification messages directly to apps on mobile devices. Notification messages sent to a mobile endpoint can appear in the mobile app as message alerts, badge updates, or even sound alerts. It supports Google Cloud Messaging for Android (GCM), Apple Push Notification Service (APNS), and Amazon Device Messaging (ADM).

  • AWS RDS provides a managed DB platform, which offers features such as automated backup, software patch management, automated failure detection and recovery. For scaling, the user needs to plan it, but it takes only a few clicks.

  • Amazon EC2 supports two types of block devices:
    • Instance store volumes (virtual devices whose underlying hardware is physically attached to the host computer for the instance)
    • EBS volumes (remote storage devices)
  • AWS Elastic Beanstalk and AWS CloudFormation are designed to complement each other.
    Elastic Beanstalk provides an environment to easily develop and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience to manage the lifecycle of applications.
    CloudFormation is a convenient deployment mechanism for a broad range of AWS resources. Can design and script custom resources. It supports the infrastructure needs of many different types of applications such as existing enterprise / legacy applications, applications built using a variety of AWS resources and container-based solutions (including those built using Elastic Beanstalk). It introduces two new concepts: the template, a JSON-format, text-based file that describes all the AWS resources needed to deploy and run the application, and the stack, the set of AWS resources that are created and managed as a single unit when CloudFormation instantiates a template.
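    A minimal sketch of the template/stack split: the template below is declarative text; creating a stack from it would instantiate the bucket (the logical ID is hypothetical):

    ```python
    import json

    # Smallest useful CloudFormation template: one logical resource.
    # Creating a stack from this template creates the physical S3 bucket.
    cfn_template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "One S3 bucket (illustrative)",
        "Resources": {
            "ExampleBucket": {              # logical resource ID (hypothetical)
                "Type": "AWS::S3::Bucket"   # physical bucket created on stack create
            }
        },
    }

    print(json.dumps(cfn_template, indent=2))
    ```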

  • The SQS message retention period is configurable and can be set anywhere from 1 minute to 2 weeks. The default is 4 days and once the message retention limit is reached messages will be automatically deleted. The option for longer message retention provides greater flexibility to allow for longer intervals between message production and consumption.
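  The retention bounds above in seconds, the unit used by SQS's MessageRetentionPeriod queue attribute:

    ```python
    # SQS MessageRetentionPeriod is expressed in seconds.
    MINUTE = 60
    DAY = 24 * 60 * 60

    retention_min = 1 * MINUTE    # 60       (1 minute)
    retention_default = 4 * DAY   # 345600   (the default, 4 days)
    retention_max = 14 * DAY      # 1209600  (2 weeks)

    print(retention_min, retention_default, retention_max)
    ```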

  • When the user makes any changes to the RDS security group the rule status will be authorizing for some time until the changes are applied to all instances that the group is connected with. Once the changes are propagated the rule status will change to authorized.

  • In AWS Elastic Beanstalk, can update deployed application, even while it is part of a running environment. For a Java application, can also use the AWS Toolkit for Eclipse to update deployed application.

  • When a user is launching an ELB with VPC, has to select the options, such as subnet and security group, before selecting the instances that are part of that subnet.


  • A VPC provides an isolated network service.

  • Default and custom VPCs:
    • Regions can only have 1 default VPC and many custom VPCs
    • Custom VPCs allow flexible network configuration, the default VPC has a fixed scheme
    • Some services can behave oddly if the default VPC doesn't exist
    • Default VPCs can be recreated
  • The valid sizes of a VPC are Max /16 & Min /28.
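  A quick check of what those prefix lengths mean in address counts:

    ```python
    # Total IPv4 addresses for a given CIDR prefix length.
    def addresses(prefix_length):
        return 2 ** (32 - prefix_length)

    print(addresses(16))  # 65536 addresses - the largest allowed VPC (/16)
    print(addresses(28))  # 16 addresses - the smallest allowed VPC (/28)
    ```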

  • DynamoDB has seamless scalability with no table size limits and unlimited storage, so there is no need to worry about managing storage on the host or provisioning more drives as data requirements change.

  • If the user is launching RDS with Multi AZ, the user cannot choose the Availability Zone; it is selected automatically by RDS instead.

  • The user cannot authorize an Amazon EC2 security group if it is in a different AWS Region than the RDS DB instance. The user can authorize an IP range or specify an Amazon EC2 security group in the same region that refers to an IP address in another region.

  • A stack is the set of AWS resources that are created and managed as a single unit when AWS CloudFormation instantiates a template.

  • When use the AWS Elastic Beanstalk console to deploy a new application or an application version, will need to upload a source bundle. Source bundle must meet the following requirements:
    • Consist of a single .zip file or .war file
    • Not exceed 512 MB
    • Not include a parent folder or top-level directory (subdirectories are fine)
  • To host a static website, the user needs to configure an Amazon S3 bucket for website hosting and then upload the website contents to the bucket. The website is then available at the region-specific website endpoint of the bucket.

  • Every Amazon SQS queue has a configurable visibility timeout. For the designated amount of time after a message is read from a queue, it will not be visible to any other reader. As long as the amount of time that it takes to process the message is less than the visibility timeout, every message will be processed and deleted. In the event that the component processing the message fails or becomes unavailable, the message will again become visible to any component reading the queue once the visibility timeout ends. This allows to have many components all reading messages from the same queue, with each working to process different messages.
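  The visibility-timeout behaviour described above can be modelled with a toy in-memory queue (a simplified sketch, not the SQS API; the 30-second timeout is an example value):

    ```python
    # Toy model of the SQS visibility timeout: after a receive, a message stays
    # hidden until its visibility window ends; if it was not deleted by then,
    # another reader can receive it again.
    class ToyQueue:
        def __init__(self, visibility_timeout):
            self.visibility_timeout = visibility_timeout
            self.messages = {}  # msg_id -> (body, visible_at)

        def send(self, msg_id, body):
            self.messages[msg_id] = (body, 0.0)

        def receive(self, now):
            for msg_id, (body, visible_at) in self.messages.items():
                if now >= visible_at:
                    # hide the message for the visibility timeout window
                    self.messages[msg_id] = (body, now + self.visibility_timeout)
                    return msg_id, body
            return None

        def delete(self, msg_id):
            self.messages.pop(msg_id, None)

    q = ToyQueue(visibility_timeout=30)
    q.send("m1", "payload")
    print(q.receive(now=0))   # ('m1', 'payload') - first reader gets it
    print(q.receive(now=10))  # None - hidden during the timeout
    print(q.receive(now=31))  # ('m1', 'payload') - reappears: processing failed
    ```

  Deleting the message before the timeout expires is what marks it as successfully processed.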

  • An application running on Amazon EC2 opens connections to an Amazon RDS SQL Server database. The dev does not want to store the username and password for the database in the code. The dev would also like to automatically rotate the credentials. The MOST secure way to store and access the database credentials is Use AWS Secrets Manager to store the credentials. Retrieve the credentials from Secrets Manager as needed.

  • Amazon S3 has the following structure: s3://BUCKET/FOLDERNAME/. The S3 best practice that would optimize performance with thousands of PUT requests each second to a single bucket is to prefix folder names with random hex hashes; for example, s3://BUCKET/34b7-FOLDERNAME/
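    A minimal sketch of generating such a hashed prefix (bucket, folder, and file names are placeholders; here the hash is derived from the folder name with MD5):

    ```python
    import hashlib

    def hashed_key(bucket, folder, filename):
        """Prefix the folder with a short hex hash so keys spread across
        S3 index partitions. Bucket/folder names here are placeholders."""
        prefix = hashlib.md5(folder.encode()).hexdigest()[:4]
        return f"s3://{bucket}/{prefix}-{folder}/{filename}"

    key = hashed_key("BUCKET", "FOLDERNAME", "report.pdf")
    ```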

  • Company D is currently hosting their corporate site in an Amazon S3 bucket with Static Website Hosting enabled. Currently, when visitors go to the site, the index.html page is returned. Company D now would like a new page, welcome.html, to be returned when a visitor enters the site in the browser. The following steps will allow Company D to meet this requirement:
    • Upload an html page named welcome.html to their S3 bucket
    • Set the Index Document property to welcome.html
  • A dev created a Lambda function for a web application backend. When testing the Lambda function from the AWS Lambda console, the dev can see that the function is being executed, but no log data is generated in Amazon CloudWatch Logs, even after several minutes. The likely cause is that the execution role for the Lambda function is missing permissions to write log data to CloudWatch Logs.

  • A dev is creating a new application that will be accessed by users through an API created using Amazon API Gateway. The users need to be authenticated by a 3rd-party Security Assertion Markup Language (SAML) identity provider. Once authenticated, users will need access to other AWS services such as Amazon S3 and DynamoDB. These requirements can be met by using Amazon Cognito identity pools with a SAML identity provider as one of the authentication providers.

  • The Lambda function below is being called through an API using Amazon API Gateway. The average execution time for the Lambda function is about 1 second. The pseudocode for the Lambda function is as shown below:
    include "3rd party encryption module"
    include "math module"
    lambda_handler(event, context)
        rds_host = "rds-instance-endpoint"
        name = db_username
        password = db_password
        db_name = db_name
        # Connect to the RDS database
        conn = RDSConnection(rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
        # Perform some processing reading data from the RDS database
        # Code Block
        # Code Block
        # Code Block
    To improve the performance of this Lambda function without increasing the cost of the solution, these actions can be taken:
    • Package only the modules the Lambda function requires
    • Move the initialization of the Amazon RDS connection outside of the handler function
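    The second fix can be sketched as follows: initialize the expensive connection once per container, outside the handler, so warm invocations reuse it. `connect_to_rds` is a stand-in for a real database client; all names here are illustrative.

    ```python
    # Count how many times the "connection" is created, to show reuse.
    CONNECT_CALLS = 0

    def connect_to_rds():
        global CONNECT_CALLS
        CONNECT_CALLS += 1
        return object()  # placeholder for a live connection

    conn = connect_to_rds()  # runs once per cold start

    def lambda_handler(event, context):
        # conn is reused here; no per-invocation connection setup cost
        return {"rows": 0}

    lambda_handler({}, None)  # two warm invocations...
    lambda_handler({}, None)  # ...share the single connection
    ```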
  • You can secure data at rest on an EBS volume by using an encrypted file system on top of the EBS volume.

  • A dev is building an application integrating an Amazon API Gateway with an AWS Lambda function. When calling the API, the dev receives the following error: Thu Dec 04 02:14:00 UTC 2018 : Method completed with status: 502. To resolve the error, the dev should change the format of the Lambda function response to the API call.
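    A common cause of a 502 from API Gateway is a Lambda proxy integration response that is not in the required shape. A minimal well-formed response looks like this (the handler body is illustrative):

    ```python
    import json

    def lambda_handler(event, context):
        # For a Lambda proxy integration, API Gateway requires this shape;
        # a missing statusCode or a non-string body commonly produces a 502.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": "ok"}),
            "isBase64Encoded": False,
        }

    resp = lambda_handler({}, None)
    ```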

  • A Dev wants to make the log data of an application running on an EC2 instance available to system administrators. This can be enabled in Amazon CloudWatch by installing the Amazon CloudWatch Logs agent on the EC2 instance that the application is running on.

  • A dev is migrating a legacy monolithic application to AWS and wants to convert the application's internal processes to microservices. The application's internal processes communicate through internal asynchronous messaging. Occasionally, messages need to be reprocessed by multiple microservices. To meet these requirements, the dev should migrate the application's internal messaging to Amazon Simple Queue Service (Amazon SQS) queues to communicate messages between the microservices.

  • A legacy service has an XML-based SOAP interface. The Dev wants to expose the functionality of the service to external clients with the Amazon API Gateway. The technique that will accomplish this is to create a RESTful API with API Gateway and transform the incoming JSON into a valid XML message for the SOAP interface using mapping templates.

  • An organization is using Amazon API Gateway to provide a public API called "Survey" for collecting user feedback posts about its products. The Survey API has "DEV" and "PROD" stages and consists of one resource, "/feedback", which allows users to retrieve/create/update single feedback posts.
    A version-controlled Swagger file is used to define a new API that retrieves multiple feedback posts. To add the new API resource "/listFeedbackForProduct", the dev makes changes to the Swagger file defining the API, uploads the file to the organization's version control system, and uses the API Gateway Import API feature to apply the changes to the Survey API. After a successful import, the dev runs tests against the DEV stage and finds that the resource "/listFeedbackForProduct" is not available. The MOST likely reason for the resource not being available is that, even though the Swagger import was successful, resource creation failed afterwards.


  • If SecureCRT cannot remote into a private EC2 instance through a public EC2 instance, enable the SSH agent forwarding function first:
    Global Options > SSH2 > Enable OpenSSH agent forwarding

  • An AZ can have many subnets, a subnet is in one AZ.

  • On AWS, 5 IP addresses are reserved in every subnet, as follows:
    1. First address = Network Address
    2. Second address = VPC Router
    3. Third address = DNS Server
    4. Fourth address = Future Use
    5. Last address = Broadcast Address
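    The five reserved addresses can be computed for any CIDR with Python's ipaddress module (the example subnet 10.0.0.0/24 is made up):

    ```python
    import ipaddress

    def reserved_addresses(cidr):
        """The five addresses AWS reserves in a subnet of the given CIDR."""
        addrs = list(ipaddress.ip_network(cidr))
        return {
            "network": str(addrs[0]),
            "vpc_router": str(addrs[1]),
            "dns": str(addrs[2]),
            "future_use": str(addrs[3]),
            "broadcast": str(addrs[-1]),
        }

    r = reserved_addresses("10.0.0.0/24")  # example CIDR
    ```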
  • An Internet Gateway (IGW) is Highly Available (HA) by default; it is attached to a VPC.

  • Security Groups (SGs) can only ALLOW traffic, while Network Access Control List (NACL) can ALLOW and DENY traffic.

  • NAT allows private IPv4 instances outgoing access to the internet.

  • A subnet can have only one route table attached; a route table can be associated with multiple subnets.



  • Easy Amazon EC2 Instance Comparison

  • Code:

    service "EC2_INSTANCE_CONNECT"

  • A dev has written an Amazon Kinesis Data Streams application. As usage grows and traffic increases over time, the application is regularly receiving ProvisionedThroughputExceededException error messages. To resolve the error, the dev should:
    • Implement exponential backoff on the GetRecords call and the PutRecords call.
    • Increase the number of shards in the data stream.
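    Exponential backoff can be sketched as follows (this variant adds full jitter, a common refinement; the base, cap, and seed values are illustrative):

    ```python
    import random

    def backoff_delays(max_attempts, base=0.1, cap=5.0, seed=42):
        """Exponential backoff with full jitter; base/cap/seed are illustrative."""
        rng = random.Random(seed)
        delays = []
        for attempt in range(max_attempts):
            ceiling = min(cap, base * (2 ** attempt))  # 0.1, 0.2, 0.4, ... up to cap
            delays.append(rng.uniform(0, ceiling))     # sleep this long, then retry
        return delays

    delays = backoff_delays(6)
    ```

    Each failed GetRecords/PutRecords call would sleep for the next delay before retrying, so retries spread out instead of hammering the stream.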
  • When a Dev tries to run an AWS CodeBuild project, it raises an error because the length of all environment variables exceeds the limit for the combined maximum of characters. The recommended solution is Use AWS Systems Manager Parameter Store to store large numbers of environment variables.

  • A Dev is writing transactions into a DynamoDB table called "SystemUpdates" that has 5 write capacity units. The highest read throughput option is Strongly consistent reads of 5 read capacity units reading items that are 4 KB in size.
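    The arithmetic behind that answer: one RCU is one strongly consistent read per second of an item up to 4 KB, or two eventually consistent reads per second. A quick sketch (the function name is made up):

    ```python
    import math

    def read_throughput_kb(rcu, item_kb, strongly_consistent=True):
        """KB/s readable: 1 RCU = one 4 KB strongly consistent read per second,
        or two 4 KB eventually consistent reads per second."""
        units_per_item = math.ceil(item_kb / 4)
        if not strongly_consistent:
            units_per_item /= 2
        return (rcu / units_per_item) * item_kb

    strong_4kb = read_throughput_kb(5, 4)                               # 20 KB/s
    eventual_4kb = read_throughput_kb(5, 4, strongly_consistent=False)  # 40 KB/s
    ```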

  • A dev wants the ability to roll back to a previous version of an AWS Lambda function in the event of errors caused by a new deployment. The dev can achieve this with MINIMAL impact on users by changing the application to use an alias that points to the current version, deploying the new version of the code, and updating the alias to use the newly deployed version. If too many errors are encountered, point the alias back to the previous version.

  • A Dev has written a serverless application using multiple AWS services. The business logic is written as a Lambda function which has dependencies on third-party libraries. The Lambda function endpoints will be exposed using Amazon API Gateway. The Lambda function will write the information to Amazon DynamoDB. The Dev is ready to deploy the application but must have the ability to roll back. This deployment can be automated by using syntax conforming to the Serverless Application Model (SAM) in the AWS CloudFormation template to define the Lambda function resource.
    If you use AWS SAM to create a serverless application, it comes built in with AWS CodeDeploy to help ensure safe Lambda deployments. With just a few lines of configuration, AWS SAM does the following:
    • Deploys new versions of the Lambda function, and automatically creates aliases that point to the new version.
    • Gradually shifts customer traffic to the new version until you are satisfied that it is working as expected, or rolls back the update.
    • Defines pre-traffic and post-traffic test functions to verify that the newly deployed code is configured correctly and the application operates as expected.
    • Rolls back the deployment if CloudWatch alarms are triggered.
  • A Dev is migrating an on-premises application to AWS. The application currently takes user uploads and saves them to a local directory on the server. All uploads must be saved and made immediately available to all instances in an Auto Scaling group. The approach that meets these requirements is to use Amazon S3 and rearchitect the application so all uploads are placed in S3.

  • A dev at a company writes an AWS CloudFormation template. The template refers to subnets that were created by a separate AWS CloudFormation template that the company's network team wrote. When the dev attempts to launch the stack for the first time, the launch fails. Template coding mistakes that could have caused this failure:
    • The dev's template does not use the ImportValue intrinsic function to refer to the subnets
    • The network team's template does not export the subnets in the Outputs section
  • Policy evaluation logic in AWS Identity and Access Management, by default, all requests are implicitly denied. (Alternatively, by default, the AWS account root user has full access.) An explicit allow in an identity-based or resource-based policy overrides this default. If a permissions boundary, Organizations SCP, or session policy is present, it might override the allow with an implicit deny. An explicit deny in any policy overrides any allows.
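    The evaluation ordering can be sketched as a toy model (pure Python; it ignores permissions boundaries, SCPs, and session policies for brevity):

    ```python
    def evaluate(policies):
        """Toy model of IAM policy evaluation: an explicit Deny always wins,
        then an explicit Allow, otherwise the request falls through to the
        implicit deny."""
        effects = [p["Effect"] for p in policies]
        if "Deny" in effects:
            return "Deny"    # explicit deny overrides any allows
        if "Allow" in effects:
            return "Allow"   # explicit allow overrides the default
        return "Deny"        # implicit deny: no policy matched

    no_policy = evaluate([])
    allowed = evaluate([{"Effect": "Allow"}])
    denied = evaluate([{"Effect": "Allow"}, {"Effect": "Deny"}])
    ```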

  • A company requires objects that are stored in Amazon S3 to be encrypted. The company is currently using Server-Side Encryption with AWS KMS managed encryption keys (SSE-KMS). A dev needs to optimize the cost-effectiveness of the encryption mechanism without negatively affecting performance. The dev should Configure the S3 bucket to use an S3 Bucket Key for SSE-KMS.

  • A company is launching a new web application in the AWS Cloud. The company's dev team is using AWS Elastic Beanstalk for deployment and maintenance. According to the company's change management process, the dev team must evaluate changes for a specific time period before completing the rollout. Should use Immutable deployment policy.

  • A Dev has been asked to create an AWS Lambda function that is triggered any time updates are made to items in an Amazon DynamoDB table. The function has been created, and appropriate permissions have been added to the Lambda execution role. Amazon DynamoDB Streams have been enabled for the table, but the function is still not being triggered. The option that would enable DynamoDB table updates to trigger the Lambda function is to configure event source mapping for the Lambda function.


  • A Dev is storing sensitive documents in Amazon S3 that will require encryption at rest. The encryption keys must be rotated annually, at least. The easiest way to achieve this is Use AWS KMS with automatic key rotation.
    You can use the same techniques to view and manage the CMKs in a custom key store that you use for CMKs in the AWS KMS key store. You can control access with IAM and key policies, create tags and aliases, enable and disable the CMKs, and schedule key deletion. You can use the CMKs for cryptographic operations and use them with AWS services that integrate with AWS KMS. However, you cannot enable automatic key rotation and cannot import key material into a CMK in a custom key store.
    You can choose to have AWS KMS automatically rotate CMKs every year, provided that those keys were generated within AWS KMS HSMs. Automatic key rotation is not supported for imported keys, asymmetric keys, or keys generated in an AWS CloudHSM cluster using the AWS KMS custom key store feature. If you choose to import keys to AWS KMS, use asymmetric keys, or use a custom key store, you can manually rotate them by creating a new CMK and mapping the existing key alias from the old CMK to the new CMK.

  • A company provides APIs as a service and commits to a Service Level Agreement (SLA) with all its users. To comply with each SLA, the company should Enable default throttling limits for each stage after deploying the APIs.

  • A company is building a compute-intensive application that will run on a fleet of Amazon EC2 instances. The application uses attached Amazon EBS disks for storing data. The application will process sensitive information and all the data must be encrypted. A dev should Configure the Amazon EC2 instance fleet to use encrypted EBS volumes for storing data to ensure the data is encrypted on disk without impacting performance.

  • A dev is using AWS CodeDeploy to deploy an application running on Amazon EC2. The dev wants to change the file permissions for a specific deployment file. The dev should use the AfterInstall lifecycle event to meet this requirement.

  • A video-hosting website has two types of members: those who pay a fee and those who do not. Each video upload places a message in Amazon SQS. A fleet of Amazon EC2 instances polls Amazon SQS and processes each video. The dev needs to ensure that the videos uploaded by the paying members are processed first. The dev can use two SQS queues, one for paying members and one for free members, and have the EC2 instances poll the paying members' queue first, falling back to the free members' queue only when the priority queue is empty (SQS does not support per-message priorities within a single queue).

  • A Dev is trying to deploy a serverless application using AWS CodeDeploy. The application was updated and needs to be redeployed. The Dev needs to update the appspec.yml file to push that change through CodeDeploy.

  • The DescribeImages EC2 API call is used to retrieve a list of Amazon Machine Images (AMIs).

  • A company wants to migrate its web application to AWS and leverage Auto Scaling to handle workloads. The Solution Architect determined that the best metric for an Auto Scaling event is the number of concurrent users. Based on this information, the Dev should use a custom Amazon CloudWatch metric for concurrent users to auto scale based on concurrent users.

  • A dev is building a new application that uses an Amazon DynamoDB table. The specification states that all items that are older than 48 hours must be removed. The solution that meets this requirement is to create a new attribute that has the Number data type, enable TTL on the DynamoDB table for this attribute, and, in the application code, set the value of this attribute to the current timestamp plus 48 hours for each new item that is inserted.
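    Computing that TTL value is a one-liner: DynamoDB TTL expects the attribute to be a Number holding an epoch-seconds timestamp. A sketch (the `pk` and `expireAt` attribute names are made up):

    ```python
    import time

    def item_with_ttl(pk, now=None, hours=48):
        """Build an item whose expireAt attribute is epoch seconds (a Number);
        DynamoDB TTL removes the item after that time passes."""
        now = int(time.time()) if now is None else now
        return {"pk": pk, "expireAt": now + hours * 3600}

    item = item_with_ttl("order-1", now=1_700_000_000)
    ```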

  • A dev converted an existing program to an AWS Lambda function in the console. The program runs properly on a local laptop, but shows an 'Unable to import module' error when tested in the Lambda console. The error can be fixed by installing the missing modules locally and including them in the function's deployment package (the Lambda filesystem is read-only except for /tmp, so installing modules at runtime under /usr/lib will not work).

  • A company has an AWS Lambda function that runs hourly, reads log files that are stored in Amazon S3, and forwards alerts to Amazon Simple Notification Service (Amazon SNS) topics based on content. A dev wants to add a custom metric to the Lambda function to track the number of alerts of each type for each run. The dev needs to log this information in Amazon CloudWatch in a metric that is named Lambda/AlertCounts. To meet this requirement with the LEAST operational overhead, the dev should add a call to the PutMetricData API operation, passing the alert counts in the MetricData member with the namespace 'Lambda/AlertCounts'.

  • A dev wants to insert a record into an Amazon DynamoDB table as soon as a new file is added to an Amazon S3 bucket. The dev should configure an S3 event to invoke a Lambda function that inserts records into DynamoDB.

  • The release process workflow of an application requires a manual approval before the code is deployed into the production environment. The BEST way to achieve this using AWS CodePipeline is Use an approval action in a stage.

  • When writing a Lambda function, the benefit of instantiating AWS clients outside the scope of the handler is Taking advantage of connection re-use.

  • A company is adding stored-value gift card capability to its highly popular casual gaming website. Users need to be able to trade this value for other users' items on the platform. This requires both users' records to be updated as a single transaction, or both users' records to be completely rolled back. AWS database options that can provide the transactional capability required for this new feature:
    • Amazon Aurora MySQL with operations made within a transaction block
    • Amazon DynamoDB with reads and writes made using Transact operations
  • A Dev is building a web application that uses Amazon API Gateway to expose an AWS Lambda function to process requests from clients. During testing, the Dev notices that the API Gateway times out even though the Lambda function finishes under the set time limit. The API Gateway metrics in Amazon CloudWatch that can help the Dev troubleshoot the issue are IntegrationLatency and Latency.

  • Queries to an Amazon DynamoDB table are consuming a large amount of read capacity. The table has a significant number of large attributes. The application does not need all of the attribute data. DynamoDB costs can be minimized while maximizing application performance by creating a global secondary index with a minimum set of projected attributes, so queries read only the data the application actually needs.

  • A company is migrating a single-server, on-premises web application to AWS. The company intends to use multiple servers behind an Elastic Load Balancer (ELB) to balance the load, and will also store session data in memory on the web server. The company does not want to lose that session data if a server fails or goes offline, and it wants to minimize users' downtime. To most effectively reduce downtime and make users' session data more fault tolerant, the company should move session data to an Amazon ElastiCache for Redis cluster.

  • A company needs to distribute firmware updates to its customers around the world. The service that will allow easy and secure control of access to the downloads at the lowest cost is Amazon CloudFront with signed URLs for Amazon S3.

  • Multiple dev teams are working on a project to migrate a monolithic application to a microservices-based application running on AWS Lambda. The teams need a way to centrally manage code that is shared across multiple functions. The approach that requires the LEAST maintenance is to build and publish the shared component as a Lambda layer that each team's functions reference (downloading components from an S3 bucket would have to be re-implemented and kept in sync by every team).


Iterative App Modernization Workshop:


  • There are three main states of EC2 instances: Running, Stopped, and Terminated.

  • Instance store volumes are temporary (ephemeral) storage; data stored on them can be lost when an EC2 instance stops and starts or if a hardware failure occurs.

  • If the AZ in which an EC2 instance is running fails, the instance will remain unavailable at least until the AZ recovers.

  • An EC2 instance cannot be migrated between AZs, but an AMI can be created from an instance and used to provision a clone in another AZ.

  • io1 and io2 EBS volumes suit workloads where maximum consistent IOPS is a priority and the data is important.

  • io1 volumes let you specify performance requirements (IOPS) independent of volume size.

  • A dev team uses AWS Elastic Beanstalk for application deployment. The team has configured the application version lifecycle policy to limit the number of application versions to 25. However, even with the lifecycle policy, the source bundle is deleted from the Amazon S3 source bucket. To keep the source code in the S3 bucket, the dev should enable the option to retain the source bundle in Amazon S3 in the Elastic Beanstalk application version lifecycle settings.

  • An application running on an Amazon Linux EC2 instance needs to manage the AWS infrastructure. The EC2 instance can be configured to make AWS API calls securely by specifying a role for the EC2 instance with the necessary privileges.

  • A corporate web application is deployed within an Amazon VPC, and is connected to the corporate data center via IPSec VPN. The application must authenticate against the on-premises LDAP server. Once authenticated, logged-in users can only access an S3 keyspace specific to the user. Approaches that can satisfy the objectives:
    • The application authenticates against LDAP, and retrieves the name of an IAM role associated with the user. The application then calls the IAM Security Token Service to assume that IAM Role. The application can use the temporary credentials to access the appropriate S3 bucket.
    • Develop an identity broker which authenticates against LDAP, and then calls IAM Security Token Service to get IAM federated user credentials. The application calls the identity broker to get IAM federated user credentials with access to the appropriate S3 bucket.
  • In a move toward using microservices, a company's Management team has asked all Dev teams to build their services so that API requests depend only on that service's data store. One team is building a Payment service which has its own database; the service needs data that originates in the Accounts database. Both are using Amazon DynamoDB. The approach that results in the simplest, decoupled, and most reliable method to get near-real-time updates from the Accounts database is to use Amazon DynamoDB Streams to deliver all changes from the Accounts database to the Payments database.

  • You are inserting 1,000 new items every second into a DynamoDB table. Once an hour these items are analyzed and then are no longer needed. You need to minimize provisioned throughput, storage, and API calls. Given these requirements, the most efficient way to manage these items after the analysis is to delete the table and create a new table each hour.

  • A dev team consists of 10 team members. Similar to a home directory for each team member the manager wants to grant access to user-specific folders in an Amazon S3 bucket. For the team member with the username 'TeamMemberX', the snippet of the IAM policy looks like this:
    {"Sid": "AllowS3ActionToFolders", "Effect": "Allow",
     "Action": ["s3:*"],
     "Resource": ["arn:aws:s3:::companyname/home/TeamMemberX/*"]}
    Instead of creating distinct policies for each team member, the approach that makes this policy snippet generic for all team members is to use IAM policy variables.
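    With a policy variable, one statement covers every team member. The sketch below shows the generic statement and simulates locally the substitution IAM performs at evaluation time (the local `.replace` call only illustrates the idea; IAM does this server-side):

    ```python
    # One generic statement using the IAM policy variable ${aws:username};
    # IAM substitutes the caller's username when the policy is evaluated.
    POLICY = {
        "Sid": "AllowS3ActionToFolders",
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": ["arn:aws:s3:::companyname/home/${aws:username}/*"],
    }

    # Simulate what IAM does when TeamMemberX makes a request:
    rendered = POLICY["Resource"][0].replace("${aws:username}", "TeamMemberX")
    ```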

  • An application stores payroll information nightly in DynamoDB for a large number of employees across hundreds of offices. Item attributes consist of individual name, office identifier, and cumulative daily hours.
    Managers run reports for ranges of names working in their office. One query is 'Return all items in this office for names starting with A through E'. The table configuration that will result in the lowest impact on provisioned throughput for this query is to configure the table to have a range index on the name attribute and a hash index on the office identifier.
    Partition key and sort key - Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key.
    DynamoDB uses the partition key value as input to an internal hash function. The output from the hash function determines the partition (physical storage internal to DynamoDB) in which the item will be stored. All items with the same partition key value are stored together, in sorted order by sort key value.

  • A company is using Amazon API Gateway to manage its public-facing API. The CISO requires that the APIs be used by test account users only. The MOST secure way to restrict API access to users of this particular AWS account is Usage plans.

  • A company runs an e-commerce website that uses Amazon DynamoDB where pricing for items is dynamically updated in real time. At any given time, multiple updates may occur simultaneously for pricing information on a particular product. This is causing the original editor's changes to be overwritten without a proper review process. DynamoDB Conditional writes option should be selected to prevent this overwriting.
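    A conditional write rejects an update when the item no longer matches what the caller last read, which is how the lost-update problem above is prevented. A toy optimistic-locking model (pure Python, not the DynamoDB API; a version attribute stands in for the ConditionExpression):

    ```python
    class PriceTable:
        """Toy model of a DynamoDB conditional write via a version attribute:
        the update only succeeds if the caller saw the current version."""

        def __init__(self):
            self.items = {}  # sku -> {"price": float, "version": int}

        def put(self, sku, price):
            self.items[sku] = {"price": price, "version": 1}

        def conditional_update(self, sku, new_price, expected_version):
            item = self.items[sku]
            if item["version"] != expected_version:  # condition check failed
                return False
            item["price"] = new_price
            item["version"] += 1
            return True

    t = PriceTable()
    t.put("sku-1", 10.0)
    first = t.conditional_update("sku-1", 12.0, expected_version=1)   # succeeds
    second = t.conditional_update("sku-1", 9.0, expected_version=1)   # stale, rejected
    ```

    The second writer is forced to re-read the item and re-apply its change, so no edit is silently overwritten.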

  • A company maintains a REST service using Amazon API Gateway and the API Gateway native API key validation. The company recently launched a new registration page, which allows users to sign up for the service. The registration page creates a new API key using CreateApiKey and sends the new key to the user. When the user attempts to call the API using this key, the user receives a 403 Forbidden error. Existing users are unaffected and can still call the API. The code update that will grant these new users access to the API is calling the CreateUsagePlanKey method to associate the newly created API key with the correct usage plan.

  • A dev must extend an existing application that is based on the AWS Serverless Application Model (AWS SAM). The dev has used the AWS SAM CLI to create the project. The project contains different AWS Lambda functions. The dev must use the sam package and sam deploy commands to redeploy the AWS SAM application.

  • A company wants to containerize an existing three-tier web application and deploy it to Amazon ECS Fargate. The app uses session data to keep track of user activities. The BEST user experience would be achieved by enabling session stickiness in the existing Network Load Balancer (NLB) and managing the session data in the container.

  • Custom libraries can be utilized in AWS Lambda by packaging them in the function's deployment package or attaching them as a Lambda layer (the managed runtime itself cannot be modified).


  • Switch roles easily, without signing out and back in or using a private window:
    1. Create an IAM Role in the destination account and note its name for later.
    2. Copy the destination Account ID.
    3. Switch role:
      Account: the ID from step 2
      Role: the name from step 1




  • A gp2 volume can be attached to only one instance at a time.

  • EBS volumes can be attached only to instances in the same AZ as the volume.

  • Instance store volumes should be used for maximum I/O with replaceable, temporary data.

  • The EC2 On-Demand billing model suits a short-term workload that needs the cheapest pricing but cannot tolerate interruption.

  • The Metric Filter feature of CloudWatch Logs allows you to generate an alarm based on patterns within a Log Group.

  • Permissions and Retention are defined on a CloudWatch Log Group.

  • Subscription + Lambda can be used together for real time processing of CloudWatch Logs.

  • Alarm states within CloudWatch are OK, Alarm, and Insufficient Data.

  • Types of information logged by VPC flow logs are Packet SRC and DST, Date and Time, Ports, and Allow or Deny.

  • AWS CloudTrail generates logs of API calls made against the account.

  • The structure within CloudWatch logs is Log Groups -> Log Streams -> Log Events.

  • The MINIMUM options required to log processes running within an EC2 instance are an EC2 instance role with CloudWatch permissions and the CWAgent installed (with configuration).

  • The options enabled via installing the CWAgent are Injecting Detailed and Custom metrics from an EC2 instance into CloudWatch and Logging system, application and custom logs into CloudWatch logs.

  • The locations for VPC Flow Logging are ENI, Subnet, and VPC.

  • An IAM role is attached to an Amazon EC2 instance that explicitly denies access to all Amazon S3 API actions. The EC2 instance credentials file specifies the IAM access key and secret access key, which allow full administrative access. Given that multiple modes of IAM access are present for this EC2 instance, the EC2 instance will be able to perform all actions on any S3 bucket, because the credentials file takes precedence over the instance profile in the credential provider chain.

  • A Dev has developed a web application and wants to deploy it quickly on a Tomcat server on AWS. The dev wants to avoid having to manage the underlying infrastructure. The easiest way to deploy the application based on these requirements is AWS Elastic Beanstalk.

  • An application on AWS is using third-party APIs. The Dev needs to monitor API errors in the code, and wants to receive notifications if failures go above a set threshold value. The Dev can achieve these requirements by Publish a custom metric on Amazon CloudWatch and use Amazon SNS for notification.

  • A dev is implementing authentication and authorization for an application. The dev needs to ensure that the user credentials are never exposed. The dev should use Amazon Cognito to configure a user pool and have the application authenticate and authorize users through the Cognito API, so the application never handles or stores raw credentials itself.

  • An Amazon RDS database instance is used by many applications to look up historical data. The query rate is relatively constant. When the historical data is updated each day, the resulting write traffic slows the read query performance and affects all application users. Can eliminate the performance impact on application users by Create an RDS Read Replica and direct all read traffic to the replica.

  • A dev supports an application that accesses data in an Amazon DynamoDB table. One of the item attributes is expirationDate, in timestamp format. The application uses this attribute to find items, archive them, and remove them from the table based on the timestamp value. The application will be decommissioned soon, and the dev must find another way to implement this functionality with the least amount of code to write. The solution that meets these requirements is to enable TTL on the DynamoDB table, specify expirationDate as the TTL attribute, enable DynamoDB Streams on the table, and configure the stream to invoke an AWS Lambda function that archives the items as TTL removes them.
  • A nightly batch job loads 1 million new records into a DynamoDB table. The records are only needed for one hour, and the table needs to be empty by the next night's batch job. The MOST efficient and cost-effective method to provide an empty table is Create and then delete the table after the task has completed.

  • A dev has launched an application that calls an API by way of Amazon API Gateway. It offers information that changes several times a day, but is not updated in real time. The application has become so popular that the API endpoint is overloaded and traffic to the endpoint must be reduced. The dev can enable API caching in Amazon API Gateway to address the performance issues.

  • An organization must store thousands of sensitive audio and video files in an Amazon S3 bucket. Organizational security policies require that all data written to this bucket be encrypted; this can be enforced by configuring an Amazon S3 bucket policy to prevent the upload of objects that do not contain the x-amz-server-side-encryption header.
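    A sketch of such a bucket policy statement, here as a Python dict (the bucket name is a placeholder). The Null condition matches requests where the header is absent, so any PutObject without x-amz-server-side-encryption is denied:

    ```python
    # Deny any PutObject request that omits the encryption header.
    DENY_UNENCRYPTED_PUTS = {
        "Sid": "DenyUnencryptedObjectUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
        "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
    }
    ```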

  • A company's fleet of Amazon EC2 instances receives data from millions of users through an API. The servers batch the data, add an object for each user, and upload the objects to an S3 bucket to ensure high access rates. The object attributes are Customer ID, Server ID, TS-Server (TimeStamp and Server ID), the size of the object, and a timestamp. A dev wants to find all the objects for a given user collected during a specified time range. The dev can achieve this by executing an AWS Lambda function in response to the S3 object creation events that creates an Amazon DynamoDB record for every object, with the Customer ID as the partition key and TS-Server as the sort key, and then retrieving all the records using the Customer ID and TS-Server attributes.

  • A dev is designing an AWS Lambda function that creates temporary files that are less than 10 MB during execution. The temporary files will be accessed and modified multiple times during execution. The dev has no need to save or retrieve these files in the future. The temporary files should be stored in the /tmp directory.

  • A Dev writes an AWS Lambda function and uploads the code in a .ZIP file to Amazon S3. The Dev makes changes to the code and uploads a new .ZIP file to Amazon S3. However, Lambda executes the earlier code. The Dev can fix this in the LEAST disruptive way by Call the update-function-code API.