Cloud Computing

  1. A company uses AWS Organizations with a single OU named Production to manage multiple accounts. All accounts are members of the Production OU. Admins use deny list SCPs in the root of the organization to manage access to restricted services.
    The company recently acquired a new business unit and invited the new unit's existing AWS account to the organization. Once onboarded, the admins of the new business unit discovered that they are not able to update existing AWS Config rules to meet the company's policies.
    To allow the admins to make changes while continuing to enforce the current policies without introducing additional long-term maintenance, create a temporary new OU named Onboarding for the new account. Apply an SCP to the Onboarding OU that allows AWS Config actions. Move the organization's root SCP to the Production OU. Move the new account to the Production OU when the AWS Config adjustments are complete.
    An SCP at a lower level can't add a permission after it is blocked by an SCP at a higher level. SCPs can only filter; they never add permissions.
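    As a rough illustration, a minimal boto3 sketch of this onboarding flow; the root, OU, policy, and account IDs are hypothetical placeholders:
    Code:
    import boto3

    org = boto3.client('organizations')

    # Create the temporary Onboarding OU under the organization root.
    ou = org.create_organizational_unit(ParentId='r-examplerootid', Name='Onboarding')
    ou_id = ou['OrganizationalUnit']['Id']

    # Attach an SCP that allows AWS Config actions to the Onboarding OU.
    org.attach_policy(PolicyId='p-exampleconfigallow', TargetId=ou_id)

    # Move the new account into the Onboarding OU.
    org.move_account(AccountId='111122223333',
                     SourceParentId='r-examplerootid',
                     DestinationParentId=ou_id)

    # Later, when the AWS Config adjustments are done, move it to Production.
    org.move_account(AccountId='111122223333',
                     SourceParentId=ou_id,
                     DestinationParentId='ou-example-production')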

  2. A company is running a two-tier web-based app in an on-premises data center. The app layer consists of a single server running a stateful app. The app connects to a PostgreSQL DB running on a separate server. The app's user base is expected to grow significantly, so the company is migrating the app and DB to AWS. The solution will use Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing (ELB).
    The solution that will provide a consistent user experience and allow the app and DB tiers to scale is to enable Aurora Auto Scaling for Aurora Replicas and use an Application Load Balancer (ALB) with round-robin routing and sticky sessions enabled.
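    A hedged boto3 sketch of the two settings; the ARNs, names, and capacity limits are hypothetical:
    Code:
    import boto3

    # Enable sticky sessions on the ALB target group.
    elbv2 = boto3.client('elbv2')
    elbv2.modify_target_group_attributes(
        TargetGroupArn='arn:aws:elasticloadbalancing:...:targetgroup/app/123',
        Attributes=[
            {'Key': 'stickiness.enabled', 'Value': 'true'},
            {'Key': 'stickiness.type', 'Value': 'lb_cookie'},
        ],
    )

    # Register the Aurora cluster for replica auto scaling and add a
    # target-tracking policy on reader CPU utilization.
    aas = boto3.client('application-autoscaling')
    aas.register_scalable_target(
        ServiceNamespace='rds',
        ResourceId='cluster:my-aurora-cluster',
        ScalableDimension='rds:cluster:ReadReplicaCount',
        MinCapacity=1,
        MaxCapacity=8,
    )
    aas.put_scaling_policy(
        PolicyName='aurora-replica-cpu',
        ServiceNamespace='rds',
        ResourceId='cluster:my-aurora-cluster',
        ScalableDimension='rds:cluster:ReadReplicaCount',
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 60.0,
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'RDSReaderAverageCPUUtilization'
            },
        },
    )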

  3. A company uses a service to collect metadata from apps that the company hosts on premises. Consumer devices such as TVs and internet radios access the apps. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers.
    The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the apps into a set of AWS Lambda functions.
    The solution that meets these requirements is to create an Amazon CloudFront distribution for the metadata service and create an ALB. Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.
    A CloudFront function is faster and more lightweight than Lambda@Edge.
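    A sketch of creating such a CloudFront function with boto3; the JavaScript handler, header name, and User-Agent value below are hypothetical stand-ins for the company's logic:
    Code:
    import boto3

    js_code = """
    function handler(event) {
        var request = event.request;
        var response = event.response;
        var ua = request.headers['user-agent'] ? request.headers['user-agent'].value : '';
        if (ua.indexOf('LegacyRadio') !== -1) {
            // Drop the header that older devices cannot handle.
            delete response.headers['x-unsupported-header'];
        }
        return response;
    }
    """

    cf = boto3.client('cloudfront')
    cf.create_function(
        Name='strip-legacy-headers',
        FunctionConfig={'Comment': 'Remove headers old devices reject',
                        'Runtime': 'cloudfront-js-1.0'},
        FunctionCode=js_code.encode('utf-8'),
    )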

  4. A retail company needs to provide a series of data files to another company, which is its business partner. These files are saved in an Amazon S3 bucket under Account A, which belongs to the retail company. The business partner company wants one of its IAM users, User_DataProcessor, to access the files from its own AWS account (Account B).
    Steps the companies must take so that User_DataProcessor can access the S3 bucket successfully are:
    • In Account A, set the S3 bucket policy to allow access to the bucket from the IAM user in Account B. This is done by adding a statement to the bucket policy that allows the IAM user in Account B to perform the necessary actions (GetObject and ListBucket) on the bucket and its contents. Note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the objects, so the Resource list includes both.
      Code:
      {
        "Effect": "Allow",
        "Principal": {
          "AWS": "arn:aws:iam::AccountB:user/User_DataProcessor"
        },
        "Action": [
          "s3:GetObject",
          "s3:ListBucket"
        ],
        "Resource": [
          "arn:aws:s3:::AccountABucketName/*"
        ]
      }
    • In Account B, create an IAM policy that allows the IAM user (User_DataProcessor) to perform the necessary actions (GetObject and ListBucket) on the S3 bucket and its contents. The policy should reference both the bucket ARN and the object ARN along with the actions that the user is allowed to perform.
      Code:
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject",
          "s3:ListBucket"
        ],
        "Resource": "arn:aws:s3:::AccountABucketName/*"
      }
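      Once both policies are in place, User_DataProcessor can read the files. A minimal hedged check with boto3; the key name is hypothetical:
      Code:
      # Run with User_DataProcessor's credentials in Account B.
      import boto3

      s3 = boto3.client('s3')

      # Allowed by s3:ListBucket on the bucket ARN.
      s3.list_objects_v2(Bucket='AccountABucketName')

      # Allowed by s3:GetObject on the object ARN.
      obj = s3.get_object(Bucket='AccountABucketName', Key='data/file1.csv')
      print(obj['Body'].read()[:100])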
      .
  5. A company is running a traditional web app on Amazon EC2 instances. The company needs to refactor the app as microservices that run on containers. Separate versions of the app exist in two distinct environments: production and testing. Load for the app is variable, but the minimum load and the maximum load are known. A solution architect needs to design the updated app with a serverless architecture that minimizes operational complexity.
    The solution that will meet these requirements MOST cost-effectively is to upload the container images to Amazon ECR, configure two auto scaled Amazon ECS clusters with the Fargate launch type to handle the expected load, deploy tasks from the ECR images, and configure two separate ALBs to direct traffic to the ECS clusters (a hedged sketch of the service creation follows the link below).
    EKS requires paying a fixed monthly cost of approximately $70 for the control plane, plus additional fees to run supporting services.
    Code:
    https://www.densify.com/eks-best-practices/aws-ecs-vs-eks
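    A hedged boto3 sketch of creating one of the Fargate services behind its ALB target group; the cluster, task definition, subnet, and ARN values are hypothetical:
    Code:
    import boto3

    ecs = boto3.client('ecs')
    ecs.create_service(
        cluster='production',
        serviceName='web-app',
        taskDefinition='web-app:1',   # references the image pushed to ECR
        desiredCount=2,
        launchType='FARGATE',
        networkConfiguration={'awsvpcConfiguration': {
            'subnets': ['subnet-0example'],
            'securityGroups': ['sg-0example'],
            'assignPublicIp': 'DISABLED',
        }},
        loadBalancers=[{
            'targetGroupArn': 'arn:aws:elasticloadbalancing:...:targetgroup/prod/abc',
            'containerName': 'web-app',
            'containerPort': 80,
        }],
    )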

  6. A company has a multi-tier web app that runs on a fleet of Amazon EC2 instances behind an ALB. The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum and maximum values for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the app's data. The DB instance has a read replica in the backup Region. The app presents an endpoint to end users by using an Amazon Route 53 record.
    The company needs to reduce its RTO to less than 15 mins by giving the app the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy.
    To meet these requirements, a solution architect should recommend creating an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web app and sends an Amazon Simple Notification Service (SNS) notification to the Lambda function when the health check status is unhealthy. Update the app's Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.
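    A hedged sketch of what the failover Lambda might look like; the identifiers are hypothetical placeholders:
    Code:
    import boto3

    rds = boto3.client('rds')
    autoscaling = boto3.client('autoscaling')

    def handler(event, context):
        # Promote the cross-Region read replica to a standalone primary.
        rds.promote_read_replica(DBInstanceIdentifier='app-db-replica')

        # Scale the dormant Auto Scaling group up from 0/0 to serve traffic.
        autoscaling.update_auto_scaling_group(
            AutoScalingGroupName='app-asg-backup',
            MinSize=2,
            MaxSize=6,
            DesiredCapacity=2,
        )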

  7. A company is hosting a critical app on a single Amazon EC2 instance. The app uses an Amazon ElastiCache for Redis single-node cluster for an in-memory data store. The app uses an Amazon RDS for MariaDB DB instance for a relational DB. For the app to function, each piece of the infrastructure must be healthy and must be in an active state.
    A solutions architect needs to improve the app's architecture so that the infrastructure can automatically recover from failure with the least possible downtime.
    The steps that will meet these requirements are:
    • Use an ELB to distribute traffic across multiple EC2 instances; this helps ensure that the app remains available if one of the instances becomes unavailable. Ensure that the EC2 instances are part of an Auto Scaling group that has a minimum capacity of two instances.
    • Modify the DB instance to create a multi-AZ deployment that extends across two AZs.
    • Create a replication group for the ElastiCache for Redis cluster. Enable multi-AZ on the cluster.
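    A hedged boto3 sketch of the last step, creating a Multi-AZ Redis replication group; the values are hypothetical:
    Code:
    import boto3

    elasticache = boto3.client('elasticache')
    elasticache.create_replication_group(
        ReplicationGroupId='app-redis',
        ReplicationGroupDescription='Multi-AZ Redis for the app',
        Engine='redis',
        CacheNodeType='cache.t3.medium',
        NumCacheClusters=2,            # one primary + one replica in another AZ
        AutomaticFailoverEnabled=True,
        MultiAZEnabled=True,
    )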

  1. A retail company is operating its ecommerce app on AWS. The app runs on Amazon EC2 instances behind an ALB. The company uses an Amazon RDS DB instance as the DB backend. Amazon CloudFront is configured with one origin that points to the ALB. Static content is cached. Amazon Route 53 is used to host all public zones.
    After an update of the app, the ALB occasionally returns a 502 (Bad Gateway) status code. The root cause is malformed HTTP headers that are returned to the ALB. The webpage returns successfully when a solutions architect reloads it immediately after the error occurs.
    While the company is working on the problem, the solution architect needs to provide a custom error page instead of the standard ALB error page to visitors.
    The steps that will meet this requirement with the LEAST amount of operational overhead are:
    • Create an Amazon S3 bucket. Configure the S3 bucket to host a static webpage. Upload the custom error pages to Amazon S3.
    • Add a custom error response by configuring a CloudFront custom error page (an illustration of the settings follows this list). Modify DNS records to point to a publicly accessible webpage.
      .
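    A hedged illustration of the CloudFront custom error response settings; this Python fragment mirrors the shape of the CustomErrorResponses structure that would be merged into the distribution config (the path and TTL are hypothetical):
    Code:
    # Fragment to merge into the config returned by get_distribution_config
    # before calling update_distribution.
    custom_error_responses = {
        'Quantity': 1,
        'Items': [{
            'ErrorCode': 502,
            'ResponsePagePath': '/errors/502.html',  # page hosted in the S3 bucket
            'ResponseCode': '502',
            'ErrorCachingMinTTL': 30,
        }],
    }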
  2. A company has many AWS accounts and uses AWS Organizations to manage all of them. A solution architect must implement a solution that the company can use to share a common network across multiple accounts.
    The company's infrastructure team has a dedicated infrastructure account that has a VPC. The infrastructure team must use this account to manage the network.
    Individual accounts cannot have the ability to manage their own networks. However, individual accounts must be able to create AWS resources within subnets.
    Actions the solution architect should perform to meet these requirements are:
    • Enable resource sharing from the AWS Organizations management account. This enables the organization to share resources across accounts.
    • Create a resource share in AWS Resource Access Manager in the infrastructure account. Select the specific AWS Organizations OU that will use the shared network. Select each subnet to associate with the resource share. This allows the infrastructure account to share specific subnets with the other accounts in the organization so that those accounts can create resources within the shared subnets without having to manage their own networks (see the sketch after this list).
      .
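    A hedged boto3 sketch of the sharing steps; the OU ARN and subnet ARN are hypothetical placeholders:
    Code:
    import boto3

    ram = boto3.client('ram')

    # One-time: enable sharing with AWS Organizations (management account).
    ram.enable_sharing_with_aws_organization()

    # From the infrastructure account: share specific subnets with an OU.
    ram.create_resource_share(
        name='shared-network',
        resourceArns=['arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0example'],
        principals=['arn:aws:organizations::111122223333:ou/o-exampleorg/ou-example'],
        allowExternalPrincipals=False,
    )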
  3. A company wants to use a third-party Software-as-a-Service (SaaS) app. The third-party SaaS app is consumed through several API calls. The third-party SaaS app also runs on AWS inside a VPC.
    The company will consume the third-party SaaS app from inside a VPC. The company has internal security policies that mandate the use of private connectivity that does not traverse the internet. No resources that run in the company VPC are allowed to be accessed from outside the company's VPC. All permissions must conform to the principles of least privilege.
    The solution that meets these requirements is to create an AWS PrivateLink interface VPC endpoint and connect it to the endpoint service that the third-party SaaS app provides. Create a security group to limit access to the endpoint and associate the security group with the endpoint.
    AWS PrivateLink creates a secure and private connection between the company's VPC and the third-party SaaS app VPC, without the traffic traversing the internet. The use of a security group and limiting access to the endpoint service conforms to the principle of least privilege.
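    A hedged boto3 sketch of creating the interface endpoint; all IDs and the service name are hypothetical:
    Code:
    import boto3

    ec2 = boto3.client('ec2')
    ec2.create_vpc_endpoint(
        VpcEndpointType='Interface',
        VpcId='vpc-0example',
        ServiceName='com.amazonaws.vpce.us-east-1.vpce-svc-0example',  # from the SaaS vendor
        SubnetIds=['subnet-0example'],
        SecurityGroupIds=['sg-0endpointonly'],  # limits access per least privilege
    )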

  4. A company needs to implement a patching process for its servers. The on-premises servers and Amazon EC2 instances use a variety of tools to perform patching. Management requires a single report showing the patch status of all the servers and instances.
    To meet these requirements, a solution architect should use AWS Systems Manager to manage patches on the on-premises servers and EC2 instances and use Systems Manager to generate patch compliance reports.
    AWS OpsWorks and Amazon Inspector are not specifically designed for patch management.
    Using an Amazon EventBridge rule and AWS X-Ray to generate patch compliance reports is not practical, as those services are not designed for patch management reporting.

  5. A company is running an app on several Amazon EC2 instances in an Auto Scaling group behind an ALB. The load on the app varies throughout the day, and EC2 instances are scaled in and out on a regular basis. Log files from the EC2 instances are copied to a central Amazon S3 bucket every 15 mins. The security team discovered that log files are missing from some of the terminated EC2 instances.
    To ensure that log files are copied to the central S3 bucket from the terminated EC2 instances, create an AWS Systems Manager document with a script that copies log files to Amazon S3. Create an Auto Scaling lifecycle hook and an Amazon EventBridge rule to detect lifecycle events from the Auto Scaling group. Invoke an AWS Lambda function on the autoscaling:EC2_INSTANCE_TERMINATING transition to call the Systems Manager SendCommand API operation to run the document and copy the log files, then send CONTINUE to the Auto Scaling group so the instance can terminate.
    This will use the Auto Scaling lifecycle hook to execute the script that copies log files to S3, before the instance is terminated.
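    A hedged sketch of the Lambda handler; the document and hook names are hypothetical, and a production version would also wait for the command to finish before continuing:
    Code:
    import boto3

    ssm = boto3.client('ssm')
    autoscaling = boto3.client('autoscaling')

    def handler(event, context):
        detail = event['detail']
        instance_id = detail['EC2InstanceId']

        # Run the Systems Manager document that copies the log files to S3.
        ssm.send_command(
            InstanceIds=[instance_id],
            DocumentName='CopyLogsToS3',
        )

        # Let the Auto Scaling group finish terminating the instance.
        autoscaling.complete_lifecycle_action(
            LifecycleHookName=detail['LifecycleHookName'],
            AutoScalingGroupName=detail['AutoScalingGroupName'],
            LifecycleActionResult='CONTINUE',
            InstanceId=instance_id,
        )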

  6. A company is using multiple AWS accounts. The DNS records are stored in a private hosted zone for Amazon Route 53 in Account A. The company's apps and DBs are running in Account B.
    A solution architect will deploy a two-tier app in a new VPC. To simplify the configuration, the db.example.com CNAME record set for the Amazon RDS endpoint was created in a private hosted zone for Amazon Route 53.
    During deployment, the app failed to start. Troubleshooting revealed that db.example.com is not resolvable on the Amazon EC2 instance. The solution architect confirmed that the record set was created correctly in Route 53.
    To resolve this issue:
    • Create an authorization to associate the private hosted zone in Account A with the new VPC in Account B.
    • Associate the new VPC in Account B with the hosted zone in Account A, then delete the association authorization in Account A (see the sketch after this list).
      .
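    A hedged boto3 sketch of the three calls; the zone and VPC IDs are hypothetical, the first and last calls run with Account A credentials, and the middle call runs with Account B credentials:
    Code:
    import boto3

    # Account A: authorize the association.
    r53_a = boto3.client('route53')  # Account A credentials
    r53_a.create_vpc_association_authorization(
        HostedZoneId='Z0EXAMPLE',
        VPC={'VPCRegion': 'us-east-1', 'VPCId': 'vpc-0accountb'},
    )

    # Account B: associate the VPC with the hosted zone.
    r53_b = boto3.client('route53')  # Account B credentials
    r53_b.associate_vpc_with_hosted_zone(
        HostedZoneId='Z0EXAMPLE',
        VPC={'VPCRegion': 'us-east-1', 'VPCId': 'vpc-0accountb'},
    )

    # Account A: clean up the authorization after the association succeeds.
    r53_a.delete_vpc_association_authorization(
        HostedZoneId='Z0EXAMPLE',
        VPC={'VPCRegion': 'us-east-1', 'VPCId': 'vpc-0accountb'},
    )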
  7. A company used Amazon EC2 instances to deploy a web fleet to host a blog site. The EC2 instances are behind an ALB and are configured in an Auto Scaling group. The web app stores all blog content on an Amazon EFS volume.
    The company recently added a feature for bloggers to add video to their posts, attracting 10 times the previous user traffic. At peak times of day, users report buffering and timeout issues while attempting to reach the site or watch videos.
    The MOST cost-efficient and scalable deployment that will resolve the issues for users is to configure an Amazon CloudFront distribution, point the distribution to an S3 bucket, and migrate the videos from EFS to Amazon S3.
    Amazon CloudFront is a Content Delivery Network (CDN) that can deliver content to users with low latency and high data transfer speeds. By configuring a CloudFront distribution for the blog site and pointing it at an S3 bucket, the videos can be cached at edge locations closer to users, reducing buffering and timeout issues. Additionally, S3 is designed for scalable storage and can handle high levels of user traffic. Migrating the videos from EFS to S3 also improves the performance and scalability of the website.

  1. A company with global offices has a single 1 Gbps AWS Direct Connect connection to a single AWS Region. The company's on-premises network uses the connection to communicate with the company's resources in the AWS Cloud. The connection has a single private virtual interface that connects to a single VPC.
    A solution architect must implement a solution that adds a redundant Direct Connect connection in the same Region. The solution also must provide connectivity to other Regions through the same pair of Direct Connect connections as the company expands into other Regions.
    The solution that meets these requirements is to provision a Direct Connect gateway (GW), delete the existing private virtual interface from the existing connection, and create the second Direct Connect connection. Create a new private virtual interface on each connection and connect both private virtual interfaces to the Direct Connect gateway. Connect the Direct Connect gateway to the single VPC.

    [Diagram: two Direct Connect connections attached to a Direct Connect gateway, which connects to the VPC]


    The Direct Connect gateway allows you to connect multiple VPCs and on-premises networks, in different accounts and different Regions, to a single Direct Connect connection. It also provides automatic failover and routing capabilities.

  2. A company has a web app that allows users to upload short videos. The videos are stored on Amazon EBS volumes and analyzed by custom recognition software for categorization.
    The website contains static content that has variable traffic with peaks in certain months. The architecture consists of Amazon EC2 instances running in an Auto Scaling group for the web app and EC2 instances running in an Auto Scaling group to process an Amazon SQS queue. The company wants to re-architect the app to reduce operational overhead using AWS managed services where possible and remove dependencies on third-party software.
    The solution that meets these requirements is to host the web app in Amazon S3, which makes it highly available, scalable, and able to handle variable traffic. Store the uploaded videos in Amazon S3. Use S3 event notifications to publish events to the SQS queue. Process the SQS queue with an AWS Lambda function that calls the Amazon Rekognition API to categorize the videos (see the sketch below).
    This solution eliminates the need to manage EC2 instances, EBS volumes, and the custom software. Additionally, using a Lambda function eliminates the need to manage additional servers to process the SQS queue, which further reduces operational overhead.
    By using this solution, the company can benefit from the scalability, reliability, and cost-effectiveness that these services offer, which can help to improve the overall performance and security of the app.
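    A hedged sketch of the Lambda that processes the SQS messages and calls Rekognition; the S3 event wiring is assumed to be in place:
    Code:
    import json
    import boto3

    rekognition = boto3.client('rekognition')

    def handler(event, context):
        for record in event['Records']:              # SQS batch
            s3_event = json.loads(record['body'])    # S3 event notification payload
            for s3_record in s3_event.get('Records', []):
                bucket = s3_record['s3']['bucket']['name']
                key = s3_record['s3']['object']['key']
                # Start asynchronous video label detection for categorization.
                rekognition.start_label_detection(
                    Video={'S3Object': {'Bucket': bucket, 'Name': key}},
                )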

  3. A company has a serverless app comprised of Amazon CloudFront, Amazon API Gateway, and AWS Lambda functions. The current deployment process of the app code is to create a new version number of the Lambda function and run an AWS CLI script to update. If the new function version has errors, another CLI script reverts by deploying the previous working version of the function. The company would like to decrease the time to deploy new version of the app logic provided by the Lambda functions, and also reduce the time to detect and revert when errors are identified.
    This can be accomplished by using the AWS Serverless Application Model (AWS SAM) with its built-in AWS CodeDeploy integration to deploy the new Lambda version, gradually shift traffic to the new version, and run pre-traffic and post-traffic test functions to verify the code. Roll back if Amazon CloudWatch alarms are triggered.
    AWS SAM is a framework that helps build, test, and deploy serverless apps. It uses CloudFormation under the hood, which simplifies the process of creating, updating, and deploying the app's resources, including the Lambda functions.

  4. A company is planning to store a large number of archived documents and make the documents available to employees through the corporate intranet. Employees will access the system by connecting through a client VPN service that is attached to a VPC. The data must not be accessible to the public.
    The documents that the company is storing are copies of data that is held on physical media elsewhere. The number of requests will be low. Availability and speed of retrieval are not concerns of the company.
    The solution that will meet these requirements at the LOWEST cost is to create an Amazon S3 bucket, configure it to use the S3 One Zone-Infrequent Access (IA) storage class as default, and configure the bucket for website hosting. Create an S3 interface endpoint and configure the bucket to allow access only through that endpoint.
    Glacier Deep Archive can't be used for web hosting.

  5. A company is using an on-premises Active Directory (AD) service for user authentication. The company wants to use the same authentication service to sign in to the company's AWS accounts, which are using AWS Organizations. AWS Site-to-Site VPN connectivity already exists between the on-premises environment and all the company's AWS accounts.
    The company's security policy requires conditional access to the accounts based on user groups and roles. User identities must be managed in a single location.
    The solution that will meet these requirements is to configure AWS IAM Identity Center (AWS Single Sign-On) to connect to AD by using SAML 2.0, enable automatic provisioning by using the System for Cross-domain Identity Management (SCIM) v2.0 protocol, and grant access to the AWS accounts by using Attribute-Based Access Control (ABAC).
    ABAC is a method of granting access to resources based on the attributes of the user, the resource, and the action. This allows for fine-grained access control, which is useful for implementing a security policy that requires conditional access to the accounts based on user groups and roles.

  6. A company is running a data-intensive app on AWS. The app runs on a cluster of hundreds of Amazon EC2 instances. A shared file system also runs on several EC2 instances that store 200 TB of data. The app reads and modifies the data on the shared file system and generates a report. The job runs once monthly, reads a subset of the files from the shared file system, and takes about 72 hours to complete. The compute instances scale in an Auto Scaling group, but the instances that host the shared file system run continuously. The compute and storage instances are all in the same AWS Region.
    A solution architect needs to reduce costs by replacing the shared file system instances. The file system must provide high performance access to the needed data for the duration of the 72-hour run.
    The solution that will provide the LARGEST overall cost reduction while meeting these requirements is to migrate the data from the existing shared file system to an Amazon S3 bucket that uses the S3 Intelligent-Tiering storage class. Before the job runs each month, use Amazon FSx for Lustre to create a new file system with the data from Amazon S3 by using lazy loading. Use the new file system as the shared storage for the duration of the job, and delete the file system when the job is complete (see the sketch below).
    Lazy loading is cost-effective because each job uses only a subset of the data.
    A single EBS Multi-Attach volume is limited to 16 attached Nitro-based instances.
    Batch loading would load too much data.
    AWS Storage Gateway and File Gateway are solutions for integrating on-premises storage with AWS storage.
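    A hedged boto3 sketch of the monthly workflow; the capacity, subnet, and bucket values are hypothetical:
    Code:
    import boto3

    fsx = boto3.client('fsx')
    fs = fsx.create_file_system(
        FileSystemType='LUSTRE',
        StorageCapacity=2400,                 # GiB; sized for the job's working set
        SubnetIds=['subnet-0example'],
        LustreConfiguration={
            'DeploymentType': 'SCRATCH_2',
            'ImportPath': 's3://report-data-bucket',  # lazy loads objects on first read
        },
    )

    # ... run the 72-hour job against the file system ...

    fsx.delete_file_system(FileSystemId=fs['FileSystem']['FileSystemId'])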

  1. A software company has deployed an app that consumes a REST API by using Amazon API Gateway, AWS Lambda functions, and an Amazon DynamoDB table. The app is showing an increase in the number of errors during PUT requests. Most of the PUT calls come from a small number of clients that are authenticated with specific API keys.
    A solution architect has identified that a large number of the PUT requests originate from one client. The API is noncritical, and clients can tolerate retries of unsuccessful calls. However, the errors are displayed to customers and are causing damage to the API's reputation.
    To improve the customer experience, the solution architect should recommend implementing API throttling through a usage plan at the API Gateway level; this limits the number of requests that a client can make, which helps reduce the number of errors.
    Ensuring that the client app handles HTTP 429 (Too Many Requests) replies without error will further improve the customer experience by reducing the number of errors displayed to customers, and it will prevent the API's reputation from being damaged by the errors.
    API throttling is a technique that can be used to control the rate of requests to an API. This can be useful in situations where a small number of clients are making a large number of requests, which is causing errors.
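    A hedged boto3 sketch of the usage plan; the IDs and limits are hypothetical:
    Code:
    import boto3

    apigw = boto3.client('apigateway')

    # Create a usage plan that throttles requests per client.
    plan = apigw.create_usage_plan(
        name='standard-clients',
        throttle={'rateLimit': 50.0, 'burstLimit': 100},
        apiStages=[{'apiId': 'a1b2c3', 'stage': 'prod'}],
    )

    # Attach the noisy client's existing API key to the plan.
    apigw.create_usage_plan_key(
        usagePlanId=plan['id'],
        keyId='apikey-example-id',
        keyType='API_KEY',
    )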

  2. A company is developing a new service that will be accessed using TCP on a static port. A solution architect must ensure that the service is highly available, has redundancy across AZs, and is accessible using the DNS name my.service.com, which is publicly accessible. The service must use fixed address assignments so other companies can add the addresses to their allow lists.
    Assuming that resources are deployed in multiple AZs in a single Region, the solution that will meet these requirements is to create Amazon EC2 instances for the service and one Elastic IP address for each AZ. Create a Network Load Balancer (NLB) and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB, one per AZ. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named my.service.com and assign the NLB DNS name to the record set (see the sketch below).
    Non-HTTP protocols such as raw TCP should use an NLB.
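    A hedged boto3 sketch of the NLB creation with one Elastic IP per AZ, so partners can allow-list fixed addresses; the subnet and allocation IDs are hypothetical:
    Code:
    import boto3

    elbv2 = boto3.client('elbv2')
    elbv2.create_load_balancer(
        Name='my-service-nlb',
        Type='network',
        Scheme='internet-facing',
        SubnetMappings=[
            {'SubnetId': 'subnet-az1', 'AllocationId': 'eipalloc-az1'},
            {'SubnetId': 'subnet-az2', 'AllocationId': 'eipalloc-az2'},
        ],
    )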

  3. A company uses an on-premises data analytics platform. The system is highly available in a fully redundant configuration across 12 servers in the company's data center. The system runs scheduled jobs, both hourly and daily, in addition to one-time requests from users. Scheduled jobs can take between 20 mins and 2 hours to finish running and have tight SLAs. The scheduled jobs account for 65% of the system usage. User jobs typically finish running in less than 5 mins and have no SLA. The user jobs account for 35% of system usage. During system failures, scheduled jobs must continue to meet SLAs. However, user jobs can be delayed.
    A solution architect needs to move the system to Amazon EC2 instances and adopt a consumption-based model to reduce costs with no long-term commitments. The solution must maintain high availability and must not affect the SLAs.
    The solution that will meet these requirements MOST cost-effectively is to split the 12 instances across three AZs in the chosen AWS Region, run three instances in each AZ as On-Demand Instances with Capacity Reservations, and run one instance in each AZ as a Spot Instance.
    We need to guarantee 65% of capacity for the SLA (65% of 12 servers ≈ 8 servers), so the 9 On-Demand Instances cover it; the remaining instances can run as Spot Instances.

  4. A security engineer determined that an existing app retrieves credentials to an Amazon RDS for MySQL DB from an encrypted file in Amazon S3. For the next version of the app, the security engineer wants to implement the following app design changes to improve security:
    • The DB must use strong, randomly generated passwords stored in a secure AWS managed service.
    • The app resources must be deployed through AWS CloudFormation.
    • The app must rotate credentials for the DB every 90 days.

      A solution architect will generate a CloudFormation template to deploy the app. The resources specified in the CloudFormation template that will meet the security engineer's requirements with the LEAST amount of operational overhead are: generate the DB password as a secret resource using AWS Secrets Manager, create an AWS Lambda function resource to rotate the DB password, and specify a Secrets Manager RotationSchedule resource to rotate the DB password every 90 days (an API-level illustration follows below).
      .
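      An API-level illustration of what the CloudFormation resources configure; the names and rotation Lambda ARN are hypothetical:
      Code:
      import boto3

      sm = boto3.client('secretsmanager')

      # Strong, randomly generated password stored in Secrets Manager.
      pwd = sm.get_random_password(PasswordLength=32, ExcludeCharacters='"@/\\')
      secret = sm.create_secret(Name='app/db-password',
                                SecretString=pwd['RandomPassword'])

      # Rotate every 90 days using the rotation Lambda function.
      sm.rotate_secret(
          SecretId=secret['ARN'],
          RotationLambdaARN='arn:aws:lambda:us-east-1:111122223333:function:rotate-db',
          RotationRules={'AutomaticallyAfterDays': 90},
      )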
  5. A company is storing data in several Amazon DynamoDB tables. A solution architect must use a serverless architecture to make the data accessible publicly through a simple API over HTTPS. The solution must scale automatically in response to demand.
    The solutions that meet these requirements are to create an Amazon API Gateway:
    • REST API: it can run over HTTPS; configure this API with direct integrations to DynamoDB by using API Gateway's AWS integration type.
    • HTTP API: configure this API with integrations to AWS Lambda functions that return data from the DynamoDB tables (HTTP APIs do not support direct DynamoDB integrations).
      .
  6. A company has registered 10 new domain names. The company uses the domains for online marketing. The company needs a solution that will redirect online visitors to a specific URL for each domain. All domains and target URLs are defined in a JSON document. All DNS records are managed by Amazon Route 53.
    A solution architect must implement a redirect service that accepts HTTP and HTTPS requests. To meet these requirements with the LEAST amount of operational effort, the solution architect should create:
    • An AWS Lambda function that uses the JSON document in combination with the event message to look up the target URL for each domain and respond with an appropriate redirect URL. There is no need to rely on a web server to handle the redirects, which reduces operational effort (see the handler sketch after this list).
    • An Amazon CloudFront distribution. Deploy a Lambda@Edge function.
    • An SSL certificate by using AWS Certificate Manager (ACM). Include the domains as Subject Alternative Names. This can ensure that the redirect service can handle both HTTP and HTTPS requests.
      .
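    A hedged sketch of a Lambda@Edge viewer-request handler; the mapping below is a hypothetical stand-in for the company's JSON document:
    Code:
    REDIRECTS = {
        'example-promo1.com': 'https://www.example.com/promo1',
        'example-promo2.com': 'https://www.example.com/promo2',
    }

    def handler(event, context):
        request = event['Records'][0]['cf']['request']
        host = request['headers']['host'][0]['value']
        target = REDIRECTS.get(host, 'https://www.example.com/')
        # Return a redirect instead of forwarding the request to an origin.
        return {
            'status': '301',
            'statusDescription': 'Moved Permanently',
            'headers': {
                'location': [{'key': 'Location', 'value': target}],
            },
        }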
  7. A company that has multiple AWS accounts is using AWS Organizations. The company's AWS accounts host VPCs, Amazon EC2 instances, and containers.
    The company's compliance team has deployed a security tool in each VPC where the company has deployments. The security tools run on EC2 instances and send information to the AWS account that is dedicated for the compliance team. The company has tagged all the compliance-related resources with a key of 'costCenter' and a value of 'compliance'.
    The company wants to identify the cost of the security tools that are running on the EC2 instances so that the company can charge the compliance team's AWS account. The cost calculation must be as accurate as possible.
    To meet these requirements, a solution architect should activate the costCenter user-defined tag in the management account of the organization (so the setup does not depend on individual users). Configure monthly AWS Cost and Usage Reports to be saved to an Amazon S3 bucket in the management account. Use the tag breakdown in the report to obtain the total cost for the costCenter-tagged resources.

  1. A company has 50 AWS accounts that are members of an organization in AWS Organizations. Each account contains multiple VPCs. The company wants to use AWS Transit Gateway to establish connectivity between the VPCs in each member account. Each time a new member account is created, the company wants to automate the process of creating a new VPC and a transit gateway attachment.
    The steps that will meet these requirements are:
    • From the management account, share the transit gateway with member accounts by using AWS Resource Access Manager (RAM).
    • Launch an AWS CloudFormation stack set from the management account that automatically creates a new VPC and a VPC transit gateway attachment in a member account. Associate the attachment with the transit gateway in the management account by using the transit gateway ID. This helps streamline the process and reduce operational effort (see the API-level sketch after this list).
      .
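    A hedged, API-level sketch of what the stack set automates in each member account; the CIDRs and transit gateway ID are hypothetical:
    Code:
    import boto3

    ec2 = boto3.client('ec2')

    # Create the member account's VPC and a subnet for the attachment.
    vpc = ec2.create_vpc(CidrBlock='10.20.0.0/16')
    subnet = ec2.create_subnet(VpcId=vpc['Vpc']['VpcId'],
                               CidrBlock='10.20.1.0/24')

    # Attach the VPC to the transit gateway shared via RAM.
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId='tgw-0example',   # shared from the management account
        VpcId=vpc['Vpc']['VpcId'],
        SubnetIds=[subnet['Subnet']['SubnetId']],
    )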
  2. An enterprise company wants to allow its devs to purchase third-party software through AWS Marketplace. The company uses an AWS Organizations account structure with full features enabled, and has a shared services account in each Organization Unit (OU) that will be used by procurement managers. The procurement team's policy indicates that devs should be able to obtain third-party software from an approved list only and use Private Marketplace in AWS Marketplace to achieve this requirement. The procurement team wants administration of Private Marketplace to be restricted to a role named procurement-manager-role, which could be assumed by procurement managers. Other IAM users, groups, roles, and account administrators in the company should be denied Private Marketplace administrative access.
    The MOST efficient way to design an architecture to meet these requirements is to create an IAM role named procurement-manager-role in all the shared services accounts in the organization and add the AWSPrivateMarketplaceAdminFullAccess managed policy to the role. Create an organization root-level SCP to deny permission to administer Private Marketplace to everyone except the role named procurement-manager-role. Create another organization root-level SCP to deny permission to create an IAM role named procurement-manager-role to everyone in the organization.

  3. A company is in the process of implementing AWS Organizations to constrain its devs to use only Amazon EC2, S3, and DynamoDB. The devs account resides in a dedicated OU. The solution architect has implemented the following SCP on the devs account:
    Code:
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowEC2",
          "Effect": "Allow",
          "Action": "ec2:*",
          "Resource": "*"
        },
        {
          "Sid": "AllowDynamoDB",
          "Effect": "Allow",
          "Action": "dynamodb:*",
          "Resource": "*"
        },
        {
          "Sid": "AllowS3",
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": "*"
        }
      ]
    }
    When this policy is deployed, IAM users in the devs account are still able to use AWS services that are not listed in the policy. To eliminate the devs' ability to use services outside the scope of this policy, the solutions architect should remove the FullAWSAccess SCP from the devs account's OU.
    By default, the FullAWSAccess policy is attached to all OUs when SCPs are enabled (see the sketch below).
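    A minimal boto3 sketch of the detachment; the OU ID is hypothetical, and p-FullAWSAccess is the well-known ID of the AWS managed SCP:
    Code:
    import boto3

    org = boto3.client('organizations')
    # After this, only the allow-list SCP above applies to the devs OU.
    org.detach_policy(PolicyId='p-FullAWSAccess', TargetId='ou-example-devs')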

  4. A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic.
    A solution architect needs to implement a solution so that the app can handle the new and varying load. The solution that will meet these requirements with the LEAST operational overhead is to separate the API into individual AWS Lambda functions, configure an Amazon API Gateway REST API with Lambda integration for the backend, and update the Route 53 record to point to the API Gateway API.
    Separating the API involves development overhead, but once done, the operational (ongoing day-to-day) overhead is the least.

  5. A company has created an OU in AWS Organizations for each of its engineering teams. Each OU owns multiple AWS accounts. The organization has hundreds of AWS accounts.
    A solution architect must design a solution so that each OU can view a breakdown of usage costs across its AWS accounts.
    The solution that meets these requirements is to create an AWS Cost and Usage Report (CUR) from the AWS Organizations management account. This allows the management account to view usage costs across all the member accounts and lets each team visualize the CUR through an Amazon QuickSight dashboard. The organization gets a centralized place to view the cost breakdown, and the teams can access it in an easy way.

  6. A company is storing data on premises on a Windows file server. The company produces 5 GB of new data daily. The company migrated part of its Windows-based workload to AWS and needs the data to be available on a file system in the cloud. The company already has established an AWS Direct Connect connection between the on-premises network and AWS.
    The data migration strategy the company should use is AWS DataSync with a scheduled daily task to replicate data between the on-premises Windows file server and Amazon FSx.
    EFS only supports Linux file systems (NFS).

  7. A company's solution architect is reviewing a web app that runs on AWS. The app references static assets in an Amazon S3 bucket in the us-east-1 Region. The company needs resiliency across multiple AWS Regions. The company already has created an S3 bucket in a second Region.
    The solution that will meet these requirements with the LEAST operational overhead is to configure the built-in replication feature on the S3 bucket in us-east-1 to automatically replicate objects to the S3 bucket in the second Region (see the sketch below), and to set up an Amazon CloudFront distribution with an origin group that contains the two S3 buckets as origins. The app then benefits from CloudFront's global content delivery network, which improves load times and provides a built-in failover mechanism.
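    A hedged boto3 sketch of enabling the replication; the bucket names and IAM role are hypothetical, and versioning must already be enabled on both buckets:
    Code:
    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_replication(
        Bucket='static-assets-use1',
        ReplicationConfiguration={
            'Role': 'arn:aws:iam::111122223333:role/s3-replication',
            'Rules': [{
                'Priority': 1,
                'Status': 'Enabled',
                'Filter': {'Prefix': ''},   # replicate all objects
                'DeleteMarkerReplication': {'Status': 'Disabled'},
                'Destination': {'Bucket': 'arn:aws:s3:::static-assets-second-region'},
            }],
        },
    )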

  8. A company is hosting a three-tier web app in an on-premises environment. Due to a recent surge in traffic that resulted in downtime and a significant financial impact, company management has ordered that the app be moved to AWS. The app is written in .NET and has a dependency on a MySQL DB. A solution architect must design a scalable and highly available solution to meet the demand of 200,000 daily users.
    To design an appropriate solution, the solution architect should use AWS CloudFormation to launch a stack containing an ALB in front of an Amazon EC2 Auto Scaling group spanning three AZs. The stack should launch a Multi-AZ deployment of an Amazon Aurora MySQL DB cluster with a Retain deletion policy. Use an Amazon Route 53 alias record to route traffic from the company's domain to the ALB.
    An NLB is not appropriate for a web app.
    Elastic Beanstalk is a regional service; it cannot automatically scale a web server environment that spans two separate Regions.
    Spot Instances can't meet the high-availability requirement.

AWS Partner Certification Readiness - Solutions Architect - Associate:

AWS Networking Basics:

  • AWS provides networking and content delivery services for network foundations, hybrid connectivity, edge networking, application networking, and network security.

  • Amazon Virtual Private Cloud (Amazon VPC) is a networking foundations service and gives you full control over your virtual networking environment, including resource placement, connectivity, and security.

  • AWS Direct Connect is a hybrid connectivity service and is the shortest path to your AWS resources. While in transit, your network traffic remains on the AWS global network and never touches the public internet.

  • Network design documents and diagrams visualize the components of a network, including routers, firewalls (security), and devices, and also show how those components interact.

  • The main types of network protocols are management, communication, and security protocols.

  • There are two Internet protocols: Internet Protocol version 4 (IPv4) and Internet Protocol version 6 (IPv6).

  • The TCP/IP model and the OSI model are the most used communication networking protocols. OSI is a conceptual model. It defines how applications can communicate over a network. TCP/IP describes how the TCP and IP protocols establish links and interact on a network.

  • AWS created their global infrastructure as a collection of individual infrastructures located all over the world. So globally, AWS offers Regions, Availability Zones, local zones, and edge locations.

  • AWS is designed to help build secure, high-performing, resilient, and efficient infrastructure for your applications and offers network Access Control Lists (ACLs) and security groups to add precise security where it is needed.

  • The AWS services that secure your communications and ensure no traffic traverses the internet are AWS PrivateLink and AWS Direct Connect.

AWS Identity and Access Management - Basics:

  • Can't use IAM to manage EC2 SSH keys for users.

  • Can't authenticate into AWS Management Console using Access key and Secret Access Key.

AWS Control Tower:
  • Code:
    https://aws.amazon.com/blogs/mt/customizing-account-configuration-aws-control-tower-lifecycle-events
  • Code:
    https://aws.amazon.com/blogs/mt/integrating-existing-cloudtrail-configurations-when-launching-aws-control-tower
  • Code:
    https://aws.amazon.com/blogs/mt/automate-enrollment-of-accounts-with-existing-aws-config-resources-into-aws-control-tower
  • Code:
    https://aws.amazon.com/blogs/mt/improve-governance-and-business-agility-using-aws-management-and-governance-videos-part-2
  • Code:
    https://aws.amazon.com/blogs/mt/organizing-your-aws-control-tower-landing-zone-with-nested-ous
Encryption Fundamentals:

  • Asymmetric encryption is a form of encryption that uses one key for encrypting data and another mathematically related key for decrypting data.

  • The goal of envelope encryption is protecting the keys that directly encrypt data.
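    A hedged boto3 illustration of the idea (the key alias is hypothetical): a data key encrypts the data locally, and KMS protects the data key.
    Code:
    import boto3

    kms = boto3.client('kms')
    data_key = kms.generate_data_key(KeyId='alias/app-key', KeySpec='AES_256')

    plaintext_key = data_key['Plaintext']       # encrypt data locally, then discard
    encrypted_key = data_key['CiphertextBlob']  # store alongside the encrypted data

    # Later: ask KMS to decrypt the stored data key before decrypting the data.
    restored = kms.decrypt(CiphertextBlob=encrypted_key)['Plaintext']
    assert restored == plaintext_key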

  • Signatures help achieve data integrity because they include a hash of the file being signed. If that hash matches a hash provided by the sender, the file has not been altered.

  • AWS KMS integrates with over 100 AWS offerings to encrypt data at rest using envelope encryption with AWS KMS keys.

  • A company can use CloudHSM to provision and manage dedicated FIPS 140-2 Level 3 single-tenant HSM instances.

  • Can use the AWS Database Encryption SDK in the applications to perform record-level encryption for data destined for the AWS databases.

  • AWS Private Certificate Authority is a service that you can use to create a managed, private CA in the cloud with up to five tiers. You can use it in combination with ACM to create private certificates, associate them with AWS resources, and benefit from automated management features, like renewals.

Webinar:

  • A company needs to maintain access logs for a minimum of 5 years due to regulatory requirements. The data is rarely accessed once stored but must be accessible with one day's notice if it is needed.
    The MOST cost-effective data storage solution that meets these requirements is to store the data in Amazon S3 Glacier Deep Archive and delete the objects after 5 years using a lifecycle rule.

  • A company uses Reserved Instances to run its data-processing workload. The nightly job typically takes 7 hours to run and must finish within a 10-hour time window. The company anticipates temporary increases in demand at the end of each month that will cause the job to run over the time limit with the capacity of the current resources. Once started, the processing job cannot be interrupted before completion. The company wants to implement a solution that would allow it to provide increased capacity as cost-effectively as possible. To accomplish this, a solution architect should deploy On-Demand Instances during periods of high demand.

  • A solution architect wants to design a solution to save costs for Amazon EC2 instances that do not need to run during a 2-week company shutdown. The apps running on the EC2 instances store data in instance memory that must be present when the instances resume operation.
    To shut down and resume the EC2 instances, the solution architect should recommend running the apps on EC2 instances enabled for hibernation and hibernating the instances before the 2-week company shutdown.

  • A company has a two-tier app architecture that runs in public and private subnets. Amazon EC2 instances running the web app are in the public subnet and an EC2 instance for the DB runs on the private subnet. The web app instances and the DB are running in a single AZ.
    To provide high availability for this architecture, a solution architect should:
    • Create an Amazon EC2 Auto Scaling group and ALB spanning multiple AZs for the web app instances.
    • Create new public and private subnets in the same VPC, each in a new AZ. Create an Amazon RDS Multi-AZ DB instance in the private subnets. Migrate the old DB contents to the new DB instance.
      .
  • A company has an on-premises app that exports log files about users of a website. These log files range from 20 GB to 30 GB in size. A solution architect has created an Amazon S3 bucket to store these files. The files will be uploaded directly from the app. The network connection experiences intermittent failures, and the upload sometimes fails.
    A solution architect must design a solution that resolves this problem and minimizes operational overhead. The solution that will meet these requirements is to use multipart upload to S3 to handle the file exports (see the sketch below).
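    A hedged boto3 sketch (the bucket and file names are hypothetical); boto3's managed transfer performs the multipart upload and retries failed parts, which tolerates intermittent network failures:
    Code:
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client('s3')
    config = TransferConfig(
        multipart_threshold=64 * 1024 * 1024,   # switch to multipart above 64 MB
        multipart_chunksize=64 * 1024 * 1024,
    )
    s3.upload_file('users-2024-01.log', 'log-export-bucket',
                   'logs/users-2024-01.log', Config=config)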


  • A company is experiencing problems with its message-processing app. During periods of high demand, the app becomes overloaded. The app is based on a monolithic design and is hosted in an on-premises data center. The company wants to move the app to the AWS Cloud and decouple the monolithic architecture. A solution architect must design a solution that allows worker components of the app to access the messages and handle the peak volume.
    The solution that meets these requirements with the HIGHEST throughput is to use Amazon Simple Queue Service (SQS) standard queues in combination with Amazon EC2 instances that are scaled by an Auto Scaling group.

  • A company asks a solution architect to implement a pilot light Disaster Recovery (DR) strategy for an existing on-premises app. The app is self-contained and does not need to access any DBs.
    The solution that implements a pilot light DR strategy is to recreate the app hosting environment on AWS by using Amazon EC2 instances and stop the EC2 instances. When the on-premises app fails, start the stopped EC2 instances and direct 100% of app traffic to the EC2 instances that are running in the AWS Cloud.

Amazon GuardDuty:

Code:
https://aws.amazon.com/blogs/security/how-to-use-new-amazon-guardduty-eks-protection-findings
Code:
https://aws.amazon.com/blogs/security/visualizing-amazon-guardduty-findings
Code:
https://aws.amazon.com/blogs/security/amazon-guardduty-threat-detection-and-remediation-scenario
Code:
https://aws.amazon.com/blogs/security/why-we-reduce-complexity-and-rapidly-iterate-on-amazon-guardduty-twelve-new-detections-added
Code:
https://aws.amazon.com/blogs/security/how-to-manage-amazon-guardduty-security-findings-across-multiple-accounts

AWS Compute Services:

  • An AWS Lambda function is an appropriate, cost-effective fit for an application that is small, fast, and runs when a person accesses it. It lets you focus time on developing applications without worrying about server or infrastructure needs. Scenarios:
    • A pop-up for a website shows a list of things to do. The user can select completed items or delete items from the list.
    • The development team in a startup company is comfortable with breaking their code into small pieces. The team want to focus its resources on business logic and not on infrastructure.
      .
  • With containers, you can package entire applications and move them to the cloud without making any code changes. The application can be as large as you need it to be and can run as long as you require it to run.
    • A payroll application is running customized code. The code is complex, and the application needs to migrate without any changes to the underlying code.
      .
  • An EC2 instance is a virtualized server in the cloud, and EC2 instances lend themselves to supporting a large variety of applications. Anything that can run on a physical server can run on Amazon EC2. Amazon EC2 gives access to the OS and the underlying files, and it can scale out and in as necessary.
    • An application that handles large education conferences and requires access to the operating system (OS) and control of the underlying infrastructure.

Amazon EKS Primer:

  • Amazon EKS supports native, upstream Kubernetes and is certified Kubernetes-conformant.

  • Pods are the basic building block within Kubernetes for deployment, scaling, and replication.

  • In Kubernetes, a service is a logical collection of pods and a means to access them. The service is continually updated with the set of pods available, eliminating the need for pods to track other pods.

  • The Kubernetes control plane is always managed by Amazon EKS.

  • Should use Amazon EKS API to:
    .
    • Create a cluster. Example: eksctl create cluster

    • Delete a managed node group. Example: eksctl delete nodegroup --cluster=clusterName --name=nodegroupName

    • Get the Fargate profile. Example: eksctl get fargateprofile --cluster clustername
      .
  • Should use Kubernetes API to:
    .
    • Create a deployment. Example: kubectl apply -f nginx-deployment.yaml

    • Get all the namespaces. Example: kubectl get namespace
      .
  • Can customize the behavior of eksctl by using the --region flag as you run the command or by creating a cluster.yaml file with specific attributes.

  • The NodePort service opens a port on each node, allowing access from outside the cluster. The LoadBalancer service extends the NodePort service by adding a load balancer in front of all nodes.

  • Amazon EKS integrates VPC networking into Kubernetes using the CNI plugin for Kubernetes. The CNI plugin allows Kubernetes pods to have the same IP address inside the pods as they do on the VPC network.

  • Benefits of a service mesh are Connects microservices, secures network traffic between microservices using encryption and authentication controls, and provides end-to-end visibility in application performance.

  • The default Amazon EKS add-ons are the kube-proxy and the Amazon VPC CNI.

  • Amazon EKS manages add-on updates when the add-on was installed using the AWS Management Console or when the add-on was installed using eksctl with a configuration file.

  • The recommended method to update self-managed Amazon EKS cluster nodes to a new version of Kubernetes is Replace the nodes with the new AMI and migrate your pods to the new group.

  • Both the AWS Management Console and the eksctl utility provide the means to update a managed node group.

  • Compute resources is the largest driver of cost for running an Amazon EKS cluster.

  • On-Demand cluster nodes are a good choice for workloads that have spikes in demand, are stateful, or do not tolerate interruption well.

Webinar:

  • A company is deploying a new app that will consist of an app layer and an OnLine Transaction Processing (OLTP) relational DB. The app must be available at all times. However, the app will have periods of inactivity. The company wants to pay the minimum for compute costs during these idle periods.
    The solution that meets these requirements MOST cost-effectively is to run the app in containers with Amazon ECS on AWS Fargate and use Amazon Aurora Serverless for the DB.

  • A company is deploying a new DB on a new Amazon EC2 instance. The workload of this DB requires a single Amazon EBS volume that can support up to 20,000 IOPS.
    To meet this requirement, the company should use the Provisioned IOPS SSD volume type.

  • A company is developing a chat app that will be deployed on AWS. The app stores the messages by using a key-value data model. Groups of users typically read the messages multiple times. A solution architect must select a DB solution that will scale for a high rate of reads and will deliver messages with microsecond latency.
    The DB solution that will meet these requirements is to deploy Amazon DynamoDB with DynamoDB Accelerator (DAX).

  • A company has an app that runs on a large general-purpose Amazon EC2 instance type that is part of an EC2 Auto Scaling group. The company wants to reduce future costs associated with this app. After the company reviews metrics and logs in Amazon CloudWatch, the company notices that this app runs randomly a couple of times a day to retrieve and manage data. According to CloudWatch, the maximum runtime for each request is 10 mins, the memory use is 4 GB, and the instances are always in the running stage.
    The solution that will reduce costs the MOST is to refactor the app code to run as an AWS Lambda function (the maximum Lambda runtime is 15 mins).

  • A company has many apps on Amazon EC2 instances running in Auto Scaling groups. Company policy requires that the data on the attached Amazon EBS volumes be retained.
    The action that will meet these requirements without impacting performance is to disable the DeleteOnTermination attribute for the Amazon EBS volumes.

  • A company plans to deploy a new app in the AWS Cloud. The app reads and writes information to a DB. The company will deploy the app in two different AWS Regions, and each app will write to a DB in its Region. The DB in the two Regions need to keep the data synchronized with minimal latency. The DBs must be eventually consistent. In case of data conflict, queries should return the most recent write.
    The solution that will meet these requirements with the LEAST administrative work is to use Amazon DynamoDB and configure the table as a global table (see the sketch below).
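    A hedged boto3 sketch; the table and Region names are hypothetical:
    Code:
    import boto3

    ddb = boto3.client('dynamodb', region_name='us-east-1')

    # Create the table in the first Region.
    ddb.create_table(
        TableName='app-data',
        AttributeDefinitions=[{'AttributeName': 'pk', 'AttributeType': 'S'}],
        KeySchema=[{'AttributeName': 'pk', 'KeyType': 'HASH'}],
        BillingMode='PAY_PER_REQUEST',
    )
    ddb.get_waiter('table_exists').wait(TableName='app-data')

    # Add a replica Region to make it a global table.
    ddb.update_table(
        TableName='app-data',
        ReplicaUpdates=[{'Create': {'RegionName': 'eu-west-1'}}],
    )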

  • A solution architect is reviewing the IAM policy attached to a dev's IAM user account:

AWS Transfer Family Primer:

  • Amazon S3 for object storage and Amazon EFS for network file system (NFS) file storage are available to choose from as backend storage services when setting up Transfer Family servers.

  • Transfer Family is built to support dynamic workloads using an elastic compute infrastructure. AWS can increase and decrease resources dynamically using the built-in auto scaling capabilities.

  • Applicability Statement 2 (AS2) is the only outbound protocol that AWS Transfer Family supports, as of September 2022.

  • AS2 protocol is used for business-to-business and trading partner transfers. Supply chain logistics, integration with ERP and CRM systems, and payments workflows are common use cases for AS2 protocol.

  • Data lakes, subscription-based data distribution, and file transfers internal to organization are common use cases for Amazon S3 as the backend storage system.

  • Amazon Route 53 is a highly available and scalable DNS web service.
    AWS CloudTrail is an audit logging service.
    AWS Application Discovery Service is a migration tool.
    AWS Certificate Manager manages and deploys TLS/SSL certificates.

  • SFTP supports the use of public endpoints that are outside of a VPC. It also supports using IAM as a service managed identity provider. The other options are available with FTPS or FTP or available with both FTPS and FTP.

  • SFTP, FTP, and FTPS are billed for each hour the protocol is activated on an endpoint, plus inbound and outbound file transfer billed per GB. Messages are associated with the AS2 protocol. You are billed per protocol, not per Transfer Family server that you deploy.

  • AWS CloudTrail monitors and logs API calls in a trail. Can use the trail to audit calls made to the Transfer Family service.

  • Whether you use predefined steps or create custom file processing steps, the step type used to manage processing errors is an exception-handling step.

AWS Storage Services:

  • Amazon EBS – Designed to support block storage volumes with Amazon EC2 instances

  • Amazon EFS – Designed as a native multi-AZ file system for NFS file shares

  • Amazon FSx for Lustre – Designed for distributed high performance computing workloads

  • Amazon S3 – Designed to support millions of storage objects and accessed using REST APIs

  • Amazon FSx for Windows File Server – Designed to support SMB file shares in a Single-AZ or Multi-AZ configuration

  • For some storage services, such as Amazon Elastic File System (Amazon EFS) and Amazon Simple Storage Service (Amazon S3), you pay for the capacity that you consume. For other storage services, such as Amazon Elastic Block Store (Amazon EBS), Amazon FSx for Lustre, and Amazon FSx for Windows File Server, you pay for the capacity that you provision.

  • Lift and shift, self-managed database migrations are best suited for Amazon EBS. The EBS volumes are similar to running the application on dedicated servers on premises.

  • Amazon S3 includes Amazon S3 Glacier and Amazon S3 Glacier Deep Archive storage classes for long-term archival storage.

  • Amazon FSx for Lustre is designed for High Performance Computing (HPC) environments. It can serve thousands of compute instances and deliver millions of IOPS of performance.

  • If a company's staff comes from a Microsoft Windows environment, they can use FSx for Windows File Server to operate in the way they are already familiar with.

Edge and Hybrid Storage Solutions:

  • AWS Snowcone is the smallest member of the AWS Snow Family. The device weighs under five pounds. Its size makes it ideal for use cases with limited space requirements or that require maximum portability.

  • You can order AWS Snowball Edge devices with compute capabilities as a cluster to increase durability and compute processing capabilities. You can order clusters for local compute and storage-only jobs. The Snowball Edge device must have compute capabilities. AWS Snowcone devices are not available in a cluster configuration.

  • Amazon EBS is natively available on all AWS Outposts implementations, and you can include Amazon S3 in an Outposts configuration. As of this writing, other AWS Storage services are not available as local services on Outposts.

  • Amazon FSx File Gateway is used to work with Windows-based applications and workflows. Amazon S3 File Gateway supports SMB and Network File System (NFS) protocols while storing files as Amazon S3 objects.

  • Volume Gateway makes copies of local block volumes and stores them in a service-managed Amazon S3 bucket.

  • Amazon S3 File Gateway connects on-premises NFS and SMB file shares to customer-managed Amazon S3 object storage.

Transferring Data to the AWS Cloud:

  • AWS Transfer Family is designed to move your on-premises SFTP, FTPS, and FTP workflows to the AWS Cloud.

  • AWS DataSync supports asynchronous, one-direction-at-a-time transfers between on-premises file systems and supported AWS Storage services in the AWS Cloud. Supported sources include NFS file shares, SMB file shares, and self-managed object storage. DataSync also supports asynchronous data transfers between supported AWS Storage resources within the AWS Cloud.
    • Transfers can be scheduled or run on demand (see the sketch just below). AWS Transfer Family is for file-transfer workflows, AWS Snow Family is used for offline data transfers, and AWS Application Migration Service copies changes to on-premises applications and data in real time.
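    A rough sketch of an on-demand run with the AWS CLI (the task ARN is a placeholder):
    Code:
    # Run an existing DataSync task on demand
    aws datasync start-task-execution \
      --task-arn arn:aws:datasync:us-east-1:111122223333:task/task-0123456789abcdef0
    # A recurring schedule can instead be attached to the task itself,
    # e.g. create-task/update-task with --schedule ScheduleExpression="rate(1 day)"
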
  • An organization is planning an offline migration of 135 TB of file data from a hosted data center to new Amazon S3 buckets as part of its overall migration strategy. An application has been developed that requires only 4 vCPUs and 8 GB of memory to operate. The organization should use AWS Snowball Edge Storage Optimized.

  • AWS Snowcone comes in an 8 TB HDD version and a 14 TB SSD version. You would need to check device availability and pricing for the AWS Region where the Amazon S3 bucket is located to select the appropriate device and meet the project's time requirements.

  • CloudEndure Migration uses Amazon EC2 instances and Amazon EBS volumes. It copies and updates, in real time, the operating systems, applications, and data from on-premises application servers to the AWS Cloud. CloudEndure first stages the application on low-cost EC2 instances and EBS volumes. When you are ready to cut over to the AWS Cloud, the low-cost EC2 instances and EBS volumes are upgraded to production-grade EC2 instances and EBS volumes.

Protecting Data in the AWS Cloud:

  • Snapshots are native to Amazon Elastic Block Store (Amazon EBS) and Amazon FSx for Lustre and offer point-in-time, consistent, incremental copies of the data.
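    For instance, a point-in-time EBS snapshot takes one CLI call (the volume ID is a placeholder); later snapshots of the same volume are incremental:
    Code:
    # Create an incremental, point-in-time snapshot of an EBS volume
    aws ec2 create-snapshot \
      --volume-id vol-0123456789abcdef0 \
      --description "Nightly backup of data volume"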

  • AWS Backup and native snapshots are stored in AWS managed Amazon S3 buckets.

  • AWS Backup provides additional data durability by creating additional copies of data. You can store backup copies for as long as your retention requirements demand. Some compliance regulations require retention of immutable backup copies.

  • Like AWS Application Migration Service, CloudEndure Disaster Recovery uses Amazon EBS to copy the operating system, application files, and data to AWS. The on-premises block data is replicated to the EBS volumes.

AWS Databases:

  • AWS Schema Conversion Tool (AWS SCT) is designed to help manage migrations by estimating workloads and potential issues. In some cases, it can even migrate schemas automatically.

  • Semi-structured data is often stored as JSON documents.

  • The components of the AWS Well-Architected Framework are six pillars, a series of questions, and design principles.
    • The reliability pillar features the "stop guessing capacity" design principle.
  • Amazon Redshift is a fast, cloud-based, fully managed, and secure data warehousing service that houses analytical data for use in complex queries, business intelligence reporting, and machine learning.
  • Example pillars in the AWS Well-Architected Framework are Reliability, Performance efficiency, and Cost optimization.

  • Amazon RDS can run Amazon Aurora, Oracle, and PostgreSQL.

  • Amazon Neptune is a fully managed graph database.

  • A ledger database is the solution you might want if you need a transparent, immutable, and cryptographically verifiable transaction log.

  • Amazon ElastiCache for Redis is Redis compatible, but because it is an in-memory cache solution it generally requires a primary database behind it.
    Amazon MemoryDB for Redis is not a cache, so it does not require a "primary" database.

Well-Architected Framework:

  • Using Amazon CloudWatch, you can set alarms when certain metrics are reached.

  • The warm standby disaster recovery approach ensures that there is a scaled-down, but fully functional, copy of the production environment in another Region.

  • Multi-AZ is synchronous while read replicas are asynchronous.

  • Auto scaling and read replicas are great ways to cost-optimize a database.

  • The performance efficiency pillar discusses the "go global in minutes" design principle.

Data Types:

  • Relational databases work with defined schemas. All other databases listed would be considered nonrelational databases.

  • Relational databases use structured data because of their defined schemas. Semi-structured data suits nonrelational databases, and unstructured data, such as MP3 audio files, may be stored in object storage such as Amazon S3.

Relational DBs:

  • A foreign key is used to create relationships between tables in a relational database.

  • Structured Query Language (SQL) is what you would use to access data in a relational database.

  • Aurora is not compatible with Oracle Database, Microsoft SQL Server, or Spark SQL. However, you can use the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS) to convert and migrate content from these databases to Aurora.

  • Tools you can use to create and modify an Amazon RDS instance include the AWS Command Line Interface (AWS CLI), the AWS Management Console, and the Amazon RDS Application Programming Interface (API).

  • Aurora:
    • can have up to 15 read replicas.
    • automatically maintains six copies of data across three Availability Zones.
    • is managed by Amazon RDS.

Nonrelational DBs:

  • Amazon Neptune stores data as nodes and the relationships between nodes.

  • The Amazon DocumentDB database service sets up and scales MongoDB-compatible databases in the cloud.

  • Amazon DynamoDB components are attribute, item, and table.

  • Amazon ElastiCache database service offers fully managed Redis and Memcached distributed memory caches.

  • The Amazon Redshift service acts as a data warehouse and can access S3 data lakes.

  • Amazon Athena lets you analyze data in S3 using SQL.
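    A minimal sketch of running such a query from the CLI (the database, table, and results bucket names are assumptions):
    Code:
    # Run a SQL query against data in S3 via Athena; results land in the
    # specified S3 bucket (names below are placeholders)
    aws athena start-query-execution \
      --query-string "SELECT * FROM access_logs LIMIT 10" \
      --query-execution-context Database=weblogs \
      --result-configuration OutputLocation=s3://my-athena-results/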

37+
  1. A company is using AWS Organizations to manage multiple AWS accounts. For security purposes, the company requires the creation of an Amazon SNS topic that enables integration with a third-party alerting system in all the Organizations member accounts.
    A solution architect used an AWS CloudFormation template to create the SNS topic and stack sets to automate the deployment of CloudFormation stacks. Trusted access has been enabled in Organizations.
    To deploy the CloudFormation stack sets to all AWS accounts, the solutions architect should create a stack set in the Organizations management account, which is the central point of control for all member accounts. This allows deployment of the stack set across all member accounts to be managed from a single location. Use service-managed permissions, set the deployment options to deploy to the organization, and enable CloudFormation StackSets automatic deployment, as sketched below.
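    A hedged CLI sketch of that setup, run from the management account (the stack set name, template file, OU ID, and Region are placeholders):
    Code:
    # Create a service-managed stack set with automatic deployment enabled
    aws cloudformation create-stack-set \
      --stack-set-name org-sns-alerting \
      --template-body file://sns-topic.yaml \
      --permission-model SERVICE_MANAGED \
      --auto-deployment Enabled=true,RetainStacksOnAccountRemoval=false

    # Target the organization root or an OU, and pick the Regions
    aws cloudformation create-stack-instances \
      --stack-set-name org-sns-alerting \
      --deployment-targets OrganizationalUnitIds=ou-examplerootid-exampleouid \
      --regions us-east-1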

  2. A company wants to migrate its workloads from on premises to AWS. The workloads run on Linux and Windows. The company has a large on-premises infrastructure that consists of physical machines and VMs that host numerous apps.
    The company must capture details about the system configuration, system performance, running processes, and network connections of its on-premises workloads. The company also must divide the on-premises apps into groups for AWS migrations. The company needs recommendations for Amazon EC2 instance types so that the company can run its workloads on AWS in the most cost-effective manner.
    To meet these requirements, a solutions architect should:
    • Assess the existing apps by installing AWS Application Discovery Agent on the physical machines and VMs.
    • Group servers into apps for migration by using AWS Migration Hub.
    • Generate recommended instance types and associated costs by using AWS Migration Hub.

  3. A company is hosting an image-processing service on AWS in a VPC. The VPC extends across two AZs. Each AZ contains one public and one private subnet.
    The service runs on Amazon EC2 instances in the private subnets. An ALB in the public subnets is in front of the service. The service needs to communicate with the internet and does so through two NAT gateways. The service uses Amazon S3 for image storage. The EC2 instances retrieve approximately 1 TB of data from an S3 bucket each day.
    The company has promoted the service as highly secure. A solution architect must reduce cloud expenditures as much as possible without compromising the service's security posture or increasing the time spent on ongoing operations.
    The solution that meets these requirements is to set up an S3 gateway VPC endpoint in the VPC and attach an endpoint policy that allows the required actions on the S3 bucket. This lets the EC2 instances access the S3 bucket securely without the NAT gateways, which reduces NAT gateway costs and keeps image traffic on the AWS network.
    Without the gateway endpoint, requests from the EC2 instances in the private subnets to S3 are routed out through the NAT gateways to the public S3 endpoint, and the company is charged for all of that NAT data processing.
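    A minimal sketch of that endpoint setup (the VPC ID, route table ID, and policy file are placeholders):
    Code:
    # Create an S3 gateway endpoint so private-subnet traffic to S3 stays
    # on the AWS network instead of going through the NAT gateways
    aws ec2 create-vpc-endpoint \
      --vpc-id vpc-0123456789abcdef0 \
      --vpc-endpoint-type Gateway \
      --service-name com.amazonaws.us-east-1.s3 \
      --route-table-ids rtb-0123456789abcdef0 \
      --policy-document file://s3-endpoint-policy.json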

  1. .
  2. .
  3. file share > Amazon EFS
    archival data > AWS Snow Family

    snowcone-vs-snowball-aws-docs-1536x862.png


  4. LEAST operational overhead > AWS Elastic Beanstalk
    Amazon EC2, EKS, and refactoring are not. EKS is a good fit for containerized workloads or teams already on Kubernetes.

  5. MongoDB cluster > Amazon DocumentDB (with MongoDB compatibility)
    IoT devices > AWS IoT Core
    general public > Amazon CloudFront - S3
  6. Code:
    https://nopnithi.medium.com/สร้าง-self-service-platform-บน-aws-ให้-user-จัดการ-non-aws-resource-เอง-46f591cc038

  7. To minimize admin overhead:
    Infrastructure as Code (IaC) > CloudFormation
    improved access time for users without a CDN > Route 53 with latency-based routing

40+
  1. A company recently deployed an app on AWS. The app uses Amazon DynamoDB. The company measured the app load and configured the RCUs and WCUs on the DynamoDB table to match the expected peak load. The peak load occurs once a week for a 4-hour period and is double the average load. The app load is close to the average load for the rest of the week. The access pattern includes many more writes to the table than reads of the table.
    A solutions architect needs to implement a solution to minimize the cost of the table. The solution that meets these requirements is to use AWS Application Auto Scaling to increase capacity during the peak period and to purchase reserved RCUs and WCUs that match the average load.
    On-demand mode is for unknown load patterns; auto scaling is for known burst patterns.
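    A sketch of the auto scaling half with the AWS CLI (the table name and capacity numbers are assumptions; the reserved capacity is purchased separately):
    Code:
    # Let Application Auto Scaling drive WCUs between average and peak
    aws application-autoscaling register-scalable-target \
      --service-namespace dynamodb \
      --resource-id table/MyAppTable \
      --scalable-dimension dynamodb:table:WriteCapacityUnits \
      --min-capacity 100 --max-capacity 200

    # Target-tracking policy that keeps write utilization around 70%
    aws application-autoscaling put-scaling-policy \
      --service-namespace dynamodb \
      --resource-id table/MyAppTable \
      --scalable-dimension dynamodb:table:WriteCapacityUnits \
      --policy-name wcu-target-tracking \
      --policy-type TargetTrackingScaling \
      --target-tracking-scaling-policy-configuration '{"TargetValue":70.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBWriteCapacityUtilization"}}'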

  2. A solutions architect needs to advise a company on how to migrate its on-premises data-processing app to the AWS Cloud. Currently, users upload input files through a web portal. The web server stores the uploaded files on a NAS and messages the processing server over a message queue. Each media file can take up to 1 hour to process. The company has determined that the number of media files awaiting processing is significantly higher during business hours and declines rapidly after business hours.
    The MOST cost-effective migration recommendation is to create a queue using Amazon SQS, configure the existing web server to publish to the new queue, use Amazon EC2 instances in an EC2 Auto Scaling group to pull requests from the queue and process the files, scale the EC2 instances based on the SQS queue length, and store the processed files in an Amazon S3 bucket.
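    One way to wire up the queue-length scaling (the queue name, threshold, and alarm-to-policy wiring are assumptions) is a CloudWatch alarm on the SQS backlog that triggers an Auto Scaling scale-out policy:
    Code:
    # Alarm when the backlog grows; attach it to an EC2 Auto Scaling
    # scale-out policy (created separately) via --alarm-actions
    aws cloudwatch put-metric-alarm \
      --alarm-name media-queue-backlog-high \
      --namespace AWS/SQS \
      --metric-name ApproximateNumberOfMessagesVisible \
      --dimensions Name=QueueName,Value=media-processing \
      --statistic Average --period 60 --evaluation-periods 2 \
      --threshold 100 --comparison-operator GreaterThanThreshold
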
HWC Compute Services Practice:
  1. By default, servers deployed in different Huawei Cloud regions cannot communicate with each other through an internal network.

  2. An availability zone of Huawei cloud belongs to a specific region.

  3. After an ECS is created, there is no need to install an OS on it.

  4. When using an ECS, you have full control over the guest OS.

  5. Since spot ECSs may be reclaimed at any time, they should not be used in a production environment.

  6. Both Object Storage Service (OBS) and Elastic Volume Service (EVS) are storage services, but they are accessed with different methods and are used in different scenarios.

  7. Direct Connect establishes a dedicated network connection that links your on-premises data center to the cloud. With this connection, you can enjoy high-speed, stable, and secure data transmission with very low latency.
  1. Choose region:

  2. Create VPC:

  3. View the created VPC:

  4. Buy ECS:

  5. Configure Network:

  6. View the created ECS:

  7. Buy Disk:

  8. Set disk parameters:

  9. Attach Disk:

  10. Initializing an EVS Disk for Windows:

  11. View the new data disk for Linux:

  12. Partition the new data disk: fdisk /dev/vdb >

  13. Write the changes:

    Synchronize the changes to OS: partprobe

  14. Set the file system format: mkfs -t ext4 /dev/vdb1
    Create a mount point: mkdir /mnt/sdc

  15. mount /dev/vdb1 /mnt/sdc
    df -TH
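    To make the mount survive reboots, one commonly also appends an /etc/fstab entry (using the device's UUID from blkid is more robust than /dev/vdb1, which can change):
    Code:
    # Persist the mount across reboots
    echo "/dev/vdb1 /mnt/sdc ext4 defaults 0 0" >> /etc/fstab
    mount -a    # re-reads fstab; errors here mean the entry is wrong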

  16. Back up an image from the ECS:
    Configure DHCP: vi /etc/sysconfig/network-scripts/ifcfg-eth0
    and add PERSISTENT_DHCLIENT="y"

  17. Check whether Cloud-Init is installed: rpm -qa | grep cloud-init

    If it is not, install it: yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
    yum install cloud-init

  18. Image Management Service:

  19. Set the parameters:

  20. Click the username in the upper right corner and select My Credentials.

  21. Share the image to the project ID obtained in step 20:

  22. Enter your Project ID:

  23. Add tenants who can use shared images: on the Image Management Service page, click Private Images, then click the name of the image to be shared

  24. Enter the other project ID.
HWC Creating a Load Balancer:

  1. Networking > Elastic Load Balance > Buy Elastic Load Balancer:

  2. Add listeners:


Google Cloud Architect:

  • You can create a VM instance in Compute Engine with the gcloud command-line tool or the Cloud Console.

  • You can create a Windows instance in Google Cloud by changing its boot disk to a Windows image in the VM instance console.

  • The gcloud compute instances get-serial-port-output command is used to check whether a server is ready for an RDP connection.
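    For example (the instance name and zone are assumptions):
    Code:
    # Read the serial console output; for a Windows VM, a readiness
    # message appears once the instance can accept RDP connections
    gcloud compute instances get-serial-port-output my-windows-vm \
        --zone us-central1-a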

  • Three basic ways to interact with Google Cloud services and resources are Client libraries, Command-line interface, and Cloud Console.

  • Every bucket must have a unique name across the entire Cloud Storage namespace.

  • Object names must be unique only within a given bucket.

  • Each bucket has a default storage class, which you can specify when you create the bucket.

  • An Access Control List (ACL) is a mechanism you can use to define who has access to buckets and objects.

  • Serverless lets you write and deploy code without the hassle of managing the underlying infrastructure.

  • A publisher application creates and sends messages to a topic. Subscriber applications create a subscription to a topic to receive messages from it.

  • Cloud Pub/Sub is an asynchronous messaging service designed to be highly reliable and scalable.

  • Google Cloud Pub/Sub service allows applications to exchange messages reliably, quickly, and asynchronously.

  • A topic is a shared string that allows applications to connect with one another.
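    A minimal end-to-end sketch with gcloud (the topic and subscription names are assumptions):
    Code:
    gcloud pubsub topics create orders                          # publisher side
    gcloud pubsub subscriptions create orders-sub --topic orders
    gcloud pubsub topics publish orders --message "hello"       # send a message
    gcloud pubsub subscriptions pull orders-sub --auto-ack --limit=1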

  • Terraform enables you to safely and predictably create, change, and improve infrastructure.

  • With Terraform, you can create your own custom provider plugins.
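    The day-to-day workflow is a short loop:
    Code:
    terraform init      # download providers/plugins for the configuration
    terraform plan      # preview what would change
    terraform apply     # make the changes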

  • Understanding GKE costs:

  • Monitoring GKE costs:

  • Virtual machines in GKE:

  • Autoscaling with GKE: Overview and pods:

  • Autoscaling with GKE: Clusters and nodes:



  • SELECT is a keyword that specifies the fields (e.g., column values) that you want to pull from a dataset.

  • FROM specifies which table(s) to pull data from.

  • WHERE allows you to filter rows by specific column values.

  • BigQuery is a fully managed, petabyte-scale data warehouse that runs on Google Cloud.

  • Projects contain datasets, and datasets contain tables.

  • With BigQuery, you can access datasets shared publicly from other Google Cloud projects.

  • GROUP BY aggregates rows that share common criteria (e.g., a column value) and returns all of the unique entries found for those criteria.

  • COUNT is a SQL function that counts and returns the number of rows that share common criteria.

  • AS creates an alias of a table or column.

  • ORDER BY sorts the data returned by a query in ascending or descending order based on a specified criterion or column value.
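    Putting those keywords together, a query can be run from the command line against a BigQuery public dataset (the public table below is an assumption about availability):
    Code:
    bq query --use_legacy_sql=false '
    SELECT name, COUNT(*) AS cnt
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE gender = "F"
    GROUP BY name
    ORDER BY cnt DESC
    LIMIT 5'
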
Now let's head over to the Microsoft side:

  1. Company has several departments. Each department has a number of virtual machines (VMs). The company has an Azure subscription that contains a resource group named RG1. All VMs are located in RG1.
    If you want to associate each VM with its respective department, you should assign tags to the VMs.
    By assigning tags to the VMs, you can easily filter and group them based on the department they belong to. This allows for efficient resource management and organization.
    Creating Azure Management Groups or resource groups would provide higher-level organization but do not directly associate individual VMs with specific departments.
    Modifying the settings of the VMs would not be the most practical way to associate them with departments.
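    A quick sketch with the Azure CLI (the VM name and tag value are placeholders; RG1 is from the scenario):
    Code:
    # Tag a VM with its department without touching other settings
    az vm update --resource-group RG1 --name VM1 --set tags.Department=Finance

    # Later, list all VMs for that department
    az vm list --query "[?tags.Department=='Finance'].name" -o tsv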

  2. Premium file shares are hosted in a special-purpose storage account kind called a FileStorage account.
    Object storage data tiering between hot, cool, and archive is supported in Blob Storage and General Purpose v2 (GPv2) accounts. General Purpose v1 (GPv1) accounts don't support tiering. The archive tier supports only LRS, GRS, and RA-GRS.

  3. There is an Azure subscription that contains three virtual networks named VNET1, VNET2, and VNET3. The virtual networks are peered and connected to the on-premises network. The subscription contains the virtual machines shown in the following table:

    Name | Location   | Connected to
    VM1  | West US    | VNET1
    VM2  | West US    | VNET1
    VM3  | West US    | VNET2
    VM4  | Central US | VNET3

    You need to monitor connectivity between the virtual machines and the on-premises network by using Connection Monitor. The minimum number of connection monitors you should deploy is 2 (one per region).

  4. There is a Windows 11 device named Device1 and an Azure subscription that contains the resources shown in the following table:

    Name     | Description
    VNET1    | Virtual network
    VM1      | Virtual machine that runs Windows Server 2022; does NOT have a public IP address; connected to VNET1
    Bastion1 | Azure Bastion Basic SKU host connected to VNET1

    Device1 has Azure PowerShell and Azure CLI installed. From Device1, you need to establish a Remote Desktop connection to VM1. Perform these actions in sequence:
    1. Upgrade Bastion1 to the Standard SKU.
    2. From Bastion1, select Native Client Support.
    3. From Azure CLI on Device1, run az network bastion rdp.
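    Step 3 might look like the following (the resource group name is an assumption; the az network bastion commands require the Azure CLI bastion extension):
    Code:
    az network bastion rdp \
      --name Bastion1 \
      --resource-group RG1 \
      --target-resource-id $(az vm show -g RG1 -n VM1 --query id -o tsv)
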
  5. Azure Bastion requires a Standard SKU public IP address, and the public IP address must be static and regional. A global-tier public IP address cannot be attached to a Bastion.

  6. Example deploying a virtual machine by using an Azure Resource Manager (ARM) template:
    Code:
    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      ...
        "type": "Microsoft.Compute/virtualMachines",
        ...
        "dependsOn": [
          "[resourceId('Microsoft.Network/networkInterfaces/', 'VM1')]"
        ],
        "properties": {
          "storageProfile": {
            "imageReference": {
              "publisher": "MicrosoftWindowsServer",
              "offer": "WindowsServer",
              "sku": "2019-Datacenter",
              "version": "latest"
            ...
    }

  7. Azure provides four levels of scope: management groups, subscriptions, resource groups, and resources. Management groups contain subscriptions, subscriptions contain resource groups, and resource groups contain resources.

    You can assign a policy at the Tenant Root Group, management group, subscription, or RG1 level. For exclusions, you can select any item in the scope EXCEPT the Tenant Root Group.

  8. When an app uses a Managed Identity (MI), it can access the Storage Account via IAM, which minimizes the number of secrets in use.
    A Shared Access Signature (SAS) provides secure delegated access to resources in a storage account without compromising the security of the data. With a SAS, you have granular control over how a client can access data: which resources the client may access, what permissions they have on those resources, and how long the SAS is valid, among other parameters. A SAS can also limit the time window of access.
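    For example, a read-only, time-boxed container SAS (the account name, container, and expiry are placeholders):
    Code:
    az storage container generate-sas \
      --account-name mystorageacct \
      --name backups \
      --permissions r \
      --expiry 2030-01-01T00:00Z \
      --auth-mode key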

  9. Whether a user can reset their password depends on, in this order:
    1. The group the user belongs to; self-service password reset applies only to enabled groups.
    2. The "Number of methods required" that was configured.
      To add security questions to the reset process, you need to be a Global Administrator; a User Administrator cannot add them and does not have MFA permissions.
  10. You can use service tags to achieve network isolation and protect Azure resources from the general internet while still accessing Azure services that have public endpoints. Create inbound/outbound Network Security Group (NSG) rules that deny traffic to/from the Internet tag and allow traffic to/from the AzureCloud tag or the service tags of specific Azure services, as sketched below.
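    A sketch of that pair of outbound rules (the NSG name and priorities are assumptions):
    Code:
    # Allow outbound traffic to Azure services...
    az network nsg rule create --resource-group RG1 --nsg-name NSG1 \
      --name AllowAzureCloudOut --direction Outbound --priority 100 \
      --access Allow --protocol '*' --destination-port-ranges '*' \
      --destination-address-prefixes AzureCloud

    # ...but deny everything else headed to the general internet
    az network nsg rule create --resource-group RG1 --nsg-name NSG1 \
      --name DenyInternetOut --direction Outbound --priority 200 \
      --access Deny --protocol '*' --destination-port-ranges '*' \
      --destination-address-prefixes Internet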

  11. An Azure Internal Load Balancer (ILB) provides network load balancing between the web servers and the business-logic tier, spreading traffic equally across virtual machines that reside inside a cloud service or a virtual network with a regional scope.
    Azure Web Application Firewall (WAF) on Azure Application Gateway provides centralized protection of web applications/servers from common exploits and vulnerabilities such as SQL injection attacks. Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities.

  12. The Desired State Configuration (DSC) extension for Windows requires that the target virtual machine is able to communicate with Azure. Start the VM first, because the VM must be online to deploy the DSC extension.
    Code:
    https://parveensingh.com/azure-powershell-dsc-zero-to-hero

  13. The "Bulk create user" operation in Azure AD is typically used for creating new users directly within Azure AD tenant, not for creating guest user accounts for external users. To bulk invite external users as guests to Azure AD tenant, should use the "Bulk invite" operation, which specifically handles guest invitations. This process involves uploading a CSV file with the required information and sending invitations to these external users to join Azure AD as guests.

  14. Connection monitor inspects only network/TCP traffic over a specific port.
    Packet capture inspects all network traffic; the session time limit defaults to 18,000 seconds (5 hours).

  15. To identify the network latency between an Azure virtual machine and an on-premises domain controller connected to Azure via ExpressRoute, use Connection Monitor, a feature of Azure Network Watcher. It monitors network connectivity between Azure endpoints and endpoints outside Azure, including on-premises Data Centers (DC). To leverage Connection Monitor, the DC needs the Azure Monitor agent installed so that it can communicate with Azure Monitor and report network connectivity data.
AWS Lab:
Create and Configure Amazon EC2 Auto Scaling with Launch Templates:
auto_scaling.png

  1. Create a Security Group (SG) for Launch template: Add rule under Inbound rules: protocol type, source

  2. Create a Key pair for the Launch template: File format: pem (Linux & Mac Users & VSCode & OpenSSH) / ppk (Windows & PuTTY)

  3. Creating a Launch template: use SG & Key pair from 1. & 2.
    Code:
    #!/bin/bash
    sudo su
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "<html> <h1> Response coming from server </h1> </ html>" > /var/www/html/index.html

  4. Create an Auto Scaling Group: Select at least two subnets, Desired, Minimum, & Maximum capacity are 2, No scaling policies

  5. Test the Auto Scaling group: try terminating an instance; the group should launch a replacement to maintain the desired capacity

Creating an Application Load Balancer and Auto Scaling Group in AWS:
alb_with_asg.png

  1. Create a Security Group for the Load balancer: same as 1. above

  2. Create a Security Group for Launch template: Inbound rule HTTP Source 1.'s SG

  3. Create a Key Pair for the Launch template

  4. Creating a Launch template: Choose any subnet, Select Launch-template-SG:
    Code:
    #!/bin/bash
    sudo su
    yum update -y
    yum install -y httpd
    systemctl start httpd
    systemctl enable httpd
    echo "Hello World from $(hostname -f)" > /var/www/html/index.html
    echo "Healthy" > /var/www/html/health.html

  5. Create Target group and The App Load Balancer: Health check path: /health.html, Select [us-east]-1a and [us-east]-1b, choose Load-balancer-SG, listener part select the created Target group.

  6. Create an Auto Scaling Group: Subnet same as 5., Existing load balancer target groups: web-server-TG, Turn on Elastic Load Balancing health checks, Health check grace period: 60 seconds, Desired & Min capacity: 1, Max: 4, Target tracking scaling policy, Target value: 30, Instance need: 60 seconds warm up

  7. Code:
    https://www.whizlabs.com/labs/support-document/ssh-into-ec-instance

  8. Install the stress:
    Code:
    sudo su
    yum -y update
    amazon-linux-extras install epel -y
    yum install stress -y
    stress --cpu 8 --timeout 300s

  9. Test the Auto Scaling group and Elastic Load Balancer: copy the DNS name of your load balancer and paste it into the browser; the IP shown in the response changes as requests are distributed across instances.

Azure 15+:
  1. Azure Storage Replication Types:
    Storage_Replication_Options.png

    • Zone-Redundant Storage (ZRS) replicates data synchronously across three storage clusters in a single region. Data remains available if a single data center in the region fails. ZRS supports only StorageV2 / General Purpose v2 (GPv2) accounts.
    • LRS would not remain available if a data center in the region fails.
    • GRS and RA-GRS use asynchronous replication and also protect against zone failure.
  2. Getting to know the Azure Log Analytics Workspace:
    Code:
    https://www.facebook.com/reel/873640411439002
    https://itgeist5blog.blogspot.com/2023/12/azure-log-analytics-workspace.html

  3. Object replication is a feature that allows you to replicate data, such as blobs, across different storage accounts or containers within the same storage account. It can be configured to automatically copy data from one storage location to another, within the same region or across regions. Object replication can be used for disaster recovery solutions or to distribute data globally for better performance and availability. It is similar to GRS, but more flexible, as you can choose the storage account and container to replicate to.
    The GRS secondary of a North Europe region is a copy of the data stored in a different region; the exact location of the secondary region depends on the specific Azure region you have selected. For the North Europe region, the secondary copy is stored in the West Europe region. This means that if there is an outage or disaster in the North Europe region, the data will still be available in the West Europe region, which provides a high level of data durability and protection.

  4. To perform a bulk delete of users in Azure Active Directory, you need to create and upload a CSV file that contains the list of users to be deleted. The file should include only the User Principal Name (UPN) of each user. The UPN is a unique identifier for each user in Azure AD and is the primary way Azure AD identifies and manages user accounts. Additional attributes such as the display name or usage location are not required for the bulk delete operation, though you may include them in the CSV file to keep track of the metadata associated with each user account.

  5. Understanding Azure Subscriptions, Accounts, RBAC, and Azure Active Directory:
    Code:
    http://itgeist5blog.blogspot.com/2018/07/azure-subscripitons-accounts-rbac-azure.html
    Changing the subscription won't cause downtime; it only changes the billing. You would need to redeploy the VM. After redeploying a VM, the temporary disk is lost, and dynamic IP addresses associated with the virtual network interface are updated. From Overview there is no option to move the VM to other hardware to skip maintenance; ideally you need an Availability Set with Update Domains defined.

  6. When a user presents a Shared Access Signature (SAS), only the allowed services can be used.
    Storage account access keys provide full access (to any service) to the configuration of a storage account, as well as the data. Always be careful to protect access keys.

  7. IT Service Management Connector (ITSMC) allows to connect Azure to a supported IT Service Management (ITSM) product or service. Azure services like Azure Log Analytics and Azure Monitor provide tools to detect, analyze, and troubleshoot problems with Azure and non-Azure resources. But the work items related to an issue typically reside in an ITSM product or service. ITSMC provides a bi-directional connection between Azure and ITSM tools to help resolve issues faster. ITSMC supports connections with the following ITSM tools: ServiceNow, System Center Service Manager, Provance, Cherwell.

  8. To create an Azure Storage account that supports Azure Data Lake Storage, you need to enable the hierarchical namespace. This allows you to organize and manipulate files and folders efficiently in a data lake. It also enables compatibility with the Hadoop Distributed File System (HDFS) API, which is widely used for big data analytics.
    To minimize costs for infrequently accessed data, you can choose the Cool access tier for the storage account. This tier offers lower storage costs than the Hot access tier, but higher access and transaction costs. The Cool access tier is suitable for data that is infrequently accessed or modified, such as short-term backup, disaster recovery, or archival data. Data in the Cool access tier should be stored for at least 30 days.
    To automatically replicate data to a secondary Azure region, choose geo-redundant storage (GRS) for the storage account. GRS replicates data synchronously three times within the primary region, then asynchronously to the secondary region. It provides the highest level of durability and availability for data and protects against regional outages or disasters.
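    Combined, the account could be created like this (the account name, resource group, and location are placeholders):
    Code:
    # StorageV2 + hierarchical namespace (Data Lake), Cool default tier, GRS
    az storage account create \
      --name mydatalakeacct --resource-group RG1 --location northeurope \
      --kind StorageV2 --sku Standard_GRS \
      --enable-hierarchical-namespace true \
      --access-tier Cool
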
AWS Lab:
Find vulnerabilities on an EC2 instance using Amazon Inspector:
inspector_lab.png

  1. Open ports 20-23 in the EC2 instance's security group

  2. Install an AWS Agent on EC2:
    Code:
    sudo su
    # Download the Inspector Classic agent installer (wget or curl, either one)
    wget https://inspector-agent.amazonaws.com/linux/latest/install
    # curl -O https://inspector-agent.amazonaws.com/linux/latest/install
    sudo bash install

  3. Create an assessment target: Amazon Inspector Classic > Cancel > Assessment Targets > Include all EC2 instances > Install the Amazon Inspector Agent > Save

  4. Create an assessment template: Target name: 3. > Select all four rules (Common Vulnerabilities & Exposures, CIS OS, etc.) > Create

  5. Run the assessment template: To see the Assessment Run and its result, click on the Assessment runs, Findings column

  6. Download the assessment run report

Azure 23+:
  1. Azure Load Balancer (LB) rules require a health probe to detect endpoint status. The configuration of the health probe and the probe responses determines which backend pool instances receive new connections. Use health probes to detect the failure of an application, generate a custom response to a health probe, or use the probe for flow control to manage load or planned downtime. When a health probe fails, the load balancer stops sending new connections to the unhealthy instance. Outbound connectivity isn't affected, only inbound.
    The network security configuration has to allow probe traffic sourced from the load balancer (the AzureLoadBalancer service tag), not just from clients.
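    A minimal probe-plus-rule sketch (the resource names, port, and path are assumptions):
    Code:
    # Health probe the rule will use to decide which backends get traffic
    az network lb probe create --resource-group RG1 --lb-name LB1 \
      --name http-health --protocol Http --port 80 --path /health

    # Load-balancing rule wired to that probe
    az network lb rule create --resource-group RG1 --lb-name LB1 \
      --name http-rule --protocol Tcp \
      --frontend-port 80 --backend-port 80 --probe-name http-health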

  2. You can only attach virtual machines that are in the same location and on the same virtual network as the LB. Virtual machines must have a Standard SKU public IP or no public IP.
    The LB needs to be a Standard SKU to accept individual VMs outside an availability set or Azure Virtual Machine Scale Sets (VMSS, the Azure counterpart of AWS Auto Scaling). VMs do not need public IPs, but if they have them, the IPs must be Standard SKU. VMs can only come from a single network; when they don't have a public IP, they are assigned an ephemeral IP.
    Also, when adding VMs to a backend pool, their current status does not matter.
    Note: The load balancer and public IP address SKUs must match when you use them with public IP addresses.