What are the best online courses and practice test sets to prepare for the Amazon certification exams (AWS-CERTIFIED-ADVANCED-NETWORKING-SPECIALTY, AWS-CERTIFIED-CLOUD-PRACTITIONER, AWS-CERTIFIED-DEVELOPER-ASSOCIATE, AWS-DEVOPS-ENGINEER-PROFESSIONAL, AWS-SOLUTION-ARCHITECT-ASSOCIATE, AWS-SOLUTION-ARCHITECT-PROFESSIONAL, AWS-SYSOPS)? First: exam practice tests; second: Lead4Pass Amazon experts. You can get free Amazon exam practice test questions here, or choose: https://www.lead4pass.com/amazon.html Study hard to pass the exam easily!

Table of Contents:

Latest Amazon Certifications Exam questions


Latest Amazon AWS-CERTIFIED-ADVANCED-NETWORKING-SPECIALTY List

AWS Certified Advanced Networking – Specialty: https://aws.amazon.com/certification/certified-advanced-networking-specialty/

Latest updates Amazon AWS-CERTIFIED-ADVANCED-NETWORKING-SPECIALTY exam practice questions(1-5)

QUESTION 1
A Systems Administrator is designing a hybrid DNS solution with split-view. The apex domain “example.com” should be
served through name servers across multiple top-level domains (TLDs). The name server for subdomain
“dev.example.com” should reside on-premises. The administrator has decided to use Amazon Route 53 to achieve this
scenario.
What procedural steps must be taken to implement the solution?
A. Use a Route 53 public hosted zone for example.com and a private hosted zone for dev.example.com
B. Use a Route 53 public and private hosted zone for example.com and perform subdomain delegation for
dev.example.com
C. Use a Route 53 public hosted zone for example.com and perform subdomain delegation for dev.example.com
D. Use a Route 53 private hosted zone for example.com and perform subdomain delegation for dev.example.com
Correct Answer: A

 

QUESTION 2
You deploy your Internet-facing application in the us-west-2 (Oregon) region. To manage this application and upload
content from your corporate network, you have a 1 Gbps AWS Direct Connect connection with a private virtual interface
via one of the associated Direct Connect locations. In normal operation, you use approximately 300 Mbps of the
available bandwidth, which is more than your Internet connection from the corporate network.
You need to deploy another identical instance of the application in us-east-1 (N. Virginia) as soon as possible. You need
to use the benefits of Direct Connect. Your design must be the most effective solution regarding cost, performance, and
time to deploy.
Which design should you choose?
A. Use the inter-region capabilities of Direct Connect to establish a private virtual interface from us-west-2 Direct
Connect location to the new VPC in us-east-1.
B. Deploy an IPsec VPN over your corporate Internet connection to us-east-1 to provide access to the new VPC.
C. Use the inter-region capabilities of Direct Connect to deploy an IPsec VPN over a public virtual interface to the new
VPC in us-east-1.
D. Use VPC peering to connect the existing VPC in us-west-2 to the new VPC in us-east-1, and then route traffic over
Direct Connect and transit the peering connection.
Correct Answer: A

 

QUESTION 3
Your security team implements a host-based firewall on all of your Amazon Elastic Compute Cloud (EC2) instances to
block all outgoing traffic. Exceptions must be requested for each specific requirement. Until you request a new rule, you
cannot access the instance metadata service. Which firewall rule should you request to be added to your instances to
allow instance metadata access?
A. Inbound; Protocol tcp; Source [Instance's EIP]; Destination 169.254.169.254
B. Inbound; Protocol tcp; Destination 169.254.169.254; Destination port 80
C. Outbound; Protocol tcp; Destination 169.254.169.254; Destination port 80
D. Outbound; Protocol tcp; Destination 169.254.169.254; Destination port 443
Correct Answer: C
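To make the reasoning concrete, here is a small self-contained Python sketch (the rule model and function names are invented for illustration, not an AWS API) showing why only an outbound TCP rule to 169.254.169.254 on port 80 matches the instance metadata request:

```python
# Hypothetical host-firewall rule model -- illustration only, not an AWS API.
# The metadata call is an *outbound* HTTP (TCP/80) request from the instance
# to the link-local address 169.254.169.254, so only option C matches it.

from dataclasses import dataclass

@dataclass(frozen=True)
class FirewallRule:
    direction: str   # "inbound" or "outbound"
    protocol: str    # "tcp", "udp", ...
    dest_ip: str
    dest_port: int

def allows_metadata_access(rule: FirewallRule) -> bool:
    """True if the rule permits the instance's own metadata request."""
    return (rule.direction == "outbound"
            and rule.protocol == "tcp"
            and rule.dest_ip == "169.254.169.254"
            and rule.dest_port == 80)

option_c = FirewallRule("outbound", "tcp", "169.254.169.254", 80)
option_d = FirewallRule("outbound", "tcp", "169.254.169.254", 443)  # wrong port

print(allows_metadata_access(option_c))  # True
print(allows_metadata_access(option_d))  # False
```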

 

QUESTION 4
You have to set up an AWS Direct Connect connection to connect your on-premises network to an AWS VPC. Due to budget
requirements, you can only provision a single Direct Connect port. You have two border gateway routers at your on-premises data center that can peer with the Direct Connect routers for redundancy.
Which two design methodologies, in combination, will achieve this connectivity? (Choose two.)
A. Terminate the Direct Connect circuit on a L2 border switch, which in turn has trunk connections to the two routers.
B. Create two Direct Connect private VIFs for the same VPC, each with a different peer IP.
C. Terminate the Direct Connect circuit on one of the routers, which in turn will have an IBGP session with the other
router.
D. Create one Direct Connect private VIF for the VPC with two customer peer IPs.
E. Provision two VGWs for the VPC and create one Direct Connect private VIF per VGW.
Correct Answer: AD

 

QUESTION 5
A legacy, on-premises web application cannot be load balanced effectively. There are both planned and unplanned
events that cause usage spikes to millions of concurrent users. The existing infrastructure cannot handle the usage
spikes. The CIO has mandated that the application be moved to the cloud to avoid further disruptions, with the
additional requirement that source IP addresses be unaltered to support network traffic-monitoring needs. Which of the
following designs will meet these requirements?
A. Use an Auto Scaling group of Amazon EC2 instances behind a Classic Load Balancer.
B. Use an Auto Scaling group of EC2 instances in a target group behind an Application Load Balancer.
C. Use an Auto Scaling group of EC2 instances in a target group behind a Classic Load Balancer.
D. Use an Auto Scaling group of EC2 instances in a target group behind a Network Load Balancer.
Correct Answer: D

[PDF q1-13] Free Amazon AWS-CERTIFIED-ADVANCED-NETWORKING-SPECIALTY pdf dumps download from Google Drive: https://drive.google.com/open?id=1FVpRalk2flV9EmyUcNRJ2aRI72lfSln-

Full Amazon AWS-CERTIFIED-ADVANCED-NETWORKING-SPECIALTY exam practice questions: https://www.lead4pass.com/aws-certified-advanced-networking-specialty.html (Total Questions: 110 Q&A)

Latest Amazon AWS-CERTIFIED-CLOUD-PRACTITIONER List

AWS Certified Cloud Practitioner:https://aws.amazon.com/certification/certified-cloud-practitioner/

Latest updates Amazon AWS-CERTIFIED-CLOUD-PRACTITIONER exam practice questions (1-5)

QUESTION 1
Under the shared responsibility model, which of the following is a shared control between a customer and AWS?
A. Physical controls
B. Patch management
C. Zone security
D. Data center auditing
Correct Answer: B

 

QUESTION 2
What is Amazon CloudWatch?
A. A code repository with customizable build and team commit features.
B. A metrics repository with customizable notification thresholds and channels.
C. A security configuration repository with threat analytics.
D. A rule repository of a web application firewall with automated vulnerability prevention features.
Correct Answer: B
Amazon CloudWatch is basically a metrics repository. An AWS service — such as Amazon EC2 — puts metrics into the
repository, and you retrieve statistics based on those metrics. If you put your own custom metrics into the repository,
you can retrieve statistics on these metrics as well.
Reference: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_architecture.html
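The "metrics repository" model described above can be illustrated with a toy in-memory sketch (this is not the real CloudWatch or boto3 API, just an illustration of the put-datapoints, query-statistics pattern):

```python
# Toy in-memory model of a metrics repository -- illustration only.
# Services put datapoints in; you retrieve statistics computed over them.

from collections import defaultdict

class MetricsRepository:
    def __init__(self):
        self._data = defaultdict(list)  # (namespace, metric) -> datapoints

    def put_metric_data(self, namespace: str, metric: str, value: float) -> None:
        self._data[(namespace, metric)].append(value)

    def get_statistics(self, namespace: str, metric: str) -> dict:
        points = self._data[(namespace, metric)]
        return {"SampleCount": len(points),
                "Average": sum(points) / len(points),
                "Maximum": max(points),
                "Minimum": min(points)}

repo = MetricsRepository()
for v in (40.0, 60.0, 80.0):            # e.g. EC2 pushing CPU readings
    repo.put_metric_data("AWS/EC2", "CPUUtilization", v)
print(repo.get_statistics("AWS/EC2", "CPUUtilization")["Average"])  # 60.0
```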

 

QUESTION 3
Which of the following allows users to provision a dedicated network connection from their internal network to AWS?
A. AWS CloudHSM
B. AWS Direct Connect
C. AWS VPN
D. Amazon Connect
Correct Answer: B
AWS Direct Connect lets you establish a dedicated network connection between your network and one of the AWS
Direct Connect locations. Using industry standard 802.1q VLANs, this dedicated connection can be partitioned into
multiple virtual interfaces. This allows you to use the same connection to access public resources such as objects
stored in Amazon S3 using public IP address space, and private resources such as Amazon EC2 instances running
within an Amazon Virtual Private Cloud (VPC) using private IP space, while maintaining network separation between the
public and private environments. Virtual interfaces can be reconfigured at any time to meet your changing needs.
Reference: https://aws.amazon.com/directconnect/

 

QUESTION 4
How does AWS charge for AWS Lambda?
A. Users bid on the maximum price they are willing to pay per hour.
B. Users choose a 1-, 3- or 5-year upfront payment term.
C. Users pay for the required permanent storage on a file system or in a database.
D. Users pay based on the number of requests and consumed compute resources.
Correct Answer: D
AWS Lambda charges its users by the number of requests for their functions and by duration, which is the time
the code takes to execute. AWS Lambda counts a request each time code starts running in response to an event, and
it charges for the total number of requests across all of your functions. Duration is calculated from the time your code
begins executing until it returns or is terminated, rounded up to the nearest 100 ms. Lambda pricing also depends
on the amount of memory allocated to the function.
Reference: https://dashbird.io/blog/aws-lambda-pricing-model-explained/
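A back-of-the-envelope calculator makes this pricing model concrete. The rates below are illustrative defaults, not official prices, and the 100 ms round-up follows the explanation above (newer Lambda billing rounds to 1 ms):

```python
import math

def lambda_cost(requests: int, avg_duration_ms: float, memory_mb: int,
                price_per_million_requests: float = 0.20,        # assumed rate
                price_per_gb_second: float = 0.0000166667) -> float:  # assumed rate
    """Estimate a Lambda bill: request charge plus GB-seconds of compute."""
    billed_ms = math.ceil(avg_duration_ms / 100) * 100   # round up to nearest 100 ms
    gb_seconds = requests * (billed_ms / 1000) * (memory_mb / 1024)
    return (requests / 1_000_000) * price_per_million_requests \
        + gb_seconds * price_per_gb_second

# 1M requests averaging 120 ms at 512 MB: each billed as 200 ms
print(round(lambda_cost(1_000_000, 120, 512), 2))  # 1.87
```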

 

QUESTION 5
Which load balancer types are available with Elastic Load Balancing (ELB)? (Choose two.)
A. Public load balancers with AWS Application Auto Scaling capabilities
B. F5 Big-IP and Citrix NetScaler load balancers
C. Classic Load Balancers
D. Cross-zone load balancers with public and private IPs
E. Application Load Balancers
Correct Answer: CE
Elastic Load Balancing supports the following types of load balancers: Application Load Balancers, Network Load
Balancers, and Classic Load Balancers. Amazon ECS services can use any of these load balancer types. Application Load
Balancers are used to route HTTP/HTTPS (or Layer 7) traffic. Network Load Balancers and Classic Load Balancers are
used to route TCP (or Layer 4) traffic.
Reference: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/load-balancer-types.html

[PDF q1-13] Free Amazon AWS-CERTIFIED-CLOUD-PRACTITIONER pdf dumps download from Google Drive: https://drive.google.com/open?id=1neA2rZn8Ub2HUG8Cl2k-MI4uvAAdA-Mr

Full Amazon AWS-CERTIFIED-CLOUD-PRACTITIONER exam practice questions: https://www.lead4pass.com/aws-certified-cloud-practitioner.html (Total Questions: 296 Q&A)

Latest Amazon AWS-CERTIFIED-DEVELOPER-ASSOCIATE List

AWS Certified Developer – Associate Certification:https://aws.amazon.com/certification/certified-developer-associate/

Latest updates Amazon AWS-CERTIFIED-DEVELOPER-ASSOCIATE exam practice questions (1-5)

QUESTION 1
An application is being developed to audit several AWS accounts. The application will run in Account A and must
access AWS services in Accounts B and C. What is the MOST secure way to allow the application to call AWS services
in each audited account?
A. Configure cross-account roles in each audited account. Write code in Account A that assumes those roles
B. Use S3 cross-region replication to communicate among accounts, with Amazon S3 event notifications to trigger
Lambda functions
C. Deploy an application in each audited account with its own role. Have Account A authenticate with the application
D. Create an IAM user with an access key in each audited account. Write code in Account A that uses those access
keys
Correct Answer: A
Cross-account IAM roles let code in Account A assume a role and obtain temporary credentials in each audited account, avoiding long-lived access keys.

 

QUESTION 2
The Lambda function below is being called through an API using Amazon API Gateway. The average execution time for
the Lambda function is about 1 second. The pseudocode for the Lambda function is as shown in the exhibit.

What two actions can be taken to improve the performance of this Lambda function without increasing the cost of the
solution? (Select two.)
A. Package only the modules the Lambda function requires
B. Use Amazon DynamoDB instead of Amazon RDS
C. Move the initialization of the variable Amazon RDS connection outside of the handler function
D. Implement custom database connection pooling with the Lambda function
E. Implement local caching of Amazon RDS data so Lambda can re-use the cache
Correct Answer: AC
Packaging only the required modules reduces cold-start time, and moving the RDS connection setup outside the handler lets warm invocations reuse the connection.

 

QUESTION 3
A company is using AWS CodePipeline to deliver one of its applications. The delivery pipeline is triggered by changes to
the master branch of an AWS CodeCommit repository and uses AWS CodeBuild to implement the test and build stages
of the process and AWS CodeDeploy to deploy the application.
The pipeline has been operating successfully for several months and there have been no modifications. Following a
recent change to the application's source code, AWS CodeDeploy has not deployed the updated application as
expected.
What are the possible causes? (Choose two.)
A. The change was not made in the master branch of the AWS CodeCommit repository.
B. One of the earlier stages in the pipeline failed and the pipeline has terminated.
C. One of the Amazon EC2 instances in the company's AWS CodePipeline cluster is inactive.
D. The AWS CodePipeline is incorrectly configured and is not executing AWS CodeDeploy.
E. AWS CodePipeline does not have permissions to access AWS CodeCommit.
Correct Answer: AB
A change outside the master branch will not trigger the pipeline, and a failure in an earlier stage stops it; CodePipeline does not run on a cluster of EC2 instances, so option C is not a plausible cause.

 

QUESTION 4
An application is designed to use Amazon SQS to manage messages from many independent senders. Each sender's
messages must be processed in the order they are received.
Which SQS feature should be implemented by the Developer?
A. Configure each sender with a unique MessageGroupId
B. Enable MessageDeduplicationIds on the SQS queue
C. Configure each message with unique MessageGroupIds.
D. Enable ContentBasedDeduplication on the SQS queue
Correct Answer: A
A MessageGroupId per sender preserves each sender's order within a FIFO queue; a unique MessageGroupId per message would provide no ordering guarantee across a sender's messages.
Reference: https://aws.amazon.com/blogs/developer/how-the-amazon-sqs-fifo-api-works/
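The ordering guarantee can be sketched in a few lines of Python (a toy simulation, not the SQS API): with one MessageGroupId per sender, each sender's messages stay in arrival order even when deliveries from different senders interleave:

```python
# Toy simulation of FIFO-queue message grouping -- not the real SQS API.
from collections import defaultdict, deque

def receive_in_groups(messages):
    """messages: list of (message_group_id, body) in arrival order.
    Returns per-group delivery order, FIFO within each group."""
    groups = defaultdict(deque)
    for group_id, body in messages:
        groups[group_id].append(body)
    return {g: list(q) for g, q in groups.items()}

# Two senders interleaved on the wire; each keeps its own order.
arrivals = [("sender-A", "a1"), ("sender-B", "b1"),
            ("sender-A", "a2"), ("sender-B", "b2")]
print(receive_in_groups(arrivals))
```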

 

QUESTION 5
A Developer created configuration specifications for an AWS Elastic Beanstalk application in a file named
healthcheckurl.yaml in the .ebextensions/ directory of their application source bundle. The file contains the following:

After the application launches, the health check is not being run on the correct path, even though it is valid. What can be
done to correct this configuration file?
A. Convert the file to JSON format.
B. Rename the file to a .config extension.
C. Change the configuration section from options_settings to resources.
D. Change the namespace of the option settings to a custom namespace.
Correct Answer: B
Elastic Beanstalk only processes files in .ebextensions that have a .config extension.
Reference: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html
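The exhibit is not reproduced here, but for reference, a renamed healthcheckurl.config inside .ebextensions/ would typically set the health check path through option_settings, along these lines (the /health value is illustrative, not taken from the missing exhibit):

```yaml
# Hypothetical .ebextensions/healthcheckurl.config -- illustrative sketch only
option_settings:
  aws:elasticbeanstalk:application:
    Application Healthcheck URL: /health
```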

[PDF q1-13] Free Amazon AWS-CERTIFIED-DEVELOPER-ASSOCIATE pdf dumps download from Google Drive: https://drive.google.com/open?id=1SqDlleN5aSJ8tCV1uzBbnGDnKiNL0_UZ

Full Amazon AWS-CERTIFIED-DEVELOPER-ASSOCIATE exam practice questions: https://www.lead4pass.com/aws-certified-developer-associate.html (Total Questions: 224 Q&A)

Latest Amazon AWS-DEVOPS-ENGINEER-PROFESSIONAL List

AWS Certified DevOps Engineer – Professional:https://aws.amazon.com/certification/certified-devops-engineer-professional/

Latest updates Amazon AWS-DEVOPS-ENGINEER-PROFESSIONAL exam practice questions (1-5)

QUESTION 1
An Application team has three environments for their application: development, pre-production, and production. The
team recently adopted AWS CodePipeline. However, the team has had several deployments of misconfigured or
nonfunctional development code into the production environment, resulting in user disruption and downtime. The
DevOps Engineer must review the pipeline and add steps to identify problems with the application before it is deployed.
What should the Engineer do to identify functional issues during the deployment process? (Choose two.)
A. Use Amazon Inspector to add a test action to the pipeline. Use the Amazon Inspector Runtime Behavior Analysis
Inspector rules package to check that the deployed code complies with company security standards before deploying it
to production.
B. Use AWS CodeBuild to add a test action to the pipeline to replicate common user activities and ensure that the
results are as expected before progressing to production deployment.
C. Create an AWS CodeDeploy action in the pipeline with a deployment configuration that automatically deploys the
application code to a limited number of instances. The action then pauses the deployment so that the QA team can
review the application functionality. When the review is complete, CodeDeploy resumes and deploys the application to
the remaining production Amazon EC2 instances.
D. After the deployment process is complete, run a testing activity on an Amazon EC2 instance in a different region that
accesses the application to simulate user behavior. If unexpected results occur, the testing activity sends a warning to an
Amazon SNS topic. Subscribe to the topic to get updates.
E. Add an AWS CodeDeploy action in the pipeline to deploy the latest version of the development code to
pre-production. Add a manual approval action in the pipeline so that the QA team can test and confirm the expected
functionality. After the manual approval action, add a second CodeDeploy action that deploys the approved code to the
production environment.
Correct Answer: BC
Amazon Inspector assesses security posture, not application functionality; CodeBuild test actions and a paused canary deployment with QA review both catch functional issues before the full production rollout.

 

QUESTION 2
A DevOps Engineer uses Docker container technology to build an image-analysis application. The application often
sees spikes in traffic. The Engineer must automatically scale the application in response to customer demand while
maintaining cost effectiveness and minimizing any impact on availability.
What will allow the FASTEST response to spikes in traffic while fulfilling the other requirements?
A. Create an Amazon ECS cluster with the container instances in an Auto Scaling group. Configure the ECS service to
use Service Auto Scaling. Set up Amazon CloudWatch alarms to scale the ECS service and cluster.
B. Deploy containers on an AWS Elastic Beanstalk Multicontainer Docker environment. Configure Elastic Beanstalk to
automatically scale the environment based on Amazon CloudWatch metrics.
C. Create an Amazon ECS cluster using Spot instances. Configure the ECS service to use Service Auto Scaling. Set up
Amazon CloudWatch alarms to scale the ECS service and cluster.
D. Deploy containers on Amazon EC2 instances. Deploy a container scheduler to schedule containers onto EC2
instances. Configure EC2 Auto Scaling for EC2 instances based on available Amazon CloudWatch metrics.
Correct Answer: D

 

QUESTION 3
A DevOps Engineer needs to design and implement a backup mechanism for Amazon EFS. The Engineer is given the
following requirements:
The backup should run on schedule.
The backup should be stopped if the backup window expires.
The backup should be stopped if the backup completes before the backup window.
The backup logs should be retained for further analysis.
The design should support highly available and fault-tolerant paradigms.
Administrators should be notified with backup metadata.
Which design will meet these requirements?
A. Use AWS Lambda with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run
backup scripts on Amazon EC2 in an Auto Scaling group. Use Auto Scaling lifecycle hooks and the SSM Run
Command on EC2 for uploading backup logs to Amazon S3. Use Amazon SNS to notify administrators with backup
activity metadata.
B. Use Amazon SWF with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run
backup scripts on Amazon EC2 in an Auto Scaling group. Use Auto Scaling lifecycle hooks and the SSM Run
Command on EC2 for uploading backup logs to Amazon Redshift. Use CloudWatch Alarms to notify administrators with
backup activity metadata.
C. Use AWS Data Pipeline with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run
backup scripts on Amazon EC2 in a single Availability Zone. Use Auto Scaling lifecycle hooks and the SSM Run
Command on EC2 for uploading the backup logs to Amazon RDS. Use Amazon SNS to notify administrators with
backup activity metadata.
D. Use AWS CodePipeline with an Amazon CloudWatch Events rule for scheduling the start/stop of backup activity. Run
backup scripts on Amazon EC2 in a single Availability Zone. Use Auto Scaling lifecycle hooks and the SSM Run
Command on Amazon EC2 for uploading backup logs to Amazon S3. Use Amazon SES to notify admins with backup
activity metadata.
Correct Answer: A
Option C runs the backup scripts in a single Availability Zone, which conflicts with the highly available, fault-tolerant requirement.

 

QUESTION 4
A healthcare company has a critical application running in AWS. Recently, the company experienced some downtime. If
it happens again, the company needs to be able to recover its application in another AWS Region. The application uses
Elastic Load Balancing and Amazon EC2 instances. The company also maintains a custom AMI that contains its
application. This AMI is changed frequently. The workload is required to run in the primary region, unless there is a
regional service disruption, in which case traffic should fail over to the new region. Additionally, the cost for the second
region needs to be low. The RTO is 2 hours. Which solution allows the company to fail over to another region in the
event of a failure, and also meet the above requirements?
A. Maintain a copy of the AMI from the main region in the backup region. Create an Auto Scaling group with one
instance using a launch configuration that contains the copied AMI. Use an Amazon Route 53 record to direct traffic to
the load balancer in the backup region in the event of failure, as required. Allow the Auto Scaling group to scale out as
needed during a failure.
B. Automate the copying of the AMI in the main region to the backup region. Generate an AWS Lambda function that
will create an EC2 instance from the AMI and place it behind a load balancer. Using the same Lambda function, point
the Amazon Route 53 record to the load balancer in the backup region. Trigger the Lambda function in the event of a
failure.
C. Place the AMI in a replicated Amazon S3 bucket. Generate an AWS Lambda function that can create a launch
configuration and assign it to an already created Auto Scaling group. Have one instance in this Auto Scaling group
ready to accept traffic. Trigger the Lambda function in the event of a failure. Use an Amazon Route 53 record and
modify it with the same Lambda function to point to the load balancer in the backup region.
D. Automate the copying of the AMI to the backup region. Create an AWS Lambda function that can create a launch
configuration and assign it to an already created Auto Scaling group. Set the Auto Scaling group maximum size to 0 and
only increase it with the Lambda function during a failure. Trigger the Lambda function in the event of a failure. Use an
Amazon Route 53 record and modify it with the same Lambda function to point to the load balancer in the backup
region.
Correct Answer: D
Keeping the backup Auto Scaling group at a maximum size of 0 until a failure keeps the standby region's cost low while still meeting the 2-hour RTO.

 

QUESTION 5
A DevOps Engineer has a single Amazon DynamoDB table that receives shipping orders and tracks inventory. The
Engineer has three AWS Lambda functions reading from a DynamoDB stream on that table. The Lambda functions
perform various functions such as doing an item count, moving items to Amazon Kinesis Data Firehose, monitoring
inventory levels, and creating vendor orders when parts are low. While reviewing logs, the Engineer notices the Lambda
functions occasionally fail under increased load, receiving a stream throttling error.
Which is the MOST cost-effective solution that requires the LEAST amount of operational management?
A. Use AWS Glue integration to ingest the DynamoDB stream, then migrate the Lambda code to an AWS Fargate task.
B. Use Amazon Kinesis streams instead of DynamoDB streams, then use Kinesis analytics to trigger the Lambda
functions.
C. Create a fourth Lambda function and configure it to be the only Lambda reading from the stream. Then use this
Lambda function to pass the payload to the other three Lambda functions.
D. Have the Lambda functions query the table directly and disable DynamoDB streams. Then have the Lambda
functions query from a global secondary index.
Correct Answer: C
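The fan-out pattern in answer C can be sketched as follows (handler names and payload fields are invented for illustration): a single consumer reads the stream and forwards each record to the three downstream handlers, so only one reader polls the DynamoDB stream:

```python
# Toy sketch of the fan-out pattern -- handler names and fields are hypothetical.

def count_items(record, state): state["count"] += 1
def check_inventory(record, state): state["low"] = record["qty"] < 5
def forward_to_firehose(record, state): state["forwarded"].append(record)

HANDLERS = (count_items, check_inventory, forward_to_firehose)

def fan_out(record, state):
    """The single stream-reader Lambda: pass each payload to every handler."""
    for handler in HANDLERS:
        handler(record, state)

state = {"count": 0, "low": False, "forwarded": []}
fan_out({"sku": "part-7", "qty": 3}, state)
print(state["count"], state["low"], len(state["forwarded"]))  # 1 True 1
```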

[PDF q1-13] Free Amazon AWS-DEVOPS-ENGINEER-PROFESSIONAL pdf dumps download from Google Drive: https://drive.google.com/open?id=1YZAoidm1ZSYLyrVE0LH-1so9vGNardOX

Full Amazon AWS-DEVOPS-ENGINEER-PROFESSIONAL exam practice questions: https://www.lead4pass.com/aws-devops-engineer-professional.html (Total Questions: 266 Q&A)

Latest Amazon AWS-SOLUTION-ARCHITECT-ASSOCIATE List

AWS Certified Solutions Architect – Associate Certification:https://aws.amazon.com/certification/certified-solutions-architect-associate/

Latest updates Amazon AWS-SOLUTION-ARCHITECT-ASSOCIATE exam practice questions (1-5)

QUESTION 1
A company plans to use AWS for all new batch processing workloads. The company's developers use Docker
containers for the new batch processing. The system design must accommodate critical and non-critical batch
processing workloads 24/7.
How should a Solutions Architect design this architecture in a cost-efficient manner?
A. Purchase Reserved Instances to run all containers. Use Auto Scaling groups to schedule jobs.
B. Host a container management service on Spot Instances. Use Reserved Instances to run Docker containers.
C. Use Amazon ECS orchestration and Auto Scaling groups: one with Reserved Instances, one with Spot Instances.
D. Use Amazon ECS to manage container orchestration. Purchase Reserved Instances to run all batch workloads at the
same time.
Correct Answer: C

 

QUESTION 2
Amazon RDS provides a facility to modify the backup retention policy for automated backups, with a value of 0 indicating
no backup retention. What is the maximum retention period allowed, in days?
A. 45
B. 35
C. 15
D. 10
Correct Answer: B

 

QUESTION 3
A Solutions Architect is designing a system that will store Personally Identifiable Information (PII) in an Amazon S3
bucket. Due to compliance and regulatory requirements, both the master keys and unencrypted data should never be
sent to AWS.
What Amazon S3 encryption technique should the Architect choose?
A. Amazon S3 client-side encryption with an AWS KMS-managed customer master key (CMK)
B. Amazon S3 server-side encryption with an AWS KMS-managed key
C. Amazon S3 client-side encryption with a client-side master key
D. Amazon S3 server-side encryption with a customer-provided key
Correct Answer: C
Reference: http://jayendrapatil.com/aws-s3-data-protection/

 

QUESTION 4
Your company plans to host a large donation website on Amazon Web Services (AWS). You anticipate a large and
undetermined amount of traffic that will create many database writes. To be certain that you do not drop any writes to a
database hosted on AWS, which service should you use?
A. Amazon RDS with provisioned IOPS up to the anticipated peak write throughput.
B. Amazon Simple Queue Service (SQS) for capturing the writes and draining the queue to write to the database.
C. Amazon ElastiCache to store the writes until the writes are committed to the database.
D. Amazon DynamoDB with provisioned write throughput up to the anticipated peak write throughput.
Correct Answer: B
https://aws.amazon.com/sqs/faqs/
There is no limit on the number of messages that can be pushed onto SQS. The SQS message retention period is 4 days
by default and can be increased to 14 days. This ensures that no writes are missed.
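The queue-as-buffer pattern behind this answer can be sketched in plain Python (a toy model, not the SQS API): producers enqueue writes at any rate, and a consumer drains the queue and commits at a pace the database can sustain, so no write is dropped:

```python
# Toy model of the queue-as-write-buffer pattern -- not the real SQS API.
from collections import deque

queue = deque()    # stands in for the SQS queue
database = []      # stands in for the backing database

def enqueue_write(record):
    """Fast path: the site accepts the write immediately."""
    queue.append(record)

def drain(batch_size=2):
    """Slow path: a worker commits queued writes at its own pace."""
    while queue:
        for _ in range(min(batch_size, len(queue))):
            database.append(queue.popleft())

for i in range(5):
    enqueue_write({"donation_id": i})
drain()
print(len(database))  # 5 -- every write persisted
```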

 

QUESTION 5
A customer has a service based out of Oregon, U.S. and Paris, France. The application is storing data in an S3 bucket
located in Oregon, and that data is updated frequently. The Paris office is experiencing slow response times when
retrieving objects.
What should a Solutions Architect do to resolve the slow response times for the Paris office?
A. Set up an S3 bucket based in Paris, and enable cross-region replication from the Oregon bucket to the Paris bucket.
B. Create an Application Load Balancer that load balances data retrieval between the Oregon S3 bucket and a new
Paris S3 bucket.
C. Create an Amazon CloudFront distribution with the bucket located in Oregon as the origin and set the Maximum Time
to Live (TTL) for cache behavior to 0.
D. Set up an S3 bucket based in Paris, and enable a lifecycle management rule to transition data from the Oregon
bucket to the Paris bucket.
Correct Answer: A

[PDF q1-13] Free Amazon AWS-SOLUTION-ARCHITECT-ASSOCIATE pdf dumps download from Google Drive: https://drive.google.com/open?id=1we-w0tGv_k83l4I9Nw6pLFhQk_yztejR

Full Amazon AWS-SOLUTION-ARCHITECT-ASSOCIATE exam practice questions: https://www.lead4pass.com/aws-solution-architect-associate.html (Total Questions: 424 Q&A)

Latest Amazon AWS-SOLUTION-ARCHITECT-PROFESSIONAL List

AWS Certified Solutions Architect – Professional:https://aws.amazon.com/certification/certified-solutions-architect-professional/

Latest updates Amazon AWS-SOLUTION-ARCHITECT-PROFESSIONAL exam practice questions (1-5)

QUESTION 1
A bank is re-architecting its mainframe-based credit card approval processing application to a cloud-native application
on the AWS cloud. The new application will receive up to 1,000 requests per second at peak load. There are multiple
steps to each transaction, and each step must receive the result of the previous step. The entire request must return an
authorization response within less than 2 seconds with zero data loss. Every request must receive a response. The
solution must be Payment Card Industry Data Security Standard (PCI DSS)-compliant. Which option will meet all of the
bank's objectives with the LEAST complexity and LOWEST cost while also meeting compliance requirements?
A. Create an Amazon API Gateway to process inbound requests using a single AWS Lambda task that performs
multiple steps and returns a JSON object with the approval status. Open a support case to increase the limit for the
number of concurrent Lambdas to allow room for bursts of activity due to the new application.
B. Create an Application Load Balancer with an Amazon ECS cluster on Amazon EC2 Dedicated instances in a target
group to process incoming requests. Use Auto Scaling to scale the cluster out/in based on average CPU utilization.
Deploy a web service that processes all of the approval steps and returns a JSON object with the approval status.
C. Deploy the application on Amazon EC2 on Dedicated Instances. Use an Elastic Load Balancer in front of a farm of
application servers in an Auto Scaling group to handle incoming requests. Scale out/in based on a custom Amazon
CloudWatch metric for the number of inbound requests per second after measuring the capacity of a single instance.
D. Create an Amazon API Gateway to process inbound requests using a series of AWS Lambda processes, each with
an Amazon SQS input queue. As each step completes, it writes its result to the next step's queue. The final step
returns a JSON object with the approval status. Open a support case to increase the limit for the number of concurrent
Lambdas to allow room for bursts of activity due to the new application.
Correct Answer: C

 

QUESTION 2
An organization has a write-intensive mobile application that uses Amazon API Gateway, AWS Lambda, and Amazon
DynamoDB. The application has scaled well, however, costs have increased exponentially because of higher than
anticipated Lambda costs. The application's use is unpredictable, but there has been a steady 20% increase in
utilization every month.
While monitoring the current Lambda functions, the Solutions Architect notices that the execution-time averages 4.5
minutes. Most of the wait time is the result of a high-latency network call to a 3-TB MySQL database server that is
on-premises. A VPN is used to connect to the VPC, so the Lambda functions have been configured with a five-minute
timeout.
How can the Solutions Architect reduce the cost of the current architecture?
A. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
Enable local caching in the mobile application to reduce the Lambda function invocation calls. Monitor the Lambda
function performance; gradually adjust the timeout and memory properties to lower values while maintaining an
acceptable execution time. Offload the frequently accessed records from DynamoDB to Amazon ElastiCache.
B. Replace the VPN with AWS Direct Connect to reduce the network latency to the on-premises MySQL database.
Cache the API Gateway results to Amazon CloudFront. Use Amazon EC2 Reserved Instances instead of Lambda.
Enable Auto Scaling on EC2, and use Spot Instances during peak times. Enable DynamoDB Auto Scaling to manage
target utilization.
C. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable caching of the Amazon API
Gateway results in Amazon CloudFront to reduce the number of Lambda function invocations. Monitor the Lambda
function performance; gradually adjust the timeout and memory properties to lower values while maintaining an
acceptable execution time. Enable DynamoDB Accelerator for frequently accessed records, and enable the DynamoDB
Auto Scaling feature.
D. Migrate the MySQL database server into a Multi-AZ Amazon RDS for MySQL. Enable API caching on API Gateway
to reduce the number of Lambda function invocations. Continue to monitor the AWS Lambda function performance;
gradually adjust the timeout and memory properties to lower values while maintaining an acceptable execution time.
Enable Auto Scaling in DynamoDB.
Correct Answer: A
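Why a 4.5-minute average execution matters: Lambda bills compute by GB-seconds, so cutting wait time cuts cost almost linearly. A minimal back-of-the-envelope sketch (the per-GB-second rate and the helper name are illustrative assumptions, not quoted AWS prices or APIs):

```python
# Back-of-the-envelope Lambda compute cost. The per-GB-second rate below is
# an assumed illustrative figure, not a current AWS list price.
PRICE_PER_GB_SECOND = 0.0000166667  # USD, assumed

def invocation_cost(duration_seconds: float, memory_mb: int) -> float:
    """Estimated compute cost of one Lambda invocation."""
    gb_seconds = duration_seconds * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND

# A 4.5-minute invocation at 1 GB, as in the scenario above:
slow = invocation_cost(4.5 * 60, 1024)
# The same invocation if the database round-trip took 5 seconds:
fast = invocation_cost(5, 1024)
```

With these assumed numbers, the 4.5-minute call costs 54 times as much per invocation as a 5-second call, which is why reducing the network latency (or the timeout/memory settings) dominates the savings.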

 

QUESTION 3
By default, temporary security credentials for an IAM user are valid for a maximum of 12 hours, but you can request a
duration as long as _________ hours.
A. 24
B. 36
C. 10
D. 48
Correct Answer: B
By default, temporary security credentials for an IAM user are valid for a maximum of 12 hours, but you can request a
duration as short as 15 minutes or as long as 36 hours.
http://docs.aws.amazon.com/STS/latest/UsingSTS/CreatingSessionTokens.html
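The 15-minute-to-36-hour window above corresponds to the `DurationSeconds` parameter that STS accepts for an IAM user's session token. A small sanity-check sketch (the helper function is ours, not an AWS API):

```python
# STS DurationSeconds bounds for an IAM user's session token, per the
# answer above: minimum 15 minutes, maximum 36 hours.
MIN_DURATION = 15 * 60       # 15 minutes -> 900 seconds
MAX_DURATION = 36 * 60 * 60  # 36 hours   -> 129600 seconds

def valid_session_duration(seconds: int) -> bool:
    """True if the requested duration falls inside the allowed window."""
    return MIN_DURATION <= seconds <= MAX_DURATION
```

The 12-hour default (43200 seconds) sits comfortably inside this window, while the 48-hour option from the question does not.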

 

QUESTION 4
Regarding Identity and Access Management (IAM), which type of special account belonging to your application allows
your code to access Google services programmatically?
A. Service account
B. Simple Key
C. OAuth
D. Code account
Correct Answer: A
A service account is a special Google account that can be used by applications to access Google services
programmatically. This account belongs to your application or a virtual machine (VM), instead of to an individual end
user. Your application uses the service account to call the Google API of a service, so that the users aren't directly
involved. A service account can have zero or more pairs of service account keys, which are used to authenticate to
Google. A service account key is a public/private key pair generated by Google. Google retains the public key, while the
user is given the private key. https://cloud.google.com/iam/docs/service-accounts

 

QUESTION 5
Your company runs a customer-facing event registration site. This site is built with a 3-tier architecture with web and
application tier servers and a MySQL database. The application requires 6 web tier servers and 6 application tier servers
for normal operation, but can run on a minimum of 65% server capacity and a single MySQL database.
When deploying this application in a region with three availability zones (AZs) which architecture provides high
availability?
A. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling
Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in
each AZ inside an Auto Scaling Group behind an ELB and one RDS (Relational Database Service) instance deployed
with read replicas in the other AZ.
B. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling
Group behind an ELB (elastic load balancer) and an application tier deployed across 3 AZs with 2 EC2 instances in
each AZ inside an Auto Scaling Group behind an ELB and one RDS (Relational Database Service) Instance deployed
with read replicas in the two other AZs.
C. A web tier deployed across 2 AZs with 3 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling
Group behind an ELB (elastic load balancer), and an application tier deployed across 2 AZs with 3 EC2 instances in
each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
D. A web tier deployed across 3 AZs with 2 EC2 (Elastic Compute Cloud) instances in each AZ inside an Auto Scaling
Group behind an ELB (elastic load balancer), and an application tier deployed across 3 AZs with 2 EC2 instances in
each AZ inside an Auto Scaling Group behind an ELB, and a Multi-AZ RDS (Relational Database Service) deployment.
Correct Answer: D
Amazon RDS Multi-AZ Deployments

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for Database (DB) Instances, making them a natural fit for production database workloads. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Each AZ runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. In case of an infrastructure failure (for example, instance hardware failure, storage failure, or network disruption), Amazon RDS performs an automatic failover to the standby, so that you can resume database operations as soon as the failover is complete. Since the endpoint for your DB Instance remains the same after a failover, your application can resume database operation without the need for manual administrative intervention.

Enhanced Durability

Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary. Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server-native Mirroring technology. Both approaches safeguard your data in the event of a DB Instance failure or loss of an Availability Zone. If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.

Amazon Aurora employs a highly durable, SSD-backed virtualized storage layer purpose-built for database workloads. Amazon Aurora automatically replicates your volume six ways, across three Availability Zones. Amazon Aurora storage is fault-tolerant, transparently handling the loss of up to two copies of data without affecting database write availability and up to three copies without affecting read availability. Amazon Aurora storage is also self-healing: data blocks and disks are continuously scanned for errors and replaced automatically.

Increased Availability

You also benefit from enhanced database availability when running Multi-AZ deployments. If an Availability Zone failure or DB Instance failure occurs, your availability impact is limited to the time automatic failover takes to complete: typically under one minute for Amazon Aurora and one to two minutes for other database engines (see the RDS FAQ for details). The availability benefits of Multi-AZ deployments also extend to planned maintenance and backups. In the case of system upgrades like OS patching or DB Instance scaling, these operations are applied first on the standby, prior to the automatic failover. As a result, your availability impact is, again, only the time required for automatic failover to complete. Unlike Single-AZ deployments, I/O activity is not suspended on your primary during backup for Multi-AZ deployments for the MySQL, Oracle, and PostgreSQL engines, because the backup is taken from the standby. However, note that you may still experience elevated latencies for a few minutes during backups for Multi-AZ deployments.

On instance failure in Amazon Aurora deployments, Amazon RDS uses RDS Multi-AZ technology to automate failover to one of up to 15 Amazon Aurora Replicas you have created in any of three Availability Zones. If no Amazon Aurora Replicas have been provisioned, in the case of a failure, Amazon RDS will attempt to create a new Amazon Aurora DB instance for you automatically.
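The availability math behind the web and application tiers can be checked directly: only the 3-AZ layout keeps a tier above the 65% capacity floor when one AZ fails. A small sketch (the helper function is illustrative):

```python
# Capacity arithmetic behind answer D: each tier needs 6 instances at full
# load but can tolerate running at 65% of capacity.
def capacity_after_az_loss(azs: int, per_az: int) -> float:
    """Fraction of a tier's capacity left after one AZ fails."""
    total = azs * per_az
    return (total - per_az) / total

REQUIRED = 0.65
# 2 AZs x 3 instances: an AZ failure leaves 3/6 = 50% -- below the floor.
two_az_ok = capacity_after_az_loss(2, 3) >= REQUIRED    # False
# 3 AZs x 2 instances: an AZ failure leaves 4/6 = ~67% -- above the floor.
three_az_ok = capacity_after_az_loss(3, 2) >= REQUIRED  # True
```

This is why spreading 6 instances across three AZs (answers B and D) survives an AZ loss, and why D's Multi-AZ RDS also covers the database tier.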

[PDF q1-13] Free Amazon AWS-SOLUTION-ARCHITECT-PROFESSIONAL pdf dumps download from Google Drive: https://drive.google.com/open?id=1ARkeSWYTRwzQOvSOAj7ZiLAu3XG4Y62y

Full Amazon AWS-SOLUTION-ARCHITECT-PROFESSIONAL exam practice questions: https://www.lead4pass.com/aws-solution-architect-professional.html (Total Questions: 559 Q&A)

Latest Amazon AWS-SYSOPS List

AWS Certified Sysops Administrator – Associate Certification:https://aws.amazon.com/certification/certified-sysops-admin-associate/

Latest updates Amazon AWS-SYSOPS exam practice questions (1-5)

QUESTION 1
A user has created an application which will be hosted on EC2. The application makes API calls to DynamoDB to fetch
certain data. The application running on this instance is using the SDK for making these calls to DynamoDB. Which of
the below mentioned statements is true with respect to the best practice for security in this scenario?
A. The user should create an IAM user with permissions to access DynamoDB and use its credentials within the
application for connecting to DynamoDB
B. The user should create an IAM user with DynamoDB and EC2 permissions. Attach the user with the application so
that it does not use the root account credentials
C. The user should attach an IAM role to the EC2 instance with necessary permissions for making API calls to
DynamoDB.
D. The user should create an IAM role with EC2 permissions to deploy the application
Correct Answer: C
With AWS IAM, a user is creating an application which runs on an EC2 instance and makes requests to AWS, such as
DynamoDB or S3 calls. Here it is recommended that the user should not create an IAM user and pass the user's
credentials to the application or embed those credentials inside the application. Instead, the user should use roles for
EC2 and give that role access to DynamoDB/S3. When the role is attached to EC2, it will give temporary security
credentials to the application hosted on that EC2 instance, to connect with DynamoDB/S3.
Reference:
http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_WorkingWithGroupsAndUsers.html
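The role-based pattern described above can be sketched with the AWS CLI; all role names, file names, and instance IDs below are placeholders, and the trust policy file is assumed to exist:

```shell
# Create a role that EC2 can assume, grant it DynamoDB access, and attach
# it to the instance via an instance profile -- no long-term keys ever
# touch the application.
aws iam create-role --role-name app-dynamodb-role \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name app-dynamodb-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonDynamoDBReadOnlyAccess
aws iam create-instance-profile --instance-profile-name app-dynamodb-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name app-dynamodb-profile \
    --role-name app-dynamodb-role
aws ec2 associate-iam-instance-profile --instance-id i-0abc1234def567890 \
    --iam-instance-profile Name=app-dynamodb-profile
```

An SDK running on the instance then picks up the role's temporary credentials automatically from the instance metadata service; no access keys need to be configured.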

 

QUESTION 2
You are managing a legacy application inside a VPC with hard-coded IP addresses in its configuration.
Which two mechanisms will allow the application to failover to new instances without the need for reconfiguration?
(Choose two.)
A. Create an ELB to reroute traffic to a failover instance
B. Create a secondary ENI that can be moved to a failover instance
C. Use Route53 health checks to fail traffic over to a failover instance
D. Assign a secondary private IP address to the primary ENI that can be moved to a failover instance
Correct Answer: BD
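The mechanics behind options B and D can be sketched with the AWS CLI; both keep the IP address the application is hard-coded to, and simply move it to the standby. All IDs and addresses below are placeholders:

```shell
# Option D: reassign a secondary private IP to the standby instance's ENI.
# --allow-reassignment lets the address move even while it is in use.
aws ec2 assign-private-ip-addresses \
    --network-interface-id eni-0standby1234567890 \
    --private-ip-addresses 10.0.1.50 \
    --allow-reassignment

# Option B: move an entire secondary ENI to the standby instance.
aws ec2 detach-network-interface --attachment-id eni-attach-0abc12345
aws ec2 attach-network-interface --network-interface-id eni-0abc1234567890 \
    --instance-id i-0standby1234567890 --device-index 1
```

Because the IP (or ENI) itself moves, no client or configuration change is needed, which is exactly why ELB- and Route 53-based answers don't fit a hard-coded-IP application.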

 

QUESTION 3
A SysOps Administrator is using AWS CloudFormation to deploy resources but would like to manually address any
issues that the template encounters. What should the Administrator add to the template to support the requirement?
A. Enable Termination Protection on the stack
B. Set the OnFailure parameter to “DO_NOTHING”
C. Restrict the IAM permissions for CloudFormation to delete resources
D. Set the DeleteStack API action to “No”
Correct Answer: B
Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html

 

QUESTION 4
An Administrator has an Amazon EC2 instance with an IPv6 address. The Administrator needs to prevent direct access
to this instance from the Internet.
The Administrator should place the EC2 instance in a:
A. Private Subnet with an egress-only Internet Gateway attached to the subnet and placed in the subnet Route Table.
B. Public subnet with an egress-only Internet Gateway attached to the VPC and placed in the VPC Route Table.
C. Private subnet with an egress-only Internet Gateway attached to the VPC and placed in the subnet Route Table.
D. Public subnet and a security group that blocks inbound IPv6 traffic attached to the interface.
Correct Answer: C
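Whichever option you pick, the mechanics the answers describe are the same: an egress-only internet gateway is attached at the VPC level, and the IPv6 default route goes into the subnet's route table. A CLI sketch (all IDs are placeholders):

```shell
# Create the egress-only internet gateway against the VPC, then give the
# instance's subnet an IPv6 default route through it. Outbound IPv6
# traffic is allowed; inbound connections initiated from the internet
# are not.
aws ec2 create-egress-only-internet-gateway --vpc-id vpc-0abc12345
aws ec2 create-route --route-table-id rtb-0private12345 \
    --destination-ipv6-cidr-block ::/0 \
    --egress-only-internet-gateway-id eigw-0abc12345
```

Since IPv6 addresses are globally routable, "private" here means the subnet's route table has no regular internet gateway route; the egress-only gateway is what blocks inbound-initiated traffic.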

 

QUESTION 5
An application is running on multiple EC2 instances. As part of an initiative to improve overall infrastructure security, the
EC2 instances were moved to a private subnet. However, since moving, the EC2 instances have not been able to
automatically update, and a SysOps Administrator has not been able to SSH into them remotely. Which two actions
could the Administrator take to securely resolve these issues? (Choose two.)
A. Set up a bastion host in a public subnet, and configure security groups and route tables accordingly.
B. Set up a bastion host in the private subnet, and configure security groups accordingly.
C. Configure a load balancer in a public subnet, and configure the route tables accordingly.
D. Set up a NAT gateway in a public subnet, and change the private subnet route tables accordingly.
E. Set up a NAT gateway in a private subnet, and ensure that the route tables are configured accordingly.
Correct Answer: AD
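The NAT gateway part of the fix can be sketched with the AWS CLI; note that a NAT gateway itself consumes a public-subnet address and an Elastic IP, and then serves the private subnet through its route table. IDs below are placeholders:

```shell
# Allocate an Elastic IP, create the NAT gateway in a public subnet, and
# route the private subnet's outbound traffic through it so instances can
# fetch updates without being directly reachable from the internet.
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-0public12345 \
    --allocation-id eipalloc-0abc12345
aws ec2 create-route --route-table-id rtb-0private12345 \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id nat-0abc1234567890
```

SSH access is then handled separately via a bastion host that admins can actually reach, with security groups limiting which sources may connect to it and which hosts it may reach in turn.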

[PDF q1-q13] Free Amazon AWS-SYSOPS pdf dumps download from Google Drive: https://drive.google.com/open?id=1CP1xO2Om7E0XI_C-el10zYCM3b5Ncg-7

Full Amazon AWS-SYSOPS exam practice questions: https://www.lead4pass.com/AWS-SysOps.html (Total Questions: 795 Q&A)

Lead4Pass Year-round Discount Code

lead4pass coupon 2020

What are the advantages of Lead4pass?

Lead4pass employs the most authoritative exam specialists for Cisco, Amazon, Microsoft, CompTIA, and more. We update our exam data throughout the year and maintain the highest pass rate. With a large user base, we are an industry leader! Choose Lead4Pass to pass your exam with ease!

why lead4pass 2020

Summary:

It’s not easy to pass the Amazon exams, but with accurate learning materials and proper practice,
you can pass with excellent results. https://www.lead4pass.com/amazon.html provides the most relevant learning materials to help you prepare.