
Amazon AWS Certified Data Analytics - Specialty


Exam contains 164 questions

Question 97

A company is building a service to monitor fleets of vehicles. The company collects IoT data from a device in each vehicle and loads the data into Amazon Redshift in near-real time. Fleet owners upload .csv files containing vehicle reference data into Amazon S3 at different times throughout the day. A nightly process loads the vehicle reference data from Amazon S3 into Amazon Redshift. The company joins the IoT data from the device and the vehicle reference data to power reporting and dashboards. Fleet owners are frustrated by waiting a day for the dashboards to update.

Which solution would provide the SHORTEST delay between uploading reference data to Amazon S3 and the change showing up in the owners' dashboards?
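One common way to shorten that delay is to load reference data on upload instead of on a nightly schedule: an S3 ObjectCreated event invokes a Lambda function that issues a COPY through the Redshift Data API. A minimal sketch, with cluster, database, table, and IAM role names as placeholders:

```python
import boto3

redshift_data = boto3.client("redshift-data")

def lambda_handler(event, context):
    """Triggered by an S3 ObjectCreated event: COPY the uploaded
    .csv straight into the reference table so dashboards reflect
    it within minutes instead of after the nightly batch."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        redshift_data.execute_statement(
            ClusterIdentifier="vehicle-analytics",  # placeholder
            Database="fleet",                       # placeholder
            DbUser="loader",                        # placeholder
            Sql=(
                f"COPY vehicle_reference FROM 's3://{bucket}/{key}' "
                "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy' "  # placeholder
                "CSV IGNOREHEADER 1;"
            ),
        )
```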

Question 98

A company is migrating from an on-premises Apache Hadoop cluster to an Amazon EMR cluster. The cluster runs only during business hours. Due to a company requirement to avoid intraday cluster failures, the EMR cluster must be highly available. When the cluster is terminated at the end of each business day, the data must persist.

Which configurations would enable the EMR cluster to meet these requirements? (Choose three.)
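For orientation, the requirements map to three separate configurations: multiple master nodes for intraday high availability, S3 via EMRFS for data that must outlive the cluster, and an external Hive metastore such as the AWS Glue Data Catalog for table metadata. A minimal boto3 sketch, with all names, sizes, and buckets as placeholders:

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="business-hours-cluster",          # placeholder
    ReleaseLabel="emr-6.10.0",
    Instances={
        "InstanceGroups": [
            # Three master nodes enable EMR multi-master HA
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge",
             "InstanceCount": 3},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge",
             "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    Configurations=[{
        # Keep table metadata outside the cluster so it survives termination
        "Classification": "hive-site",
        "Properties": {
            "hive.metastore.client.factory.class":
                "com.amazonaws.glue.catalog.metastore."
                "AWSGlueDataCatalogHiveClientFactory"
        },
    }],
    LogUri="s3://my-bucket/emr-logs/",      # placeholder; data itself also lives on S3 via EMRFS
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
```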

Question 99

A retail company wants to use Amazon QuickSight to generate dashboards for web and in-store sales. A group of 50 business intelligence professionals will develop and use the dashboards. Once ready, the dashboards will be shared with a group of 1,000 users.

The sales data comes from different stores and is uploaded to Amazon S3 every 24 hours. The data is partitioned by year and month, and is stored in Apache Parquet format. The company is using the AWS Glue Data Catalog as its main data catalog and Amazon Athena for querying. The total size of the uncompressed data that the dashboards query at any point is 200 GB.

Which configuration will provide the MOST cost-effective solution that meets these requirements?
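One cost lever worth noting: because the queried data is a fixed 200 GB refreshed once a day, importing it into SPICE daily means the 1,000 readers hit QuickSight's in-memory store instead of triggering Athena scans on every dashboard view. A sketch of a scheduled SPICE refresh, assuming placeholder account and dataset IDs:

```python
import time
import boto3

quicksight = boto3.client("quicksight")

def lambda_handler(event, context):
    """Kick off a SPICE ingestion after the daily S3 upload so
    reader sessions query the in-memory copy rather than paying
    for an Athena scan per dashboard view."""
    quicksight.create_ingestion(
        AwsAccountId="123456789012",          # placeholder
        DataSetId="sales-dashboard-dataset",  # placeholder
        IngestionId=f"daily-refresh-{int(time.time())}",
    )
```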

Question 100

A central government organization is collecting events from various internal applications using Amazon Managed Streaming for Apache Kafka (Amazon MSK). The organization has configured a separate Kafka topic for each application to separate the data. For security reasons, the Kafka cluster has been configured to only allow TLS-encrypted data, and it encrypts the data at rest.

A recent application update showed that one of the applications was configured incorrectly, resulting in writing data to a Kafka topic that belongs to another application. This resulted in multiple errors in the analytics pipeline as data from different applications appeared on the same topic. After this incident, the organization wants to prevent applications from writing to topics other than their own.

Which solution meets these requirements with the LEAST amount of effort?
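Since the cluster already authenticates clients over TLS, one low-effort approach is Kafka ACLs keyed to each application's TLS certificate principal, so each application can write only to its own topic. A sketch using the kafka-python AdminClient, with the broker address, certificate paths, principal DN, and topic name all placeholders:

```python
from kafka.admin import (ACL, ACLOperation, ACLPermissionType,
                         KafkaAdminClient, ResourcePattern, ResourceType)

# Connect over TLS, matching the cluster's encryption-in-transit setting.
admin = KafkaAdminClient(
    bootstrap_servers="b-1.msk.example.com:9094",  # placeholder; 9094 is MSK's TLS port
    security_protocol="SSL",
    ssl_cafile="ca.pem",            # placeholder paths
    ssl_certfile="client.pem",
    ssl_keyfile="client-key.pem",
)

# Allow this application's TLS principal to write only to its own topic.
# Once ACLs are in place, writes to other topics are denied.
admin.create_acls([
    ACL(
        principal="User:CN=app-a.example.com",  # placeholder certificate DN
        host="*",
        operation=ACLOperation.WRITE,
        permission_type=ACLPermissionType.ALLOW,
        resource_pattern=ResourcePattern(ResourceType.TOPIC, "app-a-events"),
    )
])
```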

Question 101

A company wants to collect and process events data from different departments in near-real time. Before storing the data in Amazon S3, the company needs to clean the data by standardizing the format of the address and timestamp columns. The data varies in size based on the overall load at any given time. A single data record can range from 100 KB to 10 MB.

How should a data analytics specialist design the solution for data ingestion?
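For context, a typical shape for inline cleansing is a transformation Lambda on the delivery stream that normalizes the address and timestamp fields before records land in S3; keep in mind that per-record size limits (roughly 1 MB for Kinesis Data Streams and Kinesis Data Firehose) are exactly what the scenario's 10 MB records press on. A sketch of the standard Firehose transformation contract, with the field names and their formats as assumptions:

```python
import base64
import json
from datetime import datetime, timezone

def lambda_handler(event, context):
    """Firehose data-transformation handler: decode each record,
    normalize the timestamp to ISO 8601 UTC and tidy the address,
    then return the record for delivery to S3."""
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Assumes the raw timestamp is epoch seconds (placeholder assumption)
        payload["timestamp"] = datetime.fromtimestamp(
            payload["timestamp"], tz=timezone.utc
        ).isoformat()
        payload["address"] = payload["address"].strip().upper()
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(json.dumps(payload).encode()).decode(),
        })
    return {"records": output}
```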

Question 102

An operations team notices that a few AWS Glue jobs for a given ETL application are failing. The AWS Glue jobs read a large number of small JSON files from an Amazon S3 bucket and write the data to a different S3 bucket in Apache Parquet format with no major transformations. Upon initial investigation, a data engineer notices the following error message in the History tab on the AWS Glue console: `Command Failed with Exit Code 1.`

Upon further investigation, the data engineer notices that the driver memory profile of the failed jobs crosses the safe threshold of 50% usage quickly and reaches 90 to 95% soon after. The average memory usage across all executors continues to be less than 4%. The data engineer also notices the following error while examining the related Amazon CloudWatch Logs.

What should the data engineer do to solve the failure in the MOST cost-effective way?
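The symptoms (driver memory climbing while executors sit near idle) point at the driver tracking too many small-file tasks. One commonly cited remedy is AWS Glue file grouping, which coalesces many small input files into larger read groups instead of scaling up the cluster. A PySpark sketch with placeholder paths:

```python
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# groupFiles/groupSize coalesce many small JSON files into ~128 MB read
# groups, cutting the per-file bookkeeping that exhausts driver memory.
frame = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={
        "paths": ["s3://source-bucket/events/"],  # placeholder
        "recurse": True,
        "groupFiles": "inPartition",
        "groupSize": "134217728",  # target group size in bytes (~128 MB)
    },
    format="json",
)

# Write out as Parquet, same as the original job
glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://target-bucket/parquet/"},  # placeholder
    format="parquet",
)
```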


