1. What is the difference between Amazon Rekognition Image and Amazon Rekognition Video?
A) Amazon Rekognition Video can perform content moderation on videos, while Amazon Rekognition Image cannot.
B) Amazon Rekognition Image can only analyze images, while Amazon Rekognition Video can only analyze videos.
C) Amazon Rekognition Image can recognize text in images, while Amazon Rekognition Video cannot.
D) Amazon Rekognition Video can recognize and track faces in real-time, while Amazon Rekognition Image cannot.
2. Which of the following is NOT a valid use case for Amazon Athena when working with machine learning workloads?
A) Running real-time predictions and inference on streaming data using Amazon Kinesis Data Analytics
B) Querying data from multiple sources to enrich feature engineering in machine learning pipelines
C) Analyzing data stored in Amazon S3 to train machine learning models using Amazon SageMaker
D) Creating visualizations and dashboards to monitor model performance and data quality
3. Which of the following is an appropriate approach to address overfitting during model training in Amazon SageMaker?
A) Decrease the regularization parameter
B) Increase the number of training samples
C) Decrease the dropout rate
D) Increase the learning rate
E) None of the above
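To see why the regularization direction matters, here is a framework-free sketch of L2 regularization on a one-feature linear fit: a larger penalty shrinks the learned weight toward zero, constraining the model. The data and learning rate are purely illustrative, not from any SageMaker job:

```python
def fit_weight(xs, ys, lam, lr=0.01, epochs=500):
    """Fit y ~ w*x by gradient descent on MSE plus an L2 penalty lam*w^2."""
    w = 0.0
    n = len(xs)
    for _ in range(epochs):
        # d/dw of mean squared error, plus the penalty gradient 2*lam*w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n + 2 * lam * w
        w -= lr * grad
    return w


xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x

w_weak = fit_weight(xs, ys, lam=0.0)   # close to the unregularized slope
w_strong = fit_weight(xs, ys, lam=5.0)  # pulled noticeably toward zero
```

Decreasing regularization or dropout does the opposite, loosening the constraint and making overfitting more likely.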
4. You are building a machine learning pipeline on AWS that processes large datasets. The pipeline consists of multiple AWS services, including Amazon SageMaker, AWS Glue, Amazon S3, and Amazon Redshift. You need to monitor the pipeline's performance and detect any issues that may arise. Which of the following services can help you achieve this?
A) AWS Lambda
B) Amazon CloudWatch
C) Amazon QuickSight
D) Amazon Kinesis
5. A data scientist is using AWS Batch to run a job that requires GPU instances. The job needs to access large amounts of data stored in Amazon S3, and the data needs to be transferred to the GPU instance before the job can start. Which of the following options should the data scientist choose to optimize the data transfer and minimize the job start time?
A) Use an Amazon S3 bucket policy to grant read access to an EC2 instance. Configure the job definition in AWS Batch to launch an EC2 instance with a GPU and mount the S3 bucket as a file system. Transfer the data from S3 to the EC2 instance before the job starts.
B) Use an Amazon S3 bucket policy to grant read access to the GPU instances. Configure the job definition in AWS Batch to download the data from S3 directly to the GPU instances.
C) Use AWS Snowball to transfer the data from Amazon S3 to the GPU instances before running the job. Configure the job definition in AWS Batch to use the data stored on the local storage of the GPU instances.
D) Use Amazon S3 Transfer Acceleration to speed up the transfer of data from Amazon S3 to the GPU instances. Configure the job definition in AWS Batch to download the data from S3 directly to the GPU instances using the accelerated transfer.