1. An organization wants to optimize the performance and cost of the Amazon EC2 instances that run its machine learning workloads. Which of the following options best meets these requirements?
A) Use Amazon EC2 Auto Scaling to automatically adjust the number of instances based on the workload, and configure Amazon EC2 instances with Amazon Elastic Block Store (EBS) optimization to improve the I/O performance.
B) Use Amazon EC2 Reserved Instances to reduce costs, and configure Amazon EC2 instances with Amazon Elastic Fabric Adapter (EFA) to reduce the network latency and accelerate the performance of their machine learning models.
C) Use Amazon EC2 Spot Instances to reduce costs, and configure Amazon EC2 instances with Amazon Elastic Inference to accelerate the performance of their machine learning models.
D) Use Amazon EC2 On-Demand Instances with the latest-generation instances, and configure Amazon EC2 instances with Amazon Elastic Container Service (ECS) to manage the containers for their machine learning models.
2. Which of the following statements is true regarding the maximum size of a deployment package for an AWS Lambda function?
A) The maximum size of a deployment package is 50 MB when using the console or API to create a function.
B) The maximum size of a deployment package is 250 MB when using the console or API to create a function.
C) There is no maximum size limit for a deployment package for an AWS Lambda function.
D) The maximum size of a deployment package is 100 MB when using the console or API to create a function.
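For context on the question above: the 50 MB limit applies to the zipped deployment package when it is uploaded directly through the console or API; larger packages must first be staged in Amazon S3. A minimal local sketch (file names are illustrative) that checks an archive's size before attempting a direct upload:

```python
import os
import zipfile

# Direct-upload limit for a zipped Lambda deployment package (50 MB).
DIRECT_UPLOAD_LIMIT = 50 * 1024 * 1024

def package_and_check(src_path: str, zip_path: str) -> bool:
    """Zip a single source file and report whether the archive
    is small enough for a direct console/API upload."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(src_path, arcname=os.path.basename(src_path))
    return os.path.getsize(zip_path) <= DIRECT_UPLOAD_LIMIT

# Example: a tiny handler easily fits under the limit.
with open("handler.py", "w") as f:
    f.write("def handler(event, context):\n    return 'ok'\n")

print(package_and_check("handler.py", "function.zip"))
```

If the check fails, the usual workaround is to upload the archive to S3 and point the function's code location at the object instead.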
3. Which of the following statements is true regarding the use of Amazon SageMaker Studio?
A) Amazon SageMaker Studio can be used to build and train machine learning models, but not to deploy them.
B) Amazon SageMaker Studio provides built-in support for distributed training across multiple instances and availability zones.
C) Amazon SageMaker Studio is only available in the US East (N. Virginia) region.
D) Amazon SageMaker Studio is a web-based integrated development environment (IDE) for building, training, and deploying machine learning models.
4. A company is building a machine learning pipeline to process a high volume of real-time streaming data from a social media platform. The pipeline needs to ingest the data from Amazon Kinesis Data Streams, preprocess it with Apache Spark, train a deep learning model with TensorFlow, and serve the model predictions with a REST API. Which AWS services should they use, and why?
A) Amazon EMR (Elastic MapReduce), because it provides a managed Hadoop and Spark cluster for data processing. Amazon SageMaker can be used for training the model and deploying the REST API.
B) Amazon Kinesis Data Analytics, because it can preprocess the data with Apache Spark and train the model with TensorFlow in real-time. It can also serve the model predictions with a built-in REST API.
C) Amazon Kinesis Data Firehose, because it can deliver streaming data to Amazon S3, which can be used as a data source for Apache Spark. Amazon SageMaker can be used for training the model and deploying the REST API.
D) Amazon EC2 (Elastic Compute Cloud), because it provides virtual machines that can run Apache Spark, TensorFlow, and the REST API.
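The scenario above describes a decoupled pipeline: a delivery service lands the stream in storage, Spark preprocesses it, and a training/serving service hosts the model. The data flow can be sketched locally in plain Python (the stage functions below are illustrative stand-ins, not AWS API calls):

```python
# Illustrative stand-ins for the pipeline stages; none of these
# touch AWS -- they only show how the stages chain together.

def ingest(records):
    """Delivery stage (e.g. Firehose to S3): batch the raw stream records."""
    return list(records)

def preprocess(records):
    """Spark stage: clean and tokenize each record."""
    return [r.strip().lower().split() for r in records]

def predict(tokenized):
    """Serving stage: a trivial stand-in 'model' that scores by token count."""
    return [{"tokens": t, "score": len(t)} for t in tokenized]

raw = ["Hello World ", "AWS Machine Learning"]
results = predict(preprocess(ingest(raw)))
print(results[0]["score"])  # first record has 2 tokens
```

Keeping the stages independent like this is what lets each one be swapped for a managed service without rewriting the others.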
5. Which of the following is a benefit of using an AWS Deep Learning AMI (DLAMI) for training a custom deep learning model on Amazon EC2?
A) Increased flexibility as the DLAMIs provide complete control over the underlying infrastructure and software stack used for training.
B) Reduced cost as the DLAMIs are offered at a lower price compared to the regular EC2 instances.
C) Reduced training time as the DLAMIs come with pre-installed hardware drivers that are optimized for deep learning workloads.
D) Reduced time-to-market as the pre-built AMIs come pre-installed with popular deep learning frameworks, libraries, and tools such as TensorFlow, Keras, and PyTorch.