1. Right Answer: B
Explanation: The COPY command loads the data in parallel from multiple files, dividing the workload among the nodes in your cluster. When you load all the data from a single large file, Amazon Redshift is forced to perform a serialized load, which is much slower. Split your load data files so that the files are about equal size, between 1 MB and 1 GB after compression. For optimum parallelism, the ideal size is between 1 MB and 125 MB after compression. The number of files should be a multiple of the number of slices in your cluster.
Reference: https://docs.aws.amazon.com/redshift/latest/dg/c_loading-data-best-practices.html
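The sizing guidance above can be sketched as a small helper. This is an illustrative calculation only (not part of any AWS SDK): it picks a file count that is a multiple of the cluster's slice count while keeping each compressed file inside the 1-125 MB sweet spot.

```python
def plan_split(total_compressed_mb, slices, min_mb=1, max_mb=125):
    """Choose how many files to split load data into.

    Hypothetical helper illustrating the best-practice rule:
    file count is a multiple of the slice count, and each
    compressed file lands between min_mb and max_mb.
    """
    # Start with one file per slice and add whole "rounds" of
    # slices until every file falls under the maximum size.
    files = slices
    while total_compressed_mb / files > max_mb:
        files += slices
    size_mb = total_compressed_mb / files
    if size_mb < min_mb:
        raise ValueError("data too small to split this finely")
    return files, size_mb

# e.g. 4 GB (4096 MB) of compressed data on a cluster with 8 slices
files, size_mb = plan_split(4096, 8)
```

With 4096 MB and 8 slices this yields 40 files of about 102 MB each, so every slice processes five files of roughly equal size.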
2. Right Answer: A
Explanation: References:
https://docs.aws.amazon.com/redshift/latest/dg/cm-c-implementing-workload-management.html
https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html
3. Right Answer: C
Explanation: Reference: https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-block-public-access.html
4. Right Answer: A,E
Explanation: References:
https://aws.amazon.com/about-aws/whats-new/2017/07/amazon-emr-now-supports-launching-clusters-with-custom-amazon-linux-amis/
https://docs.aws.amazon.com/de_de/emr/latest/ManagementGuide/emr-plan-bootstrap.html
5. Right Answer: B
Explanation: Amazon Redshift Spectrum executes queries across thousands of parallelized nodes to deliver fast results, regardless of the complexity of the query or the amount of data.
Reference: https://aws.amazon.com/redshift/features/