Spark maintains its own documentation on this, but I find it a bit difficult to follow if you are new to Spark. Here is a simpler guide (at least for me).
1. Log in to your AWS account, open the EC2 Console, click your username in the top-right corner, and switch to "Security Credentials". There you can see your Access Keys (Access Key ID and Secret Access Key); create one and download it if you don't have one yet. Then export your Access Keys in the terminal, e.g.:
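The spark-ec2 script reads your credentials from the standard AWS environment variables, so set them with your own key values substituted for the placeholders:
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>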
2. Prepare the key file. You can create a key pair and download it from the EC2 Console (the file usually ends with .pem). Then restrict its permissions:
chmod 600 zhiyuanKey.pem
3. Go to the Spark/ec2 directory and run the launch command:
./spark-ec2 -k <keypair> -i <key-file> -s <num-slaves> launch <cluster-name>
For example (this one also specifies the instance type, a custom AMI, and the AMI region and zone):
./spark-ec2 -k zhiyuanKey -i zhiyuanKey.pem -s 4 -t g2.2xlarge -a ami-71280941 --region=us-west-2 --zone=us-west-2a launch zhiyuanCluster
Now, you are all done!!
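To check that the cluster actually came up, you can open the Spark master web UI in a browser; for a standalone Spark cluster it is typically served on port 8080 of the master's public DNS name (spark-ec2 should also print the master address at the end of the launch):
http://<master-public-dns>:8080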
Log in to the master node (you can log in either as root or as ec2-user):
./spark-ec2 -k <keypair> -i <key-file> login <cluster-name>
ssh -i <key-file> ec2-user@<master-ip>   # if you are using an Ubuntu AMI, change "ec2-user" to "ubuntu"
To stop the cluster:
./spark-ec2 --region=<ec2-region> stop <cluster-name>
To restart a stopped cluster:
./spark-ec2 -i <key-file> --region=<ec2-region> start <cluster-name>
To destroy the cluster entirely:
./spark-ec2 --region=<ec2-region> destroy <cluster-name>