Amazon AWS Interview Questions
What is AWS?
Amazon Web Services (AWS) is a secure cloud services platform, offering compute power, database storage, content delivery and other functionality to help businesses scale and grow.
Explain Elastic Block Storage? What type of performance can you expect? How do you back it up? How do you improve performance?
EBS is a virtualized SAN, or storage area network. That means it is RAID storage to start with, so it is redundant and fault-tolerant. If disks die in that RAID, you don't lose data. Great! It is also virtualized, so you can provision and allocate storage, and attach it to your server with various API calls. No calling a storage expert and asking him or her to run specialized commands from the hardware vendor.
Performance on EBS can exhibit variability. That is, it can go above the SLA performance level, then drop below it. The SLA provides you with an average disk I/O rate you can expect. This can frustrate some folks, especially performance experts who expect reliable and consistent disk throughput from a server. Traditional physically hosted servers behave that way. Virtual AWS instances do not.
Back up EBS volumes by using the snapshot facility, either via an API call or via a GUI interface like ElasticFox.
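As a sketch, the snapshot approach looks like this with today's AWS CLI (the volume ID is a placeholder):

```shell
# Create a point-in-time snapshot of an EBS volume.
aws ec2 create-snapshot \
    --volume-id vol-0abc123example \
    --description "Nightly backup of data volume"

# List snapshots for that volume to confirm the backup is progressing.
aws ec2 describe-snapshots \
    --filters Name=volume-id,Values=vol-0abc123example \
    --query 'Snapshots[*].[SnapshotId,State,StartTime]' \
    --output table
```

Snapshots are incremental, so after the first one, subsequent snapshots only store changed blocks.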
Improve performance by using Linux software RAID and striping across four volumes.
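A minimal sketch of that striping approach with mdadm, assuming four EBS volumes are already attached as /dev/xvdf through /dev/xvdi (device names vary by instance type):

```shell
# Stripe four attached EBS volumes into one RAID 0 array
# (no redundancy, but roughly 4x the throughput of a single volume).
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

# Put a filesystem on the array and mount it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data
sudo mount /dev/md0 /data
```

Note that RAID 0 means losing any one volume loses the whole array, which makes regular snapshots even more important.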
What are the security best practices for Amazon EC2?
To secure Amazon EC2, follow these best practices:
-Use AWS Identity and Access Management (IAM) to control access to your AWS resources
-Restrict access by allowing only trusted hosts or networks to access ports on your instance
-Review the rules in your security groups regularly
-Only open up the permissions that you require
-Disable password-based logins for instances launched from your AMI
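A few of the practices above can be sketched with the AWS CLI and standard Linux tools (the group ID and CIDR below are placeholders):

```shell
# Review the rules in an existing security group.
aws ec2 describe-security-groups \
    --group-ids sg-0123456789abcdef0 \
    --query 'SecurityGroups[*].IpPermissions'

# Restrict SSH to a trusted network instead of the whole internet.
aws ec2 revoke-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.0/24

# On the instance itself, disable password-based SSH logins.
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo systemctl reload sshd
```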
What is an AMI? How do I build one?
AMI stands for Amazon Machine Image. It is effectively a snapshot of the root filesystem. Commodity hardware servers have a BIOS that points to the master boot record in the first block of a disk. A disk image, though, can sit anywhere physically on a disk, so Linux can boot from an arbitrary location on the EBS storage network.
Build a new AMI by first spinning up an instance from a trusted AMI, then adding packages and components as required. Be wary of putting sensitive data onto an AMI. For instance, your access credentials should be added to an instance after spinup. With a database, mount an outside volume that holds your MySQL data after spinup as well.
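The build process above can be sketched with the AWS CLI (all IDs and the name are placeholders):

```shell
# 1. Launch an instance from a trusted base AMI.
aws ec2 run-instances \
    --image-id ami-0example1234567890 \
    --instance-type t3.micro \
    --key-name my-keypair

# 2. SSH in, install packages, then remove credentials,
#    history files, and any other sensitive data.

# 3. Register the customized instance as a new AMI.
aws ec2 create-image \
    --instance-id i-0example1234567890 \
    --name "hardened-base-$(date +%Y%m%d)" \
    --description "Base image with our standard packages"
```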
What is auto-scaling? How does it work?
Auto-scaling is a feature of AWS that automatically provisions and spins up new instances without the need for your intervention. You do this by setting thresholds and metrics to monitor. When those thresholds are crossed, a new instance of your choosing is spun up, configured, and rolled into the load balancer pool. Voila, you've scaled horizontally without any operator intervention!
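A minimal sketch with today's AWS CLI, assuming a launch template named web-template and the subnet IDs already exist (all names and IDs are placeholders):

```shell
# Create an Auto Scaling group from an existing launch template.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-template LaunchTemplateName=web-template \
    --min-size 2 --max-size 10 \
    --vpc-zone-identifier "subnet-0aaa,subnet-0bbb"

# Add and remove instances automatically to hold average CPU near 60%.
aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name cpu-target \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 60.0
    }'
```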
What automation tools can I use to spin up servers?
The most obvious way is to roll your own scripts using the AWS API tools. Such scripts could be written in bash, Perl, or another language of your choice. The next option is to use a configuration management and provisioning tool like Puppet or Opscode Chef. You might also look toward a tool like Scalr. Lastly, you can go with a managed solution such as RightScale.
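The roll-your-own-scripts option might look something like this in bash (all IDs are placeholders):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Spin up an instance and capture its ID.
INSTANCE_ID=$(aws ec2 run-instances \
    --image-id ami-0example1234567890 \
    --instance-type t3.micro \
    --key-name my-keypair \
    --security-group-ids sg-0123456789abcdef0 \
    --query 'Instances[0].InstanceId' --output text)

# Block until the instance is running, then print its public IP.
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
    --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
```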
What is configuration management? Why would I want to use it with cloud provisioning of resources?
Configuration management has been around for a long time in web operations and systems administration, yet its cultural popularity has been limited. Most systems administrators configure machines the way software was developed before version control: by manually making changes on servers. Each server can then be, and usually is, slightly different. Troubleshooting, though, is straightforward, since you log in to the box and operate on it directly. Configuration management brings a large automation tool into the picture, managing servers like the strings of a puppet. It enforces standardization, best practices, and reproducibility, as all configs are versioned and managed. It also introduces a new way of working, which is the biggest hurdle to its adoption.
Enter the cloud, and configuration management becomes even more critical. That's because virtual servers such as Amazon's EC2 instances are much less reliable than physical ones. You absolutely need a mechanism to rebuild them as-is at any moment. This pushes best practices like automation, reproducibility, and disaster recovery into center stage.
Explain how you would simulate perimeter security using the Amazon Web Services model.
Traditional perimeter security as we know it, using firewalls and so forth, is not supported in the Amazon EC2 world. Instead, AWS supports security groups. One can create a security group for a jump box with SSH access - only port 22 open. From there, a web server group and a database group are created. The web server group allows 80 and 443 from the world, but port 22 *only* from the jump box group. Further, the database group allows port 3306 from the web server group and port 22 from the jump box group. Add any machines to the web server group and they can all hit the database. No one from the outside world can, and no one can directly SSH to any of your boxes.
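The layering described above can be sketched with the AWS CLI (the `--group-name` form works in a default VPC; elsewhere you would use the group IDs returned by create-security-group):

```shell
# Three groups: jump box, web tier, database tier.
aws ec2 create-security-group --group-name jump --description "SSH jump box"
aws ec2 create-security-group --group-name web  --description "Web servers"
aws ec2 create-security-group --group-name db   --description "Database servers"

# Jump box: SSH from the world (ideally restrict to your office CIDR).
aws ec2 authorize-security-group-ingress --group-name jump \
    --protocol tcp --port 22 --cidr 0.0.0.0/0

# Web tier: HTTP/HTTPS from the world, SSH only from the jump group.
aws ec2 authorize-security-group-ingress --group-name web \
    --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name web \
    --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name web \
    --protocol tcp --port 22 --source-group jump

# Database tier: MySQL only from the web group, SSH only from the jump group.
aws ec2 authorize-security-group-ingress --group-name db \
    --protocol tcp --port 3306 --source-group web
aws ec2 authorize-security-group-ingress --group-name db \
    --protocol tcp --port 22 --source-group jump
```

Rules that reference another group by name or ID rather than a CIDR are what make this layered model work: membership in a group, not an IP address, grants access.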
What is the relation between an instance and an AMI?
From a single AMI, you can launch multiple types of instances. An instance type defines the hardware of the host computer used for your instance. Each instance type provides different compute and memory capabilities. Once you launch an instance, it looks like a traditional host, and we can interact with it as we would with any computer.
What is Cloud Computing?
Cloud computing is internet-based computing whereby shared resources, software, and information are provided to computers and other devices on-demand, like the electricity grid.
What are the components of Cloud Computing?
Components in a cloud refer to the platforms, like front end, back end, and cloud-based delivery, and the network used. Together they form an architecture for cloud computing. Counting the main components SaaS, PaaS, and IaaS among them, there are 11 major categories in cloud computing:
-Storage-as-a-Service: This is the component where we can use or request storage. It is also called disk space on demand.
-Database-as-a-Service: This component acts as a live database that is accessed remotely.
-Information-as-a-Service: Information that can be accessed remotely from anywhere is called Information-as-a-Service.
-Process-as-a-Service: This component combines various resources such as data and services, hosted either within the same cloud computing resource or remotely.
-Application-as-a-Service: Application-as-a-Service (also known as SaaS) is the complete application, built ready for use by the client.
-Platform-as-a-Service: This is the component where the application is developed and the database is created, implemented, stored, and tested.
-Integration-as-a-Service: Integration-as-a-Service deals with the components of an application that have been built but must be integrated with other applications.
-Security-as-a-Service: This is the main component many customers require, since security in cloud platforms must be addressed at multiple layers.
-Management-as-a-Service: This component is mainly used for managing clouds, covering things like resource utilization, virtualization, and server uptime/downtime management.
-Testing-as-a-Service: Testing-as-a-Service refers to the testing of the applications that are hosted remotely.
-Infrastructure-as-a-Service: This component delivers the hardware - servers, storage, and networking - as completely virtualized resources.