Theory Questions on AWS

Questions:

Name 5 AWS services you have used and what are their use cases?

EC2 - Elastic Compute Cloud - It is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. It gives you complete control of your computing resources and lets you run on Amazon's proven computing environment.

S3 - Simple Storage Service - It is a web service offered by Amazon Web Services that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network.
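As a small illustration of the S3 use case, here is a hedged boto3 sketch that uploads and reads back an object; the bucket and key names are placeholders, not values from this document.

```python
# Minimal boto3 sketch: store and read back an object in S3.
# Assumes AWS credentials are configured and "my-example-bucket" already exists
# (both are placeholder assumptions).
import boto3

s3 = boto3.client("s3")

# Upload a local file as an object.
s3.upload_file(Filename="report.csv", Bucket="my-example-bucket", Key="reports/report.csv")

# Read the object back into memory.
obj = s3.get_object(Bucket="my-example-bucket", Key="reports/report.csv")
print(obj["Body"].read()[:100])
```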

RDS - Relational Database Service - It is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizable capacity for an industry-standard relational database and manages common database administration tasks.

Lambda - It is a serverless computing service provided by AWS. It is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code only when needed and scales automatically, from a few requests per day to thousands per second.
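To make the Lambda description concrete, below is a minimal Python handler of the kind Lambda invokes per request; the event shape shown is a generic assumption and depends on the actual trigger.

```python
# Minimal Lambda handler sketch (Python runtime). Lambda calls lambda_handler
# once per invocation; the event payload depends on the trigger (API Gateway,
# S3 notification, etc.) -- the shape below is only assumed for illustration.
import json

def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

if __name__ == "__main__":
    # Local smoke test with a fake event and no context object.
    print(lambda_handler({"name": "AWS"}, None))
```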

CloudFront - It is a content delivery network (CDN) offered by AWS. It is a global service that securely delivers data, videos, applications, and APIs to your viewers with low latency and high transfer speeds.

What are the tools used to send logs to the cloud environment?

CloudWatch - It is a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers.

CloudWatch provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.

CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers.
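One common way to send logs is through the CloudWatch Logs API, which is what the CloudWatch agent and many logging libraries use under the hood. A hedged boto3 sketch with placeholder group and stream names:

```python
# Sketch: push a log line to CloudWatch Logs with boto3.
# Group/stream names are placeholders; in practice the CloudWatch agent or a
# logging driver usually handles this for you.
import time
import boto3

logs = boto3.client("logs")
group, stream = "/myapp/example", "instance-1"   # assumed names

# Create the group/stream if they do not exist yet.
try:
    logs.create_log_group(logGroupName=group)
except logs.exceptions.ResourceAlreadyExistsException:
    pass
try:
    logs.create_log_stream(logGroupName=group, logStreamName=stream)
except logs.exceptions.ResourceAlreadyExistsException:
    pass

# Timestamps are milliseconds since the epoch.
logs.put_log_events(
    logGroupName=group,
    logStreamName=stream,
    logEvents=[{"timestamp": int(time.time() * 1000), "message": "application started"}],
)
```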

What are IAM Roles? How do you create/manage them?

IAM (Identity and Access Management) roles are a fundamental component of AWS (Amazon Web Services) that allow you to define who or what can take actions on AWS resources. IAM roles are not associated with a specific user or group; instead, they are meant to be assumed by AWS services, resources, or external entities such as applications or users from other AWS accounts. IAM roles are typically used to grant temporary permissions to entities without the need for long-term credentials like access keys.

Here's a basic overview of IAM roles, including how to create and manage them (scripted sketches follow each list of steps below):

Creating an IAM Role:

  1. Sign in to AWS Console: Log in to your AWS account using the AWS Management Console.

  2. Open the IAM Dashboard: From the AWS Management Console, navigate to the IAM dashboard.

  3. Navigate to Roles: In the IAM dashboard, select "Roles" from the left navigation pane.

  4. Create Role: Click the "Create role" button to start the role creation process.

  5. Select a use case: Choose the use case or trusted entity type that best fits your needs. Common use cases include AWS service roles, Cross-account roles, and roles for third-party AWS applications.

  6. Attach a permissions policy: AWS provides predefined policies suitable for common use cases. You can also create custom policies if needed.

  7. Set trust relationships: Define which entities are trusted to assume the role. For example, you can specify an AWS service or specify a trusted AWS account ID if you are sharing the role across accounts.

  8. Add tags (optional): You can optionally add metadata in the form of tags to help you manage and categorize roles.

  9. Review and create: Review your role configuration, make any necessary changes, and then create the role.
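The same steps can be scripted. Here is a hedged boto3 sketch that creates a role EC2 instances can assume and attaches an AWS-managed read-only S3 policy; the role name and tag are placeholder assumptions.

```python
# Sketch: create an IAM role that EC2 instances can assume, then attach a policy.
# The role name is a placeholder; the trust policy mirrors step 7 above.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},  # trusted entity: the EC2 service
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="example-ec2-s3-readonly",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Example role for EC2 instances that only read from S3",
    Tags=[{"Key": "owner", "Value": "platform-team"}],  # optional metadata (step 8)
)

# Attach an AWS-managed permissions policy (step 6).
iam.attach_role_policy(
    RoleName="example-ec2-s3-readonly",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```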

Managing IAM Roles:

Once you have created an IAM role, you can manage it by:

  1. Modifying the role: You can modify the role's permissions, trust relationships, and other settings as needed. To do this, select the role in the IAM dashboard and click the "Edit" button.

  2. Deleting the role: If a role is no longer needed, you can delete it. Be cautious when doing this, as it can impact any resources or services that rely on the role.

  3. Credential lifetime: Roles issue temporary security credentials through AWS STS that expire and rotate automatically, so you avoid distributing long-term access keys. You can adjust the role's maximum session duration to control how long those credentials stay valid.

  4. Auditing and monitoring: Regularly review and audit the usage of IAM roles to ensure they are used appropriately and securely. AWS provides tools and services like CloudTrail and CloudWatch for this purpose.

  5. IAM Policies: Remember that the permissions associated with a role are determined by the IAM policies attached to it. You can modify these policies to grant or revoke specific permissions.
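These management tasks map onto a handful of API calls. A sketch, again using the placeholder role name from the previous example:

```python
# Sketch: common role-management calls (placeholder role name as above).
import json
import boto3

iam = boto3.client("iam")
role = "example-ec2-s3-readonly"

# 1. Modify the trust relationship (who may assume the role).
new_trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.update_assume_role_policy(RoleName=role, PolicyDocument=json.dumps(new_trust))

# 5. Review and adjust the permissions policies attached to the role.
for policy in iam.list_attached_role_policies(RoleName=role)["AttachedPolicies"]:
    print(policy["PolicyArn"])
iam.detach_role_policy(RoleName=role, PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")

# 2. Delete the role once nothing depends on it (all policies must be detached first).
iam.delete_role(RoleName=role)
```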

IAM roles are a powerful and secure way to manage access to AWS resources, and they play a critical role in implementing the principle of least privilege and enhancing security within your AWS environment.

How to upgrade or downgrade a system with zero downtime?

Upgrading or downgrading a system with zero downtime in AWS can be achieved using various strategies and services. The process involves carefully planning and implementing the changes to minimize any impact on system availability. Here are some techniques you can use:

Elastic Load Balancers (ELB): Deploy your system behind an Elastic Load Balancer (ELB) to distribute incoming traffic across multiple instances. During the upgrade or downgrade, you can take instances out of service one at a time, update them, and then add them back to the ELB.

Auto Scaling Groups: Deploy your instances within an Auto Scaling Group (ASG) so that it automatically replaces instances with the new version while maintaining the desired capacity. This way, you can perform rolling updates with zero downtime.
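For example, an Auto Scaling group can roll out a new launch template version with an instance refresh, which replaces instances in batches while keeping a minimum healthy percentage in service. A hedged sketch with a placeholder group name and thresholds:

```python
# Sketch: rolling replacement of instances in an Auto Scaling group.
# Assumes the ASG's launch template already points at the new AMI/version;
# the group name and thresholds are placeholder assumptions.
import boto3

autoscaling = boto3.client("autoscaling")

resp = autoscaling.start_instance_refresh(
    AutoScalingGroupName="web-asg",
    Preferences={
        "MinHealthyPercentage": 90,  # keep at least 90% of capacity in service
        "InstanceWarmup": 300,       # seconds before a new instance counts as healthy
    },
)
print("Instance refresh started:", resp["InstanceRefreshId"])
```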

Blue/Green Deployment: Create a new environment ("Green") with the updated version of your system alongside the existing environment ("Blue"). Use Route 53 or an ELB to switch traffic gradually from the old environment to the new one.
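One way to shift traffic gradually is weighted DNS records in Route 53: keep most of the weight on the existing environment and move it toward the new one as confidence grows. A sketch with placeholder hosted zone, domain, and load balancer values:

```python
# Sketch: weighted Route 53 records for a gradual blue/green cutover.
# Hosted zone ID, domain name, and ALB DNS names/zone IDs are placeholders.
import boto3

route53 = boto3.client("route53")

def set_weight(set_identifier, alb_dns, alb_zone_id, weight):
    route53.change_resource_record_sets(
        HostedZoneId="Z1234567890ABC",          # placeholder hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "SetIdentifier": set_identifier,
                "Weight": weight,
                "AliasTarget": {
                    "HostedZoneId": alb_zone_id,  # the load balancer's own zone ID
                    "DNSName": alb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )

# Send 90% of traffic to the existing (blue) stack and 10% to the new (green) one.
set_weight("blue", "blue-alb-123.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K", 90)
set_weight("green", "green-alb-456.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K", 10)
```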

AWS CodeDeploy: Utilize AWS CodeDeploy, which allows you to automate application deployments to EC2 instances or on-premises instances. CodeDeploy supports both rolling updates and blue/green deployments.

AWS Elastic Beanstalk: If your system is built on Elastic Beanstalk, its rolling and immutable deployment policies update the underlying instances while maintaining the application's availability.

Database Replication: If the upgrade or downgrade involves changes to the database, consider using database replication (e.g., Amazon RDS Multi-AZ or read replicas) to maintain a redundant copy of the database during the update process.

What is infrastructure as code and how do you use it?

Infrastructure as Code (IaC) is the practice of managing cloud resources with code and automation. You define your infrastructure in version-controlled templates or scripts and use tools such as CloudFormation or Terraform to create, modify, or delete resources. Benefits include consistency, reproducibility, collaboration, scalability, and cost control, which is why it's considered a best practice for modern cloud management.

What is a load balancer? Give scenarios of each kind of balancer based on your experience.

A load balancer is a network device or service that distributes incoming network traffic (such as HTTP requests) across multiple servers or resources. Its primary purpose is to ensure that no single server or resource is overwhelmed with traffic, thus improving the reliability, availability, and performance of applications or services. There are two main types of load balancers:

  1. Layer 4 Load Balancer (Transport Layer):

    • Scenario: Layer 4 load balancers operate at the transport layer (TCP/UDP) and distribute traffic based on network-level information (e.g., IP addresses and port numbers). They are suitable for scenarios where applications are stateless or where session persistence isn't critical.

    • Use Cases:

      • Load balancing incoming web traffic to multiple web servers in a web application.

      • Distributing DNS queries across multiple DNS servers for redundancy.

      • Load balancing FTP or SMTP traffic.

  2. Layer 7 Load Balancer (Application Layer):

    • Scenario: Layer 7 load balancers operate at the application layer (HTTP/HTTPS) and make routing decisions based on application-level data, such as URL paths, HTTP headers, or cookies. They are ideal for scenarios that require more advanced traffic management and routing based on application content.

    • Use Cases:

      • Routing HTTP requests to different backend services based on the URL path (e.g., "/api" routes to an API server, "/app" routes to a web application server).

      • Session affinity or sticky sessions, ensuring that user sessions are directed to the same backend server.

      • Content-based routing where specific requests are sent to specific backend servers based on HTTP headers or content.

Here are a few scenarios based on my experience (a path-based routing sketch follows the list):

  • Web Application Load Balancer:

    • Type: Layer 7

    • Scenario: You have a web application with multiple web servers. The load balancer routes incoming HTTP requests based on the URL path to different backend services. For example, "/api" requests go to the API server, "/app" requests go to the web application server, and "/images" requests go to a separate server for serving images.

  • Global Traffic Manager:

    • Type: Layer 4 or Layer 7

    • Scenario: You have a global network with data centers in multiple regions. A global traffic manager (GTM) load balancer directs traffic to the closest data center based on the user's geographic location. It can also perform health checks to ensure high availability.

  • SSL/TLS Offloading Load Balancer:

    • Type: Layer 7

    • Scenario: To offload SSL/TLS encryption and decryption from backend servers, you use a load balancer that terminates SSL/TLS connections. This reduces the computational load on your servers and improves performance.

  • Microservices Load Balancer:

    • Type: Layer 7

    • Scenario: In a microservices architecture, you use a load balancer to route traffic to various microservices based on the URL path or other attributes. This enables you to scale and update individual microservices independently.

  • Database Load Balancer:

    • Type: Layer 4 or Layer 7

    • Scenario: To distribute database queries and ensure high availability, you deploy a database load balancer that routes database connections to a pool of database servers. This can be used for read replicas, failover, or sharding scenarios.
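The path-based routing scenario above maps directly onto an AWS Application Load Balancer listener rule. A hedged boto3 sketch with placeholder listener and target group ARNs:

```python
# Sketch: ALB listener rule that forwards "/api/*" requests to a dedicated
# target group (layer 7, path-based routing). ARNs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:listener/app/web/abc/def",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/api/123",
    }],
)
```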

What is CloudFormation and what is it used for?

AWS CloudFormation is a service that lets you define your AWS infrastructure as code using templates. With CloudFormation, you can automate the creation, updating, and deletion of AWS resources. It helps maintain consistent and repeatable infrastructure configurations across different environments and simplifies the management of complex architectures. CloudFormation is useful for automating resource provisioning and ensuring that your infrastructure follows best practices and is easy to scale.
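A small example of the idea: an inline template that declares one S3 bucket, launched as a stack with boto3. The stack and bucket names are illustrative assumptions; real templates usually live in version control and are deployed through a pipeline.

```python
# Sketch: provision a stack from an inline CloudFormation template.
# Stack and bucket names are placeholders.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-example-cfn-bucket
"""

cloudformation = boto3.client("cloudformation")
cloudformation.create_stack(StackName="example-stack", TemplateBody=TEMPLATE)

# Block until the stack is fully created.
cloudformation.get_waiter("stack_create_complete").wait(StackName="example-stack")
```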

Difference between AWS CloudFormation and AWS Elastic Beanstalk?

AWS CloudFormation is an Infrastructure as Code service that allows you to provision and manage AWS resources using templates, giving you full control over infrastructure management.

AWS Elastic Beanstalk is a Platform as a Service offering focused on simplifying the deployment and management of applications, abstracting away much of the infrastructure complexity for developers.

Choosing between CloudFormation and Elastic Beanstalk depends on your specific use case. If you need fine-grained control over your infrastructure and want to manage the entire stack, CloudFormation is a better fit. On the other hand, if you want a managed platform for deploying applications without worrying about infrastructure details, Elastic Beanstalk is a more suitable choice.

What are the kinds of security attacks that can occur on the cloud? And how can we minimize them?

Common types of security attacks on the cloud include data breaches, denial-of-service (DoS/DDoS) attacks, man-in-the-middle (MitM) attacks, insider threats, data loss, account hijacking, injection attacks, cross-site scripting (XSS), cross-site request forgery (CSRF), and server-side request forgery (SSRF).

To minimize these attacks (a security-group sketch follows this list):

Implement strong access controls and encryption.

Monitor and audit cloud activities regularly.

Keep software up to date with security patches.

Use firewalls and security groups to control traffic.

Deploy intrusion detection/prevention systems.

Conduct security testing and training for employees.

Back up critical data and have a disaster recovery plan.

Leverage cloud provider security services.
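As one concrete example of the firewall/security-group point above, the sketch below opens only HTTPS to the world on a placeholder security group, leaving everything else closed by default:

```python
# Sketch: allow inbound HTTPS (443) only, on a placeholder security group.
# Security groups deny all other inbound traffic by default.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
    }],
)
```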

Can we recover the EC2 instance when we have lost the key?

Yes, access to an EC2 instance can be recovered after losing the key pair. Common approaches include connecting through AWS Systems Manager Session Manager or EC2 Instance Connect (if they are enabled) and adding a new public key to ~/.ssh/authorized_keys, stopping the instance and attaching its root volume to a rescue instance to edit authorized_keys, or creating an AMI of the instance and launching a new instance from that AMI with a new key pair.
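A sketch of the AMI-based approach with boto3; the instance ID, key/AMI names, and instance type are placeholders, and the new private key must be saved because AWS will not show it again.

```python
# Sketch: recover access by imaging the instance and relaunching with a new key.
# Instance ID, key/AMI names, and instance type are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2")

# 1. Create a new key pair and save the private key locally.
key = ec2.create_key_pair(KeyName="recovery-key")
with open("recovery-key.pem", "w") as f:
    f.write(key["KeyMaterial"])

# 2. Image the existing instance (this reboots it unless NoReboot=True).
image_id = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="recovery-ami")["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# 3. Launch a replacement instance from the AMI with the new key pair.
ec2.run_instances(
    ImageId=image_id,
    InstanceType="t3.micro",
    KeyName="recovery-key",
    MinCount=1,
    MaxCount=1,
)
```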

What is a gateway?

A gateway is a network node that connects two networks that use different protocols. It translates between those protocols so that networks which are not directly connected can communicate, and it is also known as a protocol converter. In AWS, internet gateways, NAT gateways, and VPN gateways play this role by connecting a VPC to the internet or to on-premises networks.

What is the difference between Amazon RDS, DynamoDB, and Redshift?

  1. Amazon RDS (Relational Database Service):

    • Database Type: RDS is a managed relational database service that supports various database engines, including MySQL, PostgreSQL, Oracle, SQL Server, and MariaDB.

    • Use Case: It's ideal for applications that require a traditional relational database structure and features, such as e-commerce platforms, content management systems, and line-of-business applications.

    • Scaling: RDS provides automated backups, automated software patching, and allows you to scale your database vertically (by increasing CPU, RAM, etc.). It also supports read replicas for read scalability.

    • Data Model: RDS databases use a structured, tabular data model with support for SQL queries.

    • Performance: Good for transactional workloads but may have limitations with extremely high read or write throughput.

  2. Amazon DynamoDB:

    • Database Type: DynamoDB is a fully managed NoSQL database service designed for fast and scalable applications.

    • Use Case: It's suitable for applications that require high availability, low latency, and scalability, such as real-time applications, gaming, mobile apps, and IoT.

    • Scaling: DynamoDB automatically scales to handle varying workloads and offers low-latency performance. It's designed for horizontal scaling (adding more capacity) as your data grows.

    • Data Model: DynamoDB uses a flexible, schema-less data model with support for JSON-like documents, making it well-suited for semi-structured or unstructured data (a short put/get sketch follows this comparison).

    • Performance: Excellent for read and write-intensive workloads, especially when data volumes are unpredictable or can grow rapidly.

  3. Amazon Redshift:

    • Database Type: Redshift is a fully managed data warehousing service built for analytics and data warehousing.

    • Use Case: It's designed for running complex analytical queries on large datasets, making it ideal for business intelligence, data warehousing, and reporting.

    • Scaling: Redshift can handle large volumes of data and scales horizontally by adding more nodes to the cluster. It uses columnar storage for efficient querying.

    • Data Model: Redshift stores data in a columnar format, which is optimized for analytical queries, but it's not suitable for transactional or OLTP workloads.

    • Performance: Excellent for analytical queries but not intended for transactional processing.
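To make the DynamoDB data-model point concrete, here is a short sketch that writes and reads an item in an assumed table with a partition key named user_id; the table and attribute names are placeholders.

```python
# Sketch: basic DynamoDB access. Assumes a table "users" already exists with
# partition key "user_id"; names and attributes are placeholders.
import boto3

table = boto3.resource("dynamodb").Table("users")

# Items are schema-less beyond the key: extra attributes can vary per item.
table.put_item(Item={"user_id": "u-123", "name": "Ada", "plan": "pro", "logins": 42})

resp = table.get_item(Key={"user_id": "u-123"})
print(resp.get("Item"))
```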

Do you prefer to host a website on S3? What's the reason, whether your answer is yes or no?

Yes, you might prefer to host a website on Amazon S3 for the following reasons:

  1. Cost-Efficiency: Amazon S3 offers very cost-effective hosting for static websites. You only pay for the storage and data transfer you use, which can be significantly cheaper than traditional web hosting services.

  2. Scalability: S3 can handle high traffic loads, and it automatically scales to accommodate traffic spikes without any manual intervention.

  3. High Availability: S3 provides high availability by replicating data across multiple data centers, reducing the risk of downtime.

  4. Content Delivery: You can easily integrate Amazon CloudFront (AWS's content delivery network) with your S3-hosted website to improve global access speeds and reduce latency.

  5. Simplicity: Setting up and managing a static website on S3 is straightforward, and AWS provides tools to simplify the process, such as the AWS Amplify Console. (A configuration sketch follows this answer.)

No, you might not prefer to host a website on Amazon S3 for the following reasons:

  1. Complexity of Dynamic Content: If your website relies heavily on server-side processing and dynamic content generation, S3 may not be suitable, as it's designed for static content hosting.

  2. Lack of Server-Side Scripting: S3 does not support server-side scripting languages like PHP, Node.js, or Ruby, which are required for many dynamic web applications.

  3. Database Integration: If your website relies on databases or backend services, you'll need a different hosting solution that can connect to and interact with those resources.

  4. Advanced Features: While S3 is excellent for basic web hosting, it may lack some advanced features offered by traditional web hosting services, such as server-side caching, database management, and easy integration with server-side applications.
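A sketch of turning a bucket into a static website with boto3; the bucket name is a placeholder, and in practice you would also make the objects publicly readable or front the bucket with CloudFront.

```python
# Sketch: enable static website hosting on an existing bucket and upload a page.
# Bucket name is a placeholder; public-access / CloudFront setup is omitted.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-site"

s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<h1>Hello from S3</h1>",
    ContentType="text/html",
)
```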
