Creating a Dynamic Website Using AWS

Michael Johnson

Recently I worked on a team-based project to create a website for a fictitious craft beer company located in the Pacific Northwest. We were each assigned a set of tasks (known as user stories) and organized into teams to simulate a real-world job scenario. The objective of this group project was to provide us with practical experience akin to working as a cloud engineer within an IT team.

Scenario:

A renowned local craft brewery known as Hop Haven Brewery, celebrated for its unique and frequently updated selection of artisanal beers, desires to develop a website to showcase its weekly evolving beer menu. The objective is to keep aficionados and casual drinkers alike informed and excited about the brewery’s latest offerings, including limited edition brews and seasonal favorites.

Business Needs:

  • Regularly Updated Content: The brewery’s beer list is dynamic, changing weekly to introduce new brews and retire others, necessitating an easy-to-update solution.
  • Public Accessibility: The website should be accessible to everyone, ensuring that customers can view the current beer offerings at any time.
  • High Reliability and Performance: The website must be reliable and performant, catering to high visitor traffic, especially during new beer release announcements.
  • Cost-Effectiveness and Scalability: The solution must be affordable and scalable, aligning with the brewery’s budget and growth.

Phase One: EC2 Setup and Installation of Nginx

Phase One was completed by Jeremy.

Step One: EC2 Setup

For his contribution to the project, he was tasked with creating and deploying our EC2 instance, setting up the security group configuration, and installing Nginx on our EC2 server by utilizing the CLI. Creating the EC2 instance and installing Nginx were the first steps in deploying our dynamic website, since without a server to work on and to host our website, none of our other steps could be completed.

The first thing Jeremy did was log into our group AWS console and search for EC2. After navigating to the EC2 service, he clicked “Launch EC2 Instance” and proceeded to name our instance:

“Pod1-server”

Next, for our AMI, he selected “Amazon Linux”, and to stay in the free tier we went with “Amazon Linux 2023”.

For the instance type, to ensure that we stayed in the free tier, he selected t2.micro. This ensured we maintained maximum savings for this project as required.

Next we needed a key pair so we could SSH into the EC2 instance as needed to complete additional steps. It is important to note that since we would be connecting from a terminal with a standard OpenSSH client (rather than PuTTY), we needed to select the “.pem” file type for our key pair.

Finally, for our Security Group settings, we needed to allow traffic on:

  • Port 22 (SSH)
  • Port 80 (HTTP)
  • Port 443 (HTTPS)

For our project it is important to note that at first Jeremy had only allowed traffic on Port 22 and Port 443. We found out later from our Project Manager, Troy, that our team had not yet secured the SSL/TLS certificate to upload to ACM (AWS Certificate Manager). So for the rest of the project we edited our security group to allow Port 80 and proceeded to work only with Port 80 and Port 22.
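If you ever need to make the same change from the CLI, a rule like ours can be added with a single command (the security group ID below is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0   # allow HTTP from anywhere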

That said, after creating our initial security group, we kept the remaining options at their defaults and launched the EC2 instance.

Step Two: SSH into the EC2 Instance and Install Nginx

After creating our EC2 instance, Jeremy shared the “.pem” file for our key pair with the group. He then opened the terminal on his local machine to SSH into our newly created EC2 instance, navigating to where the “.pem” file was stored. Then he went to our EC2 instance via the AWS console and clicked “Connect”.

This would bring up a screen with the exact code he needed to type in his terminal in order to successfully SSH into our instance.

That said, Jeremy proceeded to copy ssh -i "pod1_key.pem" ec2-user@ec2-3-80-57-56.compute-1.amazonaws.com into his terminal. Since he was in the directory where the “.pem” file was stored, this command would SSH into our EC2 instance. It is important to note that, due to permissions settings on your local machine, you may first be prompted to run chmod 400 "pod1_key.pem" to give the key file “read only” permissions. If successful you will be greeted with the screen below.
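For copy-paste convenience, here are the two commands in order (the hostname comes from the console’s Connect dialog and will differ for your instance):

chmod 400 "pod1_key.pem"   # restrict the key to owner read-only, which SSH requires
ssh -i "pod1_key.pem" ec2-user@ec2-3-80-57-56.compute-1.amazonaws.com   # connect as the default Amazon Linux user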

Now that Jeremy was connected to our EC2 instance via SSH, the next step was to install Nginx on our server. You may be asking: what is Nginx? Nginx is a high-performance web server and reverse proxy widely used for serving web content efficiently. It’s known for its speed, scalability, and ability to handle high levels of concurrent connections. Additionally, Nginx functions as a load balancer and HTTP cache, making it a versatile tool for optimizing web server performance. This made it ideal for our business needs.

To install Nginx so that we could take advantage of this powerful tool, Jeremy typed the command sudo yum install nginx into his console.

After executing this command he was met with a screen like this

This showed that Nginx was successfully installed on our EC2 instance. However, this did not mean that Nginx was enabled and running. To ensure that Nginx was enabled and running, Jeremy ran the commands sudo systemctl start nginx and sudo systemctl enable nginx. After running both commands separately, Jeremy was presented with the screen below.

To confirm that Nginx was now fully enabled and active, Jeremy ran the command sudo systemctl status nginx.

This screen shows that Nginx was indeed enabled and running. After successfully getting our EC2 instance and Nginx running, our next steps were setting up our S3 bucket, which would host our website images, and uploading an index.html file to Nginx.
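To recap Phase One from the command line, here is the same install-and-verify sequence Jeremy ran, gathered in one place (with the optional -y flag added to skip the confirmation prompt):

sudo yum install -y nginx     # install Nginx from the Amazon Linux repositories
sudo systemctl start nginx    # start the service immediately
sudo systemctl enable nginx   # have it start automatically on every boot
sudo systemctl status nginx   # confirm it reports "active (running)" and "enabled"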

Phase Two: AWS S3 and Nginx Website Deployment

This phase was completed by me (Michael Johnson).

Step One: S3 Bucket Creation

For my contribution to our website launch, I was tasked with creating the S3 bucket to store the website images and uploading an index.html file to our AWS EC2 Nginx server. The first thing I did was log into our AWS console and open AWS S3.

Next I clicked on “Create Bucket”.

After pressing “Create Bucket” the first thing I did was name the bucket. For this I named it after our work group: pod1bucket.

Next I kept the default recommended settings until I got to the public access settings. I changed the Public Access setting from “Block All” to “Allow All”, since the bucket would be used for a website and our website needed to easily access the images. To keep this secure, we would set a bucket policy under the security section later.

After this I left the remaining settings as recommended and saved our bucket.

Step Two: Uploading Images

For this step, under the Objects tab within our bucket, I clicked Upload and then Add files.

From there I simply added images saved on my local machine to the bucket. Easy right?
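If you prefer the CLI, the same upload can be done in one command; a sketch assuming the images sit in a local images/ folder:

aws s3 cp ./images s3://pod1bucket/ --recursive   # copy every file in the folder into the bucket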

Step Three: Add Bucket Policy

Next I needed to ensure that our bucket granted only read-only permissions to the public. To do this I went to the Permissions tab, scrolled down to Bucket Policy, and clicked “Edit”.

Click edit on the right-hand side

From here I used AWS’s built-in JSON policy generator to create a read-only policy.

The first step was to change the policy type from SQS Queue Policy to S3 Bucket Policy.

AWS Policy Generator

Then under Principal I entered an *, which in policy JSON means “apply to everyone”. Next, under Actions, I selected “GetObject”, which allows read-only access to bucket objects.

Finally, I went back to grab the bucket ARN from the Permissions tab in the AWS console and added /* to the end so the permission applies to all objects in the bucket.

Bucket ARN

From there I clicked “Generate Policy” and copied the JSON document to our Bucket policy in the AWS console and saved it. Now our bucket has read-only access.

Make sure the policy’s Resource ends with /* to include all items in the bucket
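For reference, the generated policy comes out in roughly this shape (our bucket name filled in; treat it as a sketch rather than the exact document we saved):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::pod1bucket/*"
    }
  ]
}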

Step Four: Add Our index.html File to our EC2 Nginx Server

Lastly, I SSHed into our EC2 instance after adding the image URLs to our index.html file. For privacy I will not show the image URL links. To SSH I used the key pair Jeremy had provided, and typed cd /usr/share/nginx/html into my terminal to navigate to where Nginx’s default HTML files are stored.

Result after running cd /usr/share/nginx/html in the terminal

I then used the command sudo nano index.html to edit our Nginx server’s default index.html file. I copied the edited index.html content, with the S3 bucket image URLs, from my local machine into the terminal and saved it. After saving, I typed our website’s IP address into a browser and saw our website successfully populated.
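Condensed, the whole step looks like this (replace the placeholder with your instance’s public IP):

cd /usr/share/nginx/html         # Nginx's default web root on Amazon Linux
sudo nano index.html             # paste in the edited page and save
curl -I http://<ec2-public-ip>   # optional check: the server should answer with HTTP 200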

I did have to make one alteration to our index.html, as the headline for our website was not loading correctly: it was appearing behind one of our splash images. To correct this I troubleshot our index.html file and found this block of CSS:

.splash-image img {
  width: 100%;
  height: auto;
  border-radius: 8px;
  display: block;
  margin: auto;
  margin-top: -60px;
}

I removed margin-top: -60px; and this resolved the issue.
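After removing that line, the rule simply reads:

.splash-image img {
  width: 100%;
  height: auto;
  border-radius: 8px;
  display: block;
  margin: auto;
}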

Before on the left and After on the right

Phase Three: Production Ready Environment Set-up

This phase was completed by Seth.

Step One: Tagging the EC2 Instance & S3 Bucket

For Seth’s portion of the project, he was tasked with getting our production-ready environment set up for the dynamic website. Seth created a naming convention for tags that made it easy to find what each item was related to in our infrastructure. (Note: this section is very detailed for an easy step-by-step; a CLI equivalent is sketched after the walkthrough.)

1. Open up the AWS Console and navigate to the EC2 service.

2. Once there, open up your instances. As you can see we have one running (we can click there), or click Instances on the left-hand side to open up all of them.

3. Scroll down until you find the tabs for different options inside the instance. Here you can see the options. Click on Tags.

4. Select Manage tags so we can add some tags.

5. Enter a name (known as a “key”) to help you identify what project this belongs to.

6. Then add a value to show what aspect of your project this is for.

7. Now select Save to add this tag to the instance.

Seth then followed the same process for the S3 Bucket.

8. Search for S3 in the search bar

Click Buckets on the left-hand side first

9. Seth scrolled down, then selected and clicked into the bucket used for this project.

10. Scroll down to view all our objects inside the bucket. Seth clicked on each of the object names and added a tag.

11. Once inside an object, Seth scrolled down to find the tags box again, then clicked Edit to add or remove tags.

12. Just like before, Seth clicked Add tag and entered a name (key) and value.

13. As you can see, we could use the same name (key) as our EC2 instance’s tag, since it belongs to the same project. However, since this is an object located within our S3 bucket, for best naming practices Seth gave it a different value to easily identify where within our project this object belongs. After doing so he clicked Save Changes.

14. Seth then navigated back to our S3 bucket.

15. Finally, Seth repeated the same process for all the objects in our S3 bucket.
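The CLI equivalent mentioned above might look like this (instance ID, object key, and tag values here are illustrative placeholders):

aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Project,Value=Pod1   # tag the EC2 instance

aws s3api put-bucket-tagging \
  --bucket pod1bucket \
  --tagging 'TagSet=[{Key=Project,Value=Pod1-S3}]'   # tag the bucket itself

aws s3api put-object-tagging \
  --bucket pod1bucket \
  --key splash-image.jpg \
  --tagging 'TagSet=[{Key=Project,Value=Pod1-S3}]'   # repeat per object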

Phase Four: CloudWatch Dashboard Creation

This phase was completed by Ahlam.

Next we had Ahlam set up our CloudWatch dashboard. Amazon CloudWatch monitors your Amazon Web Services (AWS) resources and the applications you run on AWS in real time. CloudWatch collects and tracks metrics, which are variables you can measure for your resources and applications, and the CloudWatch home page automatically displays metrics about every AWS service you use. This is a useful tool for ensuring cost-effectiveness.

Step One: Create a CloudWatch Dashboard

Ahlam first searched for CloudWatch from her AWS Console Home, then clicked “Create dashboard”. She named the dashboard similarly to our EC2 instance’s tag so the two would be easy to associate.

Then she navigated within CloudWatch to configure the widgets.

For this project, she selected the Line and Number widgets. Widgets are user interface elements that display the collected metrics in a visually understandable way. The Line widget plots metrics over time, allowing you to visualize trends and identify fluctuations in your selected metrics. The Number widget gives a quantifiable snapshot of current resource utilization.

Step Two: Create Metrics

To create metrics you must first select a metric type. For our first metric Ahlam selected EC2.

Amazon EC2 usage metrics correspond to AWS service quotas. You can configure alarms that alert you when your usage approaches a service quota.

Then she needed to select the Per-Instance Metrics. The Per-Instance metrics she selected for this project are:

  • CPU Utilization
  • NetworkIn
  • NetworkOut
  • Combined NetworkIn & NetworkOut

After selecting the Per-Instance Metrics from the above prompt, Ahlam created a widget. These gave us line graphs to visualize our usage.

The first three images above show Line graph metrics for the CPUUtilization, NetworkIn, and NetworkOut.
Utilization currently at 0.26%

Here on the dashboard, you can see she set the Number widget’s range between 50 and 80% usage to signal high CPU usage. Ahlam also edited the color to change to red when CPU usage reaches that range, making it visually noticeable.
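Dashboards can also be defined programmatically. A minimal sketch of the CPU line widget via the CLI (dashboard name, region, and instance ID are placeholders):

aws cloudwatch put-dashboard \
  --dashboard-name Pod1-Dashboard \
  --dashboard-body '{
    "widgets": [{
      "type": "metric",
      "properties": {
        "metrics": [["AWS/EC2", "CPUUtilization", "InstanceId", "i-0123456789abcdef0"]],
        "view": "timeSeries",
        "stat": "Average",
        "period": 300,
        "region": "us-east-1",
        "title": "CPUUtilization"
      }
    }]
  }'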

Phase Five: CloudWatch Alarms and Metrics

This phase was completed by Judy.

Judy was tasked with configuring our CloudWatch Alarms for important events, such as high CPU utilization, ensuring timely responses and optimal website performance.

Step One: Navigate to CloudWatch and Create Alarms

In the AWS console, Judy navigated to the CloudWatch service.

In the Alarms section of the navigation pane, she chose “In Alarm” and proceeded to create the alarm.

She then clicked Select metric and navigated to EC2 under the Browse tab. Then she selected Per-Instance Metrics, as shown below:

Under the Browse tab, Judy checked the CPUUtilization box, then clicked Select metric.

She then saw a graph like the one below and, after making sure the default settings were kept, clicked Next.

Next, Judy chose Static as the threshold type, set the trigger to Greater than, and set the threshold value to 80, as requested by our brewery.

She then set the alarm state trigger to “In Alarm” for when CPU utilization exceeds the desired threshold. Next, she had to choose between using an existing SNS (Simple Notification Service) topic or creating a new one for email alerts. Since we didn’t have an existing SNS topic, Judy created a new one and added an email address for notifications.

Next, Judy had to give this alarm a name. She named it Pod1Alarm.

Finally, Judy simply had to review the alarm to ensure it had all our desired settings, and then clicked Create.
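The same alarm can be expressed as a single CLI call; a sketch with a placeholder instance ID and SNS topic ARN:

aws cloudwatch put-metric-alarm \
  --alarm-name Pod1Alarm \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 80 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:Pod1Topic   # notify email subscribers when in alarm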

Ensure the alarm is configured correctly by navigating to the “Alarms” and “All Alarms” tabs.

And just like that our alarms were created. We had only one final phase left. A security review.

Phase Six: Security Review

This phase was completed by Travis.

For Travis’ portion of the group project, he was tasked with conducting a security review of our S3 and EC2 configurations to ensure that only the necessary permissions were granted, that our setup followed AWS best practices, and that all expectations of the client were met.

To this end, Travis considered the following (a few spot-check commands are sketched after this list):

  1. Ensure that the EC2 security group’s inbound rules allowed HTTP from anywhere (0.0.0.0/0) and SSH only from the customer’s preferred CIDR.
  2. Verify that the S3 bucket permissions meet best practices.
  3. Ensure no sensitive information is present on the EC2 instance or in the publicly accessible S3 bucket.
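Such spot checks can be run from the CLI; for example (the group ID is a placeholder, the bucket name is ours):

aws ec2 describe-security-groups \
  --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[].IpPermissions'   # review inbound ports and CIDR ranges

aws s3api get-bucket-policy-status \
  --bucket pod1bucket   # reports whether the bucket policy makes it public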

Upon his review, he verified that the Hop Haven Brewery development team had met all required security practices and that an initial handover to the client could now take place.

Summary

The craft brewery now had a dynamic and user-friendly website, efficiently managed through AWS services. The website showcased their ever-changing beer selection, enhanced their customer engagement, and provided up-to-date information with minimal maintenance overhead. This digital solution not only strengthened the brewery’s market presence but also aligned perfectly with their innovative and customer-focused brand image. This is the strength cloud-based solutions can bring to the table.
