An Easy Way to Host Hobby Projects

Save money and time sharing your work!

Shivan Sivakumaran
10 min read · Aug 22, 2021
Photo by Sai Kiran Anagani on Unsplash

I’ve had this article topic in mind for a while now, and what sparked me to write it was the tweet below from one of my idols:

If I’ve got computer A in one room and computer B in another room, both on the same network.

What’s the best way to execute code written on computer A on computer B?

SSH?

If anyone’s written a guide or got one handy, I’d love to know.

— Daniel Bourke (@mrdbourke) August 17, 2021

I also had the problem of showcasing my work. Daniel Bourke is a strong proponent of showing one’s work. The dilemma I faced was cost: every project needed its own domain and server. The more projects, the bigger the cost.

How do we combat this? Many projects. One domain and one virtual machine. And we also need to automate this process. Automation makes things easier and introduces fewer errors. By doing this, we are also going to discover an easy way to communicate between two computers in different rooms. Only, the computer in the other room is a server in another country.

We are going to take two projects and host each on its own subdomain (under one domain) on the same virtual machine. Additionally, these are hobby projects, which do not require much computation, so hosting them on one server should be fine.

The projects are Tubestats, a Streamlit dashboard, and OptomCalc, a Django application; both are covered below.

Furthermore, here is a quick gist of the technologies that we will be using:

  • Django, Streamlit — this is project-specific code.
  • Docker and Docker-Compose — these allow us to run each application in its own isolated container.
  • NGINX and Certbot — NGINX is an HTTP and reverse proxy server that lets us forward our projects, which run on different ports, to different subdomains. With Certbot, we can generate SSL certificates so we can serve over HTTPS.
  • Ansible — the main tool we will be using to communicate between computers/servers instead of ssh as suggested (sorry Dan).


The Projects

Let’s take a closer look at my projects and how we get them ready for production.

The first project is a Streamlit application, Tubestats. Streamlit is a great open-source framework if you want to build a single-page dashboard quickly. Originally, I hosted this on Heroku, but that was incurring bigger costs than the method we will be looking at shortly. More on the Tubestats project here.

To get this ready for production, we need to create a config.toml file in the .streamlit directory within the project directory.

[server]
port = 8001
headless = true
[browser]
serverAddress = "0.0.0.0"
serverPort = 8001

This tells Streamlit to serve on port 8001. Setting headless = true stops Streamlit from opening a browser window when it runs.
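
With that file in place, a quick local test picks the settings up automatically (the entry script name app.py is an assumption; substitute your own):

# Streamlit reads .streamlit/config.toml from the project directory,
# so this serves on port 8001 without opening a browser
streamlit run app.py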

The application will be sitting inside a Docker container. We set up our Docker image using the following code.

FROM python:3.8-slim

ARG YOUR_ENV
ENV YOUR_ENV=${YOUR_ENV} \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONHASHSEED=random \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100 \
    STREAMLIT_SERVER_PORT=8001

COPY ./requirements.txt /tmp/
RUN pip install -r /tmp/requirements.txt
COPY ./.streamlit .
COPY . /usr/src/
WORKDIR /usr/src/
RUN chmod a+rx setup.sh
ENTRYPOINT [ "sh", "-c", "./setup.sh" ]

We set environment variables specific to how python runs in the docker container.

The important line is declaring the port that we will use on the hosted server’s localhost. In this case, it’s STREAMLIT_SERVER_PORT=8001. We can in fact build the Docker image on our local machine and forward this port to localhost, as shown below.
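
As a sketch (the image tag is illustrative, and setup.sh is not shown here, so its contents are assumed to simply launch Streamlit):

# build the image from the Dockerfile above
docker build -t tubestats .
# publish the container's port 8001 on localhost; setup.sh presumably
# runs something like `streamlit run <app>.py` as the entrypoint
docker run --rm -p 8001:8001 tubestats
# the dashboard should now be reachable at http://localhost:8001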

When the docker container is running on our server, we will use NGINX to expose this port so it is accessible externally. We need to set up NGINX to do this:

upstream tubestats_web {
    server 0.0.0.0:{{ tubestats_port }};
}

server {
    listen 80;
    server_name tubestats.shivan.xyz;

    location / {
        include proxy_params;
        proxy_pass http://tubestats_web;
    }

    location = / {
        include proxy_params;
        proxy_pass http://tubestats_web;
    }

    location /stream {
        include proxy_params;
        proxy_pass http://tubestats_web/stream;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}

For Streamlit, we need the location /stream block to proxy the WebSocket connection the app relies on; without it, we’d get a blank screen. How NGINX is configured and installed will be taken care of using Ansible.
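
Although Ansible will configure NGINX for us, it is handy to know how to validate a config change by hand when debugging on the server:

# check all NGINX config files for syntax errors
sudo nginx -t
# apply the new configuration without dropping live connections
sudo systemctl reload nginx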

The next project is a Django application. To get a Django application ready for production, there are a number of steps we need to take. There is excellent documentation on the Django website on how to do this. We will briefly go over how this is done. The following is added to settings.py:

import os

# Default to production settings
CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
SECURE_SSL_REDIRECT = True
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_HSTS_SECONDS = 3600
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True

# Development override
if os.environ.get("DJANGO_DEVELOPMENT") == "yes":
    DEBUG = True
    ALLOWED_HOSTS = ["*"]
    CSRF_COOKIE_SECURE = False
    SESSION_COOKIE_SECURE = False
    SECURE_SSL_REDIRECT = False
    SECURE_HSTS_INCLUDE_SUBDOMAINS = False
    SECURE_HSTS_PRELOAD = False

On my local machine, my fish shell sets the environment variable DJANGO_DEVELOPMENT to yes. This means that when I’m on my local machine, the development version of settings.py will run. On the production server, the application will default to the production settings.
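
Only that one variable needs to be set locally; a minimal sketch (bash shown, with the fish equivalent in a comment):

# flag this machine as a development environment
export DJANGO_DEVELOPMENT=yes
# fish equivalent: set -gx DJANGO_DEVELOPMENT yes
pipenv run python manage.py runserver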

In production, we don’t use Django to directly serve the web content. Instead, we use Gunicorn, a WSGI server for UNIX. Gunicorn interfaces between the web server and the application code. We can simply install Gunicorn using pip; I’m using pipenv to manage the virtual environment and packages.

pipenv install gunicorn

And to run the application, we run Gunicorn rather than Django’s manage.py:

pipenv run gunicorn optomcalc.wsgi:application

Finally, we are going to use docker-compose to orchestrate a Postgres database along with the Django-Gunicorn application itself. First, we will write the docker-compose.yml.

version: '3.7'

services:
  db:
    image: 'postgres:latest'
    ports:
      - "5432:5432"
    env_file:
      - ./optomcalc/optomcalc/.env
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
    restart: always
  web:
    container_name: web
    volumes:
      - static:/usr/src/optomcalc/optomcalc/static
      - .:/usr/src/
    env_file:
      - ./optomcalc/optomcalc/.env
    build: .
    working_dir: /usr/src/optomcalc/
    command: sh -c "./entrypoint.sh"
    ports:
      - "8000:8000"
    depends_on:
      - db
    restart: always

volumes:
  static:

The Postgres image is pulled straight from Docker Hub. The web application requires a bit more customisation, which we do using a Dockerfile.

FROM python:3.8-slim

ARG YOUR_ENV
ENV YOUR_ENV=${YOUR_ENV} \
    PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PYTHONHASHSEED=random \
    PIP_NO_CACHE_DIR=off \
    PIP_DISABLE_PIP_VERSION_CHECK=on \
    PIP_DEFAULT_TIMEOUT=100

RUN pip install pipenv
COPY ./optomcalc/Pipfile ./optomcalc/Pipfile.lock /tmp/
RUN cd /tmp && pipenv lock --requirements > requirements.txt
RUN pip install -r /tmp/requirements.txt
COPY ./optomcalc /usr/src/
EXPOSE 8000

Also, we need to create an entrypoint shell script that runs the application inside the Docker container when it starts.

#!/bin/sh
python manage.py migrate --no-input
python manage.py collectstatic --no-input
gunicorn optomcalc.wsgi:application --workers=2 --threads=4 --worker-class=gthread --bind :8000 --worker-tmp-dir /dev/shm

In the shell script, the Django database migrations are applied first. Next, the static files are collected into a single location. This needs to be done because static files like .css and .js are not served by Gunicorn or Django; NGINX will serve the static files instead.

To build the images and start the containers in the background, we use:

sudo docker-compose up -d --build
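
Once everything is up, we can confirm both containers are running and that Gunicorn started cleanly:

# list the services defined in docker-compose.yml and their state
sudo docker-compose ps
# follow the web container's logs
sudo docker-compose logs -f web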

Below is the configuration for NGINX, usually saved in a file named optomcalc.shivan.xyz.

upstream optomcalc_web {
    server 0.0.0.0:{{ app_port }};
}

server {
    server_name {{ app_name }}.shivan.xyz;
    access_log /var/log/nginx/{{ app_name }}.shivan.xyz.log;

    location /static/ {
        root /home/shivan/{{ app_name }}/optomcalc;
    }

    location / {
        include proxy_params;
        proxy_pass http://optomcalc_web;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
    }

    location = / {
        include proxy_params;
        proxy_pass http://optomcalc_web;
        proxy_connect_timeout 75s;
        proxy_read_timeout 300s;
    }
}

Introducing Ansible

Once we have the individual projects set up and ready for production, we can take the final steps to share our work.

Daniel Bourke mentioned using ssh. We could use scp to transfer files and then ssh in to set up and run the Docker containers. Instead, we will use Ansible.

Ansible is a useful automation tool that only needs to be installed on our local machine. In effect, it automates what we would otherwise do manually over ssh.

Once we have committed each project’s code to GitHub, we will use Ansible to:

  1. Install NGINX, and Certbot to generate SSL certificates.
  2. Apply updates on the server.
  3. Set up NGINX for each project.
  4. Build Docker images and run them as containers.

To get started, we will set up Ansible separately from our projects:

cd # at home directory
mkdir -p ./ansible
cd ansible
pipenv lock # creating Pipfile and Pipfile.lock
pipenv install ansible # install ansible in a virtual environment
pipenv run ansible --version # this will verify if ansible has installed

In order to get Ansible minimally working, we need to ensure we have ssh access to the remote server. In fact, Ansible uses ssh.

Digital Ocean has excellent documentation on how to set up ssh keys.
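
In short, it boils down to generating a key pair and copying the public key to the server (the placeholders are to be filled in):

# generate an RSA key pair (stored in ~/.ssh/id_rsa by default)
ssh-keygen -t rsa -b 4096
# append the public key to the server's ~/.ssh/authorized_keys
ssh-copy-id <user>@<server>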

Inside ~/.ssh, we have our private and public keys. I use a ~/.ssh/config file to save writing a verbose command every time:

Host shivan.xyz
    IdentityFile ~/.ssh/id_rsa
    IdentitiesOnly yes
    User <Enter User>
    Port <Enter Port Number>

If we were to ssh into our server, all we need to do is type:

ssh shivan.xyz

It knows what port to use and where to locate the keys. And this comes in handy when using Ansible.
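
As a quick sanity check that Ansible can reach the server over ssh, we can call its ping module with an ad-hoc command (using the hosts inventory described below):

# every host in the inventory should respond with "pong"
pipenv run ansible all -i hosts -m ping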

Back to setting up Ansible. Ansible runs its commands using playbooks. For our purposes, here is a basic directory layout:

├── ansible.cfg
├── deploy
│   ├── common
│   │   └── tasks
│   │       └── main.yml
│   ├── optomcalc-app
│   │   ├── files
│   │   │   └── .env
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── optomcalc.shivan.xyz
│   └── tubestats-app
│       ├── files
│       │   └── .env
│       ├── tasks
│       │   └── main.yml
│       └── templates
│           └── tubestats.shivan.xyz
├── deploy-optomcalc.yml
├── deploy-tubestats.yml
├── deploy-all.yml
└── hosts

This can be boiled down into three different file/directory types:

  1. hosts is a file that contains the server that is to be accessed (e.g. shivan.xyz).
  2. deploy-all.yml is the main playbook which is initiated. There is also a deploy-<app-name>.yml for each application, to deploy it on its own.
  3. The deploy directory, which contains all the unique code and playbooks.

Let’s break it down even further.

hosts is self-explanatory.
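
In the simplest case, the file contains just the server to connect to, matching both the Host alias in ~/.ssh/config and the hosts: line in the playbook below:

shivan.xyz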

deploy-all.yml is the first playbook. Here is the code:

- hosts: shivan.xyz # this points to the desired host
  gather_facts: no
  become: True # this means we have sudo privileges
  roles:
    - role: 'deploy/common/' # this deploys common tasks (e.g. updating server)
      name: Common tasks
    - role: 'deploy/tubestats-app'
      name: Deploy Tubestats application
      vars:
        app_name: tubestats
        repo_link: https://github.com/shivans93/tubestats
        tubestats_port: 8999
        letsencrypt_email: <email>
    - role: 'deploy/optomcalc-app'
      name: Deploy OptomCalc application
      vars:
        app_name: optomcalc
        repo_link: https://github.com/shivans93/optomcalc
        app_port: 8000
        letsencrypt_email: <email>

The comments explain what each part of the playbook does. We can see that the playbook points to further files in the deploy directory for each specific application.

This moves us on to the deploy directory. Each application has its own directory containing files, tasks and templates. The tasks directory contains a main.yml with the playbook actions specific to that application. Let's have a look at optomcalc's main.yml playbook.

---
#
# Setting up nginx and certbot
#
- name: Installing nginx
  apt:
    name: nginx
    state: present

- name: Copy {{ app_name }}.shivan.xyz nginx config
  template:
    src: '{{ app_name }}.shivan.xyz'
    dest: /etc/nginx/sites-available/

- name: Activate the {{ app_name }}.shivan.xyz site
  file:
    src: /etc/nginx/sites-available/{{ app_name }}.shivan.xyz
    dest: /etc/nginx/sites-enabled/{{ app_name }}.shivan.xyz
    state: link

- name: Installing certbot using snap
  shell: 'snap install --classic certbot'

- name: Preparing certbot
  file:
    src: /snap/bin/certbot
    dest: /usr/bin/certbot
    state: link

- name: Generate certs
  shell: >
    certbot --nginx --email '{{ letsencrypt_email }}'
    --non-interactive --agree-tos
    -d '{{ app_name }}.shivan.xyz'

- name: Restart nginx.service
  systemd:
    state: restarted
    name: nginx

#
# Setting up project
#
- name: Create {{ app_name }} directory
  file:
    path: /home/shivan/{{ app_name }}
    state: directory

- name: Git pull/clone {{ app_name }} repo
  ansible.builtin.git:
    repo: '{{ repo_link }}'
    dest: /home/shivan/{{ app_name }}
    single_branch: yes
    version: main
    update: yes

- name: Copy .env file for {{ app_name }} app
  ansible.builtin.copy:
    src: .env
    dest: /home/shivan/{{ app_name }}/{{ app_name }}/{{ app_name }}/.env
    owner: 'shivan'
    group: 'shivan'
    mode: '0644'

- name: Build docker image and run
  shell: >
    cd /home/shivan/{{ app_name }} &&
    docker kill web &&
    docker-compose up -d --build

The above playbook installs NGINX as well as Certbot. In addition, Ansible takes the optomcalc.shivan.xyz config from the templates directory and copies it over to the server's NGINX sites-available directory, then creates a symbolic link to it in sites-enabled. Certbot is installed and run to ensure optomcalc.shivan.xyz has an SSL certificate.

Ansible then pulls (or clones, if the code is not already present) the code from GitHub, so we must ensure we commit and push our project code changes. The .env file is copied from the files directory; it contains sensitive keys and information, so it is not stored on GitHub.
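
Since .env holds secrets, it is worth making sure it can never be committed in the first place:

# keep the secrets file out of version control
echo ".env" >> .gitignore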

Lastly, shell commands are used to kill the previous Docker container, then rebuild the image and start a new container. This does mean there is some downtime, but that is okay since these are only hobby projects.

Finally, to run Ansible, we can use the following commands:

# to deploy all projects (-K prompts for the sudo password)
pipenv run ansible-playbook deploy-all.yml -i hosts -K
# to deploy only the optomcalc project (saving time)
pipenv run ansible-playbook deploy-optomcalc.yml -i hosts -K

Conclusion

The inspiration for this project was to be able to host multiple hobby projects on one virtual machine with one domain. The aim is to save money but also to answer Daniel Bourke’s question.

We could use ssh but we can take this a step further with Ansible, which automates the process.

First, we need to get our projects ready for production using each framework’s specific method, as well as containerising them with Docker. We then created playbooks that automate much of the process. The playbooks install and set up NGINX as well as Certbot. On top of this, they pull the individual project code from GitHub, then build the Docker images and run the containers.

Thank you for reading. I hope this helps you on your journey to putting your work out there! Please provide any feedback; I’m listening and willing to learn.

Resources and more links

  1. LearnLinuxTV, Getting started with Ansible (2020), https://youtu.be/3RiVKs8GHYQ
  2. Dot JA, Deploying Django with Docker Compose, Gunicorn, Nginx (2020), https://youtu.be/vJAfq6Ku4cI
  3. Django Lessons, Django in Production — From Zero to Hero (2020), https://youtu.be/JzUwiux2YRo
  4. Red Hat Inc., Ansible Documentation (2020), https://docs.ansible.com/
  5. Docker Inc., Docker Documentation (2021), https://docs.docker.com/
  6. Django Software Foundation (2021), https://www.djangoproject.com/
  7. Streamlit Inc., Streamlit (2021), https://streamlit.io/
  8. NGINX, nginx documentation (2021), https://nginx.org/en/docs/
  9. Electronic Frontier Foundation, Certbot (2021), https://certbot.eff.org/
