Issue
I'm using Docker for the first time, and my main problem is that I'm not sure where the issue actually lies. I'm running an EC2 instance on AWS that is properly configured to host a site.
If I have an update to my git repo, I can manually ssh into the EC2 instance and run
git pull
sudo docker-compose down -v --remove-orphans
sudo docker-compose -f docker-compose.prod.yml up -d --build
and this works like a charm.
However, I've been working on a pipeline with GitHub Actions. It builds the pushed code into a Docker image, pushes that image to a private ECR repo, then sshes into the EC2 instance, pulls the new image I just pushed, takes the old containers down, and (well, herein lies the issue) restarts with the new image.
I've debugged quite a bit and I am fairly certain that the image pushed to my private ECR repo is correct, and I'm certain that it is present on the EC2 instance and that I have access to it.
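For reference, the build-and-push half of the workflow follows the standard ECR pattern, roughly like this (a sketch from memory rather than the exact file; the step names, action versions, and the AWS credential secret names here are placeholders):
- name: Checkout code
  uses: actions/checkout@v2
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: us-east-1
- name: Build and push image
  run: |
    aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ${{ secrets.AWS_ECR_REGISTRY }}
    docker build -t ${{ secrets.AWS_ECR_REGISTRY }}/my-repo:latest .
    docker push ${{ secrets.AWS_ECR_REGISTRY }}/my-repo:latest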
The relevant deploy portion of my GitHub Actions workflow is:
- name: Permission for ecr
  run: ssh staging 'aws ecr get-login-password --region us-east-1 | sudo docker login --username AWS --password-stdin ${{ secrets.AWS_ECR_REGISTRY }}'
- name: Pull new image
  run: ssh staging 'sudo docker pull ${{ secrets.AWS_ECR_REGISTRY }}/my-repo:latest'
- name: Stop running container
  run: ssh staging 'cd vms; sudo docker-compose down -v --remove-orphans'
- name: Start new container
  run: ssh staging 'cd vms; sudo docker-compose -f docker-compose.prod.yml up -d --build'
I think the issue might actually lie in my Dockerfile itself:
FROM python:3
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
# the line I'm concerned about (Dockerfiles don't support inline comments, so it's on its own line here)
COPY . /code/
EXPOSE 8000
ENTRYPOINT python manage.py collectstatic --noinput && python manage.py runserver 0.0.0.0:8000
since that line copies everything in the current directory into /code in the image. I think this might be a problem because, even if I push and pull the Docker image correctly, when I actually run sudo docker-compose -f docker-compose.prod.yml up -d --build, my best guess is that whatever is in the current directory gets rebuilt into an image and started, whereas what I want is for the new image I pushed to be the one that starts up.
Edit: here's the docker-compose.prod.yml file:
version: "3.8"
services:
web:
build:
context: ..
dockerfile: ./docker/Dockerfile.prod
command: gunicorn vms.wsgi:application --bind 0.0.0.0:8000
volumes:
- ..:/code
expose:
- 8000
nginx:
build: ../nginx
ports:
- 1337:80
depends_on:
- web
How can I do this?
Solution
The problem is that you're building the Docker image again on the server. Since you already push the built image to ECR, you don't need to build it there at all; you can simply pull that image and run it out of the box.
Use this docker-compose.prod.yml file instead:
version: "3.8"
services:
web:
image: <path-to-your-docker-image-in-ecr>
command: gunicorn vms.wsgi:application --bind 0.0.0.0:8000
nginx:
image: <nginx-image-you-need>
ports:
- 1334:80
depends_on:
- web
Note that I have removed the build step from both of your services and replaced it with image. You have to fill in the proper values there.
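For example, assuming an account ID of 123456789012 and the us-east-1 region (placeholders; substitute your own registry, repo, and tag), the web service's image line would look like this:
  web:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest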
And replace the start command in your workflow with the following (note that the --build flag is gone):
sudo docker-compose -f docker-compose.prod.yml up -d
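In the workflow itself, the deploy steps then stay the same except that the last one drops --build:
- name: Pull new image
  run: ssh staging 'sudo docker pull ${{ secrets.AWS_ECR_REGISTRY }}/my-repo:latest'
- name: Stop running container
  run: ssh staging 'cd vms; sudo docker-compose down -v --remove-orphans'
- name: Start new container
  run: ssh staging 'cd vms; sudo docker-compose -f docker-compose.prod.yml up -d'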
Hope this helps you. Cheers 🍻!!!
Answered By - Eranga Heshan