OpenCV is not working in WSGI Apache server - apache2

jpg_original = base64.b64decode(img_str.encode("utf-8"))
jpg_as_np = np.frombuffer(jpg_original, dtype=np.uint8)
img_matlab = cv2.imdecode(jpg_as_np, flags=1)
I am trying to decode a base64 image on an Apache server. The numpy operations work, but control gets stuck at the imdecode call. I am new to Apache WSGI and don't know what I am missing. I am sharing my apache2.conf:
<VirtualHost *:80>
    # Python application integration
    WSGIDaemonProcess /apache-flask processes=1 threads=4 python-path=/var/www/apache-flask/:/usr/bin/python
    WSGIProcessGroup /apache-flask
    WSGIScriptAlias / /var/www/apache-flask/apache-flask.wsgi
    <Directory "/var/www/apache-flask/app/">
        Header set Access-Control-Allow-Origin "*"
        WSGIProcessGroup /apache-flask
        WSGIApplicationGroup %{GLOBAL}
        Options +ExecCGI
        Order deny,allow
        Allow from all
    </Directory>
    Alias /static /var/www/apache-flask/app/static
    <Directory /var/www/apache-flask/app/static/>
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
I have built a Docker image for the Apache server. Below is the Dockerfile.
# Set the base image
FROM debian:latest
# File Author / Maintainer
MAINTAINER nitinashu1995@gmail.com
RUN apt-get update -y
RUN apt-get install python3-pip apache2 libapache2-mod-wsgi-py3 -y
RUN apt-get -y install libapache2-mod-wsgi-py3
RUN apt-get install -y apache2
RUN pip3 install -U Flask
RUN pip3 install pymongo
RUN apt-get install iputils-ping -y
RUN apt-get install vim -y
RUN pip3 install pytz
RUN pip3 install -U flask-cors
RUN apt update
# Copy over and install the requirements
COPY ./app/requirements.txt /var/www/apache-flask/app/requirements.txt
RUN pip install -r /var/www/apache-flask/app/requirements.txt
# Copy over the apache configuration file and enable the site
COPY ./apache-flask.conf /etc/apache2/sites-available/apache-flask.conf
RUN a2ensite apache-flask
RUN a2enmod headers
# Copy over the wsgi file
COPY ./apache-flask.wsgi /var/www/apache-flask/apache-flask.wsgi
COPY ./run.py /var/www/apache-flask/run.py
COPY ./app /var/www/apache-flask/app/
RUN a2dissite 000-default.conf
RUN a2ensite apache-flask.conf
# Link Apache logs to Docker's stdout.
RUN ln -sf /proc/self/fd/1 /var/log/apache2/access.log && \
    ln -sf /proc/self/fd/1 /var/log/apache2/error.log
EXPOSE 80
WORKDIR /var/www/apache-flask
RUN pip install opencv-python-headless==4.5.3.56 numpy
RUN apt install cmake -y
RUN pip install face_recognition
CMD /usr/sbin/apache2ctl -D FOREGROUND
I am not able to run OpenCV with Apache2; I am getting a timeout:
[Tue Dec 27 10:15:03.701128 2022] [wsgi:error] [pid 11:tid 140363481642752] [client 192.168.0.138:41568] Timeout when reading response headers from daemon process '/apache-flask':
/var/www/apache-flask/apache-flask.wsgi
I am able to run OpenCV inside the container directly, so it seems some configuration is missing.
I am not able to find what's wrong.
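For what it's worth, the three decode lines can be sanity-checked on their own, outside Apache. A minimal, self-contained sketch (the payload bytes are a placeholder, not a real JPEG, and the numpy/OpenCV steps are guarded so it runs even where they are not installed):

```python
import base64

# Minimal sketch of the decode pipeline from the question.
# NOTE: the payload below is a placeholder, NOT a real JPEG, so the
# cv2.imdecode step would return None for it; with real JPEG bytes it
# returns a numpy image array.
raw_bytes = b"\xff\xd8\xff\xe0placeholder-payload"
img_str = base64.b64encode(raw_bytes).decode("utf-8")

# Step 1: base64 text -> raw bytes (pure Python, independent of Apache)
jpg_original = base64.b64decode(img_str.encode("utf-8"))
assert jpg_original == raw_bytes

# Steps 2-3 mirror the question's code; guarded so this sketch also runs
# in environments without numpy/OpenCV.
try:
    import numpy as np
    import cv2
    jpg_as_np = np.frombuffer(jpg_original, dtype=np.uint8)  # zero-copy view of the bytes
    img = cv2.imdecode(jpg_as_np, flags=cv2.IMREAD_COLOR)    # None when bytes aren't a valid image
except ImportError:
    pass

print("base64 round-trip ok")
```

If this runs fine from the shell inside the container but hangs under mod_wsgi, the problem is in the WSGI environment rather than in the decode code itself.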

Related

Unable to run a react app inside docker container using nginx

I'm trying to run a React app inside a Docker container using a Docker multi-stage build.
The server is written in Deno, and I tried to add an nginx server to dispatch requests from the frontend to the server.
Dockerfile:
FROM ubuntu:20.04
# install curl
RUN apt-get update && apt-get install -y curl unzip sudo nginx
# install node.js v16.x
RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash -
RUN apt-get install -y nodejs
# install postgresql
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    curl vim wget \
    build-essential \
    libpq-dev && \
    apt-get update && apt-get install -y tzdata nodejs yarn postgresql postgresql-contrib
# install deno v1.21.3
RUN curl -fsSL https://deno.land/install.sh | sh -s v1.21.3
ENV DENO_INSTALL="/root/.deno"
ENV PATH="${DENO_INSTALL}/bin:${PATH}"
# Install denon
RUN deno install -qAf --unstable https://raw.githubusercontent.com/nnmrts/denon/patch-4/denon.ts
# The working directory of the project
WORKDIR /app
# Copy the app package and package-lock.json file
COPY frontend/build/ /usr/share/nginx/html
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
# Copy the backend directory
RUN mkdir backend
COPY backend/deps.ts ./backend/deps.ts
RUN cd ./backend && deno cache --unstable deps.ts
ADD backend ./backend
EXPOSE 3000
COPY ./script.sh script.sh
CMD ./script.sh
script.sh:
#!/bin/bash
# create the database in postgres
service postgresql start && su - postgres -c "psql -U postgres -d postgres -c \"alter user postgres with password 'postgres';\"" \
    && su - postgres -c "psql postgres -c \"create database db;\""
# start nginx
sudo service nginx start
# Populate database tables
cd backend && deno run --unstable --allow-env --allow-net database/seeds.ts && denon start &
# Wait for any process to exit
wait -n
# Exit with status of process that exited first
exit $?
nginx.conf:
server {
    listen 3000;
    root /usr/share/nginx/html;
    location / {
        try_files $uri $uri/ /index.html;
        proxy_pass http://localhost:5000/;
    }
}
I build the image:
docker build . -t server-app
And I create a new container:
docker run -p 3000:3000 server-app
Everything is working and the Deno server is listening on port 5000, but when I open the app at localhost:3000/ I get this error:
The connection was reset
The connection to the server was reset while the page was loading.
The site could be temporarily unavailable or too busy. Try again in a few moments.
If you are unable to load any pages, check your computer’s network connection.
If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.
What's wrong with the config I have done?
I fixed the issue by updating nginx.conf. The original location / mixed try_files with proxy_pass, so requests were always resolved by try_files and never reached the Deno backend; the fix serves the static build from / and proxies backend calls through a dedicated /api location:
server {
    listen 3000;
    server_name localhost;
    root /usr/share/nginx/html;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html =404;
    }
    location /api {
        rewrite ^/api/?(.*) /$1 break;
        proxy_pass http://localhost:5000;
    }
}

You don't have permission to access this resource [while deploying angular on apache]

docker-compose
version: "2"
services:
django-apache2:
build: .
container_name: django-apache2
ports:
- '8005:80'
volumes:
- ./www:/var/www/html/django_demo_app
- ./angular/dist/ng7:/var/www/dist
- ./demo_site.conf:/etc/apache2/sites-available/000-default.conf
Dockerfile
FROM python:3.7-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    apt-utils \
    vim \
    curl \
    apache2 \
    apache2-utils
RUN apt-get -y install \
    libapache2-mod-wsgi-py3
ADD ./demo_site.conf /etc/apache2/sites-available/000-default.conf
COPY ./www /var/www/html
COPY ./angular/dist/ng7 /var/www/html/dist
EXPOSE 80
RUN rm /var/www/html/index.html
RUN chmod -R 777 /var/www/html/dist/index.html
CMD ["apache2ctl", "-D", "FOREGROUND"]
demo_site.conf
<VirtualHost *:80>
    ServerName localhost
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html/dist
    Alias "/" "/var/www/html/dist"
    <Directory "/var/www/html/dist">
        Options FollowSymLinks Includes ExecCGI
        AllowOverride All
        Require all granted
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
I ran ng build and placed the build artifacts, including index.html, under /var/www/html/dist.
I'm getting "Forbidden: You don't have permission to access this resource." after hitting localhost:8005.
What am I missing here?

How to make a single React Docker build/image to run through all environments?

I'm trying to deploy a React application in a single Docker container able to run through dev, preprod and prod on an OpenShift platform, to which I can only push tagged Docker images. In order to do this I have:
a React application generated with create-react-app hosted on Github
a Dockerfile at project root containing the docker build/run process
a .gitlab-ci.yml file containing the continuous integration checks and deploy methods
an accessible OpenShift platform
What I can do:
I can easily generate a production build, /build, that is added during the Docker image build phase and runs with a production context, or I can build at container start (but just writing that feels bad).
The problem:
This generated image isn't able to run through all environments, and I don't want a specific build for each environment.
What I want:
I want to be able to generate a single Docker image that will run through all environments, without having to install dependencies or build again when only running is needed.
Here is my Dockerfile:
FROM nginx:1.15.5-alpine
RUN addgroup --system app \
    && adduser --uid 1001 --system --ingroup app app \
    && rm -rf /etc/nginx/conf.d/default.conf \
    && apk add --update nodejs nodejs-npm \
    && mkdir /apptmp
COPY . /apptmp
RUN chmod 777 /apptmp
COPY config/default.conf /etc/nginx/conf.d/
COPY config/buildWithEnv.sh .
RUN touch /var/run/nginx.pid && \
    chown -R app:app /var/run/nginx.pid && \
    chown -R app:app /var/cache/nginx && \
    chown -R app:app /usr/share/nginx/html
EXPOSE 8080
USER 1001
CMD sh buildWithEnv.sh
And here is the script buildWithEnv.sh
#!/bin/bash
echo "===> Changin directory: /apptmp ..."
cd /apptmp
echo "===> Installing dependencies: npm install ..."
npm install
echo "===> Building application: npm run build ..."
npm run build
echo "===> Copying to exposed html folder ... "
rm -rf /usr/share/nginx/html/*
cp -r build/* /usr/share/nginx/html/
echo "===> Launching http server ... "
nginx -g 'daemon off;'
A possible approach is described in this blog post.
Basically: you use placeholders for all environment-specific parts in your static resources. Then you configure nginx's sub_filter to replace those at runtime with the environment-specific values.
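A minimal sketch of that idea, assuming a placeholder string like API_URL_PLACEHOLDER was baked into the build and the real URL is known at container start (the names and URL are hypothetical; sub_filter requires nginx to be built with ngx_http_sub_module, which the official packages include):

```nginx
server {
    listen 8080;
    root /usr/share/nginx/html;
    location / {
        # Rewrite the placeholder in responses as they are served
        sub_filter 'API_URL_PLACEHOLDER' 'https://api.example.com';
        sub_filter_once off;  # replace every occurrence, not just the first
        # text/html is filtered by default; add the JS bundles too
        sub_filter_types application/javascript;
        try_files $uri $uri/ /index.html;
    }
}
```

Compared with the sed approach below, this keeps the static files untouched on disk and does the substitution per response.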
I finally found a way.
Build application
The first step is to make a static build of your application that is environment agnostic. (In my situation this is actually done by the CI, so I directly retrieve the static build in the Docker build part.)
Create Nginx conf
In your project, create a file holding the default configuration for nginx, plus the reverse proxy configuration you need, and fill proxy_pass with values that are easily and uniquely replaceable.
server {
    gzip on;
    gzip_types text/plain application/xml application/json;
    gzip_vary on;
    listen 8080;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
    location /api/ {
        proxy_set_header x-clientapikey REACT_APP_BAMBOO_API_KEY;
        proxy_pass REACT_APP_SERVICE_BACK_URL;
    }
    location /auth {
        proxy_set_header Authorization "Basic REACT_APP_AUTHENTIFICATION_AUTHORIZATION";
        proxy_pass REACT_APP_SERVICE_AUTH_URL;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
OpenShift configuration
In your OC application, go to the "Environment" tab and add all the environment variables you will use in your nginx conf.
Dockerfile build
In the Dockerfile build part, move your build to its correct position in the nginx html folder, and handle your configuration files, including nginx's.
FROM nginx:1.15.5-alpine
RUN addgroup --system app \
    && adduser --uid 1001 --system --ingroup app app \
    && rm -rf /etc/nginx/conf.d/default.conf \
    && apk --update add sed
COPY ./build /usr/share/nginx/html/
COPY config/default.conf /etc/nginx/conf.d/
RUN chmod 777 -R /etc/nginx/conf.d/
COPY config/nginxSetup.sh .
RUN touch /var/run/nginx.pid && \
    chown -R app:app /var/run/nginx.pid && \
    chown -R app:app /var/cache/nginx && \
    chown -R app:app /usr/share/nginx/html
EXPOSE 8080
USER 1001
CMD sh nginxSetup.sh
Dockerfile run
In the Dockerfile run part (CMD), just use the sed command line tool to replace the placeholders in your nginx reverse proxy configuration:
#!/usr/bin/env bash
sed -i "s|REACT_APP_SERVICE_BACK_URL|${REACT_APP_SERVICE_BACK_URL}|g" /etc/nginx/conf.d/default.conf
sed -i "s|REACT_APP_SERVICE_AUTH_URL|${REACT_APP_SERVICE_AUTH_URL}|g" /etc/nginx/conf.d/default.conf
sed -i "s|REACT_APP_BAMBOO_API_KEY|${REACT_APP_BAMBOO_API_KEY}|g" /etc/nginx/conf.d/default.conf
sed -i "s|REACT_APP_AUTHENTIFICATION_AUTHORIZATION|${REACT_APP_AUTHENTIFICATION_AUTHORIZATION}|g" /etc/nginx/conf.d/default.conf
echo "===> Launching http server ... "
nginx -g 'daemon off;'
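The substitution step can be sanity-checked locally before baking it into the image. A small sketch (the file path and backend URL are made up for illustration; assumes GNU sed, as in the Alpine/Linux image):

```shell
#!/bin/sh
# Write a one-line conf containing a placeholder, then substitute it
# exactly the way the setup script does at container start.
printf 'proxy_pass REACT_APP_SERVICE_BACK_URL;\n' > /tmp/default.conf
REACT_APP_SERVICE_BACK_URL="https://backend.example.com"
sed -i "s|REACT_APP_SERVICE_BACK_URL|${REACT_APP_SERVICE_BACK_URL}|g" /tmp/default.conf
cat /tmp/default.conf   # prints: proxy_pass https://backend.example.com;
```

Using `|` as the sed delimiter matters here, since the URLs being substituted contain `/`.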
And here you are.

Issue using certbot with nginx

I'm actually working on a web app; I use React for the frontend and Golang for the backend. These two programs are hosted separately on two VMs on Google Compute Engine. I want to serve my app over https, so I chose nginx to serve the frontend in production. First I made my nginx config file:
#version: nginx/1.14.0 (ubuntu)
server {
listen 80 default_server;
listen [::]:80 default_server;
root /var/www/banshee;
server_name XX.XXX.XX.XXX; #public IP of my frontend VM
index index.html;
location / {
try_files $uri /index.html =404;
}
}
For this part everything works as expected, but after that I want to serve my app over https, following this tutorial. I installed the packages software-properties-common, python-certbot-apache and certbot, but when I tried
sudo certbot --nginx certonly
I get the following message:
gdes@frontend:/etc/nginx$ sudo certbot --nginx certonly
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Could not choose appropriate plugin: The requested nginx plugin does not appear to be installed
The requested nginx plugin does not appear to be installed
I searched on Google and here, and I still can't figure out which plugin is missing, or another way to fix this.
Does someone have an idea to help me?
Thanks a lot :)
I was trying to create a Let's Encrypt certificate using certbot for my subdomain and had the following issue.
Command:
ubuntu@localhost:~$ certbot --nginx -d my_subdomain.website.com -d my_subdomain2.website.com
Issue:
The requested Nginx plugin does not appear to be installed
Solution:
Ubuntu 20+
ubuntu@localhost:~$ sudo apt-get install python3-certbot-nginx
Earlier Versions
ubuntu@localhost:~$ sudo apt-get install python-certbot-nginx
You will need to replace
apt install python-certbot-nginx
with
apt install python3-certbot-nginx
You can install the Certbot nginx plugin with the following commands:
add-apt-repository ppa:certbot/certbot
apt update
apt install python-certbot-nginx
You have to re-install a python3 version of Let's Encrypt's certbot.
Run
sudo apt-get install python3-certbot-nginx
On Debian 10, certbot returns a "could not find a usable nginx binary" error because /usr/sbin is missing from the PATH. Add /usr/sbin to the PATH:
export PATH=/usr/sbin:$PATH
Then certbot can make a certificate for nginx
certbot --nginx -d <server name> --post-hook "/usr/sbin/service nginx restart"
As explained on the Debian wiki page for letsencrypt.

How to Add SSL to Google Cloud Wordpress Launcher Site

Google provides a WordPress launcher, although it is in beta. I have tried it, adding a custom domain via Google Cloud DNS, but I still have not succeeded in adding a custom domain with SSL (https).
Any ideas?
Wordpress from Google Click to Deploy launches on Google Compute Engine, not Google App Engine, meaning you are getting an entire Debian virtual machine, not just an App Engine instance. The App Engine instructions are not applicable.
Here's the process I used (replace "www.veggie.com" with your domain):
Go to Deployment Manager and select your Wordpress deployment.
Under "Get Started with WordPress", click "SSH" to open a Google Cloud Shell console on the Debian virtual machine hosting your site.
If you haven't already, generate a CSR using openssl req -new -newkey rsa:2048 -nodes -keyout www_veggie_com.key -out www_veggie_com.csr.
You will be prompted with some questions. Answer them using letters and numbers only. For example:
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New Mexico
Locality Name (eg, city) []:Albuquerque
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Veggie Inc # put "NA" if not applicable
Organizational Unit Name (eg, section) []:NA # put "NA" if not applicable
Common Name (e.g. server FQDN or YOUR name) []:www.veggie.com # MUST BE the website you are securing. Use *.veggie.com if you purchased a wildcard certificate
Email Address []:webmaster@veggie.com
A challenge password []: # just leave this blank
An optional company name []: # leave this blank too
Move the private key to a safe place, e.g. sudo mv www_veggie_com.key /etc/ssl/ssl.key/
View the CSR (Certificate Signing Request) file using cat www_veggie_com.csr. It should look something like this:
-----BEGIN CERTIFICATE REQUEST-----
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
2398dfjk3290fdsjk3290slk093koldfj3j0igr0/4387yvdjkn4398fdh92439h
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
2398dfjk3290fdsjk3290slk093koldfj3j0igr0/4387yvdjkn4398fdh92439h
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
2398dfjk3290fdsjk3290slk093koldfj3j0igr0/4387yvdjkn4398fdh92439h
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
2398dfjk3290fdsjk3290slk093koldfj3j0igr0/4387yvdjkn4398fdh92439h
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
2398dfjk3290fdsjk3290slk093koldfj3j0igr0/4387yvdjkn4398fdh92439h
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
2398dfjk3290fdsjk3290slk093koldfj3j0igr0/4387yvdjkn4398fdh92439h
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
sdkfjhksdjfhkjsdvbksdjfkhsdkfhskdjfhskjdfhksdjfhkdsjvbnksjksjkjh
2398dfjk3290fdsjk3290slk093koldfj3j0igr0/4387yvdjkn4398fdh92439h
3fjkbdjgkedkj4vie929ckw0gfjdfgjs90q=
-----END CERTIFICATE REQUEST-----
Copy the contents of the CSR file to your clipboard (in Google Cloud Shell, just highlight the text with your mouse and hit Ctrl+C).
Go to the site where you purchased the certificate and find the option to Activate the certificate. You should be prompted to upload or copy and paste the CSR. If you are prompted to confirm the server type, it is an Apache server. After I did that, my certificate issuer sent me the certificates via email.
Once you have your certificates, return to the Google Cloud Shell.
Use the gear menu > Upload File to upload your SSL certificates to your server. I put the certificates in /etc/ssl/ssl.crt/.
Enter sudo nano /etc/apache2/sites-available/wordpress.conf to use Nano to edit your server's configuration file to point to your certificate(s) and your key file. My wordpress.conf only had a <VirtualHost *:80> section, so I added a <VirtualHost *:443> section at the bottom:
<VirtualHost *:443>
    ServerAdmin webmaster@veggie.com
    ServerName www.veggie.com:443
    DocumentRoot /var/www/html
    # Copy <Directory /> and other settings from <VirtualHost *:80> here as well
    SSLEngine on
    SSLCertificateFile /etc/ssl/ssl.crt/www_veggie_com.crt
    SSLCertificateKeyFile /etc/ssl/ssl.key/www_veggie_com.key
    SSLCertificateChainFile /etc/ssl/ssl.crt/www_veggie_com.ca-bundle
</VirtualHost>
Restart the Apache server using sudo service apache2 restart
Try visiting your homepage via https (e.g. https://www.veggie.com) and see if it worked.
This is a bit of a complicated process. Despite Google's efforts to https the whole internet, and the fact that every App Engine app gets a secure appspot.com subdomain, adding your own domain with your own certificate is a bit complicated.
The process is documented here
1. SSH into the server
SSH into the server running your HTTP website as a user with sudo privileges.
2. Install snapd
You'll need to install snapd and make sure you follow any instructions to enable classic snap support.
Follow these instructions on snapcraft's site to install snapd.
https://snapcraft.io/docs/installing-snap-on-debian
3. Ensure that your version of snapd is up to date
Execute the following instructions on the command line on the machine to ensure that you have the latest version of snapd.
sudo snap install core; sudo snap refresh core
4. Remove certbot-auto and any Certbot OS packages
If you have any Certbot packages installed using an OS package manager like apt, dnf, or yum, you should remove them before installing the Certbot snap to ensure that when you run the command certbot the snap is used rather than the installation from your OS package manager. The exact command to do this depends on your OS, but common examples are sudo apt-get remove certbot, sudo dnf remove certbot, or sudo yum remove certbot.
If you previously used Certbot through the certbot-auto script, you should also remove its installation by following the instructions here.
5. Install Certbot
Run this command on the command line on the machine to install Certbot.
sudo snap install --classic certbot
6. Prepare the Certbot command
Execute the following instruction on the command line on the machine to ensure that the certbot command can be run.
sudo ln -s /snap/bin/certbot /usr/bin/certbot
7. Choose how you'd like to run Certbot
Either get and install your certificates...
Run this command to get a certificate and have Certbot edit your Apache configuration automatically to serve it, turning on HTTPS access in a single step.
sudo certbot --apache
Or, just get a certificate
If you're feeling more conservative and would like to make the changes to your Apache configuration by hand, run this command.
sudo certbot certonly --apache
8. Test automatic renewal
The Certbot packages on your system come with a cron job or systemd timer that will renew your certificates automatically before they expire. You will not need to run Certbot again, unless you change your configuration.
You can test automatic renewal for your certificates by running this command:
sudo certbot renew --dry-run
If that command completes without errors, your certificates will renew automatically in the background.
9. Confirm that Certbot worked
To confirm that your site is set up properly, visit https://yourwebsite.com/ in your browser and look for the lock icon in the URL bar.
Source:
https://certbot.eff.org/lets-encrypt/debianstretch-apache
Here are the steps I used to get a free SSL certificate on a WordPress VM launched in Google Cloud.
Make sure "Allow HTTP" and "Allow HTTPS" are selected in the VM settings, and that the overall firewall rules for your project include 443/80 rules (which are there by default).
Enabling the free SSL certificate:
Change the domain name below from my-domain.com to the required domain name.
Step 1:
wget https://dl.eff.org/certbot-auto && chmod a+x certbot-auto
sudo ./certbot-auto certonly --webroot -w /var/www/html/ -d my-domain.com -d www.my-domain.com
Step 2:
sudo vi /etc/apache2/sites-available/default-ssl.conf
Add the following after ServerAdmin:
<Directory /var/www/html/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    allow from all
</Directory>
At the end of the file, comment out the two SnakeOil cert lines and add the lines below.
SSLCertificateFile "/etc/letsencrypt/live/my-domain.com/cert.pem"
SSLCertificateKeyFile "/etc/letsencrypt/live/my-domain.com/privkey.pem"
SSLCertificateChainFile "/etc/letsencrypt/live/my-domain.com/chain.pem"
Step 3:
sudo vi /etc/apache2/sites-available/wordpress.conf
Remove all 3 existing lines and add the lines below:
<VirtualHost *:80>
    ServerAdmin webmaster@localhost
    DocumentRoot /var/www/html
    ServerName www.my-domain.com
    ServerAlias my-domain.com
    Redirect permanent / https://www.my-domain.com/
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /var/www/html/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>
    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
Step 4:
Restart Apache:
sudo a2ensite default-ssl
sudo a2enmod ssl
sudo service apache2 restart
Step 5 (optional):
Enable auto-renewal for the cert:
sudo mv certbot-auto /etc/letsencrypt/
sudo crontab -e
Add the following at the end:
45 2 * * 6 cd /etc/letsencrypt/ && ./certbot-auto renew && /etc/init.d/apache2 restart
Step 6:
Test that the https version works in the browser.
Only after making sure that https is working:
Go to WP-Admin:
Settings > General > change the site URL and host URL to https://my-domain.com
Note: if you type the wrong URL in step 6, you can lose web access to WordPress. After that, you have to follow other steps to regain access through SSH.
Hope this helps.
