How to make a live backup of your Raspberry Pi Ubuntu/Raspberry Pi OS Server to create live bootable ISO Images on an external drive

Over the course of a few years, I have been teaching myself how to build secure and functional web servers on the Raspberry Pi. While building these systems, I ran into a problem: I could not figure out how to fully back up my running Ubuntu 20.04 server system (including root files) into a bootable image file, in case of severe disaster or hacking. Most solutions I tried used backup software that was either too complicated, or didn’t produce a full bootable disk image backup on an external disk. After some browsing of the Raspberry Pi Forums, I finally found the solution. A developer by the name of RonR on the Raspberry Pi Forums has developed a simple set of scripts to accomplish exactly this… a FULL (or SEVERAL FULL) bootable image backups of a live, running Ubuntu/Raspberry Pi OS Linux system. You can find the original post with the downloadable scripts here in the Raspberry Pi forums.

What we are going to accomplish in this tutorial is the following:

  1. Create full daily bootable backups of your running Ubuntu or Raspberry Pi OS server in bootable ISO/IMG format, which update themselves (using incremental updates) to match the current live running server every 7 days (i.e. mondaybackup.img is updated to reflect changes in the running OS since last Monday).
  2. Create full monthly bootable backups of your Raspberry Pi running Ubuntu Server or Raspberry Pi OS in bootable image format, which update themselves to match the current live running server every month (i.e. monthlybackup.img is updated to reflect changes in the running OS since last month).

Things required for this tutorial are a Raspberry Pi 3 or 4, an external USB drive as big as possible and no smaller than 300GB in data capacity, and a machine (preferably a Raspberry Pi 4) running either Ubuntu Server 20.04 (untested on other versions of Ubuntu) or Raspberry Pi OS. All of this tutorial will be done in the terminal on the command line. Now, LET'S BEGIN!

1) SSH into your Ubuntu/Raspberry Pi OS server, and download “Image File Utilities” (image-utils.zip) from https://forums.raspberrypi.com/viewtopic.php?t=332000#p1511694 by typing the following command from inside your user folder (/home/yourusername/):

cd ~/ && wget -O image-files.zip "https://forums.raspberrypi.com/download/file.php?id=54873"

2) Create a separate folder, move the downloaded zip file to it, and decompress/extract the zipped files/scripts to that folder. Finally, delete the original zipped file.

mkdir -p ~/image-files && mv ~/image-files.zip ~/image-files && cd ~/image-files && sudo apt install unzip && unzip ~/image-files/image-files.zip && rm ~/image-files/image-files.zip

3) Change owner of scripts to root, and change permissions of scripts to be executable.

sudo chown root:root ~/image-files/image* && sudo chmod +x ~/image-files/image*

4) Create the directory /usr/local/bin if it doesn’t exist, move all of the unzipped files to that directory, and rename README.txt to image-readme.txt.

sudo mkdir -p /usr/local/bin/ && sudo mv ~/image-files/* /usr/local/bin/ && sudo mv /usr/local/bin/README.txt /usr/local/bin/image-readme.txt

5) Add /usr/local/bin to your $PATH if it is not already in your $PATH. If it is in your $PATH, skip this step. To check if it’s in your $PATH, run the following command…

echo $PATH

To add it to your path run the following command. This will add PATH=/usr/local/bin:$PATH to the bottom of your ~/.bashrc file, and then source it to enable it in your $PATH permanently.

echo 'PATH=/usr/local/bin:$PATH' >> ~/.bashrc && source ~/.bashrc

6) Now that you have all of your scripts in the correct location with the correct permissions, it’s time to plug your hard drive into either a Mac, Linux, or Windows computer and format it as “exFAT” with the partition scheme set to “Master Boot Record”. If you format it on a Linux desktop using gparted or gnome-disks, you may also use the ext2 or ext4 filesystem. We prefer ext4. Now go ahead and format your external USB drive for backups, and then plug it into your Ubuntu/Raspberry Pi OS/Debian server. Once it’s plugged into your server, continue by entering the following command:

sudo blkid

My Output:

/dev/sda: LABEL="USBDRIVE" UUID="20737ag0-52b8-5d38-831b-9d570f0ffec4" TYPE="ext4" /dev/sdb1: LABEL_FATBOOT="system-boot" LABEL="system-boot" UUID="1CB3-D69A" TYPE="vfat" PARTUUID="4ad3ea62-01" /dev/sdb2: LABEL="writable" UUID="58ccb32d-ef91-4182-930d-e423439cf786" TYPE="ext4" PARTUUID="3ac8ed52-03" /dev/loop1: TYPE="squashfs" /dev/loop2: TYPE="squashfs" /dev/loop3: TYPE="squashfs" /dev/loop4: TYPE="squashfs" /dev/loop5: TYPE="squashfs" /dev/loop6: TYPE="squashfs" /dev/loop7: TYPE="squashfs" /dev/loop8: TYPE="squashfs" /dev/sdc1: LABEL="BACKUPDISK" UUID="521B-2C71" TYPE="exfat"

As you can see, my exFAT “BACKUPDISK” is device /dev/sdc1, and it has a UUID of 521B-2C71. Make note of this UUID.
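If you'd rather not pick through the full blkid dump, blkid can print just the UUID for a single device (shown as a comment below since it needs the real hardware), and the same field can also be pulled out of a saved output line with sed:

```shell
# On the server, this prints only the UUID (adjust /dev/sdc1 to your device):
#   sudo blkid -s UUID -o value /dev/sdc1
# The same field can be extracted from a saved blkid line with sed:
line='/dev/sdc1: LABEL="BACKUPDISK" UUID="521B-2C71" TYPE="exfat"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
echo "$uuid"
```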

7) Create the directory for your “BACKUPDISK“.

sudo mkdir -p /mnt/BACKUPDISK

8) Gain root privileges and back up your /etc/fstab file. (Note: running sudo su first drops you into a root shell; the cp is then run as a second command inside that shell.)

sudo su
cp /etc/fstab /etc/fstab.bak

Then add your BACKUPDISK to its own /etc/fstab entry to force the disk to auto-mount upon every boot. Do this by substituting the UUID for /dev/sdc1 (or whatever your drive is) into the UUID section of the command below, setting the filesystem type field to match how you formatted the drive (ext4 or exfat), and then pressing enter.

echo 'UUID=YOUR-UUID-GOES-HERE-CHANGETHIS       /mnt/BACKUPDISK       ext4    defaults,noatime       0       2' >> /etc/fstab

In my case, since my drive is formatted as exFAT, the command looks like this:

echo 'UUID=521B-2C71       /mnt/BACKUPDISK       exfat    defaults,noatime       0       0' >> /etc/fstab

9) Next, mount your new USB drive.

mount -a

Check to see that it mounted correctly.

lsblk
sdc 8:32 1 16M 0 disk
└─sdc1 8:33 1 16M 0 part /mnt/BACKUPDISK

Note: You may want to try rebooting at this point to be sure that your drive mounts automatically upon boot.
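Before (or instead of) a test reboot, util-linux can sanity-check the new fstab entry directly (findmnt --verify requires util-linux 2.29 or newer):

```shell
# Check /etc/fstab for syntax or usability problems without rebooting
sudo findmnt --verify
# Show which device is mounted at the backup mount point
findmnt /mnt/BACKUPDISK
```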

10) Create your initial backup image on your BACKUPDISK.

If running Ubuntu Server on a Raspberry Pi, run the following command to create your initial backup image:

sudo image-backup -u --initial /mnt/BACKUPDISK/00-sundaybackup.img,,8000

If running Raspberry Pi OS on a Raspberry Pi, run the following command to create your initial backup image:

sudo image-backup --initial /mnt/BACKUPDISK/00-sundaybackup.img,,8000

Wait for the backup to complete, then move on to the next step.
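As an optional sanity check (a sketch; adjust the filename if you chose a different one), you can confirm the finished image contains a partition table before cloning it:

```shell
# A good image should list a small FAT boot partition and a larger Linux partition
sudo fdisk -l /mnt/BACKUPDISK/00-sundaybackup.img
```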

11) After the backup is complete, you will need to make a copy of the backup for each day of the week. You can run this single line command to do so. This may take a while, like a half hour to a few hours.

sudo cp /mnt/BACKUPDISK/00-sundaybackup.img /mnt/BACKUPDISK/01-mondaybackup.img && sudo cp /mnt/BACKUPDISK/00-sundaybackup.img /mnt/BACKUPDISK/02-tuesdaybackup.img && sudo cp /mnt/BACKUPDISK/00-sundaybackup.img /mnt/BACKUPDISK/03-wednesdaybackup.img && sudo cp /mnt/BACKUPDISK/00-sundaybackup.img /mnt/BACKUPDISK/04-thursdaybackup.img && sudo cp /mnt/BACKUPDISK/00-sundaybackup.img /mnt/BACKUPDISK/05-fridaybackup.img && sudo cp /mnt/BACKUPDISK/00-sundaybackup.img /mnt/BACKUPDISK/06-saturdaybackup.img

12) Make a copy of the initial image for your monthly backups. This may take a while.

sudo cp /mnt/BACKUPDISK/00-sundaybackup.img /mnt/BACKUPDISK/07-monthlybackup.img

13) Create a crontab as the root user (via sudo) to run incremental updates to each image for every day of the week, as well as on the first of every month. This will keep your .img files updated with a clone of the OS as it was prior to the current day of the week, as well as a clone of the OS as it was prior to the current month. So you will have backups for every day of the last week, and one single backup from the first of last month. Feel free to modify the crontab and disk images to suit your own backup schedule.

sudo crontab -e

For Ubuntu Server Users, copy and paste the following text into the bottom of your crontab window.

########################################
########## Full ISO Backups ############
########################################
# Incremental ISO Backup for every Sunday at 4am
0 4 * * 0 image-backup -u /mnt/BACKUPDISK/00-sundaybackup.img
# Incremental ISO Backup for every Monday at 4am
0 4 * * 1 image-backup -u /mnt/BACKUPDISK/01-mondaybackup.img
# Incremental ISO Backup for every Tuesday at 4am
0 4 * * 2 image-backup -u /mnt/BACKUPDISK/02-tuesdaybackup.img
# Incremental ISO Backup for every Wednesday at 4am
0 4 * * 3 image-backup -u /mnt/BACKUPDISK/03-wednesdaybackup.img
# Incremental ISO Backup for every Thursday at 4am
0 4 * * 4 image-backup -u /mnt/BACKUPDISK/04-thursdaybackup.img
# Incremental ISO Backup for every Friday at 4am
0 4 * * 5 image-backup -u /mnt/BACKUPDISK/05-fridaybackup.img
# Incremental ISO Backup for every Saturday at 4am
0 4 * * 6 image-backup -u /mnt/BACKUPDISK/06-saturdaybackup.img
# Incremental ISO Backup for every first of the month
@monthly image-backup -u /mnt/BACKUPDISK/07-monthlybackup.img

For Raspberry Pi OS Users, copy and paste the following text into the bottom of your crontab window.

########################################
########## Full ISO Backups ############
########################################
# Incremental ISO Backup for every Sunday at 4am
0 4 * * 0 image-backup /mnt/BACKUPDISK/00-sundaybackup.img
# Incremental ISO Backup for every Monday at 4am
0 4 * * 1 image-backup /mnt/BACKUPDISK/01-mondaybackup.img
# Incremental ISO Backup for every Tuesday at 4am
0 4 * * 2 image-backup /mnt/BACKUPDISK/02-tuesdaybackup.img
# Incremental ISO Backup for every Wednesday at 4am
0 4 * * 3 image-backup /mnt/BACKUPDISK/03-wednesdaybackup.img
# Incremental ISO Backup for every Thursday at 4am
0 4 * * 4 image-backup /mnt/BACKUPDISK/04-thursdaybackup.img
# Incremental ISO Backup for every Friday at 4am
0 4 * * 5 image-backup /mnt/BACKUPDISK/05-fridaybackup.img
# Incremental ISO Backup for every Saturday at 4am
0 4 * * 6 image-backup /mnt/BACKUPDISK/06-saturdaybackup.img
# Incremental ISO Backup for every first of the month
@monthly image-backup /mnt/BACKUPDISK/07-monthlybackup.img

Exit and save crontab with “ctrl-x”, then press “y”, then hit “enter”.
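To confirm the jobs were actually saved, you can list the root crontab:

```shell
# Every image-backup entry you pasted should appear in the output
sudo crontab -l | grep image-backup
```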

FINALLY, your Raspberry Pi should now be making full backups of itself to the .img files on the external hard drive we set up. Every day of the week, an incremental backup will run against the IMG file that day corresponds to, leaving your backups no more than a week old at all times. You will also have an incremental backup run against its own img file on the first of every month, which always leaves you with a backup from the first of last month. I hope this tutorial helped, and thanks to RonR for his backup scripts.

14) This is the end of the tutorial, and if it worked for you (which it should have), you could spread the love back and donate some change to my bitcoin address. Please send any BTC donations to:

bc1q3klg86zej8y852hp04qv8569k4fe45jpjfj763

Sources: https://forums.raspberrypi.com/viewtopic.php?t=332000

The Trick to compiling Modsecurity-nginx (>v.1.24) on Raspberry Pi

To any users trying to compile the ModSecurity module for nginx 1.21.5 and up: there are some changes to be made, according to this github issue.

The issue is related to a change in nginx (now nginx is built with the PCRE2 library by default).
PCRE2 support must be added to the library (libmodsecurity) and then to the connector. Applying just the connector’s PR will lead to enormous memory leaks in regex processing.

Long story short: use the --without-pcre2 configure argument when building the ModSecurity-nginx v3 connector module.

So your full module configure line should look like this:
./configure --with-compat --without-pcre2 --add-dynamic-module=/usr/local/src/ModSecurity-nginx

How to compile the ngx_pagespeed (Nginx Pagespeed) module for ARM architecture on the Raspberry Pi 4, or any other aarch64 devices, running Ubuntu Server 20.04

Before you read:

To skip this tutorial and download a precompiled version of Pagespeed for Nginx-1.21.6 Mainline, you can check out the nerd-tech github link here: https://github.com/nerd-tech/Pagespeed-nginx-RaspberryPi/releases/download/pagespeed/v1.21.6.ngx_pagespeed.so.for.Nginx-v1.21.6-Mainline.On.Ubuntu-20.04-aarch64.RaspberryPi3+4.zip For this module to be compatible, you must be running Nginx Mainline v1.21.6 on Ubuntu Server 20.04 for the Raspberry Pi 3 or Raspberry Pi 4. Check the github page for more compatibility info on this precompiled Pagespeed module for the Raspberry Pi.

Preface:

If you are using a Raspberry Pi as a LEMP (Linux, Nginx, MariaDB, PHP) server to host your website, you may want to consider speeding up your site on the server level by using the pagespeed module for Nginx. In this tutorial, we are going to learn how to compile (build) and install google’s ngx_pagespeed module for the Nginx Mainline Version 1.21.6, on a Raspberry Pi 3, Raspberry Pi 4, or any 64 bit ARM (aarch64) Device.

Natively, the ngx_pagespeed module does not support ARM devices, nor is ARM support on its development roadmap. This poses a problem for users running a web server on aarch64 devices such as the Raspberry Pi 3 and above, mostly because the PSOL binaries required to compile the pagespeed module are not 64-bit ARM or armv7l compatible. Fortunately, a developer by the name of Mogwai (@gusco) on gitlab has solved this issue by patching the PSOL binaries to make them compatible with aarch64/armv7l operating systems and their respective devices. This, in turn, makes compiling the Nginx pagespeed module possible on ARM devices.

As a reference note, this tutorial was created by merging/modifying three different tutorials and sets of instructions into one clean ARM-specific tutorial. You can find the original sources/tutorials/content on Linuxbabe’s page, on the Mogwai Gitlab page, and on the official google pagespeed documentation page. Keep in mind, this has only been tested by me on Ubuntu Server (64-bit) 20.04 for the Raspberry Pi 4. However, it should also work on any 64-bit ARM device running Ubuntu 20.04, or any 64-bit version of Raspberry Pi OS. So, without further ado, let’s begin:

Before you begin: You should do this on a machine (or separate SD card) that IS NOT your real production web server or LEMP server! This will keep build clutter from being scattered throughout your server. Just be sure to use the same OS, same architecture, and same Nginx version.

Step 1) Install the Nginx Mainline repository and corresponding signing keys

sudo nano /etc/apt/sources.list.d/Nginx.list

Then copy and paste the following text into the file.

# Nginx Official Repositories for Ubuntu 20.04 Focal

# Official Nginx Stable Repository for Ubuntu 20.04 Raspi
# Stable Nginx and Nginx source repositories
#deb [arch=arm64] https://nginx.org/packages/ubuntu focal nginx
#deb-src [arch=arm64] https://nginx.org/packages/ubuntu focal nginx

# Official Nginx Mainline Repository for Ubuntu 20.04 Raspi
deb [arch=arm64] https://nginx.org/packages/mainline/ubuntu focal nginx
deb-src [arch=arm64] https://nginx.org/packages/mainline/ubuntu focal nginx

Then update or add the Official Nginx Signing Key to your GPG Keyring.

cd ~/ && curl -O https://nginx.org/keys/nginx_signing.key && sudo apt-key add ./nginx_signing.key

Then type:

apt-key list

and you should see the following text appear.

pub   rsa2048 2011-08-19 [SC] [expires: 2024-06-14]
      573B FD6B 3D8F BC64 1079  A6AB ABF5 BD82 7BD9 BF62
uid           [ unknown] nginx signing key <signing-key@nginx.com>

/etc/apt/trusted.gpg.d/ubuntu-keyring-2012-archive.gpg

Be sure the fingerprints (573B FD6B 3D8F BC64 1079 A6AB ABF5 BD82 7BD9 BF62) match. Then proceed to update, upgrade, and finally install nginx.

sudo apt update && sudo apt upgrade && sudo apt install nginx

Once installed you can check and be sure your version of nginx is 1.21.6 with nginx -v. Then proceed to the next step, which is taken almost directly from linuxbabe.com.

STEP 2) Download nginx source package

Modify this line by changing “yourusername” to your real Ubuntu login user name. Then copy the rest of the line and paste it in your linux terminal.

sudo chown yourusername:yourusername -R /usr/local/src/ && sudo mkdir -p /usr/local/src/nginx && cd /usr/local/src/nginx/ && sudo apt install dpkg-dev && sudo apt source nginx && ls

The final “ls” command should show the following output:

nginx-1.21.6
nginx_1.21.6-1~focal.debian.tar.xz
nginx_1.21.6-1~focal.dsc
nginx_1.21.6.orig.tar.gz

STEP 3) Download the Pagespeed source package

Here is where we do something different for ARM devices. The PSOL (PageSpeed Optimization Libraries) binaries provided by google are not compatible with ARM devices. Therefore, we are going to use a patched version of the PSOL. Paste this entire one-liner to clone the pagespeed module from github, switch it to the stable branch, and then download and extract the aarch64-patched version of the PSOL into the proper directory.

cd /usr/local/src && git clone https://github.com/apache/incubator-pagespeed-ngx.git && cd incubator-pagespeed-ngx/ && git checkout latest-stable && wget https://gitlab.com/gusco/ngx_pagespeed_arm/-/raw/master/psol-1.15.0.0-aarch64.tar.gz && tar xvf psol-1.15.0.0-aarch64.tar.gz && sed -i 's/x86_64/aarch64/' config && sed -i 's/x64/aarch64/' config && sed -i 's/-luuid/-l:libuuid.so.1/' config

STEP 4) Configure the Pagespeed module.

Change into the Nginx source directory and install the build dependencies.

cd /usr/local/src/nginx/nginx-1.21.6 && sudo apt build-dep nginx && sudo apt install uuid-dev

Finally, we need to configure the environment with the exact same arguments that are already in your currently installed Nginx. To do that, you have to first check your nginx arguments with the following command:

nginx -V

Should return:

configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.21.6/debian/debuild-base/nginx-1.21.6=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

If the --with-compat and --with-cc-opt='-DNGX_HTTP_HEADERS' arguments are not in there, then you must be sure to add them to the next command, or your module won’t be compatible with your Nginx. If --with-compat is in the above set of arguments, then just copy and paste the above arguments into the next command along with --with-cc-opt='-DNGX_HTTP_HEADERS'. Either way, the arguments --with-compat and --with-cc-opt='-DNGX_HTTP_HEADERS' MUST be in the next command. Lastly, since we are compiling pagespeed as a dynamic module, we must also include the --add-dynamic-module=/usr/local/src/incubator-pagespeed-ngx argument. To summarize, the new arguments look like this:

--add-dynamic-module=/usr/local/src/incubator-pagespeed-ngx --with-compat --with-cc-opt='-DNGX_HTTP_HEADERS'

RUNNING MY FINAL ./CONFIGURE COMMAND LOOKS LIKE THIS (adjust according to your output of the nginx -V command):

./configure --add-dynamic-module=/usr/local/src/incubator-pagespeed-ngx --with-compat --with-cc-opt='-DNGX_HTTP_HEADERS'

So in my case, since I am compiling for the latest Mainline version of Nginx v1.21.6, my entire configure command, including arguments (with the NEW Additional Arguments first) would be as follows:

./configure --add-dynamic-module=/usr/local/src/incubator-pagespeed-ngx --with-cc-opt='-DNGX_HTTP_HEADERS' --with-compat --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.21.6/debian/debuild-base/nginx-1.21.6=. -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'

So to complete this step, just copy and paste:

./configure --add-dynamic-module=/usr/local/src/incubator-pagespeed-ngx --with-cc-opt='-DNGX_HTTP_HEADERS' --with-compat

before the rest of the output of nginx -V and then hit enter to start configuring! Don’t forget to check for an already existing --with-compat argument before you paste the above line.

STEP 5) Make the Pagespeed module!

cd /usr/local/src/nginx/nginx-1.21.6/ && make modules

Then copy your newly created module to the Nginx modules directory.

cd objs && sudo cp ngx_pagespeed.so /etc/nginx/modules/

Step 5b) – (Recommended) – Copy the pagespeed module to your production server

If you built this module on an alternate copy of ubuntu rather than your main production server (which you should have), then you should copy your newly created ngx_pagespeed.so module to a usb stick from your development server, and then from your usb stick to your production server.

First insert your ext4 or dos formatted usb stick into your development server. Then make a mount directory for it.

sudo mkdir /mnt/usb1

Next, check the device name of your usb stick.

lsblk

It should show up as /dev/sda or /dev/sdb. Mount it (using whichever name it shows up as in lsblk):

sudo mount /dev/sda /mnt/usb1

then copy your module to your mounted usb stick:

sudo cp /usr/local/src/nginx/nginx-1.21.6/objs/ngx_pagespeed.so /mnt/usb1/

Unmount your usb with sudo umount /dev/sda then insert your usb into your production server, mount it, and copy the ngx_pagespeed.so module to your modules folder in /etc/nginx/modules/.

STEP 6) Load the module

sudo nano /etc/nginx/nginx.conf

Then add the following line to the beginning of the file:

load_module modules/ngx_pagespeed.so;

Place it underneath these lines, so that the top of the file looks like this:

user www-data;
worker_processes auto;
pid /run/nginx.pid;
load_module modules/ngx_pagespeed.so;

Now it’s your job to set up and configure the Pagespeed filter settings in Nginx. You can find the list of filters here.
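Before relying on the new module, it’s worth checking that the configuration parses (which also confirms the module loads) and then reloading Nginx:

```shell
# Validate the config, then reload nginx only if the test passes
sudo nginx -t && sudo systemctl reload nginx
```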

Congrats, you can STOP because you are all done! Now you have a working pagespeed module for your 64 bit ARM Raspberry Pi 4!

References:

https://gitlab.com/gusco/ngx_pagespeed_arm

https://www.modpagespeed.com/doc/build_ngx_pagespeed_from_source

https://github.com/apache/incubator-pagespeed-ngx

How to access and delete saved system Mail (mbox) in Mac OS Big Sur terminal

If you are using crontab or some other terminal-based system applications on Mac OS Big Sur, you may encounter a message saying “You have mail” after opening your Mac OS Terminal application. This is a system message telling you that you have mail in your system's mailbox, NOT in the Mac Mail GUI application. To read your mail, you can type “mailx” in your Terminal application. After you are done reading your mail, you may type “q” to quit the system mail application. After this, MacOS will tell you that it is “saving your messages to mbox”.

Once your mac has saved these messages, you can re-access these old messages by typing the following inside your terminal app:

mail -f ~/mbox

Mac OS will then show you all of your saved messages in mbox. It will also give you the number of saved mbox messages, like “71 messages”, in the top right corner of your terminal.

To delete these messages (all of them), you can issue the following command where “x” is the first message you want to delete, and “y” is the last message you want to delete.

d x-y

For example, I have 71 saved messages in my mbox. I want to delete all of them. So I would issue the following two commands.

mail -f ~/mbox

… to activate mbox, and…

d 1-71

to delete all of my 71 saved mbox messages.

Once deleted, you can exit out of mbox by typing q and then hitting “enter”.
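If you just want to wipe every saved message without opening the mail client at all, note that they all live in the single file ~/mbox, so (assuming you don’t need any of them) you can simply empty or delete that file:

```shell
# All saved system mail lives in ~/mbox; emptying the file removes every message
: > ~/mbox      # truncate to zero bytes (keeps the file)
# rm ~/mbox     # or delete the file entirely
```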

There you go. This is just a quick tip on how to delete and manage your Mac OS System mail in mbox.

How to batch convert images to WebP with Imagemagick on MacOS

1. Install Homebrew by opening up your terminal application and copying and pasting the following line of code into it. Then hit enter.

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

2. Run the following command in your terminal application:

brew install imagemagick

3. Create an empty folder on your mac.

mkdir ~/Pictures/WebP

4. Navigate to the folder of png or jpg images you want to convert to WebP in your terminal.

cd ~/YourImages/

5. Issue the following command replacing “.png” with the file extension of the files you need converted (the source files).

magick mogrify -format webP -path ~/Pictures/WebP/ *.png

6. To change the quality setting of your images while using lossless compression, you can use the “-quality” and “-define” arguments like so…

magick mogrify -format webP -quality 80 -define webp:lossless=true -path ~/Pictures/WebP/ *.png

where -quality 80 sets the encoder quality parameter to 80. (With webp:lossless=true the image data itself is preserved exactly; the quality setting mainly trades encoding effort for file size.)

7. To use lossy compression with reduced quality, you can use the “-define webp:lossless=false” argument like so…

magick mogrify -format webP -quality 60 -define webp:lossless=false -path ~/Pictures/WebP/ *.png

This will reduce your image quality to 60% of the original image, but shrink the filesize of your photos tremendously. It will also make your original image quality unrecoverable, since the compression is lossy. So don’t delete your original photos unless you are absolutely sure you won’t need them at a larger size and better quality. For website use, it is recommended that you scale image quality down to anywhere from 60%-80% to decrease page and image load times. However, if you are already using small images this may not be necessary, and your choice of compression and filesize (lossy or lossless) is relatively subjective, depending upon what you are trying to accomplish with your website. If you want high quality UHD or 4K photos all the time and can accept slower load times, stick with lossless compression at a high quality setting. If you want faster load times with lower quality images, the opposite extreme is lossy compression at a low quality setting, which should shrink your files considerably and prep them for quick load times on a website.
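If your source folder mixes PNG and JPEG files, a small loop (a sketch reusing the hypothetical ~/Pictures/WebP destination from above) can convert each extension in one pass:

```shell
# Convert every png/jpg/jpeg image in the current folder to WebP at quality 75
for ext in png jpg jpeg; do
  # Only invoke mogrify when files with this extension actually exist
  if ls *."$ext" >/dev/null 2>&1; then
    magick mogrify -format webp -quality 75 -path ~/Pictures/WebP/ *."$ext"
  fi
done
```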

8. Wait for the images to be converted. Your newly converted images can be found in your ~/Pictures/WebP folder once it’s complete.

How to secure copy (SCP) a file from a local machine to a remote server using a Yubikey 5

For anyone having difficulties with the scp command using yubikey, here is the proper syntax I used to copy a file from a local machine to a remote server:

SYNTAX:
scp -i ~/.ssh/id_rsa_yubikey.pub -P 22 local_file_to_be_transferred.txt remote_username@local_server_ip_address:/remote/directory_of_server

where -i = your yubikey identities file, -P = your ssh port, remote_username = your username that you use to log in to your server

In my case:
scp -i ~/.ssh/id_rsa_yubikey.pub -P 40001 /Users/Danrancan/Downloads/myfile.zip boopi@192.168.1.2:/home/Danrancan

This should successfully securely copy your file from your local machine to your remote machine or server.

How to install Letsencrypt Certificates on Open VPN Access Server Web Interface

In this tutorial we are going to show you how to install Let's Encrypt certificates on your OpenVPN Access Server’s web interface. This tutorial assumes you are using an Ubuntu- or Debian-based distribution.

STEP 1:

SSH into your openvpn access server in your terminal, and install certbot:

sudo apt update && sudo apt install certbot

STEP 2:

Configure your DNS A records at your registrar (or DNS provider, such as Cloudflare) to point vpn.yourdomain.com to your server’s public IP address.

STEP 3:

Run certbot and enter the answers to its questions.

sudo certbot certonly

How would you like to authenticate with the ACME CA?

1: Spin up a temporary webserver (standalone)
2: Place files in webroot directory (webroot)

Select the appropriate number [1-2] then [enter] (press ‘c’ to cancel): 1

Enter email address (used for urgent renewal and security notices): contact@nerd-tech.net

Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory

(A)gree/(C)ancel: A


Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let’s Encrypt project and the non-profit
organization that develops Certbot? We’d like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.

(Y)es/(N)o: N


Please enter in your domain name(s) (comma and/or space separated) (Enter ‘c’
to cancel): vpn.yourdomain.com (ex: vpn.nerd-tech.net)
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for vpn.nerd-tech.net
Waiting for verification…
Cleaning up challenges

IMPORTANT NOTES:

  • Congratulations! Your certificate and chain have been saved at:
    /etc/letsencrypt/live/vpn.nerd-tech.net/fullchain.pem
    Your key file has been saved at:
    /etc/letsencrypt/live/vpn.nerd-tech.net/privkey.pem
    Your cert will expire on 2021-12-18. To obtain a new or tweaked
    version of this certificate in the future, simply run certbot
    again. To non-interactively renew all of your certificates, run
    “certbot renew”
  • Your account credentials have been saved in your Certbot
    configuration directory at /etc/letsencrypt. You should make a
    secure backup of this folder now. This configuration directory will
    also contain certificates and private keys obtained by Certbot so
    making regular backups of this folder is ideal.
  • If you like Certbot, please consider supporting our work by: Donating to ISRG / Let’s Encrypt: https://letsencrypt.org/donate
    Donating to EFF: https://eff.org/donate-le

Now enter the following commands, replacing vpn.mydomain.com with your own subdomain (ex: vpn.nerd-tech.net).

sudo /usr/local/openvpn_as/scripts/sacli --key "cs.priv_key" --value_file "/etc/letsencrypt/live/vpn.mydomain.com/privkey.pem" ConfigPut
sudo /usr/local/openvpn_as/scripts/sacli --key "cs.cert" --value_file "/etc/letsencrypt/live/vpn.mydomain.com/cert.pem" ConfigPut
sudo /usr/local/openvpn_as/scripts/sacli --key "cs.ca_bundle" --value_file "/etc/letsencrypt/live/vpn.mydomain.com/chain.pem" ConfigPut
sudo /usr/local/openvpn_as/scripts/sacli start
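Since the certificate expires every 90 days, you may want these sacli commands to re-run automatically after each renewal. Certbot executes any script placed in /etc/letsencrypt/renewal-hooks/deploy/ after a successful renewal, so one approach is a small deploy hook like the sketch below (the script name and the hardcoded domain are my own placeholders; adjust both):

```shell
# Write a renewal hook locally, then move it into certbot's deploy-hook
# directory. Certbot runs everything in that directory after each successful
# renewal, so the fresh certificate is pushed into Access Server automatically.
cat > openvpnas-deploy-hook.sh <<'EOF'
#!/bin/bash
DOMAIN="vpn.mydomain.com"   # change this to your own domain
LIVE="/etc/letsencrypt/live/$DOMAIN"
/usr/local/openvpn_as/scripts/sacli --key "cs.priv_key" --value_file "$LIVE/privkey.pem" ConfigPut
/usr/local/openvpn_as/scripts/sacli --key "cs.cert" --value_file "$LIVE/cert.pem" ConfigPut
/usr/local/openvpn_as/scripts/sacli --key "cs.ca_bundle" --value_file "$LIVE/chain.pem" ConfigPut
/usr/local/openvpn_as/scripts/sacli start
EOF
chmod +x openvpnas-deploy-hook.sh

# Then install it into place (run manually, needs root):
#   sudo mv openvpnas-deploy-hook.sh /etc/letsencrypt/renewal-hooks/deploy/
```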

RunStart warm None
{
    "active_profile": "Default",
    "errors": {},
    "last_restarted": "Sun Sep 19 10:09:45 2021",
    "service_status": {
        "api": "on",
        "auth": "on",
        "bridge": "on",
        "client_query": "restarted",
        "crl": "on",
        "daemon_pre": "on",
        "db_push": "on",
        "ip6tables_live": "on",
        "ip6tables_openvpn": "on",
        "iptables_live": "on",
        "iptables_openvpn": "on",
        "iptables_web": "restarted",
        "log": "on",
        "openvpn_0": "on",
        "subscription": "on",
        "user": "on",
        "web": "restarted"
    }
}
WILL_RESTART ['web', 'client']

Now, restart your OpenVPN Access Server.

sudo service openvpnas restart

Now you can browse to your new domain on port 943 (unless you changed the openvpnas default web interface port).

So open your web browser and go to https://vpn.yourdomain.com:943/admin

You should see a lock icon in your browser's address bar, indicating that you are now using your secure Let's Encrypt certificate.

FINALLY, you need to log into your admin web interface, and change your hostname to the hostname you created for it.

And that is how you install Let's Encrypt certificates on the OpenVPN Access Server web interface!

The best Linux tutorials on the Internet? Linuxbabe.com

I just wanted to give a heads up to users who flock to my tutorials, but are in need of advanced linux setups. I am not an expert, but just post the things that I learn as I go. Although my tutorials are relatively accurate, they are NOT for advanced users looking to really get the most out of their linux server experience. If you are looking for tutorials that have absolutely zero flaws, and are generally about setting up a proper server environment, please head to linuxbabe.com for some of the best tutorials out on the internet! Xiao, the owner of Linuxbabe.com, is a top notch pro, and really knows his stuff. So if you are trying to create an email server, a vpn server, a website, or secure your wordpress, etc. etc., then you should take my recommendation and check out Linuxbabe.com for all of your server and advanced tutorials!

How to install and update Mega command line (megacmd) on your Raspberry Pi running Ubuntu 20.04

This quick guide will teach you how to add the Mega.nz repository so you can easily install and upgrade the “Megacmd” and “Megasync” apps on your Linux distribution. This tutorial will show you how to do it specifically on Ubuntu 20.04; however, the instructions can be easily modified for any ARM-based Debian distribution.

STEP 1:

Go to the Mega.nz repository at https://mega.nz/linux/repo/ in your web browser, and select the folder that pertains to your working distribution. For the Raspberry Pi (because you need the ARM version), that is going to be the Raspbian_10.0 folder located at https://mega.nz/linux/repo/Raspbian_10.0/.

Then Go to your terminal and add the Release.key file to your apt repository:

wget https://mega.nz/linux/repo/Raspbian_10.0/Release.key && sudo apt-key add Release.key

STEP 2: Figure out your systems architecture.

Before adding mega.nz to your repository list, you first need to verify whether you are running the 32-bit (armhf) or 64-bit (arm64) version of Ubuntu or Raspberry Pi OS. If you are on 64-bit, you need to add the 32-bit architecture to your OS.

Verify your architecture with the following command:

dpkg --print-architecture

If the above command returns “armhf”, you are already on a 32-bit system, so skip the next step (STEP 2b) and go straight to STEP 3. If it returns “arm64”, continue with the next step (STEP 2b).
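The branching above can be condensed into a tiny helper if you prefer a scripted check; the `next_step` function name is just an illustrative choice, not part of dpkg:

```shell
# next_step: given the output of `dpkg --print-architecture`,
# print which step of this guide to perform next.
next_step() {
  case "$1" in
    armhf) echo "skip STEP 2b, go to STEP 3" ;;
    arm64) echo "continue with STEP 2b" ;;
    *)     echo "unsupported architecture: $1" ;;
  esac
}

# Real usage:
#   next_step "$(dpkg --print-architecture)"
```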

STEP 2b:

Add support for a 32-bit arm foreign architecture (armhf) with the following command:

sudo dpkg --add-architecture armhf

Verify you are now using armhf as a foreign architecture, with the following command:

dpkg --print-foreign-architectures

You should see “armhf” from this command. Now you may move on to STEP 3.

STEP 3:

Add the mega.nz repo to your apt sources by opening up your nano editor in terminal…

sudo nano /etc/apt/sources.list.d/mega.nz.list

then pasting the indicated code below:

# Source Repository for Mega-CMD and Mega Desktop (For Raspbian ARM)
# Updated Mega Repo with 4096-bit Release Key
deb [arch=armhf] https://mega.nz/linux/repo/Raspbian_10.0/ ./

Type “Control-X“, then “y“, then “Enter“, to save and quit your nano editor.
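If you'd rather skip the interactive editing, the same file can be written in one shot. A small sketch (the `write_mega_list` helper name is my own; in real use the target path is /etc/apt/sources.list.d/mega.nz.list, which needs root):

```shell
# write_mega_list: write the Mega repo entry to the given file path.
write_mega_list() {
  printf '%s\n' 'deb [arch=armhf] https://mega.nz/linux/repo/Raspbian_10.0/ ./' > "$1"
}

# Real usage (write to a temp file, then move it into place with sudo):
#   write_mega_list /tmp/mega.nz.list && sudo mv /tmp/mega.nz.list /etc/apt/sources.list.d/mega.nz.list
```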

Now, update your apt list, then install mega-cmd from your newly added repository with the following command:

sudo apt update && sudo apt install megacmd

Now, whenever you run the command sudo apt update && sudo apt upgrade, your “megacmd” installation will automatically update when updates are available.

To run your newly installed mega command line application run the following command:

mega-cmd

DONE!

How to install OpenVPN 3 client on Ubuntu 20.04

Today we are going to learn how to install the OpenVPN 3 client on Ubuntu 20.04 using the command line. For those who don’t know, the client is what connects your machine to your OpenVPN service provider and tunnels your traffic through that encrypted connection.

In this tutorial we will take the following steps to complete this task:

  1. Add the openvpn3 repository to your apt sources.list to get automatic updates.
  2. Install the OpenVPN3 repository signing key.
  3. Install OpenVPN3.
  4. Download and modify your my-openvpn-client-config-file.ovpn to work with openvpn3.
  5. Create a simple yet secure .autoload file so openvpn3 automatically loads your VPN profile at boot.
  6. Enable openvpn3 permanently so it connects on boot and reconnects after any unexpected disconnect.

Lets begin.

1st,

Open up your terminal and run the following command to add openvpn3 to your apt repository…

sudo nano /etc/apt/sources.list.d/openvpn3.list

Your nano text editor will open up and your terminal should be blank. Then you must copy and paste the following lines into your nano editor:

# OpenVPN3 Official Apt Repository for openvpn3.
deb https://swupdate.openvpn.net/community/openvpn3/repos focal main

Once you have pasted the text into your nano text editor (using the terminal), you can save and exit by typing “Control-X“, then hit “y” for the save option, then hit “Enter” to save and exit nano.

2nd,

Ensure your apt supports the https transport by installing apt-transport-https. Then install the OpenVPN3 repository signing key used by the openVPN 3 Linux packages. You can do all of this by running the following commands:

cd ~/
sudo apt install apt-transport-https && wget https://swupdate.openvpn.net/repos/openvpn-repo-pkg-key.pub && sudo apt-key add openvpn-repo-pkg-key.pub
rm ~/openvpn-repo-pkg-key.pub

Now, you can install your openvpn3 package with the following command:

sudo apt update && sudo apt install openvpn3

Now, navigate to /etc/openvpn3/autoload.

cd /etc/openvpn3/autoload/

Download your openvpn.ovpn configuration file from your VPN service provider and open it with a text editor. Then add the following to its configuration, with each option on its own separate line:

auth-user-pass
push-peer-info
resolv-retry infinite
persist-key
persist-tun
keepalive 10 120

Now, copy all of the text in your openvpn.ovpn file that you downloaded and edited, and paste it into a new file called “myvpn3client.conf” located in the /etc/openvpn3/autoload directory, using nano.

sudo nano /etc/openvpn3/autoload/myvpn3client.conf

Type ctrl+x, y, then Enter, to save your file.

Now, create your autoload file by opening up your nano editor with the following command:

cd /etc/openvpn3/autoload && sudo nano myvpn3client.autoload

Copy and paste the following text into the currently opened “myvpn3client.autoload” file with your nano editor.

{
   "autostart": true,
    "name": "myvpnclient",
    "acl": {
        "set-owner": "my_ubuntu_username"
    },
    "tunnel": {
        "ipv6": "no",
        "persist": true,
        "dns-fallback": "google",
        "dns-setup-disabled": false
    },
    "user-auth": {
        "username": "my_vpn_username",
        "password": "my_vpn_password"
    }
}

Fill in “my_ubuntu_username”, “my_vpn_username”, and “my_vpn_password” with your corresponding information. DO NOT DELETE THE QUOTES! Leave them. Your username should be the name that you registered when you set up your Ubuntu installation. It is also shown in your terminal prompt, to the left of your computer’s hostname, i.e. mrubuntu@mrubuntusdesktop.

Once you have finished filling in the blanks inside the quotes, press “control-X“, then “y” to save, and hit “Enter” to exit out of nano with a newly saved .autoload file.
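The .autoload file must be valid JSON, or the openvpn3-autoload service will fail to load it, so it's worth a quick sanity check before enabling anything. A minimal sketch using Python's built-in json.tool (any JSON validator works; the `validate_autoload` name is my own):

```shell
# validate_autoload: report whether the given file parses as JSON.
validate_autoload() {
  if python3 -m json.tool "$1" >/dev/null 2>&1; then
    echo "valid JSON"
  else
    echo "INVALID JSON: $1"
  fi
}

# Real usage:
#   validate_autoload /etc/openvpn3/autoload/myvpn3client.autoload
```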

Now, let’s secure the permissions for your myvpn3client.conf and myvpn3client.autoload files. Since the .autoload file contains your VPN password, restrict both files so only root can read them:

sudo chmod 600 /etc/openvpn3/autoload/myvpn3client.conf && sudo chmod 600 /etc/openvpn3/autoload/myvpn3client.autoload

Now we’re ready to start your VPN. The following command will automatically connect your VPN on boot, and reconnect it if your internet connection drops and comes back up. In other words, this will keep you connected to your VPN after a reboot or connection failure.

Run this last command to do so:

sudo systemctl enable openvpn3-autoload.service

Now reboot and check to see if your vpn is connected by running the following command:

curl https://ipinfo.io/ip

It should show the IP Address of your vpn provider.

Next,

let’s test to see if your DNS is leaking or not.

Download the command line DNS leak test from GitHub, and make it executable by your user by running the following command:

cd ~/ && curl https://raw.githubusercontent.com/macvk/dnsleaktest/master/dnsleaktest.sh -o dnsleaktest.sh && chmod +x dnsleaktest.sh

Run your dnsleaktest!

./dnsleaktest.sh

After a minute or so, it should show the IP addresses of your VPN provider. If it does not, then your dns may be leaking, and the leaktest will tell you that.

THE END

Understanding and Interpreting Posts and guides on Tonymacx86.com correctly

OK, you can consider this one of my major contributions to the Hackintosh community. I’m still a noob, so this took me a LOOOOOONG time, and it still needs user edits before the final version is posted. This is most likely the longest post I’ve ever written, but hopefully, it will start a thread that clears up many of the power management instructions for your Hackintosh.

Read More »

How to prepare, create, secure, organize and futureproof your children’s digital identity and assets in the modern age!

The other day I went over to my cousin’s, who has a wife and two kids. My cousin is what you would call an average parent overwhelmed by our infinite momentum into the digital age. Like many parents and adults his age, his young children are starting to understand electronics, computers, and technology a lot faster and better than he does. For the majority of you running a family, this is pretty much inevitable. Although this is mostly a good thing, it can potentially have unwanted effects and facilitate dubious (or at the very least, unconventional) technological behavior by our children, without us even knowing. A major debacle that I’m sure you are familiar with is properly organizing your digital life and identities (how many email addresses do you have by now, how many Facebook profiles do you have, is your email for your LinkedIn account different than your email for Facebook and Instagram, do you also have a work email, do you and your spouse share an email address and thus share contacts, possibly having duplicate contacts in each other’s address books? etc. etc.) into a cohesive structure.

Read More »

Keepass 2.43, The best Password manager for Mac OS that’s not for Mac OS… Until Now

I’ve always been fascinated with password managers, as without them, my life would be an utter mess. When trying different password managers for the Mac, I discovered that none of them were really perfect. Being a security freak, I frown upon security based applications that are riddled with private code and made from closed source. For those of you who don’t understand what that means, it means that only the company who creates the application can review and modify the code that the app is built on. This means, that the entire world outside of the developers for that company, are excluded from checking the app for security holes. Open source, is the exact opposite. Open source, allows the code for an app to be viewed transparently (as opposed to encrypted), by every software engineer or developer in the entire world. Often you will hear programmers screaming that open source is the most secure, and it is, because it effectively invites every programmer in the world to oversee the code and check it for bugs or security holes. There is a lot of strength when inviting the eyes of the world to check your work for errors, as opposed to only allowing the ten or fifteen people at your small company to check their code for errors. That being said, I wanted a cross platform open source password manager that stored my password database files locally or in my private cloud and had excellent encryption algorithms. After a lot of searching and sifting through apps on iOS, MacOS, windows, and ubuntu, I came to realize that the password manager of my desire didn’t actually exist.

I used to be an avid user of Datavault Password Manager, which is a pretty decent app that is compatible with Mac OS and iOS. However, it has no compatibility with Linux, and once again, is closed source (untrustworthy). The same goes for the rest of the password managers for Mac and iOS. Well, except one, that technically doesn’t exist for Mac (aside from closed source ports that aren’t compatible with Keepass 2.4 databases). The app is an open source app made for Windows called Keepass Password Safe, the most recent version being Keepass 2.4. It is full of great encryption features, security features, and plugins; best of all, it is completely open source, and quite possibly the most secure password manager in existence. Fortunately, it is also compatible with a great open source iOS app called MiniKeePass, which is also free to download. It’s compatible with Keepass 1 and Keepass 2 databases, meaning that you can sync your passwords from your iOS device to your Windows Keepass v2.4 app. This is AWESOME! But what about syncing it from iOS (or Windows for that matter) to your Mac? Well, until now, it wasn’t possible (at least not for the most up to date version of Keepass 2; the official Keepass website only offers an outdated Keepass 2.23 build for Mac OS). But fortunately, Nerd-Tech has created a solution. We have used Wine for Mac to port Keepass V2.40 to Mac OS, compatible with High Sierra. Furthermore, we have packaged it with the majority of plugins already installed. Our favorite is the auto mount plugin for VeraCrypt. Oh yes, Keepass 2 is compatible with VeraCrypt, one of the best, if not the best, encryption solutions for private data, EVER!

If you are looking for the best cross platform password manager ever, look no further than our custom ported version of KeePass 2.43 for Mac OS! Download it and start tinkering. Shortly, we will post a much longer write-up on how to sync all of your Keepass apps to one single cloud database and have them auto update across Windows, Mac, and iOS. Enjoy this free app, and feel free to post any questions!