Author: wililupy

  • Deploying Deepseek R1 in Ubuntu

    Hello everyone! Hope you all have been well. I’ve been messing around with AI and different models for my job as we are implementing AI in our software.

    I wanted to learn more about this, and it just so happened that Deepseek R1 was announced and I decided to start there. I originally installed this on my Macbook Pro and I installed a smaller model, and for the hardware, it worked well. However, my son needed the laptop so that he could record music so I restored it back to MacOS and am now using my old Linux laptop that I used when I was at Canonical. This laptop is a beast. It’s a little on the older side, but here’s what it has under the hood:

    • Intel 7th Gen Core i7 processor, 8 cores, 3.8 GHz
    • 32 GB DDR4 memory
    • 256GB SSD
    • 1TB HDD
    • Nvidia GeForce GTX 1050 with 4GB VRAM

    So, these are the steps that I did to install Deepseek R1 and Open-WebUI as a docker container on my laptop for testing.

    First thing I did was install Ollama, an open-source tool for running LLMs locally that supports the DeepSeek R1 models.

    To download and install Ollama, all you need to do is run the following command:

    curl -fsSL https://ollama.com/install.sh | sh

    After this, I had to add an Environment variable to the systemd service. The systemd service is located in /etc/systemd/system/ollama.service

    Under the [Service] section, add the following:

    Environment="OLLAMA_HOST=0.0.0.0"

    This allows Ollama to listen on all interfaces instead of just localhost. Since Open-WebUI runs in Docker, this works best; I kept running into issues getting Open-WebUI to connect to my DeepSeek model without it.
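
    As a side note, instead of editing the unit file the installer dropped in place, you can create a systemd override that survives reinstalls (a sketch; the drop-in just needs the same [Service] section with the Environment line):

    sudo systemctl edit ollama.service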

    Next, you need to reload the daemon and the Ollama Service:

    sudo systemctl daemon-reload
    sudo systemctl restart ollama.service
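
    A quick way to confirm Ollama is up and reachable (it listens on port 11434 by default; the curl should come back with a short status message):

    ss -tlnp | grep 11434
    curl http://localhost:11434/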

    Now, we need to load the model. I use the 8 Billion Parameter model since my laptop can handle that fairly easily. To load this model use the following command:

    ollama run deepseek-r1:8b

    There are other models you can use depending on your system. The 1.5 billion parameter model is the smallest and works fairly well on most systems; I ran it on Raspberry Pis and on my Mac laptop with 16GB of RAM and no GPU, and it ran well. To see the different models, you can check out the details on Ollama’s website here:

    https://ollama.com/library/deepseek-r1

    You will be dropped into the Ollama shell, where you can interact with the model. To exit, just type /bye at the prompt and you will be back at the Linux shell.
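
    If you would rather talk to the model over HTTP instead of the interactive shell (which is how Open-WebUI will talk to it later), Ollama exposes an API on the same port. A minimal sketch, assuming the 8b model has already been pulled:

    curl http://localhost:11434/api/generate -d '{"model": "deepseek-r1:8b", "prompt": "Why is the sky blue?", "stream": false}'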

    Next, we need to install a nice web front end. I use Open-WebUI since it works like ChatGPT and is super simple to set up.

    I run Open-WebUI as a Docker container on my laptop to keep things clean. If I ever want to stop using it, I can just remove the container, and updating the web front end is really easy with Docker containers.

    Make sure you install Docker on your machine. You can use snaps or apt; I followed the instructions on Docker’s website, and it’s pretty straightforward. After you install Docker, add yourself to the docker group, then log out and log back in so that the group membership gets applied.
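
    For reference, the quickest route using Ubuntu’s own archive looks something like this (a sketch; the docker.io package lags behind Docker’s official docker-ce packages):

    sudo apt update
    sudo apt install -y docker.io
    sudo usermod -aG docker $USER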

    I also had to install the Nvidia Container Toolkit so that I could use the GPU in my containers. To do this run the following command to add the repo to Ubuntu and then use Apt to install:

    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
        sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
        sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

    Next, we need to update the sources and install the toolkit:

    sudo apt update
    sudo apt install -y nvidia-container-toolkit
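
    Depending on how Docker was installed, you may also need to point Docker at the NVIDIA runtime before restarting it; this is the configure step from NVIDIA’s install docs:

    sudo nvidia-ctk runtime configure --runtime=docker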

    Next, we need to restart the Docker daemon so it uses the toolkit:

    sudo systemctl restart docker

    Once that has been completed, we need to pull the container:

    docker pull ghcr.io/open-webui/open-webui:cuda

    After this, run the following command to start the container and have it run at startup:

    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always --gpus all ghcr.io/open-webui/open-webui:cuda
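
    To confirm the container came up cleanly, check its status and watch the startup logs:

    docker ps --filter name=open-webui
    docker logs -f open-webui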

    Now, open your web browser, and point it to:

    http://localhost:3000

    On the landing page, set up a new admin user. Then select your model from the pull-down in the top left corner, ask your new chatbot a question, and you’re done.
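
    When a new Open-WebUI release comes out, updating is just a pull and a re-create; your chats and settings live in the open-webui volume, so they survive removing the container. A sketch of that workflow:

    docker pull ghcr.io/open-webui/open-webui:cuda
    docker stop open-webui && docker rm open-webui
    docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always --gpus all ghcr.io/open-webui/open-webui:cuda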

  • LetsEncrypt it!

    Hello! Quite a quick turnaround between blog posts for me. This one is going to be helpful for those of us who use wildcard certificates in our environment and found out that our SSL provider changed their policies based on industry standards, and now their certificates cost 200x more than they used to, so we are moving to an open-source and free solution.

    For those of you that don’t know what the previous paragraph means: Google and other major web site providers pushed for all communications on the Internet to be secured. To do this, we use SSL, or Secure Sockets Layer, certificates. These certificates verify and validate that the site you are on is the real one, and any information that you provide on it will be encrypted and secure. SSL certificates do this encrypting and signing to make sure everything is good. In the past, we used to have to spend hundreds, if not thousands, of dollars (like I did) for this capability. LetsEncrypt came about to make this free and accessible to everyone. The downside is that the certificates are only valid for 90 days instead of a year, but you get what you pay for.

    I am moving to this model because my SSL vendor, Digicert, changed their model, and now I can’t renew certificates without spending another $600 on top of the $5k I’ve already spent. So I am moving to LetsEncrypt.

    LetsEncrypt is a free certificate authority that works with a software package called certbot, which can automatically request, verify, and install trusted certificates on systems.

    My DNS host provider, however, is not one of their partners. They do allow me to edit records on the fly, which is important, since that is how LetsEncrypt verifies that you own the domain; it won’t generate a certificate if you don’t. This also means that I can’t automate the generation or deployment, and I have to run the following commands to update my certificates every 90 days. Some of my systems can be automated, like the one that runs my web server. However, I have other systems, like my Virtual Center server and my email server, that use a wildcard or a single certificate to cover multiple servers. This blog will discuss how to do this, mainly so that in 90 days I can remember how to do it.

    So, on to how to do this.

    First, install certbot on a machine. Since I’m a Linux person and I use Ubuntu, I installed this on my local machine:

    sudo apt update
    sudo apt install letsencrypt

    This installs the base LetsEncrypt software with no plugins. Since my DNS provider does not have a plugin, I have to do this manually.

    I also had to add the wildcard, or “*”, entry to my domain, so I logged in to my DNS provider and created an “A” record named *.lucaswilliams.net that points to my web server. This lets me use the certificate on any of my servers inside the lucaswilliams.net domain. Very useful for VMware virtual servers, email, and other servers that need HTTPS and SSL certificates.

    Once I created the wildcard entry in my DNS records, I went to the terminal on my Linux machine and typed the following:

    sudo certbot certonly --manual \
    --preferred-challenges=dns \
    --email user@domain.com \
    --server https://acme-v02.api.letsencrypt.org/directory \
    --agree-tos \
    -d *.domain.com

    For the --server https://acme-v02.api.letsencrypt.org/directory line: you have to use this server to create the certificate, as it is the ACME v2 endpoint that LetsEncrypt requires for wildcard certificates.

    After hitting Enter to start the process, I was presented with a prompt asking if I wanted to share my information and details about the certificate, to which I replied “N”, but you can if you want.

    The next prompt is the important one. It looks like the following:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Please deploy a DNS TXT record under the name:
    
    _acme-challenge.domain.com.
    
    with the following value:
    
    q12yr1dyFyrh143HHRTe42HH_hf#1d7&ewftgs8H
    
    Before continuing, verify the TXT record has been deployed. Depending on the DNS
    provider, this may take some time, from a few seconds to multiple minutes. You can
    check if it has finished deploying with aid of online tools, such as the Google
    Admin Toolbox: https://toolbox.googleapps.com/apps/dig/#TXT/_acme-challenge.domain.com.
    Look for one or more bolded line(s) below the line ';ANSWER'. It should show the
    value(s) you've just added.
    
    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Press Enter to Continue
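
    Before pressing Enter, you can also check from your own terminal that the TXT record has propagated (dig comes in the dnsutils package on Ubuntu):

    dig +short TXT _acme-challenge.domain.com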

    I then logged into my DNS provider, created this TXT record, and hit Enter in the terminal to complete the key generation. LetsEncrypt verified the record and created my certificates in /etc/letsencrypt/live/domain.com. You will need ‘root’ access for the live directory, so I ran sudo -i to change to the root user and access the certificates.

    I then copied the privkey.pem and fullchain.pem files to my servers and renamed them to whatever each system expects for the private key file and the certificate file.
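
    As an example of what that copy can look like (the hostnames and destination paths here are placeholders; use whatever your service expects):

    sudo scp /etc/letsencrypt/live/domain.com/privkey.pem user@server:/etc/ssl/private/server.key
    sudo scp /etc/letsencrypt/live/domain.com/fullchain.pem user@server:/etc/ssl/certs/server.crt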

    The biggest takeaway from this is that LetsEncrypt will let us create certificates for our systems for free; however, they are only valid for 90 days, which means we have to do this every 3 months. Systems whose names don’t change, and where the certificate is only used to validate the site and name, can be fully automated. For more complex certificates, like wildcards that also cover email verification and signing, the manual process above is the way to go.
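
    Certbot can also tell you what it has issued and when each certificate expires, which is handy for remembering when the 90 days are almost up:

    sudo certbot certificates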

    If any of you know of a better way of doing this, please let me know and I’ll share it and give you credit for the improvement!

  • Replacing failed disks in a MD RAID in Ubuntu Server

    Hello everyone! Been a moment since my last blog update. This is a special one that I have been wanting to write, but wanted to wait until I actually had to do it so I can show real world examples, and boy, is this one for the record books.

    So, my secondary KVM server has a 5-disk hot-swappable chassis that I bought on NewEgg about 7 years ago. It lets you install 5 SATA disks, which are connected from the chassis to the motherboard’s 5 SATA ports. This allows me to hot swap the hard drives if they ever fail, and well, two of them did about a month ago. The system is set up as RAID-5: four of the disks are members of the RAID and the 5th disk is a hot spare. Disks 4 and 5 failed together. Basically, disk 4 failed, and while disk 5 was being rebuilt in as the 4th disk, it failed too. Luckily the array was still good, but now I need to replace the failed disks.

    I bought 2 new 2TB disks from NewEgg. Unfortunately, the system does not automatically detect drives being removed or installed, so after pulling the failed disks I had to run the following commands for the system to notice the change:

    sudo -i
    echo "0 0 0" >/sys/class/scsi_host/host0/scan
    echo "0 0 0" >/sys/class/scsi_host/host1/scan
    echo "0 0 0" >/sys/class/scsi_host/host2/scan
    echo "0 0 0" >/sys/class/scsi_host/host3/scan

    I then listed the /dev/ directory to make sure that /dev/sdd and /dev/sde were no longer being seen, since they had been removed. I also checked the RAID configuration to make sure they were no longer listed:

    mdadm -D /dev/md0
    mdadm -D /dev/md1

    Both arrays no longer listed the failed disks, so I’m ready to physically add the new disks.

    I installed the new disks. Now I need to re-scan the bus for Linux to see the disks:

    echo "0 0 0" >/sys/class/scsi_host/host0/scan
    echo "0 0 0" >/sys/class/scsi_host/host1/scan
    echo "0 0 0" >/sys/class/scsi_host/host2/scan
    echo "0 0 0" >/sys/class/scsi_host/host3/scan

    I then listed the /dev directory and I can now see the new disks, sdd and sde.

    I then needed to make sure they had the correct partition layout to work with my existing array. For this I used the sfdisk command to dump the partition layout from an existing member and apply it to the new disks:

    sfdisk -d /dev/sda > partitions.txt
    sfdisk /dev/sdd < partitions.txt
    sfdisk /dev/sde < partitions.txt
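
    A quick sanity check before touching the array is to compare the partition tables with lsblk:

    lsblk /dev/sda /dev/sdd /dev/sde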

    If I do another listing of the /dev directory I can see the new drives have the partitions. I’m now ready to add the disks back to the array:

    mdadm --add /dev/md0 /dev/sdd2
    mdadm --add /dev/md1 /dev/sdd3
    mdadm --add-spare /dev/md0 /dev/sde2
    mdadm --add-spare /dev/md1 /dev/sde3

    I then check the status of the array to make sure it is rebuilding:

    mdadm -D /dev/md0
    mdadm -D /dev/md1

    The system showed it was rebuilding the arrays, and at the current rate it was going to take about a day.
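
    A convenient way to keep an eye on the rebuild without re-running mdadm -D is to watch /proc/mdstat, which shows the progress and an estimated finish time:

    watch -n 60 cat /proc/mdstat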

    The next day I went to check the status, and lo and behold, disk 5 (sde) had failed and was no longer reporting in. I had been shipped a bad disk. So I contacted NewEgg and they sent out a replacement as soon as I returned the failed disk. Luckily it was the hot spare, so removing it and adding it back had no impact on the system, but I did run the following commands to remove the spare from the arrays and then re-scanned the bus so that the disk was fully removed from the server (note that a plain sudo echo with a shell redirect doesn’t work here, because the redirect happens as your own user, so I pipe the echo through sudo tee):

    sudo mdadm --remove /dev/md0 /dev/sde2
    sudo mdadm --remove /dev/md1 /dev/sde3
    echo "0 0 0" | sudo tee /sys/class/scsi_host/host0/scan
    echo "0 0 0" | sudo tee /sys/class/scsi_host/host1/scan
    echo "0 0 0" | sudo tee /sys/class/scsi_host/host2/scan
    echo "0 0 0" | sudo tee /sys/class/scsi_host/host3/scan
    sudo mdadm -D /dev/md0
    sudo mdadm -D /dev/md1

    mdadm reported that there was no longer a spare available, and the listing of the /dev directory no longer showed /dev/sde. A week later, I got my new spare from NewEgg, installed it, and ran the following:

    sudo -i
    echo "0 0 0" >/sys/class/scsi_host/host0/scan
    echo "0 0 0" >/sys/class/scsi_host/host1/scan
    echo "0 0 0" >/sys/class/scsi_host/host2/scan
    echo "0 0 0" >/sys/class/scsi_host/host3/scan
    ls /dev
    sfdisk /dev/sde < partitions.txt
    ls /dev
    mdadm --add-spare /dev/md0 /dev/sde2
    mdadm --add-spare /dev/md1 /dev/sde3
    mdadm -D /dev/md0
    mdadm -D /dev/md1

    This added the disk and then added it as a hot spare for the arrays. Since it’s a hot spare, it does not need to resync.

    And there you have it, how to replace the disks in a MD RAID on Ubuntu.

  • Growing Ubuntu LVM After Install

    Hello everyone. I hope you have all been well.

    I have a new blog entry on something I just noticed today.

    So I typically don’t use LVM in my Linux virtual machines, mainly because I have had some issues in the past trying to migrate VMs from one hypervisor type to another, for example, VMware to KVM or vice versa. I have found that if I use LVM, I get mapping issues and it takes some work to get the VMs working again after converting the raw disk image from vmdk to qcow2 or vice versa.

    However, since I don’t plan on doing that anymore (I’m sticking with KVM/Qemu for the time being) I have looked at using LVM again since I like how easy it is to grow the volume if I have to in the future. While growing a disk image is fairly easy, trying to grow a /dev/vda or /dev/sda is a little cumbersome, usually requiring me to boot my VM with a tool like PMagic or even the Ubuntu install media and using gparted to manipulate the size and then rebooting back into the VM after successfully growing it.

    With LVM, this is much simpler: three commands and I’m done, with no reboot needed. Those commands:

    • pvdisplay
    • lvextend
    • resize2fs

    Now, one thing I have noticed after a fresh install of Ubuntu Server 22.04.2 using LVM is that the installer doesn’t use all of the available disk space. I noticed this after the install when I ran df -h and saw that my / filesystem was at 32% used. I built the VM with a 50G hard drive, yet df was only seeing 23GB. I then ran

    sudo pvdisplay

    Sure enough, the device was 46GB in size. I then ran

    sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv

    This command extended my logical volume to use the remaining free space. Next, I grew the file system to use the new space:

    sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv

    I then ran df -h again, and lo and behold, my / filesystem now shows 46GB in size and 16% used instead of 32%.
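
    As a side note, lvextend can do both steps in one shot if you pass it the --resizefs flag, which calls the matching filesystem resize tool for you:

    sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv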

    I hope this helps anyone else!

  • Building ONIE with DUE

    Howdy everyone, been a while since I’ve had a post but this one is long overdue.

    I’m still working in networking, and every once in a while I need to update the ONIE software on a switch, or even create a KVM version for GNS3 so that I can test the latest versions of NOSes.

    Well, a lot has changed and improved since I last had to do this. ONIE now has a build environment using DUE, or Dedicated User Environment. Cumulus made this, and it is in the APT repos for Ubuntu and Debian. It makes building much easier, since trying to build a build machine with today’s procedure from OCP’s GitHub repo is 100% broken and doesn’t work. It still asks for Debian 9, and most of the servers hosting its packages have been retired since Debian 9 went EOL. I tried with Debian 10, only to find packages that were not supported. So I found out about DUE, had issues with that as well, but after much searching and reading I finally found a way to build ONIE images successfully and consistently.

    Just a slight Caution: At the rate of change with ONIE, this procedure can change again. I will either update this blog or create a new one when necessary.

    So, let’s get to building!

    The first thing I did was install Docker and DUE on my Ubuntu 22.04.4 server:

    sudo apt update
    sudo apt install docker.io
    sudo usermod -aG docker $USER
    logout

    I then logged back in to the server so that my new group membership took effect, and installed DUE:

    sudo apt update
    sudo apt install due
    

    I then installed the ONIE DUE environment for Debian 10. From my research this one is the most stable and worked the best for me:

    due --create --from debian:10 --description "ONIE Build Debian 10" --name onie-build-debian-10 \
    --prompt ONIE-10 --tag onie --use-template onie

    This downloads and sets up the build environment for ONIE based on Cumulus’s best practices. Once this process is complete, we get into the environment with the following command:

    due --run -i due-onie-build-debian-10:onie --dockerarg --privileged

    You are now in a Docker container running Debian 10 that has the prerequisites for building ONIE already installed. Now we need to clone the ONIE repo from GitHub and tweak some minor settings to make sure the build goes smoothly.

    mkdir src
    cd src
    git clone https://github.com/opencomputeproject/onie.git

    I then update the git global config to include my email address and name so that during the building process when it grabs other repos to build, it doesn’t choke out and die and tell me to do it later:

     git config --global user.email "wililupy@lucaswilliams.net"
     git config --global user.name "Lucas Williams"

    So, I am building a KVM instance of ONIE for testing in GNS3. The first thing I need to do is build the signing keys:

    cd onie/build-config/
    make signing-keys-install MACHINE=kvm_x86_64
    make -j4 MACHINE=kvm_x86_64 shim-self-sign
    make -j4 MACHINE=kvm_x86_64 shim
    make -j4 MACHINE=kvm_x86_64 shim-self-sign
    make -j4 MACHINE=kvm_x86_64 shim

    I had to run shim-self-sign again after the shim build to create self-signed shims, and then run shim again to install the signed shims into the correct directory so that the ONIE build would get past the missing shim files.

    Now we are ready to actually build the KVM ONIE image.

     make -j4 MACHINE=kvm_x86_64 all

    Now, I’m not sure if this is a bug or what, but I actually had to re-run the previous command about 10 times, because each run stopped before the build had actually completed. I would just press the Up arrow key to re-run the previous command, and I did this until I got the following output:

    Added to ISO image: directory '/'='/home/wililupy/src/onie/build/kvm_x86_64-r0/recovery/iso-sysroot'
    Created: /home/wililupy/src/onie/build/images/onie-updater-x86_64-kvm_x86_64-r0
    === Finished making onie-x86_64-kvm_x86_64-r0 master-06121636-dirty ===
    $

    I then ran ls ../build/images to verify that my recovery ISO file was there:

    $ ls ../build/images
    kvm_x86_64-r0.initrd       kvm_x86_64-r0.vmlinuz.unsigned
    kvm_x86_64-r0.initrd.sig   onie-recovery-x86_64-kvm_x86_64-r0.iso
    kvm_x86_64-r0.vmlinuz      onie-updater-x86_64-kvm_x86_64-r0
    kvm_x86_64-r0.vmlinuz.sig
    $

    I then logged out of the DUE environment, and my ISO was in my home directory at src/onie/build/images/onie-recovery-x86_64-kvm_x86_64-r0.iso. From there I was able to upload it to my GNS3 server, create a new ONIE template, map the ISO as the CD-ROM, and create a blank qcow2 hard disk image so the recovery ISO could install ONIE onto it for use in GNS3.
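
    If you want to smoke-test the recovery ISO locally before uploading it to GNS3, plain QEMU works as well (a sketch; the disk size and memory are arbitrary):

    qemu-img create -f qcow2 onie-test.qcow2 8G
    qemu-system-x86_64 -m 2048 -enable-kvm -cdrom onie-recovery-x86_64-kvm_x86_64-r0.iso -hda onie-test.qcow2 -boot d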

    One thing to note is that this procedure is for building the KVM version of ONIE. To build for other platforms, just change the MACHINE= variable to whatever platform you are building for.

    Good luck and let me know in the comments if this worked for you.

  • NVMe over TCP setup using BE Networks Verity

    Hello everyone. It’s been a while, almost a year. I have a new blog post that I wrote for the company I work for, BE Networks. The link is below. Please enjoy, share, and comment.

    Thanks!

    How to Build NVMe over TCP Storage Networks with BE Networks Verity

  • Emptying Zimbra mailbox from the Command Line

    Hello everyone. I hope you are all doing well and staying safe!

    I wanted to document this procedure for clearing out an email box in Zimbra. I recently had to update my Zimbra mail server and noticed that my admin account was strangely full: over 200,000 messages in the inbox. Looking at them, they turned out to be storage alerts saying that the core snap on my Ubuntu server was out of disk space. This is normal for snaps, since they are read-only SquashFS file systems for the applications they run and that is how they are designed, but the number of alerts was quite amazing.

    Since I’m not using snaps on this system, I removed the core snap and all of its revisions, and then removed snapd from the system so that the alerts would stop. I did this by doing the following:

    $ sudo snap list --all

    This listed all the snaps and revisions on my mail server. I then noted the revision numbers and removed all the disabled revisions of the core snap by running the following:

    $ sudo snap remove --revision=xxx core

    where xxx is the revision number of the snap. I ran this twice, since snapd keeps the previous two revisions by default. I then removed snapd from the system so that it wouldn’t pull the core snap back down with updates:

    $ sudo apt purge snapd

    After this ran, I ran df -h to verify that /dev/loop2, which is where core was mounted on my system, was no longer mounted, and it wasn’t. Since I don’t plan on using snaps on this system, this is no loss.
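
    As a side note, if there are a lot of disabled core revisions to clean out, a small loop saves typing each revision number by hand (a sketch that parses the snap list output, which can vary slightly between snapd versions):

    for rev in $(snap list --all | awk '/^core / && /disabled/ {print $3}'); do
        sudo snap remove --revision="$rev" core
    done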

    Next, I needed to delete the over 200,000 alerts in the admin account. I tried to use the web UI to do this, but it was taking forever. After some Google searching and reading the Zimbra documents, I found out about the command zmmailbox.

    Since I didn’t care about any of the email in the mailbox, I was ready to just delete the entire contents. Use the following commands to do it:

    $ ssh mailhost.example.net
    $ sudo su - zimbra
    $ zmmailbox
    mbox> adminAuthenticate -u https://mailhost.example.net:7071 admin@example.net adminpassword
    mbox> selectMailbox admin@example.net
    mbox admin@example.net> emptyFolder /Inbox
    mbox admin@example.net> emptyFolder /Trash
    mbox admin@example.net> exit
    $ exit

    It took a little while after the emptyFolder command but it cleared out the inbox and trash folders.

    Let me know if this helps you.

  • Minecraft Server for Ubuntu 20.04.2

    Hello everyone. I hope you are all doing well. I am writing this blog entry because I created a Minecraft server for my kids some time ago, but I had a hardware failure in the system and never replaced it. At the time, it was no big deal since the boys decided that they were done with Minecraft. But lately, with this new version of Minecraft, they have gotten back into it, and they wanted to have a shared sandbox that they can play with their friends on.

    So, I rebuilt their Minecraft server, but this time on Ubuntu 20.04 instead of 16.04. It was pretty straightforward and not much has changed in the way of doing this, but this is here for those of you who want to deploy your own Minecraft server.

    NOTE: This will only work for the Java version of Minecraft. If you are using the Windows 10 version or the one on Xbox or Switch, you will not be able to connect to this server.

    So, the first thing you need is a clean installation of Ubuntu 20.04.2 Server. The system specs should be at least 4GB of RAM and 2 CPU Cores and 80GB of Storage. After you install Ubuntu, do the normal first boot practices, update, upgrade, reboot if required, etc.

    sudo apt update && sudo apt upgrade -y

    Once that is completed, you need to install a couple things on top.

    One thing I like is the MCRcon tool from Tiiffi. I use this to do backups and statistics of my server, and it is really easy to use, and it’s super small. So I install the Build-Essential package as well as git. Minecraft also leverages Java, so I install the Open Java Development Kit packages with headless mode:

    sudo apt install git build-essential openjdk-11-jre-headless

    Once that is completed, I then create a minecraft user so that when I run this as a service, it is a lot more secure, and I have a dedicated location to keep all the Minecraft files.

    sudo useradd -m -r -U -d /opt/minecraft -s /bin/bash minecraft

    This creates the Minecraft user with the home directory in /opt/minecraft. This also doesn’t create a password for this account so we don’t have to worry about someone gaining access to our system with this account. You can only access this account via sudo su - minecraft with your local admin account.

    Now, we need to switch to the minecraft user and run the following:

    sudo su - minecraft
    mkdir -p {server,tools,backups}
    git clone https://github.com/Tiiffi/mcrcon.git ~/tools/mcrcon
    cd ~/tools/mcrcon
    make
    

    This creates the required directories for Minecraft and downloads and builds the MCRcon tool. You can verify that the MCRcon tool built successfully by running the command:

    ~/tools/mcrcon/mcrcon -v

    You will get the following output:

    mcrcon 0.7.1 (built: Mar 26 2021 22:34:02) - https://github.com/Tiiffi/mcrcon
     Bug reports:
         tiiffi+mcrcon at gmail
         https://github.com/Tiiffi/mcrcon/issues/

    Now, we get to installing the Minecraft Server Java file.

    First, we need to download the server.jar file from Minecraft. You can go here to download the file, or do what I did: go to the link, right-click the download link and select ‘Copy Link Address’, then paste it into the terminal on the server and use wget to download it.

    wget https://launcher.mojang.com/v1/objects/1b557e7b033b583cd9f66746b7a9ab1ec1673ced/server.jar -P ~/server 

    Now, we need to run the Minecraft server. It will fail on the first run because we need to accept the EULA. We also need to modify the server.properties file since the first run creates these files:

    cd ~/server
    java -Xmx1024M -Xms1024M -jar server.jar nogui

    After the program fails to start, we need to modify the eula.txt file and change the eula=false at the end of the file to eula=true. Save this file and exit.
    Next, we need to enable RCON in Minecraft. Open the server.properties file and search for the following variables, and change them accordingly:

    rcon.port=25575
    rcon.password=PassW0rd
    enable-rcon=true

    Also, while you are in this file, you can make any other changes that you want to the server, such as the server name, listening port for the server, the MOTD, etc. Also, choose a complex password so that not just anyone can remote control your server.

    Now, I like to have this run as a service using systemd. To do this, create a service unit. First, exit the minecraft user by typing exit to get back to your local admin shell. Then run the following:

    sudo vi /etc/systemd/system/minecraft.service

    Paste the following in the file:

    [Unit]
    Description=Minecraft Server
    After=network.target
    
    [Service]
    User=minecraft
    Nice=1
    KillMode=none
    SuccessExitStatus=0 1
    ProtectHome=true
    ProtectSystem=full
    PrivateDevices=true
    NoNewPrivileges=true
    WorkingDirectory=/opt/minecraft/server
    ExecStart=/usr/bin/java -Xmx2G -Xms2G -jar server.jar nogui
    ExecStop=/opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p PassW0rd stop
    
    [Install]
    WantedBy=multi-user.target

    Save the document. Next, run

    sudo systemctl daemon-reload

    This will refresh systemd with the new minecraft.service.

    Now, you can start the minecraft service:

    sudo systemctl start minecraft

    To get it to start on reboots, execute the following:

    sudo systemctl enable minecraft

    The last thing we have to do is create the backup job for your server. This uses the MCRcon tool and crontab, and also cleans up old backups.

    Switch back to the Minecraft user and perform the following:

    sudo su - minecraft
    vi ~/tools/backup.sh

    Paste the following script into the new file you are creating:

    #!/bin/bash
     function rcon {
       /opt/minecraft/tools/mcrcon/mcrcon -H 127.0.0.1 -P 25575 -p PassW0rd "$1"
     }
     rcon "save-off"
     rcon "save-all"
     tar -cvpzf /opt/minecraft/backups/server-$(date +%F-%H-%M).tar.gz /opt/minecraft/server
     rcon "save-on"
     # Delete older backups
     find /opt/minecraft/backups/ -type f -mtime +7 -name '*.gz' -delete
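
    Make the script executable so cron can run it:

    chmod +x ~/tools/backup.sh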

    Now, create a crontab to run the backup:

    crontab -e
    0 0 * * * /opt/minecraft/tools/backup.sh

    Now exit as the Minecraft user and return as the local admin. Lastly, because I leverage UFW for my firewall, I need to open the port to the world so that people can connect to it. I do that with the following commands:

    sudo ufw allow from 10.1.10.0/24 to any port 25575
    sudo ufw allow 25565/tcp
    

    This allows the Remote console to be accessed only by my internal network, and allows the Minecraft game to be accessed by the outside world.

    Now, you are ready to connect your Minecraft clients to your server and have some fun!

    Let me know if this guide worked for you or if you have any questions or comments, please leave them below.

  • Installing Ubuntu 20.04.2 on Macbook Pro Mid 2012

    Hello everyone. Been a while and I have a new blog entry so that I don’t forget how to do this if I ever have to do it again.

    I got my girlfriend a new Macbook Pro M1 for Hanukkah and she gave me her old one (a Macbook Pro Mid 2012, or 14,1). I was going to update it to macOS 11, but found out the hardware didn’t support it, so I figured I would breathe new life into it by installing Ubuntu. This proved to be harder than I expected, but if you keep reading, I’ll tell you how I finally did it. (I’m actually writing this blog from the laptop running Ubuntu.)

    So, the installation was pretty straightforward. I burned Ubuntu 20.04.2 to a DVD (from https://releases.ubuntu.com), inserted the DVD in the drive, and booted the Mac while holding down the “Option” key, then selected the first EFI partition and pressed the Up arrow to boot from it. It booted right into Ubuntu, no problem.

    I managed to install Ubuntu, and everything went smoothly. After installation, I noticed a weird error about MOK and EFI. I found out that Mac’s EFI wants a signed OS. To fix this, all I did was:

    sudo su -
    cd /boot/efi/EFI/ubuntu
    cp grubx64.efi shimx64.efi

    This will clear the black screen and error when booting.

    Next, I ran sudo apt update && sudo apt upgrade -y to make sure I had all the updates on my laptop.

    With the 20.04.2 release of Ubuntu, everything works out of the box on the Mid 2012 MacBook Pro. If you run into any issues during the installation, leave a comment and I will try to help.

    Leave a comment if it helps.

  • Installing Jitsi Meet on Ubuntu 20.04

    Hello everyone! It’s been a while since I updated my blog. I hope you all are staying safe and healthy.

    I decided that I would write a blog about how I built my own video conferencing server during this whole outbreak with COVID and having to social distance and stay home.

    My family is all over the country, and with travel and get-togethers not being possible, I figured I would reach out and try to video conference with my family. However, we found that not all of us have iPhones or Androids, laptops, or even computers running the same OS. Plus, we are all Zoom’d out after work, so we didn’t want to use Zoom. While taking a college class, I found out about Jitsi and decided I would try to create my own hosted video conference server. The thing I liked about Jitsi is that it has its own web client, so you can host and join meetings directly from the server using any web browser on any OS. It also has Apple App Store and Google Play Store apps so you can connect that way; however, I had issues with the Google Play version of the app connecting to my server. The problem turned out to be certificates: the Google version of the app did not trust the SSL certificates on my server. I will detail further down what I did to fix this issue.

    This blog will detail how I did it using Ubuntu 20.04 as well as securing the server down so that not just anyone can use it and host video conferences.

    The first thing you need is a spare server capable of hosting the video conferencing software and the number of users you want per conference. There are many forum discussions about how to size your server, but what I used for mine is a 4-core CPU, 8GB of RAM, and 80GB of storage. It has a 1Gb NIC connected to my external network pool so that it is accessible directly on the Internet. I have had over 15 people conferencing at a time and it never went above 40% CPU utilization and never maxed out the network, and the experience was perfect. You can adjust as you see fit.

    First, install Ubuntu 20.04.1 on the server. I use the live server ISO, configure the server and SSH, and install my SSH keys. I disable SSH password authentication since I use keys only. I don’t install any snaps since I don’t need them on this server. Once the OS installation is complete, reboot the server and log in.

    Next, I update all the repos and packages to make sure my system is fully updated:

    $ sudo apt update && sudo apt upgrade -y

    Next, I setup UFW to secure the server so that it is protected from the outside:

    $ sudo ufw allow from xxx.xxx.xxx.xxx/24 to any port 22
    $ sudo ufw enable

    xxx.xxx.xxx.xxx is my internal network.

    Next, I copy my SSL certificates and SSL keys to the server. I use the default locations in /etc/ssl/ to store my keys and certificates. I put the key in private/ and the certificates in certs/.

    Now, before we can install Jitsi, I needed to make sure my hostname and /etc/hosts were configured for Jitsi to work correctly. I set the FQDN for my server using hostnamectl:

    $ sudo hostnamectl set-hostname meet.domain.name

    You can verify that it took by running hostname at the prompt; it should return the name you just set.

    Next you have to modify the /etc/hosts file and put the FQDN of your server in place of the localhost entry.
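
    For example, the relevant line in /etc/hosts ends up looking something like this (the IP address will be whatever your distribution already has there, often 127.0.1.1):

    127.0.1.1 meet.domain.name meet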

    Now, I create the firewall rules for Jitsi.

    $ sudo ufw allow 80/tcp
    $ sudo ufw allow 443/tcp
    $ sudo ufw allow 4443/tcp
    $ sudo ufw allow 10000/udp

    Now we are ready to install Jitsi. Luckily, it has a repo that we can use, but we have to have the system trust it, so first we have to download the jitsi gpg key using wget:

    $ wget https://download.jitsi.org/jitsi-key.gpg.key
    $ sudo apt-key add jitsi-key.gpg.key 
    $ rm jitsi-key.gpg.key

    Now we create the repo source list to download Jitsi:

    $ sudo vi /etc/apt/sources.list.d/jitsi-stable.list

    Press the i key to enter insert mode and add the following line:

    deb https://download.jitsi.org stable/

    Press the <esc> key to leave insert mode and then type :wq to save and quit vi.

    Now, run sudo apt update to refresh the repos on your system and then you are ready to install Jitsi by running:

    $ sudo apt install jitsi-meet

    You will be brought to a prompt asking for the server’s name; enter the FQDN of your server here. Next you will be asked about certificates. Select “I want to use my own certificates” and enter the paths to your certificate and key.

    That’s all it takes to install Jitsi. You now have a server that people can connect to in order to create and join video conferences. However, I don’t want just anyone to be able to create conference rooms on my server, so I locked it down by modifying some of the configuration files.

    The first configuration file we need to modify is /etc/prosody/conf.avail/meet.domain.name.cfg.lua. This file controls whether Jitsi allows anonymous room creation or requires authentication. Open the file in vi and find this line:

    authentication = "anonymous" 

    and change it to:

    authentication = "internal_plain"

    Then, go all the way to the bottom of the file and add the following line:

    VirtualHost "guest.meet.domain.name"
         authentication = "anonymous"
         c2s_require_encryption = false

    Save the file and exit. These settings make it so that only someone authenticated in Jitsi can create a room, but guests are allowed to join the room once it is created.

    Next we need to modify the /etc/jitsi/meet/meet.domain.name-config.js file. Edit it and uncomment the following line:

    // anonymousdomain: 'guest.meet.domain.name',

    You uncomment it by removing the // from the front of the line. Save the file and quit vi.

    The last file we have to modify is the /etc/jitsi/jicofo/sip-communicator.properties file. Go all the way to the bottom of the file and add the following line:

    org.jitsi.jicofo.auth.URL=XMPP:meet.domain.name

    Now you are ready to add users to the system that you want to have the permissions to create rooms on the server. You will use the prosodyctl command to do this:

    $ sudo prosodyctl register <username> meet.domain.name <password> 

    You can do this for as many users as you want.
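
    If you later need to take away someone’s ability to create rooms, prosodyctl can remove the account again:

    $ sudo prosodyctl deluser <username>@meet.domain.name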

    Last, restart prosody so that everything you changed takes effect:

    $ sudo systemctl restart prosody

    You can now log in to your meet server by opening a web browser to it and creating a room; you will be prompted to enter the Jitsi ID you just created. It will be <username>@meet.domain.name with the password you set using the prosodyctl command.

    Android Users and Jitsi

    As I mentioned earlier, you can download the Jitsi app from the Apple App Store and the Google Play Store. However, there is an issue with the Android version of the Jitsi app where it only trusts servers hosted on jitsi.org. To get around this with my friends and family, I emailed them my Jitsi certificates and they installed them on their devices. Once they did this, they were able to connect to my Jitsi server using the Android app. iPhone and web users do not have this issue.

    Conclusion

    I hope you liked this blog entry on installing your own video conferencing server. If you have any questions, or just want to leave a comment, leave it below.

    Thanks and Happy Holidays!