Author: wililupy

  • RetroPie on the Intel NUC Hades Canyon

    RetroPie on the Intel NUC Hades Canyon

    Hey everyone! It’s been a while since I wrote a blog, and I figured this would be a good one. With COVID-19, everyone social distancing, and schools closed down, I decided to do a project. My boys and I love retro gaming. We have used RetroPie on Raspberry Pis in the past and loved it, but we wanted to play some more modern games from the Wii, GameCube, or PlayStation 2, and those just can’t be done on the Raspberry Pi version of RetroPie.

    I read a blog about running RetroPie on an Intel NUC, which I followed, and for the most part I got it working right. Some of my lessons learned are in this HowTo. After a while, though, we noticed it just didn’t perform like it does on a PC or laptop with a decent graphics card. So after doing some research I found the Intel NUC gaming line and purchased a Hades Canyon, which I feel will give us the performance we are looking for. Plus it will definitely give us the storage, since PS2 games are HUGE!

    So, without further ado, here is how to get RetroPie working on your Hades Canyon (or regular NUC):

    First thing you’re going to want to do is install Linux on the NUC. Burn the Ubuntu 18.04 Server ISO to a thumbdrive. I downloaded the Live Server version, but you can use the alternate installer. I recommend Server since you don’t need the full Desktop experience, and I can use the space savings for more ROMs!!

    Next, plug it into the front USB port. Connect a keyboard to the other USB port and connect the video and network. Luckily I have a mini switch next to my TV, so I just connected to that. The Hades Canyon and NUC both have WiFi, but I don’t use it.

    Power on the NUC and it will automatically boot off the USB drive. You’ll get the GRUB menu and it will start the Live Installer for Ubuntu. I kept everything default: full disk, no LVM, no encryption, DHCP on the Ethernet settings, and clicked install.

    Setting up the user, I kept this simple and as close to the regular RetroPie settings:

    • User: Retro Pie
    • Server Name: retropie
    • Username: pi
    • Password: raspberry

    I also told it to enable SSH and to download my public SSH keys from Launchpad.net and to allow remote password authentication.

    After a few minutes, the install completed. I unplugged the USB thumbdrive and rebooted the NUC. It restarted into Ubuntu 18.04.4.

    I like to use the HWE kernel for my RetroPie builds, since the newer kernels have new features that RetroPie might make use of. So I do the following:

    sudo apt update
    sudo apt upgrade
    sudo apt install --install-recommends linux-generic-hwe-18.04

    Then reboot the NUC again. Once it comes back up, a quick sanity check is worthwhile; the stock Bionic kernel is 4.15, so the HWE kernel should now report something newer:
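    uname -r

    With that confirmed, you are ready to do the RetroPie installation. This is what I did: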

    SSH into the NUC

    Set it up so that the pi user can execute sudo without a password (makes scripting much easier):

    sudo sed -i -e '$a\pi ALL=(ALL) NOPASSWD:ALL' /etc/sudoers

    Type the pi password and you’re all set. You won’t have to type that password again if you use sudo.
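    If you want to double-check that the change took, here is a quick test (sudo -k drops any cached credentials, so -n has to rely on the NOPASSWD rule):

    sudo -k
    sudo -n true && echo 'passwordless sudo is working'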

    Now to add the universe repo and all of the RetroPie dependencies:

    sudo apt-add-repository universe
    sudo apt update -y
    sudo apt upgrade -y 
    sudo apt install xorg openbox pulseaudio alsa-utils menu \
    libglib2.0-bin python-xdg at-spi2-core dbus-x11 git \
    dialog unzip xmlstarlet --no-install-recommends -y

    Now we need to create an OpenBox autostart script to launch a terminal and start Emulation Station:

    mkdir -p ~/.config/openbox
    echo 'gnome-terminal --full-screen --hide-menubar -- emulationstation' >> ~/.config/openbox/autostart

    Next, create the .xsession file:

    echo 'exec openbox-session' >> ~/.xsession

    Now we need to make it so X11 starts on reboots:

    echo 'if [[ -z $DISPLAY ]] && [[ $(tty) = /dev/tty1 ]]; then' >> ~/.bash_profile
    sed -i '$ a\startx -- -nocursor >/dev/null 2>&1' ~/.bash_profile 
    sed -i '$ a\fi' ~/.bash_profile

    Next, we make it so that the pi user automatically logs in, so that Emulation Station is what you see on the screen:

    sudo mkdir /etc/systemd/system/getty@tty1.service.d
    sudo sh -c 'echo [Service] >> /etc/systemd/system/getty@tty1.service.d/override.conf'
    sudo sed -i '$ a\ExecStart=' /etc/systemd/system/getty@tty1.service.d/override.conf
    sudo sed -i '$ a\ExecStart=-/sbin/agetty --skip-login --noissue --autologin pi %I $TERM' /etc/systemd/system/getty@tty1.service.d/override.conf
    sudo sed -i '$ a\Type=idle' /etc/systemd/system/getty@tty1.service.d/override.conf

    Now we are ready to download RetroPie from the Git Repo and run the installation scripts:

    git clone --depth=1 https://github.com/RetroPie/RetroPie-Setup.git
    sudo RetroPie-Setup/retropie_setup.sh

    This will start the RetroPie installer. Accept the EULA and select Basic Installation. Select Yes to install all packages from Core and Main. The system will then start downloading and building RetroPie directly on the NUC.

    Note: This takes some time. Grab a beverage, some food. I’ll wait.

    Once building and installation is complete, you can reboot the NUC from the menu. However, I have a few more customizations that I do.

    I use an Xbox One Controller with my NUC, so I have to install the driver for it on Ubuntu. To do that, cursor down to Driver, and select the xboxdrv and install from source. It takes about 3 minutes to download and build the driver. When it completes, you are back at the Menu. Select Back from the bottom to go up a level, and do it again to get back to the main menu.

    I also install the Dolphin emulator and the PlayStation 2 emulator, which are found in the experimental section. They can be tricky to set up and get working correctly. To get Dolphin to recognize the Xbox controller correctly, I actually had to change my .bash_profile to enable cursor and window mode so that I could use the mouse on the screen to point and click the settings. Since I had already done that once, I backed up the configuration from the old NUC and just copied it to the new one with ease.

    I also use Dolphin for Wii games, and I have a Dolphin Bar so that I can use my Wii controllers. This was a bit of a bear to set up. First, you need to create a couple of udev rules:

    sudo touch /etc/udev/rules.d/10-local.rules
    sudo vi /etc/udev/rules.d/10-local.rules

    Now paste the following into your rules file:

    #GameCube Controller Adapter
    SUBSYSTEM=="usb", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0337", TAG+="uaccess"
    
    #Wiimotes or DolphinBar
    SUBSYSTEM=="hidraw*", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0306", TAG+="uaccess"
    SUBSYSTEM=="hidraw*", ATTRS{idVendor}=="057e", ATTRS{idProduct}=="0330", TAG+="uaccess"
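    To apply the new rules without rebooting, you can ask udev to reload and re-trigger:

    sudo udevadm control --reload-rules
    sudo udevadm trigger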

    Now you can plug the Dolphin Bar into a USB port and connect your controller. Make sure the Dolphin Bar is in Mode 4 (emulation mode), then start the Dolphin emulator by running /opt/retropie/emulators/dolphin/bin/dolphin-emu and select Controllers. In the middle of the dialog, select “Emulate the Wii’s Bluetooth Adapter” and for the Wii Remotes, select Real Wii Remote. You won’t be able to configure them, but that is OK. Also check Continuous Scanning and save. Restart the NUC and it will work. To verify, start Dolphin again and make sure the Wiimote is connected to the Dolphin Bar; when the game starts, the Wiimote will rumble, letting you know it’s connected.

    Also, PlayStation and PlayStation 2 emulation require BIOS files to work.

    After installing all the Emulators I want, I then go back to the main menu and select Configuration / tools.

    I then configure Samba so that I can access my NUC’s ROMs and upload new ones over the network. After it installs Samba, select Install RetroPie Samba shares.

    And now you are all done. This is where I reboot the device.

    Now we are ready to set up the Xbox One controller. First, go to RetroPie Configuration in Emulation Station and select Bluetooth. This will install the required Bluetooth libraries and binaries. Next, we need to SSH back into the box and make a setting change, because Xbox One controllers don’t use ERTM. Create a bluetooth.conf in /etc/modprobe.d/ and add the following line to the file:

    options bluetooth disable_ertm=Y
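    If you’d rather create the file in one command instead of opening an editor (file name as described above):

    echo 'options bluetooth disable_ertm=Y' | sudo tee /etc/modprobe.d/bluetooth.conf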

    Reboot the NUC again. Now, go back into RetroPie Configuration in Emulation Station and select Bluetooth. Set the Xbox controller to be discoverable by turning it on and holding the small button near the left bumper on top of the controller until the Xbox button flashes quickly. Then in Emulation Station, select Search for controller. After a few moments it will be listed as Xbox Wireless Controller. Select it, then select the first option for connection, and it will connect. Back on the main screen in Emulation Station, press Enter or Start on another controller and select Configure Input. Select Yes, and when it asks you to press ‘A’ on the controller, press A on the Xbox One controller and you can configure it.

    These next steps are just to make the system pretty. I don’t like the default Linux boot text scrolling on my TV, so I use Herb Fargus’s boot themes using Plymouth, and set it to the RetroPie PacMan default:

    sudo apt update
    sudo apt install plymouth plymouth-themes plymouth-x11 -y
    git clone --depth=1 https://github.com/HerbFargus/plymouth-themes.git tempthemes
    sudo cp -r ~/tempthemes/. /usr/share/plymouth/themes/
    rm -rf tempthemes
    sudo update-alternatives --install /usr/share/plymouth/themes/default.plymouth default.plymouth /usr/share/plymouth/themes/retropie-pacman/retropie-pacman.plymouth 10
    sudo update-alternatives --set default.plymouth /usr/share/plymouth/themes/retropie-pacman/retropie-pacman.plymouth
    sudo update-initramfs -u
    sudo cp /etc/default/grub /etc/default/grub.backup
    sudo sed -i -e 's/GRUB_TIMEOUT=10/GRUB_TIMEOUT=2/g' /etc/default/grub
    sudo sed -i -e 's/GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="quiet splash"/g' /etc/default/grub
    sudo update-grub

    This piece hides the last login information before starting OpenBox:

    sudo sed -i -e 's/session optional pam_lastlog.so/#session optional pam_lastlog.so/g' /etc/pam.d/login
    sudo sed -i -e 's/session optional pam_motd.so motd=\/run\/motd.dynamic/#session optional pam_motd.so motd=\/run\/motd.dynamic/g' /etc/pam.d/login
    sudo sed -i -e 's/session optional pam_motd.so noupdate/#session optional pam_motd.so noupdate/g' /etc/pam.d/login
    sudo sed -i -e 's/session optional pam_mail.so standard/#session optional pam_mail.so standard/g' /etc/pam.d/login

    And to hide the terminal in Emulation Station:

    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ use-theme-colors false' ~/.bash_profile 
    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ use-theme-transparency false' ~/.bash_profile 
    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ default-show-menubar false' ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ foreground-color '#FFFFFF'" ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ background-color '#000000'" ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ cursor-blink-mode 'off'" ~/.bash_profile 
    sed -i "1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ scrollbar-policy 'never'" ~/.bash_profile 
    sed -i '1 i\dbus-launch gsettings set org.gnome.Terminal.Legacy.Profile:/org/gnome/terminal/legacy/profiles:/:b1dcc9dd-5262-4d8d-a863-c897e6d979b9/ audible-bell false' ~/.bash_profile 
    cp /etc/xdg/openbox/rc.xml ~/.config/openbox/rc.xml 
    cp ~/.config/openbox/rc.xml ~/.config/openbox/rc.xmlbackup 
    sed -i '/<\/applications>/i \
      <application class="*">\
        <maximized>true</maximized>\
        <decor>no</decor>\
        <layer>below</layer>\
        <iconic>no</iconic>\
        <skip_taskbar>yes</skip_taskbar>\
      </application>' ~/.config/openbox/rc.xml

    And lastly, since we don’t need cloud-init on this box, let’s just remove it:

    sudo apt purge cloud-init -y
    sudo rm -rf /etc/cloud/
    sudo rm -rf /var/lib/cloud/

    And you’re done. You may need to do some tweaks with the graphics to get good performance. I found that running at 4K tends to slow the games and audio down to the point of being unplayable, but in 1080p mode they work much better. When a RetroArch game is starting and it asks you to press a button to configure, press the A button and select the emulator resolution from the list. Find the one that works best for you.

    Happy Retro Gaming!

  • Manually Migrating VM’s from one KVM host to another

    Hello everyone! It’s been a while since I posted a blog. This one was a doozy. I tried to find this information online and there was a ton to peruse. Luckily, I was able to piece a few sources together to finally get it working the way I needed for my environment.

    So, this is what I was doing: I needed to retire a KVM host, but it was running a couple of VMs that I couldn’t migrate using virt-manager or virsh migrate, so I decided I would try to just move the qcow2 files and rebuild the VMs from scratch.

    That did not work at all.

    So, after researching some solutions, I finally have one that works, and I’m going to share it with you now.

    NOTE: I shared my public ssh keys between the hosts so I don’t need to type passwords in when ssh’ing and scp’ing between the hosts.

    First, power off the VM on the original host if it is running:

    virsh shutdown <vm name>
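    You can confirm the VM is fully shut off before touching its disk file:

    virsh list --all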

    Now, I had to chown the storage file since I don’t enable root on any of my systems and I needed to scp the file from one host to the new host:

    sudo chown wililupy:wililupy server.qcow2

    I could then scp it to the server where I keep my vm storage.

    NOTE: The path for the storage of my VM’s is the same on both hosts, if it is different for you, you are going to have to make some modifications to the xml file that is part of the next step.

    scp server.qcow2 host2:/data/vms

    I then ssh into the new host and chown the file back to root:root.

    Back on the first host machine, I execute:

    virsh dumpxml server > ~/server.xml
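    It’s worth peeking at the dump to see where the XML expects the disk to live; this is where you’d spot a path mismatch between hosts:

    grep 'source file' ~/server.xml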

    I then change to root using sudo and copy the NVRAM file for my host since I use UEFI for my VM’s:

    sudo -s
    cd /var/lib/libvirt/qemu/nvram
    cp server_VARS.fd ~wililupy
    exit
    cd ~
    sudo chown wililupy:wililupy server_VARS.fd

    I then scp’d the server_VARS.fd and the server.xml files to the new host:

    scp server_VARS.fd server.xml host2:~

    I then ssh’d to the new host and performed the following:

    virsh define server.xml
    sudo chown root:root server_VARS.fd
    sudo mv server_VARS.fd /var/lib/libvirt/qemu/nvram

    I was then able to start my VM’s on my new host and everything worked perfectly.

    virsh start server

    NOTE: My new host and old host had the same network setup and the same VM storage paths, which made this migration much easier. If yours are different, you will have to modify the xml file to match your new host’s information; otherwise the network will not work or the VM won’t find its storage.
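    For example, if the new host stores VMs somewhere else, a quick edit before running virsh define would look roughly like this (the target path here is made up; substitute your own):

    sed -i 's|/data/vms|/srv/vm-storage|g' server.xml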

    Leave me a comment if this helps or you need some pointers!

  • Deploying Ubuntu Core 18 with MAAS 2.5

    Forward

    Ubuntu Core is a snap-only, lightweight version of Ubuntu. The kernel, the root file system, and the snap daemon are all packaged and operated as snaps, in contrast to the traditional layout of a Linux distribution. Ubuntu Core is designed for IoT devices due to its light weight, transactional updates, and security. Deployment usually happens in the device factory, where the software is “flashed” onto the storage medium.

    MAAS, or Metal-As-A-Service, is a Canonical application that typically runs on a server or Top of Rack (ToR) switch and is used to manage and deploy various operating systems, like Windows, Ubuntu, or Red Hat Linux, onto bare metal servers. It is a way to treat bare metal servers as cloud services, where you can manage them as easily as managing cloud instances.

    With Ubuntu Core 18, cloud-init, which is the provisioning and configuration piece of Ubuntu, comes built-in. MAAS makes use of cloud-init to set up network access, initialize users, copy ssh keys to the device and to set up storage and partitions. 

    This document will explain how to set up MAAS to be able to deploy Ubuntu Core to devices, which devices are compatible with MAAS so that it can be used to deploy and what customizations have to be done to the Ubuntu Core image to have seamless integration with MAAS and the target devices.

    MAAS Setup

    Installing MAAS is fairly easy. There are a few methods of deployment: you can install it from the main repo in Ubuntu Server after the server has been deployed, you can install MAAS during the initial installation of Ubuntu Server, or you can install it via snap packages on any Linux distribution capable of running snaps. This document will cover installing MAAS from the maas/next PPA, which is the beta release of MAAS (2.5), on Ubuntu 18.04.2. More information on other installation methods can be found here.

    Deploy Ubuntu Server

    1. Download the latest version of Ubuntu Server 18.04.2 by clicking here.
    2. Burn the media to a USB or DVD and install Ubuntu Server by following the Installation Guide located here.
    3. Once Ubuntu Server is running, do an update to make sure you have the latest versions of the software:
      sudo apt update && sudo apt upgrade -y
    4. Once this completes, you need to add the maas/next PPA to your apt sources:
      sudo apt-add-repository -yu ppa:maas/next
    5. Now, install MAAS on the server:
      sudo apt install maas -y
    6. Once MAAS finishes installing, you need to configure an admin user account. Use the following command to do this:
      sudo maas createadmin --username=admin
      Enter a password and email address for the admin user, and you are ready to log in to the web UI. However, we have one small step left, since the following steps require the command line.
    7. Now, we need to save the API key to login to the MAAS command line. This is how we will upload the Ubuntu Core image to MAAS so that it can be deployed on devices. Use the following command to save the key:
      sudo maas apikey --username=admin > ~/apikey
      This will save the API key to your home directory as apikey. This will be used later when we actually upload the Ubuntu Core image to the server.

    Configuring MAAS

    1. Now, we need to finish setting up MAAS. Login to the web UI by opening a browser window to:
      http://maas.server.address:5240/MAAS
    2. After you login, you are presented with a screen asking for DNS forwarders. Here you can add your own internal DNS servers, or use the public ones. You can also leave it blank, but I recommend at least entering 8.8.8.8.
      You also need to have at least one image downloaded and installed. By default, MAAS downloads the x86_64 version of Ubuntu Server 18.04. You can add others if you want, but this should suffice for now.
    3. Scroll all the way to the bottom and click Continue.
    4. On the next screen, import any other SSH keys you want MAAS to be able to provision to servers/devices and then click import and then click Go to dashboard to go to the Main MAAS UI.
    5. From the main dashboard, click the Subnets tab. We need to setup DHCP on the network that will be managing the servers and devices.
    6. Click on the VLAN for the subnet you want to manage DHCP for. Click the ‘untagged’ label.
    7. From the Action button, select Provide DHCP and configure the address pool and the gateway for the subnet, then click the Provide DHCP button.
    8. MAAS is configured now to manage and deploy images that are connected to the subnet that you just configured. You can click on the Machines tab and you are ready to start enlisting, commissioning, and deploying Nodes.

    OPTIONAL: Setup MAAS as a Router using UFW

    This step is optional. If your network that is managing the subnet has a router or a policy that routes traffic to the internet or other networks, this step can be bypassed. If you are testing on a Canonical OrangeBox this can be skipped since the mini switch in the box has routing capabilities built in. However, if you are running this in a VM environment or just testing locally on a subnet you created in your home lab, this will help get your clients you deploy with MAAS to get to the internet and work properly. I used UFW since it is built in to Ubuntu and fairly easy to configure, just a lot of rules for the various ports that MAAS uses to do its “magic.”

    Below is a list of ports that MAAS uses:

    Port               Use
    7911/TCP           MAAS
    22/TCP             SSH
    53/TCP and UDP     DNS
    3128/TCP           iSCSI
    8000/TCP           Squid
    5240/TCP and UDP   MAAS
    5247/TCP and UDP   MAAS
    5248/TCP           MAAS
    5249/TCP           MAAS
    5250/TCP           MAAS
    5251/TCP           MAAS
    5252/TCP           MAAS
    5253/TCP           MAAS
    67/UDP             DHCP
    68/UDP             DHCP
    69/UDP             TFTP
    123/UDP            NTP
    5353/UDP           Multicast DNS
    5787/UDP           MAAS

    Now, for the procedure to enable NAT and forwarding in MAAS using UFW.
    NOTE: It is much easier to work with the firewall as root. You can switch to root using the following command:
    sudo -s

    1. Set up the forwarding policy for UFW by modifying /etc/default/ufw and changing the following (see the sed sketch after this list for a non-interactive way to make these edits):
      DEFAULT_FORWARD_POLICY="ACCEPT"
    2. Uncomment net/ipv4/ip_forward=1 in /etc/ufw/sysctl.conf to allow IPv4 forwarding. You can also uncomment the IPv6 equivalent if you need it.
    3. Next, modify the /etc/ufw/before.rules to create the NAT table and the source network and interface by adding the following BEFORE the filter rules:
      # NAT table rules 
      *nat 
      :POSTROUTING ACCEPT [0:0] 
      
      # Forward traffic through the external interface on host 
      -A POSTROUTING -s 172.16.236.0/24 -o ens33 -j MASQUERADE 
      
      # Don't delete the 'COMMIT' line or these NAT table rules
      # won't be applied
      COMMIT 
    4. Now we are ready to create the firewall rules to allow connectivity to MAAS and the various services. Below are the commands to do this:
      ufw allow ssh 
      ufw allow bind9 
      ufw allow ntp 
      ufw allow tftp 
      ufw allow 67:68/udp 
      ufw allow 7911/tcp 
      ufw allow 3128/tcp 
      ufw allow 8000/tcp 
      ufw allow 5240/tcp 
      ufw allow 5240/udp 
      ufw allow 5247:5253/tcp 
      ufw allow 5247:5253/udp 
      ufw allow 5787/udp 
    5. Now we can start the firewall and test:
      ufw enable 
      ufw status
      You will get a list of the firewall rules. You can connect a device to your internal network managed by MAAS, try to ping google.com or 8.8.8.8, and verify that you get a return echo. If that all works, then you have successfully set up MAAS to act as a router for clients on the managed network.
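    The edits in steps 1 and 2 above can also be made non-interactively. A minimal sketch, assuming the stock Ubuntu defaults in those files:

    sed -i 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' /etc/default/ufw
    sed -i 's|#net/ipv4/ip_forward=1|net/ipv4/ip_forward=1|' /etc/ufw/sysctl.conf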

    Ubuntu Core Image Setup

    Ubuntu Core 18 can deploy from MAAS out of the box. However, there is currently a bug in console-conf where, if MAAS manages the device and configures the network, console-conf will fail on the network setup and go into a loop where you cannot configure the device. You can still ssh in and log in using the keys that MAAS installed on the device, but you cannot log in locally on the console of the device, because it does not know it is fully configured.

    There are also some limitations on what you can configure for the device through MAAS. For example, you cannot change the filesystem and partition layout, since the image is basically just dd’d to the device; after installation and first boot, Ubuntu Core resizes the partition to fit the device and creates the partitions needed to boot properly. The only parts of cloud-init that Ubuntu Core uses are the network configuration and user setup: it creates a user named ubuntu on the device, then copies the stored SSH keys of the MAAS user deploying the device onto it for remote login/administration.

    To get around the previously mentioned bug, before we install the image into MAAS, we have to add a file so that it doesn’t run console-conf after first boot. The following procedure goes into this in detail.

    Download Ubuntu Core 18 and upload it to the MAAS Server

    1. Download the latest version of Ubuntu Core 18 from here.
    2. SCP the image to the MAAS server:
      scp ubuntu-core-18-amd64.img.xz maas.server.address:~
    3. SSH to the MAAS server:
      ssh maas.server.address
    4. Login to the MAAS CLI using the APIkey you created previously:
      maas login admin http://localhost:5240/MAAS `cat ~/apikey`
    5. Upload the image to MAAS:
      maas admin boot-resources create name=ubuntu-core/uc18 \
      title="Ubuntu Core 18" filetype=ddxz \ 
      content@=ubuntu-core-18-amd64.img.xz architecture=amd64/generic 
    6. Verify that the image uploaded by going to the Images tab in the MAAS UI; at the bottom, under Custom Images, the image will be listed. You can also verify from the CLI; see the sketch after this list.
    7. You are now ready to deploy Ubuntu Core to a device managed by MAAS.
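    If you prefer the command line for the verification in step 6, listing the boot resources and filtering works too:

    maas admin boot-resources read | grep -i ubuntu-core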

    Implementation

    So now we are ready to deploy Ubuntu Core 18 onto a device. First, make sure that your device is connected to the network, that there is no OS installed on the device, and that it is set up to PXE boot. Make sure that you connect the power to a managed UPS outlet, or, if it has remote power on/off capability, that you have the credentials or settings and that they are compatible with MAAS. For this demo, I used an Intel NUC, which uses Intel AMT for power management. I configured it on the NUC in the BIOS and then entered those settings when I got to that step.

    MAAS has three phases when it acquires new devices. First, the device needs to be powered up, and MAAS will automatically detect a new device on the network when it asks for a DHCP address. MAAS will boot an ephemeral image, probe the device for network connectivity and power management, power off the device if it can, and then add it to the MAAS database. This is called “Enlistment.”

    Once this step is completed, from the MAAS UI, the operator can then “Commission” the device. This boots up an ephemeral image on the device and probes all the hardware, gets more detailed information, and can perform tests on the device and also upgrade firmware if required. Once this step is completed, the device is ready to be Provisioned, or deployed.

    To provision a device, once it is in the Ready state, you can select it from the Machine tab in MAAS, and from the Action button, you select Deploy.

    Below will be the steps to enlist, commission and deploy Ubuntu Core 18.

    Enlistment

    1. Connect the device to the network being managed by MAAS. Make sure it is fully configured to use remote power up and down.
    2. Power on the device. You can either connect to the console of the device or install headless and watch MAAS. Once the device is enlisted, it will show up in the Machine tab.
      1. Doing this sometimes causes the system to not know the power type to use. You may have to configure the power type for the device for remote power-on. Intel NUCs use AMT, but some devices use IPMI. You can also select managed PDUs. If it is a VM, you can use KVM or VMware; VirtualBox is not supported.
    3. When enlistment is finished, the machine will show up with a New status in the Machine tab in MAAS.
    4. If enlistment couldn’t detect the power type, you will have to manually update it. Select the node.
    5. Click on Configuration
    6. From the Power type pull-down, select the power type that the device uses and enter all the applicable information. If it is entered correctly, the machine’s power status will show correctly in MAAS.

    Commissioning

    When the device is in a “New” status, it needs to be commissioned before it can be deployed. From the Machine tab screen, perform the following:

    1. Select the device from the Machine tab.
    2. From the Take action button, select Commission.
    3. You can watch the hardware tests and the commissioning scripts run from the respective tabs in the Machine view in the MAAS UI.
    4. Once all the hardware tests and commission scripts have passed, the machine will be powered off and put in the Ready state.

    Deployment

    The device is now ready to have Ubuntu Core 18 installed. Follow this procedure for deploying Ubuntu Core.

    1. From the Machine tab in the Web UI, select the device you want to deploy Ubuntu Core to.
    2. From the Machine view of the device, you will see various tabs. These tabs let you customize the installation of the device.
    3. Click on the Interfaces tab. This tab lets you configure the network address of the device. To customize it, click the Action (edit) icon next to the interface and select Edit physical.
    4. The Storage tab does not function with Ubuntu Core, so any selection you make will not apply to Ubuntu Core. This is due to how Ubuntu Core is installed on the device. Since it is just dd’d on to the device, it will auto partition on first boot so that Ubuntu Core will work as designed with the writable partition and the boot/EFI partitions.
    5. From the green Take action button, select Deploy
    6. From the Choose your image pull down, select Ubuntu Core.
    7. Make sure that Core 18 is the image that it will install.
    8. Select Deploy Machine
    9. Once deployment is finished, the device will remain powered on and have Core 18 in the status. You can now SSH to the device with the user ubuntu.

    NOTE: Even though MAAS says the device is deployed, if you look at the console of the device, it will be at the first-boot state. This is a bug in console-conf: even though the device is configured, console-conf is not aware of it. If you try to configure the device, it will fail at the network setup with an error.
    If you select Done, you come back to the same screen. If you select Cancel, you end up on the main configuration screen and you can’t move past it. To work around this, we need to let console-conf know that the device is already configured.

    1. SSH to the device.
      ssh ubuntu@device.name
    2. A file needs to be placed in /var/lib/console-conf called complete.
      sudo touch /var/lib/console-conf/complete
    3. Reboot the device. When it comes back online, you will be presented the normal login screen on the console. However, you cannot log in via the console, due to the security model of Ubuntu Core 18.

    Conclusion

    With this document, you can now deploy Ubuntu Core 18 to devices that are managed by MAAS. I hope that this has been helpful and informative. 

  • Using UFW to secure a server

    Using UFW to secure a server

    Hello Everyone! Hope you’re doing well!

    So for the last few weeks, I have been dealing with a DoS attack happening against me. After spending a couple days with Comcast Network Engineers we finally figured out that my mail server was being attacked. Once we disabled the NIC on the server, Internet service would start up immediately and everything appeared to be working properly. Looking at the auth logs, I noticed that port 80 and port 22 were getting bombarded by the DoS attacks.

    Since the Comcast gateway has a poor firewall, I looked into getting an upgraded PAN firewall to meet my demand, but unfortunately, that was going to set me back about $4k. I also looked at Untangle, but ran into configuration issues with the Comcast gateway and my static IPs.

    So, until I save my pennies for the upgraded PAN, I had to use UFW to block the attacks.

    UFW, or Uncomplicated Firewall, is the default Firewall for Ubuntu. Since my mail server is running Ubuntu, I decided to use this. And it is fairly easy to setup and use.

    The first thing I noticed when being attacked was that specific Chinese IP addresses were hitting the server on ports 22 and 80, which are the SSH port and the unencrypted default web server port. So these were going to be the first ones I dealt with. One thing to note is that when you enable UFW, it blocks all incoming traffic by default, which is a good thing, really. However, when you rely on email for your job, blocking it all is not good. So I needed to find out which ports needed to be open to the public so that mail would still work, and which ones could be internal-only so that I could still work on the servers when I needed to.

    First thing I did was look at what ports my server was sharing outside. I did this with the netstat command:

    netstat -an | more

    This command outputs all the interfaces and ports that the server is listening and communicating on. It also tells you who is connected to which service if they have an open session, so this command is pretty important if you want to get into security.
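    On newer Ubuntu releases netstat is deprecated and may not even be installed; ss from iproute2 shows the same listening sockets:

    sudo ss -tulnp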

    To make it easy, I needed imap, pop, smtp, ssh, http, https, and ldap.

    I also needed to know what can be internal only and what needs to be exposed to the public so that my email server can still get email.

    Here is what I came up with:

    Port                    Visibility
    22 (SSH)                Internal
    25 (SMTP)               Public
    443 (HTTPS)             Public
    993 (IMAPS)             Public
    465 (SMTPS)             Public
    587 (SMTP Submission)   Public
    80 (HTTP)               Internal
    110 (POP3)              Internal
    143 (IMAP)              Internal
    389 (LDAP)              Internal
    995 (POP3S)             Public

    So, now that I have the required information, I can create the rules. They are as simple as doing the following command:

    sudo ufw allow from x.x.x.x/24 to any port 22

    This rule allows only my internal IPv4 network to connect to the server on port 22. I did this for all the internal-only ports. I also added the mail server’s external IP address with a /32 so that only the server could talk to itself on the internal ports. That might have been overkill, but better safe than sorry. For my public rules I did the following:

    sudo ufw allow from any to any port 443

    This will also create IPv6 rules as well.
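    Putting the whole table together, the rule set looks roughly like this. This is a sketch assuming a 192.168.1.0/24 internal network; substitute your own ranges:

    sudo ufw allow from 192.168.1.0/24 to any port 22,80,110,143,389 proto tcp
    sudo ufw allow 25,443,465,587,993,995/tcp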

    If you accidentally create a rule and it isn’t working properly, you can remove it by first looking up its number:

    sudo ufw status numbered

    and then:

    sudo ufw delete [rule #]

    Once everything is done, enable the firewall so that the rules will be applied:

    sudo ufw enable

    If you ever need to stop the firewall, you can disable it with sudo ufw disable, and it will go back to being unsecured.
    I still had to reboot the server after creating the rules and enabling the firewall, since existing sessions were still open. After the reboot, I haven’t had any more issues and email still works. You can look at the syslog to see all the blocks, which is somewhat fulfilling.

    If you have any questions, or if you have any comments, please leave them below!
    Thanks!

  • Setting up Unreal Tournament 2004 Game server on Ubuntu 16.04

    Hey everybody.

    It’s been a while since I wrote here. I figured I would write up a howto on setting up an Unreal Tournament 2004 server. I really love this game. It brings back tons of memories of playing it with my friends on the sub when I was in the Navy.

    My boys have some break time off from school, and they played a little bit back in the day, so I decided to spin up a server so that we could play. I looked for a way to do this online, and couldn’t find anything so I figured I would write something up, so here you go.

    The good thing is that because the game is pretty old now, over 13 years old, it doesn’t really require a lot of CPU, memory, or storage. I deployed a KVM virtual machine with 2 cores, 4GB of RAM, and 20GB of storage running Ubuntu 16.04.3, and got all the updates installed. I then spent the next few hours searching for the UT2004 dedicated server package. I never could find it. Luckily, I had a backup copy, which I have uploaded to this server so you can download it here. You’ll also need the patch, which you can download here.

    I created a directory for the game in /usr/local/games/UT2004 and extracted the .zip here:

    sudo unzip -d /usr/local/games/UT2004 dedicatedserver3339-bonuspack.zip

    Once that was complete, I untarred the patch and had to install it manually, since it extracts into a directory called UT2004-Patch. I went into each subdirectory and moved the files into their respective directories under the UT2004 directory. Once that was complete, you have a system capable of running an Unreal Tournament 2004 server. However, I needed to do a couple more things.
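    If you want to script the patch step, it looks roughly like this (the patch archive name here is a guess; use whatever file you downloaded, and note that cp -r merges the patch directories into the existing ones):

    tar xjf ut2004-lnxpatch3369-2.tar.bz2
    sudo cp -r UT2004-Patch/* /usr/local/games/UT2004/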

    Next, you need to install the libstdc++5 package, which is required for Unreal to run. Run the following command to install it:

    sudo apt install libstdc++5

    First, I decided to enable the web admin. In /usr/local/games/UT2004/System/UT2004.ini, find the [UWeb.WebServer] section and modify it:

    [UWeb.WebServer]
    Applications[0]=xWebAdmin.UTServerAdmin
    ApplicationPaths[0]=/ServerAdmin
    Applications[1]=xWebAdmin.UTImageServer
    ApplicationPaths[1]=/images
    bEnabled=True
    ListenPort=80

    You can change the ListenPort to whatever you want; you just need to change bEnabled=False to bEnabled=True to enable it.

    Next, I decided that I wanted this to run as a systemd service instead of just running in the background with me logged in to the server. Below is my ut2004-server.service file:

    [Unit]
    Description=Unreal 2004 Dedicated Server
    After=network.target
    
    [Service]
    Type=simple
    User=ut2004
    WorkingDirectory=/usr/local/games/UT2004/System
    ExecStart=/usr/local/games/UT2004/System/ucc-bin-linux-amd64 server CTF-BridgeOfFate?game=XGame.xCTFGame?AdminName=admin?AdminPassword=XXXXXXXX ini=UT2004.ini log=server.log -nohomedir
    Restart=on-abort

    Just change the ?AdminPassword= value to what you want. I then copied the file into /lib/systemd/system, ran chmod 644 and chown root:root on the ut2004-server.service file, and now I can control the service with systemctl:

    systemctl start ut2004-server.service starts it, and I can get status with systemctl status ut2004-server.service.
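    One note on the unit file above: as written it has no [Install] section, so systemctl enable has nothing to hook into. If you want the server to start automatically at boot, append this to the service file and then run sudo systemctl enable ut2004-server.service:

    [Install]
    WantedBy=multi-user.target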

    One last thing I did was add the CD key from my copy of the game, since I was getting errors about a missing cdkey. However, I have tested it, and it is not required. The game will still run; you just can’t advertise your server on the Internet or host Internet games without it, which also means your stats won’t work. You used to be able to download a CD key from Epic, but that service is no longer working. I emailed them about this on December 2, 2017 with no reply to date.

    Happy gaming!

  • Deploying Whitebox Switch ONIE images with MAAS

    Hello,

    So I spend a lot of time deploying switches in my lab for my job. I also really like Canonical’s tool for managing infrastructure and bare metal servers called MAAS, or Metal-As-A-Service. It can deploy servers better than just about any other solution I have used in the past, including Red Hat’s Satellite, Microsoft’s Windows Deployment Services (WDS), and Solaris’s Jumpstart server. The thing I particularly like is that it is OS agnostic. Even though it is a Canonical product, it is not restricted to just Ubuntu: I can set up MAAS to deploy any operating system to my bare metal, as long as I have an image for it. So I can deploy Red Hat and Windows as well.

    So I was thinking, how hard would it be to make MAAS deploy ONIE images on bare metal whitebox switches? The answer is: really easy. Since MAAS uses a web backend based on Apache2, it has the default directory structure for Apache2, so I can put the ONIE images for my switches in /var/www/html. Also, because MAAS is the DNS and DHCP server for my managed devices and servers, it is a no-brainer to use it to deploy whitebox switches.

    Typically, when deploying ONIE images onto a whitebox switch, network administrators have a couple of options. They can use a USB thumb drive with the ONIE image burnt onto it, restore via the ONIE Rescue option in the ONIE GRUB boot menu, and then type install_url file:///path/to/onie-installer and let it install. But that is only efficient if you are deploying maybe 1-5 switches. As a Network Engineer, if I have to leave my seat to reset and update my switches, that is unsat. And if I’m carrying my “serial leash” over my shoulder, that is a walk of shame…

    The other option is to use the Network Boot option, which is the default way of deploying a NOS onto a Whitebox switch. This is the automatic option, but it does depend on a couple of things:

    1. The ONIE image is named specifically for the device. For example, a Celestica Redstone XP switch has the default ONIE installer image name of onie-installer-x86_64-cel_rxp_sxp-r0, and if ONIE can’t find that specific image, it keeps falling back, from onie-installer-x86_64-cel_rxp_sxp down to onie-installer-x86_64, until it finds one. Then it checksums that image to make sure it will work on the device, based on machine.conf.
    2. That the DHCP server is also the web server that is hosting the image. Now this is subjective, because you can have the default-url set in your DHCP server to point to the location of the ONIE images.
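    For reference, the default-url mentioned above is DHCP option 114. If you go that route, the ISC dhcpd configuration looks roughly like this (the URL is a placeholder; point it at wherever your images live):

    # ONIE default-url (DHCP option 114); URL is a placeholder
    option default-url code 114 = text;
    option default-url = "http://172.16.0.2/onie-installer-x86_64";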

    As you can see, there are pros and cons to both deployment methods. Now let’s get to why I like MAAS for this.

    1. MAAS is a DHCP, DNS, and Web server all in one pretty package. I can plug my whitebox switch’s management port into the network that is managed by MAAS and set it up as a Device in MAAS so that I know what the IP address will be.
    2. I can put the ONIE image directly on MAAS in the /var/www/html directory and ONIE will automagically pick it up and install it.

    One thing to note, is that I cannot directly manage the switch from MAAS. Meaning that I cannot use MAAS to configure the NIC ports, and I cannot use MAAS to setup local users on the device or use MAAS to deploy an OS from the list of installed images on my MAAS server. Now there are plans that this functionality will come in the future, but it will not be based on ONIE images, and instead be PXE installed and managed by MAAS and specific images that are switch supported. This is outside of the scope of this blog entry, but as soon as they do become available, you can bet I will write a blog entry on how to do that.

    So, to get MAAS to deploy your whitebox switches, these are the steps:

    1. Copy your ONIE installer images to /var/www/html on the MAAS server.
    2. Under the Node tab, there is a Devices option at the top of the Web page, click that and enter the MAC address of the switch, as well as the name you want to give the device and the IP address if you don’t want to have a dynamic address assigned to the switch. I highly recommend that you set a static so that you don’t have to guess what the address is of your switch to manage it in the future.
    3. Power on (ie, plug in) the switch
    4. On the serial console of the switch, watch as the device comes online and starts ONIE, it will by default go into ONIE Install OS and start the install process
    5. When complete, the switch will reboot and the NOS will start up
    6. SSH into the switch via the static IP address that MAAS assigned to it
    7. You’re done.

    So now you can use MAAS to not only manage your servers, but it can deploy your NOS on to your Whitebox switches. You can also use this procedure for upgrading the NOS using ONIE on your Whitebox switches.

    DISCLAIMER: This is not supported by Canonical. If you try this and it doesn’t work, you cannot contact Canonical for support. They do not support ONIE or the NOSes deployed on switches that are not running Ubuntu. This article just shows that you can use MAAS to do this if you wish, so you don’t need a separate server to deploy ONIE images from and can have a one-stop shop for your infrastructure deployments. While this should not impact MAAS functionality or deploying other services through MAAS, you are making changes to the directory structure that are not supported by Canonical.

    I wrote this article because I have had many Network Engineers and Admins ask if they could use MAAS to deploy ONIE images, which yes, you can, but Canonical will not support it since it is not a Canonical supported deployment method.

    If you have any questions, or just want to say “Great article” leave a comment!

    Thanks!

  • SwitchDev in Ubuntu Zesty

    Hello everyone!

    Been a little while since I posted a blog entry. I’m at the OpenStack summit in Boston this week, and I am doing a demo of Mellanox’s SwitchDev running on Ubuntu 17.04, Zesty Zapus.

    By default, we at Ubuntu enable SwitchDev in the Kernel. Also, because Zesty is running the latest stable kernel (4.10) and has the latest updated SystemD, SwitchDev runs out of the box. No need to configure or apply any patches, it just works.

    I am demoing Ubuntu in the Mellanox booth at the OpenStack Summit, running on a Mellanox SN2100 switch with SwitchDev enabled. This demo also shows that because it is running standard Ubuntu Server, you can install other Canonical technologies on it, such as Metal-as-a-Service (MAAS) and Juju. The demo runs MAAS on the switch, managing 10 servers in an Ubuntu Orange Box, and with Juju we deploy OpenStack onto the servers in the Orange Box.

    You can view the Demo by clicking here.

    Let me know in the comments what you think. There is no audio.

  • Converting and Resizing KVM Hard Drives

    Hello everyone! I have been rebuilding my network and servers due to a major outage that I had with my ISP, which we are still working through. During the outage, I had to rebuild my servers, so I lost a lot of my build machines. Luckily, I still had copies on my Mac running VMware Fusion. I don’t normally run them there, so those machines just sit powered down, but if I need to bring them up for anything, I can.

    Well, I have a build machine that was running out there before I brought it into my KVM environment, but it was out of hard drive space and underutilized. This blog post shows how I moved the hard drive to my KVM server, resized it, and got it up and running.

    First thing I did was scp the vmdk file from my Mac to my KVM server:

    scp ~/Documents/Virtual\ Machines.localized/Precise-build.vmwarevm/Virtual\ Disk.vmdk kvm2:/data/VMS/precise-build.vmdk

    After 40 minutes, the vmdk was copied. I then converted it to qcow2:

    qemu-img convert -O qcow2 precise-build.vmdk precise-build.qcow2

    After that finished, I was able to get info on it:

    qemu-img info precise-build.qcow2
    image: precise-build.qcow2
    file format: qcow2
    virtual size: 80G (85899345920 bytes)
    disk size: 73G
    cluster_size: 65536
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false

    I wanted to grow it to 200GB in size:

    qemu-img resize precise-build.qcow2 +120G

    I then got info on it to verify that it grew:

    qemu-img info precise-build.qcow2
    image: precise-build.qcow2
    file format: qcow2
    virtual size: 200G (214748364800 bytes)
    disk size: 75G
    cluster_size: 65536
    Format specific information:
        compat: 1.1
        lazy refcounts: false
        refcount bits: 16
        corrupt: false

    I was now ready to build the VM, which I did with Virt-Manager. I told it to use an existing disk, and then set it up with more memory and processors than before so I could get better performance out of it. I then told it to boot from a CD image of Parted Magic so I could grow the file system. Luckily, this server only had two partitions: the root partition and the swap partition. However, the swap was on an extended partition at the end of the disk, so I had to delete it and the extended partition so I could use parted to extend the file system. I extended it to the end minus 6GB, then created an extended partition at the end, added a swap partition back, saved it, and rebooted. The machine rebooted, ran fsck, and started up normally.
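    If you’d rather skip the Parted Magic GUI, the same growth can be done from any live environment’s shell. A rough sketch, assuming the disk shows up as /dev/vda with the root file system on partition 1 (adjust device names and sizes to yours):

    parted /dev/vda resizepart 1 194GB
    e2fsck -f /dev/vda1
    resize2fs /dev/vda1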

    I was then able to delete the vmdk file from my server to reclaim the 73GB of space it was using:

    rm /data/VMS/precise-build.vmdk

    That’s it. I hope this guide helps you migrate VMs from VMware or even VirtualBox to KVM and grow their file systems.

    Let me know in the comments.

    Thanks!

  • Install Ubuntu-Touch on BQ Aquaris M10 FHD

    Hello everyone! This blog entry is mostly for those of you that want to play with Ubuntu-Touch on the BQ Aquaris M10. You can actually purchase this tablet from BQ directly, but they have been sold out for a while, and I really wanted to have one.

    So, I bought the Android version, which isn’t too different in specs. However, it comes running Android Marshmallow. I played around with it for a day, just because I hadn’t played with Android in a while, and realized a lot has changed since Froyo, which was the last version I used. After the nostalgia wore off, I decided to start installing Ubuntu on my device.

    First thing I did was go to Installing Ubuntu on Devices website. I found all the details of setting up my build machine to handle this.

    First thing I did was install the ppa for the Ubuntu SDK and for the phablet-tools package.

    sudo add-apt-repository ppa:ubuntu-sdk-team/ppa

    Then run sudo apt update to get my repo locations updated to use the ppa.

    I then installed ubuntu-device-flash, which does all the heavy lifting of getting the image onto the device. I also installed adb, the Android Debug Bridge application, which is needed to manage the device and access its internals, and fastboot, which manages the device when it’s in the bootloader.

    sudo apt install ubuntu-device-flash phablet-tools

    After I had all the required tools on my laptop, I was ready to start. First, I had to put my M10 into Developer Mode. To do this, click on System and go to About. Click the Build number seven times; it will start a countdown on the screen saying “Press x times to enable Developer Mode.” Once done, go back and you will see Developer Options on the screen next to About. Select it and enable OEM Unlock Bootloader. It will bring up a prompt asking if you are sure, since this voids the warranty, and from then on every boot warns you that the device is in Orange State and can’t be protected, delaying the boot by five seconds. Select Yes, then enable USB Debugging and turn off Protect ADB APK uploads. That last one probably isn’t necessary, but I did it anyway.

    Now, plug your USB cable into your laptop and your device. You will get a prompt on the tablet asking if you trust this device; check “always trust” and say Yes. You can now use the adb command on the laptop to control the tablet.

    First, check that your laptop sees everything:

    adb devices

    You should get a return of the M10’s serial number and the word device next to it. We are now ready to go into the Bootloader. Do this from adb:

    adb reboot bootloader

    The device will reboot, give you the warning I mentioned above about being unlocked and unprotected, and then show a blank screen with Fastboot Loader on the bottom. This is the fastboot bootloader. We now have to unlock the device.

    Make sure you can communicate with the device with fastboot:

    fastboot devices

    You should get a return of the M10’s serial Number and fastboot on the same line.

    You unlock the device by typing:

    fastboot oem unlock

    You will get a prompt on the device saying Press Volume + to Unlock and Volume – to cancel. Press Volume + on the device and you will get a confirmation that the device is unlocked; on your laptop it will say OKAY and exit. Now we can reboot the device again.

    fastboot reboot

    Now it will start back up in Android, after about 10 minutes. You will have to reconfigure the device; basically I skip everything until I get to the point where I can turn the device off. I turn it off and then turn it back on, but when I press the power button, I also hold down the Volume + button at the same time. This causes the M10 to go into Recovery Mode. Once the screen comes up saying Powered by Android and you get the Unlocked warning again, you can release the power button, but keep holding the Volume + button until you get the Fastboot screen. Verify you can communicate with the device:

    fastboot devices

    You should get the serial number and fastboot on the same line. Now we can install Ubuntu….kind of.

    First, you need to download the recovery image from Ubuntu, since the built-in one on the device does not allow adb. Depending on whether you have the M10 FHD or the regular M10, you need a specific image. Since I was using the FHD, I needed the frieza image. You can download the right one by clicking the appropriate link on this page.

    Run the following to start the process:

    ubuntu-device-flash -v touch \
    --channel=ubuntu-touch/stable/bq-aquaris-pd.en \
    --device=frieza --bootstrap \
    --recovery-image=recovery-frieza.img

    It will download and start copying all the required files to the device. Unfortunately, it will fail: the Android partition layout is way too small for the Ubuntu recovery. So, after it fails, wipe the cache from the device. Next, you will use adb to manage the partitions.

    First, you need to download parted for Android. Luckily, I have a version here you can use. Download it, untar it, and then move it to the /sbin directory on your device:

    tar xf parted-android-32.tgz
    adb push parted /sbin
    adb shell chmod +x /sbin/parted

    Now we are ready to do some “damage” to the device.

    NOTE: A word of caution here. We are going to delete and grow three file systems on the device. Please follow these directions closely and watch out for typos. You don’t want one; otherwise we have to start all over again.

    First, run adb shell. You are now on the console of the device as root. If you run df -h you’ll notice that /cache is out of space, and it’s only a little over 400MB in size. Nowhere near the size we need, since we have a little over 870MB of files to upload before we can install Ubuntu. The other thing you’ll notice once we get into the partitioning is that the /system partition is only 1.5GB in size, and Ubuntu needs at least 4GB for the installation. However, the userdata partition is 15GB, so we are going to steal from there to repurpose space for the other partitions.

    First thing to do is run parted /dev/block/mmcblk0

    Type p to list the partitions; there are 24 of them. We are only concerned with 21, 22, and 23. First change the unit to bytes with unit b, and then run p again to get this readout:

    p
    Model: MMC 016G70 (sd/mmc)
    Disk /dev/block/mmcblk0: 15758000128B
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt

    Number  Start         End           Size          File system  Name       Flags
     1      524288B       3670015B      3145728B                   proinfo
     2      3670016B      8912895B      5242880B                   nvram
     3      8912896B      19398655B     10485760B                  protect1
     4      19398656B     29884415B     10485760B                  protect2
     5      29884416B     80216063B     50331648B                  persist
     6      80216064B     80478207B     262144B                    seccfg
     7      80478208B     80871423B     393216B                    lk
     8      80871424B     97648639B     16777216B                  boot
     9      97648640B     114425855B    16777216B                  recovery
    10      114425856B    120717311B    6291456B                   secro
    11      120717312B    121241599B    524288B                    para
    12      121241600B    129630207B    8388608B                   logo
    13      129630208B    140115967B    10485760B                  expdb
    14      140115968B    141164543B    1048576B                   frp
    15      141164544B    146407423B    5242880B                   tee1
    16      146407424B    151650303B    5242880B                   tee2
    17      151650304B    153747455B    2097152B                   kb
    18      153747456B    155844607B    2097152B                   dkb
    19      155844608B    189399039B    33554432B                  metadata
    20      189399040B    201326591B    11927552B                  custram
    21      201326592B    1811939327B   1610612736B   ext4         system
    22      1811939328B   2256535551B   444596224B    ext4         cache
    23      2256535552B   15616966655B  13360431104B               userdata
    24      15616966656B  15757983231B  141016576B                 flashinfo
    

    Note the end of partition 20 and the start of partition 24; those are the fixed boundaries we have to stay inside. Partition 21 starts at 201326592, which is one byte past the end of partition 20. We need to do the same for each partition we grow so that they line up end-to-end without overlapping and causing problems.

    First we need to delete the three partitions:

    rm 21
    rm 22
    rm 23
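
    Before recreating anything, it's worth running p once more to confirm that partitions 21, 22, and 23 are gone and partition 24 is still intact:

    p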

    Now we are ready to recreate them, only larger. Since we are working in bytes, the numbers are quite large and they have to line up exactly: take the starting byte, add the amount of extra space you want, and make that the end byte; the next partition then starts one byte after the previous one's end. Partition 23 is the exception: it has to end one byte before partition 24 starts, so it ends at 15616966655. (One wrinkle: I started partition 23 at 5570036736 rather than 5570036225, which rounds the start up to the next 512-byte sector boundary.) If you use the values I did, you will end up with a system partition of about 4.3GB, a cache of about 1.07GB, and a userdata of roughly 10GB:

    mkpart primary 201326592 4496294399
    mkpart primary 4496294400 5570036224
    mkpart primary 5570036736 15616966655
    name 21 system
    name 22 cache
    name 23 userdata
    quit
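
    If you want to double-check the byte math before committing to it, a quick host-side shell snippet does the arithmetic (the ranges are inclusive, hence the +1):

    echo $((4496294399 - 201326592 + 1))    # system:   4294967808 bytes (~4.3GB)
    echo $((5570036224 - 4496294400 + 1))   # cache:    1073741825 bytes (~1.07GB)
    echo $((15616966655 - 5570036736 + 1))  # userdata: 10046929920 bytes (~10GB)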

    We now need to format the volumes:

    mke2fs -t ext4 /dev/block/mmcblk0p21
    mke2fs -t ext4 /dev/block/mmcblk0p22
    mke2fs -t ext4 /dev/block/mmcblk0p23

    Now, on the device, use the Volume – button to highlight reboot into bootloader and press the Power button to select it. The device will reboot and you will be brought back to the screen that just says FASTBOOT at the bottom.

    Now we can start the flash again, and this time it will work:

    ubuntu-device-flash -v touch \
    --channel=ubuntu-touch/stable/bq-aquaris-pd.en \
    --device=frieza --bootstrap \
    --recovery-image=recovery-frieza.img

    Once the installation is done, you will have Ubuntu running on your device. It takes about 10 minutes to install, but after the reboot the initial splash screen will show the BQ logo but say powered by Ubuntu, and you won't have the annoying Unlocked Device unprotected alert any more.

    Let me know in the comments if you have any issues! Happy Hacking!

  • Quake3 Arena Dedicated Server on Ubuntu 16.04

    Hello everyone!!

    So I decided to blog this since I haven't seen it documented anywhere else. All the other HowTos explaining how to do this are so outdated that following them would pretty much leave you with an obsolete server. So I decided to write this post for anyone out there who wants to run this really old, but still really cool, game as a dedicated server.

    The reason this came about is that my boys wanted to play online games with me today, specifically on my XBOX One. I wanted to re-live my glory days from when I was my oldest son's age and have a LAN party. Like many of you, they were wondering what that was. Let me enlighten you. Back in the early to mid 90's, before broadband Internet, playing online games required either a dialup connection directly to your friend, or a massive network on a college campus with someone hosting and maintaining the game. Neither worked in the small town I grew up in. So I would host LAN parties at my house. This meant that on Friday night, me and my friends would hang out at my house and play video games. We did this because on a typical Friday night, the girls of our town were too intimidated by our big….brains to want anything to do with us. The jocks were just as intimidated, so to prevent bloodshed, mostly ours, we played video games. We would all gather at my house, jam out and do a mini concert, and then hook all our machines up and play Doom or Quake or Duke Nukem 3D.

    My boys thought this was a great idea, so we decided to do it at my house. I was on my Mac, running Windows 10 in Boot Camp, and my boys were running Ubuntu 16.04. I installed Quake 3 Arena on all the systems because my boys absolutely LOVE this game. Unfortunately it's only on Steam for Windows, so I had to download it there. On my boys' computers I installed it from the Ubuntu Store by searching for Quake 3, then copied all the pk3 files over to them and we were good to go. And it was epic!! It was like all of us were 13 again. We were all hopped up on pizza, beer (me) and Mountain Dew (or as my sons call it, "gaming fuel").

    After we finished, I started to think: I used to host this game about 8 years ago on Hardy (Ubuntu 8.04), so I figured I would try it again. I looked online for an easy HowTo, and all of them were dated and a pain since you needed files from id Software, and it just sucked. So, here we go, the way I did it, super simple and easy to follow.

    First thing I did was install Ubuntu 16.04 on a virtual machine. Update, patch, ready to go. After that, I installed the quake3-server package from the Ubuntu Xenial universe repository:

    sudo apt install quake3-server

    When you install it, it will ask if you want to download the Quake 3 data files. Say no; we'll get to that in a few seconds. After that, I copied all the pk3 files from my commercial version of Quake 3. They were located on my Windows computer at <path where steam is installed>/steamapps/common/Quake 3 Arena/baseq3/

    I copied all these files to my Linux laptop so that I could use them to play Quake 3 there. I put them in the first search path the executable looks in:

    /usr/share/games/quake3/baseq3

    This directory doesn’t exist, so I had to create it:

    sudo mkdir -p /usr/share/games/quake3/baseq3

    I then moved the files there:

    sudo mv ~/*.pk3 /usr/share/games/quake3/baseq3/

    Once this is complete, restart the Quake 3 server:

    sudo systemctl restart quake3-server

    Now, we need to extract some config files for the server. The pak0.pk3 file contains sample configurations for all the game modes, which you can modify for your needs.

    sudo apt install unzip
    sudo unzip /usr/share/games/quake3/baseq3/pak0.pk3 \
    ctf.config ffa.config teamplay.config tourney.config gamecycle.config \
    -d /usr/share/games/quake3/baseq3/
    sudo mv /usr/share/games/quake3/baseq3/*.config /var/games/quake3-server/server.q3a/baseq3/
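
    A quick check that the configs landed where the server expects them:

    ls /var/games/quake3-server/server.q3a/baseq3/*.config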

    Now you need to modify those configs to match what you want. You can get the details of the available server settings with a simple Google search for Quake 3 Arena dedicated server parameters.
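
    As a rough example of what goes in one of these files, here is the sort of thing you might set in ffa.config — these are standard Quake 3 server cvars, but the values are just my suggestions:

    // ffa.config – free-for-all example settings
    seta sv_hostname "LAN Party FFA"   // name shown in the server browser
    seta sv_maxclients 8               // max simultaneous players
    seta fraglimit 20                  // frags needed to win the round
    seta timelimit 15                  // minutes per map
    seta g_gametype 0                  // 0 = free for all
    map q3dm17                         // map to start the server on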

    Once you have everything set, all you need to do is change the main configuration of the system located in /etc/quake3-server/server.cfg.

    sudo vi /etc/quake3-server/server.cfg

    You can either put your settings directly in here, or, what I recommend, find the line with "exec ffa.config" and change it to exec whichever config you want. Save the file and then restart the service:

    sudo systemctl restart quake3-server
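
    If you would rather script that edit than open vi, a one-liner like this works (swapping in ctf.config as an example target):

    sudo sed -i 's/exec ffa.config/exec ctf.config/' /etc/quake3-server/server.cfg
    sudo systemctl restart quake3-server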

    Now you can connect to your server and you’re done.
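
    To connect from a client, drop the console with the ~ key and point it at your server — the IP here is just a placeholder, and 27960 is the default Quake 3 port:

    \connect 192.168.1.50:27960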

    Hope this helps any of you out there. Please leave a comment if it helps or if you have any questions.