I enjoy using Raspberry Pi devices as nice little UniFi servers alongside my Homebridge setup. They sit idle 99% of the time unless you are managing them or pushing firmware updates out to devices. That said, with UniFi depending on MongoDB, we’ve got problems now…

  • MongoDB stopped supporting the Raspberry Pi 4 (and earlier) after 4.4.18, because newer builds target a more recent ARM instruction set than the Pi’s processor provides.
  • MongoDB 4.4.x goes end of life in February 2024.

Until UniFi decides, for other reasons, to move off of MongoDB, we’re going to have to harden these devices so they can keep running. I’m running UniFi 8.0.26 right now using the following configuration, which I applied BEFORE installing UniFi. If you currently have it installed, remove it using apt first.

# Need this as a dependency

sudo apt-get install openjdk-17-jre-headless

# Now we are going to deploy Mongo 4.4

echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list

sudo apt-get update

# Now we install the last version that worked on the Raspberry Pi 4, version 4.4.18

sudo apt-get install mongodb-org-server=4.4.18

# This step is critical, or apt will upgrade to version 4.4.24, which does not work on a Raspberry Pi 4

sudo apt-mark hold mongodb-org-server

From here, you just install UniFi like you normally would.
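Before installing UniFi, it’s worth confirming the pin actually took. A quick sanity check (assuming the packages installed cleanly):

```shell
# Confirm mongodb-org-server is held back from upgrades
apt-mark showhold

# Confirm the installed server version is 4.4.18
mongod --version
```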

Now, the good news: MongoDB listens only on localhost by default, which removes a lot of the risk of running an older version. That said, if something in the web app allows malicious input to pass through to MongoDB, you have a problem again. Consider restricting who can reach the UniFi web interface with a firewall rule on port 8443/TCP to further reduce your risk.
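As a sketch of that firewall rule — this assumes ufw and a 192.168.1.0/24 management LAN, so adjust both to your environment:

```shell
# Allow only the management LAN to reach the UniFi web UI; drop everyone else
sudo ufw allow from 192.168.1.0/24 to any port 8443 proto tcp
sudo ufw deny 8443/tcp
```

ufw evaluates rules in order, so the allow rule must come first.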

For now, that’s what we have to do to keep these servers working. I hope you find this helpful!

Quick fix for the new Google Chrome 87 update. We had a third-party identity service that was still using third-party cookies pointing back to the old domain it split off from, so our users couldn’t log in to retrieve usernames and passwords for the SaaS applications they use day to day.

If you are blocking third-party cookies but want to allow a specific website to use them, change your entry in your Chrome group policy from:




Then gpupdate will push the change to your clients, allowing third-party cookies to work on example.com.

If you need help with Google’s ADMX policy templates, check out their documentation here and go to “Cookies” to see the examples.

One weekend project I had was forcing my network traffic to use DNS over TLS, to provide a little more privacy for me and my family when using the Internet.  I found this article, and while its config is right, it had some minor issues: it took gathering some PCAPs before I could figure out that stunnel was sending traffic out as TLS 1.0, and Quad9 won’t talk to such an old protocol, even with TLS 1.2 widely available.  Check out my GitHub page and you’ll find the readme file explaining how you can alter your BIND9 config to support DNS over TLS in about 15 minutes or less.  In addition, I added a crontab entry to root to start stunnel on reboot.  You could easily use a DNS provider of your choice, but consider giving Quad9 a try, too.
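For reference, the stunnel client config ends up looking roughly like this — the accept port and the Quad9 address are my choices, so adjust as needed. The key line is sslVersion, which stops stunnel from falling back to TLS 1.0:

```
; /etc/stunnel/dns-over-tls.conf (sketch)
[dns-over-tls]
client = yes
accept = 127.0.0.1:5353
connect = 9.9.9.9:853
sslVersion = TLSv1.2
```

BIND9 then forwards queries to 127.0.0.1 port 5353, and stunnel wraps everything to Quad9 in TLS 1.2.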



Working with a client the other day: they set up a Zimbra Open Source server for their team and for hosting other domains. With Zimbra Open Source, there is no formal way to back up mailboxes like their very nice Network Edition offers, or through Zimbra Modules.  The script found on my GitHub site will provide you a solution that:

  • Backs up mailboxes daily to Amazon Glacier
  • Keeps the logs locally so you can delete them at a later time
  • Lets you delete your Glacier backups based on how many days you want to retain

Using S3 as the backup destination — knowing they didn’t need to restore ASAP and this was a disaster-recovery-only solution — made for a very affordable setup that still met their needs. I hope it helps you with yours, too!
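The core of the approach can be sketched in a few lines of shell — the bucket name and temp paths here are placeholders, and this is a simplified outline of the script rather than the script itself:

```shell
# Export each mailbox as a tgz and ship it to S3 with Glacier storage
# (BUCKET is a placeholder -- substitute your own)
BUCKET=s3://your-backup-bucket
DATE=$(date +%F)

for user in $(su - zimbra -c 'zmprov -l gaa'); do
    # zmmailbox's REST export produces a full-mailbox tgz
    su - zimbra -c "zmmailbox -z -m $user getRestURL '//?fmt=tgz'" \
        > "/tmp/${user}-${DATE}.tgz"
    aws s3 cp "/tmp/${user}-${DATE}.tgz" "$BUCKET/$DATE/" --storage-class GLACIER
    rm -f "/tmp/${user}-${DATE}.tgz"
done
```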

So I thought I was a hotshot, borrowing a script from another blog for another issue with our Interworx cluster.  Now, the company I consult for has a unique setup; they use:

  • Interworx with multiple nodes
  • AWS
  • Cloudflare
  • Load balancers not part of Interworx

It’s quite the slick setup… Cloudflare perks, load balancing, and great performance from the Interworx product.  That said, to manage the nodes themselves, we wanted to use Let’s Encrypt certificates.  Below is a script I modified from this blog to serve that purpose, free for you if you’re looking to do the same!


user=[email protected]

key=$(cat /etc/letsencrypt/live/interworxnode.yourdomain.com/privkey.pem)
sslcrt=$(cat /etc/letsencrypt/live/interworxnode.yourdomain.com/fullchain.pem)


nodeworx -u "$user" -o pretty -n -v -c Ssl --ssl_ciphersuite "$cypher" --key "$key" --crt "$sslcrt" --restart_now 1 --services SSL_Apache,SSL_IWorxWeb --action updateall

So, a few things here:

  • You need to create a SiteWorx account just for the domain itself. In my example, interworxnode.yourdomain.com should be created for the sole purpose of generating Let’s Encrypt certs using the standard method in the Interworx SSL config panel.  Create the SSL certs first using Interworx for that domain so they exist before running the script above.
  • Interworx will take care of renewing the certificate through its own active jobs, so you don’t need to do anything for Let’s Encrypt renewals.
  • This script can run as a quarterly cronjob, ensuring a fresh cert gets applied to the admin portal.  I run mine in July, October, January, and April, for example.  The goal of this script is simply to “steal” the SSL certs from the SiteWorx site and reapply them to your admin portal.
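If you want to automate that quarterly run, a root crontab entry along these lines would do it — the script path is a placeholder, so save the script wherever you keep yours:

```
# Run the cert-copy script at 03:00 on the 1st of Jan, Apr, Jul, and Oct
# (/root/update-iworx-ssl.sh is a placeholder path)
0 3 1 1,4,7,10 * /root/update-iworx-ssl.sh
```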

TL;DR: Shut off “Priority enabled and VLAN enabled” in Device Manager and set it to just “Priority Enabled”.

So I switched to VirtualBox 5.1 and so far, so good.  The one issue I was having was running a trunk with an untagged VLAN and a tagged VLAN for my guest. I double-checked my tagging on the switch, I double-checked my tagging in Fedora, but the tagged VLAN never worked in Windows 10 — only the untagged one.  Finally, and I don’t understand why, I went into the Windows 10 network settings for the actual NIC in Device Manager and changed “Priority enabled and VLAN enabled” to just “Priority Enabled”, and it works like a champ.  It appears that the way the Windows 10 driver for the Intel Pro card handles VLANs is incompatible with VirtualBox tagging and untagging packets for you, even though Windows isn’t managing the VLAN functionality.

So if you haven’t tried it yet, I strongly suggest giving Vivaldi a try.  It combines the base code of Chromium, without the Chrome features you may not want, with an interesting user interface.  To me, this is the best setup:

  • Chrome/Chromium underpinnings, so you don’t have website compatibility issues (Think Opera — a good browser, but with odd support for online banking and other strict websites.  Yes, I know you can spoof the user agent, but the fact you have to do that doesn’t help non-power users, IMO.)
  • The user interface is fun, some of my favorites:
    • Mouse Gestures
    • Adaptive user interface
    • Notes

For Fedora users, the Vivaldi RPM from their website installs easily enough, but you can’t play Vines, Twitter video, etc., because you don’t have functional ffmpeg support — nor Flash, if you use SiriusXM or other Flash-only websites.  So, here’s a quick fix for you:

The Long Way

For Vine/Twitter video support, the Vivaldi RPM comes with a libffmpeg.so located in /opt/vivaldi/lib, but it doesn’t have support for H.264/MP4 due to licensing restrictions.  What I did next was fire up a VM with Linux Mint and build chromium-codecs-ffmpeg-extra.  From there, I took libffmpeg.so and copied it to my Fedora install at /opt/vivaldi/lib, after backing up the stock libffmpeg.so.  Double-check that your standard user can read the plugin:

# chmod 644 libffmpeg.so

For Flash Player, I had Chrome installed on the Linux Mint VM, so I just copied the directory /opt/google/chrome/PepperFlash to my Fedora machine and:

# chmod -R a+rX /opt/google/chrome/PepperFlash

(a+rX makes the files world-readable while keeping the directory itself traversable; a recursive 644 would strip the execute bit from the directory.)

Then I restarted Vivaldi and had up-to-date Flash and working H.264/MP4 support.  Because I had the VM, it took me only minutes to build, copy, and run these updates. Still, I’d love a repo from RPM Fusion or someone really trustworthy, but that won’t happen with H.264/MP4, it appears.

The Easy Way

You are welcome to use the plugins I built for libffmpeg.so and Flash — I realize not everyone is as paranoid as I am, or you may just want to run them for testing purposes in a VM, etc.  I do not believe I’m violating any distribution rights from Adobe for Flash Player, since it is the Pepper plugin rather than the Linux 11.2 version.  I’m sure Adobe will let me know otherwise…  Save libffmpeg.so to /opt/vivaldi/lib and libpepflashplayer.so to /opt/google/chrome/PepperFlash, and enjoy!

libffmpeg.so   built on August 12th 2016
SHA1: d6514e2c0a16318d1feaf162ff6e6e035e36972c

libpepflashplayer.so     version
SHA1: 727799f1aba7a98052ec855a81c2b797c6f0025b

So I run Fedora 23 at home, and one of my VMs was running Windows 10, but the sound was awful — a horrible echo and a “scratchy” quality, though sometimes after playing for a bit it would “fix itself”.  I tried the following:

  • Different drivers in the .vmx file, including sb16 (didn’t work at all)
  • Windows 7, to see if it was a Windows 10 issue (nope — the issue happened with any version of Windows)
  • Linux guests (the issue didn’t happen with them)
  • A fresh Windows 7, then 10, install (still had the issue by default)
  • The fix from VMware for audio on the speaker output (didn’t fix it; in fact, 24-bit made it worse)

Long story short, I finally found this thread, which worked for me.  It just installs a legacy audio adapter in Windows 10, and it worked perfectly!!  While I was in the .vmx anyway, I switched the network adapter from “E1000” to “vmxnet3” and installed the VMXNET3 driver for better performance!
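For reference, the network tweak mentioned above is a one-line change in the .vmx file — the adapter number (ethernet0) may differ on your VM:

```
ethernet0.virtualDev = "vmxnet3"
```

Make sure VMware Tools is installed in the guest so Windows picks up the VMXNET3 driver.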


So Amazon requires their EC2 machines to use private IPs, regardless of whether you have an Elastic IP.  The guys at DirectAdmin have a nice guide to help you set up, but making it work on CentOS requires a little help that I found on the Amazon forums. In case that thread goes away, here are the relevant files:


$ cat /etc/sysconfig/network-scripts/ifcfg-eth0\:1

$ cat /etc/sysconfig/network-scripts/route-eth0\:1
default via dev eth0:1 table main
dev eth0:1 src 10.0.x.x table main

$ cat /etc/sysconfig/network-scripts/rule-eth0\:1
from 10.0.x.x/32 table main
to 10.0.x.x/32 table main
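For illustration only, here’s what those files might look like with concrete, made-up addresses — 10.0.0.10 as the instance’s private IP and 10.0.0.1 as the VPC gateway are assumptions, so substitute your own values:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:1  (example values)
DEVICE=eth0:1
IPADDR=10.0.0.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/route-eth0:1
default via 10.0.0.1 dev eth0:1 table main
10.0.0.0/24 dev eth0:1 src 10.0.0.10 table main

# /etc/sysconfig/network-scripts/rule-eth0:1
from 10.0.0.10/32 table main
to 10.0.0.10/32 table main
```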



Following this guide from the folks at Zimbra gets you most of what you need to migrate; I’m going to add some extra steps that I found helpful, since the migration was botched a couple of times in testing.  Migrating to AWS adds a few surprises:

  • Absolutely obsess over getting your hostname, hosts file, and anything to do with DNS right.  If you reboot, the AWS cloud config will nuke your settings, so when you make your changes — which are normally simple changes for a Linux admin — reboot and verify.
    • Before you make the changes above, go into /etc/cloud/cloud.cfg and comment out, using #, the cloud-init modules called:
      • set_hostname
      • update_hostname
  • The steps on copying the SSL certificates are a little rough; I went ahead and rsynced the /opt/zimbra/ssl folder right on over.  I had all sorts of issues with server.key not matching, and it was much easier to do this and let Zimbra fix the permissions with its utility at the end.
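The cloud.cfg edit above ends up looking like this — only the two hostname modules change; the neighboring module names shown here are illustrative defaults, so leave the rest of your file as shipped:

```
# /etc/cloud/cloud.cfg (excerpt)
cloud_init_modules:
 - migrator
 - bootcmd
# - set_hostname
# - update_hostname
 - update_etc_hosts
```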

Otherwise, make sure your Elastic IP from Amazon isn’t already on a blacklist with Spamhaus.   I’m a fan of MxToolbox for checking the blacklists.  Finally, don’t forget your reverse DNS setup for your static IP — you can just click here and submit your information to Amazon.  This is also the same spot to ask to get off of Spamhaus; if your IP is on a list, expect up to a week to get it resolved, though.
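A quick way to sanity-check both items from a shell — swap in your own Elastic IP for the 203.0.113.10 example (note Spamhaus queries must come from your own resolver, not a public one):

```shell
# Check the Spamhaus ZEN blacklist: the octets are reversed,
# and an empty (NXDOMAIN) answer means you're not listed
dig +short 10.113.0.203.zen.spamhaus.org

# Confirm your reverse DNS (PTR) record resolves to your mail hostname
dig +short -x 203.0.113.10
```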