Beeper Self Hosted

I saw some posts about Beeper last weekend, and figured I’d take a look. It seemed like a cool Trillian-esque idea. For those of you not aware, Beeper links a ton of different chat services together via bridging applications, which means all your chats show up in one app. It supports everything from Facebook Messenger to Instagram, Signal, and WhatsApp.

The security implications did seem a little iffy, but they offer a self-hosted option. So, with a static IP in hand and a decided lack of Linux skills, I set out to get it running.

I spun up a VM on my Hyper-V server at home. I went with Debian, because that’s what I’m “most” familiar with, and started following their instructions.

Initially, it went pretty well. The install uses Ansible, so once I did the initial DNS [Cloudflare FTW] and firewall [OPNsense] config, it seemed to be a breeze. I ran into a small pitfall: it needs port 80 open for the initial configuration. Once I figured THAT part out, it seemed to be moving again.

Then I ran into the second problem: the version of Ansible listed in the official guide didn’t seem to work. Thankfully, the official self-hosted Matrix server guide [here] had the correct Ansible version listed. Once I modified my setup to use that version, the installation resumed.

When the install was done, it was time to run the app and make my account. I used SchildiChat on Android as well as on Windows. Account creation was a breeze. Then it was time to start bridging chats.

I decided to use LinkedIn, Instagram, Discord, WhatsApp, and Signal. Everything but Discord was idiot-proof: attempt login, enter 2FA or scan a QR code, and you’re in business.

Discord was easy too, but then you have to subscribe to each channel per server that you’re on. Thankfully, I only follow a few channels, but it did take me a little bit to figure out what was going on.

It also has plugins for Steam [didn’t work, ancient account with spaces in the name], SMS [requires a LOT of work to get working on Android, so skipped for now], and Facebook [I don’t have one].

Post-setup, I wanted to set up the built-in Borg backup, but that ended up being… a whole new headache. I’ll post about THAT ordeal later. 🙂

All in all, I’ve been running this for a week, and it’s been rock solid. The desktop app seems a little quirky compared to the Android app, but it’s nothing too terrible.

Definitely recommended!

About that iPad I Got…

So I’m back after a bit of an absence. I need to get back into the swing of keeping this up to date.

My last post was about switching to an iPad, which is amusing in hindsight. It served me well for a bit, but I ended up switching it out for a Windows-based tablet again. Main reason? The screen was just too small.

I’ve ended up getting an ASUS ROG Flow Z13 tablet. It’s a “gaming” tablet with a GeForce RTX 3050 built in. I’ve not done too much gaming-gaming with it, but it runs No Man’s Sky and Grim Dawn really well.

Only real complaint I have is that the battery life isn’t that great, but it’s good enough to get by.

On My Failure to Maintain a Windows Environment, and My Abandonment of Android for Tablets for an iPad Pro

Recently, I’ve been considering upgrading my current laptop (a 7th-gen i7 ASUS Best Buy special, with 16GB of RAM and 256GB of storage). I was looking at various ultrabooks and low-end gaming laptops, but none of them really clicked with me.

I realized rather quickly that my use case is really not one demanding of a full-fledged PC. 

Here’s what I used my Windows laptop for:

First, I used it for indie gaming. Stuff like Stardew Valley, Darkest Dungeon, and others. Nothing super demanding, but stuff that was fun to play on the go without having a full setup.

Secondly, I used it for quick configs on my firewall/routers/switches. I also used it with a USB-to-serial cable to do console configs as needed.

Thirdly, general web browsing when I was on the go, as well as basic document editing. Nothing crazy.

Finally, I used it to RDP back to my main desktop to do Lightroom and other tasks that need too much CPU for my lowly laptop to handle.

I had some complaints with the laptop itself, which we’ll cover now.

First and foremost, an i7 in an ultrabook is way too hot. I ended up keeping it on a hardcover book when I was in bed, and used powercfg to cap it at 50% speed while on battery (there’s a sketch of the command after this list). It was the only way that I could comfortably use it without cooking myself medium rare.

Secondly, the keyboard was starting to go. The keys were always a bit flimsy, but after 5 years, they started popping out. 

Finally, the screen was 14” and 1080p. It’s not a huge issue, but it always felt a bit fuzzy. 
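
As promised, here’s roughly what that 50% cap looks like. This is a sketch, assuming you’re modifying the currently active power scheme from an elevated prompt:

# Cap the maximum processor state at 50% while on battery
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 50
# Re-apply the active scheme so the change takes effect
powercfg /setactive SCHEME_CURRENT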

So, what were my options?

I looked at some basic gaming laptops (so that I could do more gaming) as well as the Surface line. After some thought, I decided that wasn’t really what I was looking for. Cost and heat would be an issue with the gaming laptops, and with the Surfaces, there’s definitely an added cost to get the specs I wanted.

I glanced at Android, since most of my RDP/SSH/console needs would be met. But gaming on Android is pretty limited.

That brought me to iPads. What started as a “haha, I’ll just look for fun” quickly crossed into “Wait, this is actually viable.” The games I like have an iPad version. RDP works solidly on it. Logitech has some good keyboard options. I would just have to sell my soul a little tiny bit.

So, I got the iPad Pro 11” and a Logitech keyboard case, and I’ve been using it for a few days now. I have not needed my laptop for anything yet. I was even able to sync my Darkest Dungeon saves over, so I didn’t have to restart anything there.

All in all, I am quite happy. 

I suspect I’ll be writing some more posts in the future on this. 

Test-NetConnection: Use PowerShell to Check Port Connectivity

For the longest time, I’ve used a combination of PING, TRACERT, and TELNET to do basic network connectivity testing and troubleshooting. However, telnet is not installed by default, which can present a problem when doing on-the-fly testing on machines.

Enter Test-NetConnection.

Introduced with Windows 8, Test-NetConnection is a PowerShell cmdlet that can handle most of the features of the above command-line tools. It is not supported on Windows 7, but I’m sure that all of you have upgraded by now. Right? 🙂

In its most basic form, it can ping a host:

Test-NetConnection google.com

It can also check to see if a port is open on the site:

Test-NetConnection google.com -Port 443


So, what if the port is closed? You’ll get output like the following, complete with whether or not the host is even pingable:

Test-NetConnection google.com -Port 445

You may be wondering: what are all those other IPs that it tried? Fortunately, there’s a more detailed mode we can use. By adding the -InformationLevel parameter, we get additional diagnostic information that can be useful.

Test-NetConnection google.com -InformationLevel Detailed


As you can see, it lists all of the different IPs that the hostname resolves to. It will then check each of those when checking whether or not a port is open. Needless to say, this can be very helpful when trying to sort out erratic issues.

You can also see the next hop that it took to get to the site. That will normally be your default gateway, so it can provide an added level of detail when troubleshooting.
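
One more thing worth noting: since this is PowerShell, the cmdlet returns an object rather than plain text, so it’s easy to branch on the results in a script. A minimal sketch (the hostname and port are just examples):

# Capture the result object instead of letting it print to the console
$result = Test-NetConnection google.com -Port 445 -WarningAction SilentlyContinue
if ($result.TcpTestSucceeded) {
    Write-Output "Port is open on $($result.RemoteAddress)"
}
elseif ($result.PingSucceeded) {
    Write-Output "Host is up, but the port is closed or filtered"
}
else {
    Write-Output "Host didn't respond to ping either"
}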

I mentioned that this tool can also do traceroutes. Let’s take a look at that in action. Just add -TraceRoute and you’re in business.

Test-NetConnection 10.42.196.3 -TraceRoute

There you go: you can see the hops taken to get to the destination. You’ll note that this doesn’t show as much information as a traditional traceroute; however, it gives enough detail to handle the basics.

While it is a bit more involved, if you run as an admin, you can use the -DiagnoseRouting and -InformationLevel Detailed switches and get far, far more information. That’s a bit beyond the scope of this writing, however.
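
For the curious, the invocation looks like this (the address is just an example):

Test-NetConnection 10.42.196.3 -DiagnoseRouting -InformationLevel Detailed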

Hopefully this has been helpful to you! I am gradually shifting over to this as my go-to basic networking troubleshooting tool, and it is making quite a difference.

My Semi-Automated Phone to Lightroom Setup

I’ve been trying to automate more of my mundane home IT tasks in order to make them take less time. As a general rule, I avoid using the cloud as much as possible in an automated fashion. It would make some parts of this a little easier (particularly the automatic syncs); however, for running processes against the files, it seems better to keep it all local. I’d rather use my home storage, backed up to CrashPlan, than try to integrate my workflow with OneDrive.

So the first step is to get the photos off of my phone and onto my home server. I run an Android phone, so I’m using FolderSync Pro to get them to my server. FolderSync supports a huge array of transfer types; currently, I’m running FTPES to do the transfers. It’s a paid app, but there is a free version. Worth the $5, in my opinion. I have it scheduled to run nightly to back up the previous day’s photos. This is a bit aggressive, so I may increase the lag time to a few days so that I have more photos immediately accessible on my phone.

Once the photos are on my server, a nightly scheduled task runs to split the live photos into separate JPG and MP4 files. The resulting movies and any other MP4 files are dumped in my home video directory for future sorting, while the JPGs and DNG files get pushed over to my desktop for import to Lightroom. To split the files, I’m using ExtractMotionPhotos from here. Free app, highly recommended.
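
For reference, the sorting half of that task boils down to a couple of move commands. Here’s a rough PowerShell sketch; the paths are made up, and ExtractMotionPhotos has already run against the incoming folder by this point:

# All paths here are examples -- adjust to your own layout
$incoming  = "D:\PhoneSync\Camera"
$videoDir  = "D:\Video\Unsorted"
$importDir = "\\DESKTOP\Lightroom\Import"
# Movies (split-out live photos and any other clips) go to the video directory
Get-ChildItem "$incoming\*" -Include *.mp4 | Move-Item -Destination $videoDir
# Stills and raw files get pushed over for Lightroom import
Get-ChildItem "$incoming\*" -Include *.jpg, *.dng | Move-Item -Destination $importDir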

This is where the automated process ends. Lightroom does not automatically import into folders by date, so every few days I have to do an import off of my import directory. I’m going to have to go digging through the plugin API to see if that’s something I can cobble together, or whether I’ll forever have to import manually as needed.

That’s my current semi-automated workflow. At some point, I’ll need to share my Lightroom back to phone workflow.

My DerbyCon Experience

Last week, I went to my first conference. It was DerbyCon’s last year, and since I live in Louisville, it made sense. I managed to get tickets in the second wave of sales, so I got lucky there, given how fast they sold out.

I’m going to be upfront here: I’m not the most social of people, nor am I that good with crowds. Going was definitely a huge step. It was completely worth it.

The talks were excellent. Two stood out in particular to me. Heather Smith did an amazing talk about how to communicate risks to upper management to help you ensure the security of the company. Link is here. Jayson Street did a great talk on physical security, here. There were some talks that sounded good on Sunday, but I’m going to have to watch them on YouTube; I managed to catch the con flu and didn’t make it.

More than the talks, there was so much else going on. I spent far too much money at the No Starch Press booth. I managed to get enough points to get a coin from the Bank of America CTF challenge. I met a lot of cool people. Even had some good conversations with vendors about their products and capabilities.

As someone who doesn’t really do well in large crowds, this turned out well. They had a quiet room which I used a few times to catch my breath. Also hung out with people in the quieter areas to work on the CTF.

I didn’t do the main CTF. I took a stab at it, and realized that it was far beyond me. Going to have to study up on tools and techniques and give it a try at the next con I go to.

All in all, it was a good experience, and I’m now looking forward to my next conference.

Adventures with Linux

One of my weak areas in IT is Linux. Sure, I’ve done some basic things with a Raspberry Pi, and I can do some ESXi stuff, but by and large, I’m pretty lost.

Since the best way to learn something is to jump right in, over the weekend I installed Debian on my main laptop. My primary use for my laptop is RDPing to my main desktop when I travel (over a VPN, of course), so this was a low-risk move. However, it would give me the option to really dig in.

Right off the bat, I hit some difficulties. Debian’s installer doesn’t include the non-free firmware for my wireless card (an Intel one), so WiFi didn’t work out of the box. I had to download the firmware from Debian’s site and drop it on my USB key. That was a bit of a surprise.

However, once I was past that, the rest of the install went smoothly. Once I was in, I spent some time getting everything configured the way I liked.

All the other drivers seemed to be set up automatically. I remember (years ago) having to fight with display drivers, but all that was seamless.

I then started installing the software that I wanted to use. Since I use O365 for most everything, I just ran that through the browser and called it a day. Same with WhatsApp. However, I saw that Slack had an app, so I used that instead.

Just for fun, I installed Steam. I was surprised to see the number of supported games they had added. I did some reading, and it looks like a lot of them run through Proton, Valve’s Wine-based compatibility layer, rather than natively. I’ll have to do some testing on that at a later date. While my laptop isn’t much for gaming, I’ve always enjoyed playing simpler games on it, like Darkest Dungeon, Sunless Sea, and Stardew Valley.

I’ll be adding more writings on this as I go.

OpenHAB Part 3: More Hue

I finished getting all of my lights configured in OpenHAB, as well as the sitemap.

Once I got the initial syntax down yesterday, the rest progressed very quickly.

What took most of the time, however, was getting my sitemap configured. It took me about two hours of comparing config files and reading help docs to realize that I hadn’t capitalized “switch” at one point in the config.

Oops.
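
For anyone else who hits this: element names in sitemap files are case-sensitive, so it’s Switch, not switch. A trimmed-down example (the item name here is made up):

sitemap home label="My Home" {
    Frame label="Lights" {
        // "Switch" has to be capitalized; lowercase "switch" won't parse
        Switch item=LivingRoom_Light label="Living Room"
    }
}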

Once I fixed that, the rest came together very quickly.

Next up: getting my SmartThings connected. But that’s for tomorrow.

OpenHAB Part 2: Hue

Since I already had a Hue bridge configured and set up with my lights, I decided I’d start with integrating that into OpenHAB. I figured it would be relatively simple to do.

Well, that turned out to be rather optimistic.

Using the PaperUI, I was able to quickly identify the Hue bridge, but all attempts to connect kept failing. I checked the error message, and apparently I needed a username. No matter, I’m smart, I can Google things and get that information. It sounded easy enough: there’s a Hue API debug page (hueIP/debug/clip.html) where I was able to run the command {"devicetype": "openhabHueBinding#openhab"} to get the user ID. I had to press the button on the hub, hit “Post” in the API page, and then I got the user.
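
Incidentally, if you’d rather script it than use the debug page, the same POST can be made from PowerShell. A sketch with a placeholder bridge IP; press the link button first, then run it within about 30 seconds:

# Replace with your bridge's actual IP
$bridge = "192.168.1.50"
$body   = '{"devicetype": "openhabHueBinding#openhab"}'
# The bridge replies with the newly created username
$response = Invoke-RestMethod -Method Post -Uri "http://$bridge/api" -Body $body
$response.success.username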

Thinking that was all I needed to do, I copied the username into the PaperUI field, which is where I hit the next issue: PaperUI was insisting that I needed to do it via the .items file. After finding the official openHAB Hue binding page, I was able to generate the file and interact with the hub that way. Suddenly, all the connected lights popped up in the “Inbox”, ready for configuration. As did two hubs. Apparently, PaperUI only writes to the database, not the files, even though it pulls config from the files. At that point, I threw my hands up in the air, abandoned PaperUI for config, and decided to do all the configuration from the files themselves.

This led to significantly more trial and error. At this point, I’ve got several things set up (the lights and the hub), as well as “items” for each of them (the individual controls you can act on).
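
To give a flavor of what those files look like, here’s a trimmed-down items file. The item names are invented, and the channel IDs would come from whatever your own bridge exposes:

// hue.items -- example entries only; swap in your own channel IDs
Color   LivingRoom_Light  "Living Room"  { channel="hue:0210:1:bulb1:color" }
Dimmer  Hallway_Light     "Hallway"      { channel="hue:0100:1:bulb2:brightness" }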

My next step is to get a sitemap up and running to interact with the lights. That’s tomorrow’s project.