On My Failure to Maintain a Windows Environment, and My Abandonment of Android for Tablets for an iPad Pro

Recently, I’ve been considering upgrading my current laptop (a 7th-gen i7 Asus Best Buy special, with 16 GB of RAM and a 256 GB SSD). I was looking at various ultrabooks and low-end gaming laptops, but none of them really clicked with me.

I realized rather quickly that my use case doesn’t really demand a full-fledged PC.

Here’s what I used my Windows laptop for:

First, I used it for indie gaming. Stuff like Stardew Valley, Darkest Dungeon, and others. Nothing super demanding, but fun to play on the go without needing a full setup.

Secondly, I used it for quick configs on my firewall/routers/switches. I also used it with a USB-to-serial cable to do console configs as needed.

Thirdly, general web browsing when I’m on the go, as well as basic document editing. Nothing crazy.

Finally, I used it to RDP back to my main desktop for Lightroom and other tasks that need more CPU than my lowly laptop can handle.

I had some complaints with the laptop itself, which we’ll cover now.

First and foremost, an i7 in an ultrabook runs way too hot. I ended up keeping it on a hardcover book when I was in bed, and used powercfg to cap it at 50% speed while on battery. It was the only way I could comfortably use it without cooking myself medium rare.
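For anyone wanting to do the same, capping the on-battery processor state can be done from an elevated prompt with something along these lines (the 50 is the max processor state percentage):

```powershell
# Cap the maximum processor state at 50% while on battery (DC power)
powercfg /setdcvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 50
# Re-apply the current scheme so the change takes effect
powercfg /setactive SCHEME_CURRENT
```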

Secondly, the keyboard was starting to go. The keys were always a bit flimsy, but after 5 years, they started popping out. 

Finally, the screen was 14” and 1080p. It’s not a huge issue, but it always felt a bit fuzzy. 

So, what were my options?

I looked at some basic gaming laptops (so that I could do more gaming) as well as the Surface line. After some thought, I decided that wasn’t really what I was looking for. Cost and heat would be an issue, and with the Surfaces, there’s definitely an added cost to get the specs I wanted.

I glanced at Android, since most of my RDP/SSH/console needs would be met. But gaming on Android is pretty limited.

That brought me to iPads. What started as a “haha, I’ll just look for fun” quickly crossed into “Wait, this is actually viable.” The games I like have an iPad version. RDP works solidly on it. Logitech has some good keyboard options. I would just have to sell my soul a little tiny bit.

So, I got the 11” iPad Pro and a Logitech keyboard case, and have been using it for a few days now. I haven’t needed my laptop for anything yet. I was even able to sync my Darkest Dungeon saves over, so I didn’t have to restart anything there.

All in all, I am quite happy. 

I suspect I’ll be writing some more posts in the future on this. 

Test-NetConnection: Use PowerShell to Check Port Connectivity

For the longest time, I’ve used a combination of ping, tracert, and telnet to do basic network connectivity testing and troubleshooting. However, telnet is not installed by default, which can be a problem when doing on-the-fly testing on machines.

Enter Test-NetConnection.

Introduced with Windows 8, Test-NetConnection is a PowerShell cmdlet that can handle most of the features of the above command-line tools. This cmdlet is not supported on Windows 7, but I’m sure that all of you have upgraded by now. Right? 🙂

In its most basic form, it can ping a host.
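A minimal call looks like this (the hostname is just an example):

```powershell
# With no extra parameters, this performs a ping-style check
# and returns an object with PingSucceeded and the round-trip time
Test-NetConnection google.com
```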


It can also check whether a port is open on the host:
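For example, checking HTTPS (the hostname and port are just examples):

```powershell
# Attempts a TCP connection to port 443;
# TcpTestSucceeded will be True if the port is open
Test-NetConnection google.com -Port 443
```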


So, what if the port is closed? You’ll get a warning, complete with whether or not the host is even pingable:
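Trying a port that’s (presumably) closed, for illustration:

```powershell
# Against a closed or filtered port, TcpTestSucceeded comes back False,
# while PingSucceeded still tells you whether the host itself responded
Test-NetConnection google.com -Port 8081
```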


You may be wondering: what are all those other IPs that it tried? Fortunately, there’s a more detailed mode we can use. By adding the -InformationLevel parameter, we get more diagnostic information that can be useful.
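Same check as before, but with the verbose output turned on:

```powershell
# -InformationLevel Detailed adds the resolved addresses,
# the matching network route, and other diagnostic details
Test-NetConnection google.com -Port 443 -InformationLevel Detailed
```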


As you can see, it lists all of the different IPs that resolve to that hostname, and it will check each of them when testing whether a port is open. Needless to say, this can be very helpful when trying to sort out erratic issues.

You can also see the next hop that it took to get to the site. That will normally be your default gateway, so it can provide an added level of detail when troubleshooting.

I mentioned that this tool can also do traceroutes. Let’s take a look at that in action. Just add -TraceRoute and you’re in business.
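Again with a placeholder hostname:

```powershell
# Returns the list of hop addresses on the way to the destination
Test-NetConnection google.com -TraceRoute
```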


There you go: you can see the hops taken to get to the destination. You’ll note that this doesn’t show as much information as a traditional traceroute; however, it gives enough detail to handle the basics.

While it is a bit more involved, if you run as an admin, you can use the -DiagnoseRouting and -InformationLevel Detailed switches and get far, far more information. That’s a bit beyond the scope of this writing, however.

Hopefully this has been helpful to you! I am gradually shifting over to this as my go-to basic networking troubleshooting tool, and it is making quite a difference.

My Semi-Automated Phone to Lightroom Setup

I’ve been trying to automate more of my mundane home IT tasks in order to make them take less time. As a general rule, I avoid using the cloud as much as possible in an automated fashion. It would make some parts of this a little easier (particularly the automatic syncs), but for running processes against the files, it seems better to keep it all local. I’d rather use my home storage backed up to CrashPlan than try to integrate my workflow with OneDrive.

So the first step is to get the photos off of my phone and onto my home server. I run an Android phone, so I’m using FolderSync Pro to get them to my server. FolderSync supports a huge array of transfer types; currently, I’m running FTPES for the transfers. It’s a paid app, but there is a free version. Worth the $5 in my opinion. I have it scheduled to run nightly to back up the previous day’s photos. This is a bit aggressive, so I may increase the lag time to a few days so that I have more photos immediately accessible on my phone.

Once the photos are on my server, a nightly scheduled task runs to split the live photos into separate JPG and MP4 files. The resulting movies and any other MP4 files are dumped in my home video directory for future sorting, while the JPGs and DNG files get pushed over to my desktop for import to Lightroom. To split the files, I’m using ExtractMotionPhotos from here. Free app, highly recommended.
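The sorting half of that nightly task can be sketched in PowerShell. This is a simplified version with made-up paths (the ExtractMotionPhotos step runs first and isn’t shown here):

```powershell
# Hypothetical paths - adjust to your own layout
$incoming  = "D:\PhotoSync\Incoming"
$videoDir  = "D:\Media\Video\Unsorted"
$importDir = "\\Desktop\LightroomImport"

# MP4s (including the clips split out of live photos) go to the video directory
Get-ChildItem $incoming -Filter *.mp4 |
    Move-Item -Destination $videoDir

# JPGs and DNGs head over to the desktop for Lightroom import
Get-ChildItem $incoming -Include *.jpg, *.dng -Recurse |
    Move-Item -Destination $importDir
```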

This is where the automated process ends. Lightroom does not automatically import into folders by date, so every few days I have to do an import from my import directory. I’m going to have to go digging through the plugin API to see if that’s something I can cobble together, or whether I’ll have to forever manually import as needed.

That’s my current semi-automated workflow. At some point, I’ll need to share my Lightroom back to phone workflow.

My Derbycon Experience

Last week, I went to my first conference. It was Derbycon’s last year, and since I live in Louisville, it made sense. I managed to get tickets in the second wave of sales, so I got lucky there, given how fast they sold out.

I’m going to be upfront here: I’m not the most social of people, nor am I that good with crowds. Going was definitely a huge step. It was completely worth it.

The talks were excellent. Two stood out in particular to me. Heather Smith did an amazing talk about how to communicate risk to upper management to help you ensure the security of the company. Link is here. Jayson Street did a great talk on physical security, here. There were some talks that sounded good on Sunday, but I’m going to have to watch them on YouTube. I managed to catch the con flu on Sunday, and didn’t make it.

More than the talks, there was so much else going on. I spent far too much money at the No Starch Press booth. I managed to get enough points to get a coin from the Bank of America CTF challenge. I met a lot of cool people. Even had some good conversations with vendors about their products and capabilities.

As someone who doesn’t really do well in large crowds, this turned out well. They had a quiet room, which I used a few times to catch my breath. I also hung out with people in the quieter areas to work on the CTF.

I didn’t do the main CTF. I took a stab at it and realized that it was far beyond me. I’m going to have to study up on tools and techniques and give it a try at the next con I go to.

All in all, it was a good experience, and I’m now looking forward to my next conference.

Adventures with Linux

One of the areas I’m weak in is Linux. Sure, I’ve done some basic things with a Raspberry Pi, and I can do some ESXi stuff, but by and large, I’m pretty lost when it comes to Linux.

Since the best way to learn something is to jump right in, over the weekend I installed Debian on my main laptop. My primary use of my laptop is to RDP back to my main desktop when I travel (over a VPN, of course), so this was a low-risk move. And it would give me the chance to really dig into it.

Right off the bat, I hit some difficulties. The Debian installer doesn’t include the firmware for my wireless card (an Intel one). I had to download it from Debian’s site and drop it on my USB key. That was a bit of a surprise.

However, once I was past that, the rest of the install went smoothly. Once I was in and set up, I spent some time getting all the things set up the way I liked.

All the other drivers seemed to be set up automatically. I remember (years ago) having to fight with display drivers, but all that was seamless.

I then started installing the software that I wanted to use. Since I use O365 for most everything, I just ran that through the browser and called it a day. Same with WhatsApp. However, I saw that Slack had an app, so I used that instead.

Just for fun, I installed Steam. I was surprised to see how many supported games they had added. I did some reading, and it looks like a lot of them run through Proton, Valve’s compatibility layer, rather than as native ports. I’ll have to do some testing on that at a later date. While my laptop isn’t much for gaming, I’ve always enjoyed playing simpler games on it, like Darkest Dungeon, Sunless Sea, and Stardew Valley.

I’ll be adding more writings on this as I go.

OpenHAB Part 3: More Hue

I finished getting all of my lights configured in OpenHAB, as well as configuring the sitemap.

Once I got the initial syntax down yesterday, the rest progressed very quickly.

What took most of the time, however, was getting my sitemap configured. It took me about two hours of comparing config files and reading help files to realize that I hadn’t capitalized “Switch” at one point in the config.
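For reference, sitemap element types are case-sensitive. A minimal sketch (the sitemap and item names are made up) with the correctly capitalized Switch element:

```
sitemap home label="My Home" {
    Frame label="Lights" {
        // "switch" (lowercase) here fails to parse - it must be "Switch"
        Switch item=LivingRoom_Light label="Living Room"
    }
}
```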


Once I fixed that, the rest came together very quickly.

Next up: getting my SmartThings connected. But that’s for tomorrow.

OpenHAB Part 2: Hue

Since I already had a Hue bridge configured and set up with my lights, I decided I’d start by integrating that into OpenHAB. I figured it would be relatively simple to do.

Well, that turned out to be rather optimistic.

Using the PaperUI, I was able to quickly identify the Hue bridge. But all attempts to connect kept failing. I checked the error message, and apparently I needed a username. No matter, I’m smart, I can Google things and get that information. It sounded easy enough: there’s a Hue API debug page (hueIP/debug/clip.html) where I was able to run the command {"devicetype": "openhabHueBinding#openhab"} to get the user ID. I had to press the button on the bridge, hit “Post” in the API page, and then got the user.

Thinking that was all I needed to do, I copied the username into the PaperUI field, which is where I hit the next issue. PaperUI was insisting that I needed to do it via the .items file. After finding the official openHAB Hue binding page, I was able to generate the file and interact with the bridge that way. Suddenly, all the connected lights popped up in the Inbox, ready for configuration. As did two bridges. Apparently, PaperUI only writes to its database, not to the files, even though it pulls config from the files. At this point, I threw my hands up in the air, abandoned PaperUI for configuration, and decided to do all the config from the files themselves.

This led to significantly more trial and error. At this point, I’ve got several things set up (lights and the bridge), as well as items for each of them (the actions that can take place).
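To give an idea of what the file-based config looks like, here’s a rough sketch of a .things and .items pair for the Hue binding (the IP, username, and names are all made up):

```
// hue.things - the bridge plus one extended color bulb
Bridge hue:bridge:home [ ipAddress="192.168.1.50", userName="the-generated-username" ] {
    0210 bulb1 "Living Room Lamp" [ lightId="1" ]
}

// hue.items - expose the bulb's color channel as an item
Color LivingRoom_Lamp "Living Room Lamp" { channel="hue:0210:home:bulb1:color" }
```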

My next step is to get a sitemap up and running to interact with the lights. That’s tomorrow’s project.

OpenHAB on Raspberry Pi Part 1

For Christmas, I got a Raspberry Pi 3 B. I’ve been wanting to set up a homegrown smarthome, and am planning on using this as a base for that. I’ll be running OpenHAB on the Raspbian OS.

I already have some smarthome equipment, including a SmartThings hub, some Hue lights, and several Alexas. I’d prefer to run as much as I can in-house, without relying on cloud services. Obviously (and especially with the Alexas), that isn’t entirely possible, but I want to do as much as I can.

I’ve not spent much time with Linux, know nothing about Python, and my electrical/circuitry knowledge is what I can remember from a kids’ book of experiments from my childhood. This will very much be a learning experience for me, and should broaden my knowledge of those subjects a great deal.

As it stands now, I’ve got Raspbian running and OpenHAB deployed via Docker, and I’m ready to start configuring over the coming days. I’ve not used Docker before either, but it seems pretty cool.
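For anyone curious, the deployment amounts to a single docker run against the official image. This is roughly what it looks like (the volume names are just examples):

```shell
# Run openHAB from the official image, keeping config and
# userdata in named volumes so they survive container upgrades
docker run -d --name openhab --net=host \
    -v openhab_conf:/openhab/conf \
    -v openhab_userdata:/openhab/userdata \
    -v openhab_addons:/openhab/addons \
    openhab/openhab
```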

Let’s see what the coming days bring!

Philips Hue and a Baby

My wife and I had our first child recently, and are currently going through the “waking up all night to feed” stages. This is rough on everyone involved, so I wanted to see if I could set up a technical solution to make some parts of it easier.

I had some Hue lights already set up in my house, so I decided to start with them to make late night feedings a little easier.

I set up some in our bedroom and some in the nursery. Since the baby is so little, he’s still sleeping in a crib in our room, but we’re keeping the changing supplies and a comfortable chair for feeding in the nursery.

I then got one of the Hue dimmer switches and mounted it next to the bed. I configured it to control both the nursery and bedroom groups of lights, and set the default to a very low-powered light.

Now, when we get up in the night to get the baby, we tap a single button, and it sets the lights to a low setting, thus helping us not wake each other up, as well as not fully waking the baby up.

StorageSpace Oddity

So, I’ve been trying to figure out a weird issue with how the drives in my Storage Spaces pool are being displayed. I first noticed this in Server Manager.

First, for background: my pool is a two-way mirror with 17 drives, weighing in at 23.6 TB formatted.

However, in this screenshot of Server Manager, you can see that Windows thinks it has 4 fewer drives than that. Disregard the retired drive; that one is currently being removed.

If I go to just the basic drives view in Server Manager, I see some of the missing drives, as well as a few that are also reflected in the storage pool.

At this point, I was growing concerned, but figured it could be a GUI glitch. So I went to PowerShell to pull the list of drives associated with the pool. That showed exactly what I expected: all 17 drives.
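The query was along these lines (the pool’s friendly name here is just an example):

```powershell
# List the physical disks that belong to the storage pool
Get-StoragePool -FriendlyName "Pool" | Get-PhysicalDisk
```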

I figured, since I was good there, I’d just verify that the disks visible to Windows showed correctly. I ran Get-Disk, which shows the disks visible to the OS. This shouldn’t show the Storage Spaces member drives, since they are abstracted by Windows. However, I can see a similar set of drives to what I saw in Server Manager, as you can see below.

Honestly, I have no idea what is causing this behavior. The storage spaces rebuild as expected after a drive failure, and all pool health checks come back healthy. It’s just that some drives show both in the pool and as visible to Windows.

Anyone seen this behavior before?