First Parks on the Air Activation – K-4160, Volo Bog

I mentioned in my Field Day post from a few weeks ago that I was hoping to get out to a Parks on the Air activation soon, and this past Saturday, I made it happen!

Parks on the Air is an international program inspired by the ARRL’s 2016 National Parks on the Air event. While that program ended at the beginning of 2017, a group of invested amateurs set about booting up an independent, ongoing program in the style of Summits on the Air, Islands on the Air, World Wide Flora and Fauna, etc. The program engages two (overlapping) sets of radio operators: ‘Activators’ who set up portable, temporary operations in state and national parks and wildlands, and ‘Hunters’ who seek them out on the air from more permanent setups. Of course, you can make ‘park to park’ contacts and be a hunter and an activator at the same time.


The draw of this for me, as I alluded to in the Field Day post, is that I love the energy of being in the middle of a pile-up. Even if these contacts have a more lighthearted and friendly feel than a rapid-fire contest, being a desirable contact on the air is a real jam.


After attending the South Milwaukee Amateur Radio Club’s swapmeet in the morning, I headed back south to Volo Bog State Natural Area, a wilderness preserve in Northern Illinois.

[Photo: the Volo Bog State Natural Area entrance sign]

The park surrounds a large natural marshland, with several miles of loop hiking trails, scenic overlooks and, importantly for radio operations, a picnic area. I did some scouting on Google Maps ahead of time, and guessed that the picnic tables would be far enough apart that I could find a quiet corner to operate in.

And indeed, apart from a few hikers and what looked like a field-trip just departing, the park was pretty quiet. I found a nice picnic table in the shade next to the marsh to set myself up.


My setup for the day was somewhat more powerful than my Field Day setup.


Oh what fun was had! I made 98 QSOs in roughly two hours of operating – all but 4 of which were on 20m, the last few on 15m. The bands were all over the place. I had wild swings of propagation into the eastern seaboard and the Southeast; at one point, I had five consecutive contacts from the same corner of Northwest Georgia. But I also reached some ears out in the Southwest, and even a handful of stations out in Oregon and Washington. I also made 8 (I believe) Park to Park contacts with other operators out in the wild.

All in all, a tremendous day of fun and excitement, and I’m looking forward to getting back on the air in a park soon.

73

Ham Radio Field Day 2021 – Storms and Stations

This post is cross-posted to my Ham Radio specific blog, kk9jef.wordpress.com

Despite sunburns, shattered plastic, and a literal tornado warning, I had a great time at Field Day 2021.

My intention had been to head out to the northern Chicago suburbs on Saturday morning for some testing of the setup. My wife and I had purchased an annual pass to the Lake County dog parks in anticipation of the Fourth of July weekend, and our plan had been to drop her (and Winnifred, a very good dog) at one of these parks, go around the corner to a quieter, non-dog-filled local park, set up and operate for an hour and change, then pick the two of them up and head home. I’d charge my battery using an inverter in the car, grab some water and lunch at home, then be back out in a closer local park Saturday afternoon for a long operating session. And maybe, if I woke up in time, I’d pop back out to the park around the corner from my place for some grayline work on Sunday morning.

A picture of me, my wife, and my dog Winnifred, a yellow lab, on a hike in the woods.

What Saturday should have looked like.

Chicago weather had other plans.

We spent pretty much the entire day on Saturday huddled indoors against a pretty fearsome storm, which included multiple tornado warnings, thunderstorm warnings, flash flood alerts… it was a wild day. I did make it out once in the afternoon to run to the hardware store to work on an impromptu Magnetic Loop antenna project (more on that soon), but other than that, we were holed up with our poor frightened dog.


At least we have a sense of humor about these things

But like a breath of fresh air, Sunday brought cool(ish) clear skies, dry weather, and a rather perfect operating day by mid-morning. So I pulled the portable-rig back together and headed out to the originally-planned local park to catch the last few hours of Field Day.

My setup’s changed a fair bit in the last couple years (and will likely be changing again soon). Here’s what I was playing with on Field Day this year:

  • Xiegu X108-G 20W Transceiver for SSB, CW, etc.
  • Wolf River Coils “Mega Mini TIA” portable antenna, a stainless steel collapsible whip with a base-loading coil that sits on three removable tripod legs. WRC sells a wide variety of configurations and sizes of this basic setup – mine is a 17′ whip with the larger (~14″) coil and 24″ tripod legs. It’ll tune around 80m to 10m, though of course on 80m it’s reeeeally short.
    • The antenna ships with three 10m radials, which attach to the tripod base with ring lugs. I added three 7.5m radials (1/4 wavelength on 30m) and re-used my 5m radials from the QRPGuys Tri-Band Vertical setup I had been using. Each set has a bullet connector on it, and a single ring-lug-to-three-bullet-connector squid attaches to the tripod.
  • The battery for the day was a TalentCell 12V, 6A, 8300mAh battery I picked up from Amazon. We used these batteries all the time in live theatre for their size and weight, and while I doubt that I’m getting the full 8300mAh from it (especially since the X108-G draws around 6A on transmit), it lasted me through a solid afternoon’s operating.
  • I remounted my iambic paddles from their cast-iron base to a lighter plastic one
  • I picked up an Autek VA-1 antenna analyzer earlier this year, which makes a great quick tuning method for the antenna. I played around with bringing my NanoVNA-H4 out, but it’s just a little too fiddly for regular field use.
  • 25′ of RG8x from the radio to the antenna
  • A camp chair and a portable picnic table make for easy ergonomics in the field
  • Logging is pen-and-paper, for now

The setup in the park

I spent the first couple hours hunting and pouncing, mostly on 20m with a stint up to 15m. 20m was super-densely populated as usual; 15m less so, but still with some decent stations holding the band down. I made roughly 25 contacts in that time with some decent distance – Arizona, Florida, Arkansas, Pennsylvania.

At 1pm local time, Field Day hit 24 hours elapsed, which is the maximum operating time for home, mobile, and emergency operations center stations… but Class A (club) and Class B (1/2-op portable) stations were permitted to go to 4pm. And with the bands newly cleared of the massive home stations, I figured, why not call CQ for a while?

WHAT A RIDE.

I held a frequency on 20 meters for roughly 75 minutes, during which I made 90+ contacts. Being the run station was an absolute blast – I’ve done such things at Field Days before, but never as a solo operator and never with my own personal station. Knowing there’s no logger to have your back, just you and the airwaves and the piles of people calling… a truly great time. I know I won’t set any records for speed or quantity of contacts, but I had a blast.

I’m currently looking for a day to go out and do a Parks on the Air activation to recapture some of the rush of running a station. Really, what a joy. And I’ll have a couple of new toys to play with by then…


There’s lots of Parks on the Air parks in the Chicagoland area – I hope to be activating them soon!

73

Advent of Code 2020: Days 2-9

Code for all days is on GitHub

Day 2

Nothing terribly fancy going on with the day 2 challenge – essentially, reading in a text file and doing operations on strings. After developing the ability to split the input lines and validate them in part one, part 2 throws a curveball by changing up how the lines should be validated. I refactored my “isValidLine” code to take a validation function as one of its arguments, since the parsing/splitting of the line is the same for both parts 1 and 2.

(The names of the validation functions come from the problem description – they validate the passwords for a sled company and a toboggan company, respectively.)
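
A minimal sketch of that shape – the parsing details here are illustrative, not the repo code:

def isValidLine(line, validator):
    # Shared parsing: "1-3 a: abcde" -> bounds, letter, password
    counts, letter, password = line.split()
    lo, hi = (int(n) for n in counts.split('-'))
    return validator(lo, hi, letter[0], password)

def sledValidator(lo, hi, letter, password):
    # Part 1: the letter must appear between lo and hi times
    return lo <= password.count(letter) <= hi

def tobogganValidator(lo, hi, letter, password):
    # Part 2: the letter appears at exactly one of the two 1-indexed positions
    return (password[lo - 1] == letter) != (password[hi - 1] == letter)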

Day 3

A slightly more involved bit of data and text processing today, as we help our would-be tobogganist make it down a snowy slope covered in trees. As a way of pushing my Python knowledge, I tried to complete both parts of today’s challenge using list comprehensions and built-in functions like map, reduce, and enumerate. My impulse is to write things out more explicitly by iterating through the elements of the input in a for loop, but practicing the list comprehensions and their related utility functions feels good.

Is the inline code better than the more verbose versions? I’m not entirely sure – it’s certainly less readable than the expanded-into-a-loop versions. It’s also somewhat harder to debug, because there aren’t logical places to, say, print intermediate results. So something like my getSlopeTrees() function, as written, is just silly-long and hard to read – the getSlopeTreesVerbose() function, which I wrote as part of troubleshooting a specific issue, is definitely more readable.

The punchline of my issue was: at least in Python 3.9, floats can’t be used as list indices, even if they’re integers. That is, even for a float for which is_integer() returns True, you must explicitly cast that float to an integer before using it to index a list. In code form:
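
i = 6 / 3                 # division always produces a float: 2.0
i.is_integer()            # True, but that doesn't make it an index...
['.', '#', '.'][i]        # TypeError: list indices must be integers or slices, not float
['.', '#', '.'][int(i)]   # works after an explicit cast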

So, another thing learned. Thanks Advent of Code!

The full code from Day 3, as always, is up on GitHub.

Day 4

We can slowly feel the difficulty starting to ramp up here in day four. We’re still walking on paved roads, as it were, but they’re not as well maintained as they used to be in day one.

Today’s problem concerns more text parsing – the first part just says to validate, essentially, that 7 unique strings are present between sets of blank lines in the input file. The code for this is pretty straightforward – I tried for a little while to do it all in one list comprehension, but ended up splitting it into two lines, which I think is clearer. To be sure, I don’t think that doing it in one comprehension would be better, just that I thought it would be fun practice.

Of course, with the problem statement framing these 7 strings as labels for values in a passport (ecl:blu, hgt:171cm, etc), it seemed like a safe bet that we’d actually have to parse those values by field and do something with them in part two. And of course, we were right. For each of the 7 fields, validation criteria are listed, including ensuring certain fields are numerical and within certain bounds, prefixed or suffixed with certain characters, and so on.

This part turned out ok – it takes the text file, splits it into individual passport strings, then splits each of those into a list of strings of the form “key:value”. The part that feels most “un-Pythonic” to me is the part (commented below) that turns that list of lists of strings into a list of dictionaries. I figure there’s got to be a way to do that with a comprehension, but I couldn’t quite make it work, so I did it as a couple of for loops. It works fine, just feels a little clunky.
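
For the record, one comprehension-flavored way to do that collapse (a sketch, not the code as committed) is to feed the split pairs straight to dict():

fields = ['ecl:gry', 'pid:860033327', 'byr:1937']
passport = dict(field.split(':') for field in fields)
# {'ecl': 'gry', 'pid': '860033327', 'byr': '1937'}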

I also implemented my own prettyPrintPassword function (and its alias ‘ppp’) – it doesn’t do any sorting of the fields, and it doesn’t show you why a passport is invalid if it fails, but it did what I needed it to do for troubleshooting purposes.

Day 5

Wow, a quickie today! The title of the day’s challenge (‘Binary Boarding’) gives you a pretty strong clue what it’s going to be about. The challenge is essentially to parse text representing binary numbers in your language of choice and find the minimum, maximum, and missing values in between.

This is my shortest solution so far, at only 5 lines of code (for both parts!).

This is where Python’s use of convenience generators (like range), built in math functions on general iterators (like min and max), and lots of string functionality (like replace) really shines – the code is easy to write and clean.
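
For reference, a five-line solution of that shape might look like this (a sketch built from the pieces named above, not necessarily my original lines):

seats = [int(s.replace('F', '0').replace('B', '1').replace('L', '0').replace('R', '1'), 2)
         for s in open('input.txt').read().split()]
print(max(seats))                                       # Part 1: highest seat ID
print(min(seats))
print(set(range(min(seats), max(seats))) - set(seats))  # Part 2: the missing seat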

Looking back at my goals for Advent of Code 2020, I’d say I’m doing pretty well – I’m already feeling more fluent/comfortable with list/dictionary comprehensions, the git workflow is becoming more natural, and I’ve completed each project on the day it’s issued. Not too much challenge in terms of the algorithms and data structures so far, but then it is only day 5….

This post will be updated with code from a handful of future days, until it gets too long/unwieldy.

Day 6

As it turns out, talking out loud while devising algorithms and writing code is… hard.

I coded up the solution to Day 6 live on stream on Sunday night, which was both fun and challenging. Part one of the challenge wasn’t too terribly hard – it basically asks whether each letter of the alphabet is contained between any pair of blank lines (“\n\n”) in the input file. That’s a solution that only takes a few lines to write.

I ended up writing three solutions to part 2. I ended up ordering them in the code in order of their complexity/lines of code, but that’s not the order I wrote them in. I first wrote a really over-complicated solution (3), then condensed it down to a single list comprehension (1), then expanded that back out just a little to make it more readable. Like I said on stream, if I were writing this code to go into some kind of actual codebase, I think solution (2) is the one I’d use – it’s concise enough to be comprehensible, but long enough to not be overwhelmingly dense.
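
In spirit, that middle solution looks something like this (a sketch assuming the usual set-intersection approach, not the exact code from the repo):

total = 0
for group in open('input.txt').read().split('\n\n'):
    # One set of answers per person; intersect them all to find
    # the questions everyone in the group answered
    answers = [set(person) for person in group.split()]
    total += len(set.intersection(*answers))
print(total)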

Day 7

Oof, this day took far, far longer than it should have, all because I misunderstood Python’s strip() function. strip, for those who are wondering, removes any of the set of characters given as its argument from the beginning or end of a string. So, “Hello World”.strip(‘Hld’) => “ello Wor”. Unfortunately, I thought that the strip function removed the exact string given to it as an argument, leading to it stripping additional characters off of the end of some inputs and causing my parsing to be wrong. Oof.

In any case, the two halves of day 7 involve creating a tree of data in two different ways (one in which parents reference children, and one in which children reference parents). Then we sum up the total number of parents or children, unweighted or weighted, respectively.
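
The weighted half reduces to a tidy recursion. A sketch, assuming the rules have been parsed into a dict mapping each bag to a list of (count, inner bag) pairs:

def countContained(bag, rules):
    # Each inner bag counts itself plus everything nested inside it
    return sum(n + n * countContained(inner, rules)
               for n, inner in rules.get(bag, []))

# e.g. rules = {'shiny gold': [(1, 'dark olive'), (2, 'vibrant plum')], ...}
# countContained('shiny gold', rules)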

Day 8

Day 8 is giving me flashbacks to the intcode challenges of 2019! But it’s a much softer start this time – we only have three pseudo-assembly instructions to parse, and simpler questions to answer. Once we’ve built a simple function for processing a given list of these instructions, we’ve solved part one. Part 2 requires iterating over our input data and manipulating it slightly, and testing to see whether the new version of the input fulfills the required condition, so our code will need to work over general lists of instructions.

The only thing that hung me up today was forgetting to take into account how Python’s lists handle objects. Specifically, this is the behavior that I was (rightly, but unwantedly) seeing:

listA = [[1,2],[2,4],[3,6]]
listB = [a for a in listA]
listB[0][:] = [4, 8]   # mutate the first inner list in place
print(listA)
>>> [[4, 8], [2, 4], [3, 6]]

Though it doesn’t look like listA is ever being modified, the way we’ve constructed listB, it actually references the same objects as listA. So when we change the object [1,2] to be [4,8], it changes everywhere that object is referenced in both lists. A little thing I once knew, but that had skipped my brain for about 8 minutes. Whoops!
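
The fix, of course, is to copy the inner lists instead of re-referencing them:

import copy

listB = [list(a) for a in listA]   # shallow-copies each inner list
listB = copy.deepcopy(listA)       # or: copies arbitrarily nested structures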

Day 9

Well that felt pretty good! The consensus across the interwebs (twitter, reddit) seems to be that today was relatively easy, and I’d agree. The problem involved two different ways of comparing integers in an input list to the previous 25 numbers, and doing some math on them. There are probably slightly more efficient algorithms, especially for part 2 – currently, when the running sum starting from a given position overshoots the target, I throw out the entire sum and start again from the next position, which is likely wasteful. But for only 1000 inputs, the code still runs in ~160 milliseconds, so I don’t think it’s worth the time to make it more efficient. If this problem comes back in future days, that may be worth revisiting.
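
If it does, the usual improvement is a sliding window that keeps a running sum instead of restarting – a sketch:

def findContiguousSum(data, target):
    lo = hi = total = 0
    while True:
        if total == target and hi - lo >= 2:
            return data[lo:hi]      # the contiguous run we're after
        if total < target:
            if hi == len(data):
                return None         # ran off the end without a match
            total += data[hi]
            hi += 1
        else:
            total -= data[lo]       # overshot: shrink the window from the left
            lo += 1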

Advent of Code 2020: Day 1

I know in my introductory post I said I wasn’t going to post something every day, and I meant it! But I ended up with a little extra time on my hands today and this casual summary has turned into an actual post… I’m going to have to think about how I categorize these posts so anyone who stumbles across my blog isn’t wading through five pages of Advent of Code writeups before getting to tiny moving lights. But for now, here’s day 1.

Full code is available on GitHub.

Much like last year, this year’s Day 1 challenge starts by essentially making sure we can read in a text file and do basic math on it. The first problem asks us to find the two integers in a text file that sum to 2020 and retrieve their product; the second asks the same question, but for a set of 3 integers.

Just for gits and shiggles, I implemented the solution to part one in two different ways. The first, in a function I titled “naiveFind”, just loads all of the numbers from the file into a list, then loops over every pair of numbers until it finds a pair that sums to 2020 (the success() function is detailed below). This is a fine way to approach this problem, but not terribly efficient for long lists.
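
In sketch form (the real listing, with its success() helper, is in the repo):

def naiveFind(numbers):
    # Check every pair -- O(n^2)
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if a + b == 2020:
                success(a, b)
                return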

The speedier way to solve this problem is to use a hashmap, which we get for free in the form of Python dictionary lookups (in most implementations of Python.) Rather than looping over all pairs of numbers, we can just proceed through the list once, storing each member in a dictionary, and as we load each new number, we check to see if its “2020 complement” is already in our dictionary’s keys. This is faster than a raw comparison because looking up via hashing is cheaper than doing a ‘by-hand’ comparison of all of the numbers ourselves.
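
Again in sketch form:

def hashFind(numbers):
    seen = {}
    for n in numbers:
        complement = 2020 - n
        if complement in seen:   # dictionary lookup: O(1) on average
            success(n, complement)
            return
        seen[n] = True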

For the second part of the problem, I only implemented a “naive” solution, running in O(n³) time.
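
A sketch of that shape:

def naiveFind3(numbers):
    # Three nested loops -- O(n^3), but fine for a couple hundred inputs
    for i, a in enumerate(numbers):
        for j, b in enumerate(numbers[i + 1:], i + 1):
            for c in numbers[j + 1:]:
                if a + b + c == 2020:
                    success(a, b, c)
                    return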

With the need to now communicate a set of three numbers (and their product) that form a solution, I rewrote my success() function to accommodate any number of inputs as arguments. (The original, two-input function is commented-out at the bottom.)
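
Python’s *args makes that a small change – a sketch:

import math

def success(*numbers):
    # Accept any number of addends and report them with their product
    print(f"Found {numbers}; product = {math.prod(numbers)}")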

To see how efficient these various functions were, I wrote a quick decorator function that allows me to see the (rough) execution time of each solution.
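
In outline, it looks something like this, with functools.wraps preserving the wrapped function’s name:

import functools
import time

def timer(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        print(f"{func.__name__}: {(time.perf_counter() - start) * 1000:.2f} ms")
        return result
    return wrapper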

Running all three of our search functions in turn:

We can see that:

  • The naïve way of looping over all the pairs takes about 1.5 ms to complete
  • The hashset (dictionary) method of finding one pair of numbers takes about 0.6 ms to complete
  • The naïve way of finding a triple of numbers takes about 65 ms to complete

Some stray thoughts from today

  • When I originally tried to run my basic test code to ensure I was reading the input text file correctly, I got the error: “No such file or directory.” Which is odd, because the text file was just sitting in the same folder as my Python script, like always. It turns out that by default, VSCode uses the open folder as its working directory, not the location of the script being executed. You can change this in the Python terminal settings.
  • I’ve made use of the functools.wraps wrapper to assist me in writing my own decorator functions before, but using it again today to write the timer function makes me want to look a little deeper under the hood to see what it’s doing.

Postscript:

I was just kicking around the #AdventOfCode hashtag on Twitter after completing my solutions, and ran across these super-nifty “Pythonic” solutions by @Brotherluii:

https://twitter.com/Brotherluii/status/1333756750579830784

For those for whom the embedded tweet doesn’t work:

with open('input.txt', 'r') as file:
    data = {int(number) for number in file}

#Part 1
print({entry * (2020-entry) for entry in data if (2020-entry) in data})

#Part 2
print({entry1 * entry2 * (2020 - entry1 - entry2)
    for entry1 in data for entry2 in data
    if (2020 - entry1 - entry2) in data})

Though I understand list comprehensions, they’re never my go-to tool – but seeing them composed like this, I can see how they can be pretty beautiful in the right hands.

Advent of Code 2020

Last winter, I participated in the annual Advent of Code Challenge, a website which offers small (but not necessarily easy) programming challenges every day from December 1 through 25. It turned out to be a great way to get exposed to different corners of development in my language of choice (Python), and with a little more time on my hands this Winter, I’m excited to dive into it again.

The challenges are all written in a non-programming-language-specific way. For example, the first part of the problem from December 1, 2019 boils down to:

* Ingest a list of numbers from a text file, with one line per number
* For each number, divide it by 3, round down, and subtract 2
* Sum all these results together
* Print/return/somehow give the user back the sum
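
In Python, that recipe collapses to a couple of lines:

with open('input.txt') as f:
    print(sum(int(line) // 3 - 2 for line in f))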

While I was doing this in Python, there’s no reason you couldn’t do it in C, or Java, or Haskell, or ALGOL, or any language of your choice (though of course, some of the problems will be more tractable using structures built into some languages and not others). The actual prompts are a bit more flavorful than that example – a narrative about needing to rescue Santa from outer space was woven through all 25 problems last year.

I’m confident in saying that my Python has gotten significantly stronger over the past year, but I feel like I could be stronger in algorithmic thinking (the mazes last year slayed me) and in some of the process crevices around my workflow. To that end, my goals for this year are:

  • To strengthen my intuition for solving data-based problems with time-efficient algorithms
  • To cement the core concepts around Pythonic data structures in my knowledgebase
  • To become more comfortable with Git/GitHub, in particular its command line interface and the branch/merge/HEAD flow
  • To complete each challenge on the day it’s issued

Because nobody needs their RSS feed flooded by me every day for a month, I think I’ve found a way to start a blog post on, say, December 1st, update it every day for a week, then only push to the RSS feed on the 7th – so if you want to check on them daily, you can go to the actual factual blog, or just wait for the summary posts to come out.

If you’re just interested in the code (or are reading this from the future) and want to see my solutions, I’ll be posting the code over on GitHub. I’m not going to be striving to be one of the first 100 people posting successes to each problem (for which there is a leaderboard), I’m just solving these for me. And I encourage anyone out there looking to build their programming confidence to do the same!

Demilight v 0.9.1

The Demilight (miniature moving light) project has been slowed down in the past few months, mostly by good things. Namely, my return to my fulltime job and other interesting technical nerdery. But the project soldiers on!

I made a video detailing the trials and tribulations of getting version 0.9.1 built, which you can watch below (embedded) or over on YouTube.

How to Livestream a (Technology Focused) Class

In BC times (Before Covid), I had often dreamed of setting up a semi-regular gathering with some nerd friends to make things. We’d all sit around, drink beer, eat trail mix, and bash things together with Arduinos and Raspberry Pis and servos and LEDs and what have you. And then March 2020 rolled around – getting together in person was suddenly passé, but with my day job sending us home for “three weeks” of shelter-at-home, I also had a lot more time on my hands…

And so, the Electronics Bash live video classes were born. Starting Sunday, March 15, I began streaming live electronics classes every Sunday night. They have centered around Arduino programming and usage, but I’ve also branched off into electrical theory, battery types, microcontroller hardware, and other related topics. After 20 weeks of that, I shifted gears to Raspberry Pi programming and single board computers. Many of the topics have been suggested by the small but enthusiastic core group of nerds who come together on Sunday nights to share ideas and learn things.

It’s now late August 2020, I’ve taught 22 of these classes, I’m back at my day job, and having “completed” the Arduino course, it feels like I’ve created “one whole thing.” And so I thought it might be a fun time to look back at what I’ve learned about online teaching, streaming setups, electronics, and life over the first 22 Electronics Bash classes.

Some of this is going to be technical, some philosophical, some nonsensical. But what else is new.

The stream looks pretty good these days, I like to think.


Technology

My technology setup has been relatively consistent since about week 4 of Electronics Bash, with a few adjustments along the way as noted below. Let’s break it down by technology categories.

(My setup in many areas changed significantly with the shift to Raspberry Pi classes, so all those changes are noted at the end of this section.)

Goals

When I leapt into the idea of teaching these classes, the thought was to focus on “Arduino, Electronics, and Related Stuff.” I knew I would need at least two things to be visible: a computer desktop (for the programming IDE and explanatory slides) and the workbench itself (for showing wiring and physical demos). Seeing my face would count as a bonus. I also wanted to stream in reasonably high resolution – 720p as a goal, 1080p would be nice – and to make the process of switching between what the viewer is seeing as seamless as possible. Most topics would involve a good amount of swapping back and forth between slides, code, the workbench, and verbal explanation. And it should all look reasonably clear and clean.

The setup that I came up with has served me well in these regards over time, and wasn’t terribly complicated nor expensive to put together.

Computer

I use my Lenovo Legion Y7000 laptop for basically all my computer purposes these days, including streaming and programming. It’s a “gaming laptop”, which essentially means it has a mid-tier GPU stuffed inside a laptop chassis with some extra fans. I personally like the versatility this gives me – I can run Fusion360 or AutoCAD pretty well, rendering a video out from Da Vinci Resolve is pretty efficient, and my setup is still portable.


I have an external monitor more or less permanently behind my workbench to accommodate the streaming setup – it’s a basic 1600×900 monitor that I picked up from FreeGeek Chicago at some point, just fed from the HDMI output on my laptop.

Cameras

My stream setup centers around two primary views: looking at something on the workbench (with my face in a little window in the corner) and looking at something on the computer (with my face in a little window in the corner). Sometimes, it’s looking at my face alone, but that’s mostly for the beginning and end of the class, and any long explanations in the middle. The full list of stream looks is below, but these are the big two/three.

To achieve these core looks, I have three cameras: two Logitech c920 HD webcams as the face-cameras, and a Sony a5100 mirrorless camera feeding an Elgato CamLink 4k HDMI capture dongle pointing straight down at the workbench.

The c920s are both mounted on 3D-printed repositionable arms, which mount to some 2020 aluminum extrusion that clips onto the front of my workbench shelves. They’re really decent face cameras, with a wide field-of-view and decent autofocus. It’s a shame that the Logitech drivers don’t like to save their settings very well, so I end up needing to reconfigure things like color temperature and gain every time I restart my streaming software. But that’s only an annoyance.

You can see both ducting tape (NOT duct tape) and Black Tack in the pictures below, used as barn-doors to shield the cameras from the nearby lights to avoid flare. I have one for when I’m working at the workbench and another for when I’m looking at the laptop screen.

The a5100 is usually attached to an 11″ magic arm with a soft-clamp on a higher shelf; I also have a desktop boom-arm for filming things up-close, but I almost never stream that way. I originally had a cheaper, plastic-y 11″ magic arm, on the theory that I wasn’t sure it would actually be useful. Turns out they’re a great tool, but the cheapest ones wear out pretty quick – the metal ones like the one linked above are worth the investment.

I use the kit OSS 18-55mm lens that the A5100 came with – with “digital true zoom” providing another 2x magnification beyond the longest zoom range, I find I get a really good range of full-desk to close-up-on-table. A battery-replacer (wall-wart-to-battery-form-factor-plug) is a must for streaming, because any internal battery is going to die very quickly. The a5100 also requires a micro-HDMI to HDMI cable.

Software

I use Open Broadcaster Software (OBS) as my primary streaming software. I find it does most everything I want it to, and a couple other things besides. Since I’m not monetizing my streams at all, and don’t need features like pop-up notifications when somebody throws me some digi-chits or something, I don’t feel the need to switch to something like Streamlabs or Stream Elements. But perhaps someday I should play with them.

As I mentioned above, my big 3 scenes are: Computer Screen (+ small face), Workbench (with small face), and Face (With small computer screen and workbench). But I have 13 different scenes in my active collection; for the sake of completeness, they are:

  • Just facecam
    • Facecam with small workbench and laptop views
  • Just workbench
    • Workbench with small facecam
    • Workbench with small facecam and laptop views
  • Just laptop screen
    • Laptop with small facecam
    • Laptop screen with small facecam and workbench views
  • Raspberry Pi Display with small facecam
  • “Video Adjustments in Progress” slide with microphone ON – I use this mostly when I need to stand up from my workbench to grab something on the shelves behind it, and I don’t want viewers to be staring at my tummy
  • “We’ll Be Right Back” slide with Microphone OFF and music on – For times I actually need to step away for a moment
  • “Stream Starting Soon” slide with countdown to start
  • “Goodnight” slide – for end of streams

Switching between the various views smoothly on the fly as necessary to explain a concept is, I think, critical to maintaining flow. For that, I use the Stream Deck Mobile app for my iPhone, which emulates a Stream Deck controller. The Stream Deck configuration app is easy to use if just a little bit buggy – it allows me to have up to 15 buttons on my phone which switch between scenes in OBS on the fly.

My Streamdeck App configuration

To do the “Starting Soon” and “waiting for stragglers to arrive” countdowns, I use a little script called My Stream Timer, which updates a .txt file with the current countdown time, as specified by some very basic controls. OBS then uses this text file as the source for some text that appears on the screen.

Lighting

I spent more than a decade as a stage lighting professional before shifting gears into my current job. As such, I have opinions about lighting.  Of all the physical elements of my setup, this is the one that’s changed most over time. But thankfully, it doesn’t take a ton of cash to make a halfway decent lighting environment, particularly when you’re in charge of your own camera angles.

One good rule of thumb for video that’s meant to be clear and communicative – get a lot of light on your subject, and get light off of whatever’s behind your subject. In my case, I have an 11W 6500K LED bulb strung above my workbench as the primary bench light, as well as a small LED A-lamp fixture that used to be in a bedroom as some fill light. These just blast the bench with light, and allow me to turn the ISO on my camera down to keep the grain away.

On my face, I have a small LED gooseneck that was on an alternate workbench in my last apartment. Hanging above my chair is a clip light with another cool-white LED acting as a hair light. Finally, down near my left knee is a small clip light with a blue LED bulb, which acts as a fill light when I turn 45 degrees to look at my laptop screen.

The background behind your subject doesn’t need to be totally dark, though relative darkness does help with contrast. Creating color contrast can help draw a figure out from the background as well. To that end, I have some RGB LED tape (with only blue and green hardwired on) on my storage shelves that sit behind me on camera, and a red LED PAR bulb that scrapes up my blinds for some additional color and texture. Just provides a little additional pop and saturation to the scene.

All together this adds up to what I feel is a balanced lighting look, that keeps my face visible and clear, illuminates the desktop, and hopefully doesn’t look too cheesy.

Audio

For the first 16 weeks or so of classes, my microphone setup was incredibly inexpensive – a wired BOYA lavalier from Amazon and a generic USB audio interface that I picked up when I was experimenting with audio input to the Raspberry Pi a few years back. I like the BOYA a lot for the price – decent response, nice long cable, fairly durable. More recently, I’ve been using a Fifine wireless boom-style microphone, which gives me a little more freedom to move around, but the low-frequency response isn’t nearly as good.

I’m not in love with the look of the boom mic, but it does its job.

To make things sound just a little rounder, I use a couple of OBS’s built-in VST audio plugins – EQ and Compressor – to keep the frequency response pleasant and the volume at a reasonable level.

I use an inexpensive pair of over-the-ear headphones to hear myself and any notification sounds that come up. They’re pretty darn good for headphones that cost less than $20.

I enjoy having a little background music on my stream, just to fill air and make things a little more cozy. All of it is pulled from YouTube’s music library, which guarantees I won’t be hit with an obscure copyright strike someday.

Raspberry Pi Class Adjustments

When I started the Raspberry Pi classes, I wanted to capture the HDMI output directly from the Pi into the capture software as well, so I went ahead and picked up one of the $20 HDMI capture dongles that have popped up from overseas in the past couple months. The thing works really amazingly well for how inexpensive it is – decent color, framerate, resolution, HDCP support… I’ve had no issues with it so far, and at least on my system the automatically-installed drivers work just fine. There does seem to be about 200ms of lag going into OBS, but for desktop instruction this is just fine. If you were using it to capture the output of an external camera, it might be necessary to delay your audio to match.

It could not look any more generic, but it actually works pretty well.

For my very first RPi class, I interacted with the Pi via OBS – that is, my view of the Raspberry Pi’s desktop was inside of my streaming output inside of OBS. This wasn’t ideal. The display is, of course, somewhat shrunk down; worse, the slight lag made the interface feel very floaty and hard to use. By the next class, I had dropped an HDMI splitter in between the Pi and the capture card, whose second output feeds a second external monitor. So now I have my laptop screen (where slides/IDE live), my streaming screen (HDMI output from laptop, where OBS/chat lives) and a Raspberry Pi screen (showing Pi desktop). This works really quite well as an interface.

Something I had discovered during my initial setup about USB video sources and USB hubs has also popped up again with this setup. I won’t claim to fully understand the issue, but something about the way USB 2.0/3.0 handle video streaming resources is less than ideal. The result is that putting multiple video devices (webcams, capture cards) into the same USB port on a computer (via a hub) doesn’t necessarily allow them to utilize all the available bandwidth, so having multiple video devices on one hub can be a problem. This blog post by Yokim encapsulates the same experiences I had.

My workaround for this is to have two of the video sources on the same hub, and then only ever activate one of them at a time. The two I chose are the webcam which shows my face when I’m looking at my laptop, and the cheapie capture card bringing in the Raspberry Pi desktop. These are the two feeds I think I’m least likely to ever need at the same time.

I had to take both monitors off their OEM stands to fit them under the lowest shelf in my workspace. I’m currently fitting them with 3D-printed stands.


Teaching: In Person vs. Streaming vs. Zooming

There was a time in my life that I thought I was going to be a school teacher. All of my summer jobs in high school involved teaching a theater camp for kids and teens. Many of my college classes focused on “teaching artist” work, theater for young audiences, and pedagogical theory. I even accidentally ended up in a “How to Teach High School English” class in college that was meant for M.S.Ed. students, and stuck it out because it was so fascinating. And while that’s not ultimately the direction my career has led me at the moment, I’ve always had an interest in teaching skills and sharing knowledge.

There’s been a real learning curve to teaching a course online though. And in my case, teaching it via stream, which I think is worth distinguishing from teaching via Zoom (or one of its thousand clones), which I’ll shorten to ‘Zooming.’ When one is Zooming, whether with friends or students, there’s still a modicum of feedback, even when no one’s saying anything. You can see faces. You can see confusion or comprehension. You can roughly gather whether a class is engaged or lost or checked out or eager for what’s next. It’s a poor substitute for in-person lessons, I think, but at least there’s still some faces in the digital crowd.

In a streaming setup like I use, none of that is guaranteed. I spend a good chunk of my classes essentially talking to myself, and assuming it’s being absorbed on the other side of the internet. Which is not to say the participants are unresponsive – they’re wonderfully good about asking questions, poking fun, chiming in, giving suggestions. But especially for more complex topics, it’s difficult to not be able to look into somebody’s eyes every 30 seconds and make sure they’re following along.

Classes 16, 17, and 18 on Interrupts and Timers are a great example of these challenges. These topics are super interesting (I think), but they’re fairly dense. You need to understand a little bit about program flow, a little bit about memory, a little bit about hardware, and a little bit about timing to understand them. All of which we covered. But it’s the kind of thing where one wants to ask “Does that make sense? Are we all following?” after each tidbit… and that’s just not practical or actionable in a streaming environment. Especially with 6-10 seconds of lag between question and response.


Dealing with Errors: Doing it live

In teaching over 60 hours of live classes at this point, some errors were inevitable. Especially in an electronics course where I think it’s valuable to build up the circuits, code, and understanding in real time. No matter how much I prep, experiment, and try to plan, there is inevitably going to be something that goes wrong. Such is life.

The challenge, then, is what to do when something fails? I personally find it throws me very much off my game – but I’ve consistently gotten feedback that the process of working through problems on camera is actually super useful to those watching.

I’ve wondered as part of these classes if a whole stream on just “Troubleshooting” would be valuable, but I think the more useful version of that is to make an earnest effort to solve the real issues as they come up. Of course, spending 20 minutes tracking down typos would suck. Those are the times I pull out a cake-I-baked-earlier version of the code. But most errors can be fixed quickly, and talking out how to find them – “Oh, this error message usually means…” “Oh, this behavior is wrong because…” is valuable to those learning to code and wire.


Lesson Development

Anyone who’s ever built a course from scratch (and I know that’s what a lot of traditionally-in-person instructors are doing these days!) knows how time consuming it is. First to make sure you fully understand the topic for a given lesson. Then to synthesize that knowledge into a logical sequence of explanations, topics, and themes. And finally to reify those ideas into tangible explanations and demos. Especially with a sweeping topic like Fundamentals of Electricity – where do you even start?

This did end up being a really fun week.

Especially since I was making these classes up as I went along, week to week, my process typically looked something like this:

  • Previous Saturday – identify a potential theme for the following week’s lesson; ruminate, ponder while finalizing the current week’s lesson
  • Sunday is stream-day – focus on the day’s lesson. Possibly announce the next week’s lesson if feeling very confident
  • Monday/Tuesday – Do broad research, identify gaps in current knowledge (‘wait I didn’t know that was a thing’), form idea of scope of topic
  • Wednesday – Start prepping slides with specific research, rearranging and re-shaping the lesson order as they form. Announce stream on Facebook/YouTube
  • Thursday/Friday – Finalize slides while starting to build demo circuits, programs.
  • Saturday – Finish building demo circuits, test that they can be built in real time for stream. Start pondering the following week…
  • Sunday – STREAM IT!

Taking Breaks and ‘Bye’ Days

Writing a new 2-3 hour class every week and teaching it online would be exhausting enough, especially for someone a little rusty with teaching. Doing it in the throes of a pandemic was… well, let’s just say a lot.

I really wanted to keep to the every-single-week schedule as much as I could, both for continuity of those watching and frankly to maintain some structure for myself as the world changed. To that end, I did 20 straight streams from March through the end of July, every single Sunday (well, 1 Monday). Which I felt great about, but I did need to find ways to give myself little breaks in there.

The outlet I came up with was taking what I thought of as ‘bye weeks;’ like when a team is doing so well in a sports tournament that they’re just “assumed to have won” their week and advance automatically. I did this by selecting topics that I either knew well enough to be able to teach with minimal preparation, or that I had already taught for some other purpose.

The two weeks that exemplified this were Week 10: Write Better Code and Week 13: Creating a Printed Circuit Board. The former was essentially refactoring existing code in an IDE, a straightforward thing to do live. The latter was based on a lesson I had actually given at my previous job to some employees and interns. Both provided a little brain space in weeks where I was otherwise swamped.

Now that I’m back to work at my fulltime job, I’ve elected to go to an every-other-weekend schedule, which gives me a lot more breathing room in terms of ruminating, absorbing, and developing the upcoming lessons. And I think the lessons themselves are turning out better for it. Slamming a lesson together in a week on top of  a 40-hour-a-week job would lead to some substandard teaching, no doubt.


Conclusion

I don’t think there’s any better way to illuminate the holes in your knowledge of a topic than to try to teach that topic. Once you have to verbalize/write down/illustrate/demo a subject to someone who’s never touched it before, you discover exactly what you’ve always glossed over. What does happen in that edge case? What situations would cause this specific thing to happen? Why this and not that?

Though I wouldn’t have wished for the current state of the world, I’m grateful to have spent so many Sundays in the last five-and-a-half months with other nerds, teaching, learning, and exploring. I hope we can do the same over beer and trail mix real soon.


Many of the above links are Amazon Affiliate links; they are also all products I use in my everyday work and think are decent and worth the money.

Demilight Version 0.8.1

The newest round of Demilight PCBs and 3D-prints have taken shape as version 0.8.1. Here’s a brief video overview of the current state of things:

The biggest change, as I mention in the video, is that I tried out JLCPCB’s surface mount parts assembly service for the first time. Overall, I’m very satisfied, and I’m delighted to have such a useful shortcut for assembly of these PCBs. The version 0.7 and 0.8 prototype boards, which are essentially the same as 0.8.1 with their 0603 passives and TQFP ATmega, took between 60 and 90 minutes each to assemble. I wouldn’t say they were an enormous challenge to assemble, they just took time and concentration.

But now, with JLCPCB assembling the surface mount components, each of the 0.8.1 PCBs took just 3 minutes to finalize assembly, and it’s all easy thru-hole parts. As I’m considering making a little flock of these, or providing them to folks who aren’t as practiced at soldering, finding ways to accelerate the assembly process is a huge boon.

Of course, there’s some additional cost to getting the boards machine-assembled. And for ordering just two assembled boards, of course the unit cost is going to be high. But it drops off quickly with any kind of scale. I just put in an order for some 0.9 PCBs, and getting 10 of them instead of 2 dropped the unit cost by almost 70%. All the fixed costs – DHL shipping, extended-part charges from JLCPCB – start to amortize real quick. Most of the components themselves have a 10- or 20-part minimum order, due to part loss loading and unloading the pick-n-place machines, so the component cost didn’t actually increase all that much except for the expensive ICs (ATmega, AL8860).

Looking forward to 0.9.0.

Reverse Engineering and Replacing an Industrial 7-Segment Display – Part 2, Investigation

This is Part 2 of an N-part series. See also [Part 1].

In part one of this series, we began the process of developing a replacement for the LASCAR EM-4-LED 4-digit industrial 7-segment display. To recap: we mined the display’s datasheet for all we could, then opened up the device to reveal its component parts and continued to dig into their datasheets until we had a reasonably complete view of the device’s functions. With the research phase complete, it’s time to move into investigation, and to think about how we might begin to probe an unknown device and its connections more specifically.

Author’s Note: This post has been sitting fully written in my drafts since before things locked down in mid-March, but it was lacking a couple of illustrative screenshots/pictures of the signal-capture process. Since the pandemic’s effects are still dragging on, I’m pushing it out now with a couple of substitute images – they are noted below where applicable.

A refresher – this is the little display we are attempting to replace.

As you move into the phase of actually powering a device up and testing it, there are a few key parameters to keep in mind. Power and signal voltage levels are key – is this a 5V part, perhaps 3.3V, perhaps 12 or 24 or higher for industrial parts? And even if the device has a high or wide-range power voltage, any I/O ports may be more limited. This is why gathering as much data on-paper first is useful: to avoid letting the magic smoke out of the device-under-test before you get all its juicy secrets out.

Other specs worth keeping in mind are:

  • Voltage level of outputs – can you safely probe all external pins with a TTL logic probe? Do you need to start with an oscilloscope to verify voltage ranges? Or even a multimeter?
  • Output clock rates – does your instrumentation have the bandwidth to reveal useful information?
  • Open-collector vs. current-source outputs – if you’re expecting to see some output (for driving LEDs, relays, etc), do you need to supply external power to see if anything is actually happening?

Since we have this info (fairly) confidently in hand, let’s dive into probing our hardware and see what new things we can learn.

Utilizing a Logic Analyzer

One thing that many folks pointed out in the comments of my writeup of useful electronics bench tools was the lack of a logic analyzer on my list. I confess before this project, I had never used one, nor particularly found a need for one. For many years, my primary electrical hobby was amateur radio (indeed, I had a whole separate blog for ham radio pursuits) – which, as a side note, is also a wonderful place to jump into learning about electricity in a very hands on way. Working in the handfuls-of-megahertz with analog signals, a 25MHz analog oscilloscope  was a much more useful tool than something that operated only on digital logic. But for this particular project, while a scope is useful for verifying voltage levels and seeing whether a signal is present or not, the right tool for the job is a logic analyzer.

The old analog oscilloscope that got me through years of Ham Radio adventures

A logic analyzer is a piece of digital test gear that reads the voltage on two or more input connectors and creates a digital representation of the logic levels of the voltages present over time. So where a digital oscilloscope records and displays analog voltages over time with some degree of precision, a logic analyzer is only interested in whether the voltage is above or below a threshold, i.e. whether it is a logic high or logic low (for typical ranges like 0–12 V, 0–5 V, or 0–3.3 V).

From talking to some other nerds and receiving some feedback online, it seems like the standout stars in the relatively-low-cost logic analyzer space are the offerings from Saleae and the Analog Discovery and Digital Discovery from Digilent. All of the above are modules that plug into a computer via USB for their control and display capabilities, so they cannot be used as stand-alone devices in the field. While some mid-to-high-end oscilloscopes also have signal-analysis capabilities built in – these are often listed as “mixed signal” oscilloscopes – those are a bit beyond my needs at the moment. And in fact, while the Digilent products have had my eye for awhile, as a place to get my feet wet with signal analyzers for this project, I wanted to verify that this would be a useful tool before I committed my department’s funding to a few-hundred-dollar purchase.

A fancy Rigol scope with logic analyzer functions – note the multipin connector under the display.

I ended up with a $25 8-Channel Sparkfun Logic Analyzer, which handles 3.3V and 5V signals with a sample rate of up to 24 MHz. This nominally means it can handle digital signals up to about 12 MHz, but in practice, something somewhat lower would be a safer choice. Since the LASCAR display we’re working on has a nominal data rate of 500 kHz, this should be plenty for my purposes.

The basic 8-channel logic analyzer from Sparkfun

The Sparkfun Analyzer seems to essentially be a branded version of the many inexpensive logic analyzers floating around Amazon – all of which pretty much will work with the open-source logic analysis software PulseView, which is itself a graphical frontend for the command line program Sigrok. While PulseView doesn’t allow access to all of Sigrok’s many capabilities, it’s a significantly more approachable way to get started with these devices in my opinion.


Pulseview can capture samples and decode them visually for you.

Sparkfun has already written up a great Getting Started with Sigrok, Pulseview, and the Logic Analyzer tutorial, so I won’t try to duplicate their work here. Suffice to say, after getting the software installed, you attach the ground probe on the analyzer to a ground point on the circuit you’re probing, and attach one or more signal probes to the signal lines you’d like to test. After configuring the sample rate at which you want to capture data points and how many points to capture, you “run” the analyzer, which then takes anywhere from a few seconds to a few minutes capturing the number of points you selected. After capture, you can select one of a number of “decoders” that attempt to turn the individual high-or-low, one-or-zero datapoints into a structured view of the data contained therein. For example, if you’re probing what you think is a serial UART stream, the UART decoder will give you a view of the data as ASCII characters being transmitted over the UART, which is much easier than looking at pure sample points.


Here’s a look at the data and power lines going to the existing LASCAR display:

(Getting this picture has been pre-empted by a global pandemic! A picture will be here when I can get back in the building someday.)

What a nice set of labels! The presence of the clock and data lines matches with our expectations, since last time we spotted a shift-register built into the brains of the EM-32 display. The shift register will “clock in” or take in one bit of data from the data line each time it transitions, either from low-to-high or high-to-low. So we should expect to see these lines changing in alternation – first, the data line will go low or high to establish the next bit of data, then the clock line will be pulled low or high to tell the shift register to take-in this bit of data.

Or at least, that’s what I would expect, given the schematics of the EM-32 that we were looking at last time. Probing the signals will hopefully allow us to confirm this. So, hooking up the signal analyzer’s ground to the GND wire and channels 1 and 2 of the analyzer to the CLOCK and DATA lines, here’s what we capture:

This is a substitute image of a different capture – the actual image is inaccessible due to pandemic conditions. But the capture would look much like this.

The first thing we note is that the data rate here is nowhere near the 500 kHz rate that the datasheet says we can tolerate – we’re seeing about 40 bits of data at a rate of roughly 1 kHz, in bursts about 10 times a second. So we can turn our data capture rate waaaay down from its maximum 24 MHz. Which is great. Applying the SPI decoder to this data (which has a similar clock-and-data-lines structure to what we expect) allows us to see a view of the individual 1’s and 0’s that make up the stream of bits coming from the PLC.

This is a substitute image of a different capture – the actual image is inaccessible due to pandemic conditions. But the capture would look much like this.

Comparing this bitstream with the timing diagram we saw last time, we thankfully see things lining up pretty well – we can see the initial clock pulse and start data bit, which tells the display to begin expecting data, followed by 35 more bits of data. The PLC then pauses for approximately 100ms before sending more data.

The two major takeaways from our logic-analyzer work are:

  • The bitstream coming from the PLC is as-expected given what we learned from the datasheet, and
  • Its data rate is at most 1 kHz, in bursts about 10 times a second.

This will help us develop our testing solution – knowing that we have reasonable data rates means that we don’t need to throw anything particularly fancy at this problem in terms of hardware.

PICO-8: Orbit

Over the past couple weeks, as a way to stretch my programming legs and play around with a new system, I’ve been writing a little demo in the 8-bit retro video game environment called PICO-8. Since I think I’m drifting away from this project now, I figured I might as well post my progress here: a “game” demo called Orbit that instantiates a number of objects moving in an elliptical orbit around a central planet.

One of the neat things about PICO-8 is how easy it is to embed a playable demo! Here is the full program running in your browser:

The program starts by instantiating 5 orbiting objects around a central planet. You can switch which object you’re focusing on using ← and →. The two primary buttons (which default to C and X on a desktop, or onscreen keys on mobile) allow access to the menu at the top-right. The menu has functionality for speeding up or slowing down time, adding and removing objects, and changing whether orbits are displayed and what info shows up on the HUD.

This is about the third time I’ve recreated essentially this same structure in different languages/environments. The first time was in Lua in the LOVE2D framework, the second was in Python in PyGame, and now it’s in pseudo-Lua in PICO-8. I’m not sure why this construct – just getting things to orbit each other, really – appeals to me so much. But clearly there’s something there.

Each of the orbiting objects is “on rails” in a sense – rather than applying some kind of gravitational force each timestep, each object is locked into a perfect elliptical orbit defined by four orbital parameters (semi-major axis, eccentricity, argument of periapsis, and mean anomaly at epoch). Given a time T and those four parameters, the engine can calculate exactly where each object should be. Then we just let T advance at some fraction/multiple of real time.
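
Under the hood, that position calculation is Kepler’s equation. A sketch of the idea in ordinary Python (the PICO-8 version is the same math in its Lua dialect, with its own trig conventions):

import math

def orbitPosition(a, e, argPeriapsis, meanAnomaly0, t, mu=1.0):
    n = math.sqrt(mu / a**3)        # mean motion
    M = meanAnomaly0 + n * t        # mean anomaly at time t
    E = M
    for _ in range(20):             # solve E - e*sin(E) = M by fixed-point iteration
        E = M + e * math.sin(E)
    # True anomaly and radius from the eccentric anomaly
    nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                        math.sqrt(1 - e) * math.cos(E / 2))
    r = a * (1 - e * math.cos(E))
    # Rotate by the argument of periapsis into world coordinates
    return (r * math.cos(nu + argPeriapsis), r * math.sin(nu + argPeriapsis))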

The next step in turning this into some kind of actual game would be to allow the orbiting objects (“ships”) to apply a small amount of thrust that changes their orbit. This involves calculating the current Cartesian parameters (position and velocity) and turning those into new orbital parameters.

The hangup with this in PICO-8 is that all numbers are 32-bit fixed precision (0xFFFF.FFFF), with a range of -32768 to 32767.9999. While this is enough range to capture all the fundamental parameters of the orbit themselves (the largest of which is the semi-major axis, which can be up to about 200), it’s not enough dynamic range to do some of the calculations for converting cartesian parameters to orbital ones. Even finding the magnitude of a 2D vector with components ~150 or greater involves an intermediate step with numbers larger than 32767, which is a problem when that’s the largest number we can represent in our number system.
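
One standard trick (shown here in ordinary Python for illustration) is to scale the components down before squaring and scale the result back up, trading a few low bits of precision for headroom:

def magnitude(x, y, scale=64):
    # (x/scale)^2 + (y/scale)^2 stays well below the 32767 ceiling
    xs, ys = x / scale, y / scale
    return scale * ((xs * xs + ys * ys) ** 0.5)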

I briefly toyed with creating a system to represent 64-bit numbers as a duo of 32-bit fixed-point ones, but it’s not quite where my interests lie at the moment. So the project pauses here for now.

In any case, I encourage you to try out PICO-8 and play around. It’s very approachable and a ton of fun, and takes me right back to my days writing QBasic on my middle-school math teacher’s computer.