Advent of Code 2020

Last winter, I participated in the annual Advent of Code challenge, a website that offers small (but not necessarily easy) programming challenges every day from December 1 through 25. It turned out to be a great way to get exposed to different corners of development in my language of choice (Python), and with a little more time on my hands this winter, I’m excited to dive into it again.

The challenges are all written in a non-programming-language-specific way. For example, the first part of the problem from December 1, 2019 boils down to:

* Ingest a list of numbers from a text file, with one line per number
* For each number, divide it by 3, round down, and subtract 2
* Sum all these results together
* Print/return/somehow give the user back the sum
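In Python, those steps might boil down to something like this (in real use you’d read the lines from the puzzle’s input file rather than a hard-coded sample):

```python
def fuel_required(mass: int) -> int:
    """Divide by 3, round down, subtract 2."""
    return mass // 3 - 2

def total_fuel(lines) -> int:
    """Sum the fuel for each number, one number per line of input."""
    return sum(fuel_required(int(line)) for line in lines if line.strip())

# The worked example from the puzzle page:
sample = ["12", "14", "1969", "100756"]
print(total_fuel(sample))  # → 34241
```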

While I was doing this in Python, there’s no reason you couldn’t do it in C, or Java, or Haskell, or ALGOL, or any language of your choice (though of course, some of the problems will be more tractable using structures built into some languages and not others). The actual prompts are a bit more flavorful than that example – a narrative about needing to rescue Santa from outer space was woven through all 25 problems last year.

I’m confident in saying that my Python has gotten significantly stronger over the past year, but I’m feeling like I could be stronger in some algorithmic thinking (the mazes last year slayed me) and in some process crevices around my workflow. To that end, my goals for this year are:

  • To strengthen my intuition for solving data-based problems with time-efficient algorithms
  • To cement the core concepts around Pythonic data structures in my knowledge base
  • To become more comfortable with Git/GitHub, in particular its command line interface and the branch/merge/HEAD flow
  • To complete each challenge on the day it’s issued

Because nobody needs their RSS feed flooded by me every day for a month, I think I’ve found a way to start a blog post on, say, December 1st, update it every day for a week, then only push to the RSS feed on the 7th – so if you want to check on them daily, you can go to the actual factual blog, or just wait for the summary posts to come out.

If you’re just interested in the code (or are reading this from the future) and want to see my solutions, I’ll be posting the code over on GitHub. I’m not going to be striving to be one of the first 100 people posting successes to each problem (for which there is a leaderboard), I’m just solving these for me. And I encourage anyone out there looking to build their programming confidence to do the same!

Demilight v 0.9.1

The Demilight (miniature moving light) project has been slowed down in the past few months, mostly by good things. Namely, my return to my fulltime job and other interesting technical nerdery. But the project soldiers on!

I made a video detailing the trials and tribulations of getting version 0.9.1 built, which you can watch below (embedded) or over on YouTube.

How to Livestream a (Technology Focused) Class

In BC times (Before Covid), I had often dreamed of setting up a semi-regular gathering with some nerd friends to make things. We’d all sit around, drink beer, eat trail mix, and bash things together with Arduinos and Raspberry Pis and servos and LEDs and what have you. And then March 2020 rolled around – getting together in person was suddenly passé, but with my day job sending us home for “three weeks” of shelter-at-home, I also had a lot more time on my hands…

And so, the Electronics Bash live video classes were born. Starting Sunday, March 15, I began streaming live electronics classes every Sunday night. They have centered around Arduino programming and usage, but I’ve also branched off into electrical theory, battery types, microcontroller hardware, and other related topics. After 20 weeks of that, I shifted gears to Raspberry Pi programming and single-board computers. Many of the topics have been suggested by the small but enthusiastic core group of nerds who come together on Sunday nights to share ideas and learn things.

It’s now late-August 2020, I’ve taught 22 of these classes, I’m back at my day job, and having “completed” the Arduino course, it feels like I’ve created “one whole thing”. And so I thought it might be a fun time to look back at what I’ve learned about online teaching, streaming setups, electronics, and life over the first 22 Electronics Bash classes.

Some of this is going to be technical, some philosophical, some nonsensical. But what else is new.

The stream looks pretty good these days, I like to think.


Technology

My technology setup has been relatively consistent since about week 4 of Electronics Bash, with a few adjustments along the way as noted below. Let’s break it down by technology categories.

(My setup in many areas changed significantly with the shift to Raspberry Pi classes, so all those changes are noted at the end of this section.)

Goals

When I leapt into the idea of teaching these classes, the thought was to focus on “Arduino, Electronics, and Related Stuff.” I knew I would need at least two things to be visible: a computer desktop (for the programming IDE and explanatory slides) and the workbench itself (for showing wiring and physical demos). Seeing my face I’d count as a bonus. I also wanted to stream in reasonably high resolution – 720p as a goal, 1080p would be nice – and to make the process of switching between what the viewer is seeing as seamless as possible. Most topics would involve a good amount of swapping back and forth between slides, code, the workbench, and verbal explanation. And it should all look reasonably clear and clean.

The setup that I came up with has served me well in these regards over time, and wasn’t terribly complicated nor expensive to put together.

Computer

I use my Lenovo Legion Y7000 laptop for basically all my computer purposes these days, including streaming and programming. It’s a “gaming laptop”, which essentially means it has a mid-tier GPU stuffed inside a laptop chassis with some extra fans. I personally like the versatility this gives me – I can run Fusion360 or AutoCAD pretty well, rendering a video out from Da Vinci Resolve is pretty efficient, and my setup is still portable.


I have an external monitor more or less permanently behind my workbench to accommodate the streaming setup – it’s a basic 1600×900 monitor that I picked up from FreeGeek Chicago at some point, just fed from the HDMI output on my laptop.

Cameras

My stream setup centers around two primary views – looking at something on the workbench (with my face in a little window in the corner) and looking at something on the computer (with my face in a little window in the corner). Sometimes it’s looking at my face alone, but that’s mostly for the beginning and end of the class, and any long explanations in the middle. The full list of stream looks is below, but these are the big two/three.

To achieve these core looks, I have three cameras: two Logitech c920 HD webcams as the face-cameras, and a Sony a5100 mirrorless camera feeding an Elgato CamLink 4k HDMI capture dongle pointing straight down at the workbench.

The c920s are both mounted on 3D-printed repositionable arms, which mount to some 2020 aluminum extrusion that clips onto the front of my workbench shelves. They’re really decent face cameras, with a wide field of view and decent autofocus. It’s a shame that the Logitech drivers don’t like to save their settings very well, so I end up needing to reconfigure things like color temperature and gain every time I restart my streaming software. But that’s only an annoyance.

You can see both ducting tape (NOT duct tape) and Black Tack in the pictures below, used as barn-doors to shield the cameras from the nearby lights to avoid flare. I have one for when I’m working at the workbench and another for when I’m looking at the laptop screen.

The a5100 is usually attached to an 11″ magic arm with a soft-clamp on a higher shelf; I also have a desktop boom-arm for filming things up close, but I almost never stream that way. I originally had a cheaper, plastic-y 11″ magic arm, on the theory that I wasn’t sure if it would actually be useful. Turns out they’re a great tool, but the cheapest ones wear out pretty quickly – the metal ones like the one linked above are worth the investment.

I use the kit OSS 18-55mm lens that the A5100 came with – with “digital true zoom” providing another 2x magnification beyond the longest zoom range, I find I get a really good range of full-desk to close-up-on-table. A battery-replacer (wall-wart-to-battery-form-factor-plug) is a must for streaming, because any internal battery is going to die very quickly. The a5100 also requires a micro-HDMI to HDMI cable.

Software

I use Open Broadcaster Software (OBS) as my primary streaming software. I find it does most everything I want it to, and a couple other things besides. Since I’m not monetizing my streams at all, and don’t need features like pop-up notifications when somebody throws me some digi-chits or something, I don’t feel the need to switch to something like Streamlabs or Stream Elements. But perhaps someday I should play with them.

As I mentioned above, my big 3 scenes are: Computer Screen (+ small face), Workbench (with small face), and Face (With small computer screen and workbench). But I have 13 different scenes in my active collection; for the sake of completeness, they are:

  • Just facecam
    • Facecam with small workbench and laptop views
  • Just workbench
    • Workbench with small facecam
    • Workbench with small facecam and laptop views
  • Just laptop screen
    • Laptop with small facecam
    • Laptop screen with small facecam and workbench views
  • Raspberry Pi Display with small facecam
  • “Video Adjustments in Progress” slide with microphone ON – I use this mostly when I need to stand up from my workbench to grab something on the shelves behind it, and I don’t want viewers to be staring at my tummy
  • “We’ll Be Right Back” slide with Microphone OFF and music on – For times I actually need to step away for a moment
  • “Stream Starting Soon” slide with countdown to start
  • “Goodnight” slide – for end of streams

Switching between the various views smoothly on the fly as necessary to explain a concept is, I think, critical to maintaining flow. For that, I use the Stream Deck Mobile app for my iPhone, which emulates a Stream Deck controller. The Stream Deck configuration app is easy to use if just a little bit buggy – it allows me to have up to 15 buttons on my phone which switch between scenes in OBS on the fly.

My Streamdeck App configuration

To do the “Starting Soon” and “waiting for stragglers to arrive” countdowns, I use a little script called My Stream Timer, which updates a .txt file with the current countdown time, controlled by some very basic settings. OBS then uses this text file as the source for some text that appears on the screen.
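The concept is simple enough that a minimal version fits in a few lines of Python – this is just a sketch of the idea (the file path and the update loop are hypothetical, not My Stream Timer’s actual internals):

```python
import time

def format_remaining(end_time: float, now: float) -> str:
    """Format the seconds left until end_time as MM:SS, clamping at zero."""
    remaining = max(0, int(end_time - now))
    return f"{remaining // 60:02d}:{remaining % 60:02d}"

def write_countdown(path: str, end_time: float) -> None:
    """Overwrite the text file that OBS reads as a text source."""
    with open(path, "w") as f:
        f.write(format_remaining(end_time, time.time()))

# e.g. a countdown with 90 seconds left on the clock:
print(format_remaining(100.0, 10.0))  # → 01:30
```

OBS re-reads the file as it changes, so rewriting it once a second is all the “integration” a tool like this needs.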

Lighting

I spent more than a decade as a stage lighting professional before shifting gears into my current job. As such, I have opinions about lighting. Of all the physical elements of my setup, this is the one that’s changed most over time. But thankfully, it doesn’t take a ton of cash to make a halfway decent lighting environment, particularly when you’re in charge of your own camera angles.

One good rule of thumb for video that’s meant to be clear and communicative – get a lot of light on your subject, and get light off of whatever’s behind your subject. In my case, I have an 11W 6500K LED bulb strung above my workbench as the primary bench light, as well as a small LED A-lamp fixture that used to be in a bedroom as some fill light. These just blast the bench with light, and allow me to turn the ISO on my camera down to keep the grain away.

On my face, I have a small LED gooseneck that was on an alternate workbench in my last apartment. Hanging above my chair is a clip light with another cool-white LED acting as a hair light. Finally, down near my left knee is a small clip light with a blue LED bulb, which acts as a fill light when I turn 45 degrees to look at my laptop screen.

The background behind your subject doesn’t need to be totally dark, though relative darkness does help with contrast. Creating color contrast can help draw a figure out from the background as well. To that end, I have some RGB LED tape (with only blue and green hardwired on) on my storage shelves that sit behind me on camera, and a red LED PAR bulb that scrapes up my blinds for some additional color and texture. Just provides a little additional pop and saturation to the scene.

All together, this adds up to what I feel is a balanced lighting look that keeps my face visible and clear, illuminates the desktop, and hopefully doesn’t look too cheesy.

Audio

For the first 16 weeks or so of classes, my microphone setup was incredibly inexpensive – a wired BOYA lavalier from Amazon and a generic USB audio interface that I picked up when I was experimenting with audio input to the Raspberry Pi a few years back. I like the BOYA a lot for the price – decent response, nice long cable, fairly durable. More recently, I’ve been using a Fifine wireless boom-style microphone, which gives me a little more freedom to move around, but the low-frequency response isn’t nearly as good.

I’m not in love with the look of the boom mic, but it does its job.

To make things sound just a little rounder, I use a couple of OBS’s built-in VST audio plugins – EQ and compressor – to keep the frequency response pleasant and the volume at a reasonable level.
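For the curious, the core gain math a compressor applies is pretty simple – here’s a toy Python sketch of it (the threshold and ratio are example values, not my actual plugin settings):

```python
def compress_db(level_db: float, threshold_db: float = -20.0, ratio: float = 4.0) -> float:
    """Above the threshold, each dB of input yields only 1/ratio dB of output."""
    if level_db <= threshold_db:
        return level_db              # below threshold: pass through unchanged
    over = level_db - threshold_db   # how far above the threshold we are
    return threshold_db + over / ratio

print(compress_db(-8.0))   # -20 + 12/4 = -17.0 dB
```

The net effect is exactly what you want for streaming: loud moments get tamed while quiet speech passes through, and a little makeup gain afterward brings the whole signal up to a comfortable level.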

I use an inexpensive pair of over-the-ear headphones to hear myself and any notification sounds that come up. They’re pretty darn good for headphones that cost less than $20.

I enjoy having a little background music on my stream, just to fill air and make things a little more cozy. All of it is pulled from YouTube’s music library, which guarantees I won’t be hit with an obscure copyright strike someday.

Raspberry Pi Class Adjustments

When I started the Raspberry Pi classes, I wanted to capture the HDMI output directly from the Pi into the capture software as well, so I went ahead and picked up one of the $20 HDMI capture dongles that have popped up from overseas in the past couple of months. The thing works really amazingly well for how inexpensive it is – decent color, framerate, resolution, HDCP support… I’ve had no issues with it so far, and at least on my system the automatically-installed drivers work just fine. There does seem to be about 200ms of lag going into OBS, but for desktop instruction this is just fine. If you were using it to capture the output of an external camera, it might be necessary to delay your audio to match.

It could not look any more generic, but it actually works pretty well.

For my very first RPi class, I interacted with the Pi via OBS – that is, my view of the Raspberry Pi’s desktop was inside of my streaming output inside of OBS. This wasn’t ideal. The display is, of course, somewhat shrunk down; worse, the slight lag made the interface feel very floaty and hard to use. By the next class, I had dropped an HDMI splitter in between the Pi and the capture card, whose second output feeds a second external monitor. So now I have my laptop screen (where slides/IDE live), my streaming screen (HDMI output from laptop, where OBS/chat lives) and a Raspberry Pi screen (showing Pi desktop). This works really quite well as an interface.

Something I had discovered during my initial setup about USB video sources and USB hubs has also popped up again with this setup. I won’t claim to fully understand the issue, but something about the way USB 2.0/3.0 handles video streaming resources is less than ideal. The result is that putting multiple video devices (webcams, capture cards) into the same USB port on a computer (via a hub) doesn’t necessarily allow them to utilize all the available bandwidth, so having multiple video devices on one hub can be a problem. This blog post by Yokim encapsulates the same experiences I had.

My workaround for this is to have two of the video sources on the same hub, and then only ever activate one of them at a time. The two I chose are the webcam which shows my face when I’m looking at my laptop, and the cheapie capture card bringing in the Raspberry Pi desktop. These are the two feeds I think I’m least likely to ever need at the same time.

I had to take both monitors off their OEM stands to fit them under the lowest shelf in my workspace. Currently fitting them with 3D-printed stands.


Teaching: In Person vs. Streaming vs. Zooming

There was a time in my life that I thought I was going to be a school teacher. All of my summer jobs in high school involved teaching a theater camp for kids and teens. Many of my college classes focused on “teaching artist” work, theater for young audiences, and pedagogical theory. I even accidentally ended up in a “How to Teach High School English” class in college that was meant for M.S.Ed. students, and stuck it out because it was so fascinating. And while that’s not ultimately the direction my career has led me at the moment, I’ve always had an interest in teaching skills and sharing knowledge.

There’s been a real learning curve to teaching a course online though. And in my case, teaching it via stream, which I think is worth distinguishing from teaching via Zoom (or one of its thousand clones), which I’ll shorten to ‘Zooming.’ When one is Zooming, whether with friends or students, there’s still a modicum of feedback, even when no one’s saying anything. You can see faces. You can see confusion or comprehension. You can roughly gather whether a class is engaged or lost or checked out or eager for what’s next. It’s a poor substitute for in-person lessons, I think, but at least there’s still some faces in the digital crowd.

In a streaming setup like I use, none of that is guaranteed. I spend a good chunk of my classes essentially talking to myself, and assuming it’s being absorbed on the other side of the internet. Which is not to say the participants are unresponsive – they’re wonderfully good about asking questions, poking fun, chiming in, giving suggestions. But especially for more complex topics, it’s difficult to not be able to look into somebody’s eyes every 30 seconds and make sure they’re following along.

Classes 16, 17, and 18 on Interrupts and Timers are a great example of these challenges. These topics are super interesting (I think), but they’re fairly dense. You need to understand a little bit about program flow, a little bit about memory, a little bit about hardware, and a little bit about timing to understand them. All of which we covered. But it’s the kind of thing where one wants to ask “Does that make sense? Are we all following?” after each tidbit… and that’s just not practical or actionable in a streaming environment. Especially with 6-10 seconds of lag between question and response.


Dealing with Errors: Doing it live

Having taught over 60 hours of live classes at this point, I can say some errors were inevitable. Especially in an electronics course, where I think it’s valuable to build up the circuits, code, and understanding in real time. No matter how much I prep, experiment, and try to plan, there is inevitably going to be something that goes wrong. Such is life.

The challenge, then, is what to do when something fails? I personally find it throws me very much off my game – but I’ve consistently gotten feedback that the process of working through problems on camera is actually super useful to those watching.

I’ve wondered as part of these classes if a whole stream on just “Troubleshooting” would be valuable, but I think the more useful version of that is to make an earnest effort to solve the real issues as they come up. Of course, spending 20 minutes tracking down typos would suck. Those are the times I pull out a cake-I-baked-earlier version of the code. But most errors can be fixed quickly, and talking out how to find them – “Oh, this error message usually means…” “Oh, this behavior is wrong because…” is valuable to those learning to code and wire.


Lesson Development

Anyone who’s ever built a course from scratch (and I know that’s what a lot of traditionally-in-person instructors are doing these days!) knows how time consuming it is. First to make sure you fully understand the topic for a given lesson. Then to synthesize that knowledge into a logical sequence of explanations, topics, and themes. And finally to reify those ideas into tangible explanations and demos. Especially with a sweeping topic like Fundamentals of Electricity– where do you even start?

This did end up being a really fun week.

Especially since I was making these classes up as I went along, week to week, my process typically looked something like this:

  • Previous Saturday – identify a potential theme for the following week’s lesson; ruminate, ponder while finalizing the current week’s lesson
  • Sunday is stream-day – focus on the day’s lesson. Possibly announce the next week’s lesson if feeling very confident
  • Monday/Tuesday – Do broad research, identify gaps in current knowledge (‘wait I didn’t know that was a thing’), form idea of scope of topic
  • Wednesday – Start prepping slides with specific research, rearranging and re-shaping the lesson order as they form. Announce stream on Facebook/YouTube
  • Thursday/Friday – Finalize slides while starting to build demo circuits, programs.
  • Saturday – Finish building demo circuits, test that they can be built in real time for stream. Start pondering the following week…
  • Sunday – STREAM IT!

Taking Breaks and ‘Bye’ Days

Writing a new 2-3 hour class every week and teaching it online would be exhausting enough, especially for someone a little rusty with teaching. Doing it in the throes of a pandemic was… well, let’s just say a lot.

I really wanted to keep to the every-single-week schedule as much as I could, both for continuity of those watching and frankly to maintain some structure for myself as the world changed. To that end, I did 20 straight streams from March through the end of July, every single Sunday (well, 1 Monday). Which I felt great about, but I did need to find ways to give myself little breaks in there.

The outlet I came up with was taking what I thought of as ‘bye weeks’ – like when a team is doing well enough in a sports tournament that they’re just assumed to have won their week and advance automatically. I did this by selecting topics that I either knew well enough to be able to teach with minimal preparation, or that I had already taught for some other purpose.

The two weeks that exemplified this were Week 10: Write Better Code and Week 13: Creating a Printed Circuit Board. The former was essentially refactoring existing code in an IDE, a straightforward thing to do live. The latter was based on a lesson I had actually given at my previous job to some employees and interns. Both provided a little brain space in weeks where I was otherwise swamped.

Now that I’m back to work at my fulltime job, I’ve elected to go to an every-other-weekend schedule, which gives me a lot more breathing room in terms of ruminating, absorbing, and developing the upcoming lessons. And I think the lessons themselves are turning out better for it. Slamming a lesson together in a week on top of a 40-hour-a-week job would lead to some substandard teaching, no doubt.


Conclusion

I don’t think there’s any better way to illuminate the holes in your knowledge of a topic than to try to teach that topic. Once you have to verbalize/write down/illustrate/demo a subject to someone who’s never touched it before, you discover exactly what you’ve always glossed over. What does happen in that edge case? What situations would cause this specific thing to happen? Why this and not that?

Though I wouldn’t have wished for the current state of the world, I’m grateful to have spent so many Sundays in the last five-and-a-half months with other nerds, teaching, learning, and exploring. I hope we can do the same over beer and trail mix real soon.


Many of the above links are Amazon Affiliate links; they are also all products I use in my everyday work and think are decent and worth the money.

Demilight Version 0.8.1

The newest round of Demilight PCBs and 3D prints have taken shape as version 0.8.1. Here’s a brief video overview of the current state of things:

The biggest change, as I mention in the video, is that I tried out JLCPCB’s surface-mount parts assembly service for the first time. Overall, I’m very satisfied, and I’m delighted to have such a useful shortcut for assembly of these PCBs. The version 0.7 and 0.8 prototype boards, which are essentially the same as 0.8.1 with their 0603 passives and TQFP ATmega, took between 60 and 90 minutes each to assemble. I wouldn’t say they were an enormous challenge to assemble – they just took time and concentration.

But now, with JLCPCB assembling the surface mount components, each of the 0.8.1 PCBs took just 3 minutes to finalize assembly, and it’s all easy thru-hole parts. As I’m considering making a little flock of these, or providing them to folks who aren’t as practiced at soldering, finding ways to accelerate the assembly process is a huge boon.

Of course, there’s some additional cost to getting the boards machine-assembled. And for ordering just two assembled boards, of course the unit cost is going to be high. But it drops off quickly with any kind of scale. I just put in an order for some 0.9 PCBs, and getting 10 of them instead of 2 dropped the unit cost by almost 70%. All the fixed costs – DHL shipping, extended-part charges from JLCPCB – start to amortize real quick. Most of the components themselves have a 10- or 20-part minimum order, due to part loss loading and unloading the pick-and-place machines, so the component cost didn’t actually increase all that much except for the expensive ICs (ATmega, AL8860).
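To make that amortization concrete, here’s a quick sketch with made-up numbers (hypothetical, not my actual invoice) – the shape of the curve is the point:

```python
def unit_cost(qty: int, fixed_costs: float, per_board: float) -> float:
    """Fixed costs (shipping, setup/extended-part fees) spread across the run."""
    return per_board + fixed_costs / qty

# Hypothetical figures for illustration only:
fixed = 60.0   # DHL shipping + setup/extended-part charges
board = 3.0    # per-board fab + assembly + components

for qty in (2, 10):
    print(qty, unit_cost(qty, fixed, board))
```

With these made-up numbers, the unit cost falls from 33 to 9 – roughly the ~70% drop I saw on the real order.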

Looking forward to 0.9.0.

Reverse Engineering and Replacing an Industrial 7-Segment Display – Part 2, Investigation

This is Part 2 of an N-part series. See also [Part 1].

In part one of this series, we began the process of developing a replacement for the LASCAR EM-4-LED 4-digit industrial 7-segment display. To recap: we mined the display’s datasheet for all we could, then opened up the device to reveal its component parts and continued to dig into their datasheets until we had a reasonably complete view of the device’s functions. With the research phase complete, it’s time to move into investigation, and we’ll think about how we might begin to probe an unknown device and its connections more specifically.

Author’s Note: This post has been sitting fully written in my drafts since before things locked down in mid-March, but was lacking a couple of illustrative screenshots/pictures of the signal-capture process. Since the pandemic’s effects are still dragging on, I’m pushing this post out now with a couple of substitute images – they are noted below where applicable.

A refresher – this is the little display we are attempting to replace.

As you move into the phase of actually powering a device up and testing it, there are a few key parameters to keep in mind. Power and signal voltage levels are key – is this a 5V part, perhaps 3.3V, perhaps 12 or 24 or higher for industrial parts? And even if the device has a high or wide-range power voltage, any I/O ports may be more limited. This is why gathering as much data on-paper first is useful: to avoid letting the magic smoke out of the device-under-test before you get all its juicy secrets out.

Other specs worth keeping in mind are:

  • Voltage level of outputs – can you safely probe all external pins with a TTL logic probe? Do you need to start with an oscilloscope to verify voltage ranges? Or even a multimeter?
  • Output clock rates – does your instrumentation have the bandwidth to reveal useful information?
  • Open-collector vs. current-source outputs – if you’re expecting to see some output (for driving LEDs, relays, etc), do you need to supply external power to see if anything is actually happening?

Since we have this info (fairly) confidently in hand, let’s dive into probing our hardware and see what new things we can learn.

Utilizing a Logic Analyzer

One thing that many folks pointed out in the comments of my writeup of useful electronics bench tools was the lack of a logic analyzer on my list. I confess that before this project, I had never used one, nor particularly found a need for one. For many years, my primary electrical hobby was amateur radio (indeed, I had a whole separate blog for ham radio pursuits) – which, as a side note, is also a wonderful place to jump into learning about electricity in a very hands-on way. Working in the handfuls-of-megahertz with analog signals, a 25MHz analog oscilloscope was a much more useful tool than something that operated only on digital logic. But for this particular project, while a scope is useful for verifying voltage levels and seeing whether a signal is present or not, the right tool for the job is a logic analyzer.

The old analog oscilloscope that got me through years of Ham Radio adventures

A logic analyzer is a piece of digital test gear that reads the voltage on two or more input connectors and creates a digital representation of the logic levels of the voltages present over time. So where a digital oscilloscope records and displays analog voltages over time with some degree of precision, a logic analyzer is only interested in whether the voltage is above or below a threshold – that is, whether it’s a logic high or a logic low (for typical logic families: 0–12 V, 0–5 V, 0–3.3 V).
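In code terms, each channel of a logic analyzer is essentially a comparator sampled over time – a toy Python sketch of the idea (the 1.5 V threshold is just an example value for 3.3 V logic):

```python
def to_logic(samples, threshold: float = 1.5):
    """Convert analog voltage samples into 1s and 0s against a single threshold."""
    return [1 if v > threshold else 0 for v in samples]

# A handful of sampled voltages on a 3.3 V signal line:
print(to_logic([0.1, 3.2, 3.3, 0.0, 1.6]))  # → [0, 1, 1, 0, 1]
```

Everything the analyzer shows you downstream – waveforms, decoders – is built on top of that stream of ones and zeros.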

From talking to some other nerds and receiving some feedback online, it seems like the standout stars in the relatively-low-cost logic analyzer space are the offerings from Saleae and the Analog Discovery and Digital Discovery from Digilent. All of the above are modules that plug into a computer via USB for their control and display capabilities, so they cannot be used as stand-alone devices in the field. While some mid-to-high-end oscilloscopes also have signal-analysis capabilities built in – these are often listed as “mixed signal” oscilloscopes – those are a bit beyond my current needs at the moment. And in fact, while the Digilent products have had my eye for a while, as a place to get my feet wet with signal analyzers for this project, I wanted to verify that this would be a useful tool before I committed my department’s funding to a few-hundred-dollar purchase.

A fancy Rigol scope with logic analyzer functions – note the multipin connector under the display.

I ended up with a $25 8-channel Sparkfun Logic Analyzer, which handles 3.3 V and 5 V signals with a sample rate of up to 24 MHz. This nominally means it can handle digital signals up to about 12 MHz, but in practice, something somewhat lower would be a safer choice. Since the LASCAR display we’re working on has a nominal data rate of 500 kHz, this should be plenty for my purposes.

The basic 8-channel logic analyzer from Sparkfun

The Sparkfun analyzer seems to essentially be a branded version of the many inexpensive logic analyzers floating around Amazon – all of which will pretty much work with the open-source logic analysis software PulseView, which is itself a graphical frontend for the command-line program Sigrok. While PulseView doesn’t allow access to all of Sigrok’s many capabilities, it’s a significantly more approachable way to get started with these devices in my opinion.


Pulseview can capture samples and decode them visually for you.

Sparkfun has already written up a great Getting Started with Sigrok, Pulseview, and the Logic Analyzer tutorial, so I won’t try to duplicate their work here. Suffice to say, after getting the software installed, you attach the ground probe on the analyzer to a ground point on the circuit you’re probing, and attach one or more signal probes to the signal lines you’d like to test. After configuring the sample rate at which you want to capture data points and how many points to capture, you “run” the analyzer, which then spends a few seconds to minutes capturing the number of points you selected. After capture, you can select one of a number of “decoders” that attempt to turn the individual high-or-low, one-or-zero datapoints into a structured view of the data contained therein. For example, if you’re probing what you think is a serial UART stream, the UART decoder will give you a view of the data as ASCII characters being transmitted over the UART, which is much easier than looking at pure sample points.
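As a toy illustration of what a decoder does (not how PulseView actually implements it), here’s a Python sketch that turns one idealized 8N1 UART frame – sampled at exactly one sample per bit – back into a character:

```python
def decode_uart_byte(bits) -> str:
    """Decode one 8N1 UART frame: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    assert bits[0] == 0 and bits[9] == 1, "bad framing"
    value = 0
    for i, b in enumerate(bits[1:9]):
        value |= b << i          # the least significant bit arrives first
    return chr(value)

# 'A' = 0x41 = 0b01000001, so LSB-first the data bits are 1,0,0,0,0,0,1,0:
print(decode_uart_byte([0, 1, 0, 0, 0, 0, 0, 1, 0, 1]))  # → A
```

A real decoder also has to recover the bit timing from the sample rate and baud rate, but the framing logic is the same idea.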


Here’s a look at the data and power lines going to the existing LASCAR display:

(Getting this picture has been pre-empted by a global pandemic! A picture will be here when I can get back in the building someday.)

What a nice set of labels! The presence of the clock and data lines matches our expectations, since last time we spotted a shift register built into the brains of the EM-32 display. The shift register will “clock in,” or take in, one bit of data from the data line each time the clock transitions, either from low-to-high or high-to-low. So we should expect to see these lines changing in alternation – first, the data line will go low or high to establish the next bit of data, then the clock line will be pulled low or high to tell the shift register to take in this bit of data.
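To make that clock-in behavior concrete, here’s a tiny Python model of a shift register accepting one bit per clock edge. (The real display is hardware, of course – this is purely an illustration, with the 35-bit width taken from the EM-32’s block diagram.)

```python
def shift_in(register, data_bit, width=35):
    """One clock edge on a shift register: the new bit enters at one end,
    and everything already held shifts along by one place. Bits that fall
    off the far end are simply lost."""
    return (register << 1 | data_bit) & ((1 << width) - 1)

reg = 0
for bit in [1, 0, 1, 1]:  # clock in four bits, oldest first
    reg = shift_in(reg, bit)
print(bin(reg))  # → 0b1011
```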

Or at least, that’s what I would expect, given the schematics of the EM-32 that we were looking at last time. Probing the signals will hopefully allow us to confirm this. So, hooking up the signal analyzer’s ground to the GND wire and channels 1 and 2 of the analyzer to the CLOCK and DATA lines, here’s what we capture:

This is a substitute image of a different capture – the actual image is inaccessible due to pandemic conditions. But the capture would look much like this.

The first thing we note is that the data rate here is nowhere near the 500KHz rate that the datasheet says we can tolerate – we’re seeing about 40 bits of data at a rate of roughly 1KHz, in bursts about 10 times a second. So we can turn our data capture rate waaaay down from its maximum 24 MHz. Which is great. Applying the SPI decoder to this data (which has a similar clock-and-data-lines structure to what we expect) allows us to see a view of the individual 1’s and 0’s that make up the stream of bits coming from the PLC.
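The core of what a decoder like that does can be sketched in a few lines of Python – this isn’t PulseView’s actual implementation, just an illustration of reading the data line at each rising clock edge in a capture:

```python
def sample_on_rising_edges(clock, data):
    """Watch the sampled clock for a low-to-high transition, and read
    the data line at that moment. (Illustrative only – real decoders
    also handle polarity, glitches, and framing.)"""
    bits = []
    for i in range(1, len(clock)):
        if clock[i - 1] == 0 and clock[i] == 1:  # rising edge
            bits.append(data[i])
    return bits

# a toy capture: three clock pulses carrying the bits 1, 0, 1
clock = [0, 1, 0, 1, 0, 1, 0]
data  = [1, 1, 0, 0, 1, 1, 1]
print(sample_on_rising_edges(clock, data))  # → [1, 0, 1]
```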

This is a substitute image of a different capture – the actual image is inaccessible due to pandemic conditions. But the capture would look much like this.

Comparing this bitstream with the timing diagram we saw last time, we thankfully see things lining up pretty well – we can see the initial clock pulse and start data bit, which tells the display to begin expecting data, followed by 35 more bits of data. The PLC then pauses for approximately 100ms before sending more data.

The two major takeaways from our logic-analyzer work are:

  • The bitstream coming from the PLC is as-expected given what we learned from the datasheet, and
  • Its data rate is at most 1KHz, in bursts about 10 times a second.

This will help us develop our testing solution – knowing that we have reasonable data rates means that we don’t need to throw anything particularly fancy at this problem in terms of hardware.

PICO-8: Orbit

Over the past couple weeks, as a way to stretch my programming legs and play around with a new system, I’ve been writing a little demo in the 8-bit retro video game environment called PICO-8. Since I think I’m drifting away from this project now, I figured I might as well post my progress here: a “game” demo called Orbit that instantiates a number of objects moving in an elliptical orbit around a central planet.

One of the neat things about PICO-8 is how easy it is to embed a playable demo! Here is the full program running in your browser:

The program starts by instantiating 5 orbiting objects around a central planet. You can switch which object you’re focusing on using ← and →. The two primary buttons (which default to C and X on a desktop, or onscreen keys on mobile) allow access to the menu at the top-right. The menu has functionality for speeding up or slowing down time, adding and removing objects, and changing whether orbits are displayed and what info shows up on the HUD.

This is about the third time I’ve recreated essentially this same structure in different languages/environments. The first time was in Lua in the LOVE2D framework, the second was in Python in PyGame, and now it’s in pseudo-Lua in PICO-8. I’m not sure why this construct – just getting things to orbit each other, really – appeals to me so much. But clearly there’s something there.

Each of the orbiting objects is “on rails” in a sense – rather than apply some kind of gravitational force each timestep, each object is locked into a perfect elliptical orbit defined by four orbital parameters (semi-major axis, eccentricity, argument of periapsis, and mean anomaly at epoch). Given a time T and those four parameters, the engine can calculate exactly where each object should be. Then we just let T advance at some fraction/multiple of real time.
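For the curious, that calculation looks roughly like the following Python sketch (the PICO-8 version is in Lua; this one assumes a gravitational parameter normalized to 1, and solves Kepler’s equation with a few Newton iterations):

```python
import math

def orbit_position(t, a, e, argp, m0, mu=1.0):
    """2D position of an object on a fixed elliptical orbit at time t.

    a: semi-major axis, e: eccentricity, argp: argument of periapsis,
    m0: mean anomaly at epoch, mu: gravitational parameter (assumed 1)."""
    n = math.sqrt(mu / a ** 3)               # mean motion
    big_m = (m0 + n * t) % (2 * math.pi)     # mean anomaly at time t
    big_e = big_m                            # solve E - e*sin(E) = M
    for _ in range(12):                      # Newton's method, fast for small e
        big_e -= (big_e - e * math.sin(big_e) - big_m) / (1 - e * math.cos(big_e))
    # true anomaly and radius from the eccentric anomaly
    nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(big_e / 2),
                        math.sqrt(1 - e) * math.cos(big_e / 2))
    r = a * (1 - e * math.cos(big_e))
    # rotate out of the perifocal frame by the argument of periapsis
    return r * math.cos(nu + argp), r * math.sin(nu + argp)

x, y = orbit_position(0, a=1.0, e=0.0, argp=0.0, m0=0.0)
print(x, y)  # at epoch, a circular orbit starts at periapsis: (1.0, 0.0)
```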

The next step in turning this into some kind of actual game would be to allow the orbiting objects (“ships”) to apply a small amount of thrust that changes their orbit. This involves calculating the current Cartesian parameters (position and velocity) and turning those into new orbital parameters.

The hangup with this in PICO-8 is that all numbers are 32-bit fixed-point (0xFFFF.FFFF), with a range of -32768 to 32767.9999. While this is enough range to capture all the fundamental parameters of the orbits themselves (the largest of which is the semi-major axis, which can be up to about 200), it’s not enough dynamic range to do some of the calculations for converting Cartesian parameters to orbital ones. Even finding the magnitude of a 2D vector with components ~150 or greater involves an intermediate step with numbers larger than 32767, which is a problem when that’s the largest number we can represent in our number system.
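One standard workaround (shown here in Python for clarity, though the same trick works in PICO-8’s Lua) is to scale by the largest component before squaring, so the intermediate values stay small:

```python
def safe_magnitude(x, y):
    """Magnitude of a 2D vector without large intermediate values.

    Dividing through by the largest component keeps the squared terms
    at or below 1.0 – which matters in a fixed-point system like
    PICO-8's, where anything past 32767 wraps around."""
    m = max(abs(x), abs(y))
    if m == 0:
        return 0
    return m * ((x / m) ** 2 + (y / m) ** 2) ** 0.5

# 300^2 + 400^2 = 250000 would overflow PICO-8's range; the scaled
# intermediate values here never exceed 400.
print(safe_magnitude(300, 400))  # → 500.0
```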

I briefly toyed with creating a system to present 64-bit numbers as a duo of 32-bit fixed-point ones, but it’s not quite where my interests lie at the moment. So the project pauses here for now.

In any case, I encourage you to try out PICO-8 and play around. It’s very approachable and a ton of fun, and takes me right back to my days writing QBasic on my middle-school math teacher’s computer.

Video: Demilight Version 0.8

It’s been quite a while since the mini-moving light project (now renamed The Demilight) has been written up on the blog. The project was on hiatus for a few months while I dove into the technical challenges of a new job, but as the job isn’t keeping me quite as busy at the moment (here in early summer 2020), it’s back on the workbench. I’ve put together a video showcasing the current state of the project, now in version 0.8:

The video does a pretty decent job of capturing the current state of things. So what’s next?

Firstly, the goofs I alluded to in the video that I consider to be must-fix items before the files are ready for primetime. They mostly have to do with the 3D-printed parts – I adjusted the access holes and programming slots from version 0.5 to 0.8, but I didn’t do a great job double-checking everything, and things don’t line up very well. That’ll need another few test prints and some adjustment to alleviate all the filing that’s currently necessary.

I’ve also been having some issues with mechanical assembly – I’ve been using some M2 insert nuts to hold the case and case-lid together, and to secure the PCB into the case, but that doesn’t seem to be a particularly good system. It’s possible my nuts and bolts are just really high-tolerance, but they’re constantly cross-threading and not inserting all the way. I think a more robust solution is in order.

The other main error has to do with the footprint for the 5V buck-converter module – somehow, my pin placement is off by .2″ on the PCB footprint, which makes the part overlap with the attachment points for the servos unless you bend the voltage-regulator’s pins over. Not insurmountable, but really annoying. That’ll have to get fixed in version 0.9. Once those two most-egregious errors are corrected, though, I think the unit will be decent enough to publish as a beta version.

It’s a pretty simple part… how did I goof this up?

There are several more substantial improvements in the pipeline as well. In no particular order:

As I mention in the video, I’m working on a miniaturized programmer interface based on some little 0.05″-pitch pogo pins. The results, so far, have been mixed – I have been able to confirm that the interface is providing gnd/5V to the ATmega328, at least enough that its 16 MHz ceramic resonator is oscillating, but I can’t seem to program the chips in-place. Further experiments will be necessary.

Some iterations of the Demilight have incorporated a heatsink to help manage the heat output from the LED emitter chips. To be honest, I’m not sure how necessary it is – I would love to set up some tests with the unit running at its full 1 Amp current and see just how hot things get. Perhaps the first test would be in free air, then inside the case in multiple orientations. I know from some tests I did on a livestream last summer that with enough heatsinking the LED stars can handle up to about 5 Amps, but they dump a huge amount of heat at that point.

If the heatsink comes back, should it still be in candy-apple red?

RGB or RGBW dimming capacity would be really neat – as spiffy as the pure-white versions are, there’s something about color-changing light that feels like it would take this project to the next level. I would need to free up some more PCB space, and possibly move from a single-channel driver to a 3 or 4 channel driver, but finding those in the ~1A current capacity range seems a little tricky.

There are also a couple of purely aesthetic things which could get bumped up to something better. I’ve ordered some 1/4″ white wire sleeving to take the place of the gaff tape covering the wires that run from head to base. And I need to invest a little time dialing in my 3D printer – after 3.5 years of printing, it’s starting to show its age a little bit, and a little extra tightening and lubrication wouldn’t be a bad idea.


So many of my projects during quarantine have focused on building my digital communication mediums – building out this video feels very much like a continuation of that skill-building. The weekly Arduino/Electronics classes I’ve been teaching for 15 weeks now have been a serious crash course in live digital video. That learning process deserves a write-up of its own, but if you compare the following two frames from Episode 0 (testing) and Episode 14 (Wireless Signals), I think the improvements are pretty clear:

Episode one was… pretty rough. The audio is really crunchy too – turns out I had two microphones on (lav and webcam) and they did unkind things together.

We’ve got things pretty well dialed in by now.

It’s been a joy to build some more digital video skills in putting this video together – putting together a basic script, recording a voiceover, learning the editing, effects, and color-grading processes… it’s been both fascinating and time-consuming. The video definitely has some rough edges, but I’m thinking of it as good-enough, and I’m excited to take what I’ve learned from this early creation and apply it to future videos. Much like the tiny light itself, it’s good to just make a thing, anything, a small thing, and iterate from there.

Stream – Electrics and Electronics Bash – Arduino #1

This Sunday evening, March 22nd 2020 at 7pm Central time, I’ll be hosting a livestreaming Introduction to Arduino over on YouTube!

We’ll start from scratch installing the Arduino IDE software, then move on to programming fundamentals, wiring to the Arduino, using a breadboard, and more. We should cover enough ground to be useful to absolute beginners and pros alike.

Grab a cold one and come join me live as we make stuff and learn things. Bring your projects, bring your questions, bring your ideas for what we should learn or talk about. Let’s hang and talk about something other than hand washing. See you there.

This page will be updated with links and resources following the stream.


Geared Nameplate

A quickie today – over the weekend, I decided that my workshop at work needed a nameplate outside the door, to make it a little easier for folks to find me. So I put together this design in Fusion 360, and printed it in black and white PLA+. (The “grey” of the gears is a single layer of white PLA on top of black gears.)

The gears have 8, 9, 10, 12, 14, and 16 teeth, and are symmetrical left/right. This means that it takes 630 revolutions of the smallest gear to return the arrangement to its starting place. We determine this by finding the least common multiple of the tooth counts, which is 5040. Divide that by the 8 teeth on the smallest gear, and we get our 630 revolutions to return to where we started.
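That arithmetic is easy to check in a few lines of Python:

```python
from math import gcd
from functools import reduce

teeth = [8, 9, 10, 12, 14, 16]

# least common multiple of all the tooth counts: lcm(a, b) = a*b / gcd(a, b)
total = reduce(lambda a, b: a * b // gcd(a, b), teeth)
print(total)                # → 5040
print(total // min(teeth))  # → 630 revolutions of the 8-tooth gear
```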

The design has two chamfered holes for screw-mounting, but it’s currently just stuck on the wall with sticky-tack.

Reverse Engineering and Replacing an Industrial 7-Segment Display – Part 1, Research

Building one-off hardware is one part inventing, one part dissecting, one part scrounging. When we try to hit that magic mixture of good, fast, and cheap, so often we must rely on prebuilt modules – if we’re trying to build a widget that gets us from zero to 100, it may only be financially/temporally/technologically reasonable if someone already makes a module that gets us from 0 to 80. Utilizing the economies of scale of already-developed parts can bring a one-off project from the realm of fantasy into feasibility. Often, the solution is to develop a chain of off-the-shelf components that can fulfill the end goal.

But all components have a service life, and a manufacturing lifetime. And when your part goes out of production and then your spares-bin runs dry, sometimes keeping your machine running requires some deeper problem solving. When you work in the public-facing technology sphere (theatre, museum work, retail displays, etc), a lot of the solutions are literally one-of-a-kind, even if they’re constructed from commercial parts.

I recently had the need to replace a very specific module in some equipment. While it didn’t end up being the most high-tech/high speed/highfalutin bit of technology, it presents a good opportunity to talk through how one can approach an unknown part, come to understand its workings, and develop a replacement. So in this N-part series, we’ll look at the process of researching, developing, and implementing a custom one-off solution to a failed part in a unique piece of gear.


The Lascar EM32-4-LED is a four-digit seven-segment panel-mount LED display meant for general-purpose data display. Its small digit size (.39″ tall), machined aluminum housing, small footprint (32.5mm diameter punchout) and NEMA 4X/IP67 rating made it a compact choice for anyone needing to display a single value with 4 digits of precision. It also had the ability to drive four external LEDs, for additional status or process indicators.

Lascar Electronics EM32-4 LED

A piece of equipment I’ve been working on recently had just such a LASCAR display installed a few years back to serve as a timer. I’m going to have to be a little vague about the specifics of the equipment itself, but since this post is focused on technical process and not the piece itself, I think I can safely share enough details for the following to make sense:

The piece is an interactive object that triggers some actions and servos, demonstrates a physical phenomenon, and then takes about 25 seconds to cool back down before it can be used again. The user is presented with a green illuminated button to activate the system – when the system is active or cooling down, the illuminated button turns red. But because it’s not entirely clear from the action of the device alone when it will be cool enough for use, a countdown timer (two digits) is displayed on the EM32 display, counting the number of seconds until we’re good to run again.

Sadly, this particular EM32 display died shortly after LASCAR decided the product hit its End of Life. What’s more, I’m currently without the ability to modify the programming of the PLC that’s driving the whole shebang. In order to maintain the functionality of the piece, it became necessary to build a device that would ingest the existing signals being sent by the PLC, interpret them, and drive a newly crafted 7-segment display of some kind.

The ‘datasheet’ for the EM32-4 is a paltry 2 pages long. Presumably there was additional documentation provided to those who were using the device, but since it’s now EOL, that documentation seems to be unobtainium. But the existing pair of pages does contain some useful information.

We’ll start at the very beginning (a very good place to start): the opening prose paragraph:

This is where we find a high-level overview of the part, its intended purpose, and (sometimes) explanations of the differences between any variants of the part. If, for example, a given part is made in a standard and a mil-spec version, or a normal and a slew-rate-limited version, a manufacturer will often encompass them in a single datasheet. It’s important to identify specifically what part you have, so you can characterize it accurately.

In our case, the EM32-4 is unique enough that there are no major variants. The paragraph mostly tells us what we already know – it’s a 4-digit, 3 decimal point display in a metal bezel. But it does call out the “optional external LEDs.” While it’s unclear at this point exactly what this means, it’s useful to make note of these surprises early on, as they’ll often explain a what-the-heck-is-that moment late in the datasheet.

Moving on, then, to the next useful block in just about any datasheet: the electrical specifications. This is where you’ll find input-voltage ranges for power and signals, output voltages and timing, and other device-specific characteristics (transistor beta and voltage spreads, op-amp gain and slew rate, power ratings, etc). If I were doing a Double Dare Physical Challenge and had to utilize a part with only one table of its datasheet available, I’d take the electrical specs chart 9 out of 10 times.

In our case, there are only 6 lines, but 6 important lines they are. We learn that this is a 5V part, but that it can run at up to 9V, so we can’t assume we’ll have 5V power available. Nominal current draw is ~20mA, so the power available on existing supply lines may be limited. The operating and storage temperature ranges are typical. VLED is a bit confusing – does this refer to the display itself, in which case we have no real purpose for this voltage? Or perhaps it refers to the voltage available for the external LEDs.

The final line is promising – the typical clock input frequency is 500KHz. This is the first we’ve seen any information about how this device receives communication from a controller. But now we know it’s some kind of clocked input (perhaps something like SPI?), and that its frequency is not unreasonable for something we might interpret with off-the-shelf hardware. Not that 500KHz is a stroll in the park, but it’s not in the many-megahertz range, say.

The last really useful part of the datasheet is the Functional Block Diagram. This block shows a symbolic representation of what’s happening inside the device, as an aid to the user in visualizing what’s happening on the interior and how we need to interface with it. You really only see this with integrated circuits or other modules (the functional block diagram of a transistor would be… just a transistor).

To highlight the purpose of the block diagram, let’s do a quick comparison between two drawings on another part: the venerable 555 Timer IC. Its datasheet sports both a schematic diagram and a functional diagram; here are the two side-by-side:

This demonstrates pretty clearly the distinct purposes of the schematic diagram versus the functional one. The functional diagram is there to give users a high-level understanding of how the device functions, where inputs and outputs attach, and what the essential parts of the device are. The schematic diagram is there for those who need to really drill into exactly how the chip is built, because of some precise technical reason. When I’m driving a car, I need to know whether the transmission is manual or automatic, two-wheel vs four wheel, and so on – a functional understanding is enough. A mechanic needs a schematic showing the various linkages and gears of my transmission to diagnose and repair issues; holding that level of information in my head all the time would get in the way of the business of driving around.

With that diversion hopefully making clear the purpose of the functional block diagram, let’s check out the one for the EM32-4.

There’s some really good info here! Let’s start with the external connections:

  • We could have guessed V+ and 0V are supply voltage and ground, but this confirms it.
  • The 35-bit shift register is intriguing, and illuminates the purpose of the D (Data) and Ck (Clock) terminals. There’s also an Ē (enable) pin for the data line which is active low (indicated by the bar over the pin name, for “not”). 
  • Since we don’t have direct control over the latches or buffer layer of the shift register, it seems that data will be shown as soon as it’s clocked in.
    • There’s a weird hanging inverter on the left side of the diagram attached to the output buffers, as if there was some kind of external buffer control possible at some point. How odd.
  • It seems that the VL pin is on the downstream side from the voltage regulator, so it probably puts out the 3 volts listed under electrical specifications above.
    • This probably means that L1 thru L4 are open-collector outputs, so we have a sense of how we might use the part to drive the external LEDs.
  • Finally, there’s a Reset pin for soft-resetting the data displayed – this would be useful if the end product was configured so the display retained power when the controller turned off – the controller could simply reset the display (or many displays in parallel) to ensure that no data was present for a fresh start.

One of the starting places for replacing this display was the possibility that there might be some driver circuitry driving a generic 7-segment display. If the display itself was still good, perhaps we could simply replace the driver and have a visually identical display. Those hopes were dashed, however, when I opened up the EM32-4 LED to find…

An OEM 4-LED – the power behind the throne – it’s the same product, right down to the block diagram, but in a DIP-style package. The EM32-4, it turns out, is the OEM-4 with a nice aluminum case and terminal blocks. And the back of the OEM-4 is epoxy-blobbed together, so even if we were to break into the thing, there’s a good chance everything is wirebonded all the way to nowhere and back. Reusing the display on this thing is a non-starter.

All is not in vain, however – the OEM-4’s datasheet is a whopping four pages to the EM32’s paltry two. The first two pages are essentially identical (which makes sense, since one is the other in a very real way), but the two additional pages in the OEM-4’s datasheet have four additional juicy diagrams. Starting with a timing diagram:

We can now see in much more detail that, yes indeed, the display is based around an internal shift-register architecture, with bits being clocked in and held in the device. We can see that there’s a start bit (“1”) and the 35 data bits we saw in the EM32’s datasheet, so we’ll need to clock 36 physical bits into the device, whereupon it will automatically load the data (presumably into the data latches and output buffer). Then, in 30 ns, it will automatically reset and be ready for the next frame. The clock timing, which is listed as 500KHz nominal, can in theory be pushed to 2 MHz if the 500 ns cycle time (250 ns + 250 ns) can be believed. (Not that we’re hoping it’s that high.) We can also get some detail about the external reset signals and the data input timing.
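Putting the protocol together, here’s a Python-level model of one frame – a start bit followed by 35 data bits, with the device latching and resetting itself after the 36th clock. (The auto-latch step is my reading of the timing diagram, not verified hardware behavior.)

```python
def clock_frame(bits):
    """Model of one frame into the display: a start bit ('1') followed
    by 35 data bits. After the 36th clock, the device latches the data
    into its output buffers and resets, ready for the next frame."""
    assert len(bits) == 36 and bits[0] == 1, "frame must lead with a start bit"
    data_bits = bits[1:]  # the 35 bits that end up driving the segments
    return int("".join(str(b) for b in data_bits), 2)

frame = [1] + [0] * 34 + [1]  # start bit, then 35 data bits
print(clock_frame(frame))     # → 1
```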

Remember, all this sleuthing is with a goal –  not of driving an EM32, but of creating a display controller which takes the place of an EM32 in a specific installation. Any details we can deduce from the datasheets will help us narrow down where we begin with our investigation of the controller itself.

The “applications” diagram gives us a few pointers – not all are useful to our goal, but they’re interesting nonetheless. As we guessed before, the LED1 through LED4 pins are open-collector drivers – but unlike our guess, we actually need to provide the +3 volts for that control from an external regulator, not from the VLED pin. And the typical current should be 2.5mA per LED, so these aren’t high-current drivers in any sense. We can also see that the OEM-4 module has an option for external brightness control via a 50kΩ potentiometer, but we don’t have the ability to access those pins on the EM32 unit.
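As a quick sanity check on those open-collector outputs, the series resistor for one external LED works out to a few hundred ohms. (The forward voltage below is an assumed value for a typical red indicator LED, not something from the datasheet.)

```python
# Ohm's law sizing for one open-collector external LED output.
v_supply = 3.0    # external 3 V rail, per the applications diagram
v_forward = 2.0   # assumed forward drop of a typical red indicator LED
i_led = 0.0025    # 2.5 mA typical per LED, per the datasheet
r_series = (v_supply - v_forward) / i_led
print(r_series)   # → 400.0 ohms; the nearest standard value would do
```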

There’s also a sneaky note at the bottom of the diagram that there is a ‘special version’ OEM-4 LED with a built-in 3V regulator and brightness control. I wonder which version we have?

At first blush the circuit diagram appears to tell us what we already know – there’s a shift-register LED driver inside this thing that’s taking clocked data in and driving LEDs on the downstream side. But there are actually two key things to note here – while I had assumed the VLED pin was only for the external LEDs, it’s actually the anode connection for all the segments of the display! This means that connecting it isn’t optional for driving external LEDs, it’s mandatory if we want the OEM-4 to work. Looking back at the block diagram from the EM32, we can understand the purpose of the built-in regulator shown there.

The built-in 3V regulator on the EM32.

The second key thing we learn from the circuit diagram is which bits control which segments. But it’s made even more clear in the final diagram from the OEM-4 datasheet: the serial data input sequence:

Now we don’t have to try to deduce the bit-order from what we think the data stream is displaying – we can build that data into our programming from the beginning. Thank goodness, since I’d never actually seen this display in action before I undertook the task of replacing it!


This is about as deep as the research rabbit-hole goes, it seems. We’ve found the datasheet for the EM32 module itself, the OEM-4 module inside it, and the PS035 inside that.

In the next post, we’ll start probing the signals coming from the controller, building a version of the display in software, and testing some theories about how the display operates.