## Announcing a new Quick Video: Gates

Here’s the thing. I started with NPN transistors because my intent is not to start with some sort of weird theoretical “see these diagrams, that’s why it works” nonsense.

I think one thing that is sorely missing from computer science is the hands-on part: the part where you actually touch something tangible and see it work. Sure, you can buy an Arduino and plug it into your USB port and download a program you got from the Internet–but is that the same thing as wiring up individual transistors and seeing basic logic gates working before your eyes?

So I’ve uploaded a new “Quick Video” which shows all five gates from the first “Introduction to Digital Computers” video, wired up on a breadboard. I’ve also included show notes with a complete circuit diagram for all five circuits, as well as Amazon links if you decide to try this yourself.

At the most basic level, computers are not abstract things. These circuits are tangible parts working on basic principles: transistors wired in series or in parallel, outputs attached to the collector(s) or emitter(s), the base supplied with power (on) or not (off).
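For those who like to see the logic spelled out, the series/parallel rule can be modeled in a few lines of code (logic only, of course–the video shows the actual electronics):

```cpp
// Logic-only models of the gates described above. With the output taken at
// the collector (and a pull-up resistor supplying power), a conducting path
// to ground pulls the output low: one NPN transistor inverts, two in series
// give NAND, and two in parallel give NOR.
static bool gateNOT(bool a)          { return !a; }
static bool gateNAND(bool a, bool b) { return !(a && b); }  // series: both bases on -> output low
static bool gateNOR(bool a, bool b)  { return !(a || b); }  // parallel: either base on -> output low
```

From these three you can build the rest: AND is NAND followed by NOT, and OR is NOR followed by NOT.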

Zeros and ones are not theoretical things; they represent something real: power off (zero) or power on (one).

Anyway, here’s the video itself.

Thanks for watching.

## First release video in the new series now released!

I made some minor tweaks and reshot the opening sequence. This series of videos will attempt, over the next few months, to illustrate the design of a basic 8-bit computer built from transistors on up.

The entire series can be found on YouTube, and is linked from the new Videos section of this blog.

As I mentioned in my last post, the goal here is to describe how computers work all the way down to the level of transistors. My goal with these videos is to show how, with enough resistors, transistors, and other discrete components, you could build your own working computer entirely from scratch.

Second, I see this as a foundational set of videos which launch into a second set on computer programming. While it’s easy for us to think of computer programming as just writing code, we never really stop to think about why we write code as a sequence of instructions. We never give much thought to “what’s under the hood,” or why, for example, the most common paradigm for programming is a sequence of steps rather than, say, a Prolog-like set of assertions which is evaluated by resolving the validity of the assertions in the list of statements.

(After all, in the past there have been many attempts at building analog computational devices that work more like Prolog: establishing a circuit or mechanical system which asserted a truism, then asking questions of the system in order to get answers.)

Edited April 22, 2018:

The first video contained a fatal mistake, so I’ve corrected the error and updated the links to the corrected video.

## What I’ve been doing with my time.

Okay, I confess I’ve left the gear cutting stuff on the back burner while I recover from a bug I picked up while in Mexico. During my spare time I’ve been working towards a series of videos which I hope to post on this site that discuss various topics in computer science.

The goal of this first series of videos is twofold. First, I want to describe how computers work–and I mean from the transistor level all the way up. I want you to feel that if you had enough transistors, resistors, some discrete components, a whole bunch of wire, and a lot of time and patience, you could build a computer completely from scratch.

Second, I’m trying to find my “voice.”

See, my entire point in starting this blog is to share what I know and to hopefully get others excited about computers, computer science, electronics, calculators (and that includes mechanical calculators; thus, the gear cutting stuff). And one avenue of teaching is the instructional video.

So by starting with a relatively complex topic, I hope to learn how to put together videos like this.

Practice makes perfect, right?

So here’s the video; any constructive comments are welcome.

## Administrative: the velocity of my posts.

I love computers.

I’ve always been fascinated by computers, by the things you can do with computers, and by making things in general: hardware, software, networks, servers, user interfaces. I also have a love of mechanical things: watches, clocks, orreries, mechanical adding machines: all the things we use and have used in the past to do math, visualize planets, watch the passing of time, or explore the world around us.

And I started The Hacking Den with the intent to share my fascination by creating videos and articles which walk someone step-by-step through making things.

It’ll take a while for me to find my voice.

When I started The Hacking Den I had a series of three articles I wanted to write immediately, and I’ve just finished two of them: a series of articles on 3D graphics, and a series on user interface design. The third–a simple game for the Arduboy–I plan to start working on in the next few weeks. And as always, everything will be on GitHub so you can play with the code yourself.

But beyond those articles I have a number of projects I want to work on for this web site. Unlike these three other articles, however, I don’t have this stuff quite on tap: I can’t just sit down, crank the code out and document the steps I followed to do what I did.

The Earth-Moon Orrery is one of those projects. I’m still learning how to cut gears, and I’m starting small. My eventual goal–one that may take a year or two–is to build a rather accurate Orrery representing the planets of the solar system.

Another project I’m working on for The Hacking Den is a series of videos which describe how a microprocessor works, from transistors to assembly language and all the steps in between.

I eventually want to do a series of videos teaching C, a series of videos covering building more complex projects, perhaps a few videos covering basic computer science topics (such as state machines or LR1 languages) and perhaps a series of articles or videos sharing some of the other things I’ve learned in my 30 years as a software developer.

But all of this will take time.

What I’m saying here is that up until now, my posting “velocity” has been pretty high, as I wrote the three series of posts I had “on tap.”

But moving forward, that velocity will start to slow–as I work on educational videos, as I work on physical hardware, as I work through new projects for this site.

Good educational material takes time to produce–and so far I’ve cranked out a bunch of stuff on a daily basis, which may be a bit telling as to the quality of the work I’ve provided. But I wanted a firm foundation of material with which to launch my new site, rather than an empty vanity site.

I also wanted to commit myself to this project moving forward: to promise myself to build those videos, to make the orrery (and document how I did it), to work through ways to explain many of the things I’ve learned over the years.

So hopefully when I’m done, you’ll love computers too.

## The Earth-Moon Orrery: another update.

So the biggest problem, apparently, when it comes to cutting gears, is just getting all the fiddly bits together to make your gear cutting system.

In my case, I’m starting with a Sherline mill, and I’m adding a CNC rotary table, a 90 degree table, and a 1 inch arbor to mount the gear cutters. The gear cutters have shown up; I bought cutters #2 through #7. #1 is for the tiniest gears, which I will probably never need to cut, and #8 is for gears with 135 teeth or more. If I decide to make a clock, well, then I’ll buy the other cutters as needed.

(The CNC rotary table is for the simple reason that, unlike folks making clocks, an orrery–especially one of my own design–has some rather odd-sized gears. A dividing table that can help cut a 29-tooth gear or a 52-tooth gear is probably not all that common. But for a CNC controller, 29 or 52 is just another input.)

Well, all the parts have arrived, I’ve mounted the Sherline mill, everything fits, everything appears to be in working order–and I’m ready to cut gears, just in time to be out of town for a week.

Which is just the way things work out, I guess.

But I’m all set up for my gear-cutting debut–and when I get back we’ll start cutting the gears for the Earth-Moon Orrery.

I promise I’ll start cutting gears soon–just as soon as life stops interfering. And when I do, I’ll walk through all the steps I took–including the missteps–and I’ll also include a list of the stuff I ordered, so if you want to reproduce what I’ve done, it’ll be easier than squinting at otherwise well-produced videos to try to divine the setup.

## User Interface Design Part 6: … and the rest.

Now that we’ve designed our interface by defining our visual language and its nouns and verbs, laid out the screens using that visual language, and built some of the basic building blocks–the code to draw our interface elements and the code to manage our screen–all that is left is to build the individual screens.

Rather than describe every single screen in our system, I’ll describe the building of one of the screens: the rest are built more or less the same way, and you can see how this all works in the source code.

When the user presses the fan button or presses one of the temperature buttons, we drop the user into the temperature setting page.

This page allows the user to set the fan, turn off the unit, and set the temperature.

Now recall that our user interface is segregated into three major blocks of code: the model, the view, and the controller:

From a code perspective, we manipulate the `GThermostat` object, the model component which is used to directly control the HVAC hardware. Our code also handles the drawing of our layout and setting up the hit locations which represent the areas where the user can tap our screen.

Constructing the page object

Our temperature page, then, is very simple. We need to extend the UI page code we wrote yesterday and initialize our page with the appropriate layout parameters. We then need to draw the contents when they need to be redrawn, and we need to handle tap events.

```class AdaTempPage: public AdaUIPage
{
public:
AdaTempPage();

virtual void    drawContents();

virtual void    handleEvent(uint8_t ix);
};```

Our constructor, likewise, is very simple, since we’re doing most of the work in AdaUIPage:

```AdaTempPage::AdaTempPage() : AdaUIPage(&ATemp)
{
}```

Literally all we’re doing is passing in a set of globals so our base class can know the title of our page, the label to use for the back button, the location of the rectangles that represent our buttons:

```static const AdaUIRect ATempRects[] PROGMEM = {
{ 117,  88,  40,  37 },       // Heat ++
{ 117, 126,  40,  37 },       // Heat --
{ 231,  88,  40,  37 },       // Cool ++
{ 231, 126,  40,  37 },       // Cool --
{  64, 195,  63,  37 },       // Auto
{ 148, 195,  63,  37 },       // On
{ 229, 195,  63,  37 }        // Off
};

static const AdaPage ATemp PROGMEM = {
string_settings, string_back, NULL, ATempRects, 7
};```

Where `string_settings` and `string_back` were declared in a separate “AdaStrings.h” header:

```/* AdaStrings.h */

extern const char string_settings[];
extern const char string_back[];```

and

```/* AdaStrings.cpp */

const char string_settings[] PROGMEM = "SETTINGS";
const char string_back[] PROGMEM = "\177DONE";```

Note: Because we reuse these strings, rather than use the F(“SETTINGS”) macro I’ve elected to move all the strings to a central file. This prevents the same string from being created multiple times, wasting precious program memory.

Drawing the page contents

We create two support routines to help draw our buttons. We put the button drawing and the fan light drawing code in separate routines in order to reduce flicker on the screen.

We could, when the user presses a button, call `invalidateContents`, a method we created previously to mark the content area as needing redrawing. However, this causes an unacceptable flashing of the screen. So instead, we move the code which draws the temperature area and the fan lights into these separate routines–that way we only erase and redraw the portion of the screen that needs updating, reducing the flickering on our screen.

```static void DrawHeatCool(uint16_t xoff, uint8_t temp)
{
char buffer[8];

GC.setFont(&Narrow75D);

FormatNumber(buffer,temp);
GC.drawButton(RECT(xoff,88,70,75),buffer,66);
}

static void DrawFan(uint8_t fan)
{
// (Abbreviated; the full version highlights the light matching 'fan')
GC.drawButton(RECT( 44,195,19,37));

GC.drawButton(RECT(128,195,19,37));

GC.drawButton(RECT(209,195,19,37));
}```

Notice in both cases we make extensive use of our new user interface code. We even use it to draw our temperature–even though the background of the “button” is black. It may be slightly faster to explicitly fill the rectangle with black and call the GC.print() method to draw our temperature–but that would cause other libraries to be loaded into memory.

And memory usage in our thermostat is tight. Which means sometimes we reuse what we have rather than link against what may be nicer.

Now that we have these support routines, drawing our buttons and controls is simple:

```void AdaTempPage::drawContents()
{
char buffer[8];

// Draw temperatures
DrawHeatCool( 43,GThermostat.heatSetting);
DrawHeatCool(157,GThermostat.coolSetting);

// Draw buttons
GC.setFont(&Narrow25D);
GC.drawButton(RECT(117,88,40,37), (const __FlashStringHelper *)string_plus,28,KCornerUL | KCornerUR,KCenterAlign);
GC.drawButton(RECT(117,126,40,37),(const __FlashStringHelper *)string_minus,28,KCornerLL | KCornerLR,KCenterAlign);

GC.drawButton(RECT(231,88,40,37), (const __FlashStringHelper *)string_plus,28,KCornerUL | KCornerUR,KCenterAlign);
GC.drawButton(RECT(231,126,40,37),(const __FlashStringHelper *)string_minus,28,KCornerLL | KCornerLR,KCenterAlign);

// Draw state buttons
GC.drawButton(RECT( 32,195,11,37),KCornerUL | KCornerLL);
GC.drawButton(RECT( 64,195,63,37),(const __FlashStringHelper *)string_auto,28);
GC.drawButton(RECT(148,195,60,37),(const __FlashStringHelper *)string_on,28);
GC.drawButton(RECT(229,195,60,37),(const __FlashStringHelper *)string_off,28,KCornerUR | KCornerLR);

DrawFan(GThermostat.fanSetting);
}```

All we do is draw our temperature, our four buttons (the “+” and “-” under our temperatures), and the four regions that define the fan at the bottom.

Handling Events

The cornerstone of our “control” code, the thing that translates user actions into changes in our model, is contained in our `handleEvent` method. We have seven areas the user can tap, and we handle each of those cases in our code. Rather than duplicate the entire method here–the whole thing is on GitHub–I’ll just talk about one of these cases.

```        case AEVENT_FIRSTSPOT+4:
GThermostat.fanSetting = ADAHVAC_FAN_AUTO;
DrawFan(GThermostat.fanSetting);
break;```

The constant “AEVENT_FIRSTSPOT+4” refers to the fifth item in the list:

```static const AdaUIRect ATempRects[] PROGMEM = {
{ 117,  88,  40,  37 },       // Heat ++
{ 117, 126,  40,  37 },       // Heat --
{ 231,  88,  40,  37 },       // Cool ++
{ 231, 126,  40,  37 },       // Cool --
{  64, 195,  63,  37 },       // Auto
{ 148, 195,  63,  37 },       // On
{ 229, 195,  63,  37 }        // Off
};```

And this is the area drawn by our drawContents code:

```void AdaTempPage::drawContents()
{
...
// Draw state buttons
GC.drawButton(RECT( 32,195,11,37),KCornerUL | KCornerLL);
GC.drawButton(RECT( 64,195,63,37),(const __FlashStringHelper *)string_auto,28);
GC.drawButton(RECT(148,195,60,37),(const __FlashStringHelper *)string_on,28);
GC.drawButton(RECT(229,195,60,37),(const __FlashStringHelper *)string_off,28,KCornerUR | KCornerLR);
...
}```

Now when our button is tapped, the tap location is detected, and we receive a call to `handleEvent` with the constant `AEVENT_FIRSTSPOT+4`.

And when we do, we want to set the fan to “AUTO”:

`            GThermostat.fanSetting = ADAHVAC_FAN_AUTO;`

Our thermostat code will then use this setting to make decisions in the future about turning on and off the fan as the temperature rises and falls. But because the model code is contained elsewhere, it’s not our problem. We simply tell the thermostat code what to do; it figures out how to do it.

Then we redraw our fan control lights to let the user know the setting was changed:

`            DrawFan(GThermostat.fanSetting);`

And that’s it. There is no step 3.

We do this for the rest of our event messages as well. One proviso is that we don’t allow the user to set a temperature below 50 or above 90, and we require the heat and cool settings to be at least 5 degrees apart.
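That proviso might look something like the following pair of helpers (a sketch with invented names; the code on GitHub may structure this differently):

```cpp
#include <stdint.h>

// Illustrative clamping helpers (names invented, not from the repository):
// keep both settings in the 50..90 range, and keep the cool setting at
// least 5 degrees above the heat setting.
static uint8_t clampHeat(int heat, uint8_t cool)
{
    if (heat < 50) heat = 50;
    if (heat > 90) heat = 90;
    if (heat > cool - 5) heat = cool - 5;   // stay 5 degrees below cool
    return (uint8_t)heat;
}

static uint8_t clampCool(int cool, uint8_t heat)
{
    if (cool < 50) cool = 50;
    if (cool > 90) cool = 90;
    if (cool < heat + 5) cool = heat + 5;   // stay 5 degrees above heat
    return (uint8_t)cool;
}
```

The `++` and `--` handlers would then run the adjusted setting through the appropriate helper before storing it back in the model.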

And that’s it.

That’s our thermostat code. Well, there are a bunch of other screens–but all of them more or less follow the same pattern as the code above: we draw the screen, we listen for events, we respond to those events by making the appropriate changes in our model code.

There may be a few thousand lines of page handling code–but they all do the same thing.

And that’s the beauty of good design: it simplifies your code. If all the controls look the same, you can use the same routine to draw them. If all your controls behave the same way, you can reuse the same code to handle their behavior.

This simplicity makes the thermostat quick for any user to understand and use. And while we may have started with the goal of an “LCARS”-like interface out of Star Trek, what we got is something pretty simple and nearly invisible. It’s an interface most users would not question if they encountered it.

The overall source kit is at GitHub if you want to test the code yourself.

Wrapping up.

In this six part series we discussed the importance of building a good user interface by carefully considering the visual language. We touched upon discovery–the ability of the user to discover how to use your interface by the use of “affordances”: using elements which behave consistently and which seem apparent to the user. We also touched upon consistency: making sure that when you design your visual language you stick with the design.

We briefly touched upon the components of your visual language: the “nouns” (the things you are manipulating), and the “verbs” (the actions you are taking with your nouns). We touched on the importance of visually separating “nouns” and “verbs” even in a simple interface like a thermostat.

And we spent most of our time putting this into practice: by first taking interface inspirations from a science fiction show to come up with our interface elements, then by putting together those elements in a consistent way to design the screens of our thermostat.

When putting together our code, we discussed the importance of consistency and how it reduces our work when putting together user interface drawing code: by using the same style of button everywhere we only have to write the code for drawing that button once.

We then touched upon the Model-View-Controller paradigm, and put together the “model” which reflected the “nouns” of our user interface and provided interfaces which allowed us to take action against our model–the “verbs” of our interface.

We built our screen control code which handles the common actions–commonality made possible by the consistency of our design. And we finally showed how to put together one of the screens–a process made extremely easy by having a good user interface design. We even touched briefly on times when our user interface guidelines had to be violated for simplicity–and how these exceptions should be rare and only done when necessary.

Hopefully you’ve learned something. Or at the very least, you can see some of the cool things you can do with your Arduino.

## User Interface Design Part 5: Common Code and Handling The Screen

In the last article I covered the “Model-View-Controller” pattern: the way we can think about building interfaces by separating our code into the “model”–the code that does stuff–the “view”–the code that draws stuff–and the “controller”–the code that translates user actions into changes in the model.

And we sketched the model code: the low level code which handles the thermostat, storing the schedule and tracking the time. Now let’s start putting it together by building common code for drawing the stuff on our screen.

One element of our user interface we haven’t really discussed yet is the entire screen itself.

Our user interface contains a number of display screens:

But we’ve given no consideration to how we will switch screens, how we will move back, or the “lifecycle” of each of our screens. Further, in our limited environment we’re not building a hierarchy of views, so the “view” to be controlled is essentially the bits and pieces of code that draw and manage our screen–and it’d be nice to have a way to manage this in a consistent fashion.

For our class we will make use of C++’s “inheritance”, and create a base class that can be overridden for all of our screens. This will allow us to put all the repetitive stuff in one place, so we can reduce the size of our app and reduce the amount of code we have to write.

So what should our screen do?

Switch between screens

Well, our current interface design has this notion of multiple screens and a “back” or “done” button which allows us to pop back to the previous screen. So let’s start by building a class that helps track this for us.

```class AdaUIPage
{
public:

/*
*  Global management
*/

static void     pushPage(AdaUIPage *page);
void            popPage();
};```

These simple methods track the list of screens we are showing, maintaining a “stack” of screens with the currently visible screen on top.

When we tap on a button–such as the fan button on our main screen–we load our new temperature setting screen “on top” of our main screen, effectively pushing our new screen on a stack:

When we tap the “Done” button, we can then “pop” our stack, leaving us the main screen.

The code for these methods is very simple. First, we need a global variable that represents the top of the stack, and each page needs a class field which points to the page beneath it on the stack:

```...
void            popPage();
protected:
/*
*  Linked list of visible pages.
*/

AdaUIPage       *next;              // The page beneath this one on the stack
static AdaUIPage *top;              // The currently visible page
...```

And our methods for pushing and popping a page are very simple:

```void AdaUIPage::pushPage(AdaUIPage *page)
{
page->next = top;
top = page;
}

void AdaUIPage::popPage()
{
if (next) {
top = next;
}
}```

Notice what happens. When we call `pushPage` we make that page the `top` page, and we save the former top page away as its `next` page. And when we pop a page, we simply set the `top` page to the page that was beneath it.
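If you want to play with this logic on its own, here’s a stand-alone sketch of the same intrusive stack (simplified: no drawing and no lifecycle calls, with the class renamed so it compiles by itself):

```cpp
#include <stddef.h>

// Minimal model of the intrusive page stack described above: each page
// points at the page beneath it, and a single static pointer tracks the
// top of the stack.
struct Page
{
    Page *next = NULL;

    static Page *top;

    static void pushPage(Page *page)
    {
        page->next = top;
        top = page;
    }

    void popPage()
    {
        if (next) top = next;   // only pop if something is beneath us
    }
};

Page *Page::top = NULL;
```

Push the main screen, push the settings screen on top of it, and popping the settings screen leaves the main screen on top again.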

Now when we show or hide the top page, we’d like a way to signal to the page that it should redraw itself. After all, thus far we’ve just manipulated a linked list.

There are two ways we could do this. Our first option is to simply call a ‘draw’ method to draw the page. But our page code may want to do some work before the page is shown. So a second option is to use an “event-driven environment.”

Handle Events

All modern user interfaces are event-driven. An “event-driven environment” is just a fancy way of saying that our code runs in a loop, looking for things to do. When it finds something that needs doing, it calls the piece of code that handles it.

And we’ve seen this before on other Arduino sketches.

All Arduino sketches start with a “setup” call which you can use to set up your sketch; the system then repeatedly calls the “loop” function. The idea is that in your “loop” function you write code similar to:

```void loop()
{
if (IsButtonPressed(1)) {
DoSomethingForButtonOne();
} else if (IsButtonPressed(2)) {
DoSomethingForButtonTwo();
} ...
}```

In other words, in your loop you look at all the things that need to be done, and if something triggers an event, you call the appropriate routine to handle that event.

Well, we’re going to do the same thing here. And for our screen, there are two primary events we want to track: if we need to redraw the screen, and if the user tapped on the screen.

So let’s add a new method, `processEvents`, which we can call from our Arduino sketch’s `loop` function. And while we’re at it, let’s also add code to track whether the screen needs to be redrawn. To do this we create a new field with a bit that is set if the whole screen needs drawing and (because it’ll be useful later) another bit if just the content area needs redrawing. We’ll also add two methods to set the bits indicating we need to redraw something:

```...
void            popPage();

static void     processEvents();

/*
*  Page management
*/

void            invalidate()
{
invalidFlags |= INVALIDATE_DRAW;
}
void            invalidateContents()
{
invalidFlags |= INVALIDATE_CONTENT;
}
protected:
/*
*  Linked list of visible pages.
*/

private:
void            processPageEvents();

uint8_t         invalidFlags;       // Invalid flag regions
...```

Our `processEvents` method can then check whether we even have a top page–and if we do, ask that page to handle its events.

```void AdaUIPage::processEvents()
{
if (top == NULL) return;
top->processPageEvents();
}```

And in our internal page method, we check whether the page needs to be redrawn and do the page drawing.

```/*  processPageEvents
*
*      Process events on my page
*/

void AdaUIPage::processPageEvents()
{
/*
*  Redraw if necessary
*/

if (invalidFlags) {
if (invalidFlags & INVALIDATE_DRAW) {
draw();                     // Redraw the entire page
} else if (invalidFlags & INVALIDATE_CONTENT) {
drawContents();             // Clear and redraw just the content area
}
invalidFlags = 0;
}
}```

All this implies we also need a method to draw, a method to draw just the contents area, and a way to find the content area of our screen.

Handling when a screen becomes visible and when it disappears

Now when we call our methods to show and hide a screen, we need to do two things.

First, we need to mark the top screen as needing to be redrawn.

Second, we need to let the screen that is appearing know that it is appearing, and the screen that is disappearing know that it is disappearing. This is because the appearing screen may want to do some setup (such as getting the current time or temperature), and the disappearing screen may want to save its results.

So let’s extend our `pushPage` and `popPage` methods to handle all of this.

First, let’s add two more methods that can be called when the page may appear and when the page may disappear:

```...
void            invalidateContents()
{
invalidFlags |= INVALIDATE_CONTENT;
}

virtual void    viewWillAppear();
virtual void    viewWillDisappear();
protected:
void            processPageEvents();
...```

The default implementations of these methods do nothing; they’re there so that when we create a page we can be notified when our page appears and when it disappears.

Now let’s update our push and pop methods:

```void AdaUIPage::pushPage(AdaUIPage *page)
{
if (top) top->viewWillDisappear();
if (page) page->viewWillAppear();

page->next = top;
top = page;

top->invalidFlags = 0xFF;    // Force everything to redraw
}

void AdaUIPage::popPage()
{
if (next) {
viewWillDisappear();
next->viewWillAppear();

top = next;
top->invalidFlags = 0xFF; // Force everything to redraw
}
}```

Drawing our screen

Notice our original design. We can have one of two screens: one with a list of buttons along the left–

–and one without–

Since all of our screens are laid out like this, it’d be nice to handle all of the drawing in this class, so that our child classes can focus on drawing just the title area and the content area.

That is what our `AdaUIPage::draw()` method does.

I won’t reproduce all of the code here; it’s rather long. But it does make extensive use of our AdaUI class to draw the title bar and the side bar.

To draw the titles we need to initialize our page with some information–such as the title of our page, the name of the back button (if we have one), the list of button names, and a list of the locations of the buttons that the user can tap on. (We’ll use that list below.)

Our class’s constructor–which our screens invoke from their own constructors–looks like:

```/*  AdaUIPage::AdaUIPage
*
*      Construction
*/

AdaUIPage::AdaUIPage(const AdaPage *p)
{
page = p;
invalidFlags = 0xFF;        // Set all flags
}```

The `page` variable is a new field we add to our class, and of course we mark the page as invalid so it will be redrawn.

And the contents of our page setup structure looks like:

```struct AdaPage
{
const char *title;          // title of page (or NULL if drawing by hand)
const char *back;           // back string (or NULL if none)
const char **list;          // List of five left buttons or NULL
const AdaUIRect *hitSpots;  // Hit detection spots or NULL
uint8_t    hitCount;        // # of hit detection spots
};```

Everything in the AdaPage object is stored in PROGMEM. And a typical page may declare a global that sets these values, so our page code knows what to draw for us.

Our page drawing then looks like this–in pseudo-code to keep it brief:

1. Erase the screen contents to black.
2. Draw the title of our page using the title if given, drawTitle() if not.
3. Draw our back button if we have one
4. Do we have any left buttons?
• True: Draw the inverted L with our left buttons.
• False: Draw a blank title bar.
5. Call drawContents() so our class knows to draw its contents

Handling taps

Our Adafruit TFT Touch Shield contains a nice capacitive touch panel, which can be accessed using the Adafruit FT6206 Library. We use this library to see if the user has tapped on the screen, determine where on the screen the user has tapped, and call a method depending on what was tapped. By putting all this tapping code in our main class, our screens only need to do something like this to handle tap events:

```void MyClass::handleEvent(uint8_t ix)
{
switch (ix) {
case MyFirstButton:
doFirstButtonEvent();
break;
case MySecondButton:
doSecondButtonEvent();
break;
....
}
}```

Now the nice thing about the `AdaPage` structure we passed in is that we now can know if we have a back button, what side buttons we have, and the location of the buttons we’re drawing on the screen.
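The hit test itself can be sketched as a simple point-in-rectangle scan (an illustration only; I’m assuming the `AdaUIRect` fields are x, y, w, h to match the initializers shown earlier, and I’m ignoring PROGMEM access for clarity):

```cpp
#include <stdint.h>

// A sketch of the hit test: given the tap point, return the index of the
// first rectangle containing it, or -1 if the tap missed all of them.
// (Field names are assumed; the real structure lives in PROGMEM.)
struct AdaUIRect
{
    int16_t x, y, w, h;
};

static int findHitSpot(const AdaUIRect *spots, uint8_t count, int16_t x, int16_t y)
{
    for (uint8_t i = 0; i < count; ++i) {
        if ((x >= spots[i].x) && (x < spots[i].x + spots[i].w) &&
            (y >= spots[i].y) && (y < spots[i].y + spots[i].h)) {
            return i;       // caller reports this as the button index
        }
    }
    return -1;
}
```

With the rectangle list passed in through the `AdaPage` structure, the base class can do this scan once and call `handleEvent()` with the matching index.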

Note: Normally if we had more program memory we’d define a view class which can then determine the location of itself, and draw itself.

But for us, we need to deal with static text titles, dynamic button titles, and all sorts of other stuff–and 32K of program memory combined with 2K of RAM is just not enough space. So we “blur” the line between views and control code by separating the tapping of views from the displaying of views. This may make life a little harder on us–if the buttons our content code draws do not align with the rectangles we pass to our constructor, things can get sort of confusing. But it does save us space on a small form factor device.

Our tap code needs to track whether the user is currently touching the screen; we’re only interested when the user taps, not when he drags. (If we don’t, we’ll send the same event over and over again just because the user didn’t lift his finger fast enough.) We do this by using a `lastDown` variable; if it is set, the user is currently touching the screen.

So we add code to our `processPageEvents` method to test to see if the touch screen has been tapped. In pseudo code our processPageEvents code does the following:

1. Is the touch screen being touched while lastDown is false?
• True:
  1. Set lastDown to true.
  2. Get where we were tapped.
  3. Was the back button tapped?
  • True: Call popPage() and exit.
  4. Did we tap on one of the left buttons?
  • True: Call handleEvent() with the index of the left button and exit.
  5. Did we tap on one of the other buttons?
  • True: Call handleEvent() with the index of the button and exit.
  6. If we get here, nothing was tapped on. Call handleTap() to give the code above a chance to handle being tapped on.
2. If the screen is not being touched, set lastDown to false.
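The lastDown bookkeeping above can be sketched as a small function that turns raw touch state into single tap events. This is an illustration, not the actual `processPageEvents` code; `tapStarted` and its global are invented names:

```cpp
#include <cstdint>

// Illustrative sketch of the lastDown logic: we report a tap only on the
// transition from "not touching" to "touching," so holding or dragging a
// finger doesn't fire the same event over and over.
static bool GLastDown = false;

bool tapStarted(bool touching)
{
    if (touching && !GLastDown) {
        GLastDown = true;       // finger just came down: this is the tap
        return true;
    }
    if (!touching) {
        GLastDown = false;      // finger lifted: re-arm for the next tap
    }
    return false;               // held down or idle: ignore
}
```

Feeding this a press-hold-release sequence fires exactly one event per touch, no matter how long the finger stays down.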

Summary

So in the above code we’ve created a base class which handles most of our common event handling and screen drawing stuff. This means a lot less work as we create our individual screens on our Thermostat.

And note that we were able to do all of this–creating common code for handling our screens–by having a design language that made things consistent.

That’s because consistency makes things easier to implement: we’ve reduced the problem of a big, empty screen by defining exactly how our screens should work–and doing so in a consistent way allows us to reuse the same code and the same basic drawing algorithms over and over again.

And when we have a list of consistent screens, we then become discoverable: the user only has to learn one set of actions and learn one consistent visual language to use the thermostat.

Of course from this point, the rest of our thermostat is simply an exercise in using our two primary tools–`AdaUI`, which draws our visual language, and `AdaUIPage`, which handles the common behavior of our pages–to create our user interface and hook it up to our model.

For next time.

Meanwhile, all of the code above is on GitHub, and the finished product can be downloaded and installed on your own Adafruit Metro/TFT touch screen.

## User Interface Design Part 4: The Model-View-Controller paradigm

Thus far we’ve talked about designing a user interface: the visual language that tells users what they can do, the elements of the language (nouns and verbs), the way these things contribute to consistency and discoverability and how they can be used to create a user interface that seems… inevitable. Simple. Invisible.

Where this attention to detail shines is when it comes to building our interface.

But before we can do this, let’s talk a little bit about a common way used to organize interfaces, and discuss the non-interface elements of our code.

The single most commonly used design pattern in application development today is Model-View-Controller. In fact, it has been called “the seminal insight of the whole field of graphical user interfaces.” Originally invented at Xerox PARC in the 1970s, it was first used to help build Smalltalk-80 user interface applications, and it was first described in the paper A Cookbook for Using the Model-View-Controller User Interface Paradigm in Smalltalk-80 by Glenn Krasner and Stephen Pope.

This paradigm is so important that nearly every modern application API (from iOS to Android to MacOS to Microsoft Windows) either explicitly provides interfaces that build on this model, or implicitly provides a way by which programmers can use this model.

The key idea is that we can think of our programs as being made up of three major parts:

At the bottom is our “Model.” This is the “domain-specific software” which implements our application–and is the collection of code which does stuff like load a file, or provide in-memory editing of a text file, or (in our case) connects to the physical hardware that controls an HVAC system or which provides the time.

Our user interface is then built on top of this model code.

Our user interface roughly comprises two parts. The first part is the “views”; these deal with everything graphical. They grab data from the model and display the contents of the screen–and they can contain “subviews” and be contained in “superviews.”

The second part is the “control” code. Controllers contain the logic which coordinates our views; they deal with user interface interactions and translate those actions into meaningful action. (So, for example, a view may represent a button, but it is the control code which determines what pressing that button “means” in terms of updating the interface and updating the model.)

For our Arduino application we don’t fully implement “views,” since some of the overhead may not quite fit in our small memory footprint. But they can help us segregate our code and help organize our thinking about the code, by thinking of the graphical parts as being separate from the control code and from our model code.
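The three-way split can be illustrated with a toy example drawn from our thermostat domain. The names here are invented for illustration and are not the article’s actual classes:

```cpp
#include <cstdint>
#include <string>

// Model: the "noun" being controlled. Knows nothing about the screen.
struct Model {
    uint8_t heatSetting = 68;
};

// View: draws the model; holds no business logic of its own.
struct View {
    std::string render(const Model &m) {
        return "Heat to " + std::to_string(m.heatSetting);
    }
};

// Controller: translates user actions ("taps") into meaning.
struct Controller {
    Model &model;
    void onPlusTapped()  { model.heatSetting++; }
    void onMinusTapped() { model.heatSetting--; }
};
```

Notice the dependency direction: the view and controller both know about the model, but the model knows nothing about either–which is exactly what lets us test the model separately.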

And notice one nice property about our visual language–about our thinking of a visual language in terms of the ‘nouns’ and ‘verbs’ and the consistent use of visual designs. All of this maps very nicely on our Model-View-Controller paradigm.

Our “Model” is the concrete implementation of our “nouns”: the thermostat settings. The schedule. The current time.

Our “Views” represents our visual language: how we show the user our “nouns”, our “verbs”, how we represent the actions a user can take.

And our “Control” code represents how our visual language works: how a sequence of taps on a bunch of icons translates into the user “changing the time” or “turning down the heater.”

Let’s give a concrete example of one of these “nouns”; the actual thermostat control code which determines if we need to turn on or off the heater, air conditioner or fan.

Our `AdaThermostat`, declared as a single `GThermostat` object, defines the actual code for setting the temperature, for getting the current temperature, and for turning on and off the three control lines that control our HVAC unit.

```class AdaThermostat
{
public:
    void        periodicUpdate();

    uint8_t     heatSetting;        /* Target heat to temperature */
    uint8_t     coolSetting;        /* Target cool to temperature */
    uint8_t     curTemperature;     /* Current interior temperature */
    uint8_t     lastSet;            /* Last schedule used or 0xFF */

    bool        heat;               /* True if heater runs */
    bool        cool;               /* True if aircon runs */
    bool        fan;                /* True if fan runs */
};```

Ultimately, when you change the thermostat setting–say, to heat the room to 70°–at the very core of our software, this sets `GThermostat.heatSetting` to 70.

Our class has only one entry point, `periodicUpdate`, which must be called periodically in order to run our logic: to periodically read the temperature sensor and set the `curTemperature` variable, and to decide when to turn on the heater and the air conditioner.
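A minimal sketch of what that decision logic might look like follows. The field names mirror `AdaThermostat`, but the one-degree dead band and the free-standing `updateRelays` helper are assumptions for illustration, not the project’s actual code:

```cpp
#include <cstdint>

// Hypothetical thermostat decision step. A 1-degree hysteresis band keeps
// the relays from chattering as the temperature hovers near a set point.
struct Thermostat {
    uint8_t heatSetting = 68;
    uint8_t coolSetting = 76;
    uint8_t curTemperature = 72;
    bool heat = false, cool = false;
};

void updateRelays(Thermostat &t)
{
    // Heat when we fall a degree below the target; stop once we reach it.
    if (t.curTemperature + 1 < t.heatSetting)   t.heat = true;
    else if (t.curTemperature >= t.heatSetting) t.heat = false;

    // Cool when we rise a degree above the target; stop once we fall back.
    if (t.curTemperature > t.coolSetting + 1)   t.cool = true;
    else if (t.curTemperature <= t.coolSetting) t.cool = false;
}
```

The in-between region (neither branch fires) is what keeps the last relay state latched–the essence of hysteresis.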

All of our model code looks like this: a loose collection of classes which control the hardware on our thermostat.

For example, our model includes code for getting and setting the time in `AdaTime.h`:

```extern void AdaTimeInitialize();

extern void AdaSetTime(uint32_t t);     // Time (seconds elapsed since 1/1/2017)
extern uint32_t AdaGetTime();           // Time (seconds elapsed since 1/1/2017)
extern uint32_t AdaGetElapsedTime();    // Time (seconds elapsed since power on)```

This uses the TIMER2 hardware module on the ATmega 328 CPU, setting the timer to create an interrupt every 1/125th of a second; the interrupt itself counts until one full second has elapsed before updating two global variables–one with the current time (measured as seconds from January 1, 2017), and the other with the number of seconds since the device was turned on.
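The tick-counting scheme can be sketched stand-alone. In the real code this body would live in the TIMER2 interrupt handler and the counters would be `volatile`; the names here are illustrative:

```cpp
#include <cstdint>

uint32_t GTime = 0;         // seconds since 1/1/2017 (set via AdaSetTime)
uint32_t GElapsed = 0;      // seconds since power-on
static uint8_t GTicks = 0;  // interrupt ticks within the current second

// Called at 125 Hz; only every 125th call advances the second counters.
void timerTick()
{
    if (++GTicks >= 125) {
        GTicks = 0;
        GTime++;
        GElapsed++;
    }
}
```

Counting ticks in the ISR and dividing down to seconds keeps the per-interrupt work tiny, which matters at 125 interrupts per second on an 8-bit CPU.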

Our code also provides support routines for converting the time to a Gregorian calendar date and for quickly finding the day of the week. This allows us to determine quickly which day of the week it is when we update the thermostat according to the schedule.
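The day-of-week computation in particular falls out of the chosen epoch: January 1, 2017 was a Sunday, so no Gregorian conversion is needed at all. A sketch (the function name and the 0 = Sunday convention are assumptions):

```cpp
#include <cstdint>

// Fast day-of-week from the article's epoch (seconds since 1/1/2017).
// Because 1/1/2017 was a Sunday, a divide and a modulo suffice.
uint8_t dayOfWeek(uint32_t secondsSince2017)
{
    return (uint8_t)((secondsSince2017 / 86400UL) % 7);  // 0 = Sunday
}
```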

We also provide a class for getting and setting the thermostat’s schedule, which is stored in EEPROM on the ATmega 328.

```class AdaSchedule
{
public:
    void        periodicUpdate();

    void        setCurSchedule(uint8_t value);
    uint8_t     getCurSchedule();

    void        setSchedule(uint8_t schedule, uint8_t dow, const AdaScheduleDay &item);
};```

This allows us to get the schedule for a single day, which is an array of four temperature settings and four times:

```struct AdaScheduleItem
{
    uint8_t hour;           /* 0-23 = hour of day, 0xFF = skip */
    uint8_t minute;         /* 0-59 = minute */
    uint8_t heat;           /* temperature to heat to */
    uint8_t cool;           /* temperature to cool to */
};

struct AdaScheduleDay
{
    struct AdaScheduleItem setting[4];  // 4 settings per day
};```
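Because each day packs into a fixed-size record, locating a day within EEPROM can be simple pointer arithmetic. The layout below (schedules stored back-to-back, seven 16-byte day records each) is an assumption for illustration–the real `AdaSchedule` code may pack things differently:

```cpp
#include <cstddef>
#include <cstdint>

// Structures mirroring the model; each day is 4 x 4 = 16 bytes.
struct AdaScheduleItem { uint8_t hour, minute, heat, cool; };
struct AdaScheduleDay  { AdaScheduleItem setting[4]; };

// Hypothetical EEPROM offset of a given day's record: schedules are
// laid out consecutively, seven days per schedule.
constexpr size_t eepromOffset(uint8_t schedule, uint8_t dow)
{
    return (schedule * 7 + dow) * sizeof(AdaScheduleDay);
}
```

Five schedules of seven days at 16 bytes each would occupy 560 bytes, comfortably inside the ATmega 328’s 1K of EEPROM.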

Our user interface code for running the main display of our thermostat then makes use of these model classes to get the current state of the system.

For example, in the part of our home page which displays the temperature buttons, our code is:

```FormatNumber(buffer,GThermostat.heatSetting);
GC.drawButton(RECT(160,200,40,37),buffer,28,KCornerUL | KCornerLL,KCenterAlign);
FormatNumber(buffer,GThermostat.coolSetting);
GC.drawButton(RECT(201,200,40,37),buffer,28,KCornerUR | KCornerLR,KCenterAlign);```

This will read and display the temperature on our device.

By using the Model-View-Controller paradigm and separating the parts that control our hardware from the parts that control our display, we make creating our user interface code that much easier. It also means we can test the parts of our code that control our hardware separately from the rest of our application.

The complete source code, which strips out most of our user interface software and illustrates the models of our thermostat software, is checked into GitHub and is available for download.

Next time we’ll discuss the code we use for managing screens and the controller code which draws our home screen.

## User Interface Design Part 3: Sketching the rest of our interface.

We have a design language of sorts. We have some simple code. We have an idea of the functions we wish to implement for our Arduino-based thermostat.

Now let’s use that design language and lay out the rest of our screens.

Full disclosure.

For any complex UI–and let’s be honest, this is a bit of a complex UI (especially for an Arduino Metro with 32K of program space, 2K of RAM and 1K of EEPROM)–there will always be a bit of give-and-take. You will run out of memory and need to make some design compromises. You will realize certain designs, once implemented, simply don’t work. You’ll discover better ways to organize your code to make it all fit.

The designs here are not the first pass at a design, but reflect the results. Some screens (for example, the original screen I designed for setting the time) just seemed clunky; others (like the schedule editing screen) required several goes before I came up with something I was happy with. But rather than discuss the different preliminary designs, we’ll discuss the final results, and discuss how they fit in the design language developed in a prior post.

The final implementation of the code is done and checked in to GitHub. And if you want to actually use this with real thermostat hardware, I’ve included some notes in GitHub on how you may wish to do this.

The main screen.

Our main screen follows the model we described in our last post. The only difference is that we draw icons where the back button would go: icons which indicate if the heater is running, if the air conditioner is running or if the fan is running. This is done in part for debugging purposes.

Our design language specifies that this area is reserved for the back button–and normally we would leave this area blank if possible. However, the icons we’re displaying are so visually distinct from the back button–and many mobile apps (from which we borrow the back button concept) have also displayed non-back items in this area–that it seems okay in this case to violate our guidelines.

Remember: they’re guidelines, in order to aid in discoverability and consistency. And as long as we don’t violate them (by, for example, displaying an arrow and text that a user could reasonably assume means “go back”), it’s okay in this case to vary from our guidelines.

Temperature Settings Page

If the user taps the “Fan” button, or if they tap one of the temperature range buttons, we drop into the temperature/fan control screen.

Our temperature settings page has no ‘nouns’; the things we’re controlling are pretty direct: the temperature we heat to, the temperature we cool to, and whether the fan is on, running automatically, or the system is turned off entirely.

We use our grouping visual language (the rounded rectangle markers) to indicate to the user what can be tapped, and what things are related. This screen should be pretty easy for the user to quickly figure out how to change the temperature settings and how to turn the system on or off.

Settings

The two settings we want to control on our system are the current time and the current date. This screen uses the inverted-L because we have two nouns we can pick from: “Time” and “Date.” We also display the current date and time to the user.

We display the current date and time next to the date/time buttons in order to reinforce to the user that the adjacent buttons are setting the date and the time.

Set Time Page

This is the page we use to set the time.

We provide the user a keypad with which to set the time. In order to give a visual indication to the user that the keypad is all part of the same thing, we use rounded rectangles to mark the corners of the keypad.

Set Date Page

The set date page displays a date picker calendar, which was laid out in code rather than with a drawing tool. The reason is that the grid layout of the calendar varies depending on the number of weeks in a month.
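The layout math behind that variation is small. A sketch of how the number of grid rows could be computed–the function name is illustrative, not the project’s actual API:

```cpp
#include <cstdint>

// A month needs enough rows to hold its leading blank cells (the weekday
// the month starts on) plus its days, seven cells per row.
uint8_t calendarRows(uint8_t firstWeekday /* 0 = Sunday */, uint8_t daysInMonth)
{
    return (firstWeekday + daysInMonth + 6) / 7;   // ceiling division
}
```

A 31-day month starting on Sunday fits in five rows, but the same month starting on Saturday spills into six–hence the need to compute the grid rather than draw it statically.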

The calendar widget is a familiar one to most users, and we use a different background and foreground color in order to indicate the current date.

Notice that like the set time screen, we don’t use an inverted-L because we have no additional nouns to pick from: the user is directly affecting the current date or the current time.

Schedule

Our original goal for our thermostat was to create a programmable thermostat. By this we mean that we can set a series of times which the thermostat automatically adjusts itself. For example, we can set a time the thermostat turns the heating temperature down and the cooling temperature up, so the HVAC doesn’t work as hard while the owner is away from home.

We model our system by allowing the user to set up to five separate schedules: one for Spring, one for Summer, one for Fall and one for Winter, with an Energy Saver schedule. Each schedule then contains a separate setting depending on the day of the week–and for each day, the user can set up to four times when the thermostat changes the heating and cooling temperature.

The user can pick between the five schedules from the schedule picker screen:

Our main schedule page allows the user to pick the schedule the thermostat is following. We also place a button in the lower left to allow the user to edit the schedules.

Note that this violates our notion about putting only nouns in the left column. The problem we have, however, is that a noun here would seem redundant: “Schedule” is the object we’re altering, but that is the screen we’re already on.

This is another time when we break our design guidelines (and remember: they’re guidelines) in order to provide better clarity to the user. Notice that this brings us up to two times we’ve violated our guidelines–we don’t violate those guidelines lightly, and only do it if it promotes clarity.

Schedule Editor

Our schedule editor allows the user to actually edit the schedule itself. This is a fairly complex and busy screen.

Sadly, like the old joke goes:

A user interface is like a joke. If you have to explain it, it’s bad.

But we can alleviate some of the complexity of this screen by the use of rounded corners.

In this case we use rounded corners to group the setting rows and the day of the week. We use the left bar to pick the specific schedule we are editing. And we provide some verbs–operations we can take on the currently selected day: we can clear that day, copy that day, or paste a previously copied day. The latter two operations allow you to quickly set a group of days to use the same settings.
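The copy and paste verbs need nothing more than a single static clipboard holding one day’s settings. A sketch, with the clipboard names invented for illustration:

```cpp
#include <cstdint>

// Structures mirroring the model from the previous post.
struct AdaScheduleItem { uint8_t hour, minute, heat, cool; };
struct AdaScheduleDay  { AdaScheduleItem setting[4]; };

static AdaScheduleDay GClipboard;        // one copied day
static bool GHasClipboard = false;       // has anything been copied yet?

void copyDay(const AdaScheduleDay &day)
{
    GClipboard = day;                    // whole-struct copy: all four settings
    GHasClipboard = true;
}

bool pasteDay(AdaScheduleDay &day)
{
    if (!GHasClipboard) return false;    // nothing to paste
    day = GClipboard;
    return true;
}
```

Since a day is only 16 bytes, holding the clipboard in RAM is cheap even within the 2K budget.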

Edit Schedule Item

If you select a single date/time row, you are brought to a new screen which allows you to edit the temperature and the time for a given schedule row:

This screen reuses the date selection elements, but adds two additional control groups: one to set the heating temperature, and another to set the cooling temperature.

Like the other screens we use rounded rectangles to group functionally similar items. We also get rid of the inverted L since we have no other nouns: we are directly manipulating three objects and doing so in an obvious manner.

This is a relatively complex UI, and as I noted above, the implementation is complete and checked into GitHub.

And notice what we’ve done.

In all cases we’ve defined what it is we’re manipulating: the current time, the current temperature–and we’ve carefully defined the nouns and the verbs of our user interface: the noun (such as “the heater temperature setting”), and the verb (“increase the temperature”).

We’ve placed nouns on the left bar when there are multiple nouns to pick from, so the user can select the noun he is interested in manipulating. The content area of the screen then provides the verbs–the controls–by which the user can manipulate those nouns.

We’ve consistently used the inverted-L screen to indicate when there are a group of nouns the user can operate on. We’ve also consistently used a page-navigation scheme to switch between screens as the user switches between items he is interested in manipulating. And our verbs are always placed in rectangular or rounded-rectangle shapes, where the rounding is used to help define groups of verbs. (For example, the “+”/”-” buttons in the Edit Schedule Item screen.)

And in the process we’ve designed the UI for a programmable thermostat that should be relatively easy for a user to use. Yes, the schedule programming screen is cluttered–but the use of grouping should make explaining how to use the screen in a user’s manual relatively trivial.

This process: defining our nouns, defining our verbs, defining the user interface elements and their appearance, and consistently applying these rules as we design our screens, results in a user interface that almost seems inevitable.

It wasn’t; there were a thousand other ways we could have gone. But this use of a design language with well defined nouns, verbs and actions results in an interface that seems simple–almost troublingly simple.

But that is what we want: a user interface that is so discoverable, so consistent, that it seems… invisible.

The next few posts will cover the implementation–the steps used in putting together the code.

## User Interface Design Part 2: Sketching our Interface.

Last time we discussed the theory of user interface design. The idea that user interfaces form a sort of language, complete with nouns and verbs and modifiers. That user interfaces need to be discoverable and consistent, and that part of discoverability means defining affordances: things the user can see and think “oh, that must be a button I can press” or “oh, that must be an action I can take.”

We even started sketching a potential interface based on the visual language behind LCARS, which groups buttons using rounded rectangles and provides an inverted L.

Designing a Thermostat.

With this article we’ll use the visual language we started sketching last time to put together a display that could potentially be used to control a thermostat. I’m not going to actually build all the firmware for a thermostat–though you could conceivably expand the code here to do exactly that. (If it all goes haywire, however, don’t blame me for blowing up your HVAC.)

First, let’s review our overall screen designs. We have two fundamental screen layouts: one that can be used to select the nouns or groups of nouns we’re operating on (such as “controls”, “schedule”, etc.), and one that provides a full-width screen.

Each screen provides a title area, and an optional back button which we would use to return back to the top level screen.

Now let’s consider for a moment the features we would want on a thermostat.

• We want to display the temperature.
• Obviously we want to set the temperature. We want to set the temperature at which we turn on the heater, and when we want to turn on the air conditioner. We also want to be able to turn the fan on or off.
• We want our thermostat to be programmable. That means we want to set the schedule.
• We want to allow the user to switch between schedules for the winter, summer, and an eco-friendly setting, and to show the user what schedule we’re following.
• We want to set the time, and other miscellaneous settings, such as screen blanking. (We won’t implement screen blanking as that requires a hardware modification of the Adafruit display. But we will include the UI elements for doing that.)

Our main screen should display the temperature, of course, as well as provide us a way to set the target temperatures, and to set the other items in our system: the fan, the program schedule, the settings.

With this list of ideas we can start sketching the outline of our page.

First, consider our ‘nouns.’ From the list above we have the fan, the schedule and the settings. We also have the target temperatures. It’d be nice to display a curved dial–since traditional thermostats used to be controlled by a circular dial–showing the target temperatures as well as the current temperature and our current schedule.

One potential design which accomplishes this is below. Note that our “nouns” are all on the inverted L bar. We display the time in the upper-left in the area we would use for our screen title. And we create a circular piece of art in the center where we show the temperature as well as the heating and cooling targets.

The design–based on the rules we devised in the last post–uses color to show the things we can click on, but also uses shape as well. The design is discoverable–the four things the user can do are clearly labeled on the left. We separate the four items by moving “Settings” to the very bottom–this causes it to stand alone as something separate from the other things we may want to do. And we have a circular piece of artwork which clearly marks the heating and cooling temperatures.

Let’s make one minor modification.

It’s not entirely clear that the “TEMP” button allows us to set our temperatures. We can instead remove that button from the side, and move it into the extra space we have on our screen–with the ‘noun’ (the HVAC target temperatures) being implicit based on the location on the screen:

In this design we take advantage of our “grouped buttons” to move the HVAC temperature settings onto the display content area underneath the current temperature. The clarity of this is still there: we continue to use blue–but we use a dark gray background for our buttons so as not to distract too much from the temperature, while preserving our button affordance.

This is the design we will build towards.

Note: Notice there are a thousand different ways we can design our screen. But because we now have a design language we can use, we’ve effectively constrained ourselves: we’ve constrained ourselves to the shape of buttons, to the layout of screens. And those constraints allow us to have some consistency and to make our screens discoverable.

And ultimately easy to use while being cool.

Now that we’ve sketched the design of our screen, let’s build some code.

And this is where our design language starts to shine–because ultimately, at the bottom of the stack, the core of our UI is comprised of two basic symbols: the inverted L shape, and the button.

This is important, because it means all we have to do is build code for our inverted L, and code for our buttons–and the rest is entirely reusable over and over again.

It is important to resist the urge to break the monotony of our symbol set. That is, it’s important not to think “well, hell; I only have an inverted L and a button with rounded corners–why not create blah blah blah…”

Do not do this. That can lead to confusion–and ultimately frustration on the part of your users. Once you’ve designed your visual language, stick to it. This will help with discoverability and with consistency.

And it will streamline your development process.

Our code extends the Adafruit_ILI9341 class, but we will write our changes to only use the Adafruit_GFX library. We do this in order to add our symbol set to our drawing system, and because we want to make use of some internal state while drawing our objects.

The end result looks like this:

```class AdaUI: public Adafruit_ILI9341
{
public:
/*
*  Expose our ILI9314 constructors by redirecting to them.
*/

AdaUI(int8_t _CS, int8_t _DC, int8_t _MOSI, int8_t _SCLK, int8_t _RST = -1, int8_t _MISO = -1) : Adafruit_ILI9341(_CS,_DC,_MOSI,_SCLK,_RST,_MISO)
{
}

/*
*  Our custom UI elements. We only have two: the rounded separator
*  bar at top, and the rounded rect with right-aligned text.
*/

/*  drawTopBar
*
*      Draw the curved element in the area provided. This takes five
*  parameters: the top position, the left position and width, the
*  width of the side bar, and the orientation of the curve
*  (upper-left, lower-left, upper-right, lower-right).
*
*      If called without parameters, draws our curve at the default
*  inverted-L location
*/

void drawTopBar(int16_t top = 32, AdaUIBarOrientation orient = KBarOrientLL,
int16_t left = 0, int16_t width = 0, int16_t lbarwidth = 80);

/*  drawButton
*
*      The only other custom element in our custom UI is our
*  button/rectangle area. This is at a given rectangular area, with
*  text that can be left, center or right-aligned, and with
*  rounded corners. The radius is fixed.
*/

void drawButton(int16_t x, int16_t y, int16_t w, int16_t h,
const __FlashStringHelper *str, int16_t baseline,
uint8_t corners = 0, uint8_t align = 0); /* corner flags & alignment; trailing parameter types reconstructed */

void drawButton(int16_t x, int16_t y, int16_t w, int16_t h,
const char *str, int16_t baseline,
uint8_t corners = 0, uint8_t align = 0);
};```

We have our two basic symbols: `drawTopBar` to draw the top bar, with default settings that build an inverted L shape. This builds our basic inverted L symbol, and allows us to flip it around in one of four different orientations as well as draw the inverted L with any width and side width.