User Interface Design Part 6: … and the rest.

Now that we’ve designed our visual language, defined the nouns and verbs, laid out the screens using that visual language, and built some of the basic building blocks–the code to draw our interface elements and the code to manage our screen–all that is left to do is to build the individual screens.

Rather than describe every single screen in our system, I’ll describe the building of one of the screens: the rest are built more or less the same way, and you can see how this all works in the source code.


When the user presses the fan button or presses one of the temperature buttons, we drop the user into the temperature setting page.

Temperatures

This page allows the user to set the fan, turn off the unit, and set the temperature.

Now recall that our user interface is separated into three major blocks of code: the model, the view and the controller:

Model View Controller

From a code perspective, we manipulate the GThermostat object, the model component which is used to directly control the HVAC hardware. Our code also handles the drawing of our layout and setting up the hit locations which represent the areas where the user can tap our screen.

Constructing the page object

Our temperature page, then, is very simple. We need to extend the UI page code we wrote yesterday, initializing our page with the appropriate layout parameters. We then need to draw the contents when they need to be redrawn, and we need to handle tap events.

class AdaTempPage: public AdaUIPage
{
    public:
                        AdaTempPage();
                        
        virtual void    drawContents();

        virtual void    handleEvent(uint8_t ix);
};

Our constructor, likewise, is very simple, since we’re doing most of the work in AdaUIPage:

AdaTempPage::AdaTempPage() : AdaUIPage(&ATemp)
{
}

Literally all we’re doing is passing in a set of globals so our base class can know the title of our page, the label to use for the back button, and the locations of the rectangles that represent our buttons:

static const AdaUIRect ATempRects[] PROGMEM = {
    { 117,  88,  40,  37 },       // Heat ++
    { 117, 126,  40,  37 },       // Heat --
    { 231,  88,  40,  37 },       // Cool ++
    { 231, 126,  40,  37 },       // Cool --
    {  64, 195,  63,  37 },       // Auto
    { 148, 195,  63,  37 },       // On
    { 229, 195,  63,  37 }        // Off
};

static const AdaPage ATemp PROGMEM = {
    string_settings, string_back, NULL, ATempRects, 7
};

Here, string_settings and string_back were declared in a separate “AdaStrings.h” header:

/* AdaStrings.h */

extern const char string_settings[];
extern const char string_back[];

and

/* AdaStrings.cpp */

const char string_settings[] PROGMEM = "SETTINGS";
const char string_back[] PROGMEM = "\177DONE";

Note: Because we reuse these strings, rather than use the F(“SETTINGS”) macro I’ve elected to move all the strings to a central file. This prevents the same string from being stored multiple times in memory, wasting precious program memory space.

Drawing the page contents

We create two support routines to help us draw our buttons. We put the temperature drawing and the fan light drawing in separate routines in order to reduce flicker on the screen.

We could, when the user presses a button, call invalidateContents, the method we created previously to mark the content area as needing redrawing. However, this causes an unacceptable flashing of the screen. So instead, we put the code which draws the temperature area and the fan lights into their own routines; that way we only erase and redraw the portion of the screen that actually changed, reducing the flicker.

static void DrawHeatCool(uint16_t xoff, uint8_t temp)
{
    char buffer[8];

    GC.setFont(&Narrow75D);
    GC.setTextColor(ADAUI_RED,ADAUI_BLACK);

    FormatNumber(buffer,temp);
    GC.drawButton(RECT(xoff,88,70,75),buffer,66);
}

static void DrawFan(uint8_t fan)
{
    GC.setTextColor(ADAUI_BLACK,(fan == ADAHVAC_OFF) ? ADAUI_GREEN : ADAUI_DARKGRAY);
    GC.drawButton(RECT(209,195,19,37));

    GC.setTextColor(ADAUI_BLACK,(fan == ADAHVAC_FAN_AUTO) ? ADAUI_GREEN : ADAUI_DARKGRAY);
    GC.drawButton(RECT( 44,195,19,37));

    GC.setTextColor(ADAUI_BLACK,(fan == ADAHVAC_FAN_ON) ? ADAUI_GREEN : ADAUI_DARKGRAY);
    GC.drawButton(RECT(128,195,19,37));
}

Notice in both cases we make extensive use of our new user interface code. We even use it to draw our temperature–even though the background of the “button” is black. It may be slightly faster to explicitly fill the area with black using fillRect and call the GC.print() method to draw our temperature–but that would cause other libraries to be loaded into memory.

And memory usage in our thermostat is tight. Which means sometimes we reuse what we have rather than link against what may be nicer.

Now that we have these support routines, drawing our buttons and controls is simple:

void AdaTempPage::drawContents()
{
    char buffer[8];

    // Draw temperatures
    DrawHeatCool( 43,GThermostat.heatSetting);
    DrawHeatCool(157,GThermostat.coolSetting);
    
    // Draw buttons
    GC.setFont(&Narrow25D);
    GC.setTextColor(ADAUI_BLACK,ADAUI_BLUE);
    GC.drawButton(RECT(117,88,40,37), (const __FlashStringHelper *)string_plus,28,KCornerUL | KCornerUR,KCenterAlign);
    GC.drawButton(RECT(117,126,40,37),(const __FlashStringHelper *)string_minus,28,KCornerLL | KCornerLR,KCenterAlign);

    GC.drawButton(RECT(231,88,40,37), (const __FlashStringHelper *)string_plus,28,KCornerUL | KCornerUR,KCenterAlign);
    GC.drawButton(RECT(231,126,40,37),(const __FlashStringHelper *)string_minus,28,KCornerLL | KCornerLR,KCenterAlign);
    
    // Draw state buttons
    GC.drawButton(RECT( 32,195,11,37),KCornerUL | KCornerLL);
    GC.drawButton(RECT( 64,195,63,37),(const __FlashStringHelper *)string_auto,28);
    GC.drawButton(RECT(148,195,60,37),(const __FlashStringHelper *)string_on,28);
    GC.drawButton(RECT(229,195,60,37),(const __FlashStringHelper *)string_off,28,KCornerUR | KCornerLR);
    
    DrawFan(GThermostat.fanSetting);
}

All we do is draw our temperature, our four buttons (the “+” and “-” under our temperatures), and the four regions that define the fan at the bottom.

Handling Events

The cornerstone of our “controller” code, the thing that translates user actions into changes in our model, is contained in our handleEvent method. We have seven areas the user can tap, and we handle each of those cases in our code. Rather than duplicate the entire method here–the whole thing is on GitHub–I’ll just talk about one of these cases.

        case AEVENT_FIRSTSPOT+4:
            GThermostat.fanSetting = ADAHVAC_FAN_AUTO;
            DrawFan(GThermostat.fanSetting);
            break;

The constant “AEVENT_FIRSTSPOT+4” refers to the fifth item in the list:

static const AdaUIRect ATempRects[] PROGMEM = {
    { 117,  88,  40,  37 },       // Heat ++
    { 117, 126,  40,  37 },       // Heat --
    { 231,  88,  40,  37 },       // Cool ++
    { 231, 126,  40,  37 },       // Cool --
    {  64, 195,  63,  37 },       // Auto
    { 148, 195,  63,  37 },       // On
    { 229, 195,  63,  37 }        // Off
};

And this is the area drawn by our drawContents code:

void AdaTempPage::drawContents()
{
...    
    // Draw state buttons
    GC.drawButton(RECT( 32,195,11,37),KCornerUL | KCornerLL);
    GC.drawButton(RECT( 64,195,63,37),(const __FlashStringHelper *)string_auto,28);
    GC.drawButton(RECT(148,195,60,37),(const __FlashStringHelper *)string_on,28);
    GC.drawButton(RECT(229,195,60,37),(const __FlashStringHelper *)string_off,28,KCornerUR | KCornerLR);
...
}

Now when our button is tapped on, the location is detected, and we receive a call to handleEvent with the constant AEVENT_FIRSTSPOT+4.

And when we do, we want to turn the thermostat on to “AUTO”:

            GThermostat.fanSetting = ADAHVAC_FAN_AUTO;

Our thermostat code will then use this setting to make decisions in the future about turning on and off the fan as the temperature rises and falls. But because the model code is contained elsewhere, it’s not our problem. We simply tell the thermostat code what to do; it figures out how to do it.

Then we redraw our fan control lights to let the user know the setting was changed:

            DrawFan(GThermostat.fanSetting);

And that’s it. There is no step 3.

We do this for the rest of our event messages as well. One proviso is that we don’t allow the user to set the temperature below 50 or above 90, and we require the heating and cooling temperatures to be at least 5 degrees apart.
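A sketch of how the heat “++” case might enforce those limits; the constant names and the exact comparisons here are my assumptions, not necessarily the project’s:

```cpp
#include <cstdint>

// Assumed limits from the text: settings stay within 50..90, and the
// cooling set point stays at least 5 degrees above the heating set point.
const uint8_t kMinTemp = 50;
const uint8_t kMaxTemp = 90;
const uint8_t kMinGap  = 5;

// Bump the heat setting up one degree, refusing the change if it would
// pass the upper limit or crowd the cooling set point.
static uint8_t BumpHeatUp(uint8_t heat, uint8_t cool)
{
    if (heat >= kMaxTemp) return heat;        // already at the ceiling
    if (heat + kMinGap >= cool) return heat;  // would crowd the cool setting
    return heat + 1;
}
```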

And that’s it.


That’s our thermostat code. Well, there are a bunch of other screens–but all of them more or less follow the same pattern as the code above: we draw the screen, we listen for events, we respond to those events by making the appropriate changes in our model code.

There may be a few thousand lines of page handling code–but they all do the same thing.

And that’s the beauty of good design: it simplifies your code. If all the controls look the same, you can use the same routine to draw the controls. If all your controls behave the same way, you can reuse the same code to handle their behavior.

This simplicity makes the thermostat quick for any user to understand and use. And while we may have started with the goal of an “LCARS”-style Star Trek interface, what we got was something that is pretty simple and nearly invisible. It’s an interface most users would not question if they encountered it.

The overall source kit is at GitHub if you want to test the code yourself.


Wrapping up.

In this six part series we discussed the importance of building a good user interface by carefully considering the visual language. We touched upon discovery–the ability of the user to discover how to use your interface by the use of “affordances”: using elements which behave consistently and which seem apparent to the user. We also touched upon consistency: making sure that when you design your visual language you stick with the design.

We briefly touched upon the components of your visual language: the “nouns” (the things you are manipulating), and the “verbs” (the actions you are taking with your nouns). We touched on the importance of visually separating “nouns” and “verbs” even in a simple interface like a thermostat.

And we spent most of our time putting this into practice: by first taking interface inspirations from a science fiction show to come up with our interface elements, then by putting together those elements in a consistent way to design the screens of our thermostat.

When putting together our code, we discussed the importance of consistency and how it reduces our work when putting together user interface drawing code: by using the same style of button everywhere we only have to write the code for drawing that button once.

We then touched upon the Model-View-Controller paradigm, and put together the “model” which reflected the “nouns” of our user interface and provided interfaces which allowed us to take action against our model–the “verbs” of our interface.

We built our screen control code which handles the common actions–commonality made possible by the consistency of our design. And we finally showed how to put together one of the screens–a process made extremely easy by having a good user interface design. We even touched briefly on times when our user interface guidelines had to be violated for simplicity–and how these exceptions should be rare and only done when necessary.


Hopefully you’ve learned something. Or at the very least, you can see some of the cool things you can do with your Arduino.

User Interface Design Part 5: Common Code and Handling The Screen

In the last article I covered the “Model-View-Controller” code; the way we can think about building interfaces by separating our code into the “model”–the code that does stuff–from the “view”–the code that draws stuff–and the “controller”–the code that controls the behavior of your code.

And we sketched the model code: the low level code which handles the thermostat, storing the schedule and tracking the time. Now let’s start putting it together by building common code for drawing the stuff on our screen.


One element of our user interface we haven’t really discussed yet is the entire screen itself.

Our user interface contains a number of display screens:

Home Screen Final

Temperatures

Schedule Picker Screen

But we’ve given no consideration as to how we will switch screens, how we will move back, or the “lifecycle” of each of our screens. Further, in our limited environment, as we’re not building a hierarchy of views, the “view” to be controlled is essentially the bits and pieces of code that draw and manage our screen–and it’d be nice if we had a way to manage this in a consistent fashion.

For our class we will make use of C++’s “inheritance”, and create a base class that can be overridden for all of our screens. This will allow us to put all the repetitive stuff in one place, so we can reduce the size of our app and reduce the amount of code we have to write.


So what should our screen do?

Switch between screens

Well, our current interface design has this notion of multiple screens and a “back” or “done” button which allows us to pop back to the previous screen. So let’s start by building a class that helps track this for us.

class AdaUIPage
{
    public:

        /*
         *  Global management
         */

        static void     pushPage(AdaUIPage *page);
        void            popPage();
};

These simple methods can track the list of displays that we are showing, maintaining a “stack” of screens, with the currently visible screen on top.

When we tap on a button–such as the fan button on our main screen–we load our new temperature setting screen “on top” of our main screen, effectively pushing our new screen on a stack:

Screen Stack

When we tap the “Done” button, we can then “pop” our stack, leaving us the main screen.

The code for these methods is very simple. First, we need a global variable that represents the top of the stack, and for each page we need a class field which points to the page after this one on the stack:

...
        static void     pushPage(AdaUIPage *page);
        void            popPage();
    protected:
        /*
         *  Linked list of visible pages.
         */
        
        AdaUIPage       *next;
        static AdaUIPage *top;
...

And our methods for adding a page are very simple:

void AdaUIPage::pushPage(AdaUIPage *page)
{
    page->next = top;
    top = page;
}

void AdaUIPage::popPage()
{
    if (next) {
        top = next;
    }
}

Notice what happens. When we call ‘pushPage’ we cause that page to be the top page, and we save the former top page away as our next page. And when we pop a page, we simply set the top page to the page that was behind this page.
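Because pushPage and popPage are just linked-list operations, you can exercise the stack behavior off-device with a standalone mirror of the code above:

```cpp
#include <cstddef>

// Standalone mirror of the pushPage/popPage logic, for checking the
// screen stack behavior on a desktop machine.
struct Page
{
    Page *next = nullptr;
    static Page *top;

    static void pushPage(Page *page)
    {
        page->next = top;   // remember the former top page...
        top = page;         // ...and become the new top
    }

    void popPage()
    {
        if (next) top = next;   // reveal the page behind us
    }
};

Page *Page::top = nullptr;
```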

Now when we show or hide the top page, we’d like a way to signal to the page that it should redraw itself. After all, thus far we’ve just manipulated a linked list.

There are two ways we could do this. Our first option is to simply call a ‘draw’ method to draw the page. But our page code may want to do some work before we show the page. So a second option is to use an “event-driven environment.”

Handle Events

All modern user interfaces are event-driven. An “event-driven environment” is just a fancy way of saying that our code will run in a loop, and look for things to do. If it finds something that needs doing, it then calls some piece of code that says “hey, this needs doing.”

And we’ve seen this before on other Arduino sketches.

All Arduino sketches start with a “setup” call which you can use to set up your sketch; the runtime then repeatedly calls the “loop” function. The idea is that in your “loop” function you write code similar to:

void loop()
{
    if (IsButtonPressed(1)) {
        DoSomethingForButtonOne();
    } else if (IsButtonPressed(2)) {
        DoSomethingForButtonTwo();
    } ...
}

In other words, in your loop you look at all the things that need to be done, and if something triggers an event, you call the appropriate routine to handle that event.

Well, we’re going to do the same thing here. And for our screen, there are two primary events we want to track: if we need to redraw the screen, and if the user tapped on the screen.

So let’s add a new method, processEvents, which we can call from our Arduino sketch’s loop function. And while we’re at it, let’s also add code to track if the screen needs to be redrawn. To do this we create a new field with a bit that is set if the screen needs drawing and (because it’ll be useful later) a bit for when just the content area needs redrawing. We’ll also add two methods to set the bits indicating we need to redraw something:

...
        static void     pushPage(AdaUIPage *page);
        void            popPage();

        static void     processEvents();

        /*
         *  Page management
         */

        void            invalidate()
                            {
                                invalidFlags |= INVALIDATE_DRAW;
                            }
        void            invalidateContents()
                            {
                                invalidFlags |= INVALIDATE_CONTENT;
                            }
    protected:
        /*
         *  Linked list of visible pages.
         */

        AdaUIPage       *next;
        static AdaUIPage *top;

    private:
        void            processPageEvents();

        uint8_t         invalidFlags;       // Invalid flag regions
...

Our processEvents method then checks to see if we even have a top page–and if we do, asks that page to handle its events.

void AdaUIPage::processEvents()
{
    if (top == NULL) return;
    top->processPageEvents();
}

And in our internal processPageEvents method, we check if the page needs to be redrawn and, if so, do the page drawing.

/*  processPageEvents
 *
 *      Process events on my page
 */

void AdaUIPage::processPageEvents()
{
    /*
     *  Redraw if necessary
     */

    if (invalidFlags) {
        if (invalidFlags & INVALIDATE_DRAW) {
            draw();             // redraw the entire page
        } else if (invalidFlags & INVALIDATE_CONTENT) {
            // erase the old contents, then ask the page to
            // redraw just its content area
            drawContents();
        }
        invalidFlags = 0;
    }
}

All this implies we also need a method to draw, a method to draw just the contents area, and a way to find the content area of our screen.

Handling when a screen becomes visible and when it disappears

Now when we call our methods to show and hide a screen, we need to do two things.

First, we need to mark the top screen as needing to be redrawn.

Second, we need to let the screen that is appearing know that it is appearing, and the screen that is disappearing know that it is disappearing. This is because the appearing screen may want to do some setup (such as getting the current time or temperature), and because the disappearing screen may want to save its results.

So let’s extend our pushPage and popPage methods to handle all of this.

First, let’s add two more methods that can be called when the page may appear and when the page may disappear:

...
        void            invalidateContents()
                            {
                                invalidFlags |= INVALIDATE_CONTENT;
                            }

        virtual void    viewWillAppear();
        virtual void    viewWillDisappear();
    protected:
        void            processPageEvents();
...

These methods do nothing by default; they’re there so that when we create a page, we can be notified when our page appears and when it disappears.
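The pattern in miniature, with a made-up TempPage subclass for illustration: the base class supplies empty virtual notifications, and a page overrides only the ones it cares about.

```cpp
// Base class supplies do-nothing lifecycle notifications.
struct Page
{
    virtual ~Page() {}
    virtual void viewWillAppear() {}      // default: do nothing
    virtual void viewWillDisappear() {}   // default: do nothing
};

// A hypothetical page that wants to know when it appears.
struct TempPage : Page
{
    int appearCount = 0;
    void viewWillAppear() override { appearCount++; }
};
```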

Now let’s update our push and pop methods:

void AdaUIPage::pushPage(AdaUIPage *page)
{
    if (top) top->viewWillDisappear();
    if (page) page->viewWillAppear();

    page->next = top;
    top = page;

    top->invalidFlags = 0xFF;    // Force everything to redraw
}

void AdaUIPage::popPage()
{
    if (next) {
        viewWillDisappear();
        next->viewWillAppear();

        top = next;
        top->invalidFlags = 0xFF; // Force everything to redraw
    }
}

Drawing our screen

Notice our original design. Each of our screens uses one of two layouts: one with a list of buttons along the left–

Basic Inverted L

–and one without–

No Inverted L

Since all of our screens are laid out like this, it’d be nice if we handled all of the drawing in this class, so that our child classes can focus on just drawing the title area and the content area.

That is what our AdaUIPage::draw() method does.

I won’t reproduce all of the code here; it’s rather long. But it does make extensive use of our AdaUI class to draw the title bar and the side bar.

To get the name of the titles we need to initialize our page with some information–such as the title of our page, the name of the back button (if we have one), the list of button names, and a list of the location of the buttons that the user can tap on. (We’ll use that list below.)

Our constructor for our class–which is then overridden by our screens–looks like:

/*  AdaUIPage::AdaUIPage
 *
 *      Construction
 */

AdaUIPage::AdaUIPage(const AdaPage *p)
{
    page = p;
    invalidFlags = 0xFF;        // Set all flags
}

The page variable is a new field we add to our class, and of course we mark the page as invalid so it will be redrawn.

And the contents of our page setup structure looks like:

struct AdaPage
{
    const char *title;          // title of page (or NULL if drawing by hand)
    const char *back;           // back string (or NULL if none)
    const char **list;          // List of five left buttons or NULL
    AdaUIRect  *hitSpots;       // Hit detection spots or NULL
    uint8_t    hitCount;        // # of hit detection spots
};

Everything in the AdaPage object is stored in PROGMEM. And a typical page may declare a global that sets these values, so our page code knows what to draw for us.

Our page drawing then looks like this–in pseudo-code to keep it brief:

  1. Erase the screen contents to black.
  2. Draw the title of our page using the title if given, drawTitle() if not.
  3. Draw our back button if we have one
  4. Do we have any left buttons?
    • True: Draw the inverted L with our left buttons.
    • False: Draw a blank title bar if not.
  5. Call drawContents() so our class knows to draw its contents

Handling taps

Our Adafruit TFT Touch Shield contains a nice capacitive touch panel, which can be accessed using the Adafruit FT6206 Library. We use this library to see if the user has tapped on the screen, determine where on the screen the user has tapped, and call a method depending on what was tapped. By putting all this tapping code in our main class, our screens only need to do something like this to handle tap events:

void MyClass::handleEvent(uint8_t ix)
{
    switch (ix) {
        case MyFirstButton:
            doFirstButtonEvent();
            break;
        case MySecondButton:
            doSecondButtonEvent();
            break;
....
    }
}

Now the nice thing about the AdaPage structure we passed in is that we can know if we have a back button, what side buttons we have, and the locations of the buttons we’re drawing on the screen.
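With those rectangles in hand, all the base class needs to map a touch to an event index is a point-in-rectangle scan. A sketch–this is my illustration, not the library’s actual code; on the AVR the rectangles live in PROGMEM and would need to be read with the pgm_read macros:

```cpp
#include <cstdint>

struct Rect { int16_t x, y, w, h; };   // same shape as AdaUIRect

// Return the index of the first rectangle containing (tx, ty), or -1
// if the touch missed every hit spot.
static int hitTest(const Rect *rects, uint8_t count, int16_t tx, int16_t ty)
{
    for (uint8_t i = 0; i < count; i++) {
        const Rect &r = rects[i];
        if (tx >= r.x && tx < r.x + r.w &&
            ty >= r.y && ty < r.y + r.h)
            return i;
    }
    return -1;
}
```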

Note: Normally if we had more program memory we’d define a view class which can then determine the location of itself, and draw itself.

But for us, we need to deal with static text titles, dynamic button titles, and all sorts of other stuff–and 32K of program memory combined with 2K of RAM is just not enough space. So we “blur” the line between views and control code by separating the tap handling of views from the displaying of views. This may make life a little harder on us: if the buttons our content code draws do not align with the rectangles we pass to our constructor, things can get sort of confusing. But it does save us space on a small form factor device.

Our tap code needs to track if the user is currently touching the screen; we’re only interested in the moment the user taps, not in the drag that follows. (If we didn’t, we’d send the same event over and over again just because the user didn’t lift his finger fast enough.) We do this by using a lastDown variable; if it is set, we’re currently touching the screen.

So we add code to our processPageEvents method to test to see if the touch screen has been tapped. In pseudo code our processPageEvents code does the following:

  1. Is the touch screen being touched, and lastDown is false?
    • True:
      1. Set lastDown to true.
      2. Get where we were tapped.
      3. Was the back button tapped?
        • True: Call popPage() and exit.
      4. Did we tap on one of the left buttons?
        • True: Call handleEvent() with the index of the left button and exit.
      5. Did we tap on one of the other buttons?
        • True: Call handleEvent() with the index of the button and exit.
      6. If we get here, nothing was tapped on. Call handleTap() to give the code above a chance to handle being tapped on.
  2. If the screen is not being touched, set lastDown to false.
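The lastDown handling in steps 1 and 2 is just an edge detector; in a standalone sketch:

```cpp
// Report a tap only on the transition from "not touched" to "touched",
// exactly as the pseudo code above describes.
struct TapFilter
{
    bool lastDown = false;

    bool tapped(bool touchedNow)
    {
        if (touchedNow && !lastDown) {
            lastDown = true;    // remember the finger went down
            return true;        // fire exactly once per press
        }
        if (!touchedNow) lastDown = false;  // finger lifted; re-arm
        return false;
    }
};
```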

Summary

So in the above code we’ve created a base class which handles most of our common event handling and screen drawing stuff. This means a lot less work as we create our individual screens on our Thermostat.

And note that we were able to do all of this–creating common code for handling our screens–by having a design language that made things consistent.

That’s because consistency makes things easier to implement: we now have reduced our problem of a big, empty screen by defining exactly how our screens should work–and doing so in a consistent way that allows us to reuse the same code and reuse the same basic drawing algorithms over and over and over again.

And when we have a list of consistent screens, we then become discoverable: the user only has to learn one set of actions and learn one consistent visual language to use the thermostat.

Of course from this point, the rest of our thermostat is simply an exercise in using our two primary tools: AdaUI which draws our visual language, and AdaUIPage which handles the common behavior of our pages, to create our user interface and hook them up to our model.

For next time.

Meanwhile, all of the code above is contained at GitHub and the finished product can be downloaded and installed on your own Adafruit Metro/TFT touch screen.

User Interface Design Part 3: Sketching the rest of our interface.

We have a design language of sorts. We have some simple code. We have an idea of the functions we wish to implement for our Arduino-based thermostat.

Now let’s use that design language and lay out the rest of our screens.


Full disclosure.

For any complex UI (and let’s be honest: this is a bit of a complex UI, especially for an Arduino Metro with 32K of program space, 2K of RAM and 1K of EEPROM), there will always be a bit of give-and-take. You will run out of memory and need to make some design compromises. You will realize certain designs, once implemented, simply don’t work. You’ll discover better ways to organize your code to make it all fit.

The designs here are not the first pass at a design, but reflect the final results. Some screens (for example, the original screen I designed for setting the time) just seemed clunky; others (like the schedule editing screen) required several goes before I came up with something I was happy with. But rather than discuss the different preliminary designs, we’ll discuss the final results, and discuss how they fit in the design language developed in a prior post.

The final implementation of the code is done and checked in to GitHub. And if you want to actually use this with real thermostat hardware, I’ve included some notes in GitHub on how you may wish to do this.


The main screen.

Our main screen follows the model we described in our last post. The only difference is that we draw icons where the back button would go: icons which indicate if the heater is running, if the air conditioner is running or if the fan is running. This is done in part for debugging purposes.

Home Screen Final

Our design language specifies that this area is reserved for the back button–and normally we would leave this area blank if possible. However, the icons we’re displaying are so visually distinct from the back button, and so many mobile apps (from which we borrow the back button concept) have displayed non-back items in this area, that it seems okay in this case to violate our guidelines.

Remember: they’re guidelines, in order to aid in discoverability and consistency. And as long as we don’t violate them (by, for example, displaying an arrow and text that a user could reasonably assume means “go back”), it’s okay in this case to vary from our guidelines.

Temperature Settings Page

If the user taps the “Fan” button, or if they tap one of the temperature range buttons, we drop into the temperature/fan control screen.

Temperatures

Our temperature settings page has no ‘nouns’; the things we’re controlling are pretty direct: the temperature we heat to, the temperature we cool to, and whether the fan is on, running automatically, or the whole system is turned off.

We use our grouping visual language (the rounded rectangle markers) to indicate to the user what can be tapped, and what things are related. This screen should be pretty easy for the user to quickly figure out how to change the temperature settings and how to turn the system on or off.

Settings

The two settings we want to control on our system are the current time and the current date. This screen uses the inverted-L because we have two nouns we can pick from: “Time” and “Date.” We also display the current date and time to the user.

Settings Page

We display the current date and time next to the date/time buttons in order to reinforce to the user that the adjacent buttons are setting the date and the time.

Set Time Page

This is the page we use to set the time.

Time

We provide the user a keypad with which to set the time. In order to give a visual indication to the user that the keypad is all part of the same thing, we use rounded rectangles to mark the corners of the keypad.

Set Date Page

The set date page displays a date picker calendar, which was laid out in code rather than with a drawing tool, because the grid layout of the calendar varies depending on the number of weeks in a month.
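For example, the number of week rows the calendar grid needs is a small computation (the function name here is my own, not the project’s):

```cpp
#include <cstdint>

// Rows needed to display a month with `days` days whose first day falls
// on weekday `firstWeekday` (0 = Sunday).
static uint8_t weekRows(uint8_t firstWeekday, uint8_t days)
{
    return (firstWeekday + days + 6) / 7;   // ceiling of (offset + days) / 7
}
```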

Date

The calendar widget is a familiar one to most users, and we use a different background and foreground color in order to indicate the current date.

Notice that like the set time screen, we don’t use an inverted-L because we have no additional nouns to pick from: the user is directly affecting the current date or the current time.

Schedule

Our original goal for our thermostat was to create a programmable thermostat. By this we mean that we can set a series of times which the thermostat automatically adjusts itself. For example, we can set a time the thermostat turns the heating temperature down and the cooling temperature up, so the HVAC doesn’t work as hard while the owner is away from home.

We model our system by allowing the user to set up five separate schedules: one each for Spring, Summer, Fall and Winter, plus an Energy Saver schedule. Each schedule contains a separate setting for each day of the week–and for each day, the user can set up to four times when the thermostat changes the heating and cooling temperatures.
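One way that storage might be laid out; the field names are my assumptions, and the project’s actual layout may differ:

```cpp
#include <cstdint>

// A schedule entry: when it takes effect and the set points to apply.
struct ScheduleEntry
{
    uint16_t minuteOfDay;   // 0..1439: time the entry takes effect
    uint8_t  heat;          // heating set point
    uint8_t  cool;          // cooling set point
};

// Seven days of the week, up to four entries per day.
struct Schedule
{
    ScheduleEntry days[7][4];
};

// Spring, Summer, Fall, Winter, and Energy Saver.
static Schedule schedules[5];
```

At four bytes per entry this comes to 560 bytes for all five schedules, which comfortably fits in the Metro’s 1K of EEPROM.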

The user can pick between the five schedules from the schedule picker screen:

Schedule Picker Screen

Our main schedule page allows the user to pick the schedule the thermostat is following. We also place a button in the lower left to allow the user to edit the schedules.

Note that this violates our notion about using nouns in the left column. The problem we have, however, is that the button can seem redundant: “Schedule” is the object we’re altering, but that is the screen we’re on.

This is another time when we break our design guidelines (and remember: they’re guidelines) in order to provide better clarity to the user. Notice that this brings us up to two times we’ve violated our guidelines–we don’t violate those guidelines lightly, and only do so if it promotes clarity.

Schedule Editor

Our schedule editor allows the user to actually edit the schedule itself. This is a fairly complex and busy screen.

Schedule Editor

Sadly, like the old joke goes:

A user interface is like a joke. If you have to explain it, it’s bad.

But we can alleviate some of the complexity of this screen by the use of rounded corners.

In this case we use rounded corners to group the setting rows and the days of the week. We use the left bar to pick the specific schedule we are editing. And we provide some verbs, operations we can take on the currently selected day: we can clear that day, copy that day, or paste a previously copied day. The latter two operations allow us to quickly set a group of days to use the same settings.
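The clear/copy/paste verbs reduce to a few lines of code against a single clipboard slot. This is a sketch under assumed names (the actual code's types differ):

```cpp
#include <stdint.h>

// Illustrative day-settings record: up to four entries of
// (hour, minute, heat, cool).
struct DaySettings
{
    uint8_t numEntries;
    uint8_t entries[4][4];
};

static DaySettings clipboard;        // most recently copied day
static bool clipboardFull = false;   // nothing to paste until a copy

void clearDay(DaySettings &day)   { day.numEntries = 0; }

void copyDay(const DaySettings &day)
{
    clipboard = day;
    clipboardFull = true;
}

// Paste does nothing until a day has been copied.
bool pasteDay(DaySettings &day)
{
    if (!clipboardFull) return false;
    day = clipboard;
    return true;
}
```

Copying a weekday and pasting it across the rest of the work week is the quick way to give five days identical settings.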

Edit Schedule Item

If you select a single date/time row, you are brought to a new screen which allows you to edit the temperature and the time for a given schedule row:

Schedule Picker

This screen reuses the date selection elements, but adds two additional control groups: one to set the heating temperature, and another to set the cooling temperature.

Like the other screens we use rounded rectangles to group functionally similar items. We also get rid of the inverted L since we have no other nouns: we are directly manipulating three objects and doing so in an obvious manner.
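One design question these two control groups raise: the heating target must stay below the cooling target, or the HVAC would fight itself. A sketch of how the temperature buttons might enforce this (the two-degree minimum gap is an assumed value, not something taken from the thermostat code):

```cpp
#include <stdint.h>

// Assumed minimum separation between heat and cool targets, in degrees.
const uint8_t kMinGap = 2;

// Raising the heat target pushes the cool target up if they would
// otherwise overlap.
void adjustHeat(uint8_t &heat, uint8_t &cool, int8_t delta)
{
    heat += delta;
    if (heat + kMinGap > cool) cool = heat + kMinGap;
}

// Lowering the cool target pushes the heat target down symmetrically.
void adjustCool(uint8_t &heat, uint8_t &cool, int8_t delta)
{
    cool += delta;
    if (cool < heat + kMinGap) heat = cool - kMinGap;
}
```

Nudging the related setpoint, rather than refusing the tap, keeps the buttons feeling responsive: every press does something visible.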


This is a relatively complex UI, and as I noted above, the implementation is complete and checked into GitHub.

And notice what we’ve done.

In all cases we’ve defined what it is we’re manipulating: the current time, the current temperature–and we’ve carefully defined the nouns and the verbs of our user interface: the noun (such as “the heater temperature setting”), and the verb (“increase the temperature”).

We’ve placed nouns on the left bar when there are multiple nouns to pick from, so the user can select the noun he is interested in manipulating. The content area of the screen then provides the verbs–the controls–by which the user can manipulate those nouns.

We’ve consistently used the inverted-L screen to indicate when there is a group of nouns the user can operate on. We’ve also consistently used a page-navigation scheme to switch between screens as the user switches between items he is interested in manipulating. And our verbs are always placed in rectangular or rounded-rectangle shapes, where the rounding is used to help define groups of verbs. (For example, the “+”/“-” buttons in the Edit Schedule Item screen.)

And in the process we’ve designed the UI for a programmable thermostat that should be relatively easy for a user to use. Yes, the schedule programming screen is cluttered–but the use of grouping should make explaining how to use the screen in a user’s manual relatively trivial.


This process: defining our nouns, defining our verbs, defining the user interface elements and their appearance, and consistently applying these rules as we design our screens, results in a user interface that almost seems inevitable.

It wasn’t; there were a thousand other ways we could have gone. But this use of a design language with well defined nouns, verbs and actions results in an interface that seems simple–almost troublingly simple.

But that is what we want: a user interface that is so discoverable, so consistent, that it seems… invisible.


The next few posts will cover the implementation–the steps used in putting together the code.

User Interface Design Part 2: Sketching our Interface.

Last time we discussed the theory of user interface design: the idea that user interfaces form a sort of language, complete with nouns and verbs and modifiers; that user interfaces need to be discoverable and consistent; and that part of discoverability means defining affordances: things the user can see and think “oh, that must be a button I can press” or “oh, that must be an action I can take.”

We even started sketching a potential interface based on the visual language behind LCARS, which groups buttons using rounded rectangles and provides an inverted L.


Designing a Thermostat.

With this article we’ll use the visual language we started sketching last time to put together a display that could potentially be used to control a thermostat. I’m not going to actually build all the firmware for a thermostat–though you could conceivably expand the code here to do exactly that. (If it all goes haywire, however, don’t blame me for blowing up your HVAC.)

First, let’s review our overall screen designs. We have two fundamental screen layouts: one that can be used to select the nouns or groups of nouns we’re operating on, such as “controls”, “schedule”, etc., and another that provides a full-width screen.

Basic Inverted L

No Inverted L

Each screen provides a title area, and an optional back button which we would use to return back to the top level screen.
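Behind the scenes, back navigation like this is often tracked with a small stack of pages; here is an illustrative sketch (the names are assumptions, not the project's actual classes):

```cpp
#include <stdint.h>

// A tiny navigation stack of page IDs. Pressing back pops to the
// previous page; the root page can never be popped off.
const uint8_t kMaxDepth = 8;

struct PageStack
{
    uint8_t pages[kMaxDepth];
    uint8_t depth = 0;

    void push(uint8_t page)
    {
        if (depth < kMaxDepth) pages[depth++] = page;
    }

    // Returns the page to display after pressing back; stays on the
    // root page when there is nowhere left to go.
    uint8_t back()
    {
        if (depth > 1) --depth;
        return pages[depth - 1];
    }

    uint8_t current() const { return pages[depth - 1]; }
};
```

A stack like this guarantees the back button is consistent: it always retraces the path the user actually took.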

Now let’s consider for a moment the features we would want on a thermostat.

  • We want to display the temperature.
  • Obviously we want to set the temperature. We want to set the temperature at which we turn on the heater, and when we want to turn on the air conditioner. We also want to be able to turn the fan on or off.
  • We want our thermostat to be programmable. That means we want to set the schedule.
  • We want to allow the user to switch between schedules for the winter, summer, and an eco-friendly setting, and to show the user what schedule we’re following.
  • We want to set the time, and other miscellaneous settings, such as screen blanking. (We won’t implement screen blanking as that requires a hardware modification of the Adafruit display. But we will include the UI elements for doing that.)

Our main screen should display the temperature, of course, as well as provide us a way to set the target temperatures, and to set the other items in our system: the fan, the program schedule, the settings.

With this list of ideas we can start sketching the outline of our page.

First, consider our ‘nouns.’ From the list above we have the fan, the schedule and the settings. We also have the target temperatures. It’d be nice to display a curved dial, since traditional thermostats were controlled by a circular dial, and to show the target temperatures as well as the current temperature and our current schedule.

One potential design which accomplishes this is below. Note that our “nouns” are all on the inverted L bar. We display the time in the upper-left in the area we would use for our screen title. And we create a circular piece of art in the center where we show the temperature as well as the heating and cooling targets.

Design 1

The design–based on the rules we devised in the last post–uses color to show the things we can click on, but also uses shape as well. The design is discoverable–the four things the user can do are clearly labeled on the left. We separate the four items by moving settings to the very bottom; this causes settings to stand alone as something separate from the other things we may want to do. And we have a circular piece of artwork which clearly marks the heating and cooling temperatures.

Let’s make one minor modification.

It’s not entirely clear that the “TEMP” button allows us to set our temperatures. We can instead remove that button from the side, and move it into the extra space we have on our screen–with the ‘noun’ (the HVAC target temperatures) being implicit based on the location on the screen:

Design 2

In this design we take advantage of our “grouped buttons” to move the HVAC temperature settings onto the display content area underneath the current temperature. The clarity of this is still there: we continue to use blue–but we use a dark gray background for our buttons so as not to distract too much from the temperature, while preserving our button affordance.

This is the design we will build towards.

Note: there are a thousand different ways we could design our screen. But because we now have a design language we can use, we’ve effectively constrained ourselves: to the shape of buttons, to the layout of screens. And those constraints allow us to have some consistency and to make our screens discoverable.

And ultimately easy to use while being cool.


Now that we’ve sketched the design of our screen, let’s build some code.

And this is where our design language starts to shine–because ultimately, at the bottom of the stack, the core of our UI is composed of two basic symbols: the inverted L shape, and the button.

This is important, because it means all we have to do is build code for our inverted L, and code for our buttons–and the rest is entirely reusable over and over again.

It is important to resist the urge to break the monotony of our symbol set. That is, it’s important not to think “well, hell; I only have an inverted L and a button with rounded corners–why not create blah blah blah…”

Do not do this. That can lead to confusion–and ultimately frustration on the part of your users. Once you’ve designed your visual language, stick to it. This will help with discoverability and with consistency.

And it will streamline your development process.


Our code extends the Adafruit_ILI9341 class, but we will write our changes to only use the Adafruit_GFX library. We do this in order to add our symbol set to our drawing system, and because we want to make use of some internal state while drawing our objects.

The end result looks like this:

class AdaUI: public Adafruit_ILI9341
{
    public:
        /*
         *  Expose our ILI9341 constructors by redirecting to them.
         */

        AdaUI(int8_t _CS, int8_t _DC, int8_t _MOSI, int8_t _SCLK, int8_t _RST = -1, int8_t _MISO = -1) : Adafruit_ILI9341(_CS,_DC,_MOSI,_SCLK,_RST,_MISO)
            {
            }
        AdaUI(int8_t _CS, int8_t _DC, int8_t _RST = -1) : Adafruit_ILI9341(_CS,_DC,_RST)
            {
            }


        /*
         *  Our custom UI elements. We only have two: the rounded separator
         *  bar at top, and the rounded rect with right-aligned text.
         */

        /*  drawTopBar
         *
         *      Draw the curved element in the area provided. This takes five
         *  parameters: the top position, the left position and width, the
         *  width of the side bar, and the orientation of the curve
         *  (upper-left, lower-left, upper-right, lower-right).
         *
         *      If called without parameters, draws our curve at the default
         *  inverted-L location
         */

        void drawTopBar(int16_t top = 32, AdaUIBarOrientation orient = KBarOrientLL, 
                        int16_t left = 0, int16_t width = 0, int16_t lbarwidth = 80);

        /*  drawButton
         *
         *      The only other custom element in our custom UI is our
         *  button/rectangle area. This draws a button in a given
         *  rectangular area, with text that can be left-, center- or
         *  right-aligned, and with rounded corners. The corner radius
         *  is fixed.
         */

        void drawButton(int16_t x, int16_t y, int16_t w, int16_t h,
                        AdaUICorner corners = 0);

        void drawButton(int16_t x, int16_t y, int16_t w, int16_t h,
                        const __FlashStringHelper *str, int16_t baseline, 
                        AdaUICorner corners = 0,
                        AdaUIAlignment align = KRightAlign);

        void drawButton(int16_t x, int16_t y, int16_t w, int16_t h,
                        const char *str, int16_t baseline, 
                        AdaUICorner corners = 0,
                        AdaUIAlignment align = KRightAlign);
};

We have our two basic symbols. The first, drawTopBar, draws the top bar, with default settings that build our basic inverted L symbol; it also allows us to flip the curve into one of four orientations, and to draw the inverted L with any width and side-bar width.

Inverted L

And the other draws our button shapes. We have three versions: one that doesn’t draw any text, one that draws text stored in RAM, and one that draws text stored in program (flash) memory.

Buttons

With this we can now sketch our first screen interface. Note the purpose is not to create a functional screen, only to put together our elements to re-create our design above, and to get a feel for how our elements come together.

The code, now checked into GitHub, when compiled for the Arduino, gives us the following:

Display


Next time we’ll design the rest of our screens, and build code which simplifies the construction of those screens–thus allowing us to quickly prototype our layout.

User Interface Design Part 1: Defining Our Visual Language

I have always wanted to write a series of articles on user interface design from the ground up, but I never seemed to have the time. Well, it’s time to take the time, and now that I have this beautiful little 2.8″ TFT Touch Shield that thus far I’ve been using to show how to render 3D graphics, I figured it was time to put it through its paces for what it was intended to be: the input touch screen controlling some cool little gadget.

Blank canvas

So we have a blank canvas–and we toss in a button here, some text there, a cute icon below that–and we call it a day. Simple, right?


I want to discuss a more systematic way to think about user interface design, one which allows us to assemble an interface that users find easy to use, that supports complex interactions with the user, and that can be extended to provide a consistent visual language to give your gadgets pizzazz.

First, let’s define what we actually mean by “easy to use.”

Waaaaay back in the dark days before there was an Internet, at the very beginning of computer-user interfaces, a division of Xerox was doing research into creating easy-to-use computers for the workplace. Faced with a blank screen, they decided to model their interface after real-world physical things we encounter on a day-to-day basis. Apple used this as the basis for their own computer interface, and crafted human interface guidelines around a standard set of controls.

This set of standard controls included radio buttons–so named because they resembled the preset buttons on old mechanical car radios, which let you quickly tune to a station, but where only one button could be active at a time.

Radio Buttons

They created check-boxes, borrowing visual language from filling in forms, where you can check off the appropriate boxes which match your choice.

We also got buttons, modeled after their electronic counterparts; pull-down menus (which borrow their name from restaurant menus, where you have a list of things to pick from); and text fields, which do not have a good real-world counterpart.

And the notion was that so long as you used these controls–and used them consistently, according to the detailed human interface guidelines provided by a company like Apple–a user would know how to use your application.


What Apple did was create a set of affordances, things that, once we learned what they meant, we would remember how to use them.

Affordances are, in essence, the hints we have in the objects that surround us which give us an immediate idea how that object is supposed to be used. A teapot has a handle, and the shape of the handle tells us we should grab it if we want to pick up the teapot. A door has a knob or handle; the shape of the knob tells us to rotate it to open the door.

At a more fundamental level, however, an affordance is–in a very real sense–a visual language. We know if we see a circle with some text next to it, it’s a radio button–and if we tap on it, we’ll change its state and deactivate the state of the other related buttons.

Radio Buttons

But it doesn’t mean this is the only way we can represent picking one of a list of items. On the iPhone, one can use a picker view instead.

Picker

Some affordances may not even have a particular shape or color or size, but may be based solely on the position the thing occupies on the screen. For example, on the iPhone, the upper part of the screen generally has a button on the left or the right. The button itself is not marked out; we only know the button exists because we’ve become accustomed to the idea that the word in the upper left or the upper right of the screen is a button–and generally the button in the upper left backs us out a screen.


The point is not that we should use a particular design.

The point is that simplicity comes from consistency.

If we know “cancel” or “back” is in the upper-left, we know to always look to the upper left for a cancel or back button. Placing the cancel button in the lower right would be confusing if we got used to the cancel button being somewhere else.

And “language” can even extend to gestures or physical controls. For example, on the side of our phones we know there are two buttons: the top one turns the volume up, the bottom one turns the volume down. If the “volume up” button suddenly turned your phone off, you’d think something was broken.

We could, in theory, conceive of a user interface that consists of a screen full of information, a rotary knob that can be pressed, and a back button. If the back button takes you back a screen–it should always take you back a screen. If suddenly going back means twisting the knob sharply to the left–users will be confused. And they may think your system is broken.


In essence, our user interface is a language, like English, like Latin–but the “domain” of our language is rather limited. You’re not going to talk to your house thermostat about what’s for dinner–but in its domain (what temperature you want your house to be at), your thermostat provides a language: turn the knob to change the temperature. Flip the switch to turn on heat or air conditioning. Flip another switch to turn the system on or off. And a third switch for changing the fan settings.

And that interface is “easy to use”–because it is discoverable: you can find the switch–and when you see a switch, you know how to flip it. You can see the knob–and the dial indicates the temperature in the room and the temperature you want it to be. And it is consistent: when you flip the fan switch, the fan changes state: flipping the fan switch doesn’t order pizza from Domino’s or open the garage door.

Now of course we’re talking about a small 2.8″ touch display. But the same principles apply. You don’t have to make your interface look like anything else–but you have to make your interface discoverable: does the user know what can be tapped on? Does the user have an idea when something is tapped what it will do? Does the user know what cannot be tapped on?

And it’s consistent: do things that look like they can be tapped on do something when they’re tapped? Do they do the same thing when they’re tapped–does a radio button always behave like a radio button and a check box like a check box? Do things that cannot be tapped do nothing when tapped–or are you hiding behavior where the user cannot discover it by being inconsistent?


Further, like any other language, our user interface language has nouns–that is, things we are acting on, and verbs–actions we are taking. More complex applications can even have adjectives: modifiers that describe the nouns of our language, and adverbs: modifiers that alter our verbs.

And when we “talk” to our computer, we are in essence using our interface to construct simple sentences to describe what we want to do. For example, in a drawing application, our “nouns” may be the lines, circles and blocks of text we’re manipulating, and our “verbs” are the ways we are modifying our objects.

Our language may look like this: “Please move that line” (picks the line) “over there” (drags the line to its new location). Or “please draw a new line” (picks the line drawing tool) “over here” (clicks and drags a new line on the screen).

Sometimes the object we’re acting on is implicit: the thermostat knob only changes the temperature. Sometimes the order in which we pick the noun (the object of our action) and the verb (the action we are taking) can be reversed: we may pick a tool which shrinks everything we touch in half.

And sometimes our language has contexts. For example, we may have a programmable thermostat; on the ‘main screen’ the knob controls temperature. But on another ‘screen’, the knob controls the time during the day when we want to raise or lower the temperature. This context can be thought of as a way to pick nouns on a limited screen (what temperature do you want for your house?), or it could be thought about as a “context” where we are having a conversation: in the kitchen, a ‘cup’ may mean something entirely different than a ‘cup’ on a football uniform.


As a side note, it is the failure to adhere to these two basic principles: discoverability and consistency–for example, consistency in how you construct complex sentences as you “talk” to your computer–which make some applications harder and harder to use.

In other words, you are not stupid. The designers of the application you are having a hard time with failed to adhere to these basic principles, and they failed to adhere to some of the other principles outlined in the book Designing with the Mind in Mind, such as responsiveness–the fact that when the user interacts with a computer they expect the computer to respond quickly, even if the response is “stand by.”

And increasingly more and more “UI Designers” are being hired out of art school with no formal training in affordance theory or in the necessity for discoverability and consistency.


It’s also worth talking about color. Color is a great way to augment a user interface–and certainly a colorful interface is an attractive one. But you need to consider color as part of your language–as a way to augment the information you are presenting, such as using color to represent adjectives for the nouns of your language.

But you need to be careful when using color: partially because some people are color-blind, and partially because you may not always get to use a beautiful color display–so you need to have another way to specify your adjectives.

B&W Display

Where is your color God now?


All of this is a long-winded way to say we need to think of what we want our visual language to look like, and we need to consider how we pick the things we act on and how we act on them. That includes determining the appearance and behavior of our controls–with consistency being key–and the way we specify the things we are operating on and the things we can do to them.

And today we will start designing our visual language by taking inspiration from Star Trek.

Of course there are two immediate problems we face here. First, the original LCARS interface is copyright CBS Studios. And second, there is no real consistency between screens; the layout of the screens varies according to the artistic needs of each episode rather than being designed according to the principles of discoverability and consistency.

But we certainly can use the ideas here as a launching off point.

For our design we’ll use some of the shapes but then design a standard language for our visual interface that relies on these basic shapes to function.

First, we’ll start with a sloped inverted L shape. We’ll use this to divide the screen into a top part, which contains a title and an optional ‘cancel’ button in the upper left, and a bottom content area. If we have ‘nouns’ we intend to operate on, we can put them along the left column of the inverted L pattern; pressing one can either navigate to a new screen or update the content area as appropriate. Otherwise, we can hide the left part and simply have a bar.

Basic Inverted L

No Inverted L

We will also create basic button groups. The idea is that these are all buttons that are related to some piece of functionality. We group them by using rounded corners.

Grouped Buttons
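One way to implement this grouping is to treat the rounded corners as a bitmask, rounding only the outer corners of each group. This is a sketch of the idea; the actual corner type in the project's code may differ:

```cpp
#include <stdint.h>

// One bit per corner, combinable with |.
typedef uint8_t Corners;
const Corners kCornerUL = 0x01;
const Corners kCornerUR = 0x02;
const Corners kCornerLL = 0x04;
const Corners kCornerLR = 0x08;

// For a vertical group of buttons: the top button rounds only its
// upper corners, the bottom button only its lower ones, and middle
// buttons none--so the group reads as a single unit.
Corners cornersForRow(uint8_t row, uint8_t numRows)
{
    Corners c = 0;
    if (row == 0)           c |= kCornerUL | kCornerUR;
    if (row == numRows - 1) c |= kCornerLL | kCornerLR;
    return c;
}
```

A single-button group gets all four corners rounded, which falls out of the same rule with no special casing.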

Finally we’ll use a grouping that contains a rectangular region on the left to indicate a group of radio buttons.

Radio Buttons

We’ll use this as the start of our visual language for buttons and radio buttons, as well as navigating between screens and picking the ‘nouns’ we’re operating on.


Now that we’ve sketched some ideas–and have a framework in which to use these ideas–next time we’ll start building the code, including our custom font and some changes to the Adafruit GFX library to reduce flicker when pressing a button.