Building a 3D Graphics Pipeline on an Arduino, Part 1

This is the first in a series of articles I’ve been intending to write showing how to build a basic 3D graphics pipeline on an Arduino. By the end of the series you should have a good understanding of basic 3D graphics, including coordinate transformations, wireframe rendering and perspective.


Things you need to follow along.

Of course, if you want to write software for an Arduino, you need one.

For this series of exercises I used the Adafruit METRO 328, which can be had (as of this writing) for $17.50. You can also use any other Arduino or Arduino clone, but you may need to fiddle with the settings.

I also used the 2.8″ TFT Touch Shield for Arduino. The advantage of all of this is that we can simply plug the two parts together, plug it into our computer, download the Arduino IDE, and we have a basic development environment.

Adafruit also does a very good job integrating with the Arduino IDE, providing libraries that can be quickly downloaded to test your display. And they provide a great on-line tutorial to get you up and running in just a few minutes.

Of course by using these components we create a dependency on their libraries; if you decide to use a different display shield from a different company you’ll also need to make a few changes in our code to make it all hang together. We’ll talk about that when we get there.

When you get it all set up, you should see a series of test screens.

[Image: the assembled hardware running the test screens]

* Bear not included.


Introduction.

The reason why we call this a “graphics pipeline” is that we’ll break the problem of drawing 3D objects down into a series of steps, a “pipeline”, where we do a small but well-defined unit of work at each step along the way.

By organizing our drawing as a pipeline we simplify each step of the process. Each step may not do much, but when added together we can create incredible things.

The pipeline we will build will allow us to draw 3D wireframes. We’ll do a series of articles covering polygonal rendering later. At the end of this series of articles we will have created a pipeline that looks like the following:

[Diagram: the 3D graphics pipeline we will build]

So let’s get started.

Hardware drawing.

At the very bottom of our pipeline is the code which draws directly to the hardware. For this we will rely on the Adafruit GFX library to actually handle the drawing–but we still call this layer out because it is the only point in our code where we talk to the Adafruit GFX library in a meaningful way. (Our code on construction interrogates the library to get the screen size, and we expose the start and stop drawing methods and color methods–but that’s it.)

We could, in theory, write our own line drawing routine. Bresenham’s line drawing algorithm draws lines very quickly using only integer arithmetic, and it is the algorithm used by the Adafruit library. Our line drawing routine could also be wired to another library, such as the graphics library used by the Arduboy hand-held game.
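If we ever did want to roll our own, a minimal sketch of Bresenham’s algorithm might look like the following. (This is for illustration only; the ‘plot’ callback is hypothetical, and the code in this series uses the Adafruit library’s line drawing instead.)

// A minimal sketch of Bresenham's line algorithm, for illustration only;
// the pipeline in this series relies on the Adafruit GFX library instead.
// 'plot' is a hypothetical callback that sets a single pixel.
void bresenhamLine(int16_t x0, int16_t y0, int16_t x1, int16_t y1,
                   void (*plot)(int16_t x, int16_t y))
{
    int16_t dx = abs(x1 - x0);
    int16_t dy = -abs(y1 - y0);
    int16_t sx = (x0 < x1) ? 1 : -1;
    int16_t sy = (y0 < y1) ? 1 : -1;
    int16_t err = dx + dy;                      // combined error term

    for (;;) {
        plot(x0,y0);
        if ((x0 == x1) && (y0 == y1)) break;
        int16_t e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step along x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step along y
    }
}

Notice the whole routine uses nothing but integer addition, subtraction and comparison, which is why it runs quickly even on an 8-bit processor.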

Line Drawing

The primary line drawing method for the first part of our 3D pipeline is the ‘p1movedraw’ method. It contains all the logic for moving a virtual pen to a given location, or drawing a line from the current pen location to a new one.

We use this ‘pen’ drawing model because it allows us to keep track of the state of the current location; this comes in handy when we do our 3D calculations or our line clipping.

This means our class G3D must keep track of the current pen location (and all the state revolving around the current pen location).

So let us create our 3D graphics class. First, we need to set up the constructor and the destructor to initialize and shut down our 3D pipeline, as well as entry points for setting the current color and for starting and stopping line drawing.

class G3D
{
    public:
                G3D(Adafruit_GFX &lib);
                ~G3D();

        void    setColor(int16_t c);		
        void    begin();
        void    end();
        void    move(uint16_t x, uint16_t y);
        void    draw(uint16_t x, uint16_t y);
        void    point(uint16_t x, uint16_t y);
};

The three routines move, draw and point are test methods; we will replace them when we build the next level of our graphics engine.

Now let’s add the internal state we need to track, as well as our constructor and destructor. We want to keep track of our drawing library as well as the screen size and color:

class G3D
{
    public:
                G3D(Adafruit_GFX &lib);
                ~G3D();

        void    setColor(int16_t c)
                    {
                        color = c;
                    }

        void    begin();
        void    end();
        void    move(uint16_t x, uint16_t y);
        void    draw(uint16_t x, uint16_t y);
        void    point(uint16_t x, uint16_t y);
    private:
        /*
         *  Internal state
         */
        Adafruit_GFX &lib;
        uint16_t width;
        uint16_t height;

        /*
         *  Current drawing color
         */

        uint16_t color;
};

Our constructor simply notes the width and height of the screen as well as the drawing library; our destructor does nothing:

G3D::G3D(Adafruit_GFX &l) : lib(l)
{
    width = l.width();
    height = l.height();
}

G3D::~G3D()
{
}

We also want to build the begin and end methods. These call into the Adafruit GFX library to start and end drawing to our device. The reason why we do this is to optimize drawing; we don’t want to unnecessarily start up and shut down communications to our hardware every time we draw a line.

void G3D::begin()
{
    lib.startWrite();
}

void G3D::end()
{
    lib.endWrite();
}

Our setColor method can be handled inline above.


All of this so far has just been plumbing: creating our class and tracking state.

The real meat of our first level of drawing is the p1movedraw method and its associated state. The idea here is to create a single entry point which handles both moving and drawing.

Our p1movedraw method needs to keep track of the current pen location, as well as whether the last operation moved the pen to that location or drew a line to it. So let’s add the additional methods needed to handle moving and drawing:

class G3D
{
    public:
                G3D(Adafruit_GFX &lib);
                ~G3D();

        void    setColor(int16_t c)
                    {
                        color = c;
                    }

        void    begin();
        void    end();
        void    move(uint16_t x, uint16_t y);
        void    draw(uint16_t x, uint16_t y);
        void    point(uint16_t x, uint16_t y);
    private:
        /*
         *  Internal state
         */
        Adafruit_GFX &lib;
        uint16_t width;
        uint16_t height;

        /*
         *  Current drawing color
         */

        uint16_t color;

        /*
         *  State 1 pipeline
         */

        bool    p1draw;
        uint16_t p1x;
        uint16_t p1y;

        void    p1init();
        void    p1movedraw(bool drawFlag, uint16_t x, uint16_t y);
        void    p1point(uint16_t x, uint16_t y);
};

The internal variable p1draw is true if the last call drew a line rather than moved the pen. The pair (p1x,p1y) is the pixel location of the pen. We have an initialization method for our drawing, as well as the p1movedraw method (which either moves the pen or draws a line) and the p1point method (which plots a single pixel).

With our methods defined for our level 1 code, we can add our initialization method to our class constructor:

G3D::G3D(Adafruit_GFX &l) : lib(l)
{
    width = l.width();
    height = l.height();

    /* Initialize components of pipeline */
    p1init();
}

And our test methods simply call into our internal drawing routines:

void G3D::move(uint16_t x, uint16_t y)
{
    p1movedraw(false,x,y);
}

void G3D::draw(uint16_t x, uint16_t y)
{
    p1movedraw(true,x,y);
}

void G3D::point(uint16_t x, uint16_t y)
{
    p1point(x,y);
}

Hardware drawing

This is a lot of stuff, but it gets us to the core of our drawing methods. And we’ll reuse a lot of this machinery when we get to the next phase of our drawing code.

First is initialization. Our initialization routine should set our system to a known state; in this case, we move the pen to (0,0). Recall that the variable p1draw is true if we drew last, false if we moved.

Our initialization code then sets everything to zero:

void G3D::p1init()
{
    p1draw = false;
    p1x = 0;
    p1y = 0;
}

Our point drawing does not move the pen; it simply sets the pixel. This is a thin wrapper around the Adafruit GFX library’s pixel writing call:

void G3D::p1point(uint16_t x, uint16_t y)
{
    lib.writePixel(x,y,color);
}

Note: We call the ‘writePixel’ method rather than the ‘drawPixel’ method because we are already calling the ‘startWrite’ and ‘endWrite’ methods inside our ‘begin’/’end’ pair.
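To make the distinction concrete, this is roughly the relationship between the two calls (a simplified sketch, not the library’s actual source):

// Simplified sketch of the drawPixel/writePixel relationship in the
// Adafruit GFX hardware drivers (illustration only, not the real source).
void drawPixelSketch(Adafruit_GFX &dev, int16_t x, int16_t y, uint16_t c)
{
    dev.startWrite();           // open a transaction to the display
    dev.writePixel(x, y, c);    // assumes a transaction is already open
    dev.endWrite();             // close the transaction
}

Since G3D::begin and G3D::end already bracket all of our drawing with ‘startWrite’ and ‘endWrite’, calling ‘drawPixel’ here would open and close a transaction for every single pixel we plot.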

And our move/draw routine will draw a line only if the pen is down; we then update our current state.

void G3D::p1movedraw(bool drawFlag, uint16_t x, uint16_t y)
{
    if (drawFlag) {
        lib.writeLine(p1x,p1y,x,y,color);
    }

    p1draw = drawFlag;
    p1x = x;
    p1y = y;
}

Note that this is the place we would need to change our code if we needed to draw our own line using Bresenham’s line drawing algorithm, or if we needed to create a draw list to send to a piece of connected hardware.
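For example, a hypothetical port to the Arduboy could swap out just the body of this method for the Arduboy2 library’s drawLine call. (A sketch only; it assumes the class holds an Arduboy2 reference named ‘arduboy’ instead of the Adafruit_GFX reference.)

// Hypothetical Arduboy variant of p1movedraw (sketch only). Assumes the
// class stores an Arduboy2 reference named 'arduboy', and that 'color'
// holds WHITE or BLACK as the Arduboy2 library expects.
void G3D::p1movedraw(bool drawFlag, uint16_t x, uint16_t y)
{
    if (drawFlag) {
        arduboy.drawLine(p1x, p1y, x, y, color);
    }

    p1draw = drawFlag;
    p1x = x;
    p1y = y;
}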


We can put all this together with a simple test:

#include <SPI.h>
#include <Adafruit_GFX.h>
#include <Adafruit_ILI9341.h>
#include "G3D.h"    // header declaring our G3D class (file name assumed)

// Pin assignments for the 2.8" TFT Touch Shield (adjust if your wiring differs)
#define TFT_DC 9
#define TFT_CS 10

// Use hardware SPI (on Uno, #13, #12, #11) and the above for CS/DC
Adafruit_ILI9341 tft = Adafruit_ILI9341(TFT_CS, TFT_DC);

// Graphics setup
G3D draw(tft);

void setup() 
{
}

void loop() 
{
    tft.begin();
    tft.fillScreen(ILI9341_BLACK);
    
    draw.begin();
    draw.setColor(ILI9341_RED);
    draw.move(0,0);
    draw.draw(0,100);
    draw.draw(100,100);
    draw.draw(100,0);
    draw.draw(0,0);
    draw.move(100,120);
    draw.draw(100,140);
    draw.end();

    for (;;) ;
}

And if all works well, we should see:

[Image: the level 1 test output, a red square with a short detached line segment]

Okay, it’s not much. But you have to learn to crawl before you walk.

The source code for this version of the 3D graphics pipeline is available on GitHub; the linked branch shows just the first part of the graphics pipeline.


Now this is rather boring. If this was all we wanted to do, big deal.

But remember our original principle:

Each step may not do much, but when added together we can create incredible things.

The next step in our pipeline is viewport drawing.

The purpose of viewport drawing is to abstract away the pixels. That is, what we want to do is to redefine our drawing coordinates so that we get more or less the same drawing regardless of the dimensions of the physical hardware attached to our Arduino. And we do this by defining a new viewport coordinate system that ranges from -1 to 1.

Note: We will be using floating point math for our operations, which may not necessarily be the fastest on an Arduino. However, it should be fast enough for us to do some simple things like rotating a cube. When designing for an Arduino, the most important thing you can do is figure out what not to draw; that way you can avoid doing math for stuff that is not visible.

Now our virtual coordinate system imposes a virtual square; the top and bottom of our screen (or the left and right sides, depending on the orientation of our screen) fall at coordinate values smaller in magnitude than 1:

[Diagram: the viewport coordinate system mapped onto the screen]

This means we have a little math to do.

Basically we need to figure out two things when our 3D class starts up:

  1. What are the viewport drawing dimensions of the screen? We need this in order to determine the dimensions of the screen we are clipping to.
  2. To convert from a viewport coordinate (x,y) to pixel coordinates (px,py), we need to calculate px = A + B*x and py = C + D*y for some values A, B, C, D. What are those values?

Things get particularly interesting on the fringes. Note that when we round from a floating point value to an integer, we generally truncate (or round down); this means the value 12.75 becomes 12 when converted to an integer. So we can’t, for a screen that is 320 pixels wide, write:

    px = 160 + 160 * x;  // 160 = 320 / 2

Because if we plug in a value 1 for x, we get px = 320–and our pixel coordinate range is from 0 to 319.

So we solve this problem by bumping the width down by 1 pixel, and we calculate our pixel range (in floating point values) from 0.5 to 319.5. (This will then convert to integer values from 0 to 319.)

Thus we’d write:

    px = 160 + 159.5 * x;  // 159.5 = 319/2

Plugging in -1 for x gives us px = (int)0.5 = 0, and 1 for x gives us px = (int)319.5 = 319.

We have a second problem in that in our pixel coordinates (0,0) is in the upper-left corner of the screen–but we want our drawing to put (-1,-1) at the bottom-left. This means we need to flip the sign of the y value during our calculations:

    px = A + B * x;
    py = C - D * y;

Our third problem, of course, is that our display is not square; it’s rectangular. But we assume our pixels are square, and we assume whichever dimension is larger (the width or the height) will map from -1 to 1; the other side will map from roughly -0.75 to 0.75 or so.

This implies the scale values B and D that we multiply by in the equations above are the same. (If our pixels are not square we need to do something different.)
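As a quick worked example, using the 320 pixel wide by 240 pixel tall landscape display from the discussion above:

    B = D = 319/2 = 159.5       // scale; the width is the larger dimension
    A = 320/2 = 160             // x offset
    C = 240/2 = 120             // y offset

    px = 160 + 159.5 * x;       // x = -1 -> 0,      x = +1 -> 319
    py = 120 - 159.5 * y;       // y = +0.749 -> 0,  y = -0.749 -> 239

So in landscape orientation x runs the full -1 to 1 range, while the visible y range is only about -0.749 to 0.749 (that is, 239/319).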


So let’s put all this together.

For our viewport coordinate system we need to track five variables: two giving the (+/-) width and (+/-) height of our viewport:

        float	p2xsize;      // viewport width +/-
        float	p2ysize;      // viewport height +/-

And three that handle the transform: the amounts we multiply by and add to our viewport coordinates to give our pixel coordinates:

        float	p2scale;      // coordinate transform scale.
        float	p2xoff;      // coordinate transform offset
        float	p2yoff;

Of course we also need to add our level 2 pipeline code.

All of this looks like:

class G3D
{
    public:
                G3D(Adafruit_GFX &lib);
                ~G3D();

        void    setColor(int16_t c)
                    {
                        color = c;
                    }

        void    begin();
        void    end();
        void    move(float x, float y);
        void    draw(float x, float y);
        void    point(float x, float y);
    private:
        /*
         *  Internal state
         */
        Adafruit_GFX &lib;
        uint16_t width;
        uint16_t height;

        /*
         *  Current drawing color
         */

        uint16_t color;

        /*
         *  State 2 pipeline; map -1/1 to screen coordinates
         */

        void    p2init();
        void    p2movedraw(bool drawFlag, float x, float y);
        void    p2point(float x, float y);

        float   p2xsize;        // viewport width +/-
        float   p2ysize;        // viewport height +/-
        float   p2scale;        // coordinate transform scale.
        float   p2xoff;         // coordinate transform offset
        float   p2yoff;

        /*
         *  State 1 pipeline
         */

        bool    p1draw;
        uint16_t p1x;
        uint16_t p1y;

        void    p1init();
        void    p1movedraw(bool drawFlag, uint16_t x, uint16_t y);
        void    p1point(uint16_t x, uint16_t y);
};

Notice we also change our test routines move, draw and point to take floating point coordinates.


Our p2movedraw and p2point routines look similar: they perform our math calculation using the constants we’ve calculated during startup to transform from viewport coordinates to pixel coordinates:

void G3D::p2point(float x, float y)
{
    int16_t xpos = (int16_t)(p2xoff + x * p2scale);
    int16_t ypos = (int16_t)(p2yoff - y * p2scale);
    p1point(xpos,ypos);
}

void G3D::p2movedraw(bool drawFlag, float x, float y)
{
    int16_t xpos = (int16_t)(p2xoff + x * p2scale);
    int16_t ypos = (int16_t)(p2yoff - y * p2scale);
    p1movedraw(drawFlag,xpos,ypos);
}

Now the heavy lifting is in our initialization code, which must find all of our constants given the size of the screen. As we noted before, we assume pixels are square.

Our initialization code first needs to set up some constants and find the screen dimensions for our clipping method. We then need to calculate the scale parameter and ultimately the offsets. In the end our code looks like:

void G3D::p2init()
{
    /*
     *  We subtract one because we want our mapping to work so that
     *  virtual coordinate -1 lands in the middle of the 0th pixel, and
     *  +1 lands in the middle of pixel (width-1) for wide displays.
     *
     *  In effect we scale as if the display were one pixel narrower and
     *  shorter, then offset by half the full width and height, so the
     *  extreme coordinates land on 0.5 and (width-1)+0.5 before being
     *  truncated to integers.
     */

    uint16_t h1 = height - 1;
    uint16_t w1 = width - 1;

    /*
     *  Calculate the width, height in abstract coordinates. This
     *  allows me to quickly clip at the clipping level
     */

    if (w1 > h1) {
        p2xsize = 1;
        p2ysize = ((float)h1)/((float)w1);
    } else {
        p2xsize = ((float)w1)/((float)h1);
        p2ysize = 1;
    }

    /*
     *  Calculate the scale, offset to transform virtual to real.
     *  Note that -1 -> 0 and 1 -> (width-1) or (height-1).
     */

    if (w1 > h1) {
        p2scale = ((float)w1)/2;
    } else {
        p2scale = ((float)h1)/2;
    }

    p2xoff = ((float)width)/2;
    p2yoff = ((float)height)/2;
}
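As a sanity check, tracing this by hand for the ILI9341’s default 240×320 portrait orientation (a worked example, not additional code):

    w1 = 239, h1 = 319                    // the height is the larger dimension
    p2xsize = 239/319 = 0.749 (approx)    // visible x range is about +/- 0.749
    p2ysize = 1
    p2scale = 319/2 = 159.5
    p2xoff  = 240/2 = 120
    p2yoff  = 320/2 = 160

    // y = +1 maps to (int16_t)(160 - 159.5) = 0    (top row of pixels)
    // y = -1 maps to (int16_t)(160 + 159.5) = 319  (bottom row of pixels)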

Of course we need to change our main Arduino test code to use the new floating point values for coordinates:

void loop() 
{
    tft.begin();
    tft.fillScreen(ILI9341_BLACK);

    draw.begin();
    draw.setColor(ILI9341_RED);
    draw.move(-0.5,-1);
    draw.draw(0.5,-1);
    draw.draw(0.5,1);
    draw.draw(-0.5,1);
    draw.draw(-0.5,-1);

    draw.move(-0.75,-0.75);
    draw.draw(-0.5,-0.75);
    draw.draw(-0.5,-0.5);
    draw.draw(-0.75,-0.5);
    draw.draw(-0.75,-0.75);
    draw.end();

    for (;;) ;
}

And if everything works correctly, you should see the following:

[Image: the viewport test output, two rectangles centered on the screen]

Again, not all that exciting.

Except for one thing: This is what your image will look like regardless of your screen size. This becomes important as we move on and start building our 3D rendering engine.

All of this code is available at GitHub in a separate branch.


This has been an extremely long post, but a lot of this was setting the stage for future posts. The next post will discuss homogeneous coordinates, the coordinate system we use for representing objects in 3D. This will be followed by a post discussing clipping in homogeneous coordinates, then we’ll move into some 3D math.
