CirclesAndRotators Design And Coding Decisions

The previous five posts, starting with Drawing Circles With OpenGL and finishing with Adding a Moving Triangle, built an increasingly complex program that displays two circles and a triangle moving about the drawing canvas. A number of design and coding decisions were made when the program was written. This post discusses those decisions, and in doing so describes some of the efficiencies and inefficiencies that can result when using OpenGL.

OpenGL C++ Libraries?

The CirclesAndRotators program uses the OpenGL C API directly (well, through GLEW). There are C++ wrapper libraries available, but they were not used because I am just starting to learn OpenGL, and introducing wrapper libraries at this stage would confuse the issue, and probably reduce the meager readership that I currently have.

There are at least three C++ wrapper libraries:

  1. OGLplus;
  2. OOGL; and,
  3. glbinding.

All of these are open source with source code on GitHub. OGLplus is actively maintained, while OOGL has not had any updates for two years and has bugs that have been outstanding for four years. glbinding uses C++11 features such as enum classes, lambdas, and variadic templates rather than macros. It is actively maintained and comes with commercial support if needed. I leave it to you to decide whether you want to use any of these, but before doing so, read this discussion on StackOverflow.

How A Circle is Defined

It is possible to define a circle in terms of many triangles with one vertex at the centre of the circle and the other two vertices on the edge of the circle. The more triangles you use, the more closely the drawn object looks like a circle rather than a polygon.

I chose instead to simply define the smallest square that completely encloses the circle, and, in the fragment shader, to discard pixels outside the circle. This method works for circles of any size; it also illustrates some of what it is possible to do in shaders.

glUseProgram

In the OnPaint method in CirclesAndRotatorsCanvas, glUseProgram is called to set the circle shader program, then the two circles are painted. glUseProgram is called again, this time to set the triangle shader program before the triangle is painted.

An alternative would be to place the glUseProgram calls in the GlCircle and GlEquilateralTriangle Paint methods. However, glUseProgram is expensive in terms of the work that the GPU must perform to switch shader programs, so the fewer program switches, the better your program will perform. In the CirclesAndRotators program, you will not perceive any difference between having the glUseProgram calls in OnPaint and having them in the Paint methods for each object; however, once a program has many thousands of objects, there will be a noticeable difference in performance. In general, call glUseProgram once, paint all of the objects that use that shader program, then call glUseProgram for the next set of objects, and so forth.

Perform Transform in CPU or GPU?

Most graphics programs are designed such that objects are created to be centred on the origin of the display canvas, and then transformed to their final location; this is especially true for objects that are moved about the canvas whenever a frame is painted.

There are two places these transformations can be performed:

  1. By the CPU before moving the vertex data to the GPU; and,
  2. By the GPU.

The second is preferred for a number of reasons:

  1. CPUs typically have between 2 and 16 logical cores. You could create multiple threads to calculate the final position of each vertex and pass the vertices to the GPU in vertex buffers each time the frame is painted, but how quickly you can transform the vertices is limited by the number of logical cores in the CPU. Alternatively, you could pass the initial vertices in a vertex buffer once, and pass the transform via a uniform each time the frame is painted. Modern GPUs are optimized for parallel processing, and may contain hundreds or thousands of cores to perform the processing. For complex objects that contain many vertices, performing the transformation in the GPU is much more efficient.
  2. Transforming the vertices in the CPU performs work that could be done in a vertex shader, but what happens if you need access to the transform elsewhere in the graphics pipeline? For example, in the circle fragment shader in the CirclesAndRotators program, the transform is required so that each fragment can be transformed to determine whether it is inside or outside the circle. That transform information is not available to the fragment shader if the vertices are transformed by the CPU.
  3. For programs that display a large number of objects, each containing a large number of vertices, the vertex buffers that are transferred would be very large in comparison to the 16 values that are passed in a transformation matrix. Moving data to the GPU is a relatively time-consuming operation whose time is determined to some extent by the size of the data being moved. Transporting a large amount of data every time a frame is painted by the GPU takes more time than transporting a large amount of data once and smaller amounts of data each frame.

For the reasons given above, it is generally better to perform transformations in shaders (by the GPU) rather than in the program (by the CPU).

Where To Apply Multiple Transforms

In the CirclesAndRotators program, there are two transforms created for each circle that are then multiplied together to form a composite transform. For the triangle, there are four transforms created. These are then multiplied together to form a composite transform. The composite transforms are passed to the shaders where they are applied to the vertices in the vertex shader, and also in the fragment shader for circles.

An alternative would be to pass all of the individual transforms to the shaders and multiply them together there. However, the result of multiplying the individual transforms together is the same for every vertex each time a frame is drawn, so repeating the multiplication for each vertex, and for each fragment in the case of the circles, performs a large amount of unnecessary work. Therefore, only the composite transforms should be passed as uniforms to the shaders.

Note also that in the circle fragment shader, the centre of the circle is calculated for every fragment, even though the result is the same for every fragment while a circle is painted. This calculation should therefore have been done on the CPU, with the result passed as a uniform.

Summary

This post has discussed some of the design and coding decisions that I made when writing the CirclesAndRotators program. A number of options were discussed, and the choices that are better for drawing efficiency were noted.


Adding A Moving Triangle

This is a follow-on from Two Rotating Circles. In this post, we will be adding a triangle that rotates about its centre, is scaled to half of its size, and is translated a variable distance from the centre of the canvas while rotating about the centre of the canvas. The code in the CirclesAndRotators program is modified to use multiple shader programs, and to use multiple vertex buffers to specify the data that is passed to the triangle’s vertex shader. Here is a video of the resulting display:

We have created a triangle like this before in OpenGL Shaders. In the program generated for that post, the vertices and the colours for the vertices were passed in a single buffer to the vertex shader. In order to illustrate the use of multiple buffers, we will pass the vertices and colours in two separate buffers. Here is the vertex shader for the triangle:

    const GLchar* vertexSource =
        "#version 330 core\n"
        "in vec4 position;"
        "in vec4 inColor;"
        "out vec4 outColor;"
        "uniform mat4 transform;"
        "void main()"
        "{"
        "    gl_Position = transform * position;"
        "    outColor = inColor;"
        "}";

and here is the fragment shader:

    const GLchar* fragmentSource =
        "#version 330 core\n"
        "in vec4 outColor;"
        "out vec4 OutColor;"
        "void main()"
        "{"
        "	OutColor = outColor;"
        "}";

As you can see, these are simple shaders. The only differences between these shaders and the shaders in the OpenGL Shaders post are the use of two vec4s and the uniform for passing the transform into the vertex shader.

The code for compiling the shaders and creating the shader program is the same as for the circle shaders and circle shader program, so they will not be shown here. See the source code in GitHub or the Drawing Circles With OpenGL post.

In the last post, we created a GlCircle class to encapsulate the functionality to create and draw circles. We will do the same for the triangle. Here is the declaration of the GlEquilateralTriangle class:

#pragma once
#include "GL/glew.h"
#include "wx/glcanvas.h"
#define GLM_FORCE_CXX14
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

class GlEquilateralTriangle
{
public:
    GlEquilateralTriangle(float radius, float w, GLuint shaderProgram);
    virtual ~GlEquilateralTriangle() noexcept;
    void Paint(glm::mat4& transform) const noexcept;
private:
    float m_radius;
    float m_w;

    GLuint m_vao;
    GLuint m_vertexVbo;
    GLuint m_colorVbo;
    GLint m_transform;
};

and here is the class definition:

#include "GlEquilateralTriangle.h"

GlEquilateralTriangle::GlEquilateralTriangle(float radius, float w, GLuint shaderProgram)
    : m_radius(radius), m_w(w)
{
    // create vertices for triangle centred at origin
    float x = m_radius * sin(60.0f * 3.1415926f / 180.0f);
    float y = m_radius * cos(60.0f * 3.1415926f / 180.0f);
    glm::vec4 vertices[] = {
        { -x, -y, 0.0f, w },
        { x, -y, 0.0f, w },
        { 0.0f, m_radius, 0.0f, w }
    };

    glm::vec4 vertexColors[] = {
        { 1.0f, 0.0f, 0.0f, 1.0f },
        { 0.0f, 1.0f, 0.0f, 1.0f },
        { 0.0f, 0.0f, 1.0f, 1.0f }
    };

    glGenVertexArrays(1, &m_vao);
    glBindVertexArray(m_vao);
    // upload vertex data
    glGenBuffers(1, &m_vertexVbo);
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // set up position attribute used in triangle vertex shader
    GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
    glVertexAttribPointer(posAttrib, 4, GL_FLOAT, GL_FALSE, 0, NULL);
    glEnableVertexAttribArray(posAttrib);
    // set up color attribute used in triangle vertex shader
    // upload color data
    glGenBuffers(1, &m_colorVbo);
    glBindBuffer(GL_ARRAY_BUFFER, m_colorVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexColors), vertexColors, GL_STATIC_DRAW);

    GLint colAttrib = glGetAttribLocation(shaderProgram, "inColor");
    glVertexAttribPointer(colAttrib, 4, GL_FLOAT, GL_FALSE, 0, NULL);
    glEnableVertexAttribArray(colAttrib);
    // set up the uniform arguments
    m_transform = glGetUniformLocation(shaderProgram, "transform");
    glBindVertexArray(0);           // unbind the VAO
}

GlEquilateralTriangle::~GlEquilateralTriangle() noexcept
{
    glDeleteBuffers(1, &m_colorVbo);
    glDeleteBuffers(1, &m_vertexVbo);
    glDeleteVertexArrays(1, &m_vao);
}

void GlEquilateralTriangle::Paint(glm::mat4& transform) const noexcept
{
    glBindVertexArray(m_vao);
    glUniformMatrix4fv(m_transform, 1, GL_FALSE, glm::value_ptr(transform));
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0);           // unbind the VAO
}

In the constructor, we create two arrays, one to specify the vertices of the triangle, and one to specify the colours assigned to each vertex. After creating a vertex array, the constructor creates a buffer for the vertices array, then sets the vertex position attribute. This is then repeated for the colour attribute.

The Paint method is very simple; it binds the vertex array for the triangle, passes the transform, and draws the triangle before releasing the vertex array.

Code is added to the CirclesAndRotators class to create a triangle (CreateTriangles method), build the triangle shader program (discussed earlier), and to display the triangle (in the OnPaint method). This code is added after the call to paint the second circle:

    glUseProgram(m_triangleShaderProgram);
    transform = glm::mat4();
    transform = glm::translate(transform, glm::vec3(-300.0f * sin(time * 1.0e-9f) / w, 35.0f / w, 0.0f / w));
    rotation = glm::rotate(rotation, time * 5e-9f, glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 scale;
    scale = glm::scale(scale, glm::vec3(0.5f, 0.5f, 0.5f));
    glm::mat4 triRotation;
    triRotation = glm::rotate(triRotation, time * 4e-10f, glm::vec3(0.0f, 0.0f, 1.0f));
    transrotate = triRotation * transform * rotation * scale;
    m_triangle1->Paint(transrotate);

Note the call to glUseProgram to switch to the triangle shader program. There is also a more complex transform created:

  1. The triangle is scaled to half of its size;
  2. The triangle is rotated about its centre as a function of time;
  3. The triangle is moved off-centre, with the distance being a function of time; and,
  4. The triangle is rotated about the centre of the canvas.

This transform provides the complex path followed by the triangle.

Finally, the Paint method is called to display the triangle.

The code for this version of the CirclesAndRotators program is available in the addtriangle branch on GitHub.

This ends coding for the CirclesAndRotators program. The next post will discuss various design and coding decisions that were made in creating this program.

Two Rotating Circles

At the end of the Moving The Circle Off Centre post, the CirclesAndRotators program displayed a circle located at pixel coordinates (220, -150) relative to the centre of the canvas. Before adding a second circle, we will refactor the program by adding a GlCircle class that encapsulates the properties and functionality of circles in OpenGL, and then we will add a rotational transformation to the circle. Finally, we will add a second rotating circle.

Refactoring To Use A GlCircle Class

The GlCircle Class

In OpenGL, there are three tasks:

  • Initialization;
  • Drawing; and,
  • Cleanup.

These map nicely in the GlCircle class to the constructor, the Paint method and the destructor. The simplest way to create a circle is to define it centred at the origin and then transform it to its desired location. This is the order in which you would do things in games and other programs where objects move. So, with the centre of the circle specified for all circles, the simplest way to define a circle is in terms of its radius. Two other arguments are required in the call to the constructor; these are the transformation value between object coordinates and device coordinates, and the shader program. You will see in a moment where the shader program is used.

In the CirclesAndRotators program, pixels are used as object coordinates, with the origin at the centre of the canvas that is drawn on. This means that the transformation value between object coordinates and device coordinates is simply the fourth value in 4D space (w).

The GlCircle class constructor will create the vertex array, buffers, and buffer data required by the GPU. This code is very much a simple copy of the code that was in CirclesAndRotatorsCanvas::CreateSquareForCircle and CirclesAndRotatorsCanvas::BuildCircleShaderProgram. Here is the GlCircle constructor:

GlCircle::GlCircle(float radius, float w, GLuint shaderProgram)
: m_radius(radius), m_w(w)
{
    // create vertices for circle centred at origin
    glm::vec4 vertices[] = {
        { -m_radius, -m_radius, 0.0f, w },
        { m_radius, -m_radius, 0.0f, w },
        { m_radius, m_radius, 0.0f, w },
        { -m_radius, m_radius, 0.0f, w }
    };
    // create elements that specify the vertices.
    GLint elements[6] = { 0, 1, 2, 2, 3, 0 };

    // circle is centred at the origin. Later, in the Paint method, we transform the location to
    // where we want it.
    // VAO
    glGenVertexArrays(1, &m_vao);
    glBindVertexArray(m_vao);
    // upload vertex data
    glGenBuffers(1, &m_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
    glBufferData(GL_ARRAY_BUFFER, 16 * sizeof(float), vertices, GL_STATIC_DRAW);
    // upload element data
    glGenBuffers(1, &m_ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, 6 * sizeof(GLint), elements, GL_STATIC_DRAW);

    // set up position attribute used in circle vertex shader
    GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
    glEnableVertexAttribArray(posAttrib);
    glVertexAttribPointer(posAttrib, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), 0);
    // set up the uniform arguments
    m_outerRadius = glGetUniformLocation(shaderProgram, "outerRadius");
    m_viewDimensions = glGetUniformLocation(shaderProgram, "viewDimensions");
    m_transform = glGetUniformLocation(shaderProgram, "transform");
    glBindVertexArray(0);           // unbind the VAO
}

The only things of note are:

  • The vertices for the square containing the circle are defined in terms of the radius of the circle;
  • A separate set of vertex arrays and buffers are created for each circle; and,
  • The vertex array object is unbound at the end of the constructor.

Unbinding the vertex array object ensures that the buffer contents are not accidentally altered outside the constructor.

The destructor simply deletes the vertex array object, the vertex buffer, and elements buffer.

The Paint method is straight-forward:

void GlCircle::Paint(glm::mat4& transform) const noexcept
{
    glBindVertexArray(m_vao);
    glUniformMatrix4fv(m_transform, 1, GL_FALSE, glm::value_ptr(transform));
    // set outer radius for circle here. We will be modulating it in later
    // example
    glUniform1f(m_outerRadius, m_radius);
    glUniform2f(m_viewDimensions, 2.0f * m_w, 2.0f * m_w);
    // draw the square that will contain the circle.
    // The circle is created inside the square in the circle fragment shader
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);           // unbind the VAO
}

In the Paint method, the vertex array for the circle is bound, the uniform values are transferred to the shader program, the circle is drawn, and the vertex array is unbound.

Here is the GlCircle class declaration:

#pragma once
#include "GL/glew.h"
#include "wx/glcanvas.h"
#define GLM_FORCE_CXX14
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

class GlCircle
{
public:
    GlCircle(float radius, float w, GLuint shaderProgram);
    virtual ~GlCircle() noexcept;
    void Paint(glm::mat4& transform) const noexcept;

private:
    float m_radius;
    float m_w;

    GLuint m_vao;
    GLuint m_vbo;
    GLuint m_ebo;
    GLint m_transform;
    GLint m_viewDimensions;
    GLint m_outerRadius;
};

Using the GlCircle Class

The CirclesAndRotatorsCanvas::CreateSquareForCircle method is renamed to CreateCircles, and is simplified to:

void CirclesAndRotatorsCanvas::CreateCircles()
{
    wxSize canvasSize = GetSize();
    float w = static_cast<float>(canvasSize.x) / 2.0f;
    m_circle1 = std::make_unique<GlCircle>(80.0f, w, m_circleShaderProgram);
}

The BuildCircleShaderProgram method is also simplified to:

void CirclesAndRotatorsCanvas::BuildCircleShaderProgram()
{
	// build the circle shaders
	BuildCircleVertexShader();
	BuildCircleFragmentShader();
	// create and link circle shader program
	m_circleShaderProgram = glCreateProgram();
	glAttachShader(m_circleShaderProgram, m_circleVertexShader);
	glAttachShader(m_circleShaderProgram, m_circleFragmentShader);
	glBindFragDataLocation(m_circleShaderProgram, 0, "outColor");
	glLinkProgram(m_circleShaderProgram);
}

Note that the code that used the shader program, but was not actually related to building it, has been removed.

The OnPaint method is modified to:

void CirclesAndRotatorsCanvas::OnPaint(wxPaintEvent& event)
{
    // set background to black
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // use the circleShaderProgram
    glUseProgram(m_circleShaderProgram);
    // set the transform
    wxSize canvasSize = GetSize();
    float w = static_cast<float>(canvasSize.x) / 2.0f;
    glm::mat4 transform;
    transform = glm::translate(transform, glm::vec3(220.0f / w, -150.0f / w, 0.0f / w));
    m_circle1->Paint(transform);
    glFlush();
    SwapBuffers();
}

With the changes made above, the order in which the two methods are called in the SetupGraphics method must be reversed. A number of changes are also required to the #include directives in the CirclesAndRotatorsCanvas and CirclesAndRotatorsFrame source files.

Building and running the program should result in the circle being displayed at (220, -150) as before.

Rotational Transformation

To add a rotation to the circle, replace the call to paint the circle in CirclesAndRotatorsCanvas::OnPaint with:

    glm::mat4 rotation;
    rotation = glm::rotate(rotation, glm::radians(180.0f), glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 transrotate;
    transrotate = rotation * transform;
    m_circle1->Paint(transrotate);

In this code, we create two new matrices: rotation and transrotate. The glm::rotate function is called to create a transformation matrix that rotates an object about the z-axis by 180 degrees. The transrotate matrix is the product of the rotation matrix and the translation (transform) matrix. Note that the translation is applied first and then the rotation.

Build and run the program. This is the result:

rotatecircle180

The circle was translated to (220, -150) then rotated about the centre of the canvas in a counter-clockwise direction 180° to (-220, 150).

Matrix multiplication is not commutative; that is,

    transrotate = rotation * transform;

does not produce the same result as

    transrotate = transform * rotation;

In this latter case, the rotation would be performed first and then the translation, but since the circle is initially centred at (0, 0), applying the rotation matrix simply rotates the circle about (0, 0), producing no visual change. The translation then moves the rotated circle to (220, -150).

Varying the Rotation Over Time

To vary the position of the circle over time, we need to create a timer that calls the CirclesAndRotatorsCanvas::OnPaint method, and in that method, calculate the rotation based on the time. Because I have previously described the changes needed to create and fire the timer, I will not repeat it here. See the Varying the Colour of the Triangle section of the OpenGL Shaders post for a description of the changes required to call the OnPaint method every time the timer fires.

In OnPaint, replace the line:

    rotation = glm::rotate(rotation, glm::radians(180.0f), glm::vec3(0.0f, 0.0f, 1.0f));

with:

    auto t_now = std::chrono::high_resolution_clock::now();
    auto time = (t_now - m_startTime).count();
    rotation = glm::rotate(rotation, time * 2.5e-10f, glm::vec3(0.0f, 0.0f, 1.0f));

The angle of rotation is now calculated based on the time since the CircleAndRotatorsCanvas object was created. The angle is calculated in radians, and the value 2.5e-10 was selected to provide a slow rate of rotation. Build and run the program to see the result.

Adding A Second Circle

We will be adding a second circle that rotates over top of the first circle. The fragment shader hard-codes the colour of the circle to green. In order to see the second circle as it rotates over the first, we need to set different colours for the two circles. Change the fragment shader program to accept the colour as a uniform vec4, and modify the code in OnPaint and GlCircle::Paint to set this value.

In CirclesAndRotatorsCanvas, add a second GlCircle called m_circle2, and add:

    m_circle2 = std::make_unique<GlCircle>(30.0f, w, m_circleShaderProgram);

after the definition of m_circle1 in CirclesAndRotatorsCanvas::CreateCircles.

Now, in OnPaint, after the call to m_circle1->Paint,
add the following:

    transform = glm::mat4();
    transform = glm::translate(transform, glm::vec3(-200.0f / w, -75.0f / w, 0.0f / w));
    rotation = glm::rotate(rotation, time * 5e-10f, glm::vec3(0.0f, 0.0f, 1.0f));
    transrotate = rotation * transform;
    m_circle2->Paint(transrotate, glm::vec4(1.0f, 0.5f, 0.0f, 1.0f));

This creates and displays a smaller, orange circle that rotates twice as quickly as the blue circle. See this video for the result:

The code for this program is located in the rotate branch on GitHub.

Moving The Circle Off Centre

If you have been following along with the previous posts on drawing circles:

  • Drawing Circles With OpenGL; and,
  • Device Coordinates and Object Coordinates,

you will note that the circle is located at the centre of the canvas (0, 0). In this post, we will move the circle away from the centre. In OpenGL, moving objects (and rotating and scaling objects) is normally done by defining a set of transformation matrices, and applying these transformations in the vertex shader.

The problem we have in the CirclesAndRotators program is that the circle is actually defined as a square, and the points that are outside the circle are discarded in the fragment shader. To determine if a point (pixel) is inside or outside the circle, the length of the vector from the origin to the fragment point is compared to the radius of the circle. So, if we transform the centre of the circle away from the centre of the canvas in the vertex shader, we must then transform each fragment point back to determine if it is inside the circle. To do that, we will have to pass the transformation matrix into the fragment shader.

Oh god, matrices, really? Well yes. Graphics programming is heavy on the mathematics, but don’t worry, there is help in the form of a matrix library written specifically for OpenGL called OpenGL Mathematics (GLM). See Transformations on learnopengl.com for a discussion of transformations, matrices, and GLM.

This post will be using that library, so go ahead and download the zip file from SourceForge, and extract it. GLM is a header only library, so there is no build to perform. Now add the location of the header files to the VC++ Include Directories list. Since you will be doing a lot of OpenGL programming, you should probably set up a Macro that defines the location where you extracted the library to, and add the include path as user-wide settings. See User-Wide Settings in Visual Studio 2015 if you do not know how to do that.

For more information on GLM, see the documentation in the doc subfolder that is downloaded with the library.

In this post, the transformation will consist only of a translation (moving the circle off centre); rotations and scaling will be discussed in a future post. We will pass the transform into both the vertex and the fragment shaders as a uniform. In the vertex shader, we apply the transform to each vertex, and in the fragment shader, we apply the transform to the origin to determine the centre of the circle. We need to transform the circle back to the origin to determine if a point is inside or outside the circle.

Here is the vertex shader:

    const GLchar* vertexSource =
        "#version 330 core\n"
        "in vec4 position;"
        "uniform mat4 transform;"
        "void main()"
        "{"
        "    gl_Position = transform * position;"
        "}";

Note that we apply the transformation to the vertex to determine the transformed position of the vertex.

And here is the fragment shader:

    const GLchar* fragmentSource =
        "#version 330 core\n"
        "uniform vec2 viewDimensions;"
        "uniform float outerRadius;"
        "uniform mat4 transform;"
        "out vec4 outColor;"
        "void main()"
        "{"
        // transform the center of the circle. We need this later to determine if the
        // fragment is inside or outside the circle.
        "   vec4 center = transform * vec4(0.0f, 0.0f, 0.0f, viewDimensions.x / 2.0f);"
        // translate fragment coordinate to coordinate relative to center of circle
        "   float x = gl_FragCoord.x - center.x - viewDimensions.x / 2.0f;"
        "   float y = gl_FragCoord.y - center.y - viewDimensions.y / 2.0f;"
        // discard fragment if outside the circle
        "   float len = sqrt(x * x + y * y);"
        "   if (len > outerRadius) {"
        "       discard;"
        "   }"
        // else set its colour to green
        "   outColor = vec4(0.0, 1.0, 0.0, 1.0);"
        "}";

In this shader, the transform matrix is multiplied by the vector describing the origin. The new x and y coordinates of the centre of the circle are then subtracted from the x and y fragment coordinates. These new coordinates are used to determine whether the fragment is inside or outside the circle.

In the CirclesAndRotatorsCanvas class, we only need to create the transform matrix and pass it to the GPU. In the header file, add a property called m_transform to the class. In the source file, retrieve the location of the transform uniform value.

Add the following lines in the header includes at the top of the source file:

#define GLM_FORCE_CXX14
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

GLM_FORCE_CXX14 tells GLM to use the C++14 language extensions because we are using a compiler that supports C++14.

Now add the following code to CirclesAndRotatorsCanvas::OnPaint:

    glm::mat4 transform;
    transform = glm::translate(transform, glm::vec3(220.0f / w, -150.0f / w, 0.0f / w));
    glUniformMatrix4fv(m_transform, 1, GL_FALSE, glm::value_ptr(transform));

This code creates a 4 by 4 matrix called transform. The second line sets transform to a translation matrix that will move the circle to (220, -150). Note that the second argument to glm::translate is a vec3 and not a vec4, so the x, y, and z values must already be divided by w. The third line passes the transform to the GPU.

I did make one other change to the code in the class. In the CreateSquareForCircle method, I changed the points definition from an array of floats to an array of vec4s. This change is not necessary; I just did it to show a different way of defining the vertices.

The source code is in the offcenter branch on GitHub.

And here is the result of running the program. The circle has been moved from the center of the canvas (0, 0) to (220, -150).

offcentrecircle

Device Coordinates and Object Coordinates

All of the OpenGL posts so far have defined objects (points, lines, and triangles) in two dimensions with values in device coordinates. Using device coordinates, values that are displayed are limited to the following ranges:

  • x between [-1, +1]
  • y between [-1, +1]

where the square brackets indicate that the edge values are included in the range.

If you remember the post where we drew a circle at the origin, the vertices of the square containing the circle were specified in these device coordinates. However, the fragment shader provides the coordinates of the fragment it is working on as pixel coordinates. The graphics pipeline determines the pixel coordinate of each fragment inside each object before calling the fragment shader.

That is fine, but device coordinates are not always the most convenient coordinates to work in. The universe you are describing in your program may naturally fit into a different set of coordinates; for example, it may be more convenient to use pixel coordinates when specifying the vertices of the triangles that make up your objects. You may do so, but you must also provide the conversion factor from your object coordinates to device coordinates.

To do that, you must specify each vertex as a set of 4 values. In OpenGL these values are referred to as x, y, z, and w. In your shaders, these values are contained in a vec4. Your object coordinates are converted to the following device coordinates: x/w, y/w, and z/w. This conversion is handled for you in the graphics pipeline.

Let’s look at an example: the program that draws a circle at the origin in device coordinates. This program is described in the Drawing Circles With OpenGL post. In that program, the vertices of the square that contains the circle were defined using device coordinates as:

float points[] = {
    -0.2f, -0.2f,
    0.2f, -0.2f,
    0.2f, 0.2f,
    -0.2f, 0.2f
};

To convert these values to canvas-relative pixel coordinates we need to know the size of the canvas (the wxGLCanvas object). We defined that as 800 by 800 pixels when it was created in the window object that contains the canvas. Therefore, the viewable dimensions are -400 to +400 in both x and y directions. That means that we can change the coordinates of the vertices to:

float points[] = {
    -80.0f, -80.0f,
    80.0f, -80.0f,
    80.0f, 80.0f,
    -80.0f, 80.0f
};

The only problem with this definition is that we have not specified the conversion value w. To do that, we must specify each point in 4-D, with the fourth value being w. When using anything other than device coordinates, we must also specify the z-coordinate of each vertex. Since we are dealing with only a two-dimensional world, we will set the z-coordinate of every vertex to 0.0f. Our vertex values then become:

float points[] = {
    -80.0f, -80.0f, 0.0f, w,
    80.0f, -80.0f, 0.0f, w,
    80.0f, 80.0f, 0.0f, w,
    -80.0f, 80.0f, 0.0f, w
};

In fact, in the original program, we added the z (0.0) and w (1.0) values in the vertex shader. That is the only thing that the vertex shader did.

Now all we need is the value of w. We could either hard-code the value to 400.0f, or we could take the value as half the width of the canvas (800/2).
We will use the latter method. Add the following code before the definition of points:

wxSize canvasSize = GetSize();
float w = static_cast<float>(canvasSize.x) / 2.0f;

The only other changes needed to the program are to:

  • define outerRadius in terms of pixels;
  • modify the vertex shader to accept the vertex as a vec4 rather than a vec2;
  • modify the fragment shader to accept the radius in pixels; and,
  • change the glVertexAttribPointer call to indicate that each vertex is specified as 4 floating point values.

In CirclesAndRotatorsCanvas::OnPaint, change the circle radius to 80.0f:

glUniform1f(m_circleOuterRadius, 80.0f);

The vertex shader program becomes:

const GLchar* vertexSource =
    "#version 330 core\n"
    "in vec4 position;"
    "void main()"
    "{"
    "    gl_Position = position;"
    "}";

The fragment shader program changes to:

const GLchar* fragmentSource =
    "#version 330 core\n"
    "uniform vec2 viewDimensions;"
    "uniform float outerRadius;"
    "out vec4 outColor;"
    "void main()"
    "{"
    "    float x = gl_FragCoord.x - viewDimensions.x / 2.0f;"
    "    float y = gl_FragCoord.y - viewDimensions.y / 2.0f;"
    // discard the fragment if it is outside the circle
    "    float len = sqrt(x * x + y * y);"
    "    if (len > outerRadius) {"
    "        discard;"
    "    }"
    // otherwise set its colour to green
    "    outColor = vec4(0.0, 1.0, 0.0, 1.0);"
    "}";

The only changes are the calculations of x and y.

In CirclesAndRotatorsCanvas::BuildCircleShaderProgram, change the glVertexAttribPointer call to:

glVertexAttribPointer(posAttrib, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(GLfloat), 0);

If you compile and run the program, you will again get a green circle displayed at the centre of the window.

This example modified the program to create an object using pixel coordinates instead of device coordinates, but you could use any object coordinates that make sense in your universe. You just have to change the definitions of your vertices, and possibly your vertex and fragment shaders, to use values in those object coordinates.

The source code, as modified, is available in the objectview branch on GitHub.