Drawing Circles With OpenGL

The only shapes that OpenGL can draw are points, lines, and triangles, so how do you draw circles and other shapes? A reasonable approximation of a circle can be drawn using many, many triangles, each with one vertex at the centre of the circle and the other two vertices shared with the neighbouring triangles on the outer edge of the circle. Triangles of course have straight sides, but the more triangles you use to define the circle, and therefore the closer together the vertices on the edge of the circle are, the more closely they represent a rounded edge. But that is a lot of work, and there is a better way: define a square whose sides are equal to the diameter of the circle you want to draw, and have the fragment shader determine whether each point it draws is inside or outside the circle. That is what we will do in this post.
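As an aside, the many-triangles approximation can be sketched on the CPU. This is a hypothetical helper (ApproximateCircle is my name for it, not part of the program developed below) that generates the vertex list for a triangle fan:

```cpp
#include <cmath>
#include <vector>

// Generate a triangle-fan approximation of a circle: one centre vertex plus
// `segments` vertices on the edge. Each adjacent pair of edge vertices forms
// a triangle with the centre; more segments give a smoother circle.
std::vector<float> ApproximateCircle(float cx, float cy, float radius, int segments)
{
    std::vector<float> vertices;
    vertices.push_back(cx);     // centre vertex, shared by every triangle
    vertices.push_back(cy);
    const float pi = 3.14159265358979f;
    for (int i = 0; i <= segments; ++i) {
        // the last edge vertex repeats the first to close the fan
        float angle = 2.0f * pi * i / segments;
        vertices.push_back(cx + radius * std::cos(angle));
        vertices.push_back(cy + radius * std::sin(angle));
    }
    return vertices;
}
```

Drawn with GL_TRIANGLE_FAN, these segments + 2 vertices produce `segments` triangles; even at 32 segments the straight edges are noticeable up close, which is why the fragment shader approach below is attractive.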

The program that we will create in this post will just draw a green circle at the centre of the view, but we will be modifying the program over time to display circles and other objects at various locations in the view and rotate them around the centre of the view. New techniques will be described as they are used.

Drawing A Circle

Start by creating a new empty project in Visual Studio called CirclesAndRotators, then add three new classes to it called CirclesAndRotatorsApp, CirclesAndRotatorsFrame, and CirclesAndRotatorsCanvas. Follow the steps outlined in:

  1. Creating wxWidgets Programs with Visual Studio 2015 – Part 1
  2. Visual Studio, wxWidgets, and OpenGL
  3. HelloTriangle, and
  4. OpenGL Shaders.

replacing class names as appropriate.

A number of the methods in the classes remain the same, so they will not be repeated in this post. However, the source of the full program is provided if you find you are having problems.

CirclesAndRotatorsApp.OnInit is modified to place code in a try-catch block because additional exceptions may be thrown. The catch block simply displays a message box with the text of the exception. Here is the code:

bool CirclesAndRotatorsApp::OnInit()
{
    try {
        CirclesAndRotatorsFrame* mainFrame = new CirclesAndRotatorsFrame(nullptr, L"Circles and Rotators");
        mainFrame->Show(true);
    }
    catch (std::exception& e) {
        wxMessageBox(e.what(), "CirclesAndRotators");
    }
    return true;
}

The CirclesAndRotatorsFrame constructor creates a CirclesAndRotatorsCanvas object that is 800 by 800 pixels in size. While this size can be changed, the code in this program assumes that the canvas is square (i.e. has the same number of pixels in both x and y directions). If you do not create a square canvas, you will have to modify the program to compensate.

As with the other programs I have shown so far, the majority of the code is in the canvas class.

The BuildCircleVertexShader method builds the vertex shader for the circle. Here is the code:

void CirclesAndRotatorsCanvas::BuildCircleVertexShader()
{
    const GLchar* vertexSource =
        "#version 330 core\n"
        "in vec2 position;"
        "void main()"
        "{"
        "    gl_Position = vec4(position, 0.0, 1.0);"
        "}";
    m_circleVertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(m_circleVertexShader, 1, &vertexSource, NULL);
    glCompileShader(m_circleVertexShader);
    CheckShaderCompileStatus(m_circleVertexShader, "Circle Vertex Shader did not compile.");
}

This simple vertex shader takes the two-dimensional position of the vertex and stores it in the gl_Position global variable. You have seen the rest of the code in the previous posts, except for the call to CheckShaderCompileStatus. This method checks whether the shader compiled, and throws an exception if it did not:

void CirclesAndRotatorsCanvas::CheckShaderCompileStatus(GLuint shader, const std::string& msg) const
{
    // check shader compile status, and throw exception if compile failed
    GLint status;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE) {
        throw std::exception(msg.c_str());
    }
}

Note: The status returned by glGetShaderiv contains only GL_TRUE or GL_FALSE. If GL_FALSE, then you can call glGetShaderInfoLog to retrieve information on the error. Do not expect detailed error messages. This would perhaps be a good topic for a future post.

The BuildCircleFragmentShader method determines whether the fragment (pixel) being displayed is inside or outside the circle. Here is the source code for the method:

void CirclesAndRotatorsCanvas::BuildCircleFragmentShader()
{
    const GLchar* fragmentSource =
        "#version 330 core\n"
        "uniform vec2 viewDimensions;"
        "uniform float outerRadius;"
        "out vec4 outColor;"
        "void main()"
        "{"
        // convert fragment coordinate (i.e. pixel) to view coordinate
        "    float x = (gl_FragCoord.x - viewDimensions.x / 2.0f) / (viewDimensions.x / 2.0f);"
        "    float y = (gl_FragCoord.y - viewDimensions.y / 2.0f) / (viewDimensions.y / 2.0f);"
        // discard fragment if outside the circle
        "    float len = sqrt(x * x + y * y);"
        "    if (len > outerRadius) {"
        "        discard;"
        "    }"
        // else set its colour to green
        "    outColor = vec4(0.0, 1.0, 0.0, 1.0);"
        "}";
    m_circleFragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(m_circleFragmentShader, 1, &fragmentSource, NULL);
    glCompileShader(m_circleFragmentShader);
    CheckShaderCompileStatus(m_circleFragmentShader, "Circle Fragment Shader did not compile");
}

In order to determine if the fragment is inside or outside the circle, we need three values: the position of the fragment, given in gl_FragCoord, the radius of the circle, given by the uniform value outerRadius, and the dimensions of the canvas. gl_FragCoord gives the location of the pixel containing the fragment, but outerRadius is the radius of the circle in device coordinates (x and y between -1 and +1), so the dimensions of the canvas are required to convert between the pixel coordinates and the device coordinates.

The lines that define x and y convert the gl_FragCoord x and y coordinates into device coordinates. The length of the vector (len) from the origin (centre of the view) to the fragment is calculated and compared with the radius of the circle. If the fragment is inside the circle, then the pixel colour is set to green, and if the fragment is outside the circle, the fragment is discarded. If you wish, rather than discard the fragment, you could set it to a different colour than green or the background colour so that you can see the square that contains the circle.
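The shader's conversion and test can be reproduced on the CPU to check the arithmetic. This is an illustrative sketch (the function names are mine, not part of the program), assuming an 800 by 800 pixel canvas:

```cpp
#include <cmath>

// Convert a pixel coordinate (as found in gl_FragCoord) to a device
// coordinate in the range -1..+1, mirroring the fragment shader.
float PixelToDevice(float pixel, float viewDimension)
{
    return (pixel - viewDimension / 2.0f) / (viewDimension / 2.0f);
}

// Replicate the shader's inside/outside test for a single fragment.
bool InsideCircle(float fragX, float fragY, float width, float height,
                  float outerRadius)
{
    float x = PixelToDevice(fragX, width);
    float y = PixelToDevice(fragY, height);
    return std::sqrt(x * x + y * y) <= outerRadius;
}
```

For example, the centre pixel (400, 400) maps to (0, 0) and is inside any circle, while pixel (800, 400) maps to (1, 0) and is discarded when outerRadius is 0.2.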

The BuildCircleShaderProgram method first calls BuildCircleVertexShader and BuildCircleFragmentShader, then links the shaders to create the shader program. The location of the position attribute input to the vertex shader is obtained and then enabled. The locations of the two uniform variables input to the fragment shader are obtained next, and finally, the size of the canvas is set (viewDimensions in the fragment shader). Note: the size of the canvas is not modifiable, so this uniform value needs to be set only once. Here is the source code for the BuildCircleShaderProgram method:

void CirclesAndRotatorsCanvas::BuildCircleShaderProgram()
{
    // build the circle shaders
    BuildCircleVertexShader();
    BuildCircleFragmentShader();
    // create and link circle shader program
    m_circleShaderProgram = glCreateProgram();
    glAttachShader(m_circleShaderProgram, m_circleVertexShader);
    glAttachShader(m_circleShaderProgram, m_circleFragmentShader);
    glBindFragDataLocation(m_circleShaderProgram, 0, "outColor");
    glLinkProgram(m_circleShaderProgram);
    glUseProgram(m_circleShaderProgram);

    // set up position attribute used in circle vertex shader
    GLint posAttrib = glGetAttribLocation(m_circleShaderProgram, "position");
    glEnableVertexAttribArray(posAttrib);
    glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);
    // set up the uniform arguments
    m_circleOuterRadius = glGetUniformLocation(m_circleShaderProgram, "outerRadius");
    m_viewDimensions = glGetUniformLocation(m_circleShaderProgram, "viewDimensions");
    // The canvas size is fixed (and should be square), so initialize the value here
    wxSize canvasSize = GetSize();
    glUniform2f(m_viewDimensions, static_cast<GLfloat>(canvasSize.x),
        static_cast<GLfloat>(canvasSize.y));
}

The fragment shader determines if a fragment is inside or outside of the circle. But to generate fragments in the first place, we have to define the square that contains the circle. This is done in the CreateSquareForCircle method:

void CirclesAndRotatorsCanvas::CreateSquareForCircle()
{
    // define vertices for the two triangles
    float points[] = {
        -0.2f, -0.2f,
        0.2f, -0.2f,
        0.2f, 0.2f,
        -0.2f, 0.2f
    };
    // define the indices for the triangles
    GLuint elements[] = {
        0, 1, 2,
        2, 3, 0
    };

    // setup vertex array object
    glGenVertexArrays(1, &m_circleVao);
    glBindVertexArray(m_circleVao);
    // upload vertex data
    glGenBuffers(1, &m_circleVbo);
    glBindBuffer(GL_ARRAY_BUFFER, m_circleVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(points), points, GL_STATIC_DRAW);
    // upload element data
    glGenBuffers(1, &m_circleEbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, m_circleEbo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(elements), elements, GL_STATIC_DRAW);
}

Wait a minute. To form a square, we need two triangles which have a total of 6 vertices, but the points array only contains 4 vertices. Yes, that is true, but note that the first triangle is defined by the vertices v0, v1, and v2, and the second triangle is defined by the vertices v2, v3, and v0. Rather than requiring that these vertices be restated, OpenGL has the concept of elements. Note that the elements array contains 6 values that correspond to the six vertex numbers for the two triangles. Below these definitions, there are two calls to glBindBuffer and glBufferData, one for the points array that specifies the type as GL_ARRAY_BUFFER, and the second for the elements array that specifies the type as GL_ELEMENT_ARRAY_BUFFER. This tells the GPU that the first buffer contains vertex data and the second buffer contains an array that specifies which vertices to use when drawing.
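To see what glDrawElements does with the two buffers, the lookup can be mimicked on the CPU. A hypothetical helper, not part of the program:

```cpp
#include <vector>

// Expand indexed 2D vertices into a flat triangle list, as the GPU does when
// glDrawElements walks the element buffer: each element is an index into the
// vertex array, so shared vertices are stored once but used many times.
std::vector<float> ExpandElements(const std::vector<float>& points,
                                  const std::vector<unsigned int>& elements)
{
    std::vector<float> expanded;
    for (unsigned int index : elements) {
        expanded.push_back(points[2 * index]);      // x of vertex `index`
        expanded.push_back(points[2 * index + 1]);  // y of vertex `index`
    }
    return expanded;
}
```

Feeding in the points and elements arrays from CreateSquareForCircle yields the six vertices (twelve floats) of the two triangles, with v0 and v2 each appearing twice.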

That seems like extra work, and more buffer space in the GPU. Assuming 4 bytes required for each float and each unsigned int in the GPU, defining 6 vertices results in 48 bytes of buffer space. Using 4 vertices and 6 elements, there is a total of 56 bytes, so there is more buffer space required in this case. But what happens if the vertices are specified in 3D, as would be the case in most OpenGL programs? For just these two triangles, specifying the triangles using only the points array uses 72 bytes; specifying the triangles using both points and elements arrays, again uses 72 bytes. Now add a third triangle that shares 2 vertices with other triangles: we have 108 bytes versus 96 bytes. As the number of shared vertices increases, the use of elements increases the savings. Since a normal OpenGL program will define hundreds or even thousands of attached triangles, the saving can become quite substantial.
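The arithmetic above can be verified with two small helper functions (the names are mine, for illustration only), assuming 4 bytes per float and per unsigned int:

```cpp
// Buffer space for a plain triangle list: every vertex is stored,
// even when triangles share vertices.
unsigned int NonIndexedBytes(unsigned int vertexCount, unsigned int floatsPerVertex)
{
    return vertexCount * floatsPerVertex * 4;
}

// Buffer space for an indexed triangle list: unique vertices are stored
// once, plus one 4-byte element per vertex reference.
unsigned int IndexedBytes(unsigned int uniqueVertices, unsigned int floatsPerVertex,
                          unsigned int elementCount)
{
    return uniqueVertices * floatsPerVertex * 4 + elementCount * 4;
}
```

For the 2D square, NonIndexedBytes(6, 2) is 48 versus IndexedBytes(4, 2, 6) at 56; in 3D both come to 72; with a third triangle sharing two vertices in 3D the comparison is 108 versus 96, and the savings grow from there.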

Finally, here is the code that draws the square, resulting in the circle:

void CirclesAndRotatorsCanvas::OnPaint(wxPaintEvent& event)
{
    // set background to black
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    // use the circleShaderProgram
    glUseProgram(m_circleShaderProgram);
    // set outer radius for circle here. We will be modulating it in a later
    // example
    glUniform1f(m_circleOuterRadius, 0.2f);
    // draw the square that will contain the circle.
    // The circle is created inside the square in the circle fragment shader
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
    SwapBuffers();
}

We have seen most of this before. The only differences are these three lines:

    glUseProgram(m_circleShaderProgram);
    glUniform1f(m_circleOuterRadius, 0.2f);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

In previous example programs, glUseProgram was called immediately after the shader program was linked. In those cases, only one shader program was created. As we develop the CirclesAndRotators program further, we will create additional shader programs. One shader program will be used for drawing some shapes, and additional shader programs will be used when drawing other shapes. Hence, we need to call glUseProgram for the appropriate shader program before drawing any objects that use that shader program.

In previous examples, glDrawArrays was called to draw the objects. If we called glDrawArrays here, only one triangle would be drawn. Try it and see. To use elements, we must call glDrawElements instead.

The only remaining task is to release the GPU resources in the CirclesAndRotatorsCanvas destructor.

Here is the resulting display:


The source code for the program is provided in the master branch on GitHub.


Improving Visual Studio Solution Build Times – Multiple Processors

Visual Studio is a powerful Integrated Development Environment for developing software for the Windows OS environment and, with its latest release (2015, Update 1), a number of tablet and phone OSes as well. As with all IDEs, the time required to build a solution using Visual Studio increases with the number of source files and projects in the solution. There are a number of different techniques that can be used to improve this build time. This post will discuss one of them: using multiple processes and processors for the build.

MSBuild.exe is the builder used by Visual Studio to build solutions. MSBuild has a command line switch that tells MSBuild to use multiple processors if available to build the loaded solution. There is also a commercial product, Incredibuild, that integrates with Visual Studio to provide a different way of building the solution and can offload work to additional computers on the network.

Visual Studio and Multiple Processors

Visual Studio provides a property for each C++ project that controls whether MSBuild uses multiple processes to build the various projects in a solution. This setting is found on the Properties dialog for each project. The image below shows the Properties Pages for a C++ project. The setting is found under the C/C++ -> General item, and is called Multi-processor Compilation (the last item in the panel on the right of the dialog). There are three choices:

  • No
  • Yes (/MP)
  • <inherit from parent or project defaults>


No obviously means use a single process to perform the compiles, Yes means use as many processes as possible, and the third says to inherit the setting. You may use the technique discussed in User Wide Settings in Visual Studio 2015 to set a default setting for all new projects and for any projects that inherit the setting.

Let’s now look at the Yes setting. Using this switch, a number of processes are created to perform the build. The number of processes created, by default, is equal to the number of virtual processors on the computer performing the build (number of CPUs times the number of cores per CPU [times 2 if the cores support hyperthreading]). You can determine this number by starting Task Manager and selecting the Performance tab and the CPU item. The lower right of the displayed panel contains information about the CPUs, with the Logical processors line containing the number of logical (virtual) processors as shown in this image:
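If you prefer to query the count programmatically rather than through Task Manager, the C++ standard library exposes it. A minimal sketch; note that std::thread::hardware_concurrency is permitted to return 0 when the count cannot be determined:

```cpp
#include <thread>

// Number of logical (virtual) processors: CPUs x cores per CPU, doubled
// again when the cores support hyperthreading. Returns 0 if the count
// cannot be determined.
unsigned int LogicalProcessorCount()
{
    return std::thread::hardware_concurrency();
}
```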


As you can see, on my computer, I have a total of 8 logical processors, so by default, Visual Studio would use up to 8 processes to build a solution.

It is possible to limit the number of processes. More information about this option is available in /MP (Build With Multiple Processes).

Visual Studio and Incredibuild

Incredibuild is a commercial product that accelerates builds by taking advantage of the multiple logical processors in the host computer, and if available, the processors in other computers on your network. At the start of each build, Incredibuild performs an analysis to determine the best way to build the Visual Studio solution, and divides the compiles and links among the available processors.

See Incredibuild 7.0 Review and How Incredible is Incredibuild for reviews of this product.

One advantage of Incredibuild over a number of other build accelerators is that Incredibuild integrates right into Visual Studio. As noted in the last review above, one disadvantage is that linking files compiled with Incredibuild with those compiled via the normal build process may create executable files that crash. More information is provided in that review.

Build Accelerator Measurements

It’s nice to talk about multi-processor builds and Incredibuild, but how do they really compare? Some people claim great improvements in build time for very large solutions, and others say build times are worse with Incredibuild. I will perform a set of experiments to measure the build times for two open source software solutions that are the size that a typical user would encounter in his personal programming activities. Note: You should not extrapolate these results to much larger projects; you should repeat these measurements on your own software and hardware.

For each of the open source solutions, six separate measurements will be taken:

  1. Solution rebuild without using multi-processes;
  2. Solution rebuild using multi-processes;
  3. Solution rebuild using Incredibuild;
  4. Incremental build without using multi-processes;
  5. Incremental build using multi-processes; and,
  6. Incremental build using Incredibuild.

Test Setup


All measurements are made on a single laptop that contains the following components. This is typical for a higher end laptop built within the last two years. Specifically, the laptop contains:

  • Intel Core i7-4710MQ CPU with 8 logical processors. More information about the CPU is displayed in the first image above this text.
  • 8 GB DDR3 RAM running at 1600 MHz.
  • 1 TB hard drive spinning at 5400 RPM.


  • Visual Studio 2015 Community Edition (Update 1)
  • Incredibuild 7.1
  • Other software is running on the laptop during these tests, such as a web browser, an email client program, and a number of tasks that run in the background. This mix would be typical for a developer.

Source Code for the Measurements

The two open source software solutions used for these measurements are:

  • wxWidgets 3.1.0. wxWidgets contains 24 projects that generate 23 x64 Debug lib files.
  • libtins. libtins contains 4 projects that generate one lib file. The solution is built using the x86 Debug configuration.

Rebuild Test Procedure

  1. Visual Studio is started and the solution is loaded.
  2. The solution is built and the build time is measured. For the Incredibuild builds, the time is taken from the Incredibuild monitor panel; for other builds, the time is measured using a clock. In both cases, the times should be accurate to within one second.
  3. Visual Studio is terminated.
  4. Steps 1, 2, and 3 are repeated five times for each build type and for each solution.

The procedure above was chosen for two reasons:

  1. The first rebuild performed after a solution is loaded takes longer than subsequent rebuilds. While I could have performed one or two solution builds before measuring build times, I chose to use the procedure outlined above. In either case, the measured values relative to one another would be the same; only the actual measured values change.
  2. Additional software is running on the computer used for these tests. While no direct user interaction was performed with other programs during these tests, the other software may run a variable amount. For example, an email may be received by the email client program. The tests are performed a number of times to average out these variations.
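Averaging the repeated runs to smooth out these variations takes only a small helper (illustrative only; the values used below are placeholders, not my measurements):

```cpp
#include <numeric>
#include <vector>

// Mean of a set of repeated build-time measurements, in seconds. Repeating
// the measurement and averaging smooths out variation caused by background
// activity on the test machine.
double MeanBuildTime(const std::vector<double>& seconds)
{
    return std::accumulate(seconds.begin(), seconds.end(), 0.0) /
        static_cast<double>(seconds.size());
}
```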

Incremental Build Test Procedure

  1. Visual Studio is started and the solution is loaded.
  2. The solution is rebuilt to ensure that all object, lib, and executable files are compatible. See the discussion of advantages and disadvantages of Incredibuild in the section above titled “Visual Studio and Incredibuild”.
  3. A single blank line is added or removed at the end of one of the source files. Addition and removal are alternated between runs. The source file chosen resulted in the compilation of that file and the linking of a single lib file.
  4. A build is performed and the time taken is measured. For the Incredibuild builds, the time is taken from the Incredibuild monitor panel; for other builds, the time is measured using a clock.
  5. Steps 3 and 4 are repeated for a total of five measurements.
  6. Steps 2 through 5 are repeated for each of the build types (single process, multi-process, and Incredibuild).





The graphs above show the rebuild times in seconds for the indicated build types.


Incremental Builds


This graph shows the Incredibuild incremental build times for wxWidgets. The build times using a single process were less than 1 second, and the build times using multiple processes were just over 1 second. The tests were not repeated for libtins; however, you can assume that the results would be similar since only a single source file was modified.

Discussion of Results


The procedure for the rebuilds measurements was chosen because it would typically mimic the procedure used for the integration of daily or more frequent changes in an agile development process.

It is interesting to note that the variation in rebuild times for the Incredibuild builds was the greatest. Also, the Incredibuild times were greater than the build times for the multi-process builds. The Incredibuild method starts with a “Preparing Build Tasks” task that takes several seconds to complete; the similar task in MSBuild for the multi-process builds takes much less time, but no doubt results in a different distribution of build tasks over the logical processors.

The build times for single-process builds are longer than the build times for multi-process builds, as would be expected. The relative difference in build times for libtins is what I would have expected; the surprising results are for the wxWidgets rebuilds. Many times more files are compiled and libraries built, so I would have expected the difference between the single-process and multi-process build times for wxWidgets to be even greater, yet the differences are relatively minor. Also, the single-process build times for wxWidgets beat the Incredibuild build times for the same solution.

One of the possibly major advantages for Incredibuild is the splitting of build tasks across multiple computers on a network. It was not possible to test this, so no conclusion can be drawn about the build times using that scenario.

From the reviews and from testimonials on the Incredibuild web site, Incredibuild seems to shine for very large projects; that is, for projects that take many minutes to build. A very large solution was not selected for these tests because that does not fit the types of solution that I work on. As mentioned above, you may want to repeat these tests for your own solutions.

Incremental Builds

The incremental build times for single-process and multi-process builds were difficult to measure using a clock. However, the single-process build times were definitely less than one second, and the multi-process build times were more than one second, but considerably less than two seconds.

Is this what one would expect? Shouldn’t the multi-process build time be less than the single process build time? In this case, no. Remember that a single source file is being compiled, and a single lib file is being linked. The linking cannot be performed until the source file has been compiled, so no parallel processing can be performed. The extra time required for the potential multi-process build is the time that MSBuild takes to determine that no parallel processing can be performed.

The extra time required for the Incredibuild builds was divided between determining the build tasks and the time taken to compile the source file. The time required to determine the build tasks is understandable; however, the much longer compile time was unexpected.

What Was Not Tested

The computer that the tests were run on has a single CPU with 4 cores and hyperthreading support for a total of 8 logical processors. All 8 logical processors share the same cache, which may slow down the build processes. A second physically separate CPU would be expected to improve build times more than multiple cores on a single CPU, but this could not be tested.

Similarly, one of the potential advantages of Incredibuild is its ability to use the idle cycles of the CPUs on other computers on the same network to perform some of the build tasks. This also could not be tested.

Conclusions



  1. For those situations where multiple tasks (compiles, links) can be performed at the same time, setting the multi-processor switch in Visual Studio will reduce build times over the times of single-process builds.
  2. For situations where multiple tasks cannot be performed concurrently, there is a penalty for using the multi-processor switch. However, the number of times that multiple tasks cannot be performed concurrently relative to the number of times that multiple tasks can be performed concurrently dictates that setting the multi-processor switch is the recommended procedure.
  3. Incredibuild increases the build times for the standard “make a small change, build, and test” process that most developers use, and therefore Incredibuild should not be used for these development steps.
  4. For these open source projects and these specific test conditions, Incredibuild provided no improvement over the times for multi-process rebuilds. That does not indicate that Incredibuild does not improve the rebuild times for many other, and especially larger, projects. There may be great advantages to using Incredibuild for integration builds, but you will have to perform your own tests.
  5. The open source projects used in these tests are typical of the type of software that I develop. Because Incredibuild provided no improvement in the build times for either of these projects, I will not be using Incredibuild in my build procedures.




I have spent the last week or so playing with Vulkan, “the next generation OpenGL initiative”. Vulkan is very new; the Vulkan 1.0 specification was released on February 16, 2016. It has a number of advantages over OpenGL; for example, it is a lower level API, similar to Direct3D in its use. Its features include:

  • Reduced driver and CPU overhead;
  • CPU scaling to multiple core CPUs. OpenGL was originally designed for single CPUs, and scaling is difficult and poorly implemented;
  • Shaders may still be GLSL-based, but the shaders are precompiled, rather than compiled in the program. In future, other shader languages and compilers should become available; and,
  • The Vulkan SDK is available for hardware from mobile devices up through high-end graphics cards.

For more information on Vulkan, see:

Sounds great, but there are cons:

  • You have more control, but more control means you will be writing more C code than you would for OpenGL;
  • Graphics driver support is spotty. For example, at the time I write this, nVidia and AMD support some of their graphics cards, but not all. nVidia’s Windows driver version 356.45 is the only version that supports Vulkan, but their most current Windows driver is at least 361.60. Intel has no Windows drivers that implement Vulkan, and only beta drivers for Linux.
  • There are not a lot of sample programs available yet, and of course, no books. There does seem to be a number of Java examples.

Here are a couple of beginner guides:

and here are links to the Vulkan SDK, and an open-source C++ wrapper from nVidia.

Update – May 20, 2016

Intel has released a Windows 10 beta driver with Vulkan support, but this driver only supports Vulkan on their 6th generation Intel Core processors with HD Graphics 510, 515, 520, 530, Intel® Iris™ Graphics 540, Intel® Iris™ Graphics 550, and Intel® Iris™ Pro Graphics 580. Since I do not have a computer with any of these processors, I am still out of luck.

nVidia has also updated their Windows graphics drivers to support the GeForce 800M series. My computer has a GeForce 840M GPU, so I am now able to use the Vulkan libraries. I performed a quick test, and the test suite that I created before this original Vulkan post now compiles, links, and executes without error.

I will begin using Vulkan to build graphics programs once my series of posts on Chaotic Systems and Fractals are completed.

Great, I had some fun. So what did I learn from my playing?

  1. The current version of the Vulkan specification is 1.0.5, the C++ wrapper supports version 1.0.4, and the only available SDK supports an earlier version still. There is a mismatch here.
  2. Intel support for Vulkan is pretty much non-existent. My computer has an integrated Intel GPU.
  3. nVidia does not currently support Vulkan for their 800 series GPUs, and there is considerable question as to whether they ever will. Of course, my computer also has an nVidia 840M GPU.
  4. Using the Vulkan API, I was able to query the GPU for information about its capabilities; unfortunately, I cannot acquire a surface for drawing on for my nVidia GPU.

So I am stuck at this point. I do have a few options:

  1. Wait and hope that nVidia comes to its senses and supports the 800 series GPUs.
  2. Upgrade my nVidia graphics card to an nVidia card that supports Vulkan; or maybe upgrade to an AMD graphics card.
  3. Wait for Intel to publish Windows drivers for their graphics cards. My laptop computer also has an integrated Intel GPU so this is a potential option.
  4. Go back to OpenGL and concentrate on shaders, since knowledge of shaders and the shader language are transferable to Vulkan.

I think I will choose options 1, 3, and maybe 4.


OpenGL Shaders

Page 10 of OpenGL Programming Guide, Eighth Edition shows the rendering pipeline for OpenGL version 4.3. You will notice that there are 5 shaders shown:

  1. Vertex shader;
  2. Tessellation control shader;
  3. Tessellation evaluation shader;
  4. Geometry shader; and,
  5. Fragment shader.


The first four shaders are used to determine the position of the various primitives on the screen and the fragment shader determines the colour of each primitive. This is a simplified view; the vertex and geometry shaders can also affect the colours, but final colour selection is performed in the fragment shader.

Vertex Shader

For each vertex issued by the drawing program, the vertex shader is called. The purpose of the vertex shader is to output the final vertex position in device coordinates and any data that the fragment shader requires. This shader may do no more than pass the vertex on to the next stage, or it may do more complex tasks such as computing the vertex's screen position.

Tessellation Shaders

The tessellation shaders are optional. These shaders may increase the number of geometric primitives to better describe the models.

Geometry Shader

The geometry shader is also optional. This shader allows the processing, and creation if necessary, of additional geometric primitives.

Fragment Shader

The output from the previous shaders is interpolated over all of the pixels on the screen that are covered by a primitive. The fragment shader determines the final colour of these pixels, or fragments. It also determines whether a fragment should be drawn at all.

HelloTriangle With Vertex and Fragment Shaders

Let’s add a vertex and a fragment shader to the HelloTriangle program. When we are done, the triangle will still be the same size and colour (white), but the program changes will make it easier to make further changes in the future.

Load TriangleCanvas.h and TriangleCanvas.cpp from the HelloTriangle project. In the TriangleCanvas class, add the following method declarations in the private section of the class:

    void BuildShaderProgram();
    void BuildVertexShader();
    void BuildFragmentShader();

and the following data members:

    GLuint m_vertexShader;
    GLuint m_fragmentShader;
    GLuint m_shaderProgram;

In TriangleCanvas.cpp, add the BuildVertexShader method:

void TriangleCanvas::BuildVertexShader()
{
    const GLchar* vertexSource =
        "#version 330 core\n"
        "in vec2 position;"
        "void main()"
        "{"
        "    gl_Position = vec4(position, 0.0, 1.0);"
        "}";
    m_vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(m_vertexShader, 1, &vertexSource, NULL);
    glCompileShader(m_vertexShader);
}

vertexSource is the source code for the vertex shader. Shaders are written in a C-like language called GLSL. The first line, “#version 330 core\n”, tells OpenGL to use the core functionality of version 3.30 of GLSL. Starting with OpenGL version 3.3, the GLSL version number matches the OpenGL version number.

The next line declares position as an input parameter of type vec2, a vector of two floating point numbers. The remaining lines contain the shader body that runs for each vertex passed in. gl_Position is a vec4 (a vector of four floats) holding the values x, y, z, and w. x, y, and z are the three dimensions in 3D space; the fourth component, w, is the divisor that converts each coordinate into device coordinates: the three device-space coordinates are x/w, y/w, and z/w. Because the vertices are already normalized to device coordinates, w is set to 1.0. If you look back at how the vertices are defined in the program (in TriangleCanvas::SetupGraphics), you will notice that only x and y coordinates are provided; the “in vec2 position” declaration states that we are passing in only those two values. In main, these are expanded into the four components OpenGL needs by adding a z coordinate of 0.0 and a w value of 1.0. gl_Position is the GLSL built-in variable that passes the vertex position onwards.

This vertex shader is a simple pass-through shader; that is, it passes the input values through to the next stage in the rendering process.

The final three lines in the BuildVertexShader method create and compile the shader.

Now add the BuildFragmentShader method:

void TriangleCanvas::BuildFragmentShader()
{
    const GLchar* fragmentSource =
        "#version 330 core\n"
        "out vec4 outColor;"
        "void main()"
        "{"
        "    outColor = vec4(1.0, 1.0, 1.0, 1.0);"
        "}";
    m_fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(m_fragmentShader, 1, &fragmentSource, NULL);
    glCompileShader(m_fragmentShader);
}

This time we provide the fragment shader. The line “out vec4 outColor;” says that the shader outputs a vec4 containing the colour in which to display the pixel. In main, outColor is set to a vec4 containing the red, green, blue, and alpha values of that colour; in this case, every pixel is set to white. Again, the final three lines of the method create and compile the fragment shader.

The BuildShaderProgram method contains:

void TriangleCanvas::BuildShaderProgram()
{
    BuildVertexShader();
    BuildFragmentShader();
    m_shaderProgram = glCreateProgram();
    glAttachShader(m_shaderProgram, m_vertexShader);
    glAttachShader(m_shaderProgram, m_fragmentShader);
    glBindFragDataLocation(m_shaderProgram, 0, "outColor");
    glLinkProgram(m_shaderProgram);
    glUseProgram(m_shaderProgram);

    GLint posAttrib = glGetAttribLocation(m_shaderProgram, "position");
    glEnableVertexAttribArray(posAttrib);
    glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), 0);
}

This method calls the methods that build the two shaders, creates a shader program consisting of those shaders, and binds the outColor variable to the fragment shader’s output. The shader program is then linked, and OpenGL is told to use it.

The final three lines tie the two-dimensional vertices to the input of the vertex shader.

The final changes required to use the shaders are made to the SetupGraphics method. The last three lines of the method are replaced with a call to BuildShaderProgram().

Compiling and running the program should produce the same display as before the shaders were added. If you cannot determine what the problem with your program is, compare it to TriangleCanvas with Shaders.

Fun With Shaders

Invert Triangle

Inverting the triangle using shaders is easy. Just change the vertex shader to contain

"    gl_Position = vec4(position.x, -position.y, 0.0, 1.0);"

Inverted Green Triangle

Right now, the triangle is white because that colour is hard-coded into the fragment shader. To set the colour of the triangle in the program rather than in the shader, change the fragment shader to this:

    const GLchar* fragmentSource =
        "#version 330 core\n"
        "uniform vec3 triColor;"
        "out vec4 outColor;"
        "void main()"
        "{"
        "    outColor = vec4(triColor, 1.0);"
        "}";

The line "uniform vec3 triColor;" tells the shader that the variable triColor comes from the program and may change over time. And of course, in the main function, outColor has been changed to use triColor.

Three changes are required in the program to define and use triColor. In TriangleCanvas.h, add the following data member:

    GLint m_uniColor;

In TriangleCanvas::BuildShaderProgram add:

	m_uniColor = glGetUniformLocation(m_shaderProgram, "triColor");

as the last line. This ties m_uniColor to the location of the uniform variable triColor.

And in TriangleCanvas::OnPaint, add the line:

	glUniform3f(m_uniColor, 0.0f, 1.0f, 0.0f);

before the triangle is drawn. This sets the triColor variable in the fragment shader to green. The resulting display should be:


Varying the Colour of the Triangle

Now the triangle is green but that is pretty boring. Let’s modify the program so that the colour changes over time.

For the timing, we will be using a wxTimer object and a timer event handler. In TriangleCanvas.h, add a timer event handler:

	void OnTimer(wxTimerEvent& event);

and two timer related constants:

    static const int INTERVAL = 1000 / 60;
    static const int TIMERNUMBER = 3;

The interval will be used to force the timer to fire at about 60 times per second. Now add a pointer to a timer object:

    std::unique_ptr<wxTimer> m_timer;

In the TriangleCanvas constructor, tie the timer event to its handler:

    Bind(wxEVT_TIMER, &TriangleCanvas::OnTimer, this);

and at the bottom of the constructor add the code needed to create the timer and start it:

    m_timer = std::make_unique<wxTimer>(this, TIMERNUMBER);
    m_timer->Start(INTERVAL);

In the destructor, stop the timer:

    m_timer->Stop();
Add the timer event handler:

void TriangleCanvas::OnTimer(wxTimerEvent& event)
{
    Refresh(false);
}

All this handler does is invalidate the canvas, which causes the paint event to fire.

If you build and run the program now, OnPaint will execute approximately 60 times per second. You will not see any change because the colour of the triangle remains green.

Let’s add code to change the colour every time that the timer fires. In TriangleCanvas.h, add a time_point object to specify the start time:

    std::chrono::time_point<std::chrono::high_resolution_clock> m_startTime;

In the TriangleCanvas constructor, set the start time before the timer is created:

    m_startTime = std::chrono::high_resolution_clock::now();

and in OnPaint, replace the colour setting code with:

    auto t_now = std::chrono::high_resolution_clock::now();
    float time = std::chrono::duration_cast<std::chrono::duration<float>>(t_now - m_startTime).count();
    glUniform3f(m_uniColor, (sin(time * 1.0f) + 1.0f) / 2.0f, (sin(time * 0.5f) + 1.0f) / 2.0f,
        (cos(time * 0.25f) + 1.0f) / 2.0f);

Now build and run the program. The colour of the triangle should change at a rate of about 60 times per second, through all of the various combinations of intensities of red, green and blue. If you cannot get the program to work and cannot determine why, see the code listing in Varying Triangle Colours.

Blended Colour Triangle

The previous examples handled one colour at a time. This example starts with a different colour in each corner of the triangle and interpolates the colours for each pixel in the triangle.

This example starts back with the HelloTriangle program that contains the vertex and fragment shaders (the one that displays a white triangle, above). To begin, we need to define a colour for each vertex in the triangle. We will do that by adding the colour to each vertex defined in the points array:

    float points[] = {
        0.0f, 0.5f, 1.0f, 0.0f, 0.0f,        // red vertex
        0.5f, -0.5f, 0.0f, 1.0f, 0.0f,       // green vertex
        -0.5f, -0.5f, 0.0f, 0.0f, 1.0f       // blue vertex
    };

Here are the vertex and fragment programs:

    const GLchar* vertexSource =
        "#version 330 core\n"
        "in vec2 position;"
        "in vec3 color;"
        "out vec3 Color;"
        "void main()"
        "{"
        "    gl_Position = vec4(position, 0.0, 1.0);"
        "    Color = color;"
        "}";
    const GLchar* fragmentSource =
        "#version 330 core\n"
        "in vec3 Color;"
        "out vec4 outColor;"
        "void main()"
        "{"
        "    outColor = vec4(Color, 1.0);"
        "}";

You should note that an input parameter and an output parameter have been added to the vertex shader program. The input color is simply passed through to the output Color parameter.

In the fragment shader, the output parameter from the vertex shader is added as an input; the names of the two parameters must match. outColor is now calculated from this input parameter.

One final change is required. Since color is a new input and is specified in the points array, we have to tie the color parameter to the colour specification for each vertex. Also, since each vertex now is specified with five values rather than two, the position pointer must be modified to show that. Replace the last line in the BuildShaderProgram method with the following code:

    glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat), 0);

    GLint colAttrib = glGetAttribLocation(m_shaderProgram, "color");
    glEnableVertexAttribArray(colAttrib);
    glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GLfloat),
        (void*)(2 * sizeof(GLfloat)));

The colours could just as easily have been specified in a different array, but placing them all in the same array makes sure that the position and colour values do not get out of step.

Build and run the program. The output should look like this:



This post described the shaders that are used in the graphics pipeline. The vertex and fragment shaders were modified to illustrate some of the control that these shaders can provide to the program output.

User-Wide Settings in Visual Studio 2015

Back in my post on Creating wxWidgets programs with Visual Studio – Part 1, I included instructions for setting a system environment variable for the wxWidgets directory, and instructions for adding Include and Library directories for projects in Visual Studio. These latter instructions must be repeated for every project that uses wxWidgets. Thanks to a suggestion from legalize (see his comment on the post linked above), I will provide instructions to define a user macro from within Visual Studio 2015 that provides this variable on a per-user basis. I will also provide instructions for setting up the Include and Library directories across all projects and solutions on a per-user basis.

We will be creating user-wide settings for Visual Studio. Illogically, Microsoft has hidden the interface for these settings in the property manager for a single project. To begin, start Visual Studio 2015 and load a solution, or create a new one. Now select the View->Property Manager menu item. This opens the Property Manager tab as shown in this image for the HelloWorld sample program.


When you first display the Property Manager tab, only HelloWorld will be visible. Click on the expander for HelloWorld to show the list of configurations, then expand one of the configurations that you will be building. In the image above, I have opened the Debug | x64 configuration. If your solution contains more than one project, each project will be listed; it is only necessary to open one of the project expanders. Note that the first item under Debug | x64 is Microsoft.Cpp.x64.user. For the Win32-related configurations, the first item is Microsoft.Cpp.Win32.user. Double-click on this item to open the Microsoft.Cpp.x64.user Property Pages or the Microsoft.Cpp.Win32.user Property Pages:


Click on the User Macros item to display the image above. Now click on the Add Macro button to open the Add User Macro dialog. Fill in the name of the macro (WXWIN for wxWidgets), the macro value (C:\wxWidgets on my computer), and check the checkbox. The dialog will look like this:


Click the OK button to close the dialog. The property pages dialog now looks like this:


As you can see, the macro has been added. Add any other macros that you need.

Click on VC++ Directories, and click on Include Directories. See the image below.


Click on the down arrow box at the end of the Include Directories line and select <Edit…> to open the Include Directories dialog.

Double-click in the empty scroll box at the top of the dialog and enter $(WXWIN)\include. Double-click below this line and enter $(WXWIN)\include\msvc. Press Enter, and the dialog box will look like this:


Click the OK button to close the Include Directories dialog, and then click the Apply button. Now select Library Directories and open the Library Directories dialog. Enter the path to the wxWidgets library files (e.g. $(WXWIN)\lib\vc_x64_dll if you created the wxWidgets x64 DLL configuration). Click the OK button to close the Library Directories dialog, and then the Apply and OK buttons on the Property Pages dialog.

If you now expand the Release | x64 configuration and open the Microsoft.Cpp.x64.user Property Pages dialog, you will see that the user macro and the VC++ Directories are listed here as well. If you also build Win32 configurations, you will have to repeat all of this procedure for Microsoft.Cpp.Win32.user. The properties that you set are saved in the %LOCALAPPDATA%\Microsoft\MSBuild\v4.0\Microsoft.Cpp.x64.user.props file for x64 configurations, and %LOCALAPPDATA%\Microsoft\MSBuild\v4.0\Microsoft.Cpp.Win32.user.props for Win32 configurations.

The next time you open a different solution or create a new solution, these properties will be included.