ChaosExplorer Wrapup

This is the final post in the ChaosExplorer series. ChaosExplorer was created to explore a number of chaotic systems and to render the displays for those systems in fragment shaders using OpenGL and GLSL. Four different fractal formulae were used to generate fractals and their corresponding Julia Sets. By those measures, I would call ChaosExplorer a success. The program is not commercial grade, however; there are many changes that could still be made to it.

  1. There are many more fractal equations that could be used. In fact, Fractal Formulas Researched lists more than 1000 different fractal formulae. There are some duplicates in the list, but still more formulae than you are likely to investigate. When choosing a formula from that list, remember that z0 = 0, so make sure that z1 is not always also 0, or you will simply get a black display. Also, for formulae containing both a numerator and a denominator, make sure that the denominator does not evaluate to 0, or you will simply get a solid orange display.
  2. Additional fractals without Julia Sets include:

    Each of these can be generated using GLSL.

  3. ChaosExplorer currently displays different colours based solely on the number of iterations, so there is a step difference between each colour. The colour could instead be chosen based on the value of z after a fixed number of iterations, which would potentially produce a much smoother transition between colours.
  4. ChaosExplorer could be refactored to place additional functionality in the panel base classes. It may be possible to entirely eliminate the individual derived panels in all cases except the MultibrotPanel.
  5. There are a few places in the code where magic numbers are used. These should be refactored to use named constants.

Those are just a few ways that ChaosExplorer could be improved. Since my goals in creating this program have been met, I will leave those modifications to you.


The Fractal dropdown menu in the ChaosExplorer program contains 4 menu items:

  • z^p + c
  • (z^3+1)/(1+cz^2)
  • c*e^z, and
  • z^3-z^2+z+c

Each of these menu items displays the fractal (z0 = 0.0 + 0.0i, varying c over the complex plane). Selecting the first menu item displays the Mandelbrot Set; selecting any of the others displays the fractal for the indicated formula. This post specifically shows the fractal and Julia Sets for the second menu item, whose formula is:

z_(n+1) = (z_n^3 + 1) / (1 + c·z_n^2)

Here is the fractal display for the region -10.0 ≤ x ≤ 10.0, -10.0 ≤ y ≤ 10.0:


And here are 4 Julia Set displays for points in this fractal:





As you can see, the fractal and Julia Sets are very different from the Mandelbrot/Multibrot fractals and Julia Sets.

Julia Sets

Julia Sets are another family of images that can be created by iterating a dynamic equation. If you want all of the ugly mathematics, see the Wikipedia entry on Julia Set. If you take the time to understand all of it, you are a better person than I am. For a gentler explanation, I suggest Julia and Mandelbrot Sets, and Understanding Julia and Mandelbrot Sets.

That’s all well and good, but how do we create a Julia Set mathematically? Certainly not by attempting to follow the Wikipedia entry! For a Julia Set that corresponds to the Mandelbrot Set, we use the same equation but generate the plot over a different plane. Remember the equation:

z_(n+1) = z_n^p + c

For the Mandelbrot Set, z_0 = 0 + 0i and p = 2, and we set c to each point in the section of the complex plane that we are looking at. For the Julia Set, we instead fix c at a single value in the complex plane, set z_0 to each point in the section of the plane we are looking at, and iterate. For every point c inside the Mandelbrot Set, the corresponding Julia Set is connected; for each point outside the Mandelbrot Set, the Julia Set is disconnected.

The most interesting images are generated from points outside the Mandelbrot Set or just inside it. This image is generated for a point near the centre of the secondary bulb:


and this image is generated for a point inside one of the small bulbs off the main bulb:


This image is generated for a point just outside the Mandelbrot Set and slightly above the x-axis:


Here are a couple of images created from the tendrils of the Mandelbrot Set:



and finally, an image from a short distance outside the Mandelbrot Set:


Each value, c, produces a different image. To generate an image for a point in the Mandelbrot Set using the ChaosExplorer program, place the mouse cursor over the point, click the right mouse button, and select the Julia Set menu item. The image is generated in a panel in a new notebook tab. You can also zoom in on the image by selecting an area to zoom, and then selecting the Draw From Selection menu item. The source code and a wiki are available on GitHub.

This is the fragment shader for generating these Julia Sets:

    std::string fragmentSource =
        "#version 330 core\n"
        "uniform vec2 c;"
        "uniform vec2 p;"
        "uniform vec2 ul;"
        "uniform vec2 lr;"
        "uniform vec2 viewDimensions;"
        "uniform vec4 color[50];"
        "out vec4 OutColor;"
        "vec2 iPower(vec2 vec, vec2 p)"
        "{"
        "    float r = sqrt(vec.x * vec.x + vec.y * vec.y);"
        "    if (r == 0.0f) {"
        "        return vec2(0.0f, 0.0f);"
        "    }"
        "    float theta = vec.y != 0 ? atan(vec.y, vec.x) : (vec.x < 0 ? 3.14159265f : 0.0f);"
        "    float imt = -p.y * theta;"
        "    float rpowr = pow(r, p.x);"
        "    float angle = p.x * theta + p.y * log(r);"
        "    vec2 powr;"
        "    powr.x = rpowr * exp(imt) * cos(angle);"
        "    powr.y = rpowr * exp(imt) * sin(angle);"
        "    return powr;"
        "}"
        "void main()"
        "{"
        "    float x = ul.x + (lr.x - ul.x) * gl_FragCoord.x / (viewDimensions.x - 1);"
        "    float y = lr.y + (ul.y - lr.y) * gl_FragCoord.y / (viewDimensions.y - 1);"
        "    vec2 z = vec2(x, y);"
        "    int i = 0;"
        "    while (z.x * z.x + z.y * z.y < 4.0f && i < 200) {"
        "        z = iPower(z, p) + c;"
        "        ++i;"
        "    }"
        "    OutColor = i == 200 ? vec4(0.0f, 0.0f, 0.0f, 1.0f) :"
        "        color[(i + 49) % 50];"   // (i - 1) mod 50, in range even for i == 0
        "}";

If you compare this to the fragment shader for the Multibrot Sets included in the post Mandelbrot Set, you will see few changes.


This post discussed Julia Sets and showed images of some Julia Sets generated from several points inside and outside the Mandelbrot Set. The fragment shader that is used to generate the images was also shown.

CirclesAndRotators Design And Coding Decisions

The previous five posts, starting with Drawing Circles With OpenGL and finishing with Adding a Moving Triangle, created an increasingly complex program that displayed two circles and a triangle moving about the drawing canvas. A number of design and coding decisions were made when the program was written. This post will discuss these decisions, and in so doing help describe some of the efficiencies and inefficiencies that can result when using OpenGL.

OpenGL C++ Libraries?

The CirclesAndRotators program uses the OpenGL C API directly (well, through GLEW). There are C++ wrapper libraries available, but they were not used because I am just starting to learn OpenGL, and introducing wrapper libraries at this stage would confuse the issue (and probably reduce the meagre readership that I currently have).

There are at least three C++ wrapper libraries:

  1. OGLplus;
  2. OOGL; and,
  3. glbinding.

All of these are open source with source code on GitHub. OGLplus is actively maintained, while OOGL has not had any updates for two years and has bugs that have been outstanding for four years. glbinding uses C++11 features such as enum classes, lambdas, and variadic templates rather than macros. It is both actively maintained and available with commercial support if needed. I leave it to you to decide whether to use any of these, but before doing so, read this discussion from StackOverflow.

How A Circle is Defined

It is possible to define a circle in terms of many triangles with one vertex at the centre of the circle and the other two vertices on the edge of the circle. The more triangles you use, the more closely the drawn object looks like a circle rather than a polygon.

I chose instead to simply define the smallest square that completely encloses the circle, and, in the fragment shader, to discard pixels outside the circle. This method works for circles of any size; it also illustrates some of what it is possible to do in shaders.


In the OnPaint method in CirclesAndRotatorsCanvas, glUseProgram is called to set the circle shader program, then the two circles are painted. glUseProgram is called again, this time to set the triangle shader program before the triangle is painted.

An alternative would be to place the glUseProgram calls in the GlCircle and GlEquilateralTriangle Paint methods. However, glUseProgram is expensive in terms of the work the GPU must perform to switch shader programs, so the fewer program switches, the better your program will perform. In the CirclesAndRotators program you will not perceive any difference between having the glUseProgram calls in OnPaint and having them in the Paint methods for each object; however, once a program has many thousands of objects, there will be a noticeable difference in performance. In general, call glUseProgram once, paint all of the objects that use that shader program, then call glUseProgram for the next set of objects, and so forth.

Perform Transform in CPU or GPU?

Most graphics programs are designed such that objects are created to be centred on the origin of the display canvas, and then transformed to their final location; this is especially true for objects that are moved about the canvas whenever a frame is painted.

There are two places these transformations can be performed:

  1. By the CPU before moving the vertex data to the GPU; and,
  2. By the GPU.

The second is preferred for a number of reasons:

  1. CPUs typically have between 2 and 16 logical cores. You could create multiple threads to calculate the final position of each vertex and pass the vertices to the GPU in vertex buffers each time the frame is painted, but how quickly you can transform the vertices is limited by the number of logical cores in the CPU. Alternatively, you could pass the initial vertices in a vertex buffer once, and pass the transform via a uniform each time the frame is painted. Modern GPUs are optimized for parallel processing and may contain hundreds or thousands of cores; for complex objects that contain many vertices, performing the transformation in the GPU is much more efficient.
  2. Transforming the vertices in the CPU performs work that could be done in a vertex shader, but what happens if you need access to the transform elsewhere in the graphics pipeline? For example, in the circle fragment shader in the CirclesAndRotators program, the transform is required so that each fragment can be tested to determine whether it is inside or outside the circle. That transform information is not available to the fragment shader if the vertices are transformed by the CPU.
  3. For programs that display a large number of objects, each containing a large number of vertices, the vertex buffers that would be transferred are very large in comparison to the 16 values of a transformation matrix. Moving data to the GPU is a relatively time-consuming operation whose duration depends in part on the size of the data being moved. Transporting a large amount of vertex data every frame takes far more time than transporting it once and then passing only a small transform each frame.

For the reasons given above, it is generally better to perform transformations in shaders (by the GPU) rather than in the program (by the CPU).

Where To Apply Multiple Transforms

In the CirclesAndRotators program, there are two transforms created for each circle that are then multiplied together to form a composite transform. For the triangle, there are four transforms created. These are then multiplied together to form a composite transform. The composite transforms are passed to the shaders where they are applied to the vertices in the vertex shader, and also in the fragment shader for circles.

An alternative would be to pass all of the individual transforms to the shaders and multiply them together there. However, the result of multiplying the individual transforms together is always the same each time a frame is drawn, so multiplying the transforms together for each vertex, and each fragment in the case of the circles, is performing a large amount of unnecessary work. Therefore, only the composite transforms should be passed as uniforms to the shaders.

Note also that in the circle fragment shader, the centre of the circle is calculated for every fragment, even though the result is always the same while a circle is being painted. This calculation should instead be done once on the CPU, with the result passed as a uniform.


This post has discussed some of the design and coding decisions that I made when writing the CirclesAndRotators program. A number of options were discussed, and the better choices with respect to drawing efficiency were noted.

Adding A Moving Triangle

This is a follow-on from Two Rotating Circles. In this post, we will be adding a triangle that rotates about its centre, is scaled to half of its size, and is translated a variable distance from the centre of the canvas while rotating about the centre of the canvas. The code in the CirclesAndRotators program is modified to use multiple shader programs, and to use multiple vertex buffers to specify the data that is passed to the triangle’s vertex shader. Here is a video of the resulting display:

We have created a triangle like this before in OpenGL Shaders. In the program generated for that post, the vertices and the colours for the vertices were passed in a single buffer to the vertex shader. In order to illustrate the use of multiple buffers, we will pass the vertices and colours in two separate buffers. Here is the vertex shader for the triangle:

    const GLchar* vertexSource =
        "#version 330 core\n"
        "in vec4 position;"
        "in vec4 inColor;"
        "out vec4 outColor;"
        "uniform mat4 transform;"
        "void main()"
        "{"
        "    gl_Position = transform * position;"
        "    outColor = inColor;"
        "}";

and here is the fragment shader:

    const GLchar* fragmentSource =
        "#version 330 core\n"
        "in vec4 outColor;"
        "out vec4 OutColor;"
        "void main()"
        "{"
        "    OutColor = outColor;"
        "}";

As you can see, these are simple shaders. The only differences between these shaders and the shaders in the OpenGL Shaders post are the two vec4 attributes and the uniform for passing the transform into the vertex shader.

The code for compiling the shaders and creating the shader program is the same as for the circle shaders and circle shader program, so they will not be shown here. See the source code in GitHub or the Drawing Circles With OpenGL post.

In the last post, we created a GlCircle class to encapsulate the functionality to create and draw circles. We will do the same for the triangle. Here is the declaration of the GlEquilateralTriangle class:

#pragma once
#include "GL/glew.h"
#include "wx/glcanvas.h"
#define GLM_FORCE_CXX14
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

class GlEquilateralTriangle
{
public:
    GlEquilateralTriangle(float radius, float w, GLuint shaderProgram);
    virtual ~GlEquilateralTriangle() noexcept;
    void Paint(glm::mat4& transform) const noexcept;

private:
    float m_radius;
    float m_w;

    GLuint m_vao;
    GLuint m_vertexVbo;
    GLuint m_colorVbo;
    GLint m_transform;
};

and here is the class definition:

#include "GlEquilateralTriangle.h"

GlEquilateralTriangle::GlEquilateralTriangle(float radius, float w, GLuint shaderProgram)
    : m_radius(radius), m_w(w)
{
    // create vertices for triangle centred at origin
    float x = m_radius * sin(60.0f * 3.1415926f / 180.0f);
    float y = m_radius * cos(60.0f * 3.1415926f / 180.0f);
    glm::vec4 vertices[] = {
        { -x, -y, 0.0f, w },
        { x, -y, 0.0f, w },
        { 0.0f, m_radius, 0.0f, w }
    };
    glm::vec4 vertexColors[] = {
        { 1.0f, 0.0f, 0.0f, 1.0f },
        { 0.0f, 1.0f, 0.0f, 1.0f },
        { 0.0f, 0.0f, 1.0f, 1.0f }
    };
    glGenVertexArrays(1, &m_vao);
    glBindVertexArray(m_vao);
    // upload vertex data
    glGenBuffers(1, &m_vertexVbo);
    glBindBuffer(GL_ARRAY_BUFFER, m_vertexVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // set up position attribute used in triangle vertex shader
    GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
    glEnableVertexAttribArray(posAttrib);
    glVertexAttribPointer(posAttrib, 4, GL_FLOAT, GL_FALSE, 0, NULL);

    // upload color data
    glGenBuffers(1, &m_colorVbo);
    glBindBuffer(GL_ARRAY_BUFFER, m_colorVbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexColors), vertexColors, GL_STATIC_DRAW);

    // set up color attribute used in triangle vertex shader
    GLint colAttrib = glGetAttribLocation(shaderProgram, "inColor");
    glEnableVertexAttribArray(colAttrib);
    glVertexAttribPointer(colAttrib, 4, GL_FLOAT, GL_FALSE, 0, NULL);

    // set up the uniform arguments
    m_transform = glGetUniformLocation(shaderProgram, "transform");
    glBindVertexArray(0);           // unbind the VAO
}

GlEquilateralTriangle::~GlEquilateralTriangle() noexcept
{
    glDeleteBuffers(1, &m_colorVbo);
    glDeleteBuffers(1, &m_vertexVbo);
    glDeleteVertexArrays(1, &m_vao);
}

void GlEquilateralTriangle::Paint(glm::mat4& transform) const noexcept
{
    glBindVertexArray(m_vao);
    glUniformMatrix4fv(m_transform, 1, GL_FALSE, glm::value_ptr(transform));
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0);           // unbind the VAO
}

In the constructor, we create two arrays, one to specify the vertices of the triangle, and one to specify the colours assigned to each vertex. After creating a vertex array, the constructor creates a buffer for the vertices array, then sets the vertex position attribute. This is then repeated for the colour attribute.

The Paint method is very simple; it binds the vertex array for the triangle, passes the transform, and draws the triangle before releasing the vertex array.

Code is added to the CirclesAndRotators class to create a triangle (CreateTriangles method), build the triangle shader program (discussed earlier), and to display the triangle (in the OnPaint method). This code is added after the call to paint the second circle:

    transform = glm::mat4();
    transform = glm::translate(transform, glm::vec3(-300.0f * sin(time * 1.0e-9f) / w, 35.0f / w, 0.0f / w));
    rotation = glm::rotate(rotation, time * 5e-9f, glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 scale;
    scale = glm::scale(scale, glm::vec3(0.5f, 0.5f, 0.5f));
    glm::mat4 triRotation;
    triRotation = glm::rotate(triRotation, time * 4e-10f, glm::vec3(0.0f, 0.0f, 1.0f));
    transrotate = triRotation * transform * rotation * scale;

Note the call to glUseProgram (not shown in the excerpt above) that switches to the triangle shader program. There is also a more complex transform created:

  1. The triangle is scaled to half of its size;
  2. The triangle is rotated about its centre as a function of time;
  3. The triangle is moved off-centre, with the distance being a function of time; and,
  4. The triangle is rotated about the centre of the canvas.

This transform provides the complex path followed by the triangle.

Finally, the Paint method is called to display the triangle.

The code for this version of the CirclesAndRotators program is available in the addtriangle branch on GitHub.

This ends coding for the CirclesAndRotators program. The next post will discuss various design and coding decisions that were made in creating this program.