OpenGL: Why Is Your Code Producing a Black Window?

One of the most common problems for novice, and sometimes experienced, OpenGL programmers is that they write new rendering code, run it, and it displays… nothing at all. Being more active on message boards again lately (Stack Overflow and the OpenGL Discussion Boards), I noticed that this still accounts for a large share of the issues people ask about.

Diagnosing and solving these types of issues is often tricky because they can be caused by mistakes in almost any part of the OpenGL pipeline. This article explains some common reasons for the infamous “Black Window” problem, as well as approaches to track down these types of issues.

The following sections are mostly based on working with the Core Profile, which includes writing your own shaders. Much of it will apply to the fixed pipeline as well. If you want to make the transition to the Core Profile, check out my article about the topic.

Check for Errors

With hundreds of API calls, it can easily happen that a wrong argument is passed to one of them. Or, less obviously, that an API call is made without all of its preconditions being met. You can check for these types of errors, and should do so routinely. The primary function for doing this is glGetError(). A useful approach is to have an error check that is always executed in debug builds at strategic places in your code. If you use assert macros, add a line like this, for example at the end of rendering a frame:

ASSERT(glGetError() == GL_NO_ERROR);

If the assert triggers, temporarily add more of them across your code, gradually narrowing down which call causes the error: the offending call lies between the last check that did not report an error and the first one that did. Once you have isolated the error to a single call, reading the documentation for that call, and looking at the exact error code that was returned, should normally make it clear what went wrong.
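One way to make this convenient is a small macro along these lines. This is just a sketch, assuming the usual OpenGL headers are included; the macro name and the reporting details are placeholders of my own, not anything required by OpenGL:

#include <cassert>
#include <cstdio>

// Debug check: report the error code and the source location, then assert.
#define GL_CHECK()                                              \
    do {                                                        \
        GLenum err = glGetError();                              \
        if (err != GL_NO_ERROR) {                               \
            fprintf(stderr, "GL error 0x%04x at %s:%d\n",       \
                    err, __FILE__, __LINE__);                   \
            assert(false);                                      \
        }                                                       \
    } while (0)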

Another important area of error checking is shader compilation and linking. At least for debug builds, check the shader compilation status after each shader compilation, using:

GLint status = 0;
glGetShaderiv(shaderId, GL_COMPILE_STATUS, &status);

If status is GL_FALSE, you can retrieve the error messages using:

GLint logLen = 0;
glGetShaderiv(shaderId, GL_INFO_LOG_LENGTH, &logLen);
GLchar* logStr = new GLchar[logLen];
glGetShaderInfoLog(shaderId, logLen, 0, logStr);
// report logStr
delete[] logStr;

Checking success after linking the program looks similar, except that you use glGetProgramiv(), GL_LINK_STATUS, and glGetProgramInfoLog().
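Spelled out, this could look like the following, mirroring the compile check above (programId is the id of the linked program):

GLint status = 0;
glGetProgramiv(programId, GL_LINK_STATUS, &status);
if (status == GL_FALSE) {
    GLint logLen = 0;
    glGetProgramiv(programId, GL_INFO_LOG_LENGTH, &logLen);
    GLchar* logStr = new GLchar[logLen];
    glGetProgramInfoLog(programId, logLen, 0, logStr);
    // report logStr
    delete[] logStr;
}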

When working with Framebuffer Objects (FBOs), there is another useful error check. After you have finished setting up your FBO, check for success with:

ASSERT(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE);

Geometry Not in Field of View

This is by far one of the most common mistakes, and unfortunately not easy to track down. In this case, the whole rendering pipeline is correctly set up, but the geometry is not in the coordinate range being mapped to the window.

The only good way to fix, or ideally avoid, this issue is a solid understanding of OpenGL coordinate systems and transformations. It is often best to start simple by placing the geometry around the origin, and placing the camera on the positive z-axis, pointing towards the origin. There is no need to use a projection transformation to get something to appear on the screen. Once this works, you can progress to setting up more advanced transformations, like perspective projections.
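For example, a minimal vertex shader for this starting point simply forwards the coordinates unchanged (the attribute name position is my own choice); any geometry with coordinates between -1.0 and 1.0 on all axes will then be visible:

#version 330 core
in vec3 position;
void main()
{
    // No transformations at all: the coordinates are used directly as
    // clip-space coordinates, so anything within -1.0..1.0 shows up.
    gl_Position = vec4(position, 1.0);
}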

Common causes for the geometry not being in the field of view include:

  • The geometry has coordinates that are far away from the origin, while the camera points at the origin. In this case, you will either have to modify the coordinates of your geometry, apply a translation to move the geometry to the origin, or point your camera at the geometry.
  • The camera is inside the object, and backface culling is enabled. This can easily happen if no viewing transformation is set up at all. Make sure that you set up a viewing transformation.
  • The geometry is behind the camera. A variation of the same problem as the previous, but this typically happens if the viewing transformation is not set up correctly.
  • The clipping planes are set wrong. When using one of the common projection transformations, make sure that the range between the near and far clipping planes is consistent with the distance of your geometry from the camera (see the sketch after this list). When not using a projection transformation, make sure that the z-coordinates after applying the viewing transformation are between -1.0 and 1.0.
  • Less common, but possible: The range of coordinates is much too small, so that in the extreme case, you end up drawing the entire geometry in a single pixel.
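To illustrate the clipping plane point above, here is a sketch assuming the GLM library is used to build the projection matrix (an assumption; your matrix code may differ, and aspectRatio is a placeholder variable). If the geometry sits about 5 units in front of the camera, the near/far values need to bracket that distance:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// near = 0.1 and far = 100.0 comfortably bracket a distance of 5.
// A far plane of, say, 2.0 would clip the geometry away entirely.
glm::mat4 projection = glm::perspective(glm::radians(45.0f),  // vertical field of view
                                        aspectRatio,          // window width / height
                                        0.1f,                 // near plane
                                        100.0f);              // far plane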

Black on Black

If there is a problem in the fragment part of the pipeline, it will often produce black pixels. When using shaders, a typical example is if the fragment shader uses texturing, but the texture was not properly set up. With the fixed pipeline, it can mean that no material color was set.

One very useful method to diagnose if this is happening is to set the clear color to something other than the default of black. I mostly set the clear color to something like yellow during development. If you do this, and see the outline of your geometry show up in black, you know that the problem is with the color of the fragments produced by your pipeline. This can be taken one step further when using FBOs that are rendered to the primary framebuffer in the end. If you clear each render target with a different color, you can see where things break down.
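For example, during initialization or at the start of the frame:

glClearColor(1.0f, 1.0f, 0.0f, 1.0f);   // yellow; takes effect on the next glClear()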

Another useful approach can be applied when using relatively complex fragment shaders. To verify if there might be a problem with the output of the fragment shader, you can temporarily change it to simply produce a fixed color. If that color shows up, while you previously rendered all black, your problem is with the fragment shader.
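A minimal replacement fragment shader for this test could look like the following; magenta is a good choice because it rarely occurs by accident:

#version 330 core
out vec4 FragColor;
void main()
{
    FragColor = vec4(1.0, 0.0, 1.0, 1.0);   // constant magenta for debugging
}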

Vertex Data Not Properly Set Up

Current OpenGL requires vertex data to be in vertex buffers. If something goes wrong while setting up the data in those vertex buffers, it can result in no rendering at all. Verify that:

  • The vertex data itself that you store in the vertex buffers contains the correct coordinates for your geometry.
  • The correct vertex buffer is bound with glBindBuffer() when setting the vertex buffer data with a call like glBufferData().
  • All arguments to the vertex setup calls are correct. For example, make sure that the arguments to glVertexAttribPointer() match the format, sizes, etc. of your vertex data.
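For reference, a typical setup for a buffer holding one position attribute of 3 floats per vertex might look like this sketch (the attribute location 0 and the variable names are my own assumptions):

GLfloat vertexData[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), 0);
glEnableVertexAttribArray(0);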

Faces Are Culled

By default, OpenGL expects the vertices of each face to be arranged in counter-clockwise order. If you get this wrong, and have backface culling enabled, your faces can disappear. If you have any kind of suspicion that this might be happening, temporarily disable backface culling:

glDisable(GL_CULL_FACE);

Not Everything Is Bound and Enabled

There are a number of objects that need to be bound, and features that need to be enabled, for rendering to happen. If any of them are missing, you will often get no rendering at all.

There is no way around code inspection, or stepping through the code in a debugger, to make sure that everything is properly bound and enabled when the draw calls are executed. Items to look out for include:

  • The correct program is bound with glUseProgram().
  • Rendering goes to the primary framebuffer when intended, using glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0).
  • The vertex array object is bound with glBindVertexArray().
  • Vertex attributes are properly set up and enabled, using calls like glVertexAttribPointer() and glEnableVertexAttribArray().
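Put together, the state at draw time typically looks something like this (the variable names are placeholders):

glUseProgram(programId);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   // render to the default framebuffer
glBindVertexArray(vao);                      // VAO with attributes set up and enabled
glDrawArrays(GL_TRIANGLES, 0, vertexCount);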

Uniforms Were Not Set

Make sure that the uniforms used in your shaders are set to valid values before starting to draw. For example, if you forget to set a value for a uniform matrix used for transformations, it can result in your geometry not showing up.
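One quick sanity check is to verify the uniform location before setting the value. A small sketch, where the uniform name mvpMatrix and the array matrixValues are just examples of my own:

GLint loc = glGetUniformLocation(programId, "mvpMatrix");
// A location of -1 means the name does not exist in the program, is
// misspelled, or the uniform was optimized away because it is unused.
assert(loc != -1);
glUniformMatrix4fv(loc, 1, GL_FALSE, matrixValues);   // 16 floats, column-major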

Code inspection and debugging are the only reasonable ways to find this. If you suspect that certain uniforms might not be set correctly, you can also try to temporarily simplify the shader to not use the value, and see if something changes.

Depth Buffer Is Not Cleared

If you use a depth buffer, and have depth testing enabled, make sure that you clear the depth buffer at the start of each frame.
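A common oversight is to clear only the color buffer; the clear call at the start of the frame should cover both:

// Clearing only GL_COLOR_BUFFER_BIT while depth testing is enabled
// leaves last frame's depth values in place.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);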

A slight variation of this is that if you do not need a depth buffer for your rendering, make sure that you configure your context/surface without a depth buffer during initialization.

Frame Is Not Displayed

This problem is so trivial that it is almost embarrassing, but it does happen: If you use a double buffered visual (which you should in almost all cases), make sure that the buffers are swapped when you finish rendering the frame, so that the frame you rendered is actually displayed.

How exactly this is done is very system dependent. Some higher level frameworks handle this automatically after they invoke your rendering method. If that is not the case, look for a function that typically has SwapBuffers, or something similar, as part of its name, and can be found in the window system interface or the toolkit you use.
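For example, one of the following, depending on your platform or toolkit (the variable names are placeholders):

SwapBuffers(hdc);          // Windows, WGL: hdc is the window's device context
glfwSwapBuffers(window);   // GLFW: window is the GLFWwindow pointer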

The symptom of this is that nothing at all is displayed, not even the clear color. As in other cases, setting the clear color to something different from black or white helps you recognize that this might be your problem.

Context Is Not Properly Set Up, or Not Current

When nothing renders, it is possible that there was a problem while setting up the context, pixel format, rendering surface, etc. How this is done is highly platform dependent. Fortunately, it is at least easy to diagnose if the problem is in this area. Set your clear color to something other than black, using glClearColor(), and change your rendering method to only do a glClear(). If the window does not show your clear color, chances are that your context setup failed.
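In code, the temporary rendering method boils down to this:

glClearColor(1.0f, 0.0f, 0.0f, 1.0f);   // anything clearly not black
glClear(GL_COLOR_BUFFER_BIT);
// ... followed by the usual buffer swap ...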

OpenGL: Transition to Core Profile

Significant groups of OpenGL features were marked as deprecated when the OpenGL 3.2 specification was published in 2009, resulting in two different OpenGL profiles:

  • The Core Profile, which contains only non-deprecated features.
  • The Compatibility Profile, which contains all the features.

Graphics vendors still widely support the Compatibility Profile, so there has not been much pressure on developers to adopt the Core Profile. My impression is that there is still a lot of code in active use, or even still being developed, that uses the Compatibility Profile, including deprecated features.

For developers interested in transitioning to the Core Profile, I hope that this article will be helpful in understanding what is typically needed to complete the transition. Most of the content is based on my experience from completing the work for my Volume Rendering Library, which is a moderately sized project consisting of about 25,000 lines of source code, including around 300 OpenGL API calls and 300 lines of GLSL shader code. Full details about the deprecated features can be found in the OpenGL specs.

Overall Process

Depending on the amount of code, and the feature set used, the effort can be fairly extensive. My recommendation is to perform the work step by step, while testing that the code is still functional after each step. This means that you will not enable use of the Core Profile until you are close to completing all the work. If you enable use of the Core Profile from the start, you will not be able to run your code until the entire process is completed. Chances are that it will be broken at this point, and it will be difficult to track down where things went wrong. The step by step approach allows you to verify the success of each change. It also makes sense to save the state in your source control system after successfully completing a step, which allows you to easily go back to the last good state if you run into problems.

The following sections list some of the most important tasks in the Core Profile transition. Most of the steps can be completed in any order. The order used in this article is based on getting some of the big chunks out of the way early.

Fixed Pipeline

Arguably the most significant feature deprecated in the Core Profile is the fixed pipeline. Depending on where your current code stands, this can be a big deal. If your code is at least on an OpenGL 2.0 level, and already uses shaders, this will not affect you much. If you currently use the fixed pipeline, which typically means that you let OpenGL do shading calculations based on light sources and material properties specified with API calls like glLight*() and glMaterial*(), you may have a substantial amount of work to replace all that fixed pipeline functionality with vertex and fragment shaders written in GLSL.

If you are new to GLSL shader programming, you can learn about it from various resources, like the Orange Book. The book also contains shader code to match most of the previous fixed function pipeline.

GLSL Changes

There are a number of differences between older GLSL versions and the newer versions required by the Core Profile. Most of the changes are very simple and quick to apply.

Version Directive

GLSL shaders need to start with a #version directive specifying the version of GLSL used by the shader. Once you transition to the Core Profile, this is also indicated as part of the version. For Core Profile code, you will typically use one of the following versions:

OpenGL 3.2: #version 150 core
OpenGL 3.3: #version 330 core
OpenGL 4.0: #version 400 core
OpenGL 4.1: #version 410 core
OpenGL 4.2: #version 420 core
OpenGL 4.3: #version 430 core
OpenGL 4.4: #version 440 core

You may have to defer adding the core part to the version directive until you have completed a couple more of the following steps.

Storage Qualifiers

The attribute and varying storage qualifiers have been deprecated. Simple substitution can be used to update them to the latest standard:

  • Vertex shader: Replace attribute by in.
  • Vertex shader: Replace varying by out.
  • Fragment shader: Replace varying by in.
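As a before/after illustration (the variable names are my own):

// Old (Compatibility Profile) vertex shader:
attribute vec3 position;
varying vec4 vertexColor;

// Core Profile vertex shader:
in vec3 position;
out vec4 vertexColor;

// Core Profile fragment shader (the matching input):
in vec4 vertexColor;
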
Predefined Variables

With the Core Profile, GLSL has a much smaller set of built-in uniform variables, mostly corresponding to fixed pipeline state being removed. For example, gl_ModelViewMatrix and gl_ProjectionMatrix are not available anymore.

You will typically need to add your own uniform declarations if you previously used any of these pre-defined variables. This is easy to do, but will also need corresponding changes in your C/C++ code to set these variables. For the example of transformation matrices, this will be covered in a section below.

Pre-defined vertex attributes like gl_Vertex, gl_Normal and gl_Color are gone. You need to define your own generic vertex attributes as in variables in the vertex shader to replace them.

gl_FragColor is not pre-defined anymore. You need to define your own out variable in the fragment shader, and assign the fragment color to that variable instead of gl_FragColor. The declaration will typically look like this:

out vec4 FragColor;

Renamed Built-in Functions

Some built-in functions have been renamed. Most notably, the texture sampling functions are now overloaded with the same name for different sampler types, instead of having specific names per sampler type. For example, what used to be texture2D() is now simply texture().
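For example (colorTex and texCoord are placeholder names):

uniform sampler2D colorTex;

// Old (GLSL 1.20 and earlier):
//   vec4 color = texture2D(colorTex, texCoord);
// Core Profile:
vec4 color = texture(colorTex, texCoord);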

Transformations and Matrix Stack

The matrix stack, which was previously used with glPushMatrix() and glPopMatrix(), is not available anymore in the Core Profile. If you previously used the matrix stack, the necessary changes can be much more intrusive than expected, depending on your code design and structure. You could maintain your own stack, and make it available globally (using something like a global variable or a singleton). If that sounds too ugly for your sense of design, you may need to start passing around transformation state between software components, which is cleaner, but can require far reaching changes.

In the Core Profile, the entire concept of transformations has basically been removed. What was previously treated as transformations are now just matrices, which are typically defined as uniform variables in the vertex shader, and applied as needed in the shader code. Where your application previously manipulated transformations with calls like glLoadMatrix(), it now needs to set the values of these matrix variables with glUniform*() calls.
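A typical Core Profile vertex shader then declares and applies the matrix itself; this is only a sketch, and the names mvpMatrix and position are my own:

#version 330 core
uniform mat4 mvpMatrix;   // set from the application with glUniformMatrix4fv()
in vec3 position;
void main()
{
    gl_Position = mvpMatrix * vec4(position, 1.0);
}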

Related to this, API calls to build transformation matrices, like glRotate*(), have also been removed. If you previously used them, you now have to build the matrices in your own code. The matrices for common transformations are listed in Appendix E/F of the Red Book.

Attribute Stack

The attribute stack, which was previously used with glPushAttrib() and glPopAttrib(), is not available anymore in the Core Profile. Similar to the matrix stack, the necessary code changes to eliminate previous use of the attribute stack can be more complex than one would expect, particularly if you want to minimize redundant state changes.

The OpenGL state API does not match up very well with software that uses a scene graph or similar structures. Without the attribute stack, clean and efficient state management can get even trickier. This topic might be worth a separate post at some point. Without going into more detail about the various options, one possible approach is that you define a “standard” set of state across your rendering code. When the rendering method for any object is invoked, it can rely on this state being set. If it changes any state to different values, it is responsible for setting this state back to the “standard” value at the end.

Vertex Data

Various ways of specifying vertex data were introduced into OpenGL over time, starting with immediate mode and display lists, moving on to vertex arrays (VA) and vertex buffer objects (VBO), and finally to vertex array objects (VAO). With the Core Profile, all of these options except VAOs, with vertex data stored in VBOs, are deprecated. A VAO always needs to be bound for any rendering.

If your code is moderately recent, and used VBOs, the changes needed for adopting VAOs are very simple. If you used any of the older mechanisms, the effort might be bigger, but should still be fairly straightforward.

Since pre-defined attributes like gl_Vertex no longer exist in the vertex shader, the corresponding API entry points like glVertexPointer(), glNormalPointer() and glColorPointer() are also not present anymore in the Core Profile. Use glVertexAttribPointer() to set the generic vertex attributes you used to replace the pre-defined attributes in the shader.

One somewhat interesting case comes up if your geometry is highly dynamic, e.g. if your geometry changes for each frame. In this case, you may want to experiment with various options, like the use of glBufferSubData() and glMapBufferRange(), to find out what performs best for your usage patterns and target platforms.
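The two options could be sketched like this, assuming the buffer has already been created and sized, and with placeholder names; in practice you would pick one of them, not run both:

glBindBuffer(GL_ARRAY_BUFFER, vbo);

// Option 1: overwrite the existing storage in place.
glBufferSubData(GL_ARRAY_BUFFER, 0, dataSize, newVertexData);

// Option 2: orphan the old storage, then map and fill the new one.
glBufferData(GL_ARRAY_BUFFER, dataSize, NULL, GL_STREAM_DRAW);
void* ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, dataSize,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
memcpy(ptr, newVertexData, dataSize);
glUnmapBuffer(GL_ARRAY_BUFFER);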

Quad Primitives

GL_QUADS and GL_QUAD_STRIP primitive types are not available in the Core Profile. They can fairly easily be replaced by the corresponding triangle primitive types. You need to be careful about the order of vertices, though. For example, if you were drawing a single quad with GL_QUADS, and want to use the same 4 vertices to draw the quad with a GL_TRIANGLE_STRIP, the 3rd and 4th vertex need to be swapped.
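For a unit quad, the Core Profile version could look like the following sketch, assuming the vertex data ends up in a bound VBO/VAO as usual; the comments show the original GL_QUADS numbering:

GLfloat quad[] = {
    -0.5f, -0.5f, 0.0f,   // v0
     0.5f, -0.5f, 0.0f,   // v1
    -0.5f,  0.5f, 0.0f,   // v3 -- swapped with v2 for the strip
     0.5f,  0.5f, 0.0f,   // v2
};
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);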

Texture Formats

Some redundant texture formats are not available in the Core Profile. They can easily be replaced by different formats. For example, if you previously used GL_LUMINANCE or GL_INTENSITY types for one-component textures, replace them with the corresponding GL_RED formats. For two-component textures, use GL_RG instead of GL_LUMINANCE_ALPHA.
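If other code relies on the old GL_LUMINANCE behavior of replicating the single channel into r, g and b, one possible approach is a texture swizzle (core since OpenGL 3.3, also available as ARB_texture_swizzle):

GLint swizzle[] = { GL_RED, GL_RED, GL_RED, GL_ONE };
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);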

Rarely Used Features

Wide Lines

Wide lines are still supported in the Core Profile, but marked as deprecated. If you set your context attributes to also disable deprecated features (a forward-compatible context), setting the line width to a value greater than 1.0 will result in an error.

Texture Borders

Texture borders are not supported anymore.

Related to this, the GL_CLAMP texture wrap mode has been removed. Use GL_CLAMP_TO_EDGE instead.

Display Lists

Long considered an obsolete feature, display lists are (finally) gone.

Alpha Test

The alpha test is gone in the Core Profile. It can be replaced with a discard statement in the fragment shader, conditional on the alpha value of the fragment.
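A sketch of the replacement, using an alpha threshold of 0.5 as an example (the sampler and variable names are my own):

#version 330 core
uniform sampler2D colorTex;
in vec2 texCoord;
out vec4 FragColor;
void main()
{
    vec4 color = texture(colorTex, texCoord);
    if (color.a < 0.5)
        discard;            // roughly equivalent to the old alpha test
    FragColor = color;
}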

Enable Core Profile, Test and Iterate

Once you have completed the changes to eliminate the use of deprecated features, and confirmed that your software is still functional, you are now ready to enable the Core Profile. This is done while creating and configuring your OpenGL rendering context. The exact mechanism is platform dependent:

  • Under Windows, you first need to retrieve the WGL entry point that allows you to create an OpenGL context with extended attributes by calling wglGetProcAddress("wglCreateContextAttribsARB"). Once you have this entry point, call it similarly to how you previously called wglCreateContext(), except that you pass a set of attributes as an additional third argument. For those attributes, use values like the following (using OpenGL 3.2 for the example):
    int ctxAttribs[] =
    {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };
  • Under Mac OS, add the following attribute value to your NSOpenGLPixelFormatAttribute array:
    NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core

Also remember to add core to your shader version if you have not done so earlier.

You can now test your application for Core Profile compliance. If you see any problems, or just to be safe, add glGetError() calls at strategic places in your code, e.g. at the end of rendering a frame. One easy approach is to have asserts for glGetError() == GL_NO_ERROR that are only active in debug builds. For debug builds, I also recommend testing the result of all shader compile/link steps, and triggering asserts if they fail. This will allow you to pinpoint problems more quickly.