
Saturday, 11 May 2013

OpenGL render to texture resizing

As my OpenGL rendering was a trifle slow due to some complex pixel shaders I wanted to render a low-resolution version first, and then do a final high-resolution rendering at the end. The OpenGL mechanism for doing this is to render the scene into a low-resolution texture using Frame Buffer Objects and then draw this in a quad at screen resolution. This is a well-known and well-documented technique. However, I needed to do a seemingly minor modification, in that I wanted to have more than one size of low resolution. In fact, I wanted the user to be able to select it, so if they had a slow device they could choose a very low resolution image. I found no resources explaining directly how this could be done, and my straightforward implementation did not work. I did come up with a method which works, and which I hope may be useful elsewhere.
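The two-pass structure described above can be sketched roughly as follows. This assumes Android's GLES20 bindings; `frameBuffer`, `renderTex`, the width/height fields, and the two draw helpers are hypothetical names standing in for whatever the renderer actually uses:

```java
// Pass 1: render the scene at low resolution into the FBO's texture.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffer[0]);
GLES20.glViewport(0, 0, textureWidth, textureHeight);
drawScene();  // the expensive pixel shaders run at the reduced size

// Pass 2: switch back to the default framebuffer and draw the rendered
// texture across a full-screen quad at the display resolution.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glViewport(0, 0, screenWidth, screenHeight);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);
drawFullScreenQuad();
```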

For my initial code, incorporating only a single low-resolution size, I used the instructions compiled from the following sources. The list starts with the most direct, practical code and works down to the more theoretical explanations of the render-to-texture technique.
That worked well, so I moved on to adding the resizing of the low-resolution texture. Forum responses suggested that all that would be necessary would be to call glTexImage2D with the new texture size, and everything else would resize automatically:
private void SetupRenderToTexture() {
    // check if we only need to resize the texture
    if (renderTex != null) { 
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB,
                textureWidth, textureHeight, 0, GLES20.GL_RGB,
                GLES20.GL_UNSIGNED_BYTE, null);
        return;
    }
    // all my normal fbo initialisation stuff here
}
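For completeness, the elided initialisation might look something like the sketch below. This is only an outline, assuming Android's GLES20 API; `renderTex`, `frameBuffer`, and the size fields are hypothetical member names:

```java
// Create the texture that will receive the low-resolution rendering.
renderTex = new int[1];
GLES20.glGenTextures(1, renderTex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB,
        textureWidth, textureHeight, 0, GLES20.GL_RGB,
        GLES20.GL_UNSIGNED_BYTE, null);

// Create the FBO and attach the texture as its colour buffer.
frameBuffer = new int[1];
GLES20.glGenFramebuffers(1, frameBuffer, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffer[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,
        GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D,
        renderTex[0], 0);
```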
I found that the texture was being rendered correctly to the FBO, but that it was being displayed at the wrong size. It was as if the first time the texture was sent to the default framebuffer the texture size was set permanently, and then when a resized texture was sent it was being treated as if it was the original size. For example, if the first texture was 100x100 and the second texture was 50x50 then the entire texture would be displayed in the bottom left quarter of the screen. Conversely, if the original texture was 50x50 and the new texture 100x100 then the result would be the bottom left quarter of the texture being displayed over the whole screen.

The solution I came up with, after many false starts, was to always start with the biggest possible texture, and then pass a scaling parameter into the vertex shader to enlarge any textures which were too small. This wasn't much effort, because I already had similar code in the vertex shader to convert from OpenGL coordinates into texture coordinates.

Here is my vertex shader. cMapViewToTexture is the mapping from view coordinates, which range from -1 to 1, to texture coordinates, which range from 0 to 1. uScaleTexture is how big the current texture is compared to the original.

attribute vec2 aVertexPosition;
attribute vec2 aPlotPosition;
varying vec2 vPosition;
uniform float uScaleTexture;
const vec2 cMapViewToTexture = vec2(0.5, 0.5);

void main(void) {
    gl_Position = vec4(aVertexPosition, 1.0, 1.0);
    vPosition = vec2(uScaleTexture, uScaleTexture) *
        (aVertexPosition * cMapViewToTexture + cMapViewToTexture);
}
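On the Java side, all that remains is to supply uScaleTexture each frame. A minimal sketch of the calculation, assuming the texture was allocated at a maximum size and the current pass renders into a smaller region of it (the class and parameter names here are hypothetical):

```java
// Computes the value for the uScaleTexture uniform: the fraction of the
// full-size texture that the current low-resolution pass actually fills.
// E.g. a 50-pixel-wide render into a 100-pixel-wide texture gives 0.5,
// so texture coordinates are squeezed into the bottom-left quarter.
class TextureScale {
    static float computeTextureScale(int currentWidth, int maxTextureWidth) {
        return (float) currentWidth / (float) maxTextureWidth;
    }
}
```

In the draw code this would then be passed to the shader with the usual GLES20 calls, along the lines of `GLES20.glUniform1f(GLES20.glGetUniformLocation(program, "uScaleTexture"), scale)`.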
This does work, and although it is a workaround rather than a proper solution, its only drawback is a little extra code to write, so I won't be revisiting it in a hurry.