For my initial code, which used only a single low-resolution size, I worked from instructions compiled from the following sources. The list starts with the most direct, practical code and works down to more theoretical explanations of the render-to-texture technique.
- straightforward code: OpenGL Tutorial 14 : Render To Texture
- great explanation: Learning WebGL: Lesson 16 – rendering to textures
- further good explanation, and in related areas: http://zach.in.tu-clausthal.de/teaching/cg_literatur/frame_buffer_objects.html (reformatted from gamedev.net: OpenGL Frame Buffer Object 101)
- good detailed example and explanation: swiftless tutorials: OpenGL Framebuffers
- in-depth explanation of the relevant OpenGL commands: http://www.songho.ca/opengl/gl_fbo.html
private void SetupRenderToTexture() {
    // check if we only need to resize the texture
    if (renderTex != null) {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB,
                textureWidth, textureHeight, 0, GLES20.GL_RGB,
                GLES20.GL_UNSIGNED_BYTE, null);
        return;
    }
    // all my normal fbo initialisation stuff here
}
I found that the texture was being rendered correctly to the FBO, but that it was being displayed at the wrong size. It was as if the first time the texture was sent to the default framebuffer its size was fixed permanently, and any resized texture sent afterwards was treated as if it still had the original size. For example, if the first texture was 100x100 and the second texture was 50x50, then the entire texture would be displayed in the bottom-left quarter of the screen. Conversely, if the original texture was 50x50 and the new texture 100x100, then only the bottom-left quarter of the texture would be displayed, stretched over the whole screen.

The solution I came up with, after many false starts, was to always start with the biggest possible texture, and then pass a scaling parameter into the vertex shader to enlarge any textures which were too small. This wasn't much effort, because I already had similar code in the vertex shader to convert from OpenGL coordinates into texture coordinates.
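To illustrate the scaling parameter, here is a rough sketch of how it can be computed on the CPU side before being passed as a uniform. The class and method names are mine for illustration, not from the original renderer: the idea is simply that a small render into a large texture occupies the bottom-left fraction of it, and that fraction is the scale.

```java
// Hypothetical helper: the texture is always allocated at its maximum
// size, but a smaller frame only renders into the bottom-left corner,
// so texture-coordinate sampling must be squeezed into that corner.
public class TextureScale {
    // fraction of the full-size texture actually rendered to
    public static float scaleFor(int currentSize, int maxSize) {
        return (float) currentSize / (float) maxSize;
    }

    public static void main(String[] args) {
        // a 50x50 render into a 100x100 texture uses the bottom-left
        // quarter, so texture coordinates should run from 0.0 to 0.5
        System.out.println(scaleFor(50, 100));
        // a full-size render uses the whole texture
        System.out.println(scaleFor(100, 100));
    }
}
```

The resulting value would then be uploaded with something like glUniform1f before drawing the full-screen quad.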
Here is my vertex shader. cMapViewToTexture is the mapping from view coordinates, which range from -1 to 1, to texture coordinates, which range from 0 to 1. uScaleTexture is the size of the current texture relative to the original.
attribute vec2 aVertexPosition;
attribute vec2 aPlotPosition;

varying vec2 vPosition;

uniform float uScaleTexture;

const vec2 cMapViewToTexture = vec2(0.5, 0.5);

void main(void) {
    gl_Position = vec4(aVertexPosition, 1.0, 1.0);
    vPosition = vec2(uScaleTexture, uScaleTexture) *
            (aVertexPosition * cMapViewToTexture + cMapViewToTexture);
}
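To make the mapping concrete, the shader's arithmetic can be reproduced on the CPU. This standalone class is my own sketch, not part of the original renderer; it mirrors the line vPosition = uScaleTexture * (aVertexPosition * 0.5 + 0.5).

```java
public class ShaderMapping {
    // maps a view coordinate in [-1, 1] to a texture coordinate,
    // then scales it down to the fraction of the texture in use
    public static float[] viewToTexture(float x, float y, float scale) {
        return new float[] {
            scale * (x * 0.5f + 0.5f),
            scale * (y * 0.5f + 0.5f)
        };
    }

    public static void main(String[] args) {
        // full-size texture: the top-right view corner (1, 1)
        // maps to the top-right texture corner (1, 1)
        float[] full = viewToTexture(1f, 1f, 1f);
        System.out.println(full[0] + "," + full[1]);
        // half-size texture: (1, 1) maps to (0.5, 0.5), the far corner
        // of the bottom-left quarter that was actually rendered to
        float[] half = viewToTexture(1f, 1f, 0.5f);
        System.out.println(half[0] + "," + half[1]);
    }
}
```

With uScaleTexture at 0.5, sampling across the whole screen reads only the bottom-left quarter of the texture, which is exactly the region the half-size frame was rendered into.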
This does work, and although it is a workaround rather than a proper solution, it has no drawback beyond a little extra code to write, so I won't be revisiting it in a hurry.