In iOS 5, OpenGL ES texture caches were introduced as a way to pass camera video data directly to OpenGL ES without copying buffers. Texture caches were briefly introduced in session 414, Advances in OpenGL ES for iOS 5, at WWDC 2011.
I found an interesting article that takes this concept a step further and circumvents the call to glReadPixels altogether: instead of reading back the framebuffer, it simply locks the pixel buffer that backs the render-target texture and accesses its memory directly.
glReadPixels is really slow on the tile-based renderer used in the iPad 2 (even when reading only 1×1 pixels), presumably because the call stalls the CPU until all pending rendering has completed. The method described in the article, however, seems to be considerably faster than glReadPixels.
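As far as I understand it, the approach looks roughly like the sketch below. This is only my reconstruction from the article and the CVOpenGLESTextureCache documentation, not working production code: `context` (the current EAGLContext), `width` and `height` are placeholders, and all error handling is omitted.

```objc
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>
#import <CoreVideo/CoreVideo.h>

// `context` is the current EAGLContext; `width`/`height` is the render size.
// (On the original iOS 5 SDK the context had to be cast to void * here.)
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, context, NULL, &textureCache);

// The pixel buffer has to be IOSurface-backed, otherwise the texture cache
// cannot share it with the GPU.
NSDictionary *attributes =
    [NSDictionary dictionaryWithObject:[NSDictionary dictionary]
                                forKey:(NSString *)kCVPixelBufferIOSurfacePropertiesKey];
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attributes, &pixelBuffer);

// Wrap the pixel buffer in an OpenGL ES texture.
CVOpenGLESTextureRef texture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                             pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)width, (GLsizei)height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);

// Attach the texture to an FBO and render into it as usual.
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(texture), 0);

// ... draw the scene here ...

// Instead of glReadPixels: lock the pixel buffer and read its memory directly.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *pixels = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
// ... consume pixels / bytesPerRow ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
```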
Is the method proposed in the article even valid, and can it be used to speed up applications that rely on glReadPixels?
Since OpenGL ES processes graphics data in parallel with the CPU, how is the CVPixelBufferLockBaseAddress call supposed to know when rendering is done without talking to OpenGL?
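Put differently: is an explicit synchronisation point needed before the lock, roughly like this (again only a sketch, reusing `pixelBuffer` from the snippet above)?

```objc
glFinish();   // block the CPU until the GPU has finished rendering into the texture?
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
uint8_t *result = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
// ... read the rendered pixels ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
```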