Nevertheless, what do you think about a fallback for users without shader support?
return new ShaderContext (target);
if you look at OpenGLGraphicsContext, line 1029, there's a numQuads setting which defines how large the chunks of data are that get sent to the GPU
I feel obliged to mention that once the OpenGL Renderer is on, those times/frame become somewhat arbitrary
OSX 10.7, MacBook Pro, GT 330M, 8GB RAM
Software: 4-12 ms
OpenGL: 1.5-2.0 ms
Software Renderer: 12-26 ms render time
OpenGL Renderer: 6-15 ms render time
Times vary due to scaling animation...
It seems to me that your render times are not OK either: with the old version it takes less than 2 ms on my system at a fullscreen resolution of 2560 x 1440. Could you please run a test without animating the size, set it to maximum, and then post your render times along with your screen resolution?
If it is much higher than 2 ms, you have the same performance issues that I have.