While testing some simple examples with glDrawArrays I noticed that switching to shaders cuts my frame rate in half (from about 600 FPS to 300). I know I'm using a bunch of deprecated GLSL right now, but I wouldn't expect that to cause such a big FPS drop. I'd just blame my Intel graphics chip, but if the fixed-function pipeline (FFP) can handle it, I see no reason shaders shouldn't.
// Sending 10,000 quads as GLfloats (so the Vertices vector has 80,000 elements; no indexing yet).
// Vertices and the VBO are allocated earlier in the code, before the main game loop.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);              // 2 floats per vertex, tightly packed, read from the bound VBO
glDrawArrays(GL_QUADS, 0, Vertices.size() / 2);  // vertex count = number of floats / 2
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
The highest OpenGL version I can run is 2.1, so #version 120 in shaders.
I know this is a fairly pointless test right now, but it's still surprising to see. Is there anything obvious I'm missing?
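For reference, a minimal #version 120 version of the same draw would look roughly like this ("program" is a placeholder for the linked shader program, and the pass-through shaders shown in the comments are just an example):

// vertex shader:   #version 120
//                  void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }
// fragment shader: #version 120
//                  void main() { gl_FragColor = vec4(1.0); }

glUseProgram(program);                           // replace the FFP with the shader program
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);              // gl_Vertex still reads this in GLSL 1.20
glDrawArrays(GL_QUADS, 0, Vertices.size() / 2);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glUseProgram(0);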
I know. Your GPU may be optimized for the FFP, but at the time it was released shaders maybe weren't widely used, so they didn't bother to put as much time into them as they should have.
----
Or maybe it's something else. Can I see your full program loop?
Most likely not the cause, but just curious. I still have a strong feeling that it's the GPU. Have you tried testing the program on a different computer?
Oops, I must have removed glLoadIdentity when cleaning up my main loop to post it. No change when adding it back.
I guess the GPU is a pretty good guess (I was hoping it wasn't, since I'm not in a position to get another one; no one should be using an Intel HD on an i3 these days, so that's my fault). I don't currently have any other computers to test this on that would perform any better than this one.
If you want, I can upload the .exe with and without shaders so you can confirm your theory, but I don't want to waste any more of your time (plus no one trusts files online these days). Thanks for your advice though.
Check which OpenGL driver you're using. Microsoft's software fallback ("GDI Generic") is notorious for odd performance issues. You might have to install a driver from your graphics card manufacturer to see the true numbers.
glGetString(GL_VENDOR);   // which vendor's implementation you're running on
glGetString(GL_RENDERER); // which renderer/driver is active (e.g. "GDI Generic" for Microsoft's fallback)
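If it helps, something like this at startup will log both strings and flag the software fallback (just a sketch; assumes a current GL context and that <cstdio>/<cstring> are included):

const char* vendor   = (const char*)glGetString(GL_VENDOR);
const char* renderer = (const char*)glGetString(GL_RENDERER);
printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);
if (renderer && strstr(renderer, "GDI Generic"))
    printf("Warning: running on Microsoft's software fallback, not a real GPU driver.\n");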
If you find that you're using Microsoft's GDI driver, you can find Intel's drivers here:
I actually read about that somewhere before I posted here. My drivers weren't fully up to date, so I used the link you sent and now they are.
That didn't solve the problem though. Here is what OpenGL reports (which is what I expected): http://puu.sh/6Jejf.png
Edit - Oops I typed vendor there, oh well!
So I think we're back to the Intel HD paired with the i3 (one of the first HD chips they made) being bad and best avoided.
If you're using Intel's driver, then my next suggestion would be to set up smart indexing. That can make a huge difference, especially if a lot of the vertex data is redundant.
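Roughly, that means building an element buffer once and drawing with glDrawElements instead of glDrawArrays. A sketch, assuming the quads share corners (e.g. a grid) and "Indices" is a std::vector<GLuint> you fill yourself:

// Once, before the main loop:
GLuint ibo;
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, Indices.size() * sizeof(GLuint), &Indices[0], GL_STATIC_DRAW);

// Each frame:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, 0);
glDrawElements(GL_QUADS, Indices.size(), GL_UNSIGNED_INT, 0);  // Vertices now only stores unique corners
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);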
* Also, are you sure you aren't re-compiling your GLSL program each frame? I've seen that happen as well.
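If that's happening, the fix is to compile and link once before the main loop and only bind the program inside it. A sketch (vertexSource/fragmentSource are placeholder const char* strings holding your GLSL 1.20 code):

// Once, before the main loop:
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vs, 1, &vertexSource, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fragmentSource, NULL);
glCompileShader(fs);
GLuint program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);

// Inside the main loop, only this:
glUseProgram(program);
// ... draw calls ...
glUseProgram(0);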
The "Bad" situations like the i3 are the right ones to aim for a "good enough" threshold with. Bring the bottom up to snuff and the newer cards won't even break a sweat (unless they're using the GDI driver!).
Yeah, I realize it's good to target the lower-end graphics cards as early as possible, but with shaders taking such a performance hit it might just be better to code everything with the FFP from the start.
Kind of depends on whether I need per-pixel lighting etc., I suppose.
Try indexing it and re-checking your benchmark. I've never met someone who would willingly go back to the FFP after working with shaders. In any case, best of luck on your project.