SDL_DisplayFormatAlpha

I've set up a hardware surface in SDL and designated it as 32-bit. I've then gone through in Photoshop to explicitly make every image I'm using 32-bit, and have confirmed this by checking their details in Windows. Is it still necessary to call SDL_DisplayFormatAlpha for each one then? I'm using thousands of images, and the coding to do this for each would be a pain, so I only want to do it if there's an actual (and significant) gain to be made.

No, it isn't strictly required, but it's preferable. Calling this function ensures that the channels on the surface and the channels on the display are in the same order.
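If you want to see whether the conversion is even needed, you can compare a loaded surface's pixel format against the screen's. Rough sketch (SDL 1.2; the helper name and the mask comparison are just illustrative, not a complete check):

#include <SDL/SDL.h>

/* Skip the conversion when the loaded surface already has the screen's
   depth and channel layout; otherwise convert and drop the original. */
SDL_Surface *to_display_format(SDL_Surface *loaded)
{
    SDL_Surface *screen = SDL_GetVideoSurface();   /* set by SDL_SetVideoMode */
    SDL_PixelFormat *src = loaded->format;
    SDL_PixelFormat *dst = screen->format;

    bool same_layout = src->BitsPerPixel == dst->BitsPerPixel &&
                       src->Rmask == dst->Rmask &&
                       src->Gmask == dst->Gmask &&
                       src->Bmask == dst->Bmask;
    if (same_layout)
        return loaded;                             /* nothing to convert */

    SDL_Surface *converted = SDL_DisplayFormatAlpha(loaded); /* makes a copy */
    SDL_FreeSurface(loaded);                       /* drop the original */
    return converted;
}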

I'm using thousands of images and the coding to do this for each would be a pain
Why? Surely, you must have something like this, right?
const char **paths = /*some array of paths*/;
SDL_Surface **surfaces = /*allocate destination array*/;
for (int i = 0; i < paths_length; i++){
    SDL_Surface *surface = /*load surface from paths[i]*/;
    surfaces[i] = SDL_DisplayFormatAlpha(surface); /* returns a converted copy */
    SDL_FreeSurface(surface);                      /* free the unconverted original */
}
Not sure what you mean by "in the same order".

To answer your question, it's a project I've been working on for 5 years. I wasn't aware there were speed gains to having multiple sprites on one consolidated surface when I started, so each asset is individually named and loaded one at a time. The code is around 250k lines at this point, so modifying something so fundamental is a daunting task. Are there massive speed gains to be made by doing so (like doubling rendering speed or something)? The same question applies to using SDL_DisplayFormatAlpha. Given that I know with complete certainty that the screen surface and the thousands of individual surfaces all have the same bit depth, should I expect a massive speed boost by calling the function?

It's a turn-based game, so speed isn't absolutely critical, but I'm averaging only 10 or 12 FPS. I'm not using OpenGL because it seems the coding to blit the graphics would all have to change significantly if I did. I tried getting it to run with the outdated SDL_OPENGLBLIT flag yesterday, but couldn't get more than a black screen as output, and after a while I just went back to using an unaccelerated HW surface.
Not sure what you mean by "in the same order".
The channels in a pixel can appear in 24 different orders: RGBA, RGAB, RBAG, etc. When blitting surfaces, if the source and destination orders match, a simple series of memcpy() calls is enough. If the orders differ, SDL has to, for each pixel, load each channel from the source into a register and then store each one into the destination individually.
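Roughly, the difference looks like this (illustration only, not SDL's actual blitter; assumes 32-bit pixels and made-up RGBA/BGRA layouts):

#include <string.h>
#include <stdint.h>

/* Same channel order: one bulk copy handles a whole row. */
void blit_row_same_order(uint32_t *dst, const uint32_t *src, int w)
{
    memcpy(dst, src, w * sizeof(uint32_t));        /* one copy per row */
}

/* Different channel order: every pixel gets shuffled byte by byte. */
void blit_row_rgba_to_bgra(uint8_t *dst, const uint8_t *src, int w)
{
    for (int x = 0; x < w; ++x) {
        dst[4 * x + 0] = src[4 * x + 2];           /* B comes from byte 2 */
        dst[4 * x + 1] = src[4 * x + 1];           /* G stays at byte 1   */
        dst[4 * x + 2] = src[4 * x + 0];           /* R comes from byte 0 */
        dst[4 * x + 3] = src[4 * x + 3];           /* A stays at byte 3   */
    }
}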

Given I know with complete certainty that the screen surface and the thousands of individual surfaces all have the same bit depth, should I expect a massive speed boost by calling the function?
Maybe so, maybe not. From what you're saying, it seems like it'd be easier to ensure that the channels are in the correct order at the file level.
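To find out which order the display actually uses, you can print the masks of the video surface once and export your files to match. Quick sketch (SDL 1.2; the mode and flags here are placeholders):

#include <SDL/SDL.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    (void)argc; (void)argv;
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Surface *screen = SDL_SetVideoMode(640, 480, 32, SDL_HWSURFACE);
    /* The masks tell you the in-memory channel order the display expects. */
    printf("R=%08x G=%08x B=%08x A=%08x\n",
           (unsigned)screen->format->Rmask, (unsigned)screen->format->Gmask,
           (unsigned)screen->format->Bmask, (unsigned)screen->format->Amask);
    SDL_Quit();
    return 0;
}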

It seems to me that you've written yourself into a corner with your design. Not having an Image or Texture class for such a large project is absolutely insane. For example, if you find a resource management bug, you have no way of fixing it everywhere at once; and as you've probably realized by now, making any changes to the rendering pipeline or the loading mechanism is pretty much impossible.
This is the worst thing to hear, but my recommendation is to just scrap all the graphics code and start from scratch. What you have now is unmanageable. If you'd like, I can send you the source from an engine I was working on a while back that implements a Texture class with subtextures and reference counting. That should at least give you a jumping-off point. If you design the classes well, the restructuring doesn't need to be too traumatic.
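For the flavor of it, here is a very rough sketch of that kind of class, not the engine code itself; the names and the cache are made up, and std::shared_ptr stands in for the reference counting:

#include <SDL/SDL.h>
#include <map>
#include <memory>
#include <string>

/* Owns one display-formatted surface; frees it when the last user lets go. */
class Texture {
public:
    explicit Texture(SDL_Surface *converted) : surface_(converted) {}
    ~Texture() { SDL_FreeSurface(surface_); }
    SDL_Surface *surface() const { return surface_; }
private:
    SDL_Surface *surface_;
    Texture(const Texture &);              /* non-copyable: one owner per surface */
    Texture &operator=(const Texture &);
};

/* Every sprite that asks for the same path shares one Texture. */
std::shared_ptr<Texture> load_texture(const std::string &path)
{
    static std::map<std::string, std::weak_ptr<Texture> > cache;

    std::shared_ptr<Texture> tex = cache[path].lock();
    if (tex)
        return tex;                                  /* already loaded, reuse it */

    SDL_Surface *raw = SDL_LoadBMP(path.c_str());    /* or IMG_Load() from SDL_image */
    if (!raw)
        return std::shared_ptr<Texture>();           /* load failed */

    SDL_Surface *converted = SDL_DisplayFormatAlpha(raw);
    SDL_FreeSurface(raw);                            /* keep only the converted copy */
    tex.reset(new Texture(converted));
    cache[path] = tex;
    return tex;
}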
So I've begun applying the SDL_DisplayFormatAlpha function. Even when all I did was change a couple of the more commonly used graphics, I saw a 20% increase in framerate. Turns out it's a pretty important step. Thanks!