Save a specific surface with SDL/OpenGL

Hello! I am learning OpenGL and I thought of making a paint program where you can also save the images, and I wonder if there is any way to get a specific surface from the program, because I would like to save only the surface you are drawing on and not the tool panel. So I'm thinking of working with surfaces like you do with SDL; is there any equivalent of that in OpenGL?

Thanks in advance.

If it's possible, it would be nice if I could just convert the OpenGL surface to an SDL_Surface*, because SDL has a function that can save a BMP. Is this possible?
OpenGL isn't really the best way to do this. A paint program does most of its work on the CPU, so to draw to the screen you'd have to continuously send buffers to the video device. Kind of a waste of resources.
SDL is actually a much better choice, since its blitting methods are already software-based.
I just like the GL transformations and rotations, and that's why I want it to be OpenGL, but yeah, SDL is an option. I actually found something about glReadPixels() and I'm going to look into it further and see if I can get an SDL_Surface* out of it, though I'm not quite sure how to do that.
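
Something like the rough sketch below is what I have in mind, but it's untested. The function name and parameters are my own, and it assumes a 24-bit RGB read and a little-endian machine (that's what the colour masks are for):

#include <SDL.h>
#include <GL/gl.h>
#include <vector>
#include <cstring>

bool saveRegionAsBMP(int x, int y, int width, int height, const char* path)
{
    std::vector<unsigned char> pixels(width * height * 3);
    std::vector<unsigned char> flipped(pixels.size());

    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // rows are tightly packed
    glReadPixels(x, y, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);

    // OpenGL hands the rows back bottom-up; SDL/BMP want them top-down.
    for (int row = 0; row < height; ++row)
        std::memcpy(&flipped[(height - 1 - row) * width * 3],
                    &pixels[row * width * 3],
                    width * 3);

    SDL_Surface* surface = SDL_CreateRGBSurfaceFrom(
        &flipped[0], width, height, 24, width * 3,
        0x0000FF, 0x00FF00, 0xFF0000, 0);   // R, G, B masks, no alpha
    if (!surface)
        return false;

    bool ok = (SDL_SaveBMP(surface, path) == 0);
    SDL_FreeSurface(surface);               // the surface doesn't own the pixel data
    return ok;
}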

But well, I understand what you mean and I am considering it. But I want it to be advanced, a real challenge for me.

Still, thanks for your reply!
I just like the GL transformations and rotations, and that's why I want it to be OpenGL
Those are merely linear transformations. They're easy to implement in software.
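For example, a rotation plus a uniform scale is only a few multiplications per point. A quick sketch (the Point struct and the function name are just mine for illustration):

#include <cmath>

struct Point { double x, y; };

// Rotate a point about the origin by `angle` radians and scale it by `s`.
// Same math glRotatef/glScalef apply to vertices, just done on the CPU:
//   | cos -sin |   | x |
//   | sin  cos | * | y |   (then multiply by s)
Point transform(Point p, double angle, double s)
{
    double c = std::cos(angle), n = std::sin(angle);
    Point q;
    q.x = s * (c * p.x - n * p.y);
    q.y = s * (n * p.x + c * p.y);
    return q;
}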

But I want it to be advanced, a real challenge for me.
I just don't think it makes a lot of sense. You're making something much more complex than it has to be, while at the same time losing efficiency.
If you're going to challenge yourself, challenge yourself with something sensible.
If I could implement them, well, that would be cool. But I have no idea how to do that, really. I'll have to Google some. But well, thank you again for your reply.
Real quickly:
f(src) = dst
If you multiply the coordinates of a pixel in the source by the transformation's associated matrix, you get the coordinates in the destination. There are reasons not to do this, though: suppose the transformation scales the source by 3; merely copying every pixel in the source to the destination would mean that only 1 in 9 pixels gets filled (scaling by 3 in each dimension means each source pixel covers 3x3 destination pixels).
src = f^-1(dst)
What you do instead is invert the matrix (note that matrices whose determinant is 0 are non-invertible. In those cases, the destination should just be black or transparent) and compute the coordinates in the source for every pixel in the destination. While doing this, it's entirely possible that you'll get some coordinates that fall outside of the source. You'll have to handle that.
There's one more interesting aspect: suppose again you're scaling the source by 3. f^-1(1, 1) = (0.333, 0.333). Your first inclination will be to truncate the coordinates. There's something else you can do that will make the result look much nicer. You can figure that out.
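In case the shape of the loop helps, a bare-bones sketch. The Image struct, the function name and passing the already-inverted matrix entries directly are my own simplifications, and it only does the truncation version:

#include <vector>
#include <cmath>

struct Image {
    int w, h;
    std::vector<unsigned int> pixels;                  // one 32-bit pixel per entry
    unsigned int at(int x, int y) const { return pixels[y * w + x]; }
};

// Inverse mapping: for every destination pixel, ask where it came from in the
// source. The four parameters are the entries of the *already inverted* 2x2
// matrix (so a forward scale by 3 means ia = id = 1/3, ib = ic = 0).
Image transform(const Image& src, double ia, double ib, double ic, double id,
                int dstW, int dstH)
{
    Image dst;
    dst.w = dstW;
    dst.h = dstH;
    dst.pixels.assign(dstW * dstH, 0);                 // unreachable pixels stay black/transparent

    for (int y = 0; y < dstH; ++y)
        for (int x = 0; x < dstW; ++x) {
            double sx = ia * x + ib * y;               // src = f^-1(dst)
            double sy = ic * x + id * y;

            int ix = static_cast<int>(std::floor(sx)); // plain truncation/flooring;
            int iy = static_cast<int>(std::floor(sy)); // the nicer option goes here

            if (ix >= 0 && ix < src.w && iy >= 0 && iy < src.h)
                dst.pixels[y * dst.w + x] = src.at(ix, iy);
        }
    return dst;
}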