I know this will seem obvious to some, but I wanted to share something I found quite surprising.
Often I read things about "optimizing" your program, and a lot of the time it's bogus advice that is either outright false or hardly relevant (whether it never was, or just isn't anymore on today's machines). One thing that really did make a significant difference for me, though, was trying out something I read about in an article: data locality.
To put it in a nutshell: processors have gotten much faster over the years, but RAM speeds are still lagging behind. Because of this, performance-critical programs should organize their data so the CPU doesn't waste cycles waiting for it to be fetched from memory.
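To make that concrete, here's a small generic sketch (not from my editor, names are made up for illustration) contrasting the two layouts: chasing pointers to objects scattered around the heap versus walking one contiguous array.

#include <memory>
#include <vector>

struct Particle { float x, y, vx, vy; };

// Pointer-chasing layout: each Particle lives somewhere else on the heap,
// so every step of the loop is a likely cache miss.
void updateScattered(std::vector<std::unique_ptr<Particle>>& particles, float dt)
{
    for (auto& p : particles)
    {
        p->x += p->vx * dt;
        p->y += p->vy * dt;
    }
}

// Contiguous layout: the Particles sit next to each other in memory,
// so the CPU can prefetch the next ones while working on the current one.
void updateContiguous(std::vector<Particle>& particles, float dt)
{
    for (auto& p : particles)
    {
        p.x += p.vx * dt;
        p.y += p.vy * dt;
    }
}

The loops do exactly the same work; the only difference is where the data lives.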
Basically, I changed a tile map editor I was working on from iterating through a vector of tile objects that each wrap an sf::Sprite, to iterating through a renderable component that keeps the sprites in a contiguous array synced with the tiles.
In short, I went from this:
for(...)
    m_tiles[i].draw(window); // window.draw(m_sprite)
to this:

// Have a contiguous array
m_sprites = new sf::Sprite[someSize];
MapRenderableComponent::draw(Map * map)
{
    for(...)
        window.draw(m_sprites[i]);
}
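For anyone curious, here's a rough, self-contained sketch of how such a renderable component could look. The class layout and names (syncWithTiles, the texture/position parameters, etc.) are my own guesses to fill in around the snippets above, not the actual editor code; I also used std::vector instead of a raw new[] array, but both give contiguous storage.

#include <SFML/Graphics.hpp>
#include <vector>

class MapRenderableComponent
{
public:
    // Rebuild one sprite per tile in a single contiguous buffer.
    // The textures must outlive the sprites, since sprites only point to them.
    void syncWithTiles(const std::vector<sf::Texture>& textures,
                       const std::vector<sf::Vector2f>& positions)
    {
        m_sprites.clear();
        m_sprites.reserve(positions.size());
        for (std::size_t i = 0; i < positions.size(); ++i)
        {
            sf::Sprite sprite(textures[i % textures.size()]);
            sprite.setPosition(positions[i]);
            m_sprites.push_back(sprite);
        }
    }

    // Drawing just walks the contiguous array, which is cache-friendly.
    void draw(sf::RenderWindow& window) const
    {
        for (const sf::Sprite& sprite : m_sprites)
            window.draw(sprite);
    }

private:
    std::vector<sf::Sprite> m_sprites; // contiguous storage
};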
It's by no means complicated. I just thought I'd share it because this change alone took me from a maximum of 150 FPS to 220 FPS on my setup:
Intel i5 dual-core processor
6 GB of DDR3 RAM
Intel(R) HD Graphics Family, 1696 MB <- I hate this integrated card