2D matrix of pixels to screen

I have a program (a slime mold simulation) that generates a big 2D array of values. I want to draw this matrix to the screen as pixels in real time. I see a lot of solutions that can make an image from a 2D array, but those are not fast enough to get a reasonable FPS at HD resolution (1920x1080, for example).

At the moment I use Allegro 5, as that is the only visualization library that I know how to use (and the only one I know of that is easily installed using NuGet). I often have a lot of problems getting C++ libraries to work because I always just use NuGet (all the linking stuff is hard, okay).

By drawing to a locked bitmap and skipping black pixels I can squeeze some performance out of that, but at a quarter of a million particles the draw time ranges from just as long as the (unoptimized) simulation time (63 ms) to three times as long, depending on the number of black pixels.
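For reference, the locked-bitmap approach looks roughly like this (a simplified sketch with placeholder names, not my exact code; here every pixel is written, while my version skips the black ones):

void draw_array_locked(int* field, ALLEGRO_BITMAP* bmp, int width, int height, int maxFieldval)
{
	// Redirect drawing to the memory bitmap and lock it, so al_put_pixel
	// writes straight into CPU memory instead of issuing GPU calls.
	ALLEGRO_BITMAP* old_target = al_get_target_bitmap();
	al_set_target_bitmap(bmp);
	al_lock_bitmap(bmp, ALLEGRO_PIXEL_FORMAT_ANY, ALLEGRO_LOCK_WRITEONLY);

	for (int y = 0; y < height; y++) {
		for (int x = 0; x < width; x++) {
			// Normalize the field value to [0, 1] and write a gray pixel.
			float l = field[y * width + x] / static_cast<float>(maxFieldval);
			al_put_pixel(x, y, al_map_rgb_f(l, l, l));
		}
	}

	al_unlock_bitmap(bmp);
	al_set_target_bitmap(old_target);
	al_draw_bitmap(bmp, 0, 0, 0); // blit the finished frame to the backbuffer
}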

At the moment I'm doing the preprocessing of the pixels (scaling, making pretty colors) by hand in CUDA, which wasn't that hard because I'm doing the simulation in CUDA anyway.

Allegro has access to fragment shaders, but the documentation is difficult for me to understand, as I don't know much about what's going on there. So if there is an easier option, I'd rather not mess with shaders.

I know there are a couple of options out there, but before I go out of my way to learn a completely new library I would like to know which one is best for this project, as I don't have much free time to program.

Long story short, my question is: what is the quickest way to draw a 2D array to the screen in real time?

Example of my current output (frames saved and stitched together into a video): https://youtu.be/U1YVSvEcLZ4

Edit: My platform is Windows and I am programming in Visual Studio.
What OS?
Modern graphics cards can draw 2D, even at high resolution, at 60+ FPS with little trouble. It's likely the card can draw faster than you can generate the pixels.
The answer is likely SFML, but let's get all the info first.
Hey jonnin, sorry about that; I've added the information.
Maybe take a look at the example here: http://www.rastertek.com/dx11tut11.html
Take their example project and rewire it to just draw an RGB array that you provide as a function, then connect the two?

You can also try other libraries. DirectX is really meant for 3D; the 2D stuff is kind of left over from DirectDraw way back when, but it's in there.
Typically 2D just has two buffers: you draw one while the other is being updated, then reverse and draw the new one while filling in the old one, back and forth.
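The idea in plain C++, no library attached (just a sketch):

#include <utility>
#include <vector>

int main()
{
	const int W = 1920, H = 1080;
	// Two pixel buffers: the screen shows one while the sim fills the other.
	std::vector<unsigned> a(W * H), b(W * H);
	unsigned* front = a.data(); // being displayed this frame
	unsigned* back  = b.data(); // being filled this frame

	for (;;) {
		// ... simulation writes the next frame into 'back' ...
		// ... renderer presents 'front' to the screen ...
		std::swap(front, back); // flip roles for the next frame
	}
}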

If this isn't a good example, see if you can find a better one for 2D. It looks a little over-cooked, but that's typical with this kind of question: a simple 'me draw pixels' example is tough to run down; everyone wants to pile a bunch of crap you don't need on top of it, making it tough to get started with the basics.

DX12 is out, but 11 has more examples. You may not want to go bleeding edge? The older stuff may work fine in 12, or not; they tend to screw stuff up on major version updates.
Sorry for not replying; I've been playing around with different options. I saw the use of vertex buffers in your link and found out that Allegro 5 (what I was using anyway) also supports this. I used the al_draw_prim() function with the ALLEGRO_PRIM_POINT_LIST format to draw all pixels as points to the screen, which was much, much faster. It probably isn't the most efficient way, as every point now carries the coordinates at which it will be drawn, which is a bit of overkill if you want to draw all the pixels on screen, but it works pretty well this way. It also allows the use of textures to color the pixels, which is an added bonus. This is the code I used, in case anyone in the future has the same problem I had. Funnily enough, it is actually a lot shorter than the code I used before:
void draw_array_vertex(int* field, ALLEGRO_VERTEX* vertices, ALLEGRO_DISPLAY* display, ALLEGRO_BITMAP* texture, Settings settings) {
	// With a texture, the leftmost texel is the background color;
	// without one, the background is plain black.
	int texWidth = 1;
	if (texture != NULL) {
		al_clear_to_color(al_get_pixel(texture, 0, 0));
		texWidth = al_get_bitmap_width(texture);
	}
	else
		al_clear_to_color(al_map_rgb_f(0, 0, 0));

	// Build one point-vertex per non-black pixel.
	int nonBlack = 0;
	for (int x = 0; x < settings.width; x++) {
		for (int y = 0; y < settings.height; y++) {

			// Normalize the field value to [0, 1].
			float l = field[y * settings.width + x] / static_cast<float>(settings.maxFieldval);

			if (l != 0) {
				vertices[nonBlack].x = x;
				vertices[nonBlack].y = y;
				vertices[nonBlack].z = 0; // must be set; garbage z can make points vanish

				if (texture != NULL) {
					// Sample the 1D texture: u selects the texel, and a white
					// vertex color lets the texel color pass through unmodified.
					vertices[nonBlack].color = al_map_rgb_f(1, 1, 1);
					vertices[nonBlack].u = static_cast<int>(l * (texWidth - 1));
					vertices[nonBlack].v = 0;
				}
				else {
					// No texture: grayscale from black to white.
					vertices[nonBlack].color = al_map_rgb_f(l, l, l);
				}
				nonBlack++;
			}
		}
	}

	// Draw all collected vertices in a single call, as individual points.
	al_draw_prim(vertices, NULL, texture, 0, nonBlack, ALLEGRO_PRIM_POINT_LIST);
}


(This code uses a 1D texture to color the pixels if one is provided; if no texture is provided, it colors them on a black-to-white scale.)
Nice! Is it fast enough now?
- Is texture likely to become null midstream, such that checking it in the INNER loop is appropriate, or can you check it once before both loops and stop there? (This plays off the next idea of a default color.)
There may be other stuff you can factor around ... like setting a default color on all vertices to avoid that else or something inside; see the sketch below. But if it's fast enough, you can leave it be too.
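Roughly what I mean, reusing your names (an untested sketch, not a drop-in replacement): the texture check becomes a single bool read, and the white color is built once instead of per pixel:

void draw_array_vertex2(int* field, ALLEGRO_VERTEX* vertices, ALLEGRO_BITMAP* texture, Settings settings)
{
	// Decide once, outside the loops, instead of per pixel.
	const bool useTexture = (texture != NULL);
	const int texWidth = useTexture ? al_get_bitmap_width(texture) : 1;
	const ALLEGRO_COLOR white = al_map_rgb_f(1, 1, 1); // built once, reused

	al_clear_to_color(useTexture ? al_get_pixel(texture, 0, 0) : al_map_rgb_f(0, 0, 0));

	int nonBlack = 0;
	for (int x = 0; x < settings.width; x++) {
		for (int y = 0; y < settings.height; y++) {
			float l = field[y * settings.width + x] / static_cast<float>(settings.maxFieldval);
			if (l != 0) {
				vertices[nonBlack].x = x;
				vertices[nonBlack].y = y;
				vertices[nonBlack].z = 0;
				vertices[nonBlack].color = useTexture ? white : al_map_rgb_f(l, l, l);
				vertices[nonBlack].u = l * (texWidth - 1); // ignored when there is no texture
				vertices[nonBlack].v = 0;
				nonBlack++;
			}
		}
	}

	al_draw_prim(vertices, NULL, texture, 0, nonBlack, ALLEGRO_PRIM_POINT_LIST);
}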
It now takes around 23 ms to draw the pixels when most of the 1920x1080 pixels aren't black without a texture, or around 38 ms with a texture. That makes it some 70 times quicker than my original code. I can speed this up a little by distributing the filling of the vertex array over a couple of threads, and I can speed up the whole program a lot by doing this at the same time as running the simulation on the GPU. When I distribute the load over multiple threads I draw all pixels, because skipping black pixels actually takes longer due to the added complexity. Everything combined I get around 30 FPS while using a texture, or lower when the simulation takes more time, but that isn't because of the drawing, of course.
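In case it helps anyone, the threaded fill looks roughly like this (a simplified, grayscale-only sketch; since every pixel gets a vertex, slot y * width + x is fixed and the threads never write to the same element):

#include <allegro5/allegro_primitives.h>
#include <thread>
#include <vector>

// Each thread fills the vertices for its own band of rows.
void fill_rows(int* field, ALLEGRO_VERTEX* vertices, int width, int maxFieldval, int yBegin, int yEnd)
{
	for (int y = yBegin; y < yEnd; y++) {
		for (int x = 0; x < width; x++) {
			int i = y * width + x;
			float l = field[i] / static_cast<float>(maxFieldval);
			vertices[i].x = x;
			vertices[i].y = y;
			vertices[i].z = 0;
			vertices[i].color = al_map_rgb_f(l, l, l);
		}
	}
}

void fill_threaded(int* field, ALLEGRO_VERTEX* vertices, int width, int height, int maxFieldval, int nThreads)
{
	std::vector<std::thread> pool;
	const int band = height / nThreads;
	for (int t = 0; t < nThreads; t++) {
		int yBegin = t * band;
		int yEnd = (t == nThreads - 1) ? height : yBegin + band; // last thread takes the remainder
		pool.emplace_back(fill_rows, field, vertices, width, maxFieldval, yBegin, yEnd);
	}
	for (std::thread& th : pool)
		th.join();
}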
Yes, that often astonishes even experienced programmers: you can sometimes do more work faster than you can avoid work with complex logic. Computers are really, really good at doing the same thing over and over. They are frequently a good bit worse at doing the same thing over and over sometimes, and something else other times. That was the basic idea behind asking whether you can set a default color and skip some of the decision making.
Does Allegro not have a way to directly draw an image on the framebuffer? It seems pretty silly to draw it as a list of individual points when you already have the color information loaded in a bitmap. The typical solution is to upload the bitmap to a streaming texture (a texture that the GPU knows will be updated frequently) and then draw the texture with a quad covering the entire screen.
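As one concrete instance of that pattern, with SFML (mentioned earlier in the thread) it looks roughly like this; a sketch only, assuming the frame has already been converted to 8-bit RGBA:

#include <SFML/Graphics.hpp>
#include <vector>

int main()
{
	const unsigned W = 1920, H = 1080;
	sf::RenderWindow window(sf::VideoMode(W, H), "slime");
	sf::Texture tex;
	tex.create(W, H);
	sf::Sprite sprite(tex); // effectively a screen-covering quad
	std::vector<sf::Uint8> pixels(W * H * 4, 0); // RGBA, one byte per channel

	while (window.isOpen()) {
		sf::Event e;
		while (window.pollEvent(e))
			if (e.type == sf::Event::Closed)
				window.close();

		// ... simulation writes the new frame into 'pixels' ...
		tex.update(pixels.data()); // upload the whole frame to the GPU
		window.draw(sprite);
		window.display();
	}
}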
The values are stored in a dynamic array of integers so I can easily pass it to the GPU and do calculations with it. My first approach was to lock a bitmap in CPU memory, write all the values to that, and then draw that bitmap to the screen. That, however, wasn't fast enough. I don't know anything about streaming textures, so I can't say too much about that.