GPU-Programming - a few questions to get started

Aug 11, 2011 at 9:29am
I've written a primitive tile rendering engine and noticed that it renders slowly on the CPU. That's why I want to get into GPU programming now.

What I'm planning to do is manage an array of tile indices on the CPU, pass it to the GPU, and have the GPU render the texture of each tile at its given position.

But I have no clue where to start now. I have three questions to get started:
* Is it possible (and fast) to transfer a whole array, or several incremental changes, to the GPU at runtime?
* Are shaders what I'm looking for? I've heard about them, but I'm not sure whether they are just modifications of already existing engines or real code that runs on the GPU (in other words: the most direct access to the GPU possible)?
* How do I know when the GPU is ready to receive the next data (for example, camera offset changes)? I don't want to interrupt a GPU pass mid-frame and cause rendering mistakes.
Aug 11, 2011 at 10:32am
I think the easiest way to get GPU-accelerated 2D graphics is to use the Cairo library ( http://cairographics.org/ )
Aug 11, 2011 at 11:19am
closed account (zb0S216C)
CUDA[1] is your answer. Assuming your graphics card supports CUDA, you're good to go.

References:
[1]http://developer.nvidia.com/category/zone/cuda-zone


Wazzak
Aug 11, 2011 at 2:35pm
closed account (1yR4jE8b)
Or you could use OpenCL, which runs on NVIDIA, ATI, and Intel graphics chips instead of just NVIDIA's.
http://www.khronos.org/opencl/
Aug 11, 2011 at 3:26pm
He is talking about rendering. Aren't CUDA and OpenCL for data processing?
Aug 12, 2011 at 8:26am
Well, thank you, but none of you have directly answered my questions yet.

I think shaders do what I want. In the meantime I've found GLSL and am reading its manual.

All I want is for the CPU to give the GPU an array of tile indices and camera offsets and have the GPU process them. The CPU will be too busy to process every tile itself and tell the GPU to draw each texture individually.

As far as I've learned from the manual, I could write a vertex shader with each vertex carrying information about its texture. I assume I have to send a flat model of vertices to the GPU and update a vertex's information from the CPU as soon as its texture should change.
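A minimal GLSL vertex shader for that approach might look like the sketch below (old-style GLSL 1.20 with attribute/varying). The names position, texCoord and cameraOffset are placeholders I chose for the sketch; the camera offset is a single uniform the CPU updates each frame, which is a very cheap transfer:

```glsl
// Sketch of a tile-rendering vertex shader (GLSL 1.20 style).
#version 120
attribute vec2 position;    // tile-quad corner in world units
attribute vec2 texCoord;    // atlas coordinate derived from the tile index
uniform vec2 cameraOffset;  // updated from the CPU each frame
varying vec2 vTexCoord;

void main() {
    vTexCoord = texCoord;
    gl_Position = gl_ModelViewProjectionMatrix
                * vec4(position - cameraOffset, 0.0, 1.0);
}
```

A matching fragment shader would then just sample the atlas, e.g. gl_FragColor = texture2D(atlas, vTexCoord);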
Topic archived. No new replies allowed.