I just finished going through NeHe's OpenGL tutorials and I'm trying to understand how to program movement through the world based on the movement of a camera, instead of having the world move in the direction opposite to wherever the input tells the camera to go.
You don't really move a camera; you translate, rotate, etc. each vertex in the "world" to emulate movement. Your "camera" is always in the same spot. Or... I guess it doesn't have to be, depending on your idea of how movement works.
For example, in our universe we know the world doesn't revolve around us (in the literal sense). But in the digital universe, it does. The definition of movement can't be the same as in the real world; the rules are different.
OK, so it's fine to modify the coordinates of the objects in the world as I'm moving through it?
Because in 2D, the way I did it is that I had fixed coordinates for my objects, say ObjectX and ObjectY, and as I moved through the world I had CameraX and CameraY that were modified based on movement. So MoveRight means CameraX += 1; and MoveDown means CameraY += 1;
Then, when calling the rendering of an object, I would pass ObjectX - CameraX and ObjectY - CameraY instead of passing ObjectX and ObjectY. ObjectX and ObjectY are never modified; a new value offset by the camera is passed, which makes collisions much simpler.
It's simpler because you have world coordinates that can be used for collisions and screen coordinates for rendering: ObjectX - CameraX is a screen position, while ObjectX is a world position.
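In code, that 2D offset idea might look something like this. This is only a minimal sketch; Camera2D, Object2D, MoveRight and DrawSpriteAt are hypothetical names, not from any particular engine:

// World coordinates stay fixed; the camera offset is only applied at draw time.
struct Camera2D { float x = 0.0f, y = 0.0f; };
struct Object2D { float x = 0.0f, y = 0.0f; };   // world position, never modified

void MoveRight(Camera2D &cam) { cam.x += 1.0f; } // input moves the camera...
void MoveDown (Camera2D &cam) { cam.y += 1.0f; }

void DrawSpriteAt(float screenX, float screenY); // placeholder for the actual draw call

void RenderObject(const Object2D &obj, const Camera2D &cam)
{
    // screen position = world position - camera position
    DrawSpriteAt(obj.x - cam.x, obj.y - cam.y);
}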
So you are saying that it basically is the same in OpenGL 3D?
Oh, I think I get it: what we are actually doing is offsetting the glTranslatef position by the camera position!
int DrawGLScene(GLvoid)                                  // Here's Where We Do All The Drawing
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // Clear The Screen And The Depth Buffer
    glLoadIdentity();                                     // Reset The View

    return TRUE;                                          // Everything Went OK
}
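In that fixed-function style, "moving the camera" just means applying the inverse of the camera's transform right after glLoadIdentity(), before drawing the objects at their world positions. A rough sketch (cameraX/Y/Z, cameraYaw, objectX/Y/Z and DrawObject() are hypothetical names, not from the NeHe code):

    glLoadIdentity();                                 // Reset The View
    glRotatef(-cameraYaw, 0.0f, 1.0f, 0.0f);          // undo the camera's rotation
    glTranslatef(-cameraX, -cameraY, -cameraZ);       // undo the camera's position

    glPushMatrix();
    glTranslatef(objectX, objectY, objectZ);          // object's world position, unchanged
    DrawObject();                                     // hypothetical draw call
    glPopMatrix();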
I'm complaining that toy problems to learn the basics require such "new" hardware.
Edit: I suppose the quotes may be misinterpreted. I understand that four years is a long time in our industry, but updating hardware is expensive (consider having to replace an entire lab).
The idea of programmable shaders (which is primarily what is hardware-demanding about OpenGL 3.3) predates 2010. So even if your hardware is older than four years, you might be able to get away with driver updates.
And if all else fails, you can always use Mesa for software-based (hardware-accelerated where possible) OpenGL 3.0 graphics, I'm pretty sure. See http://mesa3d.org/
95%+ of GPUs support DirectX 10, which is roughly equivalent to OpenGL 3. That's not to say some OpenGL 4 features aren't available too, specifically some shader-related ones that aren't really hardware issues. Probably the biggest one for me is explicit uniform locations, which is still supported even on my last GPU, and that's about a decade old.
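For reference, explicit uniform locations (core in GL 4.3, or via the ARB_explicit_uniform_location extension) let you skip the runtime glGetUniformLocation lookup. A rough sketch, assuming a current GL context, loaded function pointers, and a hypothetical uniform named uMVP:

    GLfloat mvp[16] = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};  // placeholder matrix

    // Without explicit locations: query the location by name at runtime.
    GLint loc = glGetUniformLocation(program, "uMVP");
    glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);

    // With explicit locations, the shader declares e.g.
    //     layout(location = 0) uniform mat4 uMVP;
    // and the C++ side can just use that number directly:
    glUniformMatrix4fv(0, 1, GL_FALSE, mvp);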
Your scene is moving around your camera. By default the camera sits at the origin facing down the negative z-axis, i.e. toward (0, 0, -1), where the order is x, y, z.
Also, you don't want to use those functions! Instead, you'll either want to create your own (which is nothing but matrix manipulation, mind you) or use a library that does it for you, like GLM, to create the matrices required to "move" your objects or camera.
EDIT: In other words, you can move your camera by manipulating the view matrix. Eh... I know some people like to hide the math behind libraries, but it makes more sense if you do the math yourself for a few vertices: take a vertex, multiply the model matrix by it, multiply the result by the view matrix, then multiply that result by the projection matrix. This process is described mathematically in every OpenGL book, even though they generally provide tools to avoid it (something I don't really like).
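For what it's worth, here's roughly what that looks like with GLM. Just a sketch: the eye position, object position, field of view, and vertex are made-up values:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::translate, glm::lookAt, glm::perspective
#include <cstdio>

int main()
{
    // Model matrix: places the object in the world (made-up position).
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(2.0f, 0.0f, -5.0f));

    // View matrix: the "camera". Moving the camera really means changing this matrix.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 1.0f, 3.0f),   // eye (camera position)
                                 glm::vec3(0.0f, 0.0f, 0.0f),   // point it looks at
                                 glm::vec3(0.0f, 1.0f, 0.0f));  // up direction

    // Projection matrix: perspective with a 45-degree vertical FOV.
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);

    // One vertex pushed through the whole chain by hand.
    glm::vec4 vertex(1.0f, 1.0f, 0.0f, 1.0f);
    glm::vec4 clip = proj * view * model * vertex;

    std::printf("clip space: %f %f %f %f\n", clip.x, clip.y, clip.z, clip.w);
    return 0;
}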