Math in programming

Again, thanks for all of the replies.

I suppose I'll know how hard it is when I try it ;). As of now I'm learning sprite-based graphics, but once I've learned that I'll probably try OpenGL, or maybe something like Irrlicht ( ;) ), to see whether it's as hard as some say. It also seems that it's not just graphics that involves math, but AI too; I suppose I'll get to that once I've learned the basics of SFML and can make a Pacman game or something. Or is it only 3D AI that is hard?

You all seem to have different opinions on this subject and it's very interesting to read them, so please continue if there is more to say.
It seems like pretty fun math, though, especially since it's math that you actually put into practice.
Last edited on
kfmfe04 wrote:
So how hard the math is depends on what problem you are trying to solve.
.. and on what level of abstraction you're working.

EDIT:
@Calsong:
Yes, these are very interesting math topics and concepts, although most are just applications of general subjects, and others lean more on computer science. AI, for example, is strongly tied to the subject usually called "neural networks". As for the "Pacman" example: Pacman doesn't really have an AI.

AI, Artificial Intelligence, is the subject that goes into creating a "bot" (often also called a "computer player") that simulates a real player. That means it "reacts" to the environment, of which the human players are a part. This is deterministic: the same situation yields the same reaction. Pacman's so-called AI, however, uses a less sophisticated way to simulate a computer player: randomness. When a ghost can change direction (other than reversing 180 degrees) during a normal run, it has a 1/8 chance of taking that turn; when it hits a wall, it randomly picks a valid direction. The player's interpretation can make it seem as if the ghosts are "chasing them", "closing them in", "picking at them one at a time" or "going all out", but that is just interpretation.
Whether a really good AI should lean heavily on this kind of interpretation, or much less so, is open to discussion. I myself think that simulated AI is not a bad thing to aim for, as long as it isn't noticeable. If randomness gives a lifelike simulation, randomness it is.
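Just to make the scheme above concrete, here is a rough sketch of one "random ghost" decision step. The names (Dir, canMove, chooseDirection) and the structure are made up for illustration; the 1/8 figure comes from the description above, not from the actual Pacman code.

#include <cstdlib>
#include <ctime>
#include <iostream>

// Hypothetical direction type, for illustration only.
enum Dir { UP, DOWN, LEFT, RIGHT };

Dir opposite(Dir d)
{
    switch (d) {
        case UP:   return DOWN;
        case DOWN: return UP;
        case LEFT: return RIGHT;
        default:   return LEFT;
    }
}

// Placeholder: in a real game this would consult the maze layout.
bool canMove(Dir) { return true; }

Dir chooseDirection(Dir current)
{
    // Keep going straight most of the time; 1 chance in 8 of turning.
    if (canMove(current) && std::rand() % 8 != 0)
        return current;

    // Blocked (or the 1-in-8 turn came up): pick any valid direction
    // except a full 180-degree reversal.
    Dir d;
    do {
        d = static_cast<Dir>(std::rand() % 4);
    } while (d == opposite(current) || !canMove(d));
    return d;
}

int main()
{
    std::srand(static_cast<unsigned>(std::time(0)));
    Dir ghost = LEFT;
    for (int step = 0; step < 10; ++step)
        ghost = chooseDirection(ghost);
    std::cout << "final direction: " << ghost << '\n';  // enum printed as an int
}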
Last edited on
Usually AI involves some kind of training or learning about the environment or the ideal case.
The agent will change its behavior based on what it has learnt.

Mathematically, this is often viewed as an optimization problem.
There is no perfect solution, only better or worse ones.
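As a minimal sketch of that optimization view (the score function below is an arbitrary stand-in, not any particular AI problem): an "agent" that learns by trial and error, keeping whichever parameter value has worked best so far.

#include <cstdlib>
#include <iostream>

// Arbitrary stand-in for "how well did the agent do with this parameter?"
double score(double x)
{
    return -(x - 3.0) * (x - 3.0);   // best possible value is at x == 3
}

int main()
{
    std::srand(42);
    double best = 0.0, bestScore = score(best);

    // Trial-and-error "learning": perturb the current best and keep
    // whatever scores better.
    for (int trial = 0; trial < 1000; ++trial) {
        double candidate = best + (std::rand() / (double)RAND_MAX - 0.5);
        if (score(candidate) > bestScore) {
            best = candidate;
            bestScore = score(candidate);
        }
    }
    std::cout << "learned parameter: " << best << '\n';   // ends up close to 3
}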
Last edited on
That depends on the area of application. In some cases it is possible to create a "perfect AI". In Tic-Tac-Toe, for instance, it's easy to create an AI that never loses (and exploits the human player's mistakes when possible). The question is whether you actually want that. Would it be fun to face an enemy that knows exactly what to do in every situation, so that it never loses? This is where randomness, or rather an offset in the determinism (abstractly put, hiding data from the AI) comes in. Then again, it depends on the area of application.
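For the Tic-Tac-Toe case, the usual technique is minimax over the full game tree. A compressed sketch (board as a 9-character array, no alpha-beta pruning, 'X' is the computer):

#include <iostream>

// 'X' = computer, 'O' = human, ' ' = empty.
char winner(const char b[9])
{
    const int L[8][3] = { {0,1,2},{3,4,5},{6,7,8},
                          {0,3,6},{1,4,7},{2,5,8},
                          {0,4,8},{2,4,6} };
    for (int i = 0; i < 8; ++i)
        if (b[L[i][0]] != ' ' && b[L[i][0]] == b[L[i][1]]
                              && b[L[i][1]] == b[L[i][2]])
            return b[L[i][0]];
    return ' ';
}

// Returns +1 if 'X' can force a win, -1 if 'O' can, 0 for a draw.
int minimax(char b[9], bool xToMove)
{
    char w = winner(b);
    if (w == 'X') return  1;
    if (w == 'O') return -1;

    int best = xToMove ? -2 : 2;
    bool moved = false;
    for (int i = 0; i < 9; ++i) {
        if (b[i] != ' ') continue;
        moved = true;
        b[i] = xToMove ? 'X' : 'O';
        int v = minimax(b, !xToMove);
        b[i] = ' ';
        if (xToMove ? v > best : v < best) best = v;
    }
    return moved ? best : 0;   // no moves left: draw
}

// Pick the move with the best minimax value for 'X'.
int bestMove(char b[9])
{
    int best = -2, move = -1;
    for (int i = 0; i < 9; ++i) {
        if (b[i] != ' ') continue;
        b[i] = 'X';
        int v = minimax(b, false);
        b[i] = ' ';
        if (v > best) { best = v; move = i; }
    }
    return move;
}

int main()
{
    char board[9] = { 'O',' ',' ',  ' ','X',' ',  ' ',' ',' ' };
    std::cout << "computer plays square " << bestMove(board) << '\n';
}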
Well, the problems that have deterministic solutions aren't the interesting problems in AI.

If you can write a straight-up algorithm for your problem, you don't need AI - I mean, in that case, the intelligence comes from the mathematician's brain, not from the silicon.

The hard AI problems involve things like picking stocks, answering like a human being (or building a universal translator), a program that can replace programmers, etc... I'm giving extreme examples.

Now, there are hybrid problems like chess. IMHO, chess programs that use too much brute force computational power or are heavily based on libraries of openings are not very AI at all - they are more algorithmic.

(Sorry for hijacking the thread - we veered off from math required for programming to AI)
Last edited on
You all seem to have different opinions on this subject
That's mostly because we define 'hard' differently.
Pacman's so-called AI, however, uses a less sophisticated way to simulate a computer player: randomness
Not really... see http://www.atariage.com/forums/topic/68707-pac-man-ghost-ai-question/

kfmfe04, could you explain where you need hard maths in writing AI? I've never done this, but it sounds a lot more like a matter of program structure than mathematical calculations..
Last edited on
Everything done by a computer is a mathematical calculation or operation. There are even new philosophies that look at nature as a computing machine through the eyes of physics:

http://www.springerlink.com/content/v46w77m702372263/

For example, AI, in particular has problems with pattern recognition: classic chess programs are closer to the algorithmic end of the scale, whereas the game of go requires pattern recognition. It is no accident that programs can be written to beat professional human players in chess, but not (yet) in go (last I checked, most go programs are intermediate).

Furthermore, let me say that I am defining AI in the classic broad sense, not just in the narrow sense of creating agents in games. AI in the broad sense is an on-going, unsolved computer science problem. We are talking about hard problems - writing programs that seek to simulate creativity, learning, perception, and understanding. There are enough researchers working hard that most unsolved issues are non-trivial.

Just scratch the surface of any of the topics listed here and you will find hard problems:

http://en.wikipedia.org/wiki/AI-complete

With AI, it's often not even the case that the math is hard. The underlying models haven't been well formed or defined yet. If the models were good and effective, creating AI would simply be an exercise in understanding the math and coding it up!

Part of the problem may be in the nature of intelligence. Even in a neural net, much of the "smartness" seems to be in the inter-relationships between nodes and not in the nodes themselves. Somehow, the whole is greater than the sum of its parts - typically not an easy thing for scientists to analyze (non-linear systems are much harder than linear systems).
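To make the "simple nodes" point concrete: a single node of a basic feed-forward net is little more than a weighted sum pushed through a squashing function; all the interesting behaviour comes from how many of them are wired together and how the weights are trained. A sketch of one node, with hand-picked weights purely for illustration:

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

// One node of a feed-forward neural net: a weighted sum of its
// inputs pushed through a sigmoid "squashing" function.
double neuron(const std::vector<double>& inputs,
              const std::vector<double>& weights, double bias)
{
    double sum = bias;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    return 1.0 / (1.0 + std::exp(-sum));   // sigmoid activation
}

int main()
{
    std::vector<double> in(2), w(2);
    in[0] = 1.0; in[1] = 0.0;
    w[0]  = 4.0; w[1]  = 4.0;
    // With these hand-picked weights and bias the node behaves roughly like OR.
    std::cout << neuron(in, w, -2.0) << '\n';   // close to 1
}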
Last edited on
closed account (EzwRko23)
Writing simulators of any kind (e.g. electronic circuit simulator or physics simulator) is also nothing but math, full of traps. Many formulas learned at high-school / university "don't work" properly in computers.
Many formulas learned at high-school / university "don't work" properly in computers.
xorebxebx is 100% right!

Try this trivial elementary school example in /tmp/junk.cpp:
#include <iostream>

int main()
{
  // None of 0.05, 0.07 or 0.12 is exactly representable in binary floating point.
  float a=0.05,b=0.07,c=0.12;
  std::cerr << ((a+b)==c) << std::endl;  // prints 0, not the expected 1
}


/tmp$ gcc /tmp/junk.cpp -o junk -lstdc++ && ./junk
0
/tmp$ 

The output is obviously wrong in the mathematical sense, but correct with respect to how floating-point arithmetic works.

The explanation for this monstrosity (at least for students new to programming) can be fairly simple:

http://en.wikipedia.org/wiki/Floating_point

or, for the full treatment, it takes some non-trivial discrete math:

http://docs.sun.com/source/806-3568/ncg_goldberg.html
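When floating point has to be used at all, the usual (imperfect) workaround is to compare against a small tolerance rather than testing for exact equality; picking the tolerance is itself a judgment call. For example:

#include <cmath>
#include <iostream>

int main()
{
  float a = 0.05f, b = 0.07f, c = 0.12f;
  // Exact comparison fails, but a "close enough" comparison passes.
  std::cout << (std::fabs((a + b) - c) < 1e-6f) << std::endl;   // prints 1
}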

A practical aside: what if you were writing an accounting system and those numbers were to represent 5 cents, 7 cents, and 12 cents? This is the reason a proper Money class uses some form of Decimal class/type (perhaps implemented with ints or longs) and not floating point.
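A minimal sketch of that idea (a hypothetical Money type storing whole cents in a long), so that 5 + 7 really does equal 12:

#include <iostream>

// Hypothetical minimal Money type: stores whole cents in an integer,
// so additions and comparisons are exact.
class Money {
    long cents_;
public:
    explicit Money(long cents) : cents_(cents) {}
    Money operator+(const Money& rhs) const { return Money(cents_ + rhs.cents_); }
    bool operator==(const Money& rhs) const { return cents_ == rhs.cents_; }
    long cents() const { return cents_; }
};

int main()
{
    Money a(5), b(7), c(12);                  // 5, 7 and 12 cents
    std::cout << ((a + b) == c) << std::endl; // prints 1, as expected
}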

Oh, and xorebxebx's first comment:
Writing simulators of any kind (e.g. electronic circuit simulator or physics simulator) is also nothing but math, full of traps.

is also very true. I remember writing a planetary gravity simulation in high school on an original 1984 Mac, using simple physics formulae to simulate a solar system. It's not too hard to get a planet to rotate around a sun (seeing Kepler's laws in action is very geeky-cool), but it's actually not so easy to get a moon around a planet and a planet around a sun to stabilize the way you'd like!
Last edited on
Thumper again: That doesn't seem like very hard math, though. Is it? I mean, you don't have to come up with the formulas yourself, you just have to remember them. Or is there more math that is harder? Or is it harder than it looks?


Many math books, at least some of the ones I've seen, will give you the basic formulas, but often you don't need the basic formula; you need a modified or more complex formula built on it, so in that sense you will have to come up with some of the formulas on your own. That, again, reiterating what others have said, depends on what you are going to be programming.

For example, a cryptography book I had once gave the formula for finding the inverse of a 2x2 matrix modulo m for a Hill cipher. Unfortunately, the general formula for an NxN matrix is far more complex, and even though way back in the appendix this book did give the formula for inverting an NxN matrix, it was so confusing I needed to find a different book to explain it. So, bottom line: the more math the merrier.
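For the 2x2 case the formula in question is probably equivalent to this: compute the determinant, invert it modulo m, and multiply by the adjugate. A sketch (mod 26, brute-force modular inverse, no error handling beyond checking invertibility):

#include <iostream>

// Smallest non-negative residue of x mod m (handles negative x).
int mod(int x, int m) { return ((x % m) + m) % m; }

// Modular inverse by brute force (fine for small m like 26);
// returns -1 if no inverse exists.
int modInverse(int a, int m)
{
    a = mod(a, m);
    for (int x = 1; x < m; ++x)
        if (mod(a * x, m) == 1) return x;
    return -1;
}

// Inverse of the 2x2 matrix [[a,b],[c,d]] modulo m, written into inv[4].
// Returns false if the matrix is not invertible mod m.
bool invert2x2(int a, int b, int c, int d, int m, int inv[4])
{
    int detInv = modInverse(a * d - b * c, m);
    if (detInv == -1) return false;
    inv[0] = mod( d * detInv, m);
    inv[1] = mod(-b * detInv, m);
    inv[2] = mod(-c * detInv, m);
    inv[3] = mod( a * detInv, m);
    return true;
}

int main()
{
    int inv[4];
    // Classic Hill cipher example key [[3,3],[2,5]] mod 26; inverse is [[15,17],[20,9]].
    if (invert2x2(3, 3, 2, 5, 26, inv))
        std::cout << inv[0] << ' ' << inv[1] << '\n'
                  << inv[2] << ' ' << inv[3] << '\n';
}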


And getting back to the main question, if I can add to it, I'd also like to ask which additional college-level math courses would best prepare one for computer science. I already know about Discrete Math, Linear Algebra, Statistics, Algebra, etc., but what are some other courses that would be well suited for a computer programmer or scientist?

In various texts I have seen Discrete Structures, Fourier Analysis, Wave Theory, Complex Number Theory & Analysis, and a few others listed as being well suited for computer science (and a few for electrical & computer engineering), but apart from that I don't really know much about them. What are they about? Can anyone here describe them and how they might be useful? I'd also like to make a conjecture: seeing 'Applied Combinatorics' listed in university mathematics catalogs, and reading that it is about the 'mathematics of counting', doesn't really tell me what it's about, but I can only assume it has to do with permutations? Would it be helpful, say, if I wanted to determine how many valid sudoku puzzles there are, or, given a simple monoalphabetic substitution cipher such that no plaintext letter is mapped onto itself, what the size of the key space is? Both are questions I have tried to solve at various times.

For a sudoku, the first row gives 9! possibilities.
For the second row, the first three cells give 6*5*4; the next three might also give 6*5*4, but depending on whether the numbers in the second box of the first row match any of the numbers in the first box of the second row, it might be yada yada yada... and who knows what for the remainder of the sudoku.

And for a monoalphabetic substitution cipher over a 26-letter alphabet:
to ensure nontrivial keys you want no letter to be mapped to itself, and no letter to be mapped to the immediately preceding or following letter. For that matter, no letter may be shifted by the same amount as the letter it maps to, or the letter that maps to it.

Yes, I know that last sentence doesn't make any sense, but I think you get the idea. And if you get the idea, that must mean it does make sense.

Would that be something like:
25 (a letter cannot be mapped to itself, so 25 possibilities for the first letter) * 23 (if A is mapped to B, then B cannot be mapped to A, to B, or to C)... and here is where it gets tricky:
is it merely 25 * 23!, or something else? Because, like the sudoku problem above, I can't always pin down how many permutations there are, since each new factor seems to depend on what some of the previous letters were mapped to.

I hope someone can understand what I'm saying and can tell me which branch of mathematics this falls under. I also hope I haven't veered off course, and that the others on this forum find this intriguing too.
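(For what it's worth, this kind of counting is exactly what combinatorics covers. For the simplest version of the cipher question, with only the "no letter maps to itself" rule and ignoring the extra neighbour constraints, the answer is the number of derangements of 26 items, which satisfies the recurrence D(n) = (n-1)(D(n-1) + D(n-2)) and is easy to compute:)

#include <iostream>

// Number of derangements (permutations with no fixed point) of n items,
// via the recurrence D(n) = (n-1) * (D(n-1) + D(n-2)).
// double is used because D(26) has 27 digits and overflows 64-bit ints,
// so for large n the result is an approximation.
double derangements(int n)
{
    if (n == 0) return 1;
    if (n == 1) return 0;
    double prev2 = 1, prev1 = 0;   // D(0), D(1)
    double cur = 0;
    for (int k = 2; k <= n; ++k) {
        cur = (k - 1) * (prev1 + prev2);
        prev2 = prev1;
        prev1 = cur;
    }
    return cur;
}

int main()
{
    // Substitution-cipher keys on 26 letters with no letter mapped
    // to itself: roughly 1.48e26 of them.
    std::cout << derangements(26) << std::endl;
}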
Last edited on
closed account (EzwRko23)

It's not too hard to get a planet to rotate around a sun (seeing Kepler's laws in action is very geeky-cool)


Oh, this one is actually pretty tricky. I have seen quite a lot of home-made simulators using Euler integration for this, which is pretty lame and very inaccurate. Solving the differential equations with reasonable accuracy requires at least a second-order integration scheme, and I doubt you'd come up with a stable second-order method using only what you learned in high school (OK, maybe the trapezoidal rule, but it has some other undesirable properties). A simple problem can become extremely complex when you dive deeper into it.
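To make the point concrete, here is a stripped-down two-body step with made-up units (G*M = 1). Swapping the order of the two updates (velocity first, then position with the new velocity) turns plain Euler into semi-implicit ("symplectic") Euler, which is still only first order but holds an orbit together far better than plain Euler, whose accumulated error makes the orbit drift.

#include <cmath>
#include <iostream>

// Toy 2-body integrator: a planet around a fixed sun at the origin,
// with units chosen so that G*M = 1.
struct Body { double x, y, vx, vy; };

void accel(const Body& b, double& ax, double& ay)
{
    double r2 = b.x * b.x + b.y * b.y;
    double r  = std::sqrt(r2);
    ax = -b.x / (r2 * r);          // a = -GM * r_vec / |r|^3, with GM = 1
    ay = -b.y / (r2 * r);
}

// Semi-implicit ("symplectic") Euler: update velocity first, then
// position with the *new* velocity. Plain Euler would use the old one,
// and the orbit slowly drifts as the errors accumulate.
void step(Body& b, double dt)
{
    double ax, ay;
    accel(b, ax, ay);
    b.vx += ax * dt;
    b.vy += ay * dt;
    b.x  += b.vx * dt;
    b.y  += b.vy * dt;
}

int main()
{
    Body planet = { 1.0, 0.0, 0.0, 1.0 };    // circular orbit of radius 1
    double dt = 0.001;
    for (int i = 0; i < 100000; ++i)         // roughly 16 orbits
        step(planet, dt);
    // The radius should still be close to 1 if the integrator behaves.
    std::cout << std::sqrt(planet.x * planet.x + planet.y * planet.y) << '\n';
}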
Last edited on
I wonder why one wouldn't use Simpson's rule to approximate the integration when doing it on a computer (not by hand). It's computationally more expensive than the trapezoidal rule, true, but generally more accurate, and the accuracy is worth it.

Anyways. I disagree that it's hard to get a planet to rotate around a sun. Just sayin'.

-Albatross
Last edited on
closed account (EzwRko23)
It is hard to make it rotate according to Kepler's laws. If you use the Euler method, the elliptical orbit itself will slowly rotate (precess).



I wonder why one wouldn't use Simpson's rule to approximate the integration when doing it on a computer (not by hand)


Because it is unstable.
Implicit Euler integration is inexact, but it is at least 0-stable and A-stable.
Admittedly, it is unstable, true. That said, one can reduce the negative effects of this instability to a degree. Do you know what I'm suggesting?

EDIT: Of course, one could always just solve the integral in question exactly.

-Albatross
Last edited on
In general, the issue is that you are discretizing everything: the positions, their instantaneous velocities, and time itself, via a non-infinitesimal step size delta-t. Going from smooth differential equations to difference equations, so that a digital computer can step forward one delta-t at a time, also incurs errors, and those errors can blow up over time.

In addition, sometimes, there are stability issues in the equations themselves. In particular, the gravitational simulator also suffers from the n-body problem:

http://en.wikipedia.org/wiki/N-body_problem

Note: try clicking on the 3-body gif in the wiki article - the animation is interesting.

So even though we can all observe a reasonably stable N-body Solar System, it's non-trivial just to simulate a stable moon-planet-star system.

Or perhaps, the Solar System is not as stable as we thought, after all:

http://www.fortunecity.com/emachines/e11/86/solarsys.html
Last edited on