The program I am writing crashes because its memory usage exceeds the RAM limit on Windows. Would parallel programming fix this issue? Beyond that, I am not sure how to fix the problem. The code does need to use a lot of memory because it is a Monte Carlo simulation for some complicated theoretical physics. Any suggestions?
I think you mean distributed computing, not parallel programming. In that case, yes, breaking the data into smaller chunks may help; it depends on whether your data can actually be split up. A sketch of the chunk-and-combine idea is below.
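Here is a minimal Python sketch of that idea, assuming your data lives in a big binary file (the file name `samples.dat` and the `process_chunk` summary are hypothetical stand-ins for your actual data and physics): read a fixed-size chunk at a time, compute a small partial result per chunk, and combine the partials at the end. Each chunk could just as well be handed off to a different machine.

```python
import numpy as np

CHUNK = 1_000_000  # number of float64 values held in memory at once

def process_chunk(values: np.ndarray) -> tuple[float, int]:
    # Stand-in for your per-chunk physics; return only a small summary.
    return float(values.sum()), int(values.size)

partials = []
with open("samples.dat", "rb") as f:          # hypothetical data file
    while True:
        block = np.fromfile(f, dtype=np.float64, count=CHUNK)
        if block.size == 0:                   # end of file
            break
        partials.append(process_chunk(block))

# Combine the small partial results; the full data set never
# sits in memory at once.
total = sum(s for s, _ in partials)
count = sum(n for _, n in partials)
print("mean:", total / count)
```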
Parallel processing won't necessarily fix your problem. It allows multiple processors or cores to run different parts of your program at the same time, but that typically doesn't increase the amount of memory available to the program.
You probably need to look at ways to break up the data so that only a portion of it is in memory at a time; see the sketch below.
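For a Monte Carlo run specifically, you often don't need to keep the samples at all, only running statistics. Here is a minimal sketch using Welford's online algorithm, assuming your simulation only needs aggregate quantities like the mean and variance; `sample_energy` is a hypothetical placeholder for whatever your code computes per sample:

```python
import random

def sample_energy() -> float:
    # Hypothetical stand-in for the real per-sample physics.
    return random.gauss(0.0, 1.0)

def run_streaming(n_samples: int) -> tuple[float, float]:
    """Welford's online algorithm: update the mean and variance one
    sample at a time, so memory use is constant regardless of n_samples."""
    count = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the mean
    for _ in range(n_samples):
        x = sample_energy()
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)
    variance = m2 / (count - 1) if count > 1 else float("nan")
    return mean, variance

mean, var = run_streaming(1_000_000)
print(f"mean={mean:.4f}  variance={var:.4f}")
```

Because nothing is stored per sample, you can draw billions of samples in the same few bytes of state.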
First, make sure you aren't leaking memory. Then make sure you aren't using it inefficiently. In other words, are you sure your program really NEEDS gigabytes of memory? One way to check is sketched below.
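If your code happens to be in Python, the standard-library `tracemalloc` module can show where the memory is actually going; a quick sketch (the list comprehension is just a stand-in workload):

```python
import tracemalloc

tracemalloc.start()

# ... run a representative slice of your simulation here ...
workload = [float(i) for i in range(1_000_000)]  # stand-in workload

current, peak = tracemalloc.get_traced_memory()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")

# List the top allocation sites to spot leaks or wasteful structures.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:5]:
    print(stat)
```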