My console application solves an optimization problem. During the optimization, however, CPU load is roughly 10% and RAM usage is about 8 GB (out of 36 GB). Is there a way for a console application to use the computer's resources more efficiently?
Do you mean that your process alone is taking 8GB of RAM? That's a whole lot of memory. Can you post your code? Or maybe describe the problem and the way you solved it?
10% of CPU cycles is pretty good generally, but of course that number is meaningless without knowing what you mean by "optimization problem". 8 GB is a lot of RAM, but since you have 36 GB you have plenty of headroom, so... Meh. I would worry more about lowering execution time than about lowering CPU load. You never know what might be running in the background and stealing some of your cycles, and again, 10% is already quite low.
I use the C++ interface to CPLEX to do the optimization. Since CPLEX is a well-developed optimization package, I assumed the computer does not use its full potential because of the C++ interface. The process uses less than 8 GB, but the point I was trying to make is that it does not touch the remaining 28 GB. I am not sure what the overhead is in this case or whether it plays a role here.
I'm not sure you understand the meaning of "optimization" as it relates to anything here.
CPLEX will use the amount of CPU time it needs to solve the problems given to it. (This is true of most processes.)
The choice of programming language and the presence of the console are completely, utterly irrelevant -- they have no effect on the amount of CPU used.
The point of CPLEX is to solve such problems in as efficient a way as possible. So instead of using an algorithm that takes, say, O(n^4), it optimizes the solution to use code that takes only, say, O(n^2.8). That is a considerable savings.
Here's an actual example. Suppose you have 50 matrices of varying sizes that you must multiply together. A1*A2*...*A49*A50.
There really is no way to optimize multiplying two matrices -- it's an O(n^3) problem. But software like CPLEX recognizes that multiplying a pair of matrices can produce a smaller matrix.
So the question becomes which pairs of matrices to multiply first, so as to shrink the intermediate matrices as quickly as possible.
Optimizing the order of multiplication reduces the total amount of work needed to multiply all 50 matrices together, which yields a significant savings.
(I don't know whether CPLEX can actually handle matrices or not.)
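To make the ordering point concrete, here is a minimal C++ sketch (the dimensions are made up for illustration, and nothing here calls CPLEX) that counts the scalar multiplications needed by two different parenthesizations of the same three-matrix chain:

```cpp
#include <cstdio>

// Cost (scalar multiplications) of multiplying an (a x b) matrix by a (b x c) matrix.
long long cost(long long a, long long b, long long c) { return a * b * c; }

int main() {
    // Hypothetical chain of three matrices: A1 is 10x100, A2 is 100x5, A3 is 5x50.
    // (A1*A2)*A3: the first product is 10x5, so the total work is
    long long left  = cost(10, 100, 5) + cost(10, 5, 50);   // 5000 + 2500  = 7500
    // A1*(A2*A3): the first product is 100x50, so the total work is
    long long right = cost(100, 5, 50) + cost(10, 100, 50); // 25000 + 50000 = 75000
    std::printf("(A1*A2)*A3 : %lld multiplications\n", left);
    std::printf("A1*(A2*A3) : %lld multiplications\n", right);
    return 0;
}
```

Same product either way, but roughly a tenth of the work, purely from choosing a better multiplication order.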
Simply put, CPLEX adds cutting planes to shrink the LP feasible region, and then uses branching operations to find feasible solutions. Cutting planes may or may not involve manipulations with matrices; basically, CPLEX finds valid inequalities and adds them to the formulation. During the cutting procedure the CPU usage is low, as explained above. During the branching phase, however, the CPU usage is high. Loosely speaking, branching is more like an explicit enumeration. To summarize, my concern is that during the cutting phase CPLEX does not appear to "use the CPU's full potential." P.S. Memory usage is low in either phase.
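For intuition only (this is a toy, not how CPLEX actually branches): branching explores a tree of variable fixings, so the work grows with the number of nodes explored, which is why that phase keeps the CPU busy. A minimal C++ sketch of explicit enumeration over binary choices, using a made-up knapsack instance:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Toy illustration of explicit enumeration: each level of the recursion
// fixes one 0/1 decision (skip or take an item), producing a tree of nodes.
struct Item { int value, weight; };

static long long nodes = 0;

int best(const std::vector<Item>& items, std::size_t i, int capacity) {
    ++nodes;                                   // count every node we explore
    if (i == items.size() || capacity <= 0) return 0;
    // Branch 1: fix the decision variable for item i to 0 (skip it).
    int skip = best(items, i + 1, capacity);
    // Branch 2: fix it to 1 (take it), if the item still fits.
    int take = 0;
    if (items[i].weight <= capacity)
        take = items[i].value + best(items, i + 1, capacity - items[i].weight);
    return skip > take ? skip : take;
}

int main() {
    std::vector<Item> items = {{60, 10}, {100, 20}, {120, 30}, {80, 15}, {40, 5}};
    int value = best(items, 0, 50);
    std::printf("best value = %d after exploring %lld nodes\n", value, nodes);
    return 0;
}
```

The enumeration part is what can keep many cores busy, which matches your observation that CPU usage is high during branching but not during cutting.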
What I'm saying is that the purpose of the program is to reduce the amount of work the CPU must do. There is no reason the algorithm should be expected to tie up the CPU's resources at any point -- the whole point is that it is optimized such that it doesn't need to (and shouldn't).