Is intensive single-core usage bad for the CPU?

Hi everyone,
I'm making my own raycasting game and, since I'm a beginner and didn't plan ahead, I wrote it in a straightforward, direct way, with no multithreading or any kind of CPU usage optimization; so it's a single-threaded application, right?

The application happens to always use the same core of my 6-core CPU; you can see the CPU usage (physical and logical) in the image below. Also, core 2 is always at 44°C while the others are at around 33°C.

I was wondering, is this intensive usage of a single core harmful to the CPU?


PS: sorry in advance for my poor English
'it depends'.
if you have messed with the computer's hardware (overclocking is the primary example), disabled the fans, or have insufficient cooling to begin with, you can certainly overheat and destroy hardware.

a properly cooled machine, throttled to its factory-default tolerances, should not burn up even if run at full capacity for days, weeks, etc.

while each core has its own thermal sensor, they are cooled as a unit if they are all on the same piece of hardware. the block of hardware is able to run for hours with all of your cores maxed out without overheating (again, if properly cooled and not overclocked and so on).

bottom line is that while you can break hardware, it is almost always due to a combination of tampering with it and failing to cool it (dead fans count; check them from time to time).

if you have 6 cores and are burning one of them for more than a few seconds, you should perhaps consider threading the workload so all the cores help out and your code runs faster...
44°C is well within the safe range.
I play heavy games, like Doom Eternal or Metro Exodus, and my PC handles every condition well. I'm not so much concerned about overheating as about long-term intensive usage of the same core; I've never seen anything like that (keep in mind the game currently has the framerate unlocked, BUT I plan to cap it): core 2 is at 100%.

I would like to multithread my code, but let's say that my variables are ALL GLOBAL VARIABLES.. yes.. I know.. ^_^'

I've made a thread out of one of the main functions, the most demanding one; the framerate dropped from 120-150 to 40, BUT I think I'm using all the cores:

Why are the curves so spiky?(!)

..I think there are 2 solutions: cap the framerate at 30 and use a low resolution so the core isn't overused; or multithread the code, which will be a real pain in the a55, but I have to say that this type of game engine has a logic that can be multithreaded very well.. still listening to your suggestions!!
To answer the title question, kind of. A modern computer is better equipped to handle it, but in general, if you are talking about code you control, be kind to the hardware when you can (you know some fool is going to try to run it on outdated hardware and then blame your code for the burnout). It's a good idea to cap or vsync your frame rate; having 300 FPS is not going to make you a better player if you're using a 60 Hz screen.

If you don't have the option to be nice, then it mainly depends on the OS and how its developers decided to deal with the problem. On Ubuntu Linux a single-threaded infinite loop will run at maximum speed until the OS decides that one CPU has been used for too long, and then it will switch the thread to a different CPU. Pretty sure this swap is heat-triggered, but it could be time-related.

I've been running an infinite increment variable loop for a couple of minutes while typing, and it has touched 5 out of 8 of my laptop's CPUs so far. I'm going to shut it down now because my fan is turning on even though the room is cold. Yeah, still not a good idea, but it does share out the work and the heat load between cores.
Interestingly, a 4-thread program running the same workload at a comparable total speed will actually produce more heat, because more calculation is required to keep all 4 cores in line; but that heat will be distributed more evenly across your heat sink, so it will be cooled more efficiently... so a 4-thread program is still going to be healthier in the long run.

A graphics card is the recommended way to take on a large amount of this kind of work; they tend to have hundreds or even thousands of computing cores to share the load across, and hopefully a much sturdier heat-distribution system. Let's not talk further about this until prices come down again...

Anyhow, the spikes in the charts and frame rate are likely from the task being swapped between physical cores.

It's a good idea to know your CPU's maximum temperatures. Do some proper maintenance to clean the dust out of your case and off your hardware, and don't keep your computer in a tight enclosure.
It will be faster to run a single-threaded app on the same core: you lose the benefit of the warmed-up caches when the scheduler moves things around.

Making an app run well on multiple cores can be somewhat involved on NUMA architectures; just splitting tasks across threads may be fine on SMP architectures, but a NUMA machine may have multiple memory banks. Having said that, most desktop/laptop computers use a single memory bank.
If you want control over which core this CPU-intensive thread runs on, you can set an affinity mask to specify the logical processor(s) that the thread is allowed to run on.
To newbeig: I agree; in fact I'm always thinking about worst-end PCs. Before, I used to code my game on a 2nd-gen i3, undervolted, in order to be sure of the performance :)
Yes, the framerate argument is quite tricky ehehe, but anyway for a Wolf3D-like game 60 FPS is wonderful, while 30 may just not be smooth enough, barely enough (personally I HATE 30 fps; fan of solid 60 fps, no more).

My game uses ONLY the CPU for everything; the GPU in my case just holds the textures.

The cooling system of my PC is good enough. I'm not mainly concerned about heat (or local heat, which is worse from a thermomechanical point of view) because 5 cores are cool, so the CPU heatsink is way, way more than enough.

To kbw: never heard of NUMA.. I suppose splitting the main function into more threads will speed things up and distribute the load; adding the time needed for swapping cores, yes, but I think it will still be almost twice as fast.. and more efficient in CPU usage.

To seeplus: never heard of it, so are you saying that I can choose which (logical) core my application uses?
never heard of NUMA.

i suppose splitting the main function into more threads will speed things up and distribute the load ...
Well, it depends on what those threads are doing and how they do it. There's no magic bullet here. It could even end up slower.

100% CPU usage is not a source of systematic failure: the CPU is designed to execute any workload without failing. Just fully utilizing the CPU won't bring a system (that is working correctly) beyond its operational limits.

When we talk about random failure the story is a little more interesting.

Most systems are more likely to randomly fail at full load, because there is more stress on loaded components. Even so, a CPU (or the system as a whole) has only a "reasonable" probability of random failure as long as it's operated according to its own spec.

The "reasonable" probability of random failure incurs a risk, but for most uses the risk is acceptable and the user generally shouldn't worry about it. In cases where the risk is unacceptable, it might be possible to mitigate it. For example, if the vast majority of system failures happen as a consequence of failing CPU heat-sink fans, it might be possible to mitigate the risk of a fan malfunction by keeping CPU temperatures down.

By analogy, a properly maintained vehicle in good working order doesn't require frequent stops on the highway. This is true even though you're more likely to have failures if you run your car for a long time and give any latent issues a chance to cause problems.

Some idle time might help cars or CPUs with dysfunctional cooling, but if we assume everything is operating to spec, there's no reason to take breaks.
To kbw: mm, I think NUMA is too advanced a concept for my amateur programming.
Yeah, you are right, no magic bullet :/ btw, in a raycasting scenario multithreading works very well because you render one vertical line at a time, and each one is rendered and calculated perfectly independently from the others.

I'm wondering if there's an easy way to "split" a function (render vertical lines from 0 to screenwidth/2 and from screenwidth/2 to screenwidth) even if the function uses only global variables. I don't think so; that's how global variables work: if there is one "global_variable" you can simply create another identical one, and then there will be two of the same "global_variable".
(Hope I'm explaining myself well enough ahaha!)

To mbozzi: mm, I already know that; what I mean by "harmful" is: suppose I run a simple piece of code, something like

int x;

while (true)
       for (int i = 0; i < 1000000; i++)
              x = i;

That code, which loops forever, will perform the same identical instructions over and over, and that means the "transistors and wires" used are always the same.. it's like always walking on the same part of a carpet: it will be under more stress and will deteriorate sooner than with normally distributed use, right?

I know I'm practically talking about CPU architectures and their logic for executing code, about which I know nothing at all, but that's what I was worried about the most.
The example case still fascinates me, but I've noticed that a lot of my code uses different input numbers, so it's not always using the same "transistors and wires".

Do you know any section of this forum, or other forums, where it's possible to talk about these "architecture and code" topics? :)
May not be relevant, but do you know about the parallel STL algorithms introduced with C++17, where you can specify a threading execution policy?
That code, which loops forever, will perform the same identical instructions over and over, and that means the "transistors and wires" used are always the same.. it's like always walking on the same part of a carpet: it will be under more stress and will deteriorate sooner than with normally distributed use, right?

This is "allowed by the spec", and "common" because tight inner loops occur in lots of interesting software.

Electronic parts do wear, and this wear is accelerated by stress factors such as heat, current, vibration, voltage, electromagnetic field strength and so on.

Overall the reliability of dense integrated circuits is decreasing. Since the failure of Dennard scaling, power densities in ICs have started to increase, and higher power density and smaller feature size both reduce reliability. Some related effects are electromigration and hot-carrier injection.
My understanding is that these issues become more significant with decreasing size and increasing power densities.

So yes, the CPU does "wear out" in the course of normal operation, but the code you posted (a hot inner loop) is both within spec and practically common, so CPU designers have accounted for it when designing their product. IMO you shouldn't worry about seriously impacting the device's lifetime by using it normally.