Memory leaks and harmful programs

I've never gotten a satisfactory answer explaining how memory can leak from a program and cause the computer to finally give up and crash itself to hell.

How exactly does a program simply waltz around the computer claiming limitless blocks of memory? And how do you reclaim that memory after the computer has suffered the breakdown?

Since Java does its own garbage collection, they don't bother to explain any of this all that well, and I'm not 100% clear on garbage collection either.
You know what? I've never really understood this either.

I'll try to sum up my "suspicions" on the subject (basically what I've always assumed, so naturally it's going to be mostly wrong).
I guess memory can leak in that you allocate it on the heap and never deallocate it, so it just kinda stays there, marked as in use even though nothing can reach it anymore.

A program can't claim "limitless" blocks of memory... it would surely segfault long before then?
But after the computer has crashed, the memory (being volatile) would be "reset" (all of the capacitors would be uncharged and the transistors set to the 0 position). So once the computer booted again, it wouldn't matter. Well, it would, in that you'd be angry at the developer (or indeed yourself)... but it wouldn't have any other lasting effects?

Anyone else got a slow connection to this site today? I think the server might be under a lot of stress, because every other server I've connected to has responded like normal. Then again, "every other server" is Google; and I can't see them using anything but the most expensive servers they can get.
I've never gotten a satisfactory answer explaining how memory can leak from a program and cause the computer to finally give up and crash itself to hell.

Well, here's an answer from me: a memory leak in any program should not cause the system to crash. If it does, it's a bug in the operating system. Simple and clear :)

closed account (z05DSL3A)
Hmmm...Your poor code eats system resources to the point that the OS can't operate and it's a bug in the OS?
That is one of the tasks of the operating system: not to allow my bad code to crash it.

In other words, if my bad code requests more system resources than my OS can give away without losing its functionality (in particular the ability to kill my bad program), then the OS is bad too.
closed account (z05DSL3A)
But what about your software eating nearly everything, to the point that the OS itself needs more memory than is available?
I guess that is why the OS itself (I am speaking of my Windows and Linux systems here) keeps some operational memory reserved at hand. In other words, your scenario shouldn't happen.

I am of course talking about how it should be, if we were to assume we froze the Windows/Linux/whichever-system-you-like development process and cleaned out all the bugs.

Most OS bugs, I believe, appear not from working programmers doing stupid things, but from the requirement that they quickly refactor their code to accommodate new functionality that wasn't originally planned for.
closed account (z05DSL3A)
I have seen enough Windows servers grind to a halt due to lack of resources to know that 'my scenario' does happen. The degree of fault tolerance is also by design. If an OS fails because of a fault that it is not designed to be tolerant of, you can't say it is a bug. I would not expect a commodity operating system to be very fault tolerant, so I am not surprised when they fall over when software misbehaves.

Of course it would be nice to have a completely fault tolerant desktop operating system for a reasonable amount of money but it will never happen.
In my opinion, an OS should be designed to handle all possible non-hardware faults. Any OS that is not originally *planned* to handle all software situations is inherently bad.


For example, I never check the success of memory allocation. The reason is, I do not want my program to continue execution if it doesn't have enough memory. So, instead of cluttering my code with useless memory checks, I prefer to let the system throw a bad_alloc for me, which both my Windows and Linux systems do faultlessly.

Of course it would be nice to have a completely fault tolerant desktop operating system for a reasonable amount of money but it will never happen.

It would be much better to have a completely fault tolerant desktop operating system for free ;) .
Your poor code eats system resources to the point that the OS can't operate and it's a bug in the OS?
If an OS fails because of a fault that it is not designed to be tolerant of, you can't say it is a bug.
Technically, no, it's not a bug. The system is running as it was meant to run. It is, however, both a design flaw and a security hole. A process running in user space shouldn't be capable of bringing down the system just like that. Sure, it's impossible to make the system unbreakable, but it should at least be sturdy enough not to die from a process allocating too much memory.

I've heard of someone who got his MBR borked while running an old FMV game (Johnny Mnemonic) on DOSBox. It's a rather extreme case, but this is exactly the kind of thing an OS shouldn't allow its processes to do.
... That installs itself within a matter of nanoseconds choosing the options it already knows you want (A)
So in most cases I won't have to worry about overloading the system with a program because it should stop feeding the program resources before the system kills itself?
Seems feasible.

Technically, no, it's not a bug

So it's a 'problem' that can't actually be fixed?
closed account (S6k9GNh0)
I would say the real answer to this is to make sure your software runs correctly. Although, for security purposes, I believe the OS should handle this before it becomes too serious.

In all seriousness, try it out for yourself on different operating systems and see what happens. I'm sure there is also an answer in the Linux documentation (not so sure about Microsoft, but I'm sure there is...).
All major operating systems running nowadays are *planned* and *intended* to never crash on absolutely any software failure. Since there are only finitely many states a digital processor can be in (assuming it has no hardware problems), it is actually possible to even "prove" that a system functions faultlessly.

I put "prove" in quotations since, to make the proof, you will need to write a program to do that for you, which in turn might be buggy. However, there exist, for example, elementary C compilers whose correctness has been proven in a machine-checked fashion.

Proving in a similar way that a modern operating system is perfectly safe, however, is computationally infeasible. Instead we rely on human attention and care. Human care can, in fact, make a system perfect, if that system were to stop evolving and only got its bugs fixed.
Topic archived. No new replies allowed.