Why??

So, almost every tutorial I have read has told me what to do for program X with code Y, and how it does it. For example, take pointers. I know when to use them and how they access a byte (I think that's the unit of data) in an array. What I don't know is why. Why does it have to be compiled to machine language? Why can a computer only understand 1s and 0s? Why are you able to use that for making languages?
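For instance, I can follow what a little example like this does (made up, just to show what I mean):

    #include <iostream>

    int main() {
        int arr[3] = {10, 20, 30};
        int* p = arr;                   // p points at the first element
        std::cout << *(p + 1) << '\n';  // prints 20: p + 1 moves one whole int forward

        // The same array viewed as raw bytes:
        unsigned char* bytes = reinterpret_cast<unsigned char*>(arr);
        std::cout << int(bytes[0]) << '\n'; // 10: the first byte of the first int
                                            // (on a little-endian machine)
        return 0;
    }

But I don't understand why any of it works underneath.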
The nature of 1s and 0s rests in the nature of electricity itself and the semiconductor materials used to produce the chips. I am by no means an expert, so wait for a decent answer. What I CAN tell you is that your question will require a lot of reading and investigation.

A quick and dirty summary: transistors (I think that's the proper name) are built from a semiconductor material that can "remember" its state and that only allows current in one direction. This is the basis of all modern chips. With these, engineers are able to compute pretty much anything.
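To give a feel for why that's enough, here's a rough software analogy (hypothetical C++, not real hardware): one gate type, NAND, which transistors can implement, suffices to build all the other gates, and from those you can build arithmetic.

    #include <iostream>

    // A NAND gate: the single building block (it's functionally complete).
    bool NAND(bool a, bool b) { return !(a && b); }

    // Every other gate can be wired up from NANDs alone:
    bool NOT(bool a)         { return NAND(a, a); }
    bool AND(bool a, bool b) { return NOT(NAND(a, b)); }
    bool OR(bool a, bool b)  { return NAND(NOT(a), NOT(b)); }

    int main() {
        // A half adder: adds two bits into a sum bit and a carry bit.
        bool a = true, b = true;
        bool sum   = OR(AND(a, NOT(b)), AND(NOT(a), b)); // XOR built from the above
        bool carry = AND(a, b);
        std::cout << "1 + 1 = carry " << carry << ", sum " << sum << '\n';
        return 0;
    }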

Everyone then started programming in this language, but it was too tedious, so someone had the patience to create a 1-to-1 relationship between machine language and a new language that was more human-friendly: assembly language. One instruction in assembly corresponds to one instruction to the processor. But this still wasn't good enough, so people started developing better (higher-level) languages like Pascal and C, and later C++. Their original compilers were created in assembly, but I bet the first C++ compiler was created using a C compiler. :D At least that's what I would have done.
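To make the layering concrete, here's a sketch (hypothetical; the exact instructions vary by compiler and CPU, this shows typical x86-64 output):

    // One line of C++...
    int add(int a, int b) {
        return a + b;
    }

    // ...which a typical x86-64 compiler (e.g. "g++ -S add.cpp") turns into
    // two assembly instructions, each mapping 1-to-1 onto machine-code bytes:
    //
    //   add(int, int):
    //       lea eax, [rdi + rsi]   ; eax = a + b        -> bytes 8D 04 37
    //       ret                    ; return to caller   -> byte  C3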

And that's it in a nutshell; it may have several errors. Hopefully someone will come along and correct me.
Hmm... good question. If you get the chance to open up a desktop computer, you should see a (usually green) motherboard soldered with lots of chips, small transistors, and so on. In layman's terms, each of them represents a bit: 0 for off, 1 for on. So if you have 8 of them side by side, you have a series of 1s and 0s that represents a byte. Now extrapolate and imagine thousands of them packed tightly together, and they do what a computer does.
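In C++ terms, you can picture those eight switches like this (just an illustration):

    #include <bitset>
    #include <iostream>

    int main() {
        // Eight of those on/off switches side by side: 0 = off, 1 = on.
        std::bitset<8> byte("01000001");

        std::cout << byte.to_ulong() << '\n';                     // 65 as a number
        std::cout << static_cast<char>(byte.to_ulong()) << '\n';  // 'A' as a character
        return 0;
    }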

A long, long time ago, the first programs (correct me if I'm wrong) were punch cards. Those punch cards are similar to the 1s and 0s, and all of them are instructions for the computer to process. Later you had assembly, where you actually use a text editor to write lines of assembly code, then compile, link, and run, which did away with punch cards.

After assembly, you have C (or an equivalent), which is more human-readable, with variable names, curly braces, and so on. Then soon even that seemed too primitive, and you got higher-level languages, and so on and so forth.

So no matter how many layers you pile on top, at the base level the computer can only understand 1s and 0s, which is your machine language. But we have evolved so much that newer programmers are shielded from all these underlying details.

My explanation is flawed, but I guess it forms some basis or reference for you to look things up further on Wikipedia, or to consult retired programmers; in their time, life was pretty harsh for them.
thank you
The answers for 1 and 0 were all wrong, by the way: we have 1s and 0s because they are simple to handle. There is no such thing as "power on, power off" in real life; there is always some kind of electric flow. For circuits, it's just much easier to tell the difference between high and low voltage (I think it's usually something between 0V and 1.5V for 0, something between 3.5V and 5V for 1, and everything else undefined; I am by no means an expert in electronics though, so don't take my word for it).
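Written out as a sketch (C++, with the same guessed-at thresholds, so don't take the numbers as gospel):

    #include <iostream>
    #include <optional>

    // Classify a measured voltage as a bit, using the rough ranges above.
    std::optional<int> toBit(double volts) {
        if (volts >= 0.0 && volts <= 1.5) return 0;  // low  -> 0
        if (volts >= 3.5 && volts <= 5.0) return 1;  // high -> 1
        return std::nullopt;                         // anything else -> undefined
    }

    int main() {
        for (double v : {0.3, 4.7, 2.5}) {
            if (auto bit = toBit(v)) std::cout << v << "V -> " << *bit << '\n';
            else                     std::cout << v << "V -> undefined\n";
        }
        return 0;
    }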

You could also define states from 0 to 9, or states for -1, 0, and 1 (I believe both were done, or at least theorized, in the past), but 0-1 is technically easier to realize and isn't particularly hard to work with on a logical level either.

And stuff has to be representable in machine language because your computer is a machine that only knows how to perform a very limited set of operations. Everything else we do with computers is just a desperate attempt to ignore that fact. (And it works most of the time, too!)

A long, long time ago, the first programs (correct me if I'm wrong) were punch cards

That has nothing to do with it. The first programs were made by hard-wiring them. Punched cards predate computers, though they were later used for data storage (including programs).
I've never heard of a decimal-based computer, but ternary (-1, 0, 1) has definitely been attempted. Rather than cutting off "ranges of voltages", it is done by giving -1 and 1 opposite polarities.

The problem with anything above ternary is precision. Two opposite cases are generally easy to distinguish and, in this case, so are three, because they are all clearly mutually exclusive:
0 (very low) <-> 1 (not very low)
0 (very low) <-> -1 (not very low)
1 ('positive') <-> -1 ('negative')

To go beyond that, you'd need an additional dimension that can easily split signals into two exclusive halves.

Attempting to use a range-based system is basically reverting to analogue systems, which suffer from noise (signal degradation leads to low precision, which leads to 'interpretation' faults).
The book you're looking for is "Code" by Charles Petzold. It builds a simple computer from logic gates and demonstrates exactly how these high and low voltages (ones and zeros) are interpreted as instructions.
closed account (1vRz3TCk)
That has nothing to do with it. The first programs were made by hard-wiring them.

I would disagree; a program is a list of instructions that is used to control the behavior of a machine.

The Jacquard loom (1801) used punched cards to control the loom arms to produce decorative patterns automatically. This is still a program, only it is based on a mechanical system.
Somewhere in all of this you have the correct answer. Transistors form the logic gates that make the decisions that are then interpreted as signals. Most can only hold their state (on or off) as long as power is applied to them. The components inside your PC are not discrete transistors, though, so don't try to count them or anything like that; they are collected into "black box" packages called integrated circuits (ICs). It's because of this that you eventually need to get down to 1s and 0s.

Using machine language as an intermediary step is done because it is the only effective way to communicate with your processor/CPU. Over the past 30 years or so that commercially available computers have been around, there have been far too many variations, both subtle and substantial, in the architectures and instruction sets of processors to keep up with. So rather than have a different compiler for each and every CPU variation, it was decided to let the processor translate a standard set of instructions on its own into a useful collection of 1s and 0s; you can think of this design concept as the great-grandfather of the modern DDK.

The term that hanst99 is looking for when describing the voltage states of the semiconductor is the trigger voltage, and it varies widely depending on the material the semiconductor is made from. The example he gives is common for silicon; germanium is another common element used, and if I remember correctly its trigger voltage is about half of that. The only correction I want to make to his earlier post is where he says there is always some electron flow: this is true, but the way he words it makes it seem intentional, when it is in fact considered waste.

Ternary devices do exist and are used, just not in modern computer systems yet. They are mostly LEDs that light one color if voltage is applied in one direction and a different color if it is applied in the other; the third state is off.

EDIT: This is truly an interesting hobby to read up on. Don't expect it to have much in the way of real-world applications when dealing with PCs, but it's not a completely useless knowledge set either.
Ternary computers have gone beyond the theoretical, haven't they?

I'm curious what the advantages of such a computer would be. If everything were in ternary rather than binary, would that lead to increased memory capacity (a 64-bit int only requires a '40-trit' int for roughly the same capacity)? Increased signal transmission speed (fewer trits than bits for the same signal)? Or am I oversimplifying things?
closed account (1vRz3TCk)
Ternary computers have gone beyond the theoretical, haven't they?

Setun was a balanced ternary computer developed in 1958 at Moscow State University.

I'm curious what the advantages of such a computer would be. If everything were in ternary rather than binary, would that lead to increased memory capacity (a 64-bit int only requires a '40-trit' int for roughly the same capacity)? Increased signal transmission speed (fewer trits than bits for the same signal)? Or am I oversimplifying things?

I think it has more to do with ternary logic. A single 'digit' can have three states: true, false, and unknown. I think Donald Knuth talks about its elegance and efficiency, but I haven't looked at it in any depth (as of yet).
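On the capacity question: each trit carries log2(3) ≈ 1.585 bits, so a '40-trit' int falls just short of a 64-bit one; fully covering the range takes 41 trits. A quick check (illustrative snippet):

    #include <cmath>
    #include <iostream>

    int main() {
        // Each trit carries log2(3) bits of information.
        double bitsPerTrit = std::log2(3.0); // ~1.585

        // Trits needed to cover the full range of a 64-bit integer:
        std::cout << std::ceil(64.0 / bitsPerTrit) << " trits\n"; // 41

        // 3^40 is about 1.2e19, just short of 2^64's ~1.8e19; hence 41, not 40.
        return 0;
    }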
I wrote "modern" when I should have written "commercially available". The concept is a functional one, but I don't know of anything you can buy that uses it. The only advantage I can think of is space saving. In fact, I can imagine there being a disadvantage in the propagation delay of having to go from '1' to '-1': first you would have to ground the device to get it to '0', then you would need to apply a negative voltage to decrement it to '-1'. I should say that although I would identify myself as an "expert" in this particular field, I tend to stick with the market, and I don't know a whole lot about ternary systems.
@Computerfeek01: Yeah, I figured it wouldn't be much help with programming, but since I want this to be my career, I figured I should learn some background on why the computer works the way it does.

Thank you everyone, though; this has been extremely helpful, and I will take a look at that book, "Code".
Why digital?

Analogue computers were made; these are quite different. You join up various math functions represented as circuits, adjust some input levels, and read the results from a dial or screen. Theoretically, these machines could have infinite accuracy, unlike digital computers, where the accuracy is always limited by having only a certain number of digits.
http://en.wikipedia.org/wiki/Analog_computer

It turned out that these computers were not so accurate, because they are sensitive to the environment. If things heat up or cool down, the readings change. Digital computers are robust: if the voltage levels change a bit, the signal is unchanged.

Why binary?

Other systems have been used, e.g. ENIAC used base 10
http://en.wikipedia.org/wiki/ENIAC
The trouble with this is that it is less efficient. If you have 10 valves in a "ring counter" representing a single decimal digit, then only one valve is on at a time, and the circuit can store a number from 0 to 9.
Put those same 10 valves in a binary circuit and they can store anything from 0 to 1023. Many more numbers.
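A quick sanity check of that claim (illustrative C++):

    #include <iostream>

    int main() {
        // Ten valves used as a one-hot ring counter: one digit, values 0..9.
        int ringCounterValues = 10;

        // The same ten valves used as independent binary digits: 2^10 values.
        int binaryValues = 1 << 10; // 1024, i.e. 0..1023

        std::cout << ringCounterValues << " vs " << binaryValues << '\n';
        return 0;
    }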

Ternary arithmetic could be more efficient, provided a ternary unit can be made as small and as fast as a binary unit.
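There's a classic back-of-the-envelope argument along these lines, sometimes called radix economy: the cost of storing a number N in base b is roughly proportional to b * log_b(N), i.e. to b / ln(b), which is minimized at b = e ≈ 2.718, putting base 3 slightly ahead of base 2 on paper. A rough illustration (hypothetical snippet):

    #include <cmath>
    #include <iostream>

    int main() {
        // Relative cost of a base: (digits needed) * (states per digit),
        // which is proportional to b / ln(b). Lower is better.
        for (int b : {2, 3, 4, 10}) {
            std::cout << "base " << b << ": " << b / std::log(b) << '\n';
        }
        // base 2: ~2.885, base 3: ~2.731, base 4: ~2.885, base 10: ~4.343
        return 0;
    }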