Assembly Language

I recently got the option to take an online class for school, and I have the choice between assembly language and Visual Basic. So my question is: is assembly language used anymore? And I'm just curious, were any games created in assembly?
According to an article I read a few days ago, assembly is going out of fashion.
People mostly use higher-level languages and additionally put some assembly code into their programs to speed up critical parts.

As for making games in assembly? I think it's better to kill yourself than to write a game in assembly :D
And I'm just curious, were any games created in assembly?

In the past, yes. Assembly is no longer used today, except for short snippets in some projects.
See zsnes for a game-related program written mostly in assembly.

But considering the other option, I'd still go with assembly. It's still useful for analyzing the assembler code generated by compilers.
Learning assembly gives you a better understanding of how the computer works. Is Visual Basic still used anymore?
Is Visual Basic still used anymore?


I would never spend even one minute of my life learning Visual Basic,
not only because it's a Windows-only language, but because programs written in Visual Basic are crappy and unstable.
Games are not "created in assembly", but assembly languages (there's at least one for each CPU type) are very much in use.

It's true that there aren't many jobs where people write assembly straight up, but being able to read and understand it is necessary when dealing with any sort of optimization in high-performance computing (that includes games).

Here, for example, is a popular article on a C/C++ language feature and its impact on performance http://cellperformance.beyond3d.com/articles/2006/06/understanding-strict-aliasing.html -- note how assembly is used to illustrate the text.
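If you want to see that kind of output for your own code, here is a minimal sketch (my own example, assuming GCC; the -S flag writes the generated assembly to a .s file instead of an object file):

// sum.cpp -- inspect the generated assembly with: g++ -O2 -S sum.cpp
// (writes sum.s); compare it against a plain -O0 build of the same file.
int sum_array(const int* a, int n) {
    int total = 0;
    for (int i = 0; i < n; ++i)
        total += a[i];   // the interesting part: how the compiler turns this loop into instructions
    return total;
}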

but being able to read and understand it is necessary when dealing with any sort of optimization in high-performance computing (that includes games).


HPC is based first on efficient distributed algorithms, not micro-optimisations at the assembly level. That is why they use Fortran, Java, Erlang and OCaml in HPC much more often than they use assembly and C or C++. It is scalability that matters to HPC, not raw performance on a single core. I recently got hands-on experience with one of the supercomputers they bought for our lab - 128 cores, 384 GB RAM, and guess what... you can't even program it in C or C++. It executes Java natively. :P

Assembly is important to compiler / VM / OS writers, not application writers. For an application writer it is much more important to know how the computer works (e.g. the architecture of memory / caches etc.) and what the capabilities of the compiler or OS are, rather than understanding assembly.
@rapidcoder Understanding assembly is a necessary part of understanding the capabilities of the compiler, whether you're writing an embedded system or a scalable multicore application. Simply because that's what the compiler's output *is*.

We'd be at a disadvantage against our competition if we didn't care which CPU instructions were generated on critical code paths. It becomes even more true as the scale goes up, since certain instructions affect multiple cores.
I picked up a bit of x86 (at least that's what I think it is) by peeking at the disassembler of my IDE. I used to count instructions to measure performance, until I found out "1 instruction = 1 instruction" isn't a valid assumption. Now I just use it to disprove people who still believe using "++i" is more efficient than "i++" in a for-loop. Worth every minute.
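A quick sketch of the kind of test meant here (my own example, not code from the thread): for a built-in int counter, an optimizing compiler is expected to emit identical instructions for both loop forms, which you can confirm in the disassembler.

// inc.cpp -- both functions should disassemble to the same instructions at -O2,
// because the unused temporary produced by i++ is optimized away.
int sum_prefix(int n)  { int s = 0; for (int i = 0; i < n; ++i) s += i; return s; }
int sum_postfix(int n) { int s = 0; for (int i = 0; i < n; i++) s += i; return s; }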
I would rather use assembly than VB. Assembly can never truly be a dead language, because then C would be missing its link to machine language during compilation, so it must still be used.
@Gaminic
I just use it to disprove people who still believe using "++i" is more efficient than "i++"

The people who believe that may include me; we have been over this before.
It seems you have temporarily proven that this isn't true, because we were talking about a for-loop, weren't we?
But you also didn't understand exactly what I wanted to say by that.

Take a look at the end of this page:
http://www.parashift.com/c++-faq-lite/operator-overloading.html#faq-13.15

Now don't tell me you want to go into that discussion.
@Gaminic,
Generally, fewer instructions translate to better performance, but obviously if one instruction takes 5 clock cycles to complete and another takes only 3, the one that takes fewer cycles will be faster. It also depends on whether a process is more CPU-bound or more I/O-bound: I/O-bound processes are usually slower because, unless they use asynchronous I/O, the OS can't run them until the input/output is ready.

Still, having fewer instructions is generally beneficial, whether because the executable file is smaller (meaning fewer cache misses) or because it takes less time to complete (but again, 10 instructions taking 5 cycles each, 50 cycles total, is slower than 20 instructions taking 2 cycles each, 40 cycles total).
@codekiddy
@Gaminic
I just use it to disprove people who still believe using "++i" is more efficient than "i++"

The people who believe that may include me; we have been over this before.
It seems you have temporarily proven that this isn't true, because we were talking about a for-loop, weren't we?
But you also didn't understand exactly what I wanted to say by that.

Take a look at the end of this page:
http://www.parashift.com/c++-faq-lite/operator-overloading.html#faq-13.15

Now don't tell me you want to go into that discussion.

Really, this is your argument?
a) I explicitly said "in for-loops" to make sure I wouldn't overstep the boundaries of what I've tested. Extrapolation is always nonsense. Just because you left that part out when you quoted me doesn't mean you can call me on that. Additionally, in the topic where we discussed it, you were specifically telling someone to use prefix operators in his for-loops. That is what I was talking about; that is what I disproved; that is where you were wrong. Don't go changing my quotes so you can find a loophole to "disprove" what I said.

b) The link you posted specifically says it doesn't matter for intrinsic types like ints, which, again, was what you were talking about, which was what I disproved. If you're going to provide sources to support your side of the discussion, at least make sure they're actually supporting your side of the discussion. Also, just because it's on the internet doesn't mean it's [still] correct. Pull up your IDE and test it. You know, like I did, when I provided proof of the opposite.

Seriously, stop trying to defend yourself; especially since I'm not trying to attack you. It's a common misconception that using prefix increment in your loops is faster than postfix increments. [Probably because it once was, a few compiler editions ago.] Rather than being stubborn about it and quote sites you apparently haven't actually read, you should join me in my quest to rid the world of this fairy tale.
@chrisname
That's exactly my point. Unless 1 instruction = 1 instruction, it's still "meaningless" to count instructions just like it's meaningless to count lines of code.

Most beginners believe that less code = faster code, falsely believing that each built-in function requires equal time. In the second phase, they believe that the generated assembly code is the baseline, thus fewer instructions = faster code, even though that isn't true either.

My world was rocked when I found out setting a variable to zero by xor-ing with itself is apparently faster than simply setting it to 0.
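A small sketch of what that looks like (my own example; x86 mnemonics assumed, so check your own compiler's output):

// zero.cpp -- at -O1 and above, x86 compilers typically zero the return register
// with "xor eax, eax" rather than "mov eax, 0"; the xor form has a shorter
// encoding and produces the same result.
int zero() { return 0; }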

http://www.parashift.com/c++-faq-lite/operator-overloading.html#faq-13.15
Now don't tell me you want to go into that discussion.


The FAQ entry is partially wrong on this one, because it states i++ can never be faster than ++i. But the following code:

int a = b++;

can be expected to run slightly faster than this one:

int a = ++b;

So, as long as you don't need the result, the FAQ is right; but when you do need the result, you should organize your algorithms in such a way that you use postfix ++ rather than prefix ++.
rapidcoder,
And what about the temporary object being created (sometimes) when using postfix?
If there is an additional temporary object, postfix will probably be slower, but it depends on the size of the object and where/how it is allocated. If the compiler is able to elide it, postfix will be faster.
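The canonical way the two operators are written for a class type shows where that temporary comes from (a generic sketch, not code posted in this thread):

struct Counter {
    int value = 0;
    Counter& operator++() { ++value; return *this; }   // prefix: increments in place, no copy
    Counter operator++(int) {                          // postfix: must return the old value,
        Counter old = *this;                           // so it copies the object first
        ++value;
        return old;
    }
};

Whether that copy actually costs anything then depends, as said above, on the size of the object and on whether the compiler can elide it.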
Gaminic,
I'm not defending myself, and I don't think you are attacking me.
Also, I have nothing against you.
I'll just say you're right.

But using the postfix operator only when it is really needed is good practice.

rapidcoder,
The FAQ entry is partially wrong on this one, because it states i++ can never be faster than ++i. But the following code:


int a = b++;
can be expected to run slightly faster than this one:

int a = ++b;


I think you misunderstand.

In your example, of course, this does not matter.
the following code:

int a = b++;

can be expected to run slightly faster than this one:

int a = ++b;


No, it can't be expected to run faster.
Cubbi,
No, it can't.

But the result of a is not the same.

Again:
and what about the temporary object?

When I say temporary object, I don't mean in this funny example, but rather incrementing a big object.

Making a copy of such a big object when using postfix is *NOT* the same, is it???

EDIT:

And what about a big array of big objects?
How much time will be spent copying all those objects?
Again (the copy is not always performed, but sometimes it is).

So don't post such a funny example; rather, think about real-world objects and not int a LOL