Why Assembly Sucks

@helios makes a good point that x86 is RISC internally, though there is still some microcode and the instruction set itself isn't RISC.

I think the name was poorly chosen, because it seems to imply that the focus of attention, the important point of the development, was a simplification of the language, but that isn't really the right viewpoint. The real point was that the simplification was in service of letting circuits do the work, such that every assembler-language primitive represented a very simple instruction executed directly by hardware. This naturally implied that several instructions would be strung together to complete what a CISC CPU would consider one instruction.
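
To make that concrete, here's a rough C sketch (the function names and the x86 instruction mentioned in the comment are purely illustrative - real compiler output depends on the target, the compiler and its flags) of what "several simple instructions instead of one complex one" looks like:

#include <stdint.h>
#include <stdio.h>

static uint32_t counter = 41;

/* CISC view: one memory-to-memory operation. On x86 this can compile
   down to a single instruction along the lines of
   "add dword ptr [counter], 1". */
static void cisc_style_increment(void)
{
    counter += 1;
}

/* RISC view: the same work spelled out as the separate steps a
   load/store machine performs - load, modify in a register, store -
   each of which maps to one simple hardware operation. */
static void risc_style_increment(void)
{
    uint32_t reg = counter;   /* load  */
    reg = reg + 1;            /* add   */
    counter = reg;            /* store */
}

int main(void)
{
    cisc_style_increment();
    risc_style_increment();
    printf("%u\n", counter);  /* prints 43 */
    return 0;
}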

There's one more layer to that, too. The original design treated the compiler (assuming C, not assembler written directly) as an intelligent and adaptive bridge between this simplified language, representing the primitive hardware circuits that did the work, and the software being written. It was well recognized in the late 70's that transistor real estate would escalate wildly, meaning that what could be implemented directly in hardware would grow over time, which would add more instructions to the processor's "language" while still retaining the instruction-to-circuit relationship of the original design. They knew, in other words, that the "reduced" set of instructions would not remain stable over time, so the compiler was considered a flexible and important part of the design - RISC essentially depended on the compiler to keep working as new generations of the CPU emerged.

The original reason for micro-code emerged out of RAM limitations and common subroutines. The early (tube-based) CPUs were very, very primitive. Many didn't have multiplication or other common operations, so these were written as routines in software. Of course, with extremely limited RAM, that left little room for the actual application code. Moving common routines "into the CPU" relieved the RAM burden, but the operation was nearly identical to using subroutines for those features. As that "library" of subroutines grew, the name it took on was "micro-code".
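
To give an idea of what those routines actually did, here's a minimal C sketch (just for illustration - the early machines obviously weren't programmed in C) of the kind of shift-and-add multiply a CPU with no hardware multiplier relied on. Whether that routine lived in RAM as a subroutine or "inside the CPU" as micro-code, the work performed was essentially the same:

#include <stdint.h>
#include <stdio.h>

/* Multiply two unsigned integers using only shifts and adds - the
   kind of routine a machine without a multiply instruction had to
   run, either from RAM or from its micro-code store. */
static uint32_t shift_add_multiply(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    while (b != 0) {
        if (b & 1)          /* is the lowest bit of b set?       */
            result += a;    /* then add the shifted multiplicand */
        a <<= 1;            /* shift multiplicand one place left */
        b >>= 1;            /* move to the next bit of b         */
    }
    return result;
}

int main(void)
{
    printf("%u\n", shift_add_multiply(6, 7));   /* prints 42 */
    return 0;
}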

Micro-code gave the impression that the CPU was getting "better" and more capable, when in fact the primitive circuits in those CPUs hadn't changed much. It made the application software easier to write, which was itself a boon to the developer, but it made no actual difference to performance.

Yet, over time, as real estate began to grow, some of those micro-code instructions (which had become common assembler instructions) gained circuitry to perform that work.

Hardly any early computer exemplified the ultimate evolution of these notions as well as the Cray-1. There's a running joke that RISC doesn't really stand for "Reduced Instruction Set Computer", but should instead mean "Really Invented by Seymour Cray". Cray's first machine was considered a national security secret, which is part of why he was never credited with the invention of RISC. The true fundamental reason for RISC - circuitry doing the work, rather than subroutines embedded inside the CPU - was primarily due to Seymour Cray. Only years later was it realized who really "did it first", but Cray has enough credit as it is, so "history" is left alone and IBM (and its engineers) gets much of the credit. In truth, IBM's engineers "accidentally" developed similar technology independently, without even knowing what Cray had done a few years earlier.
Hmm... Are ARM CPUs really not microcoded?
Not in the traditional sense, no.