3.9 ASSEMBLY LANGUAGE AND ADDRESSING MODES

With the hardware ready, a computer requires software to make it more than an inactive collection of components. Microprocessors fetch instructions from program memory, each consisting of an opcode and, optionally, additional operands following the opcode. These opcodes are binary data that are easy for the microprocessor to decode, but they are not very readable by a person. To enable a programmer to more easily write software, an instruction representation called assembly language was developed. Assembly language is a low-level language that directly represents each binary opcode with a human-readable text mnemonic. For example, the mnemonic for an unconditional branch-to-subroutine instruction could be BSR. In contrast, a high-level language such as C++ or Java contains more complex logical expressions that may be automatically converted by a compiler to dozens of microprocessor instructions. Assembly language programs are assembled, rather than compiled, into opcodes by directly translating each mnemonic into its binary equivalent.

Assembly language also makes programming easier by enabling the usage of text labels in place of hard-coded addresses. A subroutine can be named FOO, and when BSR FOO is encountered by the assembler, a suitable branch target address will be automatically calculated in place of the label FOO. Each type of assembler requires a slightly different format and syntax, but there are general assembly language conventions that enable a programmer to quickly adapt to specific implementations once the basics are understood. An assembly language program listing usually has three columns of text followed by an optional comment column as shown in Fig. 3.14. The first column is for labels that are placeholders for addresses to be resolved by the assembler. Instruction mnemonics are located in the second column. The third column is for instruction operands.

This listing uses the Motorola 6800 family’s assembly language format. Though developed in the 1970s, 68xx microprocessors are still used today in embedded applications such as automobiles and industrial automation. The first line of this listing is not an instruction, but an assembler directive that tells the assembler to locate the program at memory location $100. When assembled, the listing is converted into a memory dump that lists a range of memory addresses and their corresponding contents—opcodes and operands. Assembler directives are often indicated with a period prefix.

The program in Fig. 3.14 is very simple: it counts to 30 ($1E) and then sends the “Z” character out the serial port. It continues in an infinite loop by returning to the start of the program when the serial port routine has completed its task. The subroutine to handle the serial port is not shown and is referenced with the SEND_CHAR label. The program begins by clearing accumulator A (the 6800 has two accumulators: ACCA and ACCB). It then enters an incrementing loop where the accumulator is incremented and then compared against the terminal count value, $1E. The # prefix tells the assembler to use the literal value $1E for the comparison. Other alternatives are possible and will soon be discussed. If ACCA is unequal to $1E, the microprocessor goes back to increment ACCA. If equal, the accumulator is loaded with the ASCII character to be transmitted, also a literal operand.

The assumption here is that the SEND_CHAR subroutine transmits whatever is in ACCA. When the subroutine finishes, the program starts over with the branch-always instruction.

Each of the instructions in the preceding program contains at least one operand. CLRA and INCA have only one operand: ACCA. CMPA and LDAA each have two operands: ACCA and associated data. Complex microprocessors may reference three or more operands in a single instruction. Some instructions can reference different types of operands according to the requirements of the program being implemented. Both CMPA and LDAA reference literal operands in this example, but a programmer cannot always specify a predetermined literal data value directly in the instruction sequence.

Operands can be referenced in a variety of manners, called addressing modes, depending on the type of instruction and the type of operand. Some types of instructions inherently use only one addressing mode, and some types have multiple modes. The manners of referencing operands can be categorized into six basic addressing modes: implied, immediate, direct, relative, indirect, and indexed. To fully understand how a microprocessor works, and to efficiently utilize an instruction set, it is necessary to explore the various mechanisms used to reference data.

Implied addressing specifies the operand of an instruction as an inherent property of that instruction. For example, CLRA implies the accumulator by definition. No additional addressing information following the opcode is needed.

            .ORIG   $100

BEGIN       CLRA
INC_LOOP    INCA
            CMPA    #$1E         ; compare ACCA = $1E
            BNE     INC_LOOP     ; if not equal, go back
            LDAA    #'Z'         ; else, load ASCII 'Z'
            BSR     SEND_CHAR    ; send ACCA to serial port
            BRA     BEGIN        ; start over again

FIGURE 3.14 Typical assembly language listing.


Immediate addressing places an operand’s value literally into the instruction sequence. LDAA #'Z' has its primary operand immediately available following the opcode. An immediate operand is indicated with the # prefix in some assembly languages. Eight-bit microprocessors with eight-bit instruction words cannot fit an immediate value into the instruction word itself and, therefore, require that an extra byte following the opcode be used to specify the immediate value. More powerful 32-bit microprocessors can often fit a 16-bit or 24-bit immediate value within the instruction word. This saves an additional memory fetch to obtain the operand.

Direct addressing places the address of an operand directly into the instruction sequence. Instead of specifying LDAA #'Z', the programmer could specify LDAA $1234. This version of the instruction would tell the microprocessor to read memory location $1234 and load the resulting value into the accumulator. The operand is directly available by looking into the memory address specified just following the instruction. Direct addressing is useful when there is a need to read a fixed memory location. Usage of the direct addressing mode has a slightly different impact on various microprocessors. A typical 8-bit microprocessor has a 16-bit address space, meaning that two bytes following the opcode are necessary to represent a direct address. The 8-bit microprocessor will have to perform two additional 8-bit fetch operations to load the direct address. A typical 32-bit microprocessor has a 32-bit address space, meaning that 4 bytes following the opcode are necessary. If the 32-bit microprocessor has a 32-bit data bus, only one additional 32-bit fetch operation is required to load the direct address.
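As a sketch of the difference between the two forms (the address $1234 is the arbitrary example used above, and the comments describe the general behavior rather than any particular assembler's output):

            LDAA    #'Z'         ; immediate: the byte $5A ('Z') itself follows the opcode
            LDAA    $1234        ; direct: the address $1234 follows the opcode, and the
                                 ; byte stored at $1234 is read into ACCA

On the 6800 specifically, a full 16-bit address such as $1234 is handled by extended addressing, while the term direct is reserved for an 8-bit address, as noted later in this section.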

Relative addressing places an operand’s relative address into the instruction sequence. A relative address is expressed as a signed offset relative to the current value of the PC. Relative addressing is often used by branch instructions, because the target of a branch is usually within a short distance of the PC, or current instruction. For example, BNE INC_LOOP results in a branch-if-not-equal backward by two instructions. The assembler automatically resolves the addresses and calculates a relative offset to be placed following the BNE opcode. This relative operation is performed by adding the offset to the PC. The new PC value is then used to resume the instruction fetch and execution process. Relative addressing can utilize both positive and negative deltas that are applied to the PC. A microprocessor’s instruction format constrains the relative range that can be specified in this addressing mode. For example, most 8-bit microprocessors provide only an 8-bit signed field for relative branches, indicating a range of +127/–128 bytes. The relative delta value is stored into its own byte just after the opcode. Many 32-bit microprocessors allow a 16-bit delta field and are able to fit this value into the 32-bit instruction word, enabling the entire instruction to be fetched in a single memory read. Limiting the range of a relative operation is generally not an excessive constraint because of software’s locality property. Locality in this context means that the set of instructions involved in performing a specific task are generally relatively close together in memory. The locality property covers the great majority of branch instructions. For those few branches that have their targets outside of the allowed relative range, it is necessary to perform a short relative branch to a long jump instruction that specifies a direct address. This reduces the efficiency of the microprocessor by having to perform two branches when only one is ideally desired, but the overall efficiency of saving extra memory accesses for the majority of short branches is worth the trade-off.
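To make the offset arithmetic concrete, the listing in Fig. 3.14 can be annotated with addresses, assuming the standard 6800 instruction lengths (one byte for inherent instructions such as CLRA and INCA, two bytes for the immediate and relative forms) and assuming, as on the 6800, that the offset is measured from the instruction following the branch:

$0100  BEGIN     CLRA                 ; 1 byte
$0101  INC_LOOP  INCA                 ; 1 byte
$0102            CMPA    #$1E         ; 2 bytes: opcode + immediate value
$0104            BNE     INC_LOOP     ; 2 bytes: opcode + signed offset
$0106            LDAA    #'Z'         ; PC points here when the offset is applied

The assembler computes the offset as the target minus the next-instruction address: $0101 - $0106 = -5, stored as the single byte $FB, which is comfortably within the +127 to -128 range.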

Indirect addressing specifies an operand’s direct address as a value contained in another register. The other register becomes a pointer to the desired data. For example, a microprocessor with two accumulators can load ACCA with the value that is at the address in ACCB. LDAA (ACCB) would tell the microprocessor to put the value of accumulator B onto the address bus, perform a read, and put the returned value into accumulator A. Indirect addressing allows writing software routines that operate on data at different addresses. If a programmer wants to read or write an arbitrary entry in a data table, the software can load the address of that entry into a microprocessor register and then perform an indirect access using that register as a pointer. Some microprocessors place constraints on which registers can be used as references for indirect addressing. In the case of a 6800 microprocessor, LDAA (ACCB) is not actually a supported operation but serves as a syntactical example for purposes of discussion.
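On the 6800 itself, the pointer role is played by the index register rather than an accumulator, so the equivalent effect is obtained with an index offset of zero. A minimal sketch, assuming ENTRY is a hypothetical label for the table entry of interest and using the common Motorola offset,X form rather than the conceptual (X+offset) notation used below:

            LDX     #ENTRY       ; load the 16-bit address of the entry into X
            LDAA    0,X          ; read the byte at that address into ACCA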

Indexed addressing is a close relative (no pun intended) of indirect addressing, because it also refers to an address contained in another register. However, indexed addressing also specifies an offset, or index, to be added to that register base value to generate the final operand address: base + offset = final address. Some microprocessors allow general accumulator registers to be used as base-address registers, but others, such as the 6800, provide special index registers for this purpose. In many 8-bit microprocessors, a full 16-bit address cannot be obtained from an 8-bit accumulator serving as the base address. Therefore, one or more separate index registers are present for the purpose of indexed addressing. In contrast, many 32-bit microprocessors are able to specify a full 32-bit address with any general-purpose register and place no limitations on which register serves as the index register. Indexed addressing builds upon the capabilities of indirect addressing by enabling multiple address offsets to be referenced from the same base address.

LDAA (X+$20) would tell the microprocessor to add $20 to the index register, X, and use the resulting address to fetch data to be loaded into ACCA. One simple example of using indexed addressing is a subroutine to add a set of four numbers located at an arbitrary location in memory.

Before calling the subroutine, the main program can set an index register to point to the table of numbers. Within the subroutine, four individual addition instructions use the indexed addressing mode to add the locations X+0, X+1, X+2, and X+3. When so written, the subroutine is flexible enough to be used for any such set of numbers. Because of the similarity of indexed and indirect addressing, some microprocessors merge them into a single mode and obtain indirect addressing by performing indexed addressing with an index value of zero.
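A minimal sketch of such a subroutine, assuming hypothetical labels SUM4 and TABLE and the Motorola offset,X syntax (the 8-bit sum simply wraps if it overflows):

; caller: point X at any four-byte table, then call the subroutine
            LDX     #TABLE
            BSR     SUM4

; subroutine: returns the sum of the bytes at X+0 through X+3 in ACCA
SUM4        CLRA
            ADDA    0,X
            ADDA    1,X
            ADDA    2,X
            ADDA    3,X
            RTS

Because only X changes before the call, the same SUM4 routine works for any table of four numbers anywhere in memory.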

The six conceptual addressing modes discussed above represent the various logical mechanisms that a microprocessor can employ to access data. It is important to realize that each individual microprocessor applies these addressing modes differently. Some combine multiple modes into a single mode (e.g., indexed and indirect), and some will create multiple submodes out of a single mode. The exact variation depends on the specifics of an individual microprocessor’s architecture.

With the various addressing modes modifying the specific opcode and operands that are presented to the microprocessor, the benefits of using assembly language over direct binary values can be observed. The programmer does not have to worry about calculating branch target addresses or resolving different addressing modes. Each mnemonic can map to several unique opcodes, depending on the addressing mode used. For example, the LDAA instruction in Fig. 3.14 could easily have used extended addressing by specifying a full 16-bit address at which the ASCII transmit-value is located.

Extended addressing is the 6800’s mechanism for specifying a 16-bit direct address. (The 6800’s direct addressing involves only an eight-bit address.) In either case, the assembler would determine the correct opcode to represent LDAA and insert the correct binary values into the memory dump. Additionally, because labels are resolved each time the program is assembled, small changes to the program can be made that add or remove instructions and labels, and the assembler will automatically adjust the resulting addresses accordingly.
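As a sketch of that alternative (CHAR_VAL is a hypothetical label for a memory location holding the ASCII 'Z'; the assembler would select the extended-addressing form of LDAA automatically):

            LDAA    CHAR_VAL     ; extended: the 16-bit address of CHAR_VAL follows the
                                 ; opcode, and the byte stored there is loaded into ACCA
            BSR     SEND_CHAR    ; transmit the character as before

The rest of the Fig. 3.14 program would be unchanged; only the operand form of the LDAA instruction differs.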

Programming in assembly language is different from using a high-level language, because one must think in smaller steps and have direct knowledge about the microprocessor’s operation and architecture. Assembly language is processor-specific instead of generic, as with a high-level language. Therefore, assembly language programming is usually restricted to special cases such as boot code or routines in which absolute efficiency and performance are demanded. A human programmer will usually be able to write more efficient assembly language than a high-level language compiler can generate. In large programs, the slight inefficiency of the compiler is well worth the trade-off for ease of programming in a high-level language. However, time-critical routines such as I/O drivers or ISRs may benefit from manual assembly language coding.


CHAPTER 4

Memory

Memory is as fundamental to computer architecture as any other element. The ability of a system’s memory to transact the right quantity of data in the right span of time has a substantial impact on how that system fulfills its design goals. Digital engineers struggle to find innovative ways to improve memory density and bandwidth in a way that is tailored to a specific application’s performance and cost constraints.

Knowledge of prevailing memory technologies’ strengths and weaknesses is a key requirement for designing digital systems. When memory architecture is chosen that complements the rest of the system, a successful design moves much closer to fruition. Conversely, inappropriate memory architecture can doom a good idea to the engineering doldrums of impracticality brought on by artificial complexity.

This chapter provides an introduction to various solid-state memory technologies and explains how they work from an internal structural perspective as well as an interface timing perspective. A memory’s internal structure is important to an engineer, because it explains why that memory might be more suited for one application over another. Interface timing is where the rubber meets the road, because it defines how other elements in the system can access memory components’ contents. The wrong interface on a memory chip can make it difficult for external logic such as a microprocessor to access that memory and still have time left over to perform the necessary processing on that data.

Basic memory organization and terminology are introduced first. This is followed by a discussion of the prevailing read-only memory technologies: EPROM, flash, and EEPROM. Asynchronous SRAM and DRAM technologies, the foundations for practically all random-access memories, are presented next. These asynchronous RAMs are no longer on the forefront of memory technology but still find use in many systems. Understanding their operation not only enables their application, it also contributes to an understanding of the most recent synchronous RAM technologies. (High-performance synchronous memories are discussed later in the book.) The chapter concludes with a discussion of two types of specialty memories: multiport RAMs and FIFOs. Multiport RAMs and FIFOs are found in many applications where memory serves less as a storage element and more as a communications channel between distinct logic blocks.
