
Physical Behavior of an Address Bus

segaloco

Experienced Member
Joined
Apr 30, 2023
Messages
130
Posed this question on another forum but casting a wide net. I'm trying to solidify my understanding of how a CPU materially addresses components using the pins of an address bus. Basically my shoddy understanding goes something like this:

An operation is decoded by a CPU and found to involve an external memory access. The bits of this memory address are set on the address pins, such that on a 16-bit address bus, an address of 0x2002 results in pins A13 and A1 being logic-high and the remaining pins being logic-low.

Where it falls apart for me is where these pins are physically connected in a circuit and how they influence a single CPU IC communicating with different ICs located on different parts of a physical board.

So say you have a piece of hardware with a CPU bearing a 16-bit address bus and an 8-bit data bus, a work RAM chip with 0x800 addressable bytes, and a graphics processor with 8 addressable one-byte registers. The work RAM is based at 0x0000 and the graphics processor at 0x2000. The above address is emitted on the address bus. What is it that sees A10-A0 and D7-D0 connected to the corresponding buses on the work RAM IC if A13 is logic-low, but A2-A0 and D7-D0 connected to the graphics processor if A13 is logic-high? Are both the low-order bits of the address bus *and* data bus switched to only connect to the target IC, or do one or the other set of physical pins make physical circuit connections to all components, with some other thing ensuring only the data bus contents from the expected IC make a connection?

If the low address lines touch all ICs and so influence them every cycle, what prevents ICs other than the target from taking the contents of their address lines to mean they should do something? Is the read/write pin also arbitrated to the various ICs so that anything other than the target is on "read" and is simply exposing something on a data bus that is grounded out that cycle or otherwise doesn't go anywhere?

Thanks for any insights, this is an area I want to understand better.
 
Only parts of the address bus are decoded to provide a "chip select" pin to the relevant ICs.

Let's take a much simpler example...

You have a 16 bit address bus.

You have RAM from $0000 to $7FFF.

You have graphics RAM from $8000 to $BFFF.

You have ROM from $C000 to $FFFF.

This means that we can use A15 and A14 (assuming the address bus starts with A0) to decode for our devices as follows:

RAM = (A15 = 0).
Graphics RAM = ((A15 = 1) and (A14 = 0)).
ROM = ((A15 = 1) and (A14 = 1)).

A0 through A13 are wired in parallel to all of the devices.

In the case of the RAM - A14 is wired to the RAM array as well.

I hope you can see that only one area of RAM or ROM is enabled at any one time.

Of course, a CPU can read and write from/to the address space. Let's assume we have a READ and a WRITE signal from the CPU.

We don't want to WRITE to the ROM, so we would include this in our address decoding logic:

ROM = ((A15 = 1) and (A14 = 1) and (READ = 1)). On the assumption that READ = 1 signifies a read cycle.

Obviously, the logic can get more complex for real-world machines.
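To make the decode concrete, here is a toy software model of those three equations (pure illustration; the signal names are mine, not taken from any real chip):

```python
def decode(addr, read):
    """Model the two-bit address decode above: A15 picks RAM vs. the top
    half, A14 splits graphics RAM from ROM, and ROM select is further
    qualified by READ so that writes to ROM are simply ignored."""
    a15 = (addr >> 15) & 1
    a14 = (addr >> 14) & 1
    return {
        "RAM_SEL": a15 == 0,                        # $0000-$7FFF
        "GFX_SEL": a15 == 1 and a14 == 0,           # $8000-$BFFF
        "ROM_SEL": a15 == 1 and a14 == 1 and read,  # $C000-$FFFF, reads only
    }

print(decode(0x1234, read=True))   # only RAM_SEL is asserted
print(decode(0x9000, read=True))   # only GFX_SEL is asserted
print(decode(0xC000, read=False))  # write to ROM: nothing is selected
```

At most one select is ever asserted for a given address, which is the whole point of the decode.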

Does this make sense?

Dave
 
In most designs there is an address decoder. It connects to the high-order address bits and generates a select signal that determines which memory device responds. The low-order address bits select which location in the selected chip is used.
In a system that has an I/O address space the decoding is a bit more complicated, but the result is the same: selecting one device that will respond.
The definition of low/high order is arbitrary and will vary between systems.

Dave I think your answer was better than mine.
 
These three really helped me to finally grasp the low-level shenanigans after quite some time of high-level programming

 
Between this and my other thread, I think I understand this now.

So a selection of high-order address bits is essentially cordoned off to pass to a multiplexer or some other circuit. Those high-order bits essentially map down to setting logic-high on a specific pin coming out of that circuit that connects to a pin on yond memory-mapped device, enabling it or at least its connection to the data bus.

The low order address bits as well as the data bus and read/write pin are connected to all the devices, so all the devices "see" those parts of the memory situation, but only the desired one *acts* on it because it is the only IC that has been directed to do so by the information encoded in the high order of the address?

So all in all, if you had three memory mapped regions on a CPU and you put an address on the bus, if I tapped those physical pins outside each individual component, I *would* see the address reaching all of them, but I would only see one of the three receiving a logic-high on whatever enable/select pin is exposed, with the logic-low on the others precluding those ICs from doing anything with the address they're receiving or trying to do anything with bits seen on the data bus?
 
...if I tapped those physical pins outside each individual component, I *would* see the address reaching all of them, but I would only see one of the three receiving a logic-high on whatever enable/select pin is exposed, with the logic-low on the others precluding those ICs from doing anything with the address they're receiving or trying to do anything with bits seen on the data bus?
That's generally the way it's done, yes.

But to be clear, when you say, "So all in all, if you had three memory mapped regions on a CPU and you put an address on the bus," this is wrong. You don't have any mappings on the CPU; all the CPU knows is that it puts an address on the bus and (for a read cycle) sees something on the data bus later that it loads. It has no idea where this came from or how any particular devices were selected. Heck, you can simply put some pull-up/pull-down resistors on the data bus pins to give the CPU a NOP (or any other) opcode, and it will happily execute that on every opcode read cycle.

It's the address decoding logic, completely external to the CPU, that determines what devices will respond to a read request, or listen to a write request. (And even whether these devices are told it's a read or write request.) Though you don't have to, it's generally easiest to have the address and data buses always connected to all devices and use the "select" or "enable" pins to tell the particular device you want to associate with an address to stop ignoring what it sees on its address and data pins.

For CPU reads, you have to be careful that you don't enable two devices for the same address, or they'll both try to drive the bus and you'll end up with who knows what on the data bus. For writes, not such a big deal; you could enable two or more devices at one address and they'll all get written to simultaneously.
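To illustrate the point, here is a toy read-cycle model (the device names, sizes, and the 0xFF "floating bus" value are all illustrative assumptions): every device sees the low address lines on every cycle, but only the selected one drives the data bus.

```python
class Device:
    """A memory-mapped chip: it sees the low address lines every cycle,
    but only drives the data bus when its chip-select is asserted."""
    def __init__(self, name, base, size):
        self.name, self.base, self.size = name, base, size
        self.mem = bytearray(size)

    def read(self, addr, selected):
        offset = addr % self.size  # only the low-order lines are wired up
        if not selected:
            return None            # outputs stay high-impedance
        return self.mem[offset]

ram = Device("RAM", 0x0000, 0x800)  # work RAM at 0x0000
gfx = Device("GFX", 0x2000, 0x8)    # 8 graphics registers at 0x2000
gfx.mem[2] = 0xAB

def bus_read(addr):
    # The external decoder asserts at most one select based on A15-A13.
    values = [dev.read(addr, selected=(addr & 0xE000) == dev.base)
              for dev in (ram, gfx)]
    driven = [v for v in values if v is not None]
    assert len(driven) <= 1, "bus contention!"
    return driven[0] if driven else 0xFF  # nobody driving: floating bus

print(hex(bus_read(0x2002)))  # prints 0xab: GFX register 2 drives the bus
```

If the decoder ever selected two devices for the same address, both would return a value and the assertion would trip; that is the software analogue of the bus contention described above.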
 
Ah, I meant memory-mapped as in they are peripherals on the address bus, as opposed to some other connection through other peripheral controllers or non-addressable things like quartz oscillators; just making the distinction with inaccurate terminology.
 
Yes, despite a CPU supposedly being the intelligent part of the computer, as far as the address decoding is concerned, it is pretty stupid.

This external logic means that a 6502 CPU can interface with the memory and devices on a Commodore PET, an Apple IIe, or a whatever, and is completely unaware of what it is actually talking to.

The only important thing is where the 'reset vector' is (where the CPU obtains its first instruction from), and then the machine's firmware takes over to make it all work.

Dave
 
So, an extrapolation of this using the NES. As an aside, Dave, did you know from my description that I was describing a 6502 system, or was it just a happy coincidence?

So the cartridge PRG ROM is in the 0x8000-0xFFFF CPU address range. The expectation is that there is a memory circuit on the cart exposing a 15-bit address bus to which A14-A0 are always connected. A15-A13 also go into a multiplexer such that if A15 is high, the ROM SEL pin of the cartridge memory system is asserted. The end result of a memory read access at address 0x8123 is that you're really telling the multiplexer to enable the cartridge memory via the 0x8000 bit, and then telling the memory IC itself to put the byte at 0x123 (in memory-IC, not CPU, address space) on the data bus lines. In code in that memory, addresses are referred to including the high bit, since that is how the CPU *using* that data is going to address it, but if I stripped that chip right off the board, connected it straight to a CPU, and held ROM SEL permanently high, the CPU could address it via 0x0123 *or* 0x8123, since the high bit is essentially meaningless at that point.

I'm assuming this is why RAM mirroring is so common in small systems: it's not that some physical chip is saying addresses wrap around at a particular point; rather, the memory chip only receives certain pins, so any memory address that keeps the chip enabled will be interpreted using only the bits that are actually connected?
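That partial-decode intuition is just a mask in code. A sketch using the standard NES map for the numbers (the helper names are mine):

```python
# The NES CPU's 2 KB internal RAM has only A10-A0 wired to it, so the whole
# 0x0000-0x1FFF region decodes to RAM and the contents repeat every 0x800 bytes.
def ram_cell(cpu_addr):
    assert 0x0000 <= cpu_addr <= 0x1FFF
    return cpu_addr & 0x07FF  # only 11 address lines reach the chip

# Likewise a 32 KB PRG ROM at 0x8000-0xFFFF never sees A15 internally:
def rom_cell(cpu_addr):
    assert 0x8000 <= cpu_addr <= 0xFFFF
    return cpu_addr & 0x7FFF  # A15 only feeds the decoder, not the ROM

print(hex(ram_cell(0x1802)))  # prints 0x2: same cell as address 0x0002
print(hex(rom_cell(0x8123)))  # prints 0x123, matching the example above
```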

So would this technically make physical CPU addresses another level of "virtual addressing", where the high bits are in essence a page pointing at different ICs and the low bits are the true physical address for that component? It's just not VM in the sense we use it, because the CPU has zero influence over *how* things are mapped; it's a fixed mapping of components arbitrated by a multiplexer picking which chip is "on"?

Pardon if my questions are lengthy I just want to really make sure I understand this.
 
A 6502 reset vector is at the top of the memory space, so I had an inkling!

I think your high level understanding is good now.

Bear in mind that chip select pins can be either HIGH to enable or LOW to enable.

Virtual or bank switched memory adds a new dimension into the mix!

Dave
 
Cool, I really appreciate all the feedback on this matter. I've puzzled off and on for a while over this level of detail in systems architecture. Makes sense that circuits could operate on either high or low; seems like it'd be an implementation detail of how voltage on that pin actually causes the individual lines to switch connection to their sources. This all makes me appreciate how much thought goes into creating an effective physical memory map!
 
So would this technically make physical CPU addresses another level of "virtual addressing", where the high bits are in essence a page pointing at different ICs and the low bits are the true physical address for that component? It's just not VM in the sense we use it, because the CPU has zero influence over *how* things are mapped; it's a fixed mapping of components arbitrated by a multiplexer picking which chip is "on"?

Pardon if my questions are lengthy I just want to really make sure I understand this.
You're getting close to a memory management unit...(MMU)
Take that decode logic we were talking about and make it something the CPU can alter.
Now the CPU can address more memory than its address lines could normally control.

It was common with 8-bit micros to take the top address lines to address a fast RAM.
The data from that RAM was used as additional address lines for main memory.
 
It was common with 8-bit micros to take the top address lines to address a fast RAM.
The data from that RAM was used as additional address lines for main memory.

A positively dirt-common device used to implement memory paging in small computers from the late '70s through the '80s is the 74LS670 4x4 register file. Basically this is a tiny static RAM chip with 2 address lines (4 locations) that each hold 4 bits (16 possible values). This chip is "double-ended" in that it has separate read and write "sides" with independent address lines, allowing a cell to be updated independently of whatever location is being read out the other side. So, in practice, a typical application in an 8-bit computer would be to attach the two "read"-side address lines to the topmost two bits of the computer's address bus; on a chip like a Z80 or 6502 this has the effect of breaking the 64K address space into 4 16K pages. With a single '670 this essentially quadruples the available memory space to 256K (16 pages of 16K), selected by writing a 4-bit value into the corresponding location on the "write" side of the '670. And, of course, if you really need a lot of breathing space you could use a pair of '670s; that gives you 256 16K pages, or four megabytes.

(To make this clear in terms of bits, the single '670 subtracts two bits from your 16-bit address space while adding 4 bits; the resulting space is 18 bits, or 256K. A pair of them in parallel still subtracts two bits but gives you back 8, so now you have 22 bits to play with, 4 megs' worth. It just comes at the cost of having to arrange your code and data into 14-bit, i.e., 16K, pages.)

Again, though, the CPU itself doesn't have any idea what's going on here; it still just advances its program counter through its native 64K address space as it executes code. It's on the programmer to, as necessary, write new values to the memory locations/IO port you decoded the "write" side of the mapper into to update what page you're actually executing from.
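Under the assumptions above (a single '670 with A15-A14 on the read side), the mapper arithmetic can be sketched as a page-table lookup. This is a toy model; the reset contents are chosen for illustration (a real '670 powers up with undefined contents):

```python
# 74LS670-style mapper: a 4-entry, 4-bit page table indexed by the top two
# CPU address bits. Physical address = 4-bit page + 14-bit offset = 18 bits.
page_table = [0, 1, 2, 3]  # illustrative reset state; real hardware is undefined

def map_write(slot, page):
    """The 'write side': the CPU stores a 4-bit page number in one of the
    four table entries (via whatever I/O address the board decodes for it)."""
    page_table[slot & 0x3] = page & 0xF

def translate(cpu_addr):
    """The 'read side': A15-A14 index the table, and the 4-bit output
    becomes the top bits of an 18-bit physical address."""
    slot = (cpu_addr >> 14) & 0x3
    return (page_table[slot] << 14) | (cpu_addr & 0x3FFF)

map_write(2, 0xA)              # map CPU 0x8000-0xBFFF onto physical page 10
print(hex(translate(0x8005)))  # prints 0x28005 (page 0xA * 0x4000 + 5)
```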
 
Again, though, the CPU itself doesn't have any idea what's going on here; it still just advances its program counter through its native 64K address space as it executes code. It's on the programmer to, as necessary, write new values to the memory locations/IO port you decoded the "write" side of the mapper into to update what page you're actually executing from.
Sounds a lot like the mapping used on later NES cartridges to achieve larger storage sizes. Nintendo's initial answer was the Famicom Disk System, essentially an expensive bank switcher where the banks are files on a Mitsumi Quick Disk. Some other studios managed to add mappers to their cartridges; Nintendo followed suit with the Memory Management Controller (MMC) series of mappers.
 
Sounds a lot like the mapping used on later NES cartridges to achieve larger storage sizes.
Bingo. The NES mapper "maps" regions of the cartridge into CPU (and PPU) address space areas, under CPU control. The required hardware can be very simple, as seen in some discrete NES mappers.

Memory banking is most common in 8-bit (and derived) systems, and provides mapping functionality to extend the address space.

A memory management unit (MMU) is a more advanced variant, which combines mapping functionality with access control. So the CPU can protect itself from reading, writing or executing some of the mapped regions, depending on which privilege mode the CPU is in.

Some smaller microcontrollers provide a memory protection unit (MPU), which provides access control but no mapping functionality.
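A minimal sketch of the mapping-plus-protection distinction, entirely schematic and not modeled on any particular MMU (page size, table contents, and exception types are my own choices):

```python
# Each entry: virtual page number -> (physical page, allowed access modes).
# An MPU would keep the permission check but drop the translation step.
PAGES = {
    0x0: (0x12, {"r", "w"}),  # RAM page: read/write
    0x8: (0x3F, {"r", "x"}),  # ROM page: read/execute, never writable
}

def mmu_access(vaddr, mode):
    """Translate a virtual address and enforce permissions ('r'/'w'/'x').
    A real MMU would also factor in the CPU's current privilege mode."""
    page, offset = vaddr >> 12, vaddr & 0xFFF
    if page not in PAGES:
        raise MemoryError("page fault")  # unmapped page
    phys_page, perms = PAGES[page]
    if mode not in perms:
        raise PermissionError(f"{mode} access denied")
    return (phys_page << 12) | offset

print(hex(mmu_access(0x8010, "r")))  # prints 0x3f010: ROM read, translated
```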
 
So all in all, if you had three memory mapped regions on a CPU and you put an address on the bus, if I tapped those physical pins outside each individual component, I *would* see the address reaching all of them, but I would only see one of the three receiving a logic-high on whatever enable/select pin is exposed, with the logic-low on the others precluding those ICs from doing anything with the address they're receiving or trying to do anything with bits seen on the data bus?
Several CPUs use multiplexed address and data busses, or have separate I/O busses (which are generally an address and/or data bus designed to talk to peripheral chips rather than memory). You may have 24 lines, all of which are used to present an address, while 8 or 16 are also used for ferrying data, in two separate bus states. Microcontrollers can be even more complicated: they may have banks of general-purpose I/O pins which can be assigned to form an address and/or data bus, because often they have their own internal RAM and ROM.

So depending on what CPU, it can be a bit more complex. The actual timing and signalling of using these busses can be complicated by the need for refresh of dynamic RAM, bus mastering by things like video chips, or high speed I/O peripherals. Looking at the signalling diagrams in a CPU data sheet will give you a flavor for this.

So if your interest is in eventually soldering stuff together... you should likely focus on the particular CPU / architecture of interest, because the generalities can quickly turn into a lot of picayune detail.
 
There are plenty of 'old' books available (in PDF format on the internet) detailing specific microprocessors and how to interface them to hardware. You specifically mention the 6502 CPU - so these 'books' abound...

Dave
 
If you're referring to the original vendor hardware manuals, I have copies for several CPUs. I do recall the MOS manual mentioning a demultiplexer scheme much like that used in the NES, I just didn't get into the weeds on it when I was perusing that one. I like to just flip through various manuals I have while having a meal to absorb all sorts of random tidbits without context, only later finding "oh, I was reading about subject <xyz>". That's something I like about having print copies of this stuff, I'm more liable to just read random bits and learn something I wouldn't have actually gone looking for intentionally in, say, a PDF.
 
Several CPUs use multiplexed address and data busses, or have separate I/O busses (which are generally an address and/or data bus designed to talk to peripheral chips rather than memory). You may have 24 lines, all of which are used to present an address, while 8 or 16 are also used for ferrying data, in two separate bus states. Microcontrollers can be even more complicated: they may have banks of general-purpose I/O pins which can be assigned to form an address and/or data bus, because often they have their own internal RAM and ROM.

FWIW, multiplexing is mostly extinct on modern CPUs. (When I say "modern" I mean post-'80s.) The most popular CPUs that used it were the 8088 and 8086; not only do they multiplex the address and data busses, but since they have 20-bit addresses they also multiplex the top 4 bits of the address lines with some status signals. (The operation of said signals is pretty complex; all of Intel's early CPUs typically relied on companion chips to offload some tasks. For instance, it's a significant hassle to try to use the 8088/86 without an 8288 bus controller chip to turn those multiplexed status lines into discrete memory and I/O read/write signals.)

That said, most of the time when you have a machine with a CPU like this there will be a wad of latches between the CPU and the rest of the system to demux everything. (For instance, the expansion bus, and most of the internal bus, in an IBM PC is fully demuxed, so cards can treat it essentially the same way they'd interact with a non-multiplexed 6502 or Z80 bus.) There are exceptions, though, which would matter to someone building a peripheral for such a computer.


The actual timing and signalling of using these busses can be complicated by the need for refresh of dynamic RAM, bus mastering by things like video chips, or high speed I/O peripherals. Looking at the signalling diagrams in a CPU data sheet will give you a flavor for this.

Yes, things can really start giving you a headache when the machine employs DMA in any form. (In the IBM PC, for instance, there's a DMA controller that can "take turns" with the CPU in driving the bus, but it's still the only other thing that can drive values *onto* the address bus; i.e., any peripheral that's going to use DMA has to have its driver program that central DMA chip's counters with the start/end ranges of the transfer, and use a single set of pins to just tell it to increment. Other machines allow true busmasters, where an external device can signal that it wants control of the bus and can actually drive it, requiring all motherboard drivers to turn themselves off while that busmastering device has carte blanche to do whatever it wants... although that might itself be watchdogged by an MMU...)
 