
8259A PICs in PC, XT and AT

Scali

I have recently been toying around with the auto-EOI mode of the 8259A, to try and save a few bytes/cycles for time-critical timer interrupt stuff.
I got it working on my IBM 5160.
But then I got a 286 machine, and well... the plot thickened.

Namely, I wanted to reset the PIC back to its original state when my program exited. So I took the code from the IBM PC BIOS and put that in my application.
Since an AT-class machine has two PICs, it would need to be restored to the cascaded setup instead (one of those rare occasions where you have to know exactly what hardware you're running on, because you cannot assume backward compatibility).
So I added some code to detect the second PIC, and use the code from the AT BIOS in that case.

So far so good, the detection worked, and I got it working on both my 5160 and my 486 clone.
But when I tried to run it on my 286, it didn't work. I found that if I kept sending manual EOIs, the system appeared to work correctly, but for some reason it wouldn't switch into auto-EOI mode.

So instead of just setting the first PIC up in a PC/XT configuration and ignoring the second PIC, I instead took the full AT BIOS setup, and just changed the auto-EOI bit there. Then it magically started working on my 286.
After some troubleshooting I found out that running a single PIC was not the issue. The problem was that the PC/XT BIOS sets ICW4 to a value of 9, whereas the AT BIOS sets it to a value of 1. The difference here is in the 'buffered mode' bit.
For some reason, auto-EOI doesn't work on my 286 if I set buffered mode.

So, perhaps someone here can shed some light on this, e.g.:
- Why does the AT initialize the 8259As with a different setting for ICW4? Is this because of changes in the circuit when adding the second PIC?
- What exactly is the effect of enabling/disabling buffered mode?
- How can auto-EOI be affected by the buffered mode setting? It doesn't seem to make sense to me. Auto-EOI is something that the 8259A should do internally, right? So how could it be affected by the buffered mode setting, which controls a signal to the outside world?

My 286 is a late model, which has an integrated chipset, so there are no physical 8259As present (and they probably integrated the whole two-PIC circuit in a single chip). It is possible that the implementation of the 8259A circuit is slightly bugged here, and the behavior is different from a real IBM AT (in which case my 486 appears to have a more accurate integrated chipset design, as far as the 8259A goes).
Does anyone have a real IBM AT to test this?
I have some simple test code in a Turbo C++ project here anyway; it contains both source and binary: https://www.dropbox.com/s/4d31o2aeti3hx0x/AUTOEOI_286.zip?dl=0
 
Without me checking the schematic diagrams for your particular machine - I can offer the following observation(s):

When performing end-of-interrupt processing, communication is involved between the master and slave PIC(s) (8259 devices). If all of the PICs are interconnected directly - then there tends to be no problem and buffered mode is not required. However, if the PICs are not interconnected directly - then buffers may need to be enabled between the devices to permit communication.

We use Intel 286/10A CPU boards in a MULTIBUS 1 environment and this is exactly the issue we have. Some of the slave PICs are on different cards to the CPU and (therefore) the MULTIBUS data/address buffers need to be enabled when performing master/slave PIC interactions. If I remember correctly, this is via the not(SP)/not(EN) pin (16) on the PIC.

You should find (if you peruse the schematic diagram) that the two PICs may not be directly connected to each other.

Also note that buffered mode needs to be selected for both the MASTER and SLAVE PICs (ICW4=XXXX10XX for buffered slaves and ICW4=XXXX11XX for a buffered master).

Dave
 
Thanks for that input, Dave. Things are slowly becoming a bit more clear.
I think what is confusing here is that the PC/XT BIOS sets up the single PIC in buffered slave mode (it sends 09h, so 00001001b). I wonder if this is deliberate or not, and if so, what the reason is for that choice.
The PC/AT BIOS sets both PICs up in unbuffered mode. From what I understood, in that case, the SP/EN pin is used to indicate whether a PIC should be master or slave. So their roles are hardwired, it seems, not programmed by the ICW values.
I think that's what we see here at page 1-76: http://www.minuszerodegrees.net/manuals/IBM_5170_Technical_Reference_1502243_MAR84.pdf
Master has SP/EN to +5v, and slave has SP/EN to GND.

I also looked up what the 5150 looks like. See page 1-37: http://www.minuszerodegrees.net/manuals/IBM_5150_Technical_Reference_6322507_APR84.pdf
Apparently there is indeed a difference in how the 8259A is wired up in the PC/XT. The SP/EN is not just hardwired to +5v or GND here, but connected to some logic.

That would explain the slightly different setup code.
 
When ICW4 is set as XXXX0XXX (non buffered mode) then the SP/EN pin is an INPUT with '1' meaning MASTER PIC and '0' meaning SLAVE PIC (i.e. hardwired).

When ICW4 is set as XXXX1XXX (buffered mode) then the SP/EN pin is an OUTPUT driving (mainly) enable/disable logic for data bus buffers. In this case, determination of the PIC as a MASTER or SLAVE is off-loaded to the software by setting this fact in ICW4 (XXXX10XX or XXXX11XX as a slave or master respectively). In this case, the SP/EN output pin is 'active' whenever the PIC is driving its data bus buffers (e.g. outputting an interrupt vector to the CPU). Depending upon 'where' in the circuit the PIC is located, the SP/EN pin could be used to enable or disable the data bus buffers as appropriate.

If the SP/EN pin has been wired to some logic - ICW4 MUST specify that the PIC be used in buffered mode - otherwise the PIC will assume a master or slave role depending upon the (random) voltage level on the SP/EN pin!

If the SP/EN pin has been pulled high or low (i.e. it is an input) then non buffered mode must be set in ICW4.

It's one of these cases where the hardware design dictates how you program the device... Get it wrong - and it doesn't work as designed...

Dave
 
If the SP/EN pin has been wired to some logic - ICW4 MUST specify that the PIC be used in buffered mode - otherwise the PIC will assume a master or slave role depending upon the (random) voltage level on the SP/EN pin!

Ah yes, very good. So indeed, the different circuits mean that you MUST have different settings for ICW4 depending on whether you have a 5150/5160 or a 5170 (and hopefully all clones are faithful to these).

It's one of these cases where the hardware design dictates how you program the device... Get it wrong - and it doesn't work as designed...

Yup, so IBM is doing it right with different initialization settings for their different machine designs.
The problem is that you can't read back the ICW from a running system, so you won't be able to detect what kind of hardware configuration it has.

The best I can do is detect whether you have a second PIC, and assume that a single PIC is always set up the way a real PC/XT is, and a dual PIC is always set up the way an AT is.
We can only hope that the clones are faithful to the IBM design, and use their PICs in the same way.
 
Another interesting passage in the 8259A manual is this by the way:
The AEOI mode can only be used in a master 8259A and not a slave. 8259As with a copyright date of 1985 or later will operate in the AEOI mode as a master or a slave.

My 5160 is a model from 1987, so probably has post-1985 8259A chips as well. I merely flipped the AEOI-bit in the setup sequence from the BIOS. This means it would run in buffered slave AEOI mode. Which only post-1985 chips would do, apparently.
I guess I should try to run it in buffered master mode.
Also, I'm not entirely sure why they set it to slave in a single-PIC system... or what the difference would be for a single PIC running in master mode as opposed to slave mode. As far as I understand it, the master/slave setting decides whether the cascade pins are used as inputs or outputs. But with a single PIC, nobody is sending on or listening to those pins... the only problem could arise in slave mode, when the PIC listens to those pins while they are not connected, so it would get random input. But that is apparently not happening, because it runs in slave mode by default.
 
I can't answer your specific question - but I may be able to shed a bit of additional light...

You may be confusing MASTER/SLAVE with single/cascade. ICW1 D1 (SNGL) specifies whether this is the only PIC in the system (single) or one of multiple (cascade). If single - ICW3 is not accepted (specifying the SLAVE_ID, or which master IR has a slave connected). If using buffered mode in a single PIC system - then the M/S bit of ICW4 probably is a "don't care". It has to be either a '0' or a '1' because you are writing a complete byte of course.

Dave
 
I can't answer your specific question - but I may be able to shed a bit of additional light...

You may be confusing MASTER/SLAVE with single/cascade. ICW1 D1 (SNGL) specifies whether this is the only PIC in the system (single) or one of multiple (cascade). If single - ICW3 is not accepted (specifying the SLAVE_ID, or which master IR has a slave connected). If using buffered mode in a single PIC system - then the M/S bit of ICW4 probably is a "don't care". It has to be either a '0' or a '1' because you are writing a complete byte of course.

Ah yes, you're probably right, that makes sense.
Anyway, I'll experiment with setting the master-bit in ICW4 to see if it works like that on my 5160 as well. And I wonder if setting buffered mode with master will work on the 286.

Update:
- Setting buffered mode/master with auto-EOI (send 0Fh to ICW4 instead of 0Bh) works fine on my 5160. This may be more compatible with old 8259A chips, so I will stick to this.
- Setting buffered mode and correctly setting the first PIC to master also works on my 286 (so using the same 0Fh init as for my 5160, ignoring the second PIC, so no ICW3). This seems to support the theory that the 286 may mirror the behaviour of pre-1985 8259A chips. A number of these probably ended up in real IBM ATs as well, so it is not that crazy, although it seems a bit of an anachronism in a 286-20 with a BIOS date of 07/07/91...
- In order to reset the system to the proper mode on exit, I still need to have separate codepaths for single-PIC and dual-PIC systems. So I might as well use the 'safer' unbuffered mode on machines with two PICs, and I will keep the two separate versions of the auto-EOI initialization, as well as of resetting back to the initial state.

Thanks for your help and insights. In this case it really helps to know a bit about how the hardware is wired into the system.
 
I've done some more testing and debugging, and I think I've come up with some good code to set up auto-EOI, and verify that it works, for both 8259As.
I also made the test capable of detecting old or new 8259A behaviour. It detects both integrated 8259As on my 286 clone as 'old', while it tests 'new' on my other machines. I haven't tested on real pre-1985 8259A chips yet, but I hope someone with the right hardware will do that soon.
The behaviour on my 286 is that in buffered slave mode (either standalone or in a cascaded setup), the AEOI flag is ignored. In non-buffered slave mode, or in standalone buffered master mode, AEOI works. This means I can still get both 8259As into auto-EOI mode.

The detection routine is quite simple, once you've figured it out :)
Namely, both 8259As have a timer attached to them. The first one has the classic 8253 timer, and on AT machines, the second one has the RTC.
So I set up these timers to generate periodic interrupts, and I install a handler which increments a counter, but does not send an EOI.
If the counter increments more than once, you know that the PIC is performing auto-EOI. Otherwise, no EOI is received, so no new interrupts are sent to the CPU.

In a cascaded system, you can test for an old 8259A by setting up the first PIC as standalone, and then set buffered slave+AEOI, ignoring the second PIC.
The second PIC can of course be run in buffered slave mode in cascaded mode.

I'll have a blog post up soon, once I have verified that the old/new 8259A detection works on real chips.
You should be able to tell by the (C) '85... This is old:
[image: Intel-P8259A.jpg]

And this is new:
[image: Intel-D8259A-2.jpg]


What may complicate matters somewhat is that NEC 8259 chips are also commonly used, and they don't have this copyright afaik. I also don't know if they were ever updated to the 1985-spec (or if they even had the glitch in the first place).
Looking at the datasheet, it seems it's a copy of the Intel one: http://www.datasheetarchive.com/dlmain/Datasheets-24/DSA-478192.pdf
But it makes no mention of the 1985 thing. So it probably is a pre-1985 clone, and was never updated.
According to CPU-World, AMD, Siemens and UMC also made them.
 
Well, so far I have tested various 8259As, but no real Intels yet. Only NECs or integrated solutions (Faraday FE2010, Headland HT18/C, and some ALi 486 chipset).
So far only the Headland-variation had slightly different behaviour. The NECs should have been pre-1985, but don't behave like the Headland does.
Which makes me wonder what real Intels from pre-1985 DO behave like.
A simple test-program with Turbo C++ source can be found here, if anyone is interested: https://www.dropbox.com/s/0h6p3fpmiyiccff/8259A.zip?dl=0
 
Did you try the 8259 in the popular Chips and Technologies 82C206 and its clones?

No, I don't have any such machines myself. But if anyone cares to try, I have linked my test-tool with source and binaries above. I should add it to the blog as well, actually.
It would be nice to have a good overview of what kind of chips you may encounter and what kind of quirks you may have. So far the Headland chipset was the only quirky one I found.

P.S. There was also a bug in the AMD 9517 DMA controller used in the early IBM PC.
http://www.vintage-computer.com/vcf...8237-DMA-Controller-issues-in-XT-s-and-clones


Ah, interesting! PCs are really a mess, I'm surprised they work at all :)
 
Well, so far I have tested various 8259As, but no real Intels yet. ...
Which makes me wonder what real Intels from pre-1985 DO behave like.
A simple test-program with Turbo C++ source can be found here, if anyone is interested: https://www.dropbox.com/s/0h6p3fpmiyiccff/8259A.zip?dl=0
I found that one of my IBM AT motherboards has two Intel branded 8259A's (P8259A with "I" symbol and "INTEL", copyright '85).
Not "pre-1985", but because of your "no real Intels yet", I ran your test program on the motherboard.
It outputted:

AT-compatible machine
New 8259A?
Yes AEOI!
New 8259A2?
Yes AEOI2!
 
I found that one of my IBM AT motherboards has two Intel branded 8259A's (P8259A with "I" symbol and "INTEL", copyright '85).
Not "pre-1985", but because of your "no real Intels yet", I ran your test program on the motherboard.
It outputted:

AT-compatible machine
New 8259A?
Yes AEOI!
New 8259A2?
Yes AEOI2!

Ah, thanks. At least my code is now officially verified on a real IBM AT, not just clones :)
I have opened up some 8088 machines, and found that my 5160 and Philips P3105 both use a real Intel post-1985 8259A. They were detected as new 8259A, and AEOI was enabled properly.
The biggest question I have left is: what happens when you have an AT with an old 8259A as slave?

Edit: as an aside: I noticed that my 286 masked out the RTC interrupt by default. So I didn't just have to reprogram the RTC to generate interrupts for each tick, but I also had to change the interrupt mask to get it to work. Any idea if the real AT does the same thing? (I suppose the answer is hidden somewhere in the BIOS listing as well, but probably easier to just boot the system, open debug and do an in al, 0A1h to see what value it has).
 
Edit: as an aside: I noticed that my 286 masked out the RTC interrupt by default. So I didn't just have to reprogram the RTC to generate interrupts for each tick, but I also had to change the interrupt mask to get it to work. Any idea if the real AT does the same thing? (I suppose the answer is hidden somewhere in the BIOS listing as well, but probably easier to just boot the system, open debug and do an in al, 0A1h to see what value it has).
On one of my IBM ATs, I booted to BASIC then executed "print inp(&ha1)". 253 was the result.
 
On one of my IBM ATs, I booted to BASIC then executed "print inp(&ha1)". 253 was the result.

Ah right, thanks.
So that means it indeed masks the RTC, and everything else, except for irq 9. I don't think anything is connected to that on a standard AT? But irq 2 is rerouted to irq 9, so perhaps they figured it shouldn't be used, and therefore shouldn't be masked?
It is somewhat interesting... because this means that by default an AT won't generate any irqs above 7. So it pretty much acts like a standard PC/XT, unless you deliberately enable the extra stuff. That may have been a conscious decision by IBM.
 