
I wonder if part of why EGA had write only registers...

I wouldn’t call it exactly “common,” but 400-line video standards using 26 kHz monitors were out there around the time EGA was introduced (Tandy 2000, AT&T 6300, etc.), and I can’t imagine the small frequency difference between these and EGA could have much effect on the price of the monitor.
Those may not have been so common, but 640x400 at 24.8 kHz appeared here in Japan in 1982 and was extremely common by 1984, thanks to the popularity of the PC-8801 and PC-9801 systems. (And some Fujitsu FM-77 models had a 400-line mode, too, though I don't know if it was the exact same frequency.) So if you were looking to OEM a monitor, that would seem to me to be the way to go.
 
Heck I suspect this is the real reason why PC's Limited only offered Hercules and EGA at one point.
 
I'm not sure about that. My lowly Mitsubishi 12" and my Sony 13" monitors were purchased for EGA application. Both had switches to select between analog and digital--they were not purchased for their analog capabilities as VGA didn't yet exist.
There were analog RGB monitors for the Apple IIGS, Amiga, and Atari ST, but I believe they were all 15 kHz.

We had a NEC multi-sync at the lab I worked in (1986) that had a great picture, but I think it maxed out at EGA resolutions. Not sure if it was analog capable. Ironically, one of our tasks was supporting a VAX 11/750 with a big professional monitor (probably 512x512) and color framebuffer for visualizing molecular structures for the biology department in Lilly Hall (yep, Purdue). Thing was dog slow. I can't imagine they had much use for it after the 386 and VGA became available.
 
Maybe the total dot clock for 640x400 at 60 Hz was too high for EGA's hardware?

Strange but true: the memory bandwidth needed for CGA 80 column text mode is actually *higher* than a single *VGA* resolution graphics bitplane.

(VGA’s graphics pixel clock is ~25 MHz vs. ~14 MHz. It’s not *twice* as fast, as you might expect given that the monitor frequency is exactly doubled, because the margins are much tighter, i.e., smaller horizontal blanking intervals. But on top of that, CGA text mode has to fetch *two* bytes for every 8 pixels, the character code and a separate attribute byte, while a graphics bitplane only needs one. So, yeah, CGA actually hits memory harder… and it’s why many CGA cards can avoid “snow” in graphics mode but not in text mode.)

With that in mind I don’t think upping EGA’s line rate from about 21.5 kHz up to 24 or 26 kHz (or all the way up to VGA’s 31.5 kHz) would have been a deal breaker? EGA doesn’t have the “attribute problem” because the video refresh hardware has 32-bit access to the bitplane memory.
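A quick way to sanity-check the bandwidth claim above is to compare how long each card has to complete a memory fetch during the active scan. This is a back-of-the-envelope sketch using the nominal dot clocks (~14.318 MHz for CGA, ~25.175 MHz for VGA), not actual card timing, which also has to account for CPU sharing and blanking:

```python
# Time available per video memory fetch during the active scan,
# using the nominal dot clocks.
CGA_DOT_HZ = 14.318e6   # CGA 80-column text dot clock
VGA_DOT_HZ = 25.175e6   # VGA 640-pixel graphics dot clock

# CGA 80-column text: 2 bytes (code + attribute) per 8-pixel character clock.
cga_text_fetch_ns = (8 / CGA_DOT_HZ) / 2 * 1e9

# One VGA graphics bitplane: 1 byte per 8 pixels.
vga_plane_fetch_ns = (8 / VGA_DOT_HZ) / 1 * 1e9

print(f"CGA text:  {cga_text_fetch_ns:.0f} ns per byte")   # ~279 ns
print(f"VGA plane: {vga_plane_fetch_ns:.0f} ns per byte")  # ~318 ns
```

With those numbers the CGA text fetch window comes out around 279 ns versus roughly 318 ns for the VGA plane, which is the sense in which CGA text "hits memory harder."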
 
Strange but true: the memory bandwidth needed for CGA 80 column text mode is actually *higher* than a single *VGA* resolution graphics bitplane.
This brings up an interesting point: the EGA uses RAM fonts in A/N mode, so it is actually doing three accesses per character: the character and attribute, which can be fetched in parallel, and the font data, which has to be read *after* the character code. So it’s hitting the memory pretty hard, too.
 
This brings up an interesting point: the EGA uses RAM fonts in A/N mode, so it is actually doing three accesses per character: the character and attribute, which can be fetched in parallel, and the font data, which has to be read *after* the character code. So it’s hitting the memory pretty hard, too.

Not a problem. The character, attribute, and font glyph are read from separate planes.
 
We had a NEC multi-sync at the lab I worked in (1986) that was a great picture but I think it maxed out at EGA resolutions. Not sure if it was analog capable.
It probably was analog capable, with a switch.

(attached image: IMG_0441.jpeg)
 
Yes, MDA is also 350 lines, but it’s 50 Hz instead of 60, so it’s not EGA frequency. (EGA also can *use* an MDA monitor, but only in a weird mode specific to that combination and not compatible with color software, unlike mono VGA.)

Oddly enough IBM also had a color 350-line display for the 3270/PC, but it doesn’t run at EGA frequency either. (In mono mode it uses the same 18 kHz as MDA, but the color mode is 24 kHz, vs. EGA’s 21.)
Hercules itself was designed as an extension of MDA to use the same monitors.
 
Strange but true: the memory bandwidth needed for CGA 80 column text mode is actually *higher* than a single *VGA* resolution graphics bitplane.

This made me raise an eyebrow at first so I tried to do the math.

80x25x2 is 32 Kbit per frame, but the CGA ends up re-reading the characters and attributes for each scanline of a character row as it scans, so the visible data works out to 256 Kbit per frame. However, the CRTC keeps generating addresses even through hblank, so it’s really something like 114x25x2x8 bytes per frame, or ~365 Kbit at 60 Hz. The VGA CRTC, I assume, carried over the same improvement the EGA CRTC had and does not read memory during hblank, instead multiplexing the row counter during that time. That would make a single VGA 640x480 bitplane 307 Kbit per frame, so yeah, the math checks out.
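The per-frame totals above can be reproduced directly, under the same assumptions as the post (8 scanlines per character row, ~114 character clocks per scanline including hblank):

```python
# CGA 80x25 text: char + attribute re-read on each of the 8 scanlines
# of a character row, for the visible area only.
cga_visible_bits = 80 * 25 * 2 * 8 * 8   # cols * rows * bytes * scanlines * bits/byte

# The CRTC keeps generating addresses through hblank (~114 clocks per line).
cga_total_bits = 114 * 25 * 2 * 8 * 8

# A single VGA 640x480 bitplane: one bit per pixel, no hblank fetches.
vga_plane_bits = 640 * 480

# cga_visible_bits == 256,000; cga_total_bits == 364,800; vga_plane_bits == 307,200
# i.e. ~256, ~365, and ~307 Kbit per frame, matching the figures above.
```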
 
That would make a single VGA 640x480 bitplane 307 Kbit per frame, so yeah, the math checks out.

Yeah, it’s definitely kind of a brain melter. Also remember, as far as bandwidth impacts hardware limitations, it’s the time you have to fetch each byte that’s the hard limit. (I.e., yes, a 640x480 bitplane is ~50 Kbit more data per 60 Hz frame, but because the individual cell period is longer you could get away with slower RAM anyway.)

Smaller h/vblank areas do mean, of course, that if your memory is slow enough that you have to make the CPU wait during the active area to avoid snow, your redraw performance might go in the toilet. But didn’t the EGA we got in real life already have this problem? It has really tight porches, even tighter than VGA’s I think, and I’m pretty sure it handed out plenty of wait states if you tried to access it during the active area.
 
The original NEC MultiSync was analog compatible. The DE-9 pin out matched the IBM PGC and was marketed as a display for such. The later MultiSync II came out around the same time as the first batch of PS/2s and added a 15-pin VGA adapter to the box to support the new standard.
 
It let them reuse the 8x14 font. Think of the font development dollars saved.
...and they still ended up redoing the font anyway - for some reason, they decided that the 5154 was only going to support 640 dots horizontally (not 720, like the 5151's 80 columns at 9 dots per char)... so they had to adapt the charset to play nice with 8- as well as 9-dot widths. Classic IBM. :-D
 
Not a problem. The character, attribute, and font glyph are read from separate planes.
I was initially thinking they would have read the character code and then the font data, but it would make sense for the EGA to pipeline the reading of the character font with the next code/attribute so it could all be done in parallel.
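That pipelining can be sketched as a two-stage process. This is purely illustrative: `cells` and `font` are hypothetical stand-ins for the code/attribute stream and the plane holding the glyph data, not actual EGA logic.

```python
def pipelined_text_fetch(cells, font):
    """Two-stage fetch: while the glyph row for character N is read from
    font RAM, the code/attribute pair for character N+1 is fetched, so the
    output simply runs one character clock behind the input."""
    output = []
    pending = None  # (code, attr) fetched on the previous character clock
    for cell in list(cells) + [None]:   # one extra clock to drain the pipe
        if pending is not None:
            code, attr = pending
            output.append((code, attr, font[code]))  # glyph available now
        pending = cell
    return output
```

Every character clock still does the same total work; the trick is that no clock has to wait for a dependent fetch within itself.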
 
I was initially thinking they would have read the character code and then the font data, but it would make sense for the EGA to pipeline the reading of the character font with the next code/attribute so it could all be done in parallel.

One does have to happen after the other, but in general, everything you see on the screen is delayed by at least a character clock anyway. This is how pel-panning is possible - if the attribute controller was rasterizing what was being read out from video memory at the exact instant, it wouldn't be able to shift pixels to the left and reveal pixels from the 'next' clock, because the next clock wouldn't have been read yet.
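That one-character-clock delay is what makes panning possible: the serializer always holds the next byte before its pixels are due, so it can start output partway into the current one. A toy sketch (MSB-first bit order; an illustration of the idea, not real shifter hardware):

```python
def serialize_with_pan(row_bytes, pan):
    """Flatten fetched bytes into a pixel stream, MSB first, then start
    display `pan` (0-7) pixels in. This only works because fetching runs
    at least one character clock ahead of the pixels being shown."""
    stream = [(b >> (7 - i)) & 1 for b in row_bytes for i in range(8)]
    return stream[pan:]
```

With `pan=1`, the first displayed 8-pixel group borrows its last pixel from the *second* fetched byte, which is exactly the lookahead described above.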

The sequencer, which is basically the EGA's timing orchestrator, instructs the attribute controller when to load a new character glyph from memory, and this is coordinated to happen once the miscellaneous logic has calculated the correct memory address for the font glyph. The attribute controller has its own 16-bit bus to planes 2 and 3, from which it reads the attribute byte and 8-pixel glyph span respectively, bypassing the graphics controllers entirely.

As to how the attribute data gets into plane 2 considering characters and attributes appear sequential in video memory, this is done through the EGA's odd/even addressing mode capability. The EGA makes it appear that planes 1 and 2 are interleaved, so all the software written for text mode doesn't break. But it also solves the bandwidth problem nicely, since we aren't hammering the same plane/chips 3 times per character clock.
 
One does have to happen after the other, but in general, everything you see on the screen is delayed by at least a character clock anyway. This is how pel-panning is possible - if the attribute controller was rasterizing what was being read out from video memory at the exact instant, it wouldn't be able to shift pixels to the left and reveal pixels from the 'next' clock, because the next clock wouldn't have been read yet.

The serialization process has to happen once all data has been fetched, that's clear, and comes out of the attribute controller. Check.
The sequencer, which is basically the EGA's timing orchestrator, instructs the attribute controller when to load a new character glyph from memory, and this is coordinated to happen once the miscellaneous logic has calculated the correct memory address for the font glyph. The attribute controller has its own 16-bit bus to planes 2 and 3, from which it reads the attribute byte and 8-pixel glyph span respectively, bypassing the graphics controllers entirely.

Which is odd, because the attribute byte is in plane 1, the character code in plane 0. So there is logic to read the code/attribute in parallel because the character code and attribute occupy the same address in their respective bit planes. The EGA manual only has bit plane 2 connected directly to the attribute controller (8 bits), which would make sense for fetching the glyph image once the character code and attribute byte have been read in from planes 0 and 1. The manual alludes to the CRTC generating the addresses to fetch the character/attribute but that must somehow be routed to the attribute controller. Does the graphics controller #1 act as a bypass even though it isn't directly responsible for processing the data?

As to how the attribute data gets into plane 2 considering characters and attributes appear sequential in video memory, this is done through the EGA's odd/even addressing mode capability. The EGA makes it appear that planes 1 and 2 are interleaved, so all the software written for text mode doesn't break. But it also solves the bandwidth problem nicely, since we aren't hammering the same plane/chips 3 times per character clock.
The EGA interleaves planes 0/1 to the CPU, not 1/2. And there is still the issue of two effective memory fetches required to scan out a glyph unless the attribute controller has an independent address bus to plane 2 - which isn't clear from the simplistic diagram in the manual. The EGA uses 100 ns DRAM vs 120 ns for the CGA, so ~20% more bandwidth but a 12.5% faster dot clock. If two memory fetches are required per glyph, that doesn't leave a lot of extra overhead compared to the CGA and may explain all the wait states required to avoid snow.
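The odd/even interleave being debated amounts to a simple address split. A minimal sketch of the concept, using the planes-0/1 numbering (this models the addressing idea only, not actual EGA gate logic):

```python
def odd_even_map(cpu_offset):
    """Map a CPU byte offset in text mode to (plane, plane_offset).

    In odd/even mode the low address bit selects the plane: even offsets
    (character codes) land in plane 0, odd offsets (attributes) in plane 1,
    both at the same offset within their plane -- so one RAM cycle can
    deliver the code and attribute in parallel.
    """
    return (cpu_offset & 1, cpu_offset >> 1)
```

So the code/attribute pair for screen cell n (CPU offsets 2n and 2n+1) sits at the same offset n in planes 0 and 1, and a glyph fetch from a separate plane never contends with either.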
 