
How does the Kaypro video work?

whartung

I looked about on the web a bit, and it seems that the Kaypro video is an isolated, standalone system from the main board.

Does it interface simply through standard Z80 I/O ports? It has its own RAM, its own clock, etc. It seems more like an embedded serial terminal, minus the serial interface, with something more native in its place. What display features did it support? It seems like it was only blink.
 
There are diagnostics for the video that were written many years ago and come with source. They are on the web, along with memory-testing software. You might want to look at that source to get more information.

If I remember correctly, the video is mapped at 0x4000 and is 80 columns x 25 lines. There are more video codes than just blinking. See the attached photo.

Larry
 

Attachments

  • KayproVideo.png
I dug around until I found the alignment software, and the screen starting address is 0x3000.

Code:
        LD    HL,3000H    ;HL -> START OF VIDEO PAGE (FIRST BYTE TO FILL)
        LD    DE,3001H    ;DE -> NEXT ADDRESS
        LD    BC,0BFFH    ;BYTE COUNT FOR THE REST OF THE VIDEO PAGE
        LD    (HL),20H    ;STORE A SPACE (20H) IN THE FIRST LOCATION
        LDIR              ;COPY EACH BYTE TO THE NEXT ADDRESS, FILLING THE PAGE WITH SPACES

This routine clears the screen.
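As a small companion sketch (with the assumptions that the video bank is already mapped in, the page starts at 3000H as in the alignment code above, and consecutive addresses are consecutive columns on the top line), putting a short message in the top-left corner would look something like this:

Code:
        LD    HL,MSG      ;SOURCE: TEXT TO DISPLAY
        LD    DE,3000H    ;DESTINATION: TOP-LEFT CORNER OF THE VIDEO PAGE
        LD    BC,MSGLEN   ;NUMBER OF CHARACTERS TO COPY
        LDIR              ;COPY THE TEXT INTO VIDEO RAM
        RET

MSG:    DEFM  'KAYPRO VIDEO TEST'
MSGLEN  EQU   $-MSG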

Larry
 
To answer your question, there are two styles of video (from the computer's perspective) in Kaypros. The early models had video RAM mapped into the CPU address space after (and along with) the ROM, and used discrete logic to scan that RAM and produce the video signals. The later models used the Synertek 6545 CRT controller chip, sometimes with a custom LSI chip to replace some external logic, with the video RAM isolated/hidden on the CRT controller bus (accessible from the CPU only by using CRT controller commands/actions), to produce the video signals. In both cases, all of this is done on the "mainboard", alongside the CPU and peripheral chips. The video signals then go off-board to a separate video board that uses them to produce what is necessary to drive the cathode ray tube (high voltage, etc.).

So, no, the Kaypro video is not like a separate terminal, and it cannot function without the right software running on the CPU. There was a program provided that could make the Kaypro "act" like a terminal, but then you lose the computer portion of the Kaypro, as it is running the "terminal" software.
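To make the "accessible from the CPU only by using CRT controller commands/actions" part concrete, here is a minimal sketch of the SY6545's transparent-update mechanism (R18/R19 hold the update address, R31 is the dummy location that triggers the access). The port equates are placeholders rather than verified Kaypro assignments, and real code would also enable transparent addressing in R8 and honor the controller's update-ready status, so treat this strictly as an illustration:

Code:
; Illustrative only -- port numbers are placeholders, not verified Kaypro ports.
CRTCSEL EQU   1CH         ;CRT controller register-select port (placeholder)
CRTCDAT EQU   1DH         ;CRT controller register-data port (placeholder)

; Entry: DE = display-memory address, A = character to store
VIDWR:  PUSH  AF
        LD    A,18        ;R18 = update address, high byte
        OUT   (CRTCSEL),A
        LD    A,D
        OUT   (CRTCDAT),A
        LD    A,19        ;R19 = update address, low byte
        OUT   (CRTCSEL),A
        LD    A,E
        OUT   (CRTCDAT),A
        LD    A,31        ;R31 = dummy location; accessing it performs the VRAM cycle
        OUT   (CRTCSEL),A
        POP   AF
        OUT   (CRTCDAT),A ;the 6545 writes this byte to display memory for us
        RET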

This is in contrast, for example, to the Heathkit H89 where there is a separate, completely independent, "Terminal Logic Board" that is capable of functioning as a stand-alone terminal. The H89 has two Z80 CPUs, one drives the TLB while the other is the actual computer that the user programs run on.
 
The early Kaypro (video) supported only the reverse-video attribute. The later models supported reverse, high/low intensity, and blink, plus there was a coarse bitmap graphics capability.
 
On the non-CRT-controller types you also have the option to select another font with algebraic symbols, if I recall correctly.
The video RAM is not directly addressable unless you switch banks first. It is, however, always advisable, if you write software for CP/M for example, to use the proper 'hooks' for this and not write directly to screen memory.
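For example, here is a minimal sketch of going through the operating system instead of the hardware (it just uses the standard CP/M BDOS console-output call, function 2, so it works the same regardless of which video design is underneath):

Code:
BDOS    EQU   0005H       ;CP/M BDOS entry point
CONOUT  EQU   2           ;BDOS function 2: console output

        LD    E,'A'       ;character to display
        LD    C,CONOUT    ;select the console-output function
        CALL  BDOS        ;let CP/M and the BIOS put it on the screen
        RET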
 
If you need some 4-1-1 on the electronics of the video board, I waded into that swamp a while ago and have copious notes. Take a look here and send a PM and I will send you all that I have.


 
I’ve been staring at the Kaypro II video schematics tonight (the first version, without a controller) and it’s a clever design.

The video circuitry runs on its own clock, and no effort was made to synchronize memory access between the CPU and the video circuitry. It simply gives the CPU priority access to the VRAM. The video circuitry needs to read from VRAM every 7 pixels to fetch a row for a glyph out of the character generator ROM. If the CPU is busy accessing the VRAM when the video circuitry is trying to fetch those pixels, the video circuitry will read out black pixels.

What’s clever is that this isn’t a big deal. Even if a scanline for a single character is missed, it’ll be drawn on the next frame, and was likely drawn in the previous frame, so the phosphor persistence on a display refreshing at 60 Hz will make the glitch barely noticeable. The video circuitry fetches from VRAM every 500 ns, much faster than the CPU’s theoretical maximum of a fetch or write every 3 µs. And much of the time the video circuitry is in vertical blanking, horizontal blanking, or drawing the blank lines between rows of text, so the chance of the CPU stepping on the video circuitry enough to cause noticeable glitching of the display is quite low.
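A rough back-of-envelope from those numbers (assuming each CPU access can only collide with the one video fetch slot it overlaps): 3 µs divided by 500 ns is 6, so even with the CPU hammering VRAM as fast as it possibly can, at most about one in six character-cell fetches on a scanline can be disturbed, and each disturbed fetch blanks just one glyph row for a single 1/60 s frame.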

Anyway, I thought it was a neat solution to the problem. It wasn’t under-engineered (looking at you, ZX80), nor over-engineered with expensive dual-port memory. It was just an elegant solution where good enough was actually good enough.
 

I've been building my own video circuitry lately out of two GALs for the raster/counter generator, one GAL as the mixer for producing grey shades, and a couple of GALs (that are just latches) that interface to the bus. That's just a few chips, but it produces 256x192 or 512x192 hi-res graphics... Video circuits are not that complex, and I can access the video RAM at up to 7 MHz using 1985 tech.

But the ZX80 was all about cost at a time when computers were very expensive and RAM was super-expensive. By generating the video in software, they could shrink the video memory as the program grew - the entire memory in a ZX80 was just 1K, and that included program memory (BASIC) and video memory. Compare that with the memory needed for a 40x25 text display, which alone is about 1K, without any program memory. The ZX80 was amazing because you could learn to program on it, and it worked, at a time when computers were out of the range of most budgets.

Given that they generated the video with forced NOPs on the ZX80 (and NMIs later on the ZX81), it was pretty amazing indeed. It really was a graphics coprocessor - and it had so few chips that the cost was practical.

That led to the ZX81 and the ULA, and then the ZX Spectrum - which slowed the CPU clock to avoid video contention.

It's easy to forget that in 1980 there were very few choices. By 1985 the world had opened up and was rapidly expanding, with memory being very cheap by then... It was halving in cost every year or so, a reverse application of Moore's law.
 
...
What’s clever is that this isn’t a big deal. Even if a scanline for a single character is missed, it’ll be drawn on the next frame, and was likely drawn in the previous frame, so the phosphor persistence on a display refreshing at 60 Hz will make the glitch barely noticeable. The video circuitry fetches from VRAM every 500 ns, much faster than the CPU’s theoretical maximum of a fetch or write every 3 µs. And much of the time the video circuitry is in vertical blanking, horizontal blanking, or drawing the blank lines between rows of text, so the chance of the CPU stepping on the video circuitry enough to cause noticeable glitching of the display is quite low.
...
While all of this is statistically true, in practice this kind of video refresh does cause noticeable "noise" during periods when the CPU is doing a lot of updates, like when scrolling or clearing the screen. I don't have a real */83 Kaypro to see whether the (unusually) slow phosphor mitigates the issue at all, but I have seen many similar display circuits that produce a fair amount of noise in practice.

Kaypro did spend a fair amount of effort eliminating this issue on their */84 models - it was not a "freebie" from using the CRT controller chip (it required both hardware and software). In fact, it's the reason that "plain" 6845 chips won't work in Kaypros (you must have a 6545 or 6845E).
 
This is where the 6502/680x had an advantage over the Z80/8080 and such: the CPU clock could be synchronized to the video; the VIC-20 was an excellent example. The C64 mostly does the same, but since it needs to pull in extra data for the sprites, the CPU gets hogged sometimes (badlines).

BTW, as far as I know the ZX81 did not slow down the clock of the CPU; it just executed the user program during the blanking periods in SLOW mode, and in FAST mode it simply cut off the video routines, just as the ZX80 did. The ZX Spectrum did some CPU clock tricks to avoid contention. The video memory had to be located in the first 16 kB; the extra 32 kB of the 48 kB model had no video bus contention.
 

The ZX81 didn't address bus contention. It WAS the video processor, delivering the character bytes via the Z80's address lines while feeding the Z80 NOPs. So in FAST mode it was fast because it was producing no video, and in SLOW mode it handled the scanline NOPs via an NMI every scanline.

The ZX Spectrum could have used wait states to address video contention, but since they had the ULA that also provided the CPU clock, and they were limited in pin availability, and they knew exactly when the video bytes and attributes were being accessed, it was easier (and needed fewer chips) to freeze the Z80 clock and force the Z80 to wait.
 