
RAMdisk under CP/M 2.2

Yep, that seems like what you described - I guess what I'm explaining is that you could replace U10 with a buffer and run it from A8 to A15 instead of D0-D7. This would give you a 256 byte I/O page that you don't have to reset each time you want to read a new byte - if your address lines are strong enough, you could even run them directly - eg, Z80 A8 to RAM A0, Z80 A9 to RAM A1, etc., up to Z80 A15 to RAM A7...

All you do is load the lower 8 bits of the RAM disk address into the B register, load C with your port, and use IN (C) and OUT (C) - then just change B and repeat.

And if you set B to FF, and HL to your destination location + FF, and use INDR followed by IND, you can transfer 256 bytes from the ramdisk to memory with just two opcodes, and no further OUTs required.
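For what it's worth, that two-opcode transfer might look something like this - a sketch only, untested, where RAMPORT and BUF are hypothetical names. It relies on the block I/O instructions putting B on A8-A15 during each IN cycle, then decrementing B and HL:

```asm
; Sketch only - untested. RAMPORT is a hypothetical port number.
; Assumes the RAM disk decodes A8-A15 (i.e. register B) as its
; low address byte, per the wiring described above.
RAMPORT EQU  40H           ; hypothetical I/O port for the RAM disk

READ256:
    LD   HL,BUF+0FFH       ; point at the TOP of the 256-byte buffer
    LD   BC,0FF00H+RAMPORT ; B = FFh (RAM address), C = port
    INDR                   ; 255 transfers: B = FFh..01h on A8-A15,
                           ;   byte -> (HL), HL--, stops when B = 0
    IND                    ; one more with B = 00h: 256 bytes total
    RET

BUF:    DS   256           ; destination buffer
```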

At least that's the theory... I haven't been able to test it on a real PCB yet, but it's well enough documented that it should work.

David.
 
A lot of the older boards used a 7-bit counter and a sector latch. That made the mapping of sectors suitably trivial. Some of the European cards were also quite big, because in parts of Europe it was very hard to get disk drives and media at sane prices, so you would load up your ramdisk from tape, start CP/M, and at the end of the session save it back to tape.
 
I don’t imagine it’ll matter here, but I guess it is worth calling out that any circuit that relies on the high address lines of the Z80 won’t be backwards compatible with an 8080/8085-based machine. In, say, an S-100 card that would be expected to work with either, then yeah, this auto-increment feature would be done via a hardware counter.

If you wanted to go that route I think it would be pretty easy to program a GAL like a 22v10 to act as an auto-increment counter/latch. (Obviously you could just use hardware presettable counters, but the ones I can think of off the top of my head are only four bit so you need more chips.) Then it is only one OUT command to set the starting address for a sector read and a loop of normal INs to read it. (Like the high address z80 method, basically, but it works on any CPU.)
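The one-OUT-then-loop-of-INs pattern would look something like this (a sketch, with hypothetical ADDRPORT/DATAPORT names; shown in Z80 mnemonics, though an 8080 would just use DCR B / JNZ in place of DJNZ):

```asm
; Sketch only - ADDRPORT/DATAPORT are hypothetical, and assume a
; GAL (or presettable counters) that latches the address on a write
; to ADDRPORT and auto-increments after each read of DATAPORT.
ADDRPORT EQU 50H           ; write: load starting address into counter
DATAPORT EQU 51H           ; read: data byte, counter bumps afterwards

RDSECT:
    XOR  A
    OUT  (ADDRPORT),A      ; point the counter at the sector start
    LD   HL,BUF            ; destination in memory
    LD   B,128             ; one 128-byte CP/M sector
RDLOOP:
    IN   A,(DATAPORT)      ; read a byte, hardware bumps the counter
    LD   (HL),A
    INC  HL
    DJNZ RDLOOP
    RET

BUF:    DS   128
```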
 
@Eudimorphodon, that makes a lot of sense. It's a shame there's no hardware architectures database and everything keeps getting reinvented all the time... In my case, it's dedicated to a Z80 system, and my FDOS/CCP is all written in extended Z80 and I use the index registers, so it's a Z80-specific CP/M architecture I was working on. It's backwardly compatible with 8080/8085 software but not forwardly compatible - it would never support changing from the Z80 to an 808x.

The other advantage of using the B register to drive the lower address bits of a RAMDISK is that a third mode of operation immediately presents itself: rather than a RAMDISK, memory can be accessed via I/O on a byte-by-byte basis. It's just a shame that CP/M likes 7 and 5 bit counters instead of making everything a nice clean 8 bits, but that is a throwback to 128 byte sector floppy disks, I imagine.
 

I guess there’s sort of a devil’s advocate part of me that wonders why, in the end, if you’re designing your own hardware, you’d really want to bother with doing I/O-driven memory for a RAMDISK instead of just doing an MMU and emulating all the track/sector mapping in software? I guess I don’t know how CP/M’s disk drivers work, but ignorantly it seems to me that if you’re handing off control to a driver whose job it is to tickle a disk controller into filling a buffer with the contents of a disk sector, it wouldn’t be much of a stretch to have that driver implement CHS to “LBA” memory address translation to determine which page to map in (after disabling interrupts) to do a memory copy from.
 
Hi @Eudimorphodon , I guess it's because the Z80 has 64K of memory, and I couldn't find a paging and data segment system I liked in any other Z80 architecture. They are all deficient, and there's no non-proprietary way for programs to all use extra memory unless they were written for that platform. And when things are written for a platform, and you want multiple concurrent processes ( whether multitasking or not ), then how do you manage memory so that when a process terminates, the memory which it used, which is probably scattered all over the memory map, is released and can easily be picked up by other processes and used, when programs generally want contiguous memory?

By mapping DISK and MEMORY over the exact same map, and setting aside memory in 4K blocks and using the 0-block as a directory structure, I can use the built-in CP/M disk handling routines to do all my memory management. So any CP/M will be able to run on my system, and programs written for my version of CP/M will work with any CP/M, of any version, and it will still support the MMU because it's built in at the BIOS level. The BDOS automatically populates the memory map because it thinks it is disk. Programs can use memory as disk, executable or I/O space, and if the memory is used as disk, then as files are created and deleted they automatically and dynamically use whatever memory space is available - like an expandable RAMDISK where the OS doesn't care whether it's files or programs that use up the memory space. Even the best RAMDISKs available on Windows 11 can't do that yet.

In my case it works, because if it's a file, it's a memory block and if it's a memory block it's a file, and the CP/M O/S doesn't care and figures it all to be disk by default, and so tracks the allocations in system memory at the same time. So it's an elegant solution that solves that problem.

It comes with some nice side effects too, for example, being able to, as was mentioned earlier, write out the entire memory snapshot to tape, or store it on a floppy disk at the end of the day. It would make for a very fast CP/M.

And the code I mentioned earlier is probably the tightest ramdisk code... Out Track, Out Sector, Setup z80 registers, INDR, done... All at block transfer speeds. As an architecture it can match the 8086 for versatility and far exceeds DOS in terms of memory management... It's probably even faster than an external hardware DMA Ram Disk when you consider the overhead of setting that up.

But mostly, when the whole idea came out in the 80s, it was said to be impossible. Not impractical or before its time. Impossible. I'm pretty determined to prove that wrong.

Anyway, I learned a lot about RAMDISKs and run several in my emulator that I'm using to build my OS, and now I'm writing the new hardware elements into the OS. It's a lot quicker to program up that way. And I figured what I found out might be useful to Myke given how close what he is doing is to what I am doing.
 
Interestingly enough, as I was searching for more CP/M, Z-80, STD & S-100 Bus stuff, I came across the old S-100 Journal magazines on archive.org. The Spring 1986 issue has an article titled "Installing a RAM-Disk in a CP/M 2.2 System", and it has the assembly code listings as well. So, I'm going to read that and see what I can use from it.
 
I found a very good replacement for the Dallas NVRAM chips that avoids battery worries, at least for the DS1225: the FM16W08 FRAM. I have been using these and the FM18W08 in my Tek 2465B scopes for a decade now with no issues.

Whether FRAM could be an option for your project, I'm not sure, but as the years have gone by, I favor the FRAM over the Dallas ICs. There is some info on these great memory ICs in this article:

 
How big? If you map your RAMdisk RAM into a 128-byte segment of memory, you can use a 16 bit latch on the upper bits to get up to 8MB of disk (65536 x 128 bytes), sector addressable.
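A sketch of how that latch might be driven (LATCHLO, LATCHHI and WINDOW are hypothetical names for this example):

```asm
; Sketch only - LATCHLO/LATCHHI and WINDOW are hypothetical.
; A 16-bit latch selects one of 65536 128-byte sectors, so the
; addressable space is 65536 * 128 bytes = 8MB.
LATCHLO EQU  70H
LATCHHI EQU  71H
WINDOW  EQU  0FF80H        ; 128-byte window the sector appears in

SETSEC:                    ; entry: HL = 16-bit sector number
    LD   A,L
    OUT  (LATCHLO),A       ; low byte of sector number
    LD   A,H
    OUT  (LATCHHI),A       ; high byte - sector is now visible
    RET                    ;   at WINDOW..WINDOW+7Fh
```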
 

That does look like a very interesting alternative. Sure wish it came as a PDIP though.
 
Given the speed that most CP/M systems run at, serial (I²C or SPI) SRAM might make sense--and reduce the pin count tremendously without having a big effect on performance.
 
Well, maybe some day. Right now I'm trying to stay 'old school' and use up my bins of parts...... That said, I have been eyeing an old Intel SBC card cage, get a couple of Multibus proto boards, and build something completely from scratch. The June & July 1978 issues of Kilobaud magazine had articles on a minimalist Z-80/S-100 system that I could adjust to work with the Multibus pinout. Those boards are bigger and I can stick more on. I'm always thinking of some new scheme.....
 
At your current rate, I suspect you're a prime candidate for address-bus fan-out by December ;)

I really want to hear how your project is going and what you discovered in those old articles :)

David
 

It's certainly an ambitious plan you've got there, and believe me, I'm keenly interested in these sorts of ambitious plans because I have my own retro-computer ideas perpetually in the works. :) I do have a couple questions, though... just trying to wrap my head around the whole thing.

If I understand from your hardware diagram, you've essentially designed this around a 20 bit address space, with 8 bits' worth of 4K pages, right? The special sauce here is that you've *also* added the multiplexers and "track/sector" latches to make the whole space available to I/O instructions, and the idea is essentially to use the CP/M disk directory system's organization to do double-duty as your memory block manager? I guess I'm still digesting it, but here are some kind-of-random questions/observations about it:

  • Is all that extra hardware to multiplex the RAM paging with 128 byte "sector addressing" really going to save much time, computationally speaking? I know CP/M's "native" sector size is 128 bytes, but CP/M 2.x already supported blocking/unblocking to map these to non-native sector sizes; off the top of my head it feels like many if not most CP/M systems, at least ones with 5.25" drives, already did this?
  • I guess with this in mind it feels to me like if you're going to treat memory as a "disk" using actual CP/M disk calls, you could simply, in the DISK read/write code, design it to switch in the memory page that contains the virtual sector and use the Z80's memory block copy commands to copy the desired 128 sector bytes from the RAMdisk to the CP/M disk I/O buffer? Your hardware has 32 sectors per "track", making each 4K RAM page a track, so all you need to do, instead of doing I/O writes to your "track" and "sector" latches respectively, is do a single I/O write to your page register to drop in the "track", shift your "sector" number over 7 bits, and add that to the base address of the Z80 address location you paged the RAMdisk page into to set up for the memory copy. It looks like the LDD/LDDR memory copy commands have the same T-state counts as the IN/OUT versions of the commands, so that part should be equally fast, right? Seems like at most you're just saving whatever the difference in T-states is between a shift plus an add over an OUT? (I guess you'll also need to restore the page you switched out after you're done, but that's also a cheap operation.)
  • Covering both points above, since it is RAM we're writing to and not physical sectors, it actually seems like you *don't* really even need to worry about blocking/deblocking after all? Unlike a real disk with real hardware sectors that need to be written "whole" on an update, there's no problem with even a single "CP/M" sector write against a memory page, since obviously the need to rewrite the neighboring sectors goes away. It's just a 128 byte block move into the correct address; the only difference is how you present that address. (IE, is it the Z80's normal address lines or this combination of latched and MUX'ed data per your hardware.)
Again, these are ignorant observations, maybe there's something about reusing existing disk read/write code that makes all this parallel addressing hardware a big win, but from some of the template disk codes I've scanned I'm not sure why that would be? It looks like the convention for setting the location of the 128 byte disk in/out buffer is to use "Function 26: SET DMA ADDRESS", it seems like the only real constraint would be that you need to make sure that the physical page frame you use when you switch in the correct RAMDISK page is a different one than where this resides.
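To make the second bullet concrete, the page-in-and-copy sector read might be sketched like this (untested; MMUPORT and the FRAME location are hypothetical, and the page restore and interrupt handling are elided):

```asm
; Sketch only - MMUPORT and FRAME are hypothetical.
; Entry: A = track (4K page number), C = sector (0-31),
;        DE = CP/M DMA buffer address.
MMUPORT EQU  60H
FRAME   EQU  1000H         ; frame the RAMdisk page gets mapped into

RDSEC:
    OUT  (MMUPORT),A       ; map the "track" page in at FRAME
    LD   A,C               ; sector number 0..31
    RRCA                   ; A = (sector>>1) | (bit 0 -> bit 7)
    LD   L,A
    AND  0FH               ; sector>>1 (0..15) = high-byte offset
    ADD  A,FRAME/256       ; add the frame's high byte (10h)
    LD   H,A
    LD   A,L
    AND  80H               ; (sector AND 1) * 80h = low byte
    LD   L,A               ; HL = FRAME + sector*128
    LD   BC,128
    LDIR                   ; copy the sector into the DMA buffer
    RET                    ; (restore the previous page here)
```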

It doesn't seem to me that there's anything about the rest of your plan that really relies on this I/O method of getting to the shared memory? Ultimately it's all one block of storage and how you access the various structures doesn't matter? The same ideas of being able to just dump all of memory to tape or disk at the end of the day or whatever all still apply. (FWIW, there were RAMdisk-equipped CP/M machines back in the day that supported that type of usage pattern, weren't there? I'm thinking of the Epson Geneva/PX-8 in particular; my vague recollection is the microcassette recorder in that machine was used indirectly to dump images of files stored in the RAMdisk, it wasn't a drive "on its own"?)

Mechanics of that all aside, I do think it's an interesting idea. I assume you're going to set up CP/M to use 4K as the block size for files so you can match the block lists stored in directory extents directly to values you can poke into the MMU? (Thus every running process can simply be allocated a file the size of its desired memory footprint, and the pages assigned to it are directly drawn from the block values stored in its extent entries.) Of course that would waste a ton of space for actual files. :( (Minimum file size would be 4K.) Is the intent to use this memory structure just, or at least primarily, for process handling and have actual applications live on different storage?

I guess one other thing:

how do you manage memory so that when a process terminates, the memory which it used, which is probably scattered all over the memory map, is released and can be easily picked up by other processes and used, when programs generally want contiguous memory?

This is certainly a fundamental problem with segment-based memory management schemes, but not so much for paged ones? On modern CPUs it is advantageous to keep your page tables reasonably "clean" to optimize page table lookups, but you don't have this problem here. (There are only 16 translation slots for the CPU's entire address space, so even a dinky little SRAM can hold many per-process sets.)

With your setup I assume that when you initially load a "normal" CP/M program (IE, a non-relocatable one ORG'ed at 0x0100 that doesn't know anything about this memory == disk idea) you're going to modify the OS so when you load the .COM file it automatically makes it a "process file" the size of the TPA, assigns it sufficient free blocks from the block bitmap, and programs the resulting block list into the process register before loading the file into those blocks? (The conventional way, IE, using conventional sector reads and writes to memory.) Are you going to add a facility so for future use you can take the resulting "process file" and convert it into a "process pickle", so instead of having to do a conventional load you could just "run" a saved process file and have the OS directly map the associated blocks into memory without wasting time on the file I/O? (IE, loading a program really does become "just look at the directory entry, get the list of blocks, and map them into a process latch slot". Otherwise I guess I'm not sure what the practical difference is from "normal" CP/M on a RAMdisk; both are doing copies instead of direct maps on loads?) Are you planning to write programs that do leverage page swapping in a "native" way, and in that case is the way you allocate the pages beyond the TPA to make the process file bigger? (Which will involve assigning additional extents; I'm not sure about this, but if there's multiple files on the disk might this require "compacting" the directory so the additional extents on the process file are contiguous in the directory? Like I said, I don't know a ton about how CP/M does this. Will the OS allow doing this at will as the process runs, or will you have to specify a desired process size when you launch it?)

Anyway, yeah, it's a really interesting idea. I wonder how using the CP/M directory idea to track page allocations compares to how, say, an EMS manager allocates memory to multiple processes, and how that handles fragmentation, if at all.
 
... Last question, I promise. ... I think the reason EMS crossed my mind is I'm wondering, *if* you're going to support "native" process paging, how are you planning to represent this at the application level? Will a program written for your system have to do page swapping itself (by manipulating its own process register slot by plugging in pages from the list assigned to it) or is this going to be an OS level service? And will the page list be abstracted with a translation system? IE, if the application requests 128K of "extended" memory over a base TPA allocation, will it see it has 32 extended pages numbered from 0-31 and the OS will translate that to actual blocks based on the CP/M directory allocation's relative positions, or will the application just see the list of physical pages assigned to it? (Which presumably could be in random physical order.)
 
Having the memory available through the I/O ports allows for a direct copy into/from the DMA buffer, which can be almost anywhere in the address space. Using the memory address space to do the same requires a paging system flexible enough to map your memory page anywhere and a bit more code to find a non-conflicting region. (Some trickery with read/write enables might also do the trick.)

If your hardware does not provide this functionality, you have to use a bounce buffer in the BIOS (which is the approach I am using), which requires every buffer to be copied twice.
 

The proposed design has 4K pages and 16 possible page slots, so it doesn't seem like it'd be that hard to handle? Worst case I guess you'd have to have the disk driver capable of choosing between three different options. (I was going to say two, but realized that there may be no requirement that the buffer be aligned on a 7-bit boundary so I suppose if you were SUPER unlucky it could run between two page frame boundaries?*) I assume the disk driver is going to reside in the OS area at the top of RAM, and you're not going to want to swap out the zero page (which is where the default location of the buffer is anyway), but other than that just picking any 12K spot in the middle of the address range and using any 4K section of it that *doesn't* contain the DMA buffer will do the needful?

(From what I can tell many CP/M disk systems had to disable interrupts on disk access, so as long as we do that we won't need to worry about any user code caring that we swapped a page out during our block move, as long as it's put back once the sector is transferred. Alternatively maybe the "process ID latch" could be leveraged so whenever there's an interrupt it always switches to a "Process 0" set of pages, and once handled the actual process ID set gets restored. If there's actually an intention to multitask/timeslice on this system then I suppose you'll need something like this anyway.)

For some reason this idea of I/O addressable RAM is giving me flashbacks to "Slinky" RAM cards for the Apple II.

(*Edit: Doh, I realized two choices is still fine as long as they're not contiguous. Your RAMdisk driver could just always assume it's going to swap 1000-1FFFH for the disk page unless the target buffer overlaps that at all at either end, in which case it maps to 3000-3FFFH instead.)
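That fallback could be as simple as a conservative check on the DMA address's high byte (a sketch; the frame addresses are just the 1000h/3000h example values from above):

```asm
; Sketch only. Entry: DE = DMA buffer address.
; Exit:  A = high byte of a safe page frame (10h or 30h).
; Conservative: any DMA whose high byte is 0Fh-1Fh *might*
; touch 1000h-1FFFh (the buffer is 128 bytes long), so use
; the 3000h-3FFFh frame in that case.
PICKFRM:
    LD   A,D               ; high byte of the DMA address
    CP   0FH
    JR   C,USE1            ; below 0F00h: 1000h frame is safe
    CP   20H
    JR   C,USE3            ; 0F00h-1FFFh: could overlap, use 3000h
USE1:
    LD   A,10H             ; frame at 1000h
    RET
USE3:
    LD   A,30H             ; frame at 3000h
    RET
```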
 
Hi @Eudimorphodon
Your observations are correct - Here's the thinking behind it -
1) 128 byte transfers - this really irked me, but it's a limit of the DMA space in CP/M, so I figured I might as well roll with it. Once that limitation is accepted, anything bigger with deblocking is just a compromise, since RAM doesn't care, as you point out. The only thing I considered was whether to make it 256 bytes, since I'm writing the OS from scratch, but compatibility concerns won out - as did some vague rumours of software not wanting more than 32 sectors. But then I get this nice coincidence that 7+5=12, which makes 4K blocks, so I went with it, even if it is clunky. Also, the support hardware/latches to support that fit in just two PAL/GAL chips. Aligning with hardware is also an objective.
2) I considered just paging, but I wanted a compatible architecture, with backwards compatibility to older apps, and I have no idea where the DMA might end up - and unless I want some freaky code to keep track of it all, I might end up paging out the DMA - which would be a bit messy. Also I figure people are going to do strange things like moving the DMA around while making calls, so having the DMA source in I/O land makes a lot of sense. I did consider paging it into the BIOS area, but it's 4K blocks and I have scratchpad DMA for directories in the BIOS... so I/O seemed a better choice.
3) Paging is *not* required. It's an elegant solution in which the OS or program can byte-step through memory, up to 256Mb, via clean I/O commands in 128 byte chunks, and I use the upper address lines to avoid a latch. It seems a more powerful architecture that allows separation of code and I/O space, and makes things easier for programmers to address.
4) Accessing the screen without paging - yep, you guessed it - I/O space or as a file. Direct bitmapped high resolution 128K video memories can be accessed by a 64K program and it works with character I/O or high resolution bitmaps ( and hardware graphics, but that's another element ) - I/O is one thing, but what if the video moves around? Why not just open "VIDEO.MEM" as a file, and do random I/O? Now even less memory is required. And if the video memory moves around, then so can the file, and the user doesn't care. It creates a very clean abstraction layer when compatibility with future hardware is desired over fixed mapping. As long as "VIDEO.MEM" exists, everyone can find the video map. This avoids the "Osborne 1 60K problem" - Besides, I can't find an elegant solution to paging in 128K of video memory into 64K... Sure, it still works, and programmers can use it whichever way they prefer.
5) Wastage - for the MMU, yes. Each extent only has four allocations too, which is a pain, but there's a 1:1 mapping for memory, and it doesn't matter where memory is. Got two video cards? Want to use some upper memory for programs? Now you have 640K, or 706K or even 770K, and use serial for console... Also, if you want a more efficient ramdisk for a specific application, you can do that too. The 64K disk at F0000 is a 27512 EPROM-sized block that contains the boot code and has its own 2K directory, and uses 1K allocations. It's seen as a single block in M:, but L: sees files, and that's where I store extended functions, like BASIC and other desirable commands like FORMAT etc. Also it can be deleted or ignored. But it gives fast boot and makes the system more like a home computer. Boot CP/M from ROM ( can also boot from a disk ) and any commands searched are searched from the local disk, then L: - which is useful, because you can't run MS Basic any other way under CP/M.
Also it might be a bit wasteful, but it's RAM - so it won't see a lot of files in M: in normal use I imagine. But M: can be used like a normal ramdisk too, so files and system processes all get mixed up and work together with a nice clean resource management provided for free by CP/M... Which I think is also kind of elegant. My current CCP is 4K and the FDOS is also 4K, so there's a lot of spare space too.
6) - Yes, I will write a program to run "snapshots", including starting and stopping them under NMI, or via switchouts. And a way to have RSTs automatically map to processes to create drivers - eg, video drivers can be called by RST10, which pages in the process, and it is aware of this, and knows how to return to the original process. Initially the user process is process 0, and the user can do whatever they like, but it would make for a powerful multiuser system as well. And yes, it should be possible to snapshot a program, save it to disk, load it back into completely different memory space, and start execution again, or even move it between computers and execute it somewhere else like a container.
If I went multitasking, I'd add one more piece of hardware - a comparator on the output from the MMU to generate an NMI when a program accesses memory that doesn't exist - then all I need to start a small CP/M program is 4K for the task and a shared CCP and FDOS, and I can leave the rest of it sparse, and I can still save and move it around, or duplicate it, with only a single moving allocation. And when it accesses memory it didn't load into, eg, data memory, then allocate a new block through the file system and just keep going... But that's not something I intend to do this time - it's just an idea that came up that would make a very cool "second generation" model...
I may have a trigger on jumps and calls to 0005 though to page in the FDOS as an option, and it can page back out when done, meaning that the FDOS can be called from anywhere as long as it's not trying to use a DMA above F000, and the TPA becomes 64K then. Minus the zero page of course, and even that can be overwritten, but the lower 64 bytes will ALWAYS be located at 10000 to 1003F - so that when images page in, any drivers using RSTs will still work.
And being able to snapshot CP/M software while running opens up all kinds of possibilities. It's a lot of fun. And don't forget this was intended as a GAMES machine - so you could pause your game, save the state, and put it away on disk when you shut down - I was thinking back to a time when this was difficult to do and when the power went out, you came back the next day and did it all again.
Oh, and to cover the bonus question - Let's say a 64K program extends itself with normal disk writes to 256K. It's still a single file, and can exist as a single process. Even if it's running in the default user TPA ( process 0 ). What will happen is that the first 16 pages are mapped to memory from 0000 to FFFF and the ENTIRE file is mapped to DISK random I/O from 00000 to 3FFFF - How it's mapped in the actual allocations and blocks of memory doesn't matter, but as you pointed out, you can look at an allocation and map it into memory anywhere you like.

Svenska - Yes, that is something I didn't want to impose with this architecture.

Super bonus question - Yes, I actually want to swap out the Zero Page - so that any snapshots taken while the 005C-00FF region is in use will still work. And I'll use this memory to save the snapshot state - and any peculiarities it wants, eg, shared memory, fixed allocations etc. Since it's not always desirable to randomly assign resources.

Chuck - I thought about going more granular, but decided on 4K since at some point, managing the memory becomes more of a problem than a benefit. Most systems seemed to settle on 8K pages, so I went a little further than that to align with my default "track". But 1K pages would have allowed better use of the MMU directory structure... I only get 4 allocations per extent.. ( now you know why I was asking those crazy questions about that earlier ).

At the moment - I have my "default" CP/M built, and the emulator functioning. I've recently installed the MAD ports into the emulator, and the DISKs L: and M: work - A through H are reserved for external and physical media, and I through P are reserved for internal mappings, with L: and M: fixed. L: is the BIOS image, and M: is the Memory Map. ( Which only shows the default TPA at the moment, but needs more default files, one for missing RAM and one for the BOOT DISK image (L: ) - But it boots like a real machine should - ie, loads the bootstrap, copies the BIOS and BDOS, and next I need to get the BIOS to load the CCP from disk. The CCP and BDOS don't require the additional hardware to run, and I want to keep it that way, so that it could run a straight CP/M image also.

But if I can write a backwardly compatible version of CP/M that can lay the smackdown on DOS, I will be happy :)
 
I should add that as a RAM disk - and an O/S - it does allow some crazy behavior, since a called routine can see the program that called it and all of its data. So for example, a word processor could embed a document with a basic editor, and call a "printer" routine that would open the "document" file as a file, then go in, extract the document, and print it, then return control to the document. Or it could execute a "spell check" file that again opens the original workspace that is still executing as a file, then corrects the document, and returns control. It's a crazy idea that would have allowed for some powerful programming architectures that never existed... And of course, simple stuff like task switching is easy to do. It's a super-modular system in which the OS wears its heart on its CP/M-generated sleeve.
Now I'm going to apologize to Myke for hijacking his thread.
I'll open up a new thread to show some of how it works so far this weekend.

David.
 