
CP/M 2.x directory structures, boot strapping, etc.

I have an 8MB partition setup now and I can copy files between there and the RAMdisk.
Code:
dpblock2:
        ; an 8MB partition on IDE (with only 256 bytes per sector)
        dw 110              ; sectors per track  (in 128-byte units)
        db 5                ; block shift
        db 31               ; block mask
        db 1                ; extent mask
        dw 2047             ; highest block number  (DSM)  (in clusters)
        dw 511              ; highest directory entry number  (DRM)
        db $F0              ; directory allocation pattern  (bits set according to how many clusters
        db 0                ;        are used for the directory)  (32 bytes per directory entry)
        dw 0                ; directory checksum  (CKS)  (0 for non-removable media, else (DRM+1)/4)
        dw 1                ; number of reserved tracks
The CHS geometry of the drive is 1010 cylinders, 12 heads, 55 sectors.
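Working those DPB numbers back against the geometry (my own arithmetic, so treat it as a sanity check rather than anything authoritative):
Code:
; 55 sectors x 256 bytes = 14080 bytes/track = 110 records of 128 bytes  -> SPT = 110
; BSH = 5, BLM = 31  -> 4096-byte blocks; EXM = 1 since DSM > 255 with 4K blocks
; DSM = 2047         -> 2048 blocks x 4KB = 8MB of data area
; DRM = 511          -> 512 entries x 32 bytes = 16KB = 4 blocks, hence AL0 = $F0, AL1 = 0
; CKS = 0            -> fixed media; a removable disk would use (DRM+1)/4 = 128
; OFF = 1            -> one reserved track ahead of the directory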

With 512 directory entries it was also quite slow initially, but that was a problem with the IDE driver code. I have the ATA specifications, but have never written the driver code before so there is some trial and error going on. Reads are fairly fast now but there is some glitchiness still to be solved. For instance, DIR occasionally fails to display the entire list of files. I think some sector reads are returning garbage data.

If I use the Z280 DMA controller, moving data from IDE to memory should be somewhat faster than a CPU loop, and moving data from memory to memory (with 16-bit transfers) should be much faster than an LDIR. It's interesting to see the benchmark results from a 33MHz system. It's clear that OS overhead is a significant factor compared to raw transfer rates.

There is one thing I'd like to confirm. When BDOS calls the BIOS to select another disk, does it expect BIOS to recall the last track and sector that were set for that disk, or does it always set the track and sector again anew?
I was always fond of 'patch.com' by Bill Rink. It is written in Turbo Pascal and no source was ever released. Use the 'turbinst.com' installer to configure for terminal codes. Send a PM if you're unable to find a copy.
Thanks for the tip. I also found this thread https://forum.vcfed.org/index.php?threads/what-are-your-favorite-cp-m-tools-or-utilities.1237498/ but haven't had time to go through it yet.
 
There is one thing I'd like to confirm. When BDOS calls the BIOS to select another disk, does it expect BIOS to recall the last track and sector that were set for that disk, or does it always set the track and sector again anew?
My BIOS code on various machines only keeps a single copy of the last track and sector values from the BDOS calls and I've never had a problem when using multiple different disks.
 
There is one thing I'd like to confirm. When BDOS calls the BIOS to select another disk, does it expect BIOS to recall the last track and sector that were set for that disk, or does it always set the track and sector again anew?
The BDOS will always call SETSEC, SETTRK and SETDMA after calling SELDSK.

The example BIOS in the Alteration Guide tracks the sector/track/dma values globally, not per-disk.
 
Typically, SETTRK, SETSEC, SETDMA just store the values and return. Those values are used by the subsequent READ/WRITE call. So, the BDOS always calls those before every READ/WRITE.
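A minimal sketch of that pattern (the labels and storage locations here are invented for illustration; a real BIOS also has SELDSK return the DPH address, and WRITE works the same way):
Code:
; sketch only -- labels and variables are invented for illustration
settrk: ld      (curtrk),bc     ; BDOS passes the track in BC
        ret
setsec: ld      (cursec),bc     ; sector in BC
        ret
setdma: ld      (curdma),bc     ; DMA (buffer) address in BC
        ret
read:   ; uses the values saved above, plus the drive from the last SELDSK
        ; ... do the transfer, return A = 0 on success, 1 on error ...
        ret

curtrk: dw      0               ; one global copy of each is enough, because the
cursec: dw      0               ; BDOS re-sends track/sector/DMA before every
curdma: dw      0               ; READ or WRITE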
 
By Sir Clive Sinclair - so it probably wouldn't have been a FDC.

This whole scheme you’re describing was implemented in 1977 by Steve Wozniak with the disk controller for the Apple II, which uses an extremely simple state machine to implement GCR with the CPU doing the encoding/decoding. And Woz’s system was far more economical of memory than your plan of having to maintain a complete track buffer in memory, so… if we’re going to talk about what sort of system a notorious penny pincher would use it seems reasonable to guess he’d rip off Apple’s.

The system you’re describing is basically that used by the Commodore Amiga. Which of course typically had at least 512K of memory instead of 48k and a CPU with address space to match.

… FWIW, the reason Woz did what he did was mostly because when he started a disk controller that did everything in hardware was dozens of chips on an S-100 size board, because single chip disk controllers like the WD1771 weren’t *quite* out in the wild yet. By 1980 a Woz style disk controller didn’t save a whole lot in money *or* chips.

The Amiga does what it does because they wanted it to handle both MFM and GCR formats… and they also already had the general purpose DMA engine suitable for blitzing the track data and (by the standards of the day) tons of RAM to burn.
 
This whole scheme you’re describing

… realized that I used imprecise language here too late to edit. By this I meant, in the “Woz” paragraph, that using software to encode/decode FM/GCR bit representation goes way back before Clive Sinclair ever sold a personal computer.

(Although, actually, I have to chuck this out there: did Sinclair *ever* sell an 8 bit computer with a floppy drive? It’s not my area of interest, but my impression was at least that through the Spectrum Sinclair was stuck on those Micro tape drives and all the floppy drive systems for them were third party. So far as I know the only Spectrum shipped with a floppy drive was an Amstrad.)

The second time I meant it more literally, IE, the Amiga actually does read a whole track at a time. (The Apple II does interleaved sectors, which saves a shedload of RAM.) Track at a time would be a really pointless thing to do on a Z80 computer, but, eh, whatever floats your boat.
 
I think the ZX Spectrum +3 came with an integrated 3" floppy drive?

Sure. But that’s actually an Amstrad. (It’s post Amstrad’s acquisition of Sinclair’s computer business and shares engineering with a CPC model. Including the use of a very conventional FDC controller chip.)
 
After all, even a big track isn't going to hold too many sectors, e.g. 18 SPT @ 512 bytes/sector, so you could read a whole track in as around 12288 bytes of flux... So I'm curious as to whether anyone did this? Just read bits at the bit rate of the drive, load them all into a single memory space, then decode them straight into the file requesting them? Likewise, the track can be rewritten in memory, and when a request comes for a different track, the system could just rewrite the track with updates, read the new track and cache that.

Documentation for the Amiga OS “Trackdisk Device”. Most mere mortals would use the calls which refer to “decoded” sector data, but the interfaces for reading/writing raw flux data are there.

Again, there really isn't a great reason to do this in a CP/M computer, and it's *definitely* not a technique that's going to save you money if you dedicate a separate CPU to it. It's not even like the Amiga gets much out of it; its native disk format crams a little more data on the disk than most conventional formats for MFM controller chips do (by getting away with tighter sector spacing), and it's *pretty good* at reading alien disks that have a flux transition rate that's in the rough ballpark of normal single/double density rates, but the PLL data separator gets defeated by truly oddball formats like variable data rate Apple Macintosh disks.(%) You could solve that by giving the data separator/sync recovery circuitry a wider range(*), but unless your goal really is to build a Z80 powered "disk analyzer" digitizing flux transitions into RAM is gross overkill and not anything like a real £200 "Loki" would have shipped with.

(* of course these days a dingus like a Greaseweazle doesn't have a hardware PLL data separator, it can just oversample at a ridiculous rate and software filter the results.)

(% Edit the second, in case someone chimes in to “correct” this: the “data pump” in the Amiga *can* handle reading a GCR format Macintosh disk, but only if you use a clever piece of hardware to hang a Macintosh disk drive off it. The Mac did “variable data rates” by actually changing the speed the disk rotated at, so by using the Mac drive with the Amiga controller you get the flux transitions into the ballpark where the data separator can lock on.)
 
(Although, actually, I have to chuck this out there: did Sinclair *ever* sell an 8 bit computer with a floppy drive? It’s not my area of interest, but my impression was at least that through the Spectrum Sinclair was stuck on those Micro tape drives and all the floppy drive systems for them were third party. So far as I know the only Spectrum shipped with a floppy drive was an Amstrad.)

No, he never did.

He really did believe in his microdrives, which were arguably better than the alternatives, but by 1985 he recognized that a floppy interface would have been necessary for CP/M compatible services. The microdrive wasn't a tape drive per se. It was more like a single-sided, 2-track (working as 1 track through a byte-interleave) floppy drive. It used biphase FM encoding and had a variable number of sectors per track, up to 256. Each sector resembled a floppy sector, though he wrote the filename and sequence into the data stream preceding the data, so that it could be read without having to reference a directory. It's not a bad system - and almost kept up with floppies, though we're talking about a 7.5 RPM disk system here with just one effective combined track.

The format was interesting. In practice, while it could support up to 256 sectors/track, for a maximum capacity of 128KB (512-byte sectors), the typical cartridge only supported around 80KB of data store. Typical loading times were around 8 seconds, and they had a 1:2 interleave on subsequent sectors to allow the CPU time to move data before the next block came along. They were pretty advanced for their time, and were a genuine contender against the floppy, but manufacturing issues and distribution costs did them in.

They were converted to CP/M format by the Russians, but that's another story. The format also has simple rules that can be bent if not broken, and I managed to create a genuine microdrive format of 640K, filled it with screen images, and someone who made a "Gotek"-style microdrive loaded the format in, and it loaded up all 640K just fine. Though it takes around 48 seconds to load any one file that way, since I used the 5x retry loops to extend the data out from 128K, and reused sector numbers by varying the filename. 640K is the practical limit as the unmodified code of the Spectrum and Interface 1 cannot be tweaked any further than this.

I didn't know the Amiga did the same as I was planning. From my perspective, a RAM chip and some counters are cheaper than an FDC, and, well, have you ever noticed everything in the UK is backwards? Door handles, light switches, driving side of the road? Sir Clive was no exception. Picking apart group-coded data in software is exactly the kind of thing he would have done - he had encoders and decoders that he used with the microdrive, and they could easily have been used to move microdrive data to a floppy drive (I've considered making a floppy-based microdrive on more than one occasion), but the FM encoding used would have halved the disk capacity... single density only. And memory chips were a lot cheaper in 1985, and there would be no certainty which drive he would have chosen, or what its specifications were. I think this is a reasonably likely outcome.

You are absolutely correct that dedicating an extra CPU to it is not something he would have done, or even considered, but dedicating an extra CPU to a sound output? Well, that's something he might have done. He was looking at using the video card to do it at first, and that's where the RAM music machine comes into the picture, and maybe that is how it would have happened. Though I imagine simply having a second z80 to run the sound would have worked out cheaper in the end, and the memory it needs and buffers necessary would lend themselves to being reused as a floppy drive.
 
With 512 directory entries it was also quite slow initially, but that was a problem with the IDE driver code. I have the ATA specifications, but have never written the driver code before so there is some trial and error going on. Reads are fairly fast now but there is some glitchiness still to be solved. For instance, DIR occasionally fails to display the entire list of files. I think some sector reads are returning garbage data.
One easy way to check your IDE interface is working correctly is do a PIP with verify.

PIP c:=a:*.*[v] <--assuming drive C is IDE and drive A is RAM disk
 
He really did believe in his microdrives, which were arguably better than the alternatives, but by 1985 he recognized that a floppy interface would have been necessary for CP/M compatible services…

Devices like the microdrive existed long before Sinclair. There were multiple attempts to use 8-track mechanisms, and the company Exatron sold a continuous loop wafertape device called the "Stringy Floppy" for the TRS-80 and other systems. (There were also multiple compact cassette based systems, both continuous loop and bidirectional auto-forward-rewind.) The thing all of them had in common was that what they saved in original cost you paid for later in lousy performance and poor reliability. The writing was very much on this wall before Clive started banging his head against it.

From my perspective, a RAM chip and some counters are cheaper than an FDC, and, well, have you ever noticed everything in the UK is backwards? Door handles, light switches, driving side of the road? Sir Clive was no exception.

Stupid is stupid even if it has a charming British accent?

And memory chips were a lot cheaper in 1985, and there would be no certainty which drive he would have chosen, or what its specifications were. I think this is a reasonably likely outcome.

I looked in an old Byte magazine, as usual, and in 1984 an FDC chip cost about as much as the RAM you’d need for a track buffer. It’s all there, self contained, and just needs a couple port addresses to work. (And, notably, tons of very cheap machines like the CPC used them, instead of trying to roll something cheaper.) Maybe, just maaaybe, if the “Loki” really had materialized around having a DMA coprocessor similar to the Amiga chipset then maaaybe implementing an Amiga like scheme could make some sense, but… no. It’d be really, really dumb to ever build this from discrete parts.

Anyway. Here is a good explanation of how a software driven disk controller appropriate for an 8-bit machine works. Based on what I know about real life machines like the ZX-80/81 and Spectrum this seems way more “Sinclair-y” to me than some overweight memory-wasting track buffer system.

 
I want to put a short defense of the micro drive here. The transfer rate and storage were about the same as a single sided, single density floppy though seeks would be longer. Probably would have been more than adequate when the design was first laid out on paper especially at 10% of the cost of a floppy drive. Not so good several years later when the floppy drive offered much more storage for the same price.

Timex (of Portugal) developed a nice disk subsystem for their copies of Sinclair machines.
 
When I moved to NJ in the 80s,
I occasionally attended Sol Libes' ACGNJ (Amateur Computer Group of New Jersey) meetings in Scotch Plains, NJ.
By then it was more of a social group, meeting at the diner without any formal presentations.
All the learning opportunities I forfeited :-/

http://www.acgnj.org/hist.html
ACGNJ is the oldest personal computer club and user group in the world.
 
For what they were, Microdrives were a suitable product. They were much cheaper than floppy drives at the time, and probably stayed that way until much later in the 80s. As @krebisfan noted, they were similar in performance to single sided floppies. This was a market for people who couldn't afford home PCs, so a decent and reasonably fast storage environment was worthwhile.

Devices like the microdrive existed long before Sinclair. There were multiple attempts to use 8-track mechanisms, and the company Exatron sold a continuous loop wafertape device called the "Stringy Floppy" for the TRS-80 and other systems. (There were also multiple compact cassette based systems, both continuous loop and bidirectional auto-forward-rewind.) The thing all of them had in common was that what they saved in original cost you paid for later in lousy performance and poor reliability. The writing was very much on this wall before Clive started banging his head against it.

I know about the others. I have a stringy floppy also, but they usually came in sizes of 16, 32 and 64KB. The minimum of a microdrive was 80K, and they were often around 100K, and they cost much less and had better distribution networks. But the Sinclair world shut down before he could reduce the cost. Maybe he couldn't.

MGT and others made disk interfaces for the Spectrum also, but the drives were expensive, so Microdrives were popular.

I looked in an old Byte magazine, as usual, and in 1984 an FDC chip cost about as much as the RAM you’d need for a track buffer. It’s all there, self contained, and just needs a couple port addresses to work. (And, notably, tons of very cheap machines like the CPC used them, instead of trying to roll something cheaper.) Maybe, just maaaybe, if the “Loki” really had materialized around having a DMA coprocessor similar to the Amiga chipset then maaaybe implementing an Amiga like scheme could make some sense, but… no. It’d be really, really dumb to ever build this from discrete parts.

Anyway. Here is a good explanation of how a software driven disk controller appropriate for an 8-bit machine works. Based on what I know about real life machines like the ZX-80/81 and Spectrum this seems way more “Sinclair-y” to me than some overweight memory-wasting track buffer system.


Even Spectrum floppy disk interfaces didn't use DMA. The z80 was wired to handle transfers at up to 200Kbytes/sec without a DMA, which is 4 to 8 times faster than a floppy disk before you consider seek times and access times... There's no need for DMA on a CP/M machine just for floppy access - it's not like the CPU is off doing other stuff while it's waiting for disk operations to complete. A Z80 was pretty cheap at the time, cheaper than a FDD, and you can add memory to it.

And you have enough I/O power alone to check a port status bit, read a byte, write to memory and loop for a 250KHz FDD without any tricks at all.

Or, if the z80 is dedicated, well, you can play with other signals such as WAIT, and use INIR to do the block transfer internally, as long as you use static memory for the buffer.
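To put rough numbers on it, even the naive polled loop has plenty of margin at double-density rates. This is only a sketch -- the FDSTAT/FDDATA ports, the 'data ready' bit and SECLEN are invented for illustration:
Code:
; byte-banged read, roughly 60 T-states/byte best case (~65KB/s at 4MHz),
; versus one byte every 128 T-states from a 250kbit/s DD floppy
rdloop: in      a,(FDSTAT)      ; read the (invented) status port
        rlca                    ; 'data ready' bit 7 into carry
        jr      nc,rdloop       ; spin until a byte is waiting
        in      a,(FDDATA)      ; grab the data byte
        ld      (hl),a          ; store it at (HL)
        inc     hl
        djnz    rdloop          ; B = byte count (up to 256 per pass)

; with WAIT pacing the transfer instead, the block instruction manages 21 T-states/byte:
;       ld      c,FDDATA
;       ld      b,SECLEN
;       inir                    ; (C) -> (HL), HL++, B-- until B = 0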

DMA is not necessary for most floppy disk speeds, even with something as slow as a 4MHz z80. A FIFO buffer would also work just as well, though that is probably just another form of DMA. Or you could add a DMA controller, though they weren't particularly inexpensive, as I recall.
 
And you have enough I/O power alone to check a port status bit, read a byte, write to memory and loop for a 250KHz FDD without any tricks at all.

That is my point: if you use an intelligent FDC then you don't need DMA (although a lot of systems that use FDC chips use it anyway, because it does let you accomplish other things like dropout-free serial communication more easily), you just need to shovel bytes off a port. If you're going to cook up some alternate scheme that requires you to not only handle the data transfer but decode it you're going to be sucking up even more CPU time... which is fine, if you don't mind wasting it (the Apple II and Macs using the IWM chip were pretty successful despite the fact they turn into doorstops while working the floppy disk), but...

I kind of feel like maybe you overlooked something in your estimate of what it's going to take to implement your "suck in a whole track" scheme: you said you just need "some counters and a memory chip", but what about clocking/data separation? You can't just run your input shift register at a fixed rate based on a system clock (unless you substantially oversample) because real-world disk drives vary in speed, and there's also the phenomenon of bits of different polarities "drifting" after they've been written to the media. To get an accurate matrix of bits into your "flux buffer" for the CPU to decode you're still going to need a "data separator" circuit to do the clock recovery and digitizing correctly, and you'll also need systems for determining when you're at the "beginning" of a track... etc.

The link I dropped in *hints at* how the Apple II Woz disk controller handles this in a "CPU economical" manner, but to really understand it you'll need to look up the "Understanding the Apple II" book it references. (Or a similarly detailed explanation of what's happening.) The long and short of it is the Apple II implementation uses a combination of a clocked flip-flop and a state machine constructed from a PROM feeding back into itself to essentially "oversample" at 2MHz the bit windows and preprocess the data that ends up in the shift register. It's an elegant system, but it imposes very specific constraints on the data format, and it also relies on *extremely predictable* timing loops in the software that reads and writes the data bytes. Presumably in your system you're going to use those counters to replace the time-sensitive read-write loop and DMA the sampled data straight into memory, but in that case you're going to need something like what the Amiga's "Paula" chip has built into it, IE, a full PLL data separator, for degunking the bitstream. (The Paula also has a comparator in it for looking for a "Syncword", allowing it to locate the start of a track by itself; Apple's system of course requires very active CPU participation to get the device synced up with the sector boundaries.)

*shrug* I mean, I guess you could blindly oversample the whole disk rotation into an oversize buffer (figure a buffer at least twice the size of what you expect a precise bitmap to take up) and quantize and analyze after the fact, but that's going to take even more unreasonable-for-an-8-bit amounts of RAM to handle. Here's a software floppy controller implementation for Arduino that can do sector-based I/O on FAT formatted disks with only a 512-byte sector buffer and no dedicated data separator hardware, but the low-level read routines rely on ATmega8 assembly routines and for this kind of task a 16MHz ATmega8 can run rings around any Z80 derivative short of the eZ80. (Nearly all ATmega8 assembly instructions execute in a single clock cycle.) Of course a system actually built in the 80's could use a regular off-the-shelf data separator chip to clock an input shift register, but for a design built *today* I don't imagine these chips are any easier or cheaper to find than the FDCs they were designed to augment. (Some FDCs have perfectly adequate data separation built into them; even the original WD1771 had a built-in data separator, although it was pretty universally agreed that it was crap and you'd be better off with an external one.)

In any case, the point here is that if you get rid of that FDC you can't expect to just wire up a shift register to the floppy disk READ line and pump that straight into a blindly clocked counter. There are other moving parts here you need to add which are going to put your chip count considerably above that of an FDC-based system unless you cook up an application-specific IC to do it. And... sure, given Sinclair's ridiculous NIH attitude it's very possible they might have preferred to cook up their own ULA to work as a data pump; in fact it wouldn't surprise me one bit if they just rehashed the FM unit they used in the microdrive and pretended it was perfectly normal to be offering a single density system in 1985. (Which, honestly, wouldn't have been that crazy in the British market, at least; the first disk controller for the BBC Micro was single density.) But color me not sold on the "track buffer" idea. It works for the Amiga because of some very specific aspects of how the machine is put together at the chipset level. (IE, its "everything is a blitter" architecture; that Paula chip that has the disk goo embedded in it is also the sound chip, which is also DMA driven.) If "Loki" really were just a straight-up Amiga ripoff at every level, down to having the equivalent of a Paula chip, then sure, I guess they could go that way, but... there's a lot of ifs in there.

I get that Clive Sinclair is, in some people's minds, the UK's Steve Jobs, but... maybe there's a good cautionary tale here. Apple was super high on its own supply in the late 1970's thanks to the perceived genius of Woz's disk controller, so in 1980 they set up a "Disk Division" to engineer in-house the next generation of floppy and hard disk drives for upcoming Apple computers. TL;DR, it was a huge fiasco. Sometimes you *can* reinvent the wheel or build a better mousetrap, but, sad to say, usually you can't.
 
@bakemono Did you get all your questions answered? You might want to start a new thread for any more. I'm going to stop watching this thread, others probably have already.
 
That is my point: if you use an intelligent FDC then you don't need DMA (although a lot of systems that use FDC chips use it anyway, because it does let you accomplish other things like dropout-free serial communication more easily), you just need to shovel bytes off a port.

Not sure why you couldn't economically reproduce this function without an off-the-shelf FDC; I mentioned clock recovery earlier, including speed variation in the stream, and that can all be done with counters too, though they add to the cost of a discrete solution. FDCs are great, but I think we're both in agreement that this probably isn't how Sinclair would have done it, and they wouldn't have reused their existing chip because it was just "code" in a ULA. They might have put this in a new chip, and made their own FDC though, with an integral decoder.

CP/M 2.2 works quite happily with LBA. It doesn't really care what the physical structure of the disk is, or how the BIOS delivers the records.
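As a sketch of what that looks like with the 110-record tracks mentioned earlier (the names and the bit-by-bit multiply are my own illustration; adding the partition's base LBA and the 128-byte-record-in-256-byte-sector deblocking are left out):
Code:
; sketch: flatten the BDOS track/sector into an LBA -- names are invented
SPT     equ     110             ; 128-byte records per logical track

trk2lba:
        ld      hl,0
        ld      e,0             ; E:HL = 24-bit accumulator
        ld      bc,(curtrk)     ; track saved by SETTRK
        ld      a,SPT
        ld      d,8             ; 8 multiplier bits: E:HL = track * SPT
mbit:   add     hl,hl
        rl      e               ; accumulator <<= 1
        rlca                    ; next multiplier bit (MSB first) into carry
        jr      nc,mnext
        add     hl,bc           ; accumulator += track
        jr      nc,mnext
        inc     e               ; propagate carry into bits 16-23
mnext:  dec     d
        jr      nz,mbit
        ld      bc,(cursec)     ; sector saved by SETSEC
        add     hl,bc
        jr      nc,nocy
        inc     e
nocy:   srl     e               ; two 128-byte records per 256-byte sector,
        rr      h               ; so LBA = record >> 1; the bit shifted out
        rr      l               ; (carry) says which half of the sector to use
        ret                     ; E:HL = LBA relative to the partition start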

Thank you for the link. I saw that long ago and didn't get a chance to read it, and now I know where it is, so will enjoy reading it later.

@bakemono Did you get all your questions answered? You might want to start a new thread for any more. I'm going to stop watching this thread, others probably have already.

Admonishment acknowledged. I think @bakemono got his initial questions answered, but given his thread, may be back with more later.
 
FDCs are great, but I think we're both in agreement that this probably isn't how Sinclair would have done it, and they wouldn't have reused their existing chip because it was just "code" in a ULA. They might have put this in a new chip, and made their own FDC though, with an integral decoder.

I'm not sure we're in "agreement" on that, or at least I wouldn't be in agreement on that if Sinclair had actually had competent technical management... but sure, I will say there's a non-zero chance they would have insisted on baking their own FDC chip out of sheer bloody-mindedness. But I will stand by saying that they would *not* have done it in the form of a full-track flux reader; a bad copy of Apple's IWM chip seems far more likely.

Anyway. If you want to start a new thread detailing your plan for a clock recovering data separator using "counters" I'd be interested to see the actual circuit you have in mind. It's easy to handwave how easy clock recovery from an MFM encoded source is, actually doing it in a small number of components is another thing altogether.
 