
Silicon Valley ADP50 IDE controller

hargle · Veteran Member · Joined Nov 30, 2007 · Messages: 1,397 · Location: Minneapolis, MN
I had a free moment at work today, so I thought I'd do something fun: disassemble the ADP50 controller's BIOS.
There's some interest here in getting an updated BIOS available to break the 528MB barrier, and I'm sure it's possible to do.

So, I'm looking at the IDE reader/writer code, and unless I'm missing something, it all appears to be memory mapped I/O! I've never seen anything like it before - certainly not to a hard drive.

Looking at the specs:
http://stason.org/TULARC/pc/hard-di...LLEY-COMPUTER-INC-Two-IDE-AT-Interface-d.html
Sure enough, there's no jumpers or dip switches for an IO base address.
There's also no settings for the BIOS base address either, so that's a little odd too...

I find this really curious, and it made me wonder why this sort of thing wasn't more common. Is it faster? Is it cheaper to build a card this way?
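For reference, the "528MB barrier" mentioned above comes from a BIOS passing CHS values straight through to the drive, which limits it to the intersection of the INT 13h and ATA geometry limits. A quick sketch of the arithmetic (assuming the usual 512-byte sectors):

```c
#include <stdint.h>

/* The classic 528MB barrier: a BIOS that passes CHS straight through
   is limited to the intersection of the INT 13h geometry
   (1024 cyl, 256 heads, 63 sectors) and the ATA geometry
   (65536 cyl, 16 heads, 255 sectors). */
uint64_t chs_limit_bytes(void) {
    uint64_t cylinders = 1024;  /* min(1024, 65536) */
    uint64_t heads     = 16;    /* min(256, 16)     */
    uint64_t sectors   = 63;    /* min(63, 255)     */
    return cylinders * heads * sectors * 512;
}
```

That works out to 528,482,304 bytes, i.e. 528 "marketing" MB or 504 MiB, which is why both figures show up in discussions of this limit.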
 

Was the memory-mapped-I/O tucked into the backside of the EPROM space? That wasn't unusual for a lot of add-ins, particularly some SCSI controllers.

To me, this means nothing more than this model was intended as a primary HD/FD controller only. No reason for any jumpers if everything's at a standard location.
 
Was the memory-mapped-I/O tucked into the backside of the EPROM space?

I'm not sure (I've only put about an hour into disassembling it so far) but here's a chunk of the code:

Code:
GetIDEStatus    proc near               ; 
seg000:073F                 mov     al, cs:[bx+0Eh] ; aka 1F7, command/status port
seg000:0743                 mov     ds:48Ch, al
seg000:0746                 mov     ah, 0
seg000:0748                 test    al, 80h
seg000:074A                 jnz     loc_766
seg000:074C                 mov     ah, 0CCh ; '¦'  ; write fault
seg000:074E                 test    al, 20h
seg000:0750                 jnz     loc_766
seg000:0752                 mov     ah, 0AAh ; '¬'  ; drive not ready
seg000:0754                 test    al, 40h
seg000:0756                 jz      loc_766
seg000:0758                 mov     ah, 40h ; '@'   ; seek failed
seg000:075A                 test    al, 10h
seg000:075C                 jz      loc_766
seg000:075E                 mov     ah, 11h         ; ECC corrected data error
seg000:0760                 test    al, 4
seg000:0762                 jnz     loc_766
seg000:0764                 mov     ah, 0
seg000:0766 
seg000:0766 loc_766:                                ; 
seg000:0766                                         ;
seg000:0766                 mov     ds:474h, ah
seg000:076A                 cmp     ah, 11h
seg000:076D                 jz      locret_772
seg000:076F                 cmp     ah, 0
seg000:0772 
seg000:0772 locret_772:                             ; 
seg000:0772                 retn
seg000:0772 GetIDEStatus    endp

The key being "mov al, cs:[bx+0Eh] " and I'm not sure what bx is upon entry to this routine. So bx could certainly point to the end of the rom space. I was just surprised to not see it reading in from an IO port there, or anywhere else in similar routines.
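The bit tests in that routine line up with the standard ATA status register layout. A C rendition of the same decision chain (the bit names are the standard ATA ones, not from the disassembly; the error codes 0CCh/0AAh/40h/11h are taken directly from the listing above):

```c
#include <stdint.h>

/* ATA status register (1F7h) bits */
#define ATA_BSY  0x80  /* busy */
#define ATA_DRDY 0x40  /* drive ready */
#define ATA_DF   0x20  /* device/write fault */
#define ATA_DSC  0x10  /* seek complete */
#define ATA_CORR 0x04  /* corrected data (ECC) */

/* Mirrors GetIDEStatus: returns the error code the routine
   leaves in AH for a given status byte. */
uint8_t decode_status(uint8_t st) {
    if (st & ATA_BSY)     return 0x00; /* still busy: no error reported */
    if (st & ATA_DF)      return 0xCC; /* write fault */
    if (!(st & ATA_DRDY)) return 0xAA; /* drive not ready */
    if (!(st & ATA_DSC))  return 0x40; /* seek failed */
    if (st & ATA_CORR)    return 0x11; /* ECC corrected data error */
    return 0x00;                       /* all clear */
}
```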

Of course I'm going to keep digging into it. Eventually I'll have the whole thing compilable, and then we can start making mods to it to bump it over 500MB.

We can probably go a couple different routes with the code once I get it unraveled. Either we change this code to add support, or we can take my existing PC/XT option rom code and change the reader/writers to use this memory map and build a brand new image for the ADP50. We'll see. This isn't on my immediate scope, just something I was playing around with yesterday.
 
I'd be adding support to do LBA transfers instead of CHS.
I don't recall offhand when that came into existence (I was thinking it was more like ATA-4), but if there are specific ATA-6 commands you'd like to have INT13 support wrapped around, there's no reason not to add them. I haven't played enough with CD-ROMs to know what they need, but that's on the list too.
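For reference, the CHS-to-LBA mapping an INT13 layer needs when switching from CHS to LBA transfers (geometry parameters here are just example values, not the ADP50's):

```c
#include <stdint.h>

/* Standard CHS -> LBA translation: sector numbers are 1-based,
   cylinder and head numbers are 0-based. */
uint32_t chs_to_lba(uint32_t c, uint32_t h, uint32_t s,
                    uint32_t heads, uint32_t spt) {
    return (c * heads + h) * spt + (s - 1);
}
```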

If we end up rolling the existing PC/XT BIOS into a new ADP50 controller BIOS, then all the goodies like eINT13 support, boot menu, etc will get pulled in too. I'm going to try and load up my BIOS with as many features and enhancements as we can possibly cram in. I want big IDE, CD-ROM and CF support at a minimum.

I'd be really honored if my BIOS gets updated for any IDE controller out there.
I'm trying to restructure it so that all the hardware specific stuff is isolated out so that it's pretty easy to modify for different platforms, as there are at least 3 that I want this BIOS to go into at the moment.
 
ATA-6 uses 48-bit LBA addressing, good for somewhere around 144 million gigabytes (144 petabytes). Original LBA uses 28-bit block addressing, giving a limit of 137 GB.

(That's why I asked if there would be any point to implementing ATA-6 LBA addressing)
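The two ceilings fall straight out of the address widths at 512 bytes per sector:

```c
#include <stdint.h>

/* Capacity limits for the two LBA widths, at 512 bytes/sector. */
uint64_t lba28_limit_bytes(void) { return (1ULL << 28) * 512; }  /* ~137 GB */
uint64_t lba48_limit_bytes(void) { return (1ULL << 48) * 512; }  /* ~144 PB */
```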
 

ah, that's what you were getting at.
Somehow I think 137G on these machines is plenty.

I've never actually done 48bit work via ATA before, and I've come across some conflicting information. Some of it says to use a 4 byte control block IO, which is mapped in a separate IO space, and others say that the upper LBA address can be written using a double pumping mechanism to the existing 3 LBA ports. Maybe both work, I'm not sure. One obviously requires additional hardware.

That said, the eINT13 support framework that I've already got in place supports up to 64bit addressing, so if the 2x write method works, then adding support for insanely large hard drives is actually pretty easy to do!
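If the 2x ("double pump") write method works out, the ordering ATA-6 specifies is: high-order bytes go out to the sector count and LBA task-file registers first, then the low-order bytes to the same ports. A sketch of the setup for READ SECTORS EXT (port numbers are the standard primary-channel ones; for illustration the writes are logged to an array rather than sent to hardware):

```c
#include <stdint.h>

#define REG_SECCOUNT 0x1F2
#define REG_LBA_LOW  0x1F3
#define REG_LBA_MID  0x1F4
#define REG_LBA_HIGH 0x1F5
#define REG_DEVICE   0x1F6
#define REG_COMMAND  0x1F7
#define CMD_READ_SECTORS_EXT 0x24

typedef struct { uint16_t port; uint8_t val; } io_write_t;

/* Log the (port, value) pairs for a 48-bit read setup; a real BIOS
   would issue an OUT for each pair instead. Returns the write count. */
int read_ext_setup(uint64_t lba, uint16_t count, io_write_t *log) {
    int n = 0;
    /* high-order bytes first... */
    log[n++] = (io_write_t){REG_SECCOUNT, (uint8_t)(count >> 8)};
    log[n++] = (io_write_t){REG_LBA_LOW,  (uint8_t)(lba >> 24)};
    log[n++] = (io_write_t){REG_LBA_MID,  (uint8_t)(lba >> 32)};
    log[n++] = (io_write_t){REG_LBA_HIGH, (uint8_t)(lba >> 40)};
    /* ...then low-order bytes to the same ports */
    log[n++] = (io_write_t){REG_SECCOUNT, (uint8_t)count};
    log[n++] = (io_write_t){REG_LBA_LOW,  (uint8_t)lba};
    log[n++] = (io_write_t){REG_LBA_MID,  (uint8_t)(lba >> 8)};
    log[n++] = (io_write_t){REG_LBA_HIGH, (uint8_t)(lba >> 16)};
    log[n++] = (io_write_t){REG_DEVICE,   0x40};  /* LBA mode, master */
    log[n++] = (io_write_t){REG_COMMAND,  CMD_READ_SECTORS_EXT};
    return n;
}
```

No extra I/O space is needed, which is presumably what the "additional hardware" variant in the conflicting docs was describing something else for.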

I'm most certainly going to try it on the PC/XT controller when I get some hardware. :)
 
You can do ATA6 48-bit mode on the simplest IDE controller--no special hardware needed. Double outputs to the various registers.

A terabyte HD on a 5160 would be interesting.
 

I think an operating system that runs on a 5160 that could parse a terabyte HD would be more interesting... :)
 
I think an operating system that runs on a 5160 that could parse a terabyte HD would be more interesting... :)

Not even remotely impossible. Just do the CD-ROM trick and use the network hooks and implement your own (non-FAT) file system.

Now a 5160 that supported a terabyte of FAT-file system disk would really be interesting....
 
Sorry to necropost, but I was fooling around with my Silicon Valley ADP50 and ran into something a little odd. I've used CF->IDE adapters with it for many years and just ran into something interesting. Here's the normal results:

IBM 340MB Microdrive: Works fine
Sandisk 4G CF: Works fine but limited to 1024c, 16h, 63s (504MB)

And here's the odd results tonight:

Transcend 512MB CF: Received an error message from the ADP50: "ADP50 H version required, ERROR, F1 to continue"

"H" version? I've never heard of this. The BIOS version on my ADP50 is 2.35. Knowing there is a potentially later version is interesting, but I doubt I'll find it in my lifetime.
 
I wonder how long it would take the XT to calculate the free space on a 1TB drive?
Funny, I emailed Mike Brutman about a similar topic last night. The time it takes is not based on the size of the drive but rather the size of the FAT. So if you have a partition almost completely filling the FAT for a given cluster size, there are nearly 64K entries DOS scans to find free clusters. If you bump the partition size slightly over the limit, DOS doubles the cluster size and reduces the FAT to ~32K entries.

I did a test last night and found that DOS free space calculation took ~19 seconds to complete on a 127MB partition. But on a 130MB partition, where the cluster size was doubled and the FAT size was halved when it passed the 128MB mark, it only took ~10 seconds. (The tradeoff is that now you are limited to ~32K files in that partition and small files take up more space due to the larger cluster size, so if you're planning on keeping lots of little files on the drive, you may hit a limit.)

To take this to its nerdly conclusion, a FAT-16 DOS could use 48G out of a 1T drive, divided up into 24 drive letters. Each partition would be 2G and take the maximum 19 seconds. So, it would take 7.6 minutes for an 8088 to determine the free space going from C:, to D:, to E:, etc. until finished.
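The numbers above can be sketched out directly (a simplification: scan cost is treated as proportional to FAT entry count, i.e. partition size divided by cluster size, which matches the ~19s vs ~10s measurement):

```c
#include <stdint.h>

/* FAT16 free-space scan cost is proportional to the number of FAT
   entries: partition size / cluster size. */
uint32_t fat_entries(uint64_t part_bytes, uint32_t cluster_bytes) {
    return (uint32_t)(part_bytes / cluster_bytes);
}

/* Worst case on a 1TB drive under plain FAT16 DOS: 24 drive letters
   (C: through Z:) of 2GB each, ~19 s per free-space scan. */
uint32_t total_scan_seconds(void) {
    return 24 * 19;  /* 456 s, about 7.6 minutes */
}
```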
 

It sounds like DOS does not automatically use the maximum number of FAT entries when you partition a new drive. Would it be correct to say that it dynamically allocates FAT entries as you use/disuse the free space on the drive? Or does it start off with half the entries and increase them to the full value once you cross a certain free space threshold?
 
Would it be correct to say that it dynamically allocates FAT entries as you use/disuse the free space on the drive?

No, in a word :) DOS simply chooses a cluster size when formatting to ensure that the resultant FAT has between 32K and 64K entries, regardless of volume size. I came across a great article a while back and cached it here on FAT.
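That rule amounts to picking the smallest cluster size whose entry count fits under 64K. A sketch (the exact DOS FORMAT limits, like the 65,524 usable-cluster ceiling, are glossed over here):

```c
#include <stdint.h>

/* Pick the smallest power-of-two cluster size (in bytes) that keeps
   the FAT16 entry count at or below 64K. FAT16 clusters top out
   at 32KB, giving the familiar 2GB partition limit. */
uint32_t fat16_cluster_size(uint64_t vol_bytes) {
    uint32_t cluster = 512;
    while (vol_bytes / cluster > 65536 && cluster < 32768)
        cluster *= 2;
    return cluster;
}
```

This reproduces the measurements above: a 127MB partition gets 2KB clusters (~64K entries), while a 130MB one jumps to 4KB clusters (~32K entries).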

Also a side effect of the cluster size is that random file performance (databases for example) can vary by a factor of 4 dependent on the relationship between file size, FAT size and cluster size - see here. Databases that used lots of smaller files would have performed much better than larger, flat files I guess.

As for the H version; it can't have been newer since otherwise the controller BIOS wouldn't have known about it :D Maybe the card is returning ATA-2 (hence 8-bit transfers) or something like that.
 
As for the H version; it can't have been newer since otherwise the controller BIOS wouldn't have known about it :D
Scenario:

I write Windows software named FRED.
I am about to release version 5.4 of FRED, and it will run on versions of Windows up to 7.
Windows 8 support won't be introduced until the future version of 5.5
I put code in 5.4 that upon detecting Windows 8, displays "Version 5.5 (or later) required ...".

5.5 will be newer than 5.4
 