
68k 80 bit float format to x87

carangil

I've taken on a weird project. I am rewriting an old UV spectrogram data display program for a modern PC, in this case just Win32. The customer gave me their only working copy of it: a PowerPC Mac mini running OS X. Clicking on the program launches Mac OS 9 emulation, and the program running in there turns out to be from 1991 and runs under 68k emulation. So there are multiple levels of fakery going on.

My job is to read the data files and graph them. These files have no header and are just a binary blob of back-to-back 80-bit floating-point numbers. Right now I read 10 bytes, reverse the byte order, and reinterpret them as gcc's long double, which is the x86 80-bit format. The graph I spit out looks just like the old program's.
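Roughly, my read loop looks like the sketch below. It's just a minimal example of what I described: it assumes a gcc/MinGW x86 build where long double really is the 80-bit x87 type and the low 10 bytes of its storage hold the value, and it assumes the file is nothing but headerless back-to-back big-endian 10-byte floats. (The cast to double for printing is because MinGW's C runtime may not handle %Lg.)

#include <stdio.h>
#include <string.h>

/* Read one big-endian 68k 80-bit float and reinterpret it as an x87
   long double by reversing the byte order. Returns 0 at end of file. */
static int read_m68k_extended(FILE *f, long double *out)
{
    unsigned char be[10], le[10];
    long double x = 0.0L;

    if (fread(be, 1, sizeof be, f) != sizeof be)
        return 0;

    for (int i = 0; i < 10; i++)      /* 68k stores big-endian, x87 wants little-endian */
        le[i] = be[9 - i];

    memcpy(&x, le, 10);               /* low 10 bytes of the long double hold the value */
    *out = x;
    return 1;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s datafile\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror(argv[1]);
        return 1;
    }
    long double v;
    while (read_m68k_extended(f, &v))
        printf("%g\n", (double)v);    /* cast: the runtime's printf may not support %Lg */
    fclose(f);
    return 0;
}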

My question is: except for the byte order, are the x86 and 68k float formats identical? I would hate for some small detail to cause a weird spike in the data or something like that (like a really small number decoding as a really big one).

This site seems the best place to find experts in old CPUs!
 
Does this help?

See the header file MacTypes.h. In it, you will find:

Base floating point types:

Float32: 32-bit IEEE float; 1 sign bit, 8 exponent bits, 23 fraction bits
Float64: 64-bit IEEE float; 1 sign bit, 11 exponent bits, 52 fraction bits
Float80: 80-bit MacOS float; 1 sign bit, 15 exponent bits, 1 integer bit, 63 fraction bits
Float96: 96-bit 68881 float; 1 sign bit, 15 exponent bits, 16 pad bits, 1 integer bit, 63 fraction bits
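
If you want to sanity-check that layout without trusting the host's long double at all, a field-level decode of Float80 is only a few lines. This is an illustrative sketch, not anything from the old program; decode_x80_be is just a made-up helper name, and the byte order is the big-endian order the 68k writes to disk.

#include <stdint.h>
#include <stdio.h>

/* Fields of the 80-bit extended format quoted above:
   1 sign bit, 15 exponent bits (bias 16383), explicit integer bit,
   63 fraction bits. Input is the big-endian 10-byte form from the file. */
struct x80_fields {
    unsigned sign;
    unsigned exponent;
    unsigned intbit;
    uint64_t fraction;
};

static struct x80_fields decode_x80_be(const unsigned char b[10])
{
    struct x80_fields f;
    uint64_t mant = 0;

    f.sign     = (b[0] >> 7) & 1;
    f.exponent = ((unsigned)(b[0] & 0x7F) << 8) | b[1];

    for (int i = 2; i < 10; i++)           /* bytes 2..9 are the 64-bit mantissa */
        mant = (mant << 8) | b[i];

    f.intbit   = (unsigned)(mant >> 63);   /* explicit, unlike 32/64-bit IEEE */
    f.fraction = mant & 0x7FFFFFFFFFFFFFFFULL;
    return f;
}

int main(void)
{
    /* 1.0: sign 0, biased exponent 16383 (0x3FFF), integer bit 1, fraction 0 */
    const unsigned char one[10] = {0x3F, 0xFF, 0x80, 0, 0, 0, 0, 0, 0, 0};
    struct x80_fields f = decode_x80_be(one);
    printf("sign=%u exp=%u int=%u frac=%llu\n",
           f.sign, f.exponent, f.intbit, (unsigned long long)f.fraction);
    return 0;
}

The x87 extended format has exactly the same fields, including the explicit integer bit, just stored little-endian, which is why a straight byte reversal works.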
 
Thanks. I compared that info with the x86 format, and they really do seem to be the same except for the behavior of some operations. But I'm just reading off numbers, so I don't think any of those differences matter.
 
The only hangup you'll have is that they are byte-order reversed, but the Motorola Float80 and 8087 extended formats are otherwise compatible... Sounds like you got it right.
 