carangil
Experienced Member
I've taken on a weird project. I am rewriting an old UV spectrogram data display program for a modern PC, in this case just win32. The customer gave me their only working copy of it: a PowerPC Mac mini running OS X. Clicking on the program opens Mac OS 9 emulation, and the program running in there turns out to be from 1991 and is itself running in 68k emulation. So there are multiple levels of fakery going on.
My job is to read the data files and graph them. These files have no header and are just a binary blob of back-to-back 80-bit floating point numbers. Right now I read 10 bytes, reverse the byte order, and then reinterpret them as gcc's long double, which on x86 is the 80-bit extended format. The graph I spit out looks just like the old program's.
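In case it helps, here's roughly what I'm doing now. This is a minimal sketch: the file name is a placeholder, and it assumes gcc on x86, where long double is the 80-bit x87 format stored in the low 10 bytes of its (padded) storage slot.

#include <stdio.h>
#include <string.h>

/* Convert one big-endian 68k 80-bit extended value to a host long double.
   Assumes x86 gcc, where long double is 80-bit x87 extended format. */
static long double read_extended(const unsigned char be[10])
{
    unsigned char le[10];
    for (int i = 0; i < 10; i++)   /* reverse byte order: 68k is big-endian */
        le[i] = be[9 - i];

    long double v = 0.0L;          /* keep padding bytes beyond the 10 zeroed */
    memcpy(&v, le, 10);            /* reinterpret the raw bits */
    return v;
}

int main(void)
{
    FILE *f = fopen("scan.dat", "rb");   /* placeholder data file name */
    if (!f) return 1;

    unsigned char buf[10];
    while (fread(buf, 1, 10, f) == 10)
        printf("%Lg\n", read_extended(buf));

    fclose(f);
    return 0;
}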
My question is: apart from the byte order, are the x86 and 68k 80-bit float formats identical? I would hate for some small detail to cause a weird spike in the data or something like that (like a really small number decoding as a really big one).
This site seems the best place to find experts in old CPUs!