
IBM PC's 8088 replaced with a Motorola 68000

MicroCoreLabs

I was wondering what the IBM Personal Computer would have been like if they had chosen the Motorola 68000 instead of the Intel 8088, so I used my MCL86+ to emulate the 68000 and find out!



 
Wasn't there a 68K card available for the PC XT back in the day? I could have sworn that I ran across one in my 1987 PC product guide. I know there definitely was one for the NS16032...
 
There was the IBM 9000 series of scientific instrument computers. Those used a 68K and were released around the same time but they are quite rare as most were stripped of their keyboards.
 
The XT/370 had two 68000 chips on one of the 3 boards that mimicked the mainframe.

The 68008 was not available in quantity until 1984 but, by then, the 68000 was cheaper than the 68008. That was rather unfortunate for the Sinclair QL, which was the major system that tried to use the 68008.
 
There were other 16-bit CPUs available. The IBM lab computer came out a couple of weeks before the 5150, giving rise to speculation that the new Personal Computer was going to be a 68K system, which was nonsense. As far as I know, the Z8000 wasn't even being considered. Given that Intel could offer a more-or-less complete selection of peripheral chips, the x86 choice was a natural. I'll wager that IBM got a helluva price break for the complete set. Too bad that Intel didn't have a graphics chip available at the time; they could well have made a complete sweep. The Intel clone of the NEC 7220, the 82720, wasn't ready for mass production at the time of the 5150 introduction.
 
If you look at contemporary benchmarks carefully, a thing that starts showing up is that, for a pretty decent swathe of general operations, the 68000 actually isn’t that much faster than the 8086. (Obviously hobbling a 68000 with an 8-bit bus wasn’t something most people would have been doing, so direct comparisons with the 8088 are a little loaded.) I mean, it’s not “not faster”, but in terms of average ops per clock it’s very much in the same ballpark. The 68000 doesn’t really shine until you’re doing 32-bit math or working with data sets larger than 64K, both of which are things the 8086 has to jump through hoops to do. The 5150 was explicitly targeted at a low-performance market niche where these things were not considered priorities.

The 68000’s programming/memory-model advantages really didn’t pay off until a few years later, when it was more normal for personal computers to have the better part of a megabyte of memory, and software and storage that could really use it. The 80286 could run rings around a 68000 in 16-bit math benchmarks, but the 68000 “looked like” a 32-bit machine and by that point wasn’t that much more expensive than an 8086, so it’s no surprise it was the darling choice for the mid-80s crop of GUI super-micros.
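To make those "hoops" a bit more concrete, here is a minimal C sketch (every name in it is invented for the illustration): summing a buffer deliberately bigger than 64K into a 32-bit accumulator. On a 68000 the buffer is just a flat address and each addition is a single ADD.L; a real-mode x86 C compiler (think the later "huge" pointer model) has to renormalize segment:offset as the index crosses 64K and split each 32-bit add into an ADD/ADC pair.

#include <stdio.h>
#include <stdlib.h>

/* Deliberately larger than one 64K real-mode segment. */
#define BUF_BYTES (128L * 1024L)

/* 32-bit sum over a >64K buffer: trivial with flat 68000 addressing,
 * segment gymnastics plus ADD/ADC pairs on a real-mode 8086. */
long sum_buffer(const unsigned char *buf, long len)
{
    long sum = 0;   /* 32-bit accumulator */
    long i;

    for (i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}

int main(void)
{
    unsigned char *buf = malloc(BUF_BYTES);
    long i;

    if (buf == NULL)
        return 1;
    for (i = 0; i < BUF_BYTES; i++)
        buf[i] = 1;
    printf("sum = %ld\n", sum_buffer(buf, BUF_BYTES));  /* prints 131072 */
    free(buf);
    return 0;
}

Nothing in that loop is exotic; the point is simply that on the 8086 both the pointer arithmetic and the accumulation turn into multi-instruction sequences, while the 68000 does each in one.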
 
Another factor that probably led to Intel offering IBM a deal-of-a-lifetime was the faltering iAPX 432 project. Early on (1979?) it was being touted as the next step after the 8085/8086. It was hugely expensive on a chip basis and performance was disappointing. On the other hand, if you were a software vendor you could almost take your 8-bit code, run it through a translator, and wind up with something that would run on an 8086. The performance of the 68008 samples we received was so dreadful that we eliminated it from contention right away. The 68000, on the other hand, worked pretty well with our 16-bit bus, but management nixed that choice. By the time the choice had to be made, Intel was sampling the 80186 and the 80286 wasn't far behind.
 
One item about the 68K that I recall was the announcement at a WESCON. The sales guys manning the booth were passing out databooks, but no samples. Since the 68K is dual-mode (i.e. user and privileged) with a big address space, I asked about the possibility of implementing a primitive virtual-memory scheme. The weary expression on the face of the guy on the other side of the table told me that my question was perhaps the 53,938,234th time he'd been asked that. The answer was "no"--there is not enough information saved for some (mostly memory-to-memory) instructions to restart them after a fault. Apple, on the other hand, took that in stride and simply declared certain instructions in user mode to be verboten. I think it was Apollo that got around the issue by running two 68Ks slightly out of phase with one another. The idea was that the leading 68K would fault, halt the second one, and satisfy the memory reference for the second one. It was pretty clever.
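For anyone wondering why "just restart the instruction" doesn't work, here is a toy C model (everything in it is invented purely for the illustration) of a memory-to-memory move with post-increment addressing. By the time the destination write faults, the source register has already been bumped, so naively re-running the instruction from the top reads the wrong byte; the 68000 simply didn't save enough internal state on a bus error to do any better, which is what the 68010's larger exception stack frame later addressed.

#include <stdio.h>

/* Toy model of a single "MOVE.B (A0)+,(A1)+" whose destination write takes
 * a bus error.  This only shows why a blind restart goes wrong once the
 * post-increment side effect has already happened. */

static unsigned char src[2] = { 0xAA, 0xBB };
static unsigned char dst[2];

int main(void)
{
    unsigned char *a0 = src;   /* source address register */
    unsigned char *a1 = dst;   /* destination address register */
    unsigned char data;

    /* First try: the source read and the increment of a0 happen... */
    data = *a0++;
    /* ...then the destination write faults (simulated), so nothing is stored.
     * Pretend the fault handler fixes the page and re-runs the whole
     * instruction from scratch: */
    data = *a0++;              /* a0 was never rolled back, so this reads src[1]! */
    *a1++ = data;

    printf("expected 0xAA, stored 0x%02X\n", dst[0]);  /* prints 0xBB */
    return 0;
}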
 
>>> The idea was that the leading 68K would fault, halt the second one and satisfy the memory reference for the second one. It was pretty clever.

Hmmmm. I like the ingenuity!

Dave
 
The weary expression on the face of the guy on the other side of the table told me that my question was perhaps the 53,938,234th time he'd been asked that.

It's kind of funny it took them a whole three years to get the 68010 out the door if they were getting that much flak. I can only assume they wanted to sit on that feature until they had a matching MMU ready to go. (Maybe a bad choice considering how customers like Sun just rolled their own anyway.)

Plenty of people did run "UNIX-like" (Xenix!) OSes on the 68000 with simple segment-based memory protection schemes and swap files, but yeah, I'm surprised Motorola just barely missed the boat on true virtual memory support, considering how forward-looking the rest of the design was.
 
What bad choice? Motorola sold two $400 processors instead of a single $500 processor.

They managed to get Apollo to do that, but most of the rest of their (potential and otherwise) customers were stuck trying to enforce software limitations (which were exploitable), doing without real virtual memory, or having to go elsewhere. Telling potential buyers that they themselves need to faff around and solve what looked to them like an obvious-but-trivial-for-you-to-fix problem with your product isn't exactly endearing.
 
Wasn't there a 68K card available for the PC XT back in the day? I could have sworn that I ran across one in my 1987 PC product guide. I know there definitely was one for the NS16032...
You're perhaps thinking of the Sritek VersaCard?
 
Not specifically; there was a raft of them, most now utterly forgotten. TLM, for example, had a 68K board with 1MB of memory, 8 serial ports, and 2 parallel ports. Motorola had the MPCKN2M card. Link Data had the MPC/68. The list, IIRC, was pretty good-sized. And we shouldn't forget the made-for-application boards, like the OCR one from Palantir... Until the 80386, really, if you had large data sets, the 68K was an attractive solution.
 
I rewrote history! hehe.

Man, that article was clearly automatically generated by running the Hackaday text through a thesaurus. “IBM Private Pc”. Uh huh.

“AI” truly is going to be the death of human intelligence. Not in a cool Skynet kind of way, more of a “smothered in our own vomit after passing out on the toilet” sense.
 