
Which minicomputer architectures used "skips" to branch and had no stack?

bzotto

For research and comparison purposes, I'm trying to compile a list of machines whose architectures supported/used the following control flow patterns:
  1. Branching done with "skip" style instructions, i.e. upon various compares, the program counter will skip the following instruction (typically a jump) rather than more contemporary direct branch instructions.
  2. No stack, with subroutine calls using the trick of embedding the return address directly at the first word of the callee routine's code. Return thus done via an indirect jump to the top of the routine.
What are the machines that used these structures? The PDP-8 did, and so did the 16-bit HP 2100. The Data General Nova used skip branching and doesn't have a stack, but seems to do jump-and-link style calling via a register.
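
For concreteness, here's roughly what I mean, in the form of a toy C sketch of both patterns together (PDP-8-flavored; the opcodes, encoding, and addresses are all my own invention):

    #include <stdio.h>

    /* Toy single-accumulator machine showing (1) skip branching and
       (2) the return-address-in-first-word call. Each memory word is
       op*1000 + operand; everything here is invented for illustration. */
    enum { HALT, LOADI,      /* acc = operand                           */
           DECA,             /* acc -= 1                                */
           SKPZ,             /* skip the next word if acc == 0          */
           JMP,              /* pc = operand                            */
           JMS,              /* call: mem[operand] = pc, pc = operand+1 */
           JMPI };           /* return: pc = mem[operand]               */

    int mem[32];

    int main(void) {
        mem[0] = LOADI * 1000 + 3;     /* MAIN: acc = 3                    */
        mem[1] = JMS   * 1000 + 10;    /*       call COUNTDOWN at 10       */
        mem[2] = HALT  * 1000;

        mem[10] = 0;                   /* COUNTDOWN: return address lands here */
        mem[11] = SKPZ * 1000;         /* acc == 0? skip the jump below    */
        mem[12] = JMP  * 1000 + 14;    /* acc != 0: keep looping           */
        mem[13] = JMPI * 1000 + 10;    /* acc == 0: return through word 10 */
        mem[14] = DECA * 1000;
        mem[15] = JMP  * 1000 + 11;

        int acc = 0, pc = 0, run = 1;
        while (run) {
            int w = mem[pc++], op = w / 1000, ea = w % 1000;
            switch (op) {
            case HALT:  run = 0;                   break;
            case LOADI: acc = ea;                  break;
            case DECA:  acc -= 1;                  break;
            case SKPZ:  if (acc == 0) pc++;        break; /* the "skip" */
            case JMP:   pc = ea;                   break;
            case JMPI:  pc = mem[ea];              break; /* indirect return */
            case JMS:   mem[ea] = pc; pc = ea + 1; break; /* plant return addr */
            }
        }
        printf("halted with acc = %d\n", acc);
        return 0;
    }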

Generally looking at 16-bit (ish) systems prior to the mid-70s. Which ones like this or similar have I missed? Thanks!
 
(2) was quite prevalent in old iron. Consider the CDC 6000 series "return jump" (used in both PP and CP code; also present on the 160A, 1604, and 3000 series). Since the IBM 1620 had only a single register to save a return address, the usual calling sequence used a "Branch and Transmit Immediate" instruction, with the immediate field holding the return address. None of these implemented a stack. Same for the IBM 1130/1800--BSI stored a return address preceding the branch target. The 1130 also had a bit of (1) in that BSC (branch or skip on condition) could be either a simple conditional skip or a full-bore branch.

As for (1), I seem to recall that the SMS 300 microcontroller used skip-type branching.

The CDC 1700 mini satisfies both (1) and (2) of your criteria. Doubtless, that also goes for the controller firmware used in a lot of peripheral gear.

Also, for (1), consider the PIC MCUs--they have their origin in the GI PIC, built for CP1600 MPU systems (1976). No conditional branches, only conditional skips.

I can probably come up with at least a half-dozen other examples. You could too--just spend a month or so reading through instruction descriptions on bitsavers.
 
I'll add another topic that seems to set various architectures apart--condition codes. What processors lack condition codes? Cray and CDC systems (no coincidence!) have this in common. Any others?
 
Not a mini but a contemporary: the PALM processor in the IBM 5100/5110/5120 matches 1. There is no stack either, but also no special instruction for the kind of subroutine-calling procedure described in 2.

Anyway, I'm mainly writing to say that Alpha doesn't have condition codes. Neither does PALM for that matter.
 
The 4004 processor had no data stack and only 3 levels of useful program return stack (one level was the current PC). The 4004 had an unconditional skip instruction, not well noted in the assembly language, but useful at the entry of subroutines like print() that had often-used values, like $0D or $0A.

The Nicolet 1080 mini had no dedicated stack for data or instruction flow. When a call was made, the first address of the subroutine would hold the return address. If the subroutine was to be re-entrant, it was the subroutine's responsibility to manage some form of stack to save the return address. This was similar to other predecessors.

The NC4000 processor had two separate stacks, one dedicated to data and one dedicated to instruction flow (and sometimes data).
Dwight
 
Seymour viewed condition codes as evil, particularly when performing instruction scheduling. His conditional branches depend on the content of a register (zero, negative, etc.). The idea is that if you use condition codes, they're the result of some operation that itself has operands, so every branch carries a hidden dependency on whichever instruction last set the codes.

Anent the 4004/8008 stack: A fixed-length internal stack isn't unusual. I believe the lower PIC MCUs feature that, as well as early MPUs like the National Semi PACE. The PACE did feature a "your 10 level stack is almost full" interrupt, so staging the stack to RAM was possible.
 
I did that myself when throwing together a virtual-machine interpreter for one of my many hobby projects some years back. It's quite natural when you get used to it.
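
A fragment in that spirit, sketched in C (the encoding here is invented, not from my actual project): the branch names the register it tests, so there is no separate flags state to track.

    #include <stdio.h>

    /* Branches test a named register's contents directly (CDC/Cray
       style) rather than a flags word set as a side effect earlier. */
    enum { HALT, SETI, ADDI, BRZ, JMP };

    typedef struct { int op, r, x; } insn;  /* x: immediate or target */

    int main(void) {
        int reg[4] = {0};
        insn prog[] = {
            { SETI, 0, -3 },  /* 0: r0 = -3           */
            { ADDI, 0,  1 },  /* 1: r0 += 1           */
            { BRZ,  0,  4 },  /* 2: if r0 == 0 goto 4 */
            { JMP,  0,  1 },  /* 3: goto 1            */
            { HALT, 0,  0 },  /* 4: done              */
        };
        int pc = 0;
        for (;;) {
            insn i = prog[pc++];
            if      (i.op == HALT) break;
            else if (i.op == SETI) reg[i.r]  = i.x;
            else if (i.op == ADDI) reg[i.r] += i.x;
            else if (i.op == BRZ && reg[i.r] == 0) pc = i.x;
            else if (i.op == JMP)  pc = i.x;
        }
        printf("r0 = %d\n", reg[0]);
        return 0;
    }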
 
One thing it does is avoid confusion over which instructions affect which condition codes. If one is doing dynamic instruction scheduling, the use of a single register as the source of a branch is very straightforward.
 
How does the 6600 handle overflow or multiple-precision arithmetic?

Also, skips are easy if an instruction is one word long. I did come across a machine with multi-word instructions and skips; the programming manual said to skip only over instructions one word long.
 
On the 6000 series, the important thing that many programmers just learning the machine architecture miss (and this also applies to the Cray I) is that arithmetic is performed to 48-bit precision, in spite of the word being 60 bits long. The only exceptions that can deliver a 60-bit result are integer add and subtract. The 64-bit STAR even carried this over to all arithmetic operations--48 bits, with the upper 16 bits serving other purposes.

Everything else is done in the floating point units. Both normalized and unnormalized (and rounded) operations are provided. Double-precision requires separate steps to form upper and lower results, where the lower result carries an exponent 48 less than the upper result.

This is very handy--to perform an integer multiply, one need only do an unnormalized double-precision (lower product) multiply. If the exponents of the operands are 0 and the operands are unnormalized, the result is the unnormalized product with a zero exponent, unless overflow has resulted, in which case the exponent will be nonzero. (Actually, when I say "zero" I mean zero biased by octal 2000.) The so-called "integer multiply" option merely forced the exponents of the operands to zero.
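
A loose analogy in C--not the 6000's actual data paths, just the shape of the idea: the low 48 bits of the full product are the integer result, and a nonzero high part plays the role of the nonzero result exponent that flags overflow.

    #include <stdio.h>
    #include <stdint.h>

    /* Analogy only: multiply two 48-bit integers, keep the low 48 bits,
       and treat any spill past bit 47 as the "nonzero exponent" signal.
       (Uses the GCC/Clang __int128 extension for the 96-bit product.) */
    int main(void) {
        uint64_t a = (uint64_t)1 << 40;          /* 48-bit operands */
        uint64_t b = 300;
        unsigned __int128 p = (unsigned __int128)a * b;
        uint64_t low48 = (uint64_t)p & (((uint64_t)1 << 48) - 1);
        int overflowed = (uint64_t)(p >> 48) != 0;
        printf("low 48 bits = 0x%llx, overflow = %d\n",
               (unsigned long long)low48, overflowed);
        return 0;
    }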

There is no detection of overflow for simple 60 bit integer addition or subtraction.

The use of separate instructions to recover upper and lower results was advantageous, given that there are two multiply units, so that both halves of a double-precision product can be calculated with only a 1 minor cycle delay between them.

Recall that the Cray 1 didn't even have a divide instruction, but rather a "floating reciprocal approximation" instruction. A precise quotient could be computed by refining the reciprocal with a couple of iterations of Newton's method and then multiplying the reciprocal by the dividend.
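
The refinement is the classic Newton step x = x*(2 - d*x), which roughly doubles the number of correct bits each pass; a quick C sketch of the idea, starting from a deliberately crude guess:

    #include <stdio.h>

    /* Build division from a reciprocal approximation, Cray-style:
       refine x ~= 1/d by Newton's method, then multiply by the dividend. */
    int main(void) {
        double d = 7.0;
        double x = 0.1;                    /* crude approximation of 1/7 */
        for (int i = 0; i < 4; i++) {
            x = x * (2.0 - d * x);         /* Newton refinement step */
            printf("iteration %d: x = %.17g\n", i + 1, x);
        }
        printf("22/7 via reciprocal: %.17g\n", 22.0 * x);
        return 0;
    }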

As regards instruction length, all register-register instructions on the 6000 are 15 bits in length--and are three-address. So an instruction could be issued every minor cycle, with delays only if an operand wasn't available. The issue logic could even "shortstop" a result by making it available for an instruction a minor cycle before it was realized in the register file.

Writing fast, efficient code for the 6600 was a great mental exercise.

Oh, and condition-code-less instruction sets should probably include PA-RISC.
=======================
Regarding the "skip" over two-word instructions. Certainly the 1130 skip (BSC) instruction would skip only one word, even if the following instruction was a two word one. One could use this as a programming "trick" by encoding a valid instruction as the second word (operand) of a two-word instruction. Could be confusing as heck to someone reading the code.

When CDC introduced the Compare-Move Unit on the lower CYBER 70 line (72/73, corresponding to the 6400 and 6500), it wasn't available for the 74 (6600 architecture). So how to tell, and make use of the thing? Well, a 15-bit no-op was octal 46000, and the CMU codes were shoehorned into this by making them 30-bit and 60-bit instructions starting with 46yxx, with "y" being nonzero.

Since a CMU instruction had to occupy the first 30 bits of a word, coding one with a 30-bit jump in the lower 30 bits would result in a non-CMU machine executing a couple of no-ops and then the jump, while the CMU machine would see the word as a valid CMU instruction and consider the jump in the lower 30 bits part of the operands. It worked well on the entire range of 6000-7000 systems until the CYBER 170 series came out. It was never explained to me satisfactorily, but a no-op on a non-CMU CYBER 170 had to be coded as 46000--anything else threw an illegal instruction exception.

So, a variation on the theme...
 
Also the SEL-810 computers used skips, and subroutine calls were made using "SPB" (store place and branch), which stored the return address in the target memory location, then jumped to the memory location after that (return from subroutine then being an indirect jump to the top of the routine, exactly as described in the original question).
 
Amazing how many minicomputer architectures existed!

Subroutine calls were generally as described above, or Branch and Link using a register, as on the IBM 360/370.

Many DEC computers used the first word of a subroutine to store the return address: the PDP-8, -7, -9, and -15.
 
The CDC 6000/Cyber not only stored the return address, but added a branch instruction (EQ B0,B0) as part of the storage. This was a bit awkward when writing reentrant code; one had to take this return address and save it where it wouldn't be overwritten by another "call" instruction.

Some early systems had no subroutine call (see my note on the 1620; the 650 was another example). One explicitly stored the return address somewhere and then performed a jump to the routine. On some very simple architectures, the program counter is simply another register, so storing into it is the equivalent of a branch and a subroutine call sequence can simply store its value elsewhere. On such a machine, a simple register exchange can serve as a call instruction.
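
A toy C illustration of that last point (registers and numbering invented): with the PC addressable like any other register, one exchange is simultaneously the jump and the saving of the return address.

    #include <stdio.h>

    /* "The PC is just another register": exchanging the PC with a
       register holding the subroutine's address both jumps there and
       leaves the return address behind in that register. */
    int main(void) {
        int pc = 100;          /* pretend we're executing at address 100 */
        int r3 = 200;          /* r3 holds the subroutine's address      */

        /* XCHG pc,r3 acts as a call: */
        int t = pc; pc = r3; r3 = t;
        printf("after call:   pc=%d, return address in r3=%d\n", pc, r3);

        /* the return is just a move (or another exchange): */
        pc = r3;
        printf("after return: pc=%d\n", pc);
        return 0;
    }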

Edit

Lest there be any old 1620 people out there with a "yeah, but...", I'll amend my remarks about the 1620 not having a CALL instruction. If one executed a Branch and Transmit or Branch and Transmit Immediate, the address of the next instruction is saved in an internal register, IR-2. A Branch Back instruction allows one to branch to the address in IR-2. But that's it--you get a one-level subroutine call--and there's no way to read or save the contents of IR-2, although you could display it on the console lights. I tended to think of it as more of a diagnostic aid than a really useful feature.
 
All the Univac 18-bit machines: the 418, 1218, and 1219.
 

Attachments

  • 1219B_Mk152_INSTRUCTIONS.pdf
    167.5 KB
Sometimes the Honeywell 200 was described as a minicomputer, but the demarcation seems to be blurred. Was a machine a minicomputer if you could rest your elbows on top of it to read a broadsheet newspaper there?

The basic Honeywell 200 computer did not have a stack, and there was no specific subroutine call: all branch operations saved the potential return address in the B register in case the code at the new location happened to be a subroutine. Subroutine code started by storing the return address somewhere in main memory to be used in the return branch. The simplest approach was simply to store the address within the return branch code itself. Self-modifying code was an important part of the programming style, as it resulted in the fastest possible execution. Indeed, a stack may be regarded simply as a sequence of instructions to be executed later that the program itself has compiled dynamically.
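
A toy C rendering of that pattern (the encoding, opcodes, and addresses are all invented; the H200's actual instruction set was character-oriented and quite different): every branch saves the return address in B, and the subroutine's first instruction patches B into its own exit branch.

    #include <stdio.h>

    /* Self-modifying return, op*1000 + address encoding (invented).
       JMP saves a return address in B; SRB stores B into the operand
       field of another instruction word -- here, the exit branch.    */
    enum { HALT, JMP, SRB, NOP };

    int mem[64], B;

    int main(void) {
        mem[0]  = JMP * 1000 + 10;   /* "call" SUBR at 10; B becomes 1     */
        mem[1]  = HALT * 1000;
        mem[10] = SRB * 1000 + 12;   /* patch the exit branch's operand    */
        mem[11] = NOP * 1000;        /* ...subroutine body would go here   */
        mem[12] = JMP * 1000;        /* exit branch; operand patched above */

        for (int pc = 0;;) {
            int w = mem[pc++], op = w / 1000, ea = w % 1000;
            if (op == HALT) break;
            if (op == SRB)  mem[ea] = (mem[ea] / 1000) * 1000 + B;
            if (op == JMP)  { B = pc; pc = ea; }  /* branches save B */
        }
        printf("returned through the patched branch and halted\n");
        return 0;
    }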

There couldn't be a fixed distance skip instruction as such because H200 instructions could be any length, even having any amount of trailing garbage characters that didn't actually change the effect of the instruction. In fact the trailing garbage could contain other instructions that could be executed by jumping into them, so instructions could effectively sit side by side in the logical sequence of the code.

It isn't really possible to classify machines into clearly defined categories, as each had its own optimum way of being used, which its programmers would learn. It was a very different world from the idea of entirely portable programming languages that could disregard machine architecture entirely. In the extreme, modern object-oriented languages are not about programming computers at all but are simply ways of describing reality.

Speaking of reality, how many levels of subroutine can the average human brain cope with? How many distractions and interruptions are needed to make us forget what we were doing originally and, maybe more importantly, why we were doing it? I often get the feeling that my mental stack space isn't large enough.
 
Well, it'd be awfully hard to lose a technician in it ;)
 