
Self Modifying Code... Best Practice or avoid?

Slightly OT: The NSC800 (Z80 CPU in 8085 clothes) did have an interesting quirk. The ICR is a write-only register lying at I/O address 0xBB. However, it can only be written to by the instructions "OUT (C),A" and "OUT (N),A". Block I/O writes are unaffected. So, in effect, you have an I/O space of 256½ ports.
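A minimal sketch of what that looks like in Z80 assembly (ICR_VALUE is a placeholder name; see the NSC800 datasheet for the actual bit layout):

        LD   A,ICR_VALUE   ; placeholder value -- consult the NSC800 datasheet
        OUT  (0BBh),A      ; "OUT (N),A" writes the internal ICR
        LD   C,0BBh
        OUT  (C),A         ; "OUT (C),A" writes the internal ICR too
        ; OTIR/OUTI with C = 0BBh bypass the ICR and reach the external port 0BBh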
 
Zilog's eZ80 processors (except the eZ80190) force all I/O addresses to be 16-bit, regardless of whether they are in Z80 or ADL mode.

Notably they also add the “X” variant to the repeating I/O instructions (e.g. INIRX), which actually behave with a 16-bit address the way the Z80 repeat instructions do with an 8-bit port, i.e. sensibly for tasks like, say, shoving a string out a parallel port. Like I said earlier, to me this is the smoking gun that the original Z80’s behavior is a “useful quirk”, not an according-to-Hoyle address extension; the repeat instructions are effectively broken in the 16-bit interpretation.
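To make that concrete, here is the classic 8-bit reading of a Z80 block output (buffer, count and PORT are made-up names):

        LD   HL,buffer   ; source bytes
        LD   B,count     ; byte count -- also driven onto A15-A8 during each transfer
        LD   C,PORT      ; the 8-bit port number
        OTIR             ; out (C),(HL); inc HL; dec B; repeat until B = 0
        ; Read as a 16-bit port address in BC, the "port" changes on every
        ; iteration as B counts down, so the repeats make no sense as 16-bit I/O.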
 
... on protected mode operating systems.
Even a lowly ARM Cortex-M3 sports an MPU (Memory Protection Unit), so I would expect all general-purpose and most real-time operating systems to prevent code modification by default. Outside of retro computing, I haven't seen anything fully unprotected in a long time.

I suppose you could find a way around it, just as JIT compilers do, but I can't even imagine a scenario where this makes the least bit of sense.
Modern embedded processors (at least the ones I've used) have learned from the errors of their 8-bit forefathers, so such shenanigans are no longer needed.
Can you elaborate? I don't understand what you're trying to say.
 
There are plenty of embedded CPUs that don't run in a protected environment. There was a time and place for SMC, and hopefully it has long since passed.
 
I rather liked the PIC32 memory management (http://ww1.microchip.com/downloads/en/devicedoc/60001115h.pdf), though I suspect it's rarely fully exploited. I suspect most embedded MCUs run in privileged mode all of the time.
That's the situation with modern silicon--you get everything but the kitchen sink tossed in and unless you read the voluminous hardware manuals, you may not even know that much of it exists--or even if it's included in the development libraries.
 

Indeed. I just looked at my favourite MCS51-series chip, and even though it has a Harvard architecture, there is scope to rewrite the main memory as pseudo-flash (it also has regular flash), which I think is intended for having the software download either updates or minor patches in-band and then rewrite itself by writing to code space - so I think that qualifies as being SMC-compatible, though the flash rewrite isn't instant and it's not going to save cycles. It does need to be enabled in the fuses, though, to be able to rewrite its executable data store.

Also I really like the MCS51 for its versatility with respect to the massive number of registers, bit operations and addressing modes... It might be an ancient Intel architecture, but as binary compatibility goes, it's still in current use. I can't think of any specific examples where I'd use SMC with a '51 architecture.
 
I've never been able to find out whether sending B to A8~A15 was intentional or coincidental.

Almost certainly co-incidental. I have no doubt that the Zilog engineers well understood the behaviour of the 8080, which duplicated the port number on A15-A8 for IN and OUT instructions (see note 18 on page 2-20 of the Intel 8080 Microcomputer Systems Users Manual (Intel, Sept. 1975)). They clearly didn't feel that this behaviour was worth replicating for IN and OUT, and left the Z80 incompatible in this respect, as they did in a few other minor areas. And if they're not adding the circuitry to do this for IN and OUT, obviously it's not worth adding for the new I/O instructions.

Like I said earlier, to me this is the smoking gun that the original Z80’s behavior is a “useful quirk”, not an according-to-Hoyle address extension; the repeat instructions are effectively broken in the 16-bit interpretation.

Yes, I agree, in combination with the behaviour of Z80 IN and OUT, it seems clear that the use of the B register here is a useful quirk, probably related to BC being internally a single 16-bit register.
 

I think in that other thread it was mentioned that there are netlist-level descriptions of the Z80’s internal layout which make it clear that the “bleed-through” behavior you get on A8-A15 (which doesn’t just involve BC) is just an artifact of how the register pairs are multiplexed onto the internal bus, and that masking it would have cost extra transistors.
 
I just looked at my favourite MCS51-series chip, and even though it has a Harvard architecture, there is scope to rewrite the main memory as pseudo-flash (it also has regular flash), which I think is intended for having the software download either updates or minor patches in-band and then rewrite itself by writing to code space - so I think that qualifies as being SMC-compatible, though the flash rewrite isn't instant and it's not going to save cycles.
Constant rewrites slowly kill the flash. I can imagine some engineers happily abusing this, but I would still see this as a hard nope on self-modifying code - even if it was fast.

Technically, you can do similar things on some AVRs (they can put themselves into flashing mode and rewrite parts of their code memory), but that is intended for bootloaders and not self-modifying code. AVRs also contain embedded non-volatile memory (EEPROM), but that isn't executable either.

Dive into the STM32H74x reference manual here and listen to your brains run out your ears. Hard to believe that the little boards are less than $15 from China.
I've had to dive into the USB specification in the past. The necessary complexity for very simple uses is incredible, and many specs are badly written (doubly fun when they are secret as well).
 
It took me quite some time to get my composite-mode client USB device functioning on my MCUs. The USB specs are befuddling and writing descriptors will drive you nuts. I guess the point is to use someone else's (hopefully less buggy) library code.

Did it really have to be done this way? SCSI employs multiple devices that are distinguished by a simple ID/Unit pair. There are commands to determine the nature of what's being addressed. Easy-peasy. Why USB had to do what it did is a mystery. Even more maddening is that USB mass storage uses the SCSI command set...
 
The USB video spec is so badly designed that the video descriptor values depend on the other devices in the composite device. Drove me crazy until I figured that out, then had to abandon the project because it became infeasible. I'm not aware of any device using the ... more flexible parts of the spec, but I do know for sure that the drivers shipped with all major operating systems don't support them either.

It's simply incredible.
 
Slightly OT: The NSC800 (Z80 CPU in 8085 clothes) did have an interesting quirk. The ICR is a write-only register lying at I/O address 0xBB. However, it can only be written to by the instructions "OUT (C),A" and "OUT (N),A". Block I/O writes are unaffected. So, in effect, you have an I/O space of 256½ ports.
The NSC800 has quite a few other enhancements that make it incompatible with some undocumented instructions and existing Z80 features, e.g. an 8-bit auto-incrementing refresh (R) register and a multiplexed data/lower-address bus.
 
By the time the NSC800 made an appearance, the use of larger DRAM chips was becoming common, so I suspect they thought that the 8-bit range on the R register was a feature, as were the integrated interrupts. Basically a Z80 ISA in 8085 clothing. There is an article somewhere on replacing the 8085 on a CompuPro 85/88 board with an NSC800. I tried to do it on another system, but kept running into issues with the 8202 DRAM controller.
 
The punch line is that a lot of 64K x 1 DRAMs support 7-bit refresh. I really was surprised by that. (FWIW, in none of the data sheets that I looked at did I find a 64K x 4 with 7-bit refresh.) I do have to wonder if the Z80 R register was an influence on this. (I know that R worked that way because the Z80's 16-bit increment look-ahead carry circuit split at 7 bits.) Now I want to do a 64K in a TRS-80 Model I mod someday.

Anyhow, I don't like self-modifying code at all. Even when there's a need (like the IN and OUT statements in MS-BASIC), it should go into a dedicated RAM area. And keep R/W variables out of the code segment too, so you can put it in ROM. (CP/M violates that rule.)
 
Now I want to do a 64K in a TRS-80 Model I mod someday.

This was a thing people did back in the day, both homebrew hack versions and products like BIGMEM that added some memory mapping capabilities. Considering how the number one reliability issue with the Model I is putting 32K of its RAM at the wrong end of a cable, it's certainly not a terrible idea.

...where you would build the appropriate 8080/8085 IN or OUT instruction? Sounds like self-modifying code to me. :)

Sure, but at least you're getting to *pretend* that you're doing things in a modular way without having to waste the 1K of space your "ROM table" method required. That is, you document the location of the IN/OUT operand as if it were an 8-bit variable, and that location is simply "padded" with the IN/OUT opcode in front and a jump back to wherever in ROM you detoured from after it. All together that's what, four bytes? It's technically self-modifying code, but you can at least *pretend* in the rest of your software that the location in RAM you're setting is effectively a memory-mapped "multiplexer" register. But obviously this technique wouldn't work if you were dealing with a system that didn't have any RAM at all.
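A minimal sketch of that stub, in Z80 mnemonics (the same opcode applies on the 8080/8085; the label names and the CALL/RET pairing are my own assumptions):

DOOUT:  DEFB 0D3h        ; the "OUT (n),A" opcode
IOPORT: DEFB 0           ; the operand byte the rest of the code treats as a "port variable"
        RET              ; back to the ROM caller

        ; Caller in ROM: treat IOPORT as if it were a memory-mapped register
        LD   A,40h       ; example port number
        LD   (IOPORT),A  ; the one self-modifying write
        LD   A,0FFh      ; data byte to send
        CALL DOOUT       ; executes "OUT (40h),A" and returns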

(Although I guess I'm sort of confused how that could come up; if your application is so simple that you can do without RAM, needing to iterate over arbitrary I/O port numbers seems... like a real edge case? Maybe for this case what you do is decode a 74373 register so it overrides one byte of your ROM chip, that byte being the operand for your IN or OUT command. Gawd help you if you need to reassemble the software after you've built the hardware...)
 