
How to handle breakpoint re-entry without removing the breakpoint

A common way of implementing breakpoints in machine-language monitors is to set up a "software interrupt" pointing back into a monitor entry point and replace the opcode bytes at the breakpoint locations with the instruction that generates this software interrupt (e.g. RST on 8080/Z80, SWI on 6800, BRK on 6502). When you re-enter the monitor you put the original opcode byte back, and thus it looks as if the processor magically stopped at the breakpoint.
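
To make that concrete, here's a rough C sketch of the byte-swapping involved. It's purely illustrative: the mem array stands in for the target's RAM, and I've assumed RST 38h (opcode $FF) as the trap instruction.

Code:
#include <stdint.h>
#include <stdio.h>

#define TRAP   0xFF                     /* RST 38h on the Z80; BRK is $00 on the 6502 */
#define MAX_BP 4

static uint8_t mem[0x10000];            /* stand-in for the target's 64 KB RAM */

struct breakpoint {
    uint16_t addr;                      /* where the trap byte was planted */
    uint8_t  saved;                     /* the original opcode byte        */
    int      active;
};

static struct breakpoint bp[MAX_BP];

/* Plant a breakpoint: remember the original byte, overwrite it with the trap. */
static int bp_set(int slot, uint16_t addr)
{
    if (bp[slot].active)
        return -1;
    bp[slot].addr   = addr;
    bp[slot].saved  = mem[addr];
    bp[slot].active = 1;
    mem[addr] = TRAP;
    return 0;
}

/* Remove a breakpoint: put the original opcode byte back. */
static void bp_clear(int slot)
{
    if (!bp[slot].active)
        return;
    mem[bp[slot].addr] = bp[slot].saved;
    bp[slot].active = 0;
}

int main(void)
{
    mem[0x4000] = 0xD2;                 /* pretend a JP NC,nn starts here */
    bp_set(0, 0x4000);
    printf("after set:   %02X\n", mem[0x4000]);   /* FF (the trap)        */
    bp_clear(0);
    printf("after clear: %02X\n", mem[0x4000]);   /* D2 (original opcode) */
    return 0;
}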

However, if you continue execution from that breakpoint, you can't swap the software interrupt opcode back in, because you now want to execute the original opcode. But if you just restart at the original opcode, you've now left the monitor without setting the breakpoint.

One way to get around this is to simulate the execution of the instruction at the breakpoint, but that's expensive in terms of code size.

How do monitors and debuggers that don't do this simulation maintain the breakpoint on re-entry to the user program?

I guess one way to do it would be to have a way of calculating the length of each possible instruction, put a breakpoint at the opcode after the actual breakpoint, and, on re-entering the monitor from that, restore the original "instruction-after" opcode and the original breakpoint just before it. But that doesn't account for conditionals, where you'd need to put in two breakpoints, and you also need to calculate the second breakpoint based on stack values if the instruction is a conditional return. While probably cheaper than simulation, this still seems like a fair amount of code.
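
For what it's worth, here is roughly what that successor-address calculation might look like in C. Again, this is only a sketch: it decodes a handful of Z80 branch forms (JP, JP cc, JR, JR cc, RET, RET cc), and the insn_length() stub stands in for the full instruction-length table a real monitor would need.

Code:
#include <stdint.h>
#include <stdio.h>

static uint8_t mem[0x10000];            /* stand-in for the target's RAM */

/* Length of a straight-line instruction.  A real monitor needs a full table
 * covering every opcode plus the CB/ED/DD/FD prefixes; this stub is only a
 * placeholder for that table. */
static uint16_t insn_length(uint8_t op)
{
    (void)op;
    return 1;                           /* placeholder: pretend one byte */
}

/* Fill out[] with every address execution could reach after the instruction
 * at 'addr', given stack pointer 'sp'.  Returns the number of candidates. */
static int next_addrs(uint16_t addr, uint16_t sp, uint16_t out[2])
{
    uint8_t op = mem[addr];

    if (op == 0xC3) {                                   /* JP nn           */
        out[0] = mem[(uint16_t)(addr + 1)] | (mem[(uint16_t)(addr + 2)] << 8);
        return 1;
    }
    if ((op & 0xC7) == 0xC2) {                          /* JP cc,nn        */
        out[0] = (uint16_t)(addr + 3);                  /* condition false */
        out[1] = mem[(uint16_t)(addr + 1)] | (mem[(uint16_t)(addr + 2)] << 8);
        return 2;                                       /* condition true  */
    }
    if (op == 0x18) {                                   /* JR e            */
        out[0] = (uint16_t)(addr + 2 + (int8_t)mem[(uint16_t)(addr + 1)]);
        return 1;
    }
    if ((op & 0xE7) == 0x20) {                          /* JR cc,e         */
        out[0] = (uint16_t)(addr + 2);
        out[1] = (uint16_t)(addr + 2 + (int8_t)mem[(uint16_t)(addr + 1)]);
        return 2;
    }
    if (op == 0xC9 || (op & 0xC7) == 0xC0) {            /* RET / RET cc    */
        uint16_t ret = mem[sp] | (mem[(uint16_t)(sp + 1)] << 8);
        if (op == 0xC9) { out[0] = ret; return 1; }
        out[0] = (uint16_t)(addr + 1);                  /* condition false */
        out[1] = ret;                                   /* condition true  */
        return 2;
    }
    /* CALL nn, CALL cc,nn, DJNZ, RST and JP (HL) need the same treatment;
     * everything else just falls through to the following opcode. */
    out[0] = (uint16_t)(addr + insn_length(op));
    return 1;
}

int main(void)
{
    uint16_t out[2];
    mem[0x4000] = 0xD2;                 /* JP NC,$1234 */
    mem[0x4001] = 0x34;
    mem[0x4002] = 0x12;
    int n = next_addrs(0x4000, 0xFF00, out);
    for (int i = 0; i < n; i++)
        printf("candidate %d: %04X\n", i, out[i]);      /* 4003 and 1234  */
    return 0;
}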
 
I can't remember where I saw this, but most CPUs have a single step interrupt. So you can hit your breakpoint (implemented via a trap instruction), substitute in the correct opcode, then when you resume do a single step first, get that interrupt, restore the breakpoint, and then continue.
 
I can't remember where I saw this, but most CPUs have a single step interrupt. So you can hit your breakpoint (implemented via a trap instruction), substitute in the correct opcode, then when you resume do a single step first, get that interrupt, restore the breakpoint, and then continue.
The single-step interrupt you're talking about is implemented with external hardware, at least on the CPUs I mentioned (8080, 6800, 6502, Z80). You generally won't see it on most systems (at least most of the popular Japanese and Western microcomputers that I've seen); of the several dozen I'm aware of, the only one that has it is the 1970s NEC TK-80 trainer board (and its successor the TK-85).
 
Couldn't you just put in a NOP after the breakpoint?

That is, if you're working with fresh source code and not trying to analyze an EPROM dump... :)
 
Couldn't you just put in a NOP after the breakpoint?
I don't understand how that helps. Let's say the instruction at which I breakpoint is (in Z80 assembly) jp NC,$1234. Before executing the code, the monitor replaces the jp NC with an rst that jumps into the monitor, leaving the following bytes ($34 $12) as they are. When that instruction is about to be executed, the rst is executed instead and the monitor is re-entered. What happens after that, when I tell the monitor to continue execution? Before continuing, the jp NC must be restored in order to execute it, but that wipes out the breakpoint.

That is, if you're working with fresh source code and not trying to analyze an EPROM dump... :)
Yes, this breakpoint technique doesn't work with code in ROM. I can live with that.
 
All poor implementations. Many other architectures have breakpoint registers in hardware.
Why be lazy? Look for yourself. The source code is on the web.
 

Attachments

  • ddtsrc.zip
You remove the breakpoint.
Figure out where the code will land after the instruction and put breakpoints there. Maybe more than one!
Set a flag so that when one of those new breakpoints is hit, the original breakpoint is put back and the new ones are removed.
Run.
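
In C-ish terms, the sequence might look something like this (illustrative only: RST 38h is assumed as the trap byte, and the candidate addresses would come from a decode step like the one sketched earlier in the thread).

Code:
#include <stdint.h>
#include <stdio.h>

#define TRAP 0xFF                              /* RST 38h as the trap byte */

static uint8_t  mem[0x10000];                  /* stand-in for the target's RAM */

static uint16_t perm_addr;                     /* the user's breakpoint    */
static uint8_t  perm_saved;
static uint16_t temp_addr[2];                  /* follow-on breakpoints    */
static uint8_t  temp_saved[2];
static int      temp_count;
static int      rearm_pending;                 /* "put perm_addr back" flag */

static void plant(uint16_t addr, uint16_t *where, uint8_t *saved)
{
    *where = addr;
    *saved = mem[addr];
    mem[addr] = TRAP;
}

/* User asks to run until the breakpoint: just plant the trap byte. */
static void go(uint16_t addr)
{
    plant(addr, &perm_addr, &perm_saved);
}

/* User asks to continue from the breakpoint that was just hit.
 * 'cand' holds the possible next-instruction addresses. */
static void cont(const uint16_t *cand, int n)
{
    mem[perm_addr] = perm_saved;               /* restore the real opcode  */
    for (temp_count = 0; temp_count < n; temp_count++)
        plant(cand[temp_count], &temp_addr[temp_count], &temp_saved[temp_count]);
    rearm_pending = 1;                         /* remember to re-arm       */
    /* ...resume the user program at perm_addr... */
}

/* Called on every re-entry into the monitor via the trap. */
static void monitor_entry(void)
{
    if (rearm_pending) {
        for (int i = 0; i < temp_count; i++)   /* remove follow-on traps   */
            mem[temp_addr[i]] = temp_saved[i];
        temp_count = 0;
        mem[perm_addr] = TRAP;                 /* re-arm the original bp   */
        rearm_pending = 0;
        /* ...silently resume the user program again... */
    }
    /* otherwise it's a real breakpoint hit: show registers, prompt, etc.  */
}

int main(void)                                 /* walk through the dance   */
{
    mem[0x4000] = 0xD2; mem[0x4001] = 0x34; mem[0x4002] = 0x12; /* JP NC,$1234 */
    go(0x4000);
    printf("armed:      %02X\n", mem[0x4000]); /* FF                       */

    /* breakpoint hit; user types 'continue' */
    uint16_t cand[2] = { 0x4003, 0x1234 };
    cont(cand, 2);
    printf("continuing: %02X %02X %02X\n",
           mem[0x4000], mem[0x4003], mem[0x1234]); /* D2 FF FF             */

    monitor_entry();                           /* a follow-on trap fires   */
    printf("re-armed:   %02X %02X %02X\n",
           mem[0x4000], mem[0x4003], mem[0x1234]); /* FF 00 00             */
    return 0;
}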
 
All poor implementations. Many other architectures have breakpoint registers in hardware.
Well, I'm afraid most of the world disagrees with you there. The 8080, 6800, 6502 and Z80 were far more popular than whatever you're talking about.

And I'd be interested to see whether your revealed preference matches your stated one. Up to about 1983, how many machines did you buy that had hardware breakpoint registers, compared to ones you bought that didn't, like the CPUs above?

Why be lazy? Look for yourself. The source code is on the web.
Had you read my post, you might note that the technique I described at the end of it is what your attached source does. Had you thought about it for a moment, you might have realised that I possibly had read that source, which in fact I had, and you wouldn't have made the incorrect and unjustified accusation that I was lazy.
 
I did a fair number of projects in the '70s using the microprocessors of the day.
I used very expensive in-circuit emulators. They worked well. They usually showed the state after the breakpoint, but they had trace buffers so you could see how you got there. They didn't actually do breakpoints; it was address-match logic that forced a break.

I would have killed for a modern micro with JTAG and breakpoint logic.

You kids have got it good. And get off my lawn!
 
And I'd be interested to see whether your revealed preference matches your stated one. Up to about 1983, how many machines did you buy that had hardware breakpoint registers, compared to ones you bought that didn't, like the CPUs above?
I didn't buy mainframes; however, it's worth noting that ARM devices, of which you probably own several, do implement hardware breakpoints. I have no idea how many ARM CPUs I own, only that the number is probably greater than the number of x86/x80 CPUs that I own. I imagine that this holds true for most people, but even the later x86 CPUs include some breakpoint registers (DRx).
Had you read my post, you might note that the technique I described at the end of it is what your attached source does.
Then why ask the question? You have access to a multitude of source materials. Although a "software interrupt" is convenient, it's not a requirement.

Fifty years ago, I would have been lost debugging a big virtual-memory vector machine were it not for the hardware breakpoint (page 6-321 of https://bitsavers.org/pdf/cdc/cyber/cyber_200/60256000_STAR-100hw_Dec75.pdf ) and the trace register (which records the address of the last branch taken). But then, that was 50 years ago, and hardly unique.
 
I didn't buy mainframes; however, it's worth noting that ARM devices, of which you probably own several, do implement hardware breakpoints. I have no idea how many ARM CPUs I own, only that the number is probably greater than the number of x86/x80 CPUs that I own.
You seem to have a great deal of difficulty reading what I write. This time, you ignored the "Up to about 1983" part. Obviously the cost of circuitry changed between the early-'70s-to-early-'80s era and later times, and that changes the calculus for what counts as "poor design."

Do feel free to answer either of my actual questions. I don't think your other contributions to this thread are very helpful.

Then why ask the question? You have access to a multitude of source materials.
Because I wasn't happy with the answers I was seeing after looking at those source materials and wondered if someone knows of a better way to handle this.

I think the bigger question here is, if you have no answers, and can't even be arsed to read and understand the questions, why are you posting here? Wouldn't a place like Twitter be better if all you want to do is throw shade?
 
The 8080, 6800, 6502 and Z80 were far more popular than whatever you're talking about.
That doesn't mean they were good. Just that they were cheap.

Whenever your breakpoint is hit, put another breakpoint at the next instruction. When the second breakpoint is hit, restore the first one. Yes, that requires you to decode all branching instructions; there's no way around this.

If there had been an easy, cheap, software-only solution, nobody would have bothered implementing hardware for it. Hardware costs a lot of money, and people back then were not stupid.
 
That doesn't mean they were good. Just that they were cheap.
Price is just another factor in how one evaluates whether something is good or not, just as with size, speed, power consumption, and so on. If someone decides to use X over Y in a project, then X was by definition overall better than Y for that project, and it being better because it was cheaper is no different from it being better because it was faster, or because it had more memory, or whatever.

It's quite common for people to say that in 1980 the 68000 was "better" than the Z80, while ignoring the price difference. But that's akin to saying that the Z80 is better than the 68000 because it was physically smaller and needed fewer pins, ignoring that the 68000 could address more memory.

Yes, that requires you to decode all branching instructions; there's no way around this.
If there had been an easy, cheap, software-only solution, nobody would have bothered implementing hardware for it. Hardware costs a lot of money, and people back then were not stupid.
Hm. Unfortunate. I was hoping that there was some trick I had missed.
 