cjs
Veteran Member
A common way of implementing breakpoints in machine-language monitors is to set up a "software interrupt" vector pointing back to a monitor entry point and replace the opcode byte at each breakpoint location with the instruction that generates that interrupt (e.g. RST on 8080/Z80, SWI on 6800, BRK on 6502). When you re-enter the monitor you put the original opcode byte back, so it looks as if the processor magically stopped at the breakpoint.
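To make that half concrete, here's the arm/disarm step as a minimal C sketch (the flat mem[] array and the bp_set/bp_clear names are just for illustration, not from any particular monitor):

```c
#include <stdint.h>

#define BP_OPCODE 0x00   /* 6502 BRK; on 8080/Z80 an RST opcode such as 0xFF would do */
#define MAX_BP    4

static uint8_t mem[65536];   /* stand-in for the target's memory */

static struct { uint16_t addr; uint8_t saved; int active; } bp[MAX_BP];

/* Arm: save the user's opcode byte, plant the software-interrupt opcode. */
static void bp_set(int i, uint16_t addr) {
    bp[i].addr   = addr;
    bp[i].saved  = mem[addr];
    bp[i].active = 1;
    mem[addr]    = BP_OPCODE;
}

/* On re-entry to the monitor: put the original byte back, so memory
   looks exactly as the user wrote it. */
static void bp_clear(int i) {
    if (bp[i].active) {
        mem[bp[i].addr] = bp[i].saved;
        bp[i].active    = 0;
    }
}
```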
However, if you continue execution from that breakpoint, you can't swap the software-interrupt opcode back in before resuming, because the processor now has to execute the original opcode at that address. But if you just restart at the original opcode, you've left the monitor with the breakpoint disarmed, so it won't fire the next time execution reaches it.
One way to get around this is to simulate the execution of the instruction at the breakpoint, but that's expensive in terms of code size.
How do monitors and debuggers that don't do this simulation maintain the breakpoint on re-entry to the user program?
I guess one way to do it would be to have a table for calculating the length of each possible instruction, plant a temporary breakpoint on the opcode just after the actual breakpoint, and, on re-entering the monitor from that, restore the original "instruction-after" opcode and re-arm the original breakpoint just before it. But that doesn't account for conditional jumps, where you'd need to plant two breakpoints (one on the fall-through path and one at the branch target), and for a conditional return you'd have to calculate the second breakpoint address from the return address on the stack. While probably cheaper than simulation, this still seems like a fair amount of code; a rough sketch of what I mean follows.
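Again an illustrative C sketch rather than real monitor code — insn_length() and the opcode classifiers are the hypothetical table-driven helpers a monitor would have to carry, and bp_set_temp() plants a software interrupt just like bp_set() above but marks it temporary:

```c
#include <stdint.h>

static uint8_t  mem[65536];     /* stand-in for the target's memory */
static uint16_t cpu_sp;         /* the user program's saved stack pointer */

/* Hypothetical helpers; on the 8080 the length table is 256 entries,
   though it packs into 64 bytes at two bits per opcode. */
int      insn_length(uint8_t opcode);    /* 1, 2, or 3 bytes */
int      is_cond_jump(uint8_t opcode);   /* JC, JNZ, ...: two successors */
uint16_t jump_target(uint16_t pc);       /* destination operand of the jump */
int      is_cond_return(uint8_t opcode); /* RC, RNZ, ...: target is on the stack */
void     bp_set_temp(uint16_t addr);     /* like bp_set() above, marked temporary */

/* Plant a temporary breakpoint at every address the instruction at pc
   can reach, then resume with the original opcode in place. Whichever
   one fires re-enters the monitor, which clears the temporaries and
   re-arms the real breakpoint at pc. */
void plant_continue(uint16_t pc) {
    uint8_t op = mem[pc];

    bp_set_temp(pc + insn_length(op));        /* fall-through / not-taken path */

    if (is_cond_jump(op))
        bp_set_temp(jump_target(pc));         /* taken path */
    else if (is_cond_return(op))              /* taken path: saved return address */
        bp_set_temp(mem[cpu_sp] | (mem[(uint16_t)(cpu_sp + 1)] << 8));

    /* an unconditional jump or call would want only its target, not pc + len */
}
```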