
Identifying a 'buffer' space at the end of a program

Hi @durgadas311 ,

To be a bit less vague: what value would I expect to find in SP when the CCP loads a program and jumps to 0100, and what does the BDOS do about the stack during calls? (Which you anticipated and answered - it seems the BDOS needs to relocate the stack on being called.) So I probably have some additional code to implement for stack management.

In a truly single-user, single-task system, stack management isn't an issue. Once you start switching tasks, it becomes a bigger challenge. But it's always reasonable to make stack management part of the system architecture, and even CP/M has an architecture around it, whether intentional or by default.

How it handles the stack is its stack management architecture. I'm just not familiar with that architecture.

@Chuck(G), I don't think it really mattered too much in the 64K days... It was only when there was enough memory and concepts started changing rapidly that things like protected mode became practical, and the early implementations were a bit of a mess. It wasn't until the 386 that it was practical for most applications, and by then the future path was a lot clearer: the initial wave of the PC revolution had passed and it was just evolution from then on.

Stacks are challenging at the best of times, but when well managed they don't inhibit much. It's still better to have a prescribed stack architecture than a default one, IMO.
 
To be a bit less vague: what value would I expect to find in SP when the CCP loads a program and jumps to 0100, and what does the BDOS do about the stack during calls? (Which you anticipated and answered - it seems the BDOS needs to relocate the stack on being called.) So I probably have some additional code to implement for stack management.
The exact value in SP is irrelevant. Suffice to say it points to a small but viable stack, physically located inside the boundaries of the CCP. It has enough space to make BDOS function calls, but not much else. A CP/M program has the option of saving the SP and assigning a new value (that points to a viable stack), then later restoring the CCP SP and doing RET - but it can only do that if it does not use all of the TPA memory (it must not overwrite the CCP). If the program is going to exit by doing a JMP 0, then it can just use the CCP SP and overflow - destroying the CCP (which is not used again). It can also assign any valid stack address within the TPA to SP and use that. I'm not sure I'd call that an architecture, but these are the rules for user program stacks in CP/M.

Programs that are 8080 compatible will do something like "LXI H,0; DAD SP" to get the current SP into HL, and then save that to be restored by retrieving it into HL and then doing "SPHL". But that has nothing to do with CP/M, it is simply how one does it on 8080-compatible CPUs. This is the method used in the BDOS to save and restore the user's SP.
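As a rough sketch of that first option (the labels OLDSP and STKTOP and the 64-byte stack size are just placeholders here, not anything CP/M prescribes), a program that wants to return to the CCP with RET might look like:
Code:
        ORG     100H
START:  LXI     H, 0
        DAD     SP              ; HL = the CCP's SP (DAD only disturbs carry)
        SHLD    OLDSP           ; save it for the exit path
        LXI     SP, STKTOP      ; switch to the program's own stack
        ;
        ; ... main body, BDOS calls, etc. (must not overwrite the CCP) ...
        ;
EXIT:   LHLD    OLDSP           ; recover the CCP's SP
        SPHL
        RET                     ; the CCP's return address is still on its stack

OLDSP:  DW      0
        DS      64              ; private stack area, grows downward
STKTOP  EQU     $
If the program is instead going to exit with JMP 0, none of that save/restore is needed.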
 
By way of contrast, SID starts the program being debugged with SP=0100h, so unless the program sets up its own stack there's a chance of it being overwritten by (for example) DMA to 80h.
 
My safe rule of thumb when I was still writing CP/M code was "Don't assume anything--set up your own stack if you're going to need one."
But even if you set up your own stack, you still have to assume something, don't you? Namely, the amount of stack that needs to always be available for interrupt processing. If I'm going to use up to 50 bytes of stack, then I need to set aside 50 + N bytes somewhere. Is there a documented value for N, or an accepted rule of thumb? Is eight enough?
The BDOS quickly switches to its own stack ...
I was thinking about how quickly the OS could switch to its own stack. A maximally accommodating OS would make the N above only two. Doing this on a Z-80 wouldn't be too painful, thanks to the LD (addr), SP instruction. On an 8080, it's doable, but tricky. The only way to get and save the value of SP requires use of a DAD SP, but that destroys the value of the carry flag, which you can't save on the stack first or else you'll increase N to 4. The value of carry therefore needs to be implicitly saved in the value of PC, by branching, like so:
Code:
        SHLD    bufend-2        ; save caller's HL just above the new stack
        LXI     H, 0
        JC      L1              ; branch on the carry we need to preserve
        DAD     SP              ; HL = old SP (leaves carry clear)
        JMP     L2
L1:     DAD     SP              ; HL = old SP (clears carry)
        STC                     ; so set it again
L2:     LXI     SP, bufend-2    ; switch to the private stack
        ; Now push whatever registers will be used.
        PUSH    PSW
        PUSH    H               ; old SP, needed by the return code
        ; ...
The return code would look like this:
Code:
        ; Pop whatever registers were used.
        ; ...
        POP     H               ; old SP, pushed last on entry
        POP     PSW             ; restores A and the preserved flags
        SPHL                    ; back to the interrupted program's stack
        LHLD    bufend-2        ; recover the caller's HL
        RET
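For comparison, on a Z80 (Zilog mnemonics; savsp and stktop are made-up labels) a sketch of the same switch is much simpler, since LD (nn),SP doesn't touch the flags at all:
Code:
isr:    LD      (savsp),SP      ; save the interrupted SP, flags untouched
        LD      SP,stktop       ; switch to a private stack
        PUSH    AF              ; now save whatever registers the ISR uses
        PUSH    HL
        ; ... ISR body ...
        POP     HL
        POP     AF
        LD      SP,(savsp)      ; back to the interrupted program's stack
        EI
        RET                     ; or RETI, depending on the hardware

savsp:  DW      0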
 
But even if you set up your own stack, you still have to assume something, don't you? Namely, the amount of stack that needs to always be available for interrupt processing. If I'm going to use up to 50 bytes of stack, then I need to set aside 50 + N bytes somewhere. Is there a documented value for N, or an accepted rule of thumb? Is eight enough?
Well, that depends on the BIOS implementation, no? If you use a lot of stack space in your ISRs, you're going to make problems for users unless you take precautions. A common approach is to simply do the minimum in the ISR and set a flag, so you've consumed perhaps only 4 bytes of stack space (2 for PSW/A and 2 for return). That does add some latency (code outside of the ISR has to get around to checking the flag), so it may not work for everything. What I did was to call a stack-management routine as the first instruction in such an ISR. Simple test there was to see if the system stack was already in use (exit), otherwise switch to it. That meant a maximum of perhaps 8 bytes of user stack being used--above that which might have been used by the BDOS. This was more than 40 years ago, but I don't recall any problems with programs running up against the stack size.
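A minimal sketch of that flag-setting style (8080 mnemonics; evtflg is a made-up name), just to show how little user stack it touches:
Code:
isr:    PUSH    PSW             ; 2 bytes here + 2 for the interrupt's pushed PC
        MVI     A, 1
        STA     evtflg          ; just note that the event happened
        POP     PSW
        EI
        RET

        ; somewhere in the main loop, outside interrupt context:
poll:   LDA     evtflg
        ORA     A
        JZ      poll            ; nothing pending yet
        XRA     A
        STA     evtflg          ; clear the flag, then handle the event
        ; ...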
 
I don't recall any problems with programs running up against the stack size.
Hmm, so if an application programmer was writing an application that used 50 bytes of stack space, he needed to set up his own stack, right? And if the amount of space he allocated for that was 50 bytes, there would definitely be a problem as soon as an interrupt came in while the program was at peak stack usage, right? So I guess you're saying that such programmers would instead allocate 50 + N bytes, and they were all choosing a sufficiently large value of N. My question is what value was that? I guess the answer is somewhere in the range eight to sixteen?
 
DRI never specified a number for "N"; that was just the nature of the times. Programmers got pretty good at judging "enough stack". It's always a trade-off between speed, stack consumption, and code size. It really depends on whether you're writing a portable app (e.g. WordStar or DBase) or writing a program (utility) only to be used on a specific platform. Many of the larger, more portable apps used the BDOS address as their stack pointer, or at least as an upper bound for SP. Software Toolworks C/80 did that (after subtracting space for stdio), and checked in sbrk() to see if you were in danger of overlapping heap and stack.
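For what it's worth, the "BDOS address as the upper bound for SP" trick looks roughly like this at the start of a program (a hypothetical sketch; this program exits with JMP 0, so the CCP stack never needs restoring):
Code:
        ORG     100H
START:  LHLD    0006            ; bytes 6-7 hold the BDOS entry address
        SPHL                    ; stack now grows down from just below the BDOS
        ;
        ; ... program, BDOS calls, etc. ...
        ;
        JMP     0               ; warm boot; the CCP and its stack get reloaded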
 