
Two new TP adventures

Ruud

I have been a TP fan since 1985. So one day I started to think about writing my own Pascal compiler, one that should be able to handle at least TP3, TP4 and UCSD Pascal. Another goal: it only outputs macros, macros that can be handled by my own assembler. This assembler is able to handle the 6800, 6502, 65816, 8088, Z80 and 8080. So with the right macros, the same program could run on various machines. OK, in reality it is a bit more complicated: a 6502 executable meant for the Apple ][ most probably won't run on a C64 and vice versa. And writing those macros will be a looooooot of work.

Second adventure: with some help I was able to disassemble TP3. I used the result to create a BP-like version: it takes a Pascal file as input and outputs a COM file.

All above is freeware. If interested in the sources, info or whatever, just email me.
 
What did you write the compiler in? What is a "BP-like" version?
I am writing it with the Free Pascal Compiler, but purely for convenience: more lines on the screen. I check from time to time whether it compiles and runs under Borland Pascal 7. And if things work out, I will scale down to TP6, TP5.5, TP5, etc., as far down as possible. But at the moment the first main goal is that the program should be able to compile itself. The second main goal: the compiled output should run just as well as the version produced by BP7.

Sorry, I meant BPC, the command-line version.
 
An update:

Busy with a renovation of some rooms in my house, so I don't have that much time available. At the moment I'm busy with floating point numbers (FP). You can find a lot of information about FP on the internet, but the moment you ask "But how does a computer handle FP in machine language?" it becomes very silent. I partly found a solution to convert a string like "12.3456", but I still have to find a way to convert a number with a large exponent into FP.
Converting FP back into ASCII is even more difficult. The binary fraction "0.11" is 0.75 in decimal and can be calculated like this: 0.11 = 1/2 + 1/4 = 0.75. Yes, a very nice explanation, really understandable by humans. But no explanation of how a computer does it. I found a way by looking at the source code of the BASIC of the Commodore 64. And while writing the previous sentence I just remembered I probably have another source: the UCR library, used by the book Art of Assembly. We'll see.....
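
To make the digit-extraction idea concrete: keep the fraction as a scaled integer, multiply it by 10, and the integer part that falls out is the next decimal digit. A minimal Free Pascal sketch, with a 16-bit fraction width and made-up names purely for illustration (not the actual 64-bit routines):

Code:
program BinFracToDec;
{ Sketch only: turn a binary fraction (an integer with an implied scale of
  2^-Bits) into decimal digits by repeatedly multiplying by 10 and taking
  the integer part. The 16-bit width and the names are assumptions. }

const
  Bits = 16;                               { assumed fraction width }

procedure FracToDecimal(frac: LongWord; digits: Integer);
var
  i: Integer;
begin
  { frac must be < 2^Bits }
  Write('0.');
  for i := 1 to digits do
  begin
    frac := frac * 10;                     { shift one decimal digit upward }
    Write(frac shr Bits);                  { integer part = next digit }
    frac := frac and ((1 shl Bits) - 1);   { keep only the fraction }
  end;
  WriteLn;
end;

begin
  { binary 0.11 = 1/2 + 1/4 -> stored as $C000 with scale 2^-16 }
  FracToDecimal($C000, 6);                 { prints 0.750000 }
end.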
 
An update:

I managed to convert ASCII into floating point and back again. I use 64 bits, but in fact it is the 32-bit single precision format plus an extra 32 bits for the mantissa. I also created routines to add, subtract, multiply and divide two FP numbers. Next challenge: square root, sine, cosine, tangent, ln, log, etc. I figured out I can use Taylor series for everything except square root. I still have to find out how to do that.
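
For the Taylor series part, the recurrence is easy to show in Free Pascal. This sketch uses the host's Double just to make the loop readable; the real version would of course be built on the custom 64-bit FP add/multiply/divide routines:

Code:
program TaylorSine;
{ Sketch of a Taylor-series sine: each term is the previous term
  times -x*x / ((2n)*(2n+1)), so no factorials or powers are needed. }

function TaylorSin(x: Double): Double;
var
  term, sum: Double;
  n: Integer;
begin
  term := x;                 { first term: x^1 / 1! }
  sum := x;
  for n := 1 to 10 do
  begin
    term := term * (-x * x) / ((2 * n) * (2 * n + 1));
    sum := sum + term;
  end;
  TaylorSin := sum;
end;

begin
  WriteLn(TaylorSin(0.5):0:10);   { compare with the built-in Sin }
  WriteLn(Sin(0.5):0:10);
end.

Ten terms is plenty for arguments up to about pi/2; larger arguments would first be reduced into that range.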
 
I wish I could remember the name of the book, but one of the old 6502 books (like "6502 Applications") has a treatise, and code, for FP. I've tried to search for it recently, past few months, but was unable to find it again.

I also once found a description of FP, and I thought it was part of the original Oberon book. But Wirth managed to succinctly and clearly express FP in less than 2 pages. It was very "aha!". I don't recall it talking about ASCII conversion, but basic math was pretty simple. I wish I could reiterate it.
 
Finding a way to convert ASCII to FP was the biggest problem. To make a very loooooong story short, I started to use tables:
- a table containing the digital version of 10, 100, 1000, etc.: 40 rows of 9 columns.
- a table containing the digital version of 0.1, 0.01, 0.001, etc.: 50 rows of 14 columns.
Then I created a routine that converts an ASCII number into a number made up of an array of 24 words, using the above tables. All further operations are done using more of these arrays. And in the end the result is converted into the numbers needed to form the FP number, again using these tables.

These tables and arrays are the reason I limited the exponent to 8 bits: if I used the 11-bit exponent of double precision, I would either need arrays of 184 words or have to find another method. And a range from roughly 1E-39 to 1E39 seemed good enough for me.
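
To give an idea of what building such a table could look like: assuming the "digital version" of 10, 100, 1000, ... means the binary value of each power of ten spread over 16-bit words, the rows can be generated by repeatedly multiplying a multi-word number by 10. The 40 rows of 9 words match the sizes mentioned above; the rest is an assumption, sketched for illustration only:

Code:
program PowerOfTenTable;
{ Sketch: build 10^1 .. 10^40 as multi-word binary numbers by repeated
  multiply-by-10. Assumes "digital version" = binary value in 16-bit words. }

const
  Rows  = 40;
  Words = 9;                               { 9 * 16 = 144 bits, enough for 10^40 }

type
  TBigNum = array[0..Words - 1] of Word;   { little-endian word order }

var
  Table: array[1..Rows] of TBigNum;

procedure MulBy10(var n: TBigNum);
var
  i: Integer;
  t, carry: LongWord;
begin
  carry := 0;
  for i := 0 to Words - 1 do
  begin
    t := LongWord(n[i]) * 10 + carry;      { word times 10, plus carry in }
    n[i] := Word(t and $FFFF);
    carry := t shr 16;                     { carry out to the next word }
  end;
end;

procedure BuildTable;
var
  r, i: Integer;
  cur: TBigNum;
begin
  for i := 0 to Words - 1 do cur[i] := 0;
  cur[0] := 1;
  for r := 1 to Rows do
  begin
    MulBy10(cur);                          { row r now holds 10^r }
    Table[r] := cur;
  end;
end;

begin
  BuildTable;
  WriteLn('10^3, low word = ', Table[3][0]);   { 1000 }
end.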
 
I have no experience with this. Normally, for simple integers, you take the first digit, add it to the result, then, if there are more digits, you multiply the result by 10 and keep adding.

Once you reach the decimal point, you could "simply" reverse the remainder of the string (up to the exponent) and do the same thing, shifting downwards.

So for 123.456.

Start with 0, add 1, result 1

Multiply by 10 and add 2, result 12

Multiply by 10 and add 3, result 123.

Detect ".".

Scan to the end (i.e. 6) and run backward.

Interim result = 6.

Divide by 10, add 5, result 5.6

Divide by 10, add 4, result 4.56

Divide by 10, add interim result to main result: 123.456.

Once you hit the exponent, you use that to shift the exponent of the current result.

No doubt there are problems with this as there are with all things floating point.
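
In (Free) Pascal the control flow would look roughly like this. It uses a host Double purely to show the idea; a real FP library would replace the multiply/divide by 10 with its own mantissa/exponent operations, and sign and exponent handling are left out:

Code:
program ParseDecimal;
{ Sketch of the two-pass scheme above: integer digits forward with *10,
  fraction digits backward with /10. Assumes an unsigned number with no
  exponent part, e.g. '123.456'. }

function ParseNumber(const s: string): Double;
var
  i, dot: Integer;
  intPart, fracPart: Double;
begin
  intPart := 0;
  fracPart := 0;
  dot := Pos('.', s);
  if dot = 0 then dot := Length(s) + 1;

  { forward pass: shift the result up by 10, add the next digit }
  for i := 1 to dot - 1 do
    intPart := intPart * 10 + (Ord(s[i]) - Ord('0'));

  { backward pass: add the digit, then shift the result down by 10 }
  for i := Length(s) downto dot + 1 do
    fracPart := (fracPart + (Ord(s[i]) - Ord('0'))) / 10;

  ParseNumber := intPart + fracPart;
end;

begin
  WriteLn(ParseNumber('123.456'):0:6);   { 123.456000 }
end.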
 
....
Multiply by 10 and add 2, result 12
....
I did the same.

......
Divide by 10, add 5, result 5.6
......
That sounds simple, but the 6502, for example, doesn't support division. The use of tables and multiplying (= repeated additions) is no problem.

Once you hit the exponent, you use that to shift the exponent of the current result.
Eh, that's what I thought too, but when dealing with these digital numbers, shifting means multiplying by ten. My solution: write the number out in its plain form, thus without the 'E', and start from the beginning.

No doubt there are problems with this as there are with all things floating point.
At the moment: square root. I don't really have an idea where to start. I think this "long division" approach could work, but I have no other idea yet.
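
For reference, the classic bit-by-bit "long division" square root looks like this on a plain 32-bit integer. It is only a sketch: for the FP mantissa the same loop would run over the mantissa bits, after making the exponent even so it can simply be halved:

Code:
program LongDivSqrt;
{ Sketch of the bit-by-bit square root: only shifts, compares and
  subtractions, so it maps well onto a 6502. }

function ISqrt(n: LongWord): LongWord;
var
  rem, root, bit: LongWord;
begin
  rem := n;
  root := 0;
  bit := $40000000;                     { highest power of 4 in 32 bits }
  while bit > n do bit := bit shr 2;    { align with the top bits of n }
  while bit <> 0 do
  begin
    if rem >= root + bit then
    begin                               { this bit belongs in the root }
      rem := rem - (root + bit);
      root := (root shr 1) + bit;
    end
    else
      root := root shr 1;               { this bit does not }
    bit := bit shr 2;
  end;
  ISqrt := root;
end;

begin
  WriteLn(ISqrt(144));        { 12 }
  WriteLn(ISqrt(2000000));    { 1414 }
end.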
 
That sounds simple, but the 6502, for example, doesn't support division.
It supports division as much as it supports multiplication -- i.e. not directly.

Obviously you can multiply and divide on the 6502, it's just slow. Well, even on processors that have hardware multiply and divide, it's still slow. In the same way, the block move instructions on the Z80, while a single instruction, aren't necessarily faster than doing it yourself.

Another thought is to simply remove the decimal point and fold it into the exponent handling.

So, 123.456 becomes 123456E-3, then you just have to add and multiply until you get to the end, then correct with the new exponent. You need this as a fundamental operation anyway.

Yea, this works really well.

Simply put, what you really do is convert the number to its internal binary form. It's an integer operation. Then you use that value directly as the mantissa, and adjust the exponent as appropriate. You can write a specific "x10" routine. Make 2 copies (or just 1, and use the original): the first one you shift by 3 (i.e. x8), the second one you shift by 1 (i.e. x2), then add them together. That gives you x10. SHOULD be cheaper than a generic multiply.
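
In Pascal terms the x10 trick is just this; on a 6502 it would be a handful of shift (ASL/ROL) passes over the multi-byte value plus one multi-byte add. Sketch on a 32-bit value:

Code:
program MulByTen;
{ x*10 = x*8 + x*2: two shifted copies added together. }

function Times10(x: LongWord): LongWord;
begin
  Times10 := (x shl 3) + (x shl 1);
end;

begin
  WriteLn(Times10(12345));   { 123450 }
end.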
 
It supports division as much as it supports multiplication -- i.e. not directly.
What I meant was that the 6502 doesn't have instructions that support multiplication and division. The 8088 at least has MUL and DIV.

.... You can write a specific "x10" routine. .....
Before doing any conversion, I convert an "E" number into a normal one. So 1.234E7 becomes 12340000 and 1.234E-3 becomes 0.001234. Just a matter of shifting the dot according to the exponent. And then I convert this ASCII number into FP, so no multiplication is needed afterwards.
(Or did I misunderstand you?)
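
To illustrate the dot-shifting step, a simplified sketch (it assumes a well-formed, unsigned mantissa and leaves out error handling):

Code:
program ExpandE;
{ Sketch: rewrite d.dddEn as a plain number without an exponent by
  moving the decimal point, before the real ASCII-to-FP conversion. }

uses SysUtils;

function Expand(const s: string): string;
var
  ePos, dotPos, expo, shift, code: Integer;
  digits: string;
begin
  ePos := Pos('E', UpperCase(s));
  Val(Copy(s, ePos + 1, Length(s)), expo, code);   { exponent after the E }
  digits := Copy(s, 1, ePos - 1);

  dotPos := Pos('.', digits);
  if dotPos > 0 then
  begin
    { dropping the dot multiplies the value by 10^k (k = digits after
      the dot), so subtract k from the exponent to compensate }
    shift := expo - (Length(digits) - dotPos);
    Delete(digits, dotPos, 1);
  end
  else
    shift := expo;

  if shift >= 0 then
    Expand := digits + StringOfChar('0', shift)      { append zeros }
  else if -shift < Length(digits) then
    Expand := Copy(digits, 1, Length(digits) + shift) + '.' +
              Copy(digits, Length(digits) + shift + 1, -shift)
  else
    Expand := '0.' + StringOfChar('0', -shift - Length(digits)) + digits;
end;

begin
  WriteLn(Expand('1.234E7'));    { 12340000 }
  WriteLn(Expand('1.234E-3'));   { 0.001234 }
end.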
 