
Testimonies of using BASIC back in the day

A lengthy set of benchmarks published in 1982 can be found at https://works.bepress.com/mwigan/14/ which includes Apple II+, Commodore PET, and many Z-80 variants.

The comparison ranges from the CDC Cyber 171 (fastest) to the TRS-80 Pocket Computer (slowest), with some amusing notes, like the Seattle System 2 (8086) being faster than the PDP-10 and the IBM System/34.

There are older ones than that. The Cyber 171 isn't even the fastest of the 170 series. It might have been interesting to include, say, a 176.
 
At that time it was generally sneered at by those in the computer science community for being unstructured, inelegant and allowing spaghetti code... a "toy" language.
Most of those claims I felt came later -- the spaghetti code one I heard a lot and my response was always "Oh, nothing like Assembler..."

JMP = GOTO, Jx = IF/GOTO, CALL = GOSUB, RET = RETURN...

But that's the same as the bullshit you got from C asshats who kept saying C was "closer to how assembly worked" which of course is 100% BULLSHIT. Never understood where they got that claim from but Christmas on a cracker it's parroted a LOT, even today.

You want snobbery? Try being a Pascal programmer. All kinds of C snobs always preaching the supposed superiority of C.
Which was aggravating when their "not my favorite language" bullshit was filled with misinformation, what little truth there was to it was a decade or more out of date, and their favorite pet language was needlessly and pointlessly cryptic, aggravatingly vague, and pretty much DESIGNED to make developers make mistakes.

Again, there's a reason I'm not entirely convinced this is a joke:
https://www.gnu.org/fun/jokes/unix-hoax.html

But then, in the circles I was in during the late 70's through to the early '90's, NOBODY gave a flying **** about C on anything less than a mainframe platform. It just wasn't used and I really wonder what planet other people who say it was used were on... From what I saw it sure as shine-ola wasn't used on any MICROCOMPUTER platforms until the 386 came along for any serious project since C compilers were fat bloated overpriced toys -- relegating them to being about as useful as the toy that was interpreted BASIC.

I never found the limitations of even the interpreted BASICs terribly limiting.
That's really where BASIC got the "toy" label IMHO, it was FAR too limited in speed to do anything I wanted to do on any platform I ever had in the 8 bit and even early 16 bit era.

For me, the late '70's and early '80's were spent looking at every language higher than assembly and realizing "This is for lazy ****s who don't want to write real software". Yes, even Pascal got that label from me. Interpreters were too slow on the hardware I had access to for anything more complex than DONKEY.BAS; compilers cost thousands, came on more floppies than I had drives, and took an hour of swapping disks just to compile a "hello world" that wouldn't even fit into a COM file. (Since unlike the effete elitists with deep pockets, we didn't have hard drives and were writing software to run OFF floppy drives or even cassette from systems that only HAD floppies.)

... and I still remember the compiler that flipped that attitude around 180 degrees, and anyone who knows anything about '80's compilers can guess EXACTLY what compiler for which language I'm referring to.

Of course, my recent adventure into trying to use C for a PCJr target only further drove me away from C... to the point I was walking around downtown muttering "****ing gonna shove C up K&R's arse" under my breath. I still say C exists for the sole purpose of perpetuating the myth that programming is hard; a kind of elitist circle-jerk to exclude certain types of thinkers from even having a chance in the field. How the hell it caught on as the norm or even desirable still escapes my understanding -- though admittedly I say the same thing about *nix and posixisms so... YMMV.

In any case, by the time BASIC matured away from line numbers and had compilers, there was NOTHING it offered I couldn't get from better, faster compilers that made smaller, faster executables with cleaner syntax. I wasn't likely to migrate away from Turbo Pascal to QuickBasic; that would have been thirty steps backwards. It would be like migrating back to Clipper after having moved from dBase III to Paradox or Access.

BASIC was a cute toy, but pathetically crippled to the point of being near useless for writing any real software in the timeframe it was "standard in ROM". EVERYTHING I ever encountered for "commercial" software written with it reeked of ineptly coded slow as molasses in February junk.

Though there are some ... I don't know how to word it. Something about certain software from the 'look and feel' perspective makes my brain scream "cheap junk". I knee-jerk into "what is this crap" mode, and almost always it seems related to the language being used to build the program. BASIC has always had that for me; you can usually TELL it's BASIC, and the only way around that seems to be to lace it so heavily with machine language that you might as well have just written it in assembly to begin with. Clipper was another one that just gave me this "rinky cheap half-assed" feeling... which, for all my hatred of C, I never got from stuff written in C.

That 'feeling' lives on today quite well the moment I see anything web related written using ASP. You can just tell: the way the UI feels half-assed, flips the bird at the WCAG, and the agonizingly slow page loads from too many DOM elements, pointless code-bloat crap Visual Studio just slops in there any-old-way, etc, etc... Whatever it was that made BASIC feel like a rinky cheap toy seems to live on in the majority of what people create using the WYSIWYG aspect of Microsoft's Visual Studio.

You'd almost think one of VS's build options is to use VB.NET
 
with some amusing notes, like the Seattle System 2 (8086) being faster than the PDP-10 and the IBM System/34.
I still remember a magazine article from the '80's pointing out that TP3's 48 bit "real" was faster and more reliable on an 8088 than the 32 bit "single" on an 8087... it was just more RAM hungry. Wish I could find a copy of that today.
 
I still remember a magazine article from the '80's pointing out that TP3's 48 bit "real" was faster and more reliable on an 8088 than the 32 bit "single" on an 8087... it was just more RAM hungry. Wish I could find a copy of that today.

There were a lot of bugs in the TP 8087 support which caused inaccurate results. The code also ran slower than it should have on 8087. Conversely, code written using 48-bit reals moved to Delphi will slow down by a factor of about 5 since Delphi converts 48-bit reals into standard floating point and then converts back into 48-bit.
 
How often does anyone actually use FP? Even when I was rendering 3D wireframes I always used scaled integers.

But then one of my biggest gripes about most Pascal implementations is the standard datatypes. I tend to go out of my way to not use them. That was one of my main reasons for starting to write my own compiler, now many years ago (that project has been shelved due to priorities). I wanted to write a Pascal that natively used null terminated strings (yes, I know there are some Pascals that already do this), didn't have any legacy MS-DOS junk in it, and didn't have any inbuilt procedures that violated the rules of the language. (I had great intentions of not having a WriteLn, for one.)

For all my complaints about BASIC, I do not complain about GOTO. I never understood the hatred of GOTO, especially from people who call themselves Computer Scientists. GOTO (JMP, whatever you care to call it) is a very useful tool. For every 10,000 lines of structured code that I write, I tend to use a GOTO at least once, somewhere. There are certainly times and places where it makes sense. And I tend to find the same people who eschew GOTO are the same people who have no problem overusing or misusing a Break/Exit/whatever you care to call it.
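Something like this (a rough C sketch, nothing more, with made-up names) is the kind of once-per-10,000-lines GOTO I mean -- one clean exit out of a nested loop instead of juggling flag variables:

[code]
#include <stdio.h>

/* Search a rows x cols grid for a value; a goto gives one clean exit
   from the doubly nested loop instead of a flag checked on every pass. */
int find_in_grid(int grid[][4], int rows, int cols, int wanted)
{
    int r, c;
    for (r = 0; r < rows; r++)
        for (c = 0; c < cols; c++)
            if (grid[r][c] == wanted)
                goto found;
    return -1;                      /* not present */
found:
    printf("found %d at row %d, col %d\n", wanted, r, c);
    return r * cols + c;
}
[/code]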

I still stand by what I said; all the limitations of BASIC don't prohibit a programmer from writing usable software.

My favourite thing about BASIC (perhaps the only thing I like about it) is simply that it is included in ROM in so many machines. The fact that I can write an assembler (and bootstrap an assembly flat file editor) with nothing other than a C64 the way it came from Commodore, a disk drive (only to avoid the obvious tedium of not having one), and a black and white TV, is what makes hobbyist computing worthwhile to me. Mind you I write of my assembler, but there were countless other large scale projects that I did, and countless more I could have done. Sure I could have done it in machine language, with toggle switches, or in crude assembler in a monitor, but BASIC is sure more friendly than either of those for a large project.

I'm going to have to take down one of my interior doors so I can stage a picture of what my computer desk looked like in 1986. A desktop made from a door, with a C64C, a 1541, and a Penncrest TV (you can see that on my Youtube channel, doing nothing).
 
Half the code I dealt with in the late 70s involved floating point so I guess I have a slightly different perspective. I wanted accurate results with speed.

GOTO/GOSUB with line numbers has a major problem. A well structured BASIC program would have each subroutine contained in its own section, with gaps of unused line numbers between sections. If the programmer incorrectly estimated how many lines were needed for a subroutine, that could lead to hours of renumbering, since early BASICs lacked automatic renumbering utilities. Or the programmer created the ultimate spaghetti, jumping to an unused group of line numbers before jumping back, so a given routine might be broken into 4 or 5 randomly located pieces.

Standard Pascal has no MS-DOS junk. I don't remember how the extended standard handled the end of a variable length string. It was a great improvement on the original Wirth design but not as good as the UCSD implementation which influenced Blue Dolphin (Turbo) Pascal. Nothing like needing to write a text editor to get a decent string implementation added.
 
How much of BASIC's bad reputation is due to Microsoft's numerous low-quality implementations of it?
 
Half the code I dealt with in the late 70s involved floating point so I guess I have a slightly different perspective. I wanted accurate results with speed.
But what were you doing that you needed that? I've just never run into a situation where I really needed FP.

GOTO/GOSUB with line numbers has a major problem. A well structured BASIC program would have each subroutine contained in its own section, with gaps of unused line numbers between sections. If the programmer incorrectly estimated how many lines were needed for a subroutine, that could lead to hours of renumbering, since early BASICs lacked automatic renumbering utilities. Or the programmer created the ultimate spaghetti, jumping to an unused group of line numbers before jumping back, so a given routine might be broken into 4 or 5 randomly located pieces.
Right, but this is a problem with numbered lines, not a problem with GOTO in general.

Standard Pascal has no MS-DOS junk. I don't remember how the extended standard handled the end of a variable length string. It was a great improvement on the original Wirth design but not as good as the UCSD implementation which influenced Blue Dolphin (Turbo) Pascal. Nothing like needing to write a text editor to get a decent string implementation added.
At the time, I was dealing with FPC, for AmigaOS 4. It is the only Pascal at the time that would produce native OS4 executables (probably still is). It generates just horrible code, which is very PC-centric, and a good deal of a programmer's time is spent adapting to the OS.

Turbo Pascal and any Turbo Pascal derivative that I've used is rife with MS-DOS artifacts (I can't speak for Delphi). I may have used Standard Pascal at some point, I don't recall. I've used several Pascals that had no, or almost no, string capability at all.
 
For whatever reason I like BASIC. I'm not a programmer, but I can make things happen once in a while with BASIC, just to suit me. My memory is a little fuzzy, but a long while back, didn't Bill Gates have a standing $25,000 offer/bet (charity of course) that you could choose your language and he would write BASIC and then proceed to whip you like the proverbial red-headed stepchild?
 
I owned a copy of Visual Basic for OS/2 from Microsoft. I purchased it retail from Babbage's in the mall at Ft. Walton Beach, Fl.
 
I'll check in here with some tales from the past--and how I see things.

In my mainframe days, the bulk of my programming was done in assembly. Many thousands of lines, all keypunched. The experience taught me two lessons--the value of coding standards and the value of a really good macro assembler. Coding standards, obviously for maintenance--realize that this was before on-line editors--you sat with a listing with statement sequence numbers/identifiers and worked out directives for the source library program to make your changes--as in "delete these statements, insert these statements, etc." At the same time, you knew that there were other people writing directives, perhaps on the same section of code you were working on. Good documentation and coordination/communication were essential and paid off handsomely.

A good macro assembler would allow you to do just about anything that you could imagine. Remote/deferred assembly, character manipulation, syntax extensions, macros that define macros all could simplify something that would be a nightmare in straight assembly to something that a human could understand. It's a shame that not very many assemblers exist today that can do the same.

About the only other language that I used back then was FORTRAN--you could find it on just about any platform--at least, I know of no mainframe where it wasn't offered. You used it to write utility programs where possible, falling back to assembly where peculiar machine features or speed of execution demanded it. Some FORTRANs were very good indeed, being able to allocate register use and schedule instructions as well as the better assembly programmers.

BASIC wasn't an option back then--the language was too limited and usually was interpreted, not compiled.

I moved to very large vector systems with the emphasis on number-crunching. Huge instruction set with instructions like "SEARCH MASKED KEY BYTE" with up to 6 operands. For that, I used a derivative of FORTRAN called IMPL--and also made changes to the compiler to improve code generation. If you had something specific in mind, there were ways to express assembly instructions inline.

At about that time, I built my first personal microcomputer from a kit (Altair 8800). I'd been following the action at Intel and still have the notes from the 8008 announcement, faded though they may be. A disk was out of the question, so I used an audio tape recorder and the guts from a Novation modem for offline storage. It worked, so I didn't have to toggle things in, or type them from the console. BASIC was one of those programs that I typed in the hex code for, byte by byte. It worked, but not quickly--interpreter, again. So I used a memory-resident assembler which worked for a time. Eventually, I put together a system with Don Tarbell's disk controller and a couple of 8" floppy drives that I scrounged. It wasn't too long before I got CP/M 1.4 (or thereabouts) going, which gave me more possibilities for software development. But still assembly.

Professionally, at about the same time, I took a job with a startup and used an Intel MDS-800 running ISIS-II. Intel had a language that was vaguely reminiscent of PL/I called PL/M-80. It wasn't bad--you could actually make good use of its capabilities, although it was not an optimizing compiler in any sense, so the size of the executable code and its speed wasn't up to assembly. For "quick and dirty", however, it was great.

Eventually, as disk systems got affordable, other languages made their appearance. Various flavors of BASIC (few were true compilers--and there's a reason for that), FORTRAN, COBOL, SNOBOL4, FORTH...you name it. Anyone remember DRI's ISV program that promoted their PL/I? Yes--a remarkably feature-rich PL/I for the 8080. There were Cs--but they weren't all that good, for a very good reason:

The 8 bit Intel platform lacks certain features that make C practical. C uses a stack architecture, derived from the PDP-11 architecture. The PDP-11 is a 16-bit machine; the 8080 is not. Addressing of local stack-resident variables on a PDP-11 is quite straightforward; on the 8080, it's a nightmare. Among other things, the 8080 doesn't have stack-relative addressing, nor does it have indexed addressing. 16-bit addresses have to be calculated the hard way--move the stack pointer to HL, load another register pair with the index, add it to HL, then access the variable byte-by-byte. Really ugly. While the Z80 does have IX and IY indexed addressing, it's also quite limited, and handling simple 16-bit integer stack-resident variables, particularly if the local area is more than 256 bytes long, is again very complicated. You simply can't generate good C code on an 8080. FORTRAN--sure. No stack-resident variables--in fact, no stack required at all. That's why FORTRAN could be run on an 8KW PDP-8, but no such luck for C--C imposes certain demands on the architecture.
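To make that concrete, take a trivial C function with a couple of stack-resident locals (just an illustrative sketch, not anything from a real compiler). A PDP-11 or 8086 addresses the locals in a single instruction off a frame or base register; the 8080 has to copy SP into HL, add an offset and poke at the value a byte at a time for every single access:

[code]
/* Two stack-resident locals -- trivial for a PDP-11 code generator,
   painful on an 8080, which has no stack-relative or indexed addressing. */
int sum_bytes(const char *buf, int len)
{
    int sum = 0;    /* stack-resident local #1 */
    int i;          /* stack-resident local #2 */
    for (i = 0; i < len; i++)
        sum += buf[i];
    return sum;
}
[/code]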

Comes the 8086 in 1979 and, later, the IBM PC. All of a sudden, things get less complicated, although handling large (more than 64KB) data structures is quite awkward. But for the first time, you had a microprocessor with an ISA that could do justice to C. Disks were relatively inexpensive, so you had a full-scale development system. Assembly could be used to write fast and/or small programs, but for the tedious stuff, C was great. Microsoft even endorsed it--and they didn't have a C compiler at the time. They recommended the use of the Lattice C compiler--a basic K&R thing that did the job.

BASIC made sense for business applications--I wrote a BASIC incremental compiler (to P-code) for a company to port the large suite of MCBA applications to an 8085. There was a good reason for the P-code thing: if you were to write a compile-to-native-code BASIC, you'd wind up with a program full of code that did little more than set up arguments to subroutines to do the basic operations. At best, the 8080 could do inline 16-bit arithmetic as long as you didn't need to multiply or divide, but BASIC originally had no explicit type declaration statements. You had numbers and you had strings. The other problem was that 8080 code is not self-relocating. P-code results in smaller programs, location-independent code and even multitasking. The result can be quite small and fast.
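The shape of the thing, very roughly sketched in C (the opcodes here are made up purely for illustration): the compiled program is nothing but a compact, position-independent stream of P-codes, and the only native code is one small dispatch loop.

[code]
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };   /* hypothetical opcodes */

/* One small native interpreter loop; the "compiled" BASIC program is just
   the opcode stream in code[], which can sit anywhere in memory. */
void run(const int *code)
{
    int stack[64], sp = 0;
    for (;;) {
        switch (*code++) {
        case OP_PUSH:  stack[sp++] = *code++;            break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[--sp]);      break;
        case OP_HALT:  return;
        }
    }
}

int main(void)
{
    /* (2 + 3) * 4 */
    const int prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                         OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(prog);
    return 0;
}
[/code]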

As far as languages go, from a compiler-writer's viewpoint, they're all the same at the back end. You take a tree or other abstract representation of the compiled and optimized source code and you translate it into native instructions, perhaps doing some small optimizations. What the front-end eats isn't important. I've been on projects where the same back-end was used for C, FORTRAN and Pascal.

My perpetual gripe with C is that it lacks a decent preprocessor. For some odd reason, preprocessor directives are considered to be evil by the C community. Yet, look at PL/I's preprocessor, complete with compile-time variables, conditionals and other statements. Incredibly useful, if you know how to use it. Yes, C++ has features that make a preprocessor less important, but there you get the whole complex world of what amounts to a different language, when all you wanted was a way to write a general macro to initialize an I/O port. There were times when I've found C++ quite handy for abstracting things, but I like the simplicity of C.
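The I/O port case I have in mind looks roughly like this -- a hedged sketch in which UART_BASE, UART_REG() and outb() are stand-in names, not any real API -- and it's about the limit of what the standard preprocessor gives you, compared to PL/I's compile-time variables and statements:

[code]
/* Hypothetical example: UART_BASE, UART_REG() and outb() are assumed names.
   Textual pasting and a little arithmetic is roughly all the standard C
   preprocessor offers. */
#define UART_BASE    0x3F8
#define UART_REG(n)  (UART_BASE + (n))
#define INIT_PORT(port, value)  outb((port), (value))

extern void outb(unsigned port, unsigned char value);   /* assumed helper */

void init_uart(void)
{
    INIT_PORT(UART_REG(3), 0x80);   /* e.g. set DLAB before the divisor */
    INIT_PORT(UART_REG(0), 0x0C);   /* divisor low byte: 9600 baud (assumed) */
    INIT_PORT(UART_REG(1), 0x00);   /* divisor high byte */
    INIT_PORT(UART_REG(3), 0x03);   /* 8N1 */
}
[/code]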

So, for the last 20-odd years, I've written a lot of C, with a smattering of assembly support. But much more C than assembly. And almost no BASIC, FORTRAN, COBOL or Ada at all--but I'd use any of the above if there were an advantage to using it in any particular application.
 
How often does anyone actually use FP? Even when I was rendering 3D wireframes I always used scaled integers.
You'd be shocked by 3d programming from pretty much 3rd generation Pentium onwards. The age of the MMX and the 3DNow, the time of the sword and axe is nigh, the time of the wolf's blizzard. Ess'tuath esse!

Somehow some math nerds who knew jack shit about programming got together and convinced EVERYONE in the 386 era that matrix multiplies were somehow more efficient and effective than the direct math for translations, rotations, and so forth. HOW they managed to convince people that 64 multiplies of 32 memory addresses into 16 more addresses was faster than four multiplies, three additions and one subtraction of 4 addresses into two I'll never understand... That matrix math started being used for TRANSLATIONS (what should be three simple additions) was pure derp... but it got worse...
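For the record, the "direct math" here is just this (a rough sketch with invented names, nothing fancier): rotating and translating a 2-D point costs four multiplies, three additions and one subtraction, versus pushing the same point through a general matrix.

[code]
/* Direct 2-D rotate + translate: 4 muls, 3 adds, 1 sub total. */
void rotate_translate(float x, float y,          /* point            */
                      float s, float c,          /* sin/cos of angle  */
                      float tx, float ty,        /* translation       */
                      float *ox, float *oy)
{
    *ox = x * c - y * s + tx;   /* 2 muls, 1 sub, 1 add */
    *oy = x * s + y * c + ty;   /* 2 muls, 2 adds       */
}
[/code]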

As 1) everything moved to floating point, and 2) rather than argue it, processor makers created hardware instructions to do it. You know MMX? 3dNow? That's about ALL those do! Hardware matrix multiplies shoving massive amounts of memory around just to do a rotation or translation.

Pretty much by the time Glide was fading, all 3D math on PCs was floating point, typically double precision. OpenGL? DirectX? Vulkan? Double precision floats. Even WebGL in the browser does it now, and they had to change JavaScript to add strictly typed arrays (to a loosely typed language) to do it! Though that change has opened the doors to doing a lot of things JavaScript couldn't do before, making it even more viable as a full stack development option.

You can't even argue it now with 'professional' game programmers even when the situation calls for something matrixes and normal projections can't handle as they are so used to "the API does that for me". Implementing things like arctangent polar projections (which with a lookup table at screen resolution depth can be many, MANY times faster even CPU bound over a gpu projection) are agonizing to implement because the rendering hardware just won't take the numbers unless you translate it all from polar to Cartesian, a process that eliminates the advantages.

Laughably I wrote a game engine about twenty years ago that used GLIDE (3dFX's proprietary API which really didn't do much 3d, it was just a fast textured triangle drawing engine) built ENTIRELY in polar coordinates -- until the view rotation that was in fact handled as a translation -- using 32 and 64 bit integer math that in a standup fight could give the equivalent rendering in OpenGL on a 'similar performing' card that had hardware 3d math a right round rogering.

If you're working on the CPU and dealing with off the shelf 3d model formats now? Double precision floats. If you're working on the GPU through a major API? Double precision floats.

Which for a LONG time left ARM crippled or at the whims of the GPU (which laughably STILL aren't even up to snuff with Intel HD on processing power) until they added the option for a "VFP" extension -- vector floating point, which is a big fancy way of saying MMX on ARM. It's bad enough that an ARM Cortex A8 at 1GHz delivers integer and memory performance about equal to a 450MHz PII (since they are more obsessed with processing per watt than processing per clock); when you realize that things like WebGL or OpenGL ES want to work in double precision floats, and that there is no floating point in hardware on a stock ARM prior to the Cortex A8 and it's optional on A9s, you're looking at 487-scale performance in that regard. (Thankfully VFP and SIMD extensions are now commonplace, but a LOT of cheaper devices still omit them.)

Even more of a laugh when you realize most low end ARM video hardware is just overglorified 20 year old Permedia designs with faster clocks shoved at it.

Part of why without a major overhaul, now that Intel is gunning for that space ARM could be in for a very rough ride in the coming years. VFP is a stopgap at best, even the best offerings in Mali OpenGL ES video for ARM gets pimp slapped by even piddly little Intel HD on some of the new low wattage Celerons. The only real hope ARM has moving forward is existing momentum and if nVidia's new low power strategy for desktop/notebook trickles its way down into the Tegra line.

... and honestly I wouldn't hold my breath on that, I get the feeling nVidia is starting to consider walking away from the mobile space even if their "shield" technology relies on it. It hasn't been the success they hoped for.
 
Floating point might not have made much sense in games since displaying partial pixels is not beneficial. In scientific software, it was common to go with floating point with as many bits of accuracy as possible. Sometimes a good idea, sometimes it just meant the PDP-11 ran all weekend.
 
Games just used whatever method was fastest at the time.
In the early 2d era, it often made most sense to just use integer coordinates, and work with a coordinate system that maps 1:1 to the pixel grid on screen.
With more advanced stuff (scaling/rotating 2D and such, 2.5D or real 3D), you would need additional precision over the screen resolution. So you want some kind of solution that can handle fractional coordinates as well.
Obviously, before FPUs were commonplace, full floating point wasn't very efficient. So games used fixedpoint notation (basically just integers scaled up by a certain power-of-2 value, to get fractional precision).
Another advantage of using integers is that they are very predictable and numerically stable. There's no fancy scaling or rounding that can affect precision in unwanted ways. So if you are writing some kind of rasterizing routine (doing eg a linedrawing or polygon routine), an integer-based solution will be guaranteed to render consistently, and touch all intended pixels.
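Roughly what that looks like in practice, sketched in C with a 16.16 format chosen purely for illustration: coordinates are ordinary integers scaled up by 2^16, so you keep sub-pixel precision while staying in fast, predictable integer arithmetic.

[code]
#include <stdint.h>

#define FIX_SHIFT 16                                    /* 16.16 fixed point */
#define TO_FIX(x)      ((int32_t)((x) * (1 << FIX_SHIFT)))
#define FIX_TO_INT(x)  ((x) >> FIX_SHIFT)               /* drop the fraction */

/* Multiplication needs a wider intermediate so the fraction isn't lost. */
int32_t fix_mul(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a * b) >> FIX_SHIFT);
}

/* Stepping a coordinate by a fractional slope is an ordinary integer add. */
int32_t step_x(int32_t x, int32_t slope)
{
    return x + slope;
}
[/code]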

But as FPUs became commonplace, it became more efficient (and flexible) to perform certain calculations with floating point.
In games there's a pretty obvious transition point: in the era of DOOM, Descent and such, everything was still done with fixedpoint integers (the 486 made the FPU commonplace, but it wasn't a very efficient FPU, so you'd avoid it like the plague for high-performance calculations). Then Quake came around, and a lot of calculations were done with floating point (the Pentium happened, and its FPU could do single-precision operations like fmul and fdiv much faster than integer mul and div. And perhaps more importantly: the FPU instructions could run in parallel with integer ones. For the perspective divide it would fire off one fdiv for every 16 horizontal pixels. The fdiv would effectively be 'free' because it ran in the background while the inner loop was outputting textured pixels. By the time it had rendered 16 pixels, the fdiv was completed and the result was available on the FPU stack).
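Very roughly, that trick looks like this (a hedged C sketch with invented names, not the actual Quake code): one divide at each end of a 16-pixel run, linear interpolation in between, with the next run's fdiv left cooking while the integer loop writes pixels.

[code]
/* Sketch: du, dv, dz are the per-pixel steps of u/z, v/z and 1/z along
   the span; the "texel" written is a stand-in pattern, not real texturing. */
void draw_span(unsigned char *dst, float u_over_z, float v_over_z,
               float one_over_z, float du, float dv, float dz, int width)
{
    float z_left = 1.0f / one_over_z;                   /* divide for run start */
    for (int x = 0; x < width; x += 16) {
        float z_right = 1.0f / (one_over_z + 16 * dz);  /* this fdiv can overlap
                                                           the pixel loop below */
        float u0 = u_over_z * z_left,  u1 = (u_over_z + 16 * du) * z_right;
        float v0 = v_over_z * z_left,  v1 = (v_over_z + 16 * dv) * z_right;
        for (int i = 0; i < 16 && x + i < width; i++) {
            float t = i / 16.0f;                        /* linear interpolation */
            int u = (int)(u0 + (u1 - u0) * t);
            int v = (int)(v0 + (v1 - v0) * t);
            dst[x + i] = (unsigned char)((u ^ v) & 0xFF);
        }
        u_over_z += 16 * du;  v_over_z += 16 * dv;  one_over_z += 16 * dz;
        z_left = z_right;
    }
}
[/code]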
 
Some corrections for deathshadow: MMX is for integer/fixed-point operations, not floating-point. Single-precision maths is typical for most non-scientific GPU-based work, not double. Double-precision maths is supported by GPUs but it's avoided since it performs at far less than half the rate of single-precision operations in all but the highest-end GPUs, i.e. ones not intended for gaming. In ARM processors VFP has been supplemented by NEON, which performs much better with vector operations than VFP does. The Cortex-A8 implements NEON well but has a crippled VFP unit compared to the one in the A9. Even pre-ARMv7 processors, such as the ARMv6 ARM11 used in the original Raspberry Pi, greatly outperform it.
 
Half the code I dealt with in the late 70s involved floating point so I guess I have a slightly different perspective. I wanted accurate results with speed.
The exact opposite of almost all the code I dealt with; in the business world floating point was generally slower and less accurate because it tended to introduce rounding errors.
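A quick illustration of the sort of rounding error I mean (a minimal C sketch): ten dimes summed in binary floating point don't quite make a dollar, while integer cents stay exact.

[code]
#include <stdio.h>

int main(void)
{
    double dollars = 0.0;
    long cents = 0;
    int i;
    for (i = 0; i < 10; i++) {
        dollars += 0.10;   /* 0.10 has no exact binary representation */
        cents += 10;       /* scaled integers stay exact              */
    }
    /* prints 0.99999999999999989 (or similar) versus 100 */
    printf("float total: %.17f   cents total: %ld\n", dollars, cents);
    return 0;
}
[/code]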
 