
Using DTMF for recording data

MykeLawson

Experienced Member
Joined
Mar 21, 2014
Messages
396
I've often wondered why DTMF was never used for data recording in the early days, to record data to cassette for example. I've read many times about the inherent issues with properly detecting the single tones from KCS tapes, etc. DTMF was introduced in the early 60s, so it was available at the time. I know there were issues with keeping tape speeds stable, so maybe that's the reason. But, me being me, I was thinking of a project to hook a DTMF encoder up to a serial port, output some data, and record it on my PC as either a .wav or an .mp3 file, then try the reverse to retrieve the data. There is a lot of software already out there that could easily be used. Just a curious future project, maybe.
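For what it's worth, the encoding half is easy to sketch on a modern PC. Here's a minimal illustration in Python, assuming the standard DTMF 4x4 frequency grid (16 symbols, so 4 bits per tone); the sample rate, tone/gap timing, and file name are all made up for the sake of the example:

```python
# Hypothetical sketch: encode bytes as DTMF tones into a WAV file.
# Standard DTMF grid: 4 row x 4 column frequencies = 16 symbols = 4 bits/tone.
import wave, struct, math

ROWS = [697, 770, 852, 941]          # Hz, low group
COLS = [1209, 1336, 1477, 1633]      # Hz, high group
RATE = 8000                          # samples/s, telephone-grade
TONE_S, GAP_S = 0.06, 0.04           # 60 ms tone, 40 ms gap (illustrative timing)

def nibble_tone(nibble):
    """Render one 4-bit symbol as a two-tone burst followed by silence."""
    f1 = ROWS[nibble >> 2]
    f2 = COLS[nibble & 3]
    samples = []
    for i in range(int(RATE * TONE_S)):
        t = i / RATE
        samples.append(0.45 * math.sin(2 * math.pi * f1 * t) +
                       0.45 * math.sin(2 * math.pi * f2 * t))
    samples += [0.0] * int(RATE * GAP_S)
    return samples

def encode(data, path="dtmf_data.wav"):
    out = []
    for byte in data:
        out += nibble_tone(byte >> 4)    # high nibble first
        out += nibble_tone(byte & 0x0F)
    with wave.open(path, "w") as w:
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in out))

encode(b"HELLO")
```

Decoding it back out of the recording would be the harder half, of course.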
 
Detecting the zero crossings of an analog signal is very fast and can be done with a very low chip count. Decoding DTMF at high speed would require a special IC and make the system expensive. The decoder circuitry might also need more time to stabilize, as analog filters have a finite response time, and neither digital filters nor high-speed sampling were feasible in real time back then. In other words, while you get four bits per symbol with DTMF, your symbol rate would most likely need to go down, negating the benefit.
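To illustrate how cheap the zero-crossing approach is, here's a sketch of a KCS-style software decoder. The 1200 Hz = 0, 2400 Hz = 1, 300 baud figures are the published KCS values; the code itself is just illustrative:

```python
# Illustrative sketch: decode Kansas City Standard FSK by counting zero crossings.
# KCS: 300 baud; a 0 bit is 4 cycles of 1200 Hz, a 1 bit is 8 cycles of 2400 Hz.
def decode_bit(cell):
    """cell: list of signed samples spanning one 1/300 s bit cell."""
    crossings = sum(
        1 for a, b in zip(cell, cell[1:])
        if (a < 0) != (b < 0)            # sign change = zero crossing
    )
    # 1200 Hz gives ~8 crossings per cell, 2400 Hz gives ~16; split the difference.
    return 1 if crossings > 12 else 0
```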

Look at phone modem technology for comparison: early standards (up to 1200 bps) used FSK as well; more advanced modulations weren't used until much later.
 
My first homebrew setup used the guts of a Novation 300 baud modem and a GE portable cassette recorder. Worked okay, even with the crappy recorder. You did have to shift the send or receive frequencies to make this work, however, since the originate and answer frequencies differ on a modem. I had a switch on the caps to shift the send frequency, so I could still use the acoustic coupler part as a normal modem.

I don't think DTMF would have afforded many advantages over FSK, given the equipment of the day.

There were audio-cassette decks engineered as replacements for paper tape reader/punches. Pretty common on CNC rigs. AFAIK, those used DC levels, however. I had a Techtran dual drive that had inter-block stop/start and search functions. Could run at 2400 baud.
 
I found that the best tape drives for data are the worst audio ones.
All to do with the waves: ears prefer sine, but computers prefer square.

When manufacturing home computer cassette tapes for distribution, I used a home-built rig to do multiple copies at once, using tape mechanisms that allowed electrical control, so it required less operator input.
Still have a left over mechanism somewhere.

Years ago, radio stations used to broadcast programs during their computer shows. You recorded it on your radio cassette recorder and played it back into your computer.
 
You know, that 'cassettepunk' idea might be a nice tinker project at some point. Given that the tones would get recorded on a PC, and not a cassette or 8-track, there would certainly be no issue with tape speed variations, dirty heads, etc. Could be fun....
 
I suspect that changing the batteries in a portable recorder could change the tape speed enough to alter the recorded frequencies. Weak batteries slow the tape down; fresh batteries run fast. DTMF needs fairly exact frequencies. The simple solution was to use wall power.
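To put rough numbers on that (the 3% speed error is a guess for a weak-battery portable, and the ±1.5% accept window is the commonly quoted DTMF receiver tolerance):

```python
# Back-of-envelope check (illustrative numbers): a tape running fast by
# `speed_error` shifts every recorded frequency by the same factor.
speed_error = 0.03            # 3% fast - an assumption, not a measured figure
tolerance = 0.015             # ~±1.5% accept window often quoted for DTMF receivers

f_row = 941                                  # Hz, highest row tone
f_played = f_row * (1 + speed_error)         # ~969 Hz off the tape
print(f_played, "Hz vs accept window",
      f_row * (1 - tolerance), "-", f_row * (1 + tolerance), "Hz")
# A 3% speed error lands well outside a 1.5% window, so decoding fails.
```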

The cassettepunk concept takes it a bit far, though; using both an Arduino and dedicated DTMF components moves it beyond what could be done with normal '70s hardware. All that technology to barely match punched paper tape.
 
DTMF was a terrible concept for recording anything - it was intended as a way of carrying single keypresses down an audio line that could be decoded through bandpass filters, and it would be incredibly slow - it would probably struggle to reach 75 bps.
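For the curious: in software, that bandpass filtering is usually done with the Goertzel algorithm, a single-bin DFT. Here's a hedged sketch using the standard DTMF grid - this is how modern software decoders tend to work, not necessarily what period hardware did:

```python
import math

def goertzel_power(samples, freq, rate):
    """Relative energy of `samples` at `freq` Hz - a one-bin DFT."""
    n = len(samples)
    k = round(n * freq / rate)
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect(samples, rate=8000):
    """Classify one tone burst: pick the strongest row and column tone."""
    rows = [697, 770, 852, 941]
    cols = [1209, 1336, 1477, 1633]
    r = max(range(4), key=lambda i: goertzel_power(samples, rows[i], rate))
    c = max(range(4), key=lambda i: goertzel_power(samples, cols[i], rate))
    return r * 4 + c        # 4-bit symbol: row index in the high two bits
```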

Most tape recording systems used FM, which could have been improved with MFM, and most typically operated around 1 kbps - not terribly fast, but in 5 to 6 minutes it would carry a full memory load of data.
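For anyone unfamiliar with FM (biphase) recording, the rule is simple: a flux transition at every bit-cell boundary, plus an extra mid-cell transition for a 1. A toy sketch (levels and units are purely illustrative):

```python
# Illustrative FM (biphase) encoding: a transition at every bit-cell
# boundary, plus an extra mid-cell transition for a 1 bit.
def fm_encode(bits):
    level, out = 1, []
    for b in bits:
        level = -level            # clock transition at the cell boundary
        out.append(level)         # first half of the cell
        if b:
            level = -level        # data transition mid-cell marks a 1
        out.append(level)         # second half of the cell
    return out                    # two half-cells per bit: 1 kbps needs a 2 kHz half-cell clock

print(fm_encode([0, 1, 1, 0]))   # -> [-1, -1, 1, -1, 1, -1, 1, 1]
```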

Some variations existed - e.g., the Spectrum used one flux gap length for a 0 and twice that length for a 1, so recordings of different data would have different lengths. But these were easy for a CPU to decode in software in real time.
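The software decode loop really is that simple: measure the gap between transitions and compare against a threshold halfway between the two nominal lengths. A sketch, with gap lengths in arbitrary timer ticks:

```python
# Sketch of the decode loop for Spectrum-style "short = 0, double length = 1":
# classify each measured gap against a threshold halfway between 1x and 2x.
def classify_gaps(gap_lengths, short_len):
    threshold = short_len * 1.5          # halfway between the two nominal lengths
    return [0 if g < threshold else 1 for g in gap_lengths]

# Gap lengths in timer ticks (units are illustrative, not the real Spectrum timing).
print(classify_gaps([850, 1700, 900, 1650], 855))   # -> [0, 1, 0, 1]
```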

The 8-track concept had better luck, however - a 2-track version was made by Sinclair, called the Microdrive, which had a data rate closer to 100 kbps and a formatted capacity of around 80 kbytes, though I've managed to fit nearly 640K into the data format through quirks in the formatting. That is its absolute theoretical limit, though, and it would take nearly a minute to load a full 640K, or any file that required a full traverse of the data structure.

These were very successful, and the cartridges were a fraction of the size of an 8-track cartridge, but they used the same principle: two interleaved tracks, decoded in hardware, with biphase (a type of FM) encoding. In theory, they could have held nearly 256 KB of data with read times as short as 8 seconds, which would have made them quite competitive with floppies for early PCs. They were often called Stringy Floppies - in fact, that was also another type/brand of 8-track-style cartridge, as was the Rotronics Wafadrive. They compared well to the floppy disks of the era, but you could fit an entire drive in your pocket. I have some here, and will connect them to my PC design when I get it into hardware.
 
When I first joined what was known as The Bell System in 1971, the network was primarily analog but could be tricked into handling a small amount of digital. When I retired in 2004, reality was almost the exact opposite: even voice traffic was sliced and diced into bits before going much of anywhere. Along the way, many signalling and/or transmission protocols (SF, DTMF and its subscriber-based cousin Touch-Tone, and a host of others) were tried and for the most part abandoned. But one survives at the lowest levels of the ultra-modern SONET protocol (Synchronous Optical NETwork).


That survivor of the telephone signalling wars is the T1 protocol (developed in 1957 and deployed starting in 1962) and its close cousin outside the USA, the E1 protocol.


Dig deep enough into any SONET transmission bit stream and you'll find the lowly 24-channel T1 protocol or its 30-channel E1 cousin.

It just works.
 
There's still a lot of PDH around, and much of it is slowly transitioning to asynchronous with the heavier use of Ethernet and IP cores in telcos, but you're right - the B channels still remain, don't they... 64 kbps. Things are a little different in Australia, as a lot of the newer telcos developed in a purely IP environment, but even they still have to carry E1s over IP at times. Telephones have pretty much vanished across Australia now, though - handsets are rare and usually use VoIP.

Back around 2000, I remember Worldcom were using quadcoders, but that too relies on a B channel. There were 30 B channels and one D channel in a 2048 kbps E1. Even a quadcoder will work with DTMF, though.

A quick google tells me DTMF requires more than 23 ms of tone to be valid, and only at around 40 ms is a tone confirmed valid (anything under 23 ms must be rejected; I'm not sure what happens in between).

If so, that means you can fit in about 25 tones/sec, and each tone can carry 4 bits of data or thereabouts: a 100 bps raw rate. Then you have signalling requirements on top of that, so my original guess of around 75 baud would be reasonable. Across all 8 tracks (four stereo pairs), that works out to a raw aggregate of about 800 bps for DTMF.
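Working those numbers through (the figures are the ones above; back-to-back tones with no extra inter-digit gap is an assumption):

```python
# Redoing the arithmetic from this post: back-to-back 40 ms tones give
# 25 symbols/s, and each DTMF symbol carries 4 bits.
tone_s = 0.040                        # confirmed-valid tone duration
tones_per_sec = 1 / tone_s            # 25 symbols/s
raw_bps = tones_per_sec * 4           # 100 bps per channel, before framing
print(raw_bps, raw_bps * 8)           # 100.0 800.0 - eight tracks in parallel
```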

OTOH, with FM (which would be more reliable than MFM for this application) you could possibly rely on 1200 bps per channel; by 16 channels, that would provide 19200 bps of data throughput on an 8-track.

FM should still encode acceptably on analog tape and would work well, but ideally you would want to speed up the tape, as they did with the stringy floppy and Microdrive (two channels per cartridge), to get better data throughput.

But it remains that the bandwidth of an 8-track is limited by the speed at which the tape passes the head, much as with the old cassettes most home computers used. Unless that can be sped up, or the flux transitions spaced more closely, it quickly becomes the limiting factor. 8-tracks would never have been a good solution for this reason - the versions built for computers worked well primarily because of their speed, and those did compete with the floppies of their era.
 
The fiber-connected terminal that services phones in my neighborhood definitely uses IP. Before the fiber setup, the terminal was connected via copper to the CO. IP speeds were generally limited to about 1.5Mbps (frame relay?--I don't recall).
Go a half-mile up the road and it's still copper to the CO, with about 12 wire miles, so generally useless for anything but voice-grade modem.
Such is the state of telecom in the rural US.
 

Starlink really had a big impact, didn't it? I remember in 2019, after their third successful launch, it was obvious they were going to succeed. I wonder how long before the cells in their satellites start working. Though it's different, because with a phone you just pay money and a handset shows up... new services require some basic level of technical skill to install and operate, even if it's sometimes little more than inserting a plug.

We've reached an incredible junction of technology and progress, where it's starting to form a new row of cards on top of the previous ones in our civilisation's technological house of cards - I wonder at what point we transitioned to a world where a single person can't do it all anymore? And what is the next phase? A group of people and an AI, probably.

CP/M was kind of that point for operating systems. I think I've even heard it referred to as such - an operating system that a single person can write... and it's 100% true. It's at a scale where one person can write it from scratch in their spare time, as was the first version of DOS.
 
This is very much off-topic, but take a look at SerenityOS (lots of videos at https://www.youtube.com/@awesomekling). He did write a reasonably modern operating system, web browser, and other things from scratch. But in contrast to CP/M, this is a multi-year effort involving lots of deep knowledge, and it's far from finished. Very impressive nonetheless.

Hardware back then was much simpler, which reduced the demands on software immensely. SymbOS (http://www.symbos.de) for some Z80 machines is an example of a decent one-man show.
 
I'd say SymbOS is an INCREDIBLE example of a one-man show... if it's all written from scratch in assembly, then that's amazing.

I'm curious about SerenityOS - it looks like it has reused a lot of code from elsewhere, but I'm only basing that on appearances, and that's not the case for SymbOS. It's very difficult to write an entire OS in a high-level language without reusing someone else's code. Even if it did use other code, it's still incredibly impressive; and if it's all from scratch, it's every bit as amazing as SymbOS.

I rewrote CP/M from scratch, and I'm constantly learning things from the process, such as why they did things certain ways - because when I try to improve on them, often I can't. My OS is a little different from CP/M, but it's close enough that I can run most CP/M software on it without issues. Even then, I only went for CP/M 2.2, and it took me a few months to write in spare time and over the Christmas break. I decided not to take on CP/M Plus, as I didn't have the necessary knowledge to plan it out - I only barely had enough to start on 2.2, working backwards from the documentation and putting the code together - and I still made a LOT of mistakes in the process.

I don't think I'd tackle a GUI quite yet, though I am aiming to build hardware vector graphics and bitmap copy into the final design, using all 1985 technology.

In my case, I started with a documentation file, turned it into comments, then started writing in code around it.

It's fun...
 
I don't think Starlink has had that big an impact--quite yet. The long-term economics (e.g. regularly replacing LEO satellites) are still an open question. At least the few neighbors down the road who have satellite Internet don't use Starlink as far as I can tell; HughesNet or Viasat mostly. For TV, Dish still seems to reign supreme.
Even that is probably going to be history in my area in about a year. A public co-op from a county south of here has already strung fiber; they just need to handle the "last mile". And then there are services such as Alyrica offering microwave/radio service.
 
On SerenityOS: he didn't write everything from scratch, but he did write all the code ("AK" are his initials), at least until he started getting help from contributors. The C++ standard library is not used, either. The whole project is a huge multi-year thing with much more complexity than CP/M... not comparable to me tinkering for half a year and making a functional CP/M machine.

I found the videos through his browser work - their browser engine is quite capable (including JavaScript) and is probably the last actively maintained (living) independent browser engine in the world.
 