
Time encoding methods?

tech58761
I'm just curious if there was anything like a standard method for encoding 24-hour time back in the day.

I'm trying to pick apart some firmware to see how everything worked, but one of the biggest obstacles is not knowing how the time data was encoded.

I do know that the information was saved in 3-byte chunks (including day of the week), and there were two schedules with up to 4 'schedule change' times available in each.

If I had some idea as to common time formats that would have been used to store the time of day back then, it would be an immense help in getting things figured out.

I do know we can rule out BCD right off the bat. The MPU used by this unit is a 6800 derivative, and the DAA opcode is noticeably absent in the code.

I am also fairly certain that the time would have been recorded in 24-hour format - but minutes? Seconds? Fractional hours? That's what I'm trying to figure out.

I can post a copy of the code if requested - both a direct dump of the EPROMs and a copy I've been annotating as source code.
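
For what it's worth, one guess I've been toying with (purely hypothetical - I haven't confirmed anything like this in the code) is a layout along these lines:
Code:
#include <stdint.h>

//  Purely hypothetical 3-byte schedule entry - just to show how much
//  fits in 24 bits, not what this firmware actually does:
//    byte 0:    day of week (0-6) plus spare flag bits
//    bytes 1-2: minutes past midnight (0-1439) as a 16-bit value
struct sched_entry {
  uint8_t day;        // 0 = Sunday .. 6 = Saturday
  uint8_t time_hi;    // high byte of minutes past midnight
  uint8_t time_lo;    // low byte of minutes past midnight
};

static uint16_t entry_minutes( const struct sched_entry *e)
{
  return (uint16_t) ((e->time_hi << 8) | e->time_lo);
}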
 
Welcome to VCFED.

Can you define "back in the day"?

There are various ways of storing date and time, but it all depends.

Incidentally, storing the date and time in BCD does not necessarily mean that you have to use the DAA instruction...

Most time and date standards use a counter to count (say) milliseconds past a base date/time.

Providing some more information would help (especially if you have the code that can be provided).

Dave
 
Thank you! This would be vintage early to mid 1980s.

Chuck: Hmmm. I was afraid of that. Still, no harm in asking!

Complicating this effort is that this code was used in two different products. The 'load terminal' used this EPROM alone while the 'distribution terminal' used this EPROM along with a second one with routines to operate an ADC converter board (the latter were very rare, even when new).
 

Attachments

  • eprom dump.txt (115.6 KB)
  • source.txt (67.5 KB)
I'll bet back in the day BCD was probably better, especially for digital logic and such. Many real-time clock ICs work this way. For me, in computing, back in the day I went with 1/1/1990 00:00:00 as 0 in a 32-bit integer, and every second past that was 1 more. I think it had about 136 years of range (2^32 seconds). There are lots of ways it can be done.
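
Roughly like this, just as an illustration - the names and the loop-style decode are made up for the example, not lifted from my old code:
Code:
#include <stdint.h>
#include <stdio.h>

//  Decode a 32-bit "seconds since 1990-01-01 00:00:00" counter into
//  calendar fields.  Good through roughly the year 2126 (2^32 seconds).

static const uint8_t days_in_month[12] = {31,28,31,30,31,30,31,31,30,31,30,31};

static int is_leap( unsigned y)
{
  return (y % 4 == 0 && y % 100 != 0) || (y % 400 == 0);
}

void decode_epoch90( uint32_t t, unsigned *yr, unsigned *mo, unsigned *dy,
                     unsigned *hh, unsigned *mm, unsigned *ss)
{
  uint32_t days = t / 86400UL;    // whole days since the epoch
  uint32_t secs = t % 86400UL;    // seconds into the current day
  unsigned y = 1990, m = 0;

  for (;;) {                      // peel off whole years
    unsigned ylen = is_leap( y) ? 366 : 365;
    if (days < ylen) break;
    days -= ylen;
    y++;
  }
  for (;;) {                      // then whole months
    unsigned mlen = days_in_month[m] + (m == 1 && is_leap( y));
    if (days < mlen) break;
    days -= mlen;
    m++;
  }
  *yr = y;  *mo = m + 1;  *dy = (unsigned) days + 1;
  *hh = secs / 3600;  *mm = (secs / 60) % 60;  *ss = secs % 60;
}

int main( void)
{
  unsigned yr, mo, dy, hh, mm, ss;
  decode_epoch90( 0UL, &yr, &mo, &dy, &hh, &mm, &ss);
  printf( "%04u-%02u-%02u %02u:%02u:%02u\n", yr, mo, dy, hh, mm, ss);   // 1990-01-01 00:00:00
  return 0;
}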
 
As noted in post above, the most efficient is interval units after an "epoch". The question is how much work do you want to spend decoding it into human-readable format. Using 1900 as the epoch is reasonable because computers didn't exist before then. However, if you were keeping archeological data, then you'd want a clock that went back about 4.3 billion years.

Mind you, the systems that go back before 1930 or so carry their own issues because various countries changed over from the old Julian calendar to the Gregorian, meaning that there are gaps where dates simply don't exist. And the changes were performed country-by-country.


And then one must realize that time zones before standardized ones, synchronized to a common standard, were radically different. So, Boston time was different from Philadelphia time (and by minutes, not hours). Railroads and the telegraph pretty much put an end to that.

Time before WWII is a confused affair. So adopting a later baseline makes sense.
 
The various epoch systems require a sizable library to convert the value into something that can be presented to the user. Memory-limited systems might be better served by a simpler format that takes more memory to store each date but far less code to decode it. SQL or Unix might have millions of time entries; a tightly squeezed format isn't quite as valuable when only a few hundred records can be saved. I know it goes against the modern experience, but not having any dates or times at all was always a possibility.

My plan was always to use the system provided date/time method if my program has access to it and only create a method if I have no other option.
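
As a tiny illustration of that tradeoff (both layouts are hypothetical, not from any particular system):
Code:
#include <stdint.h>

//  Broken-out form: about 6 bytes per record, but needs almost no
//  code to print or compare.
struct plain_stamp {
  uint8_t year;     // e.g. years since 1980
  uint8_t month;    // 1-12
  uint8_t day;      // 1-31
  uint8_t hour;     // 0-23
  uint8_t minute;   // 0-59
  uint8_t second;   // 0-59
};

//  Epoch form: only 4 bytes per record, but every display or report
//  drags in a seconds-to-calendar conversion routine.
typedef uint32_t epoch_stamp;   // seconds since some chosen base date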
 
There are lots of gotchas, to be sure; e.g. 1900 had 365 days, but 2000 had 366; 2100 will have 365 (the century-year rule). Then there are those nasty "leap-seconds" that get introduced; our last one was in 2016, but there will be none after 2035, as the current proposal stands. Leap-seconds are introduced to compensate for the Earth's slowing rotation. Oddly enough, in 2021, the rotation of the Earth sped up slightly, raising the issue of a negative leap-second.

Software, as you might imagine, accommodates this very badly, if at all.
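
The century rule itself is only a couple of lines of code; it's the leap-seconds that no small routine can predict:
Code:
//  Gregorian century rule: leap if divisible by 4, except century
//  years, unless the year is also divisible by 400.
int days_in_year( unsigned y)
{
  int leap = (y % 4 == 0) && (y % 100 != 0 || y % 400 == 0);
  return 365 + leap;            // 1900 -> 365, 2000 -> 366, 2100 -> 365
}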

On my own MCU-related stuff, I simply use the value of the internal RTC, which breaks it all out in DDMMYYYY HHMMSS packed BCD form. That is another issue in itself, since using a FAT file system on, say, an SD card means I have to convert that to DOS time.
Code:
#include <stdint.h>

//  GetRTCDOSTime - Get time in DOS (FAT) timestamp format.
//  --------------------------------------------------------
//  FAT packs the stamp into 32 bits as:
//  years since 1980 (7) | month (4) | day (5) | hours (5) | minutes (6) | seconds/2 (5)

uint32_t GetRTCDOSTime( void)
{
  uint32_t
    dosTime;

  GetRTCDateTime();       // read the date and time into the RTCDateTime struct

  dosTime =
    ((uint32_t) (RTCDateTime.Year + 20) << 25) |  // FAT year is 1980-based, so add 20 to the (2000-based) RTC year
    ((uint32_t) RTCDateTime.Month       << 21) |
    ((uint32_t) RTCDateTime.Day         << 16) |
    ((uint32_t) RTCDateTime.Hours       << 11) |
    ((uint32_t) RTCDateTime.Minutes     <<  5) |
    ( RTCDateTime.Seconds >> 1);                  // FAT keeps 2-second resolution
  return dosTime;
} // GetRTCDOSTime
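
The BCD-to-binary step itself is trivial - one packed byte at a time:
Code:
//  BCDToBin - convert one packed-BCD byte (e.g. 0x59 -> 59).
//  ----------------------------------------------------------
uint8_t BCDToBin( uint8_t bcd)
{
  return (uint8_t) (((bcd >> 4) * 10) + (bcd & 0x0F));
} // BCDToBin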
 
So, assuming that all the nodes on the system were synchronized to the master unit in the office, and this dating back to the 1980s, would it be reasonable to assume that whatever 'epoch' MS-DOS worked off would be the starting point?
 
Yes--just boot a copy of DOS on a system, say a 5150, without an RTC and answer any time and date prompts (if they occur) with an empty return. The time-of-day will be set to midnight on January 1, 1980.
 
So, assuming that all the nodes on the system were synchronized to the master unit in the office, and this dating back to the 1980s, would it be reasonable to assume that whatever 'epoch' MS-DOS worked off would be the starting point?

It is possible.

For example, I have written programs where there was a single time stamp in the resulting data file that marked a start time. There was an interrupt-driven running time meter that counted seconds, using the number of bytes required to hold the maximum possible session time. When each event occurred during the session, the elapsed seconds count was recorded along with the event code. Subsequently, the data file could 'replay' the entire session (with one-second resolution), and whatever measures needed to be calculated were done post-session. That kind of strategy is not uncommon.
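
In outline, that kind of scheme looks something like this (the names are invented for the example):
Code:
#include <stdint.h>

//  One session: a single absolute start stamp written to the file,
//  then each event logged as (elapsed seconds, event code).

struct event_rec {
  uint32_t elapsed;   // seconds since the session started
  uint8_t  code;      // what happened
};

static volatile uint32_t elapsed_seconds;   // bumped by a 1 Hz timer interrupt

void timer_1hz_isr( void)                   // called once per second
{
  elapsed_seconds++;
}

void log_event( struct event_rec *log, uint16_t *count, uint8_t code)
{
  log[*count].elapsed = elapsed_seconds;
  log[*count].code    = code;
  (*count)++;
}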

I don't quite understand the load and distribution terminals in your situation, but it could be as simple as the ADC values being stored/transmitted for each 'tick', or maybe the elapsed time when the ADC value was at or above some threshold, and so on.

"used this EPROM along with a second one with routines to operate an ADC converter board (the latter were very rare, even when new)."

I know this is some thread drift, but the topic reminded me of some old personal experiences. Perhaps more rare in the early to mid 80s, but by 1987 such boards were readily available, and I still have one (ADC and DAC) for the PC.
[Photo: RTD ADC/DAC board - IMG_2414.jpg]


Also, I remember buying a TLC548 at the Shack and making a little ADC thermometer working off of the PC LPT port - the code I wrote has a 1988 file date.
[Photo: the original TLC548 - 20220604_173038.jpg]
 
So I will just figure that, if they encoded the day into the time (versus rolling over each Sunday), any continuing offset would be calculated from 1/1/80.

I know there is a counter in the code that resets every 1800 counts, so I think it is reasonable to assume it is counting seconds and rolling over every half hour (1800 seconds = 30 minutes) - but there are some features that go down to 5-minute intervals.

If it helps, I've attached a schematic of the load terminal - the ADC daughterboard (which I don't really need or care about anyway - just want to figure out how the rest worked) likely connected to J3 on panel 'C' or P3 on panel 'D'.
 

Attachments

  • terminal.pdf (856.6 KB)
After looking at the code further, my current line of thought is that $0C46 / $0C49 (NVR_46 / NVR_49 in source) store the local and system-master time values, and the routine at $F687 (sub_687) is the actual validation routine.

If I am right, then the code at $FDAB - $FDDE (app_DAB to app_DDE) is the validation algorithm, with a 'fudge factor' of 8 seconds?

A constant (0x9D80 / 40320) gets added in at some point.

Am I on the right track?
 