Making Course Corrections, Timers
I had done a bunch of coding during the March Break, and so I had a few things lined up that I wanted to write about here on my dev progress blog. After writing the post about the PETSCII art renderer, I decided to plunge ahead and write a post about the next topic, timers. I had already implemented the code, I just wanted to write a post to talk about it. So I wrote the post but I didn't publish it because I wanted to space them out a bit.
Turns out it was a good idea not to publish right away. In this article I describe the fact that I've been reading the Official GEOS Programmer's Reference Guide. After I'd implemented and written my post about my implementation, I read something in the GEOS reference guide that totally made me rethink how I had solved the same problem. Now, I definitely don't want to just copy GEOS. In fact I think there are a lot of things about GEOS that are less than ideal, and are not worth being copied. But I've also been taking notes about the nuggets of ideas that I come across that I think are worth recreating. I'll let you now read the introduction to my article, and I'll interrupt again with my thoughts before the second half of the post.
C64 OS is an Event Driven operating system. It is fundamentally single-tasking, but even as such it consists of two processes that run at the same time. This is not unusual. As I pointed out in my article, The Event Model, even the standard C64 KERNAL operating system works this way, although the built-in system is much more primitive.
There is the main process, and there is an interrupt-based service routine that runs in the background 60 times a second. The KERNAL uses the interrupt routine to scan the keyboard and fill a 10-byte buffer with PETSCII values. But it also does a few other things, like update the Jiffy Clock, scan the STOP key, blink the cursor, etc. The main process, when it is not busy doing something else, goes into a tight loop, looking to see if the interrupt routine has put any new PETSCII values in the keyboard buffer.
I've spent the past week or so deep diving into the Official GEOS Programmer's Reference Guide. It's a fantastic read; I highly recommend it. There are obviously many things about the way that GEOS works that I would never want to replicate. But on the other hand, there are a tonne of little gems, nuggets of brilliant ideas. So I've been taking notes about things GEOS did right that I would like to incorporate into C64 OS. GEOS is also event driven. And it is much more sophisticated than the KERNAL's OS; however, its event dispatching is, in my opinion, primitive and quite limited. Its interrupt routine, though, still scans the keyboard and the mouse and buffers information about their activities. Meanwhile, there is a main event loop that dispatches the events to prescribed destinations.
C64 OS also works more or less like this, except that mouse and keyboard events are much more sophisticated data structures. And how they are dispatched by the main event loop is, in my opinion, more dynamic, and will ultimately lead to a more flexible, more powerful and more modern-feeling user interface.
Event driven means that it is the main loop that is executing most of the time. And when the user clicks something, or presses some keys on the keyboard, that user activity constitutes an event. And the program responds to this event because the main loop calls some code in the application as a direct result of the user's action. This code then spins off a series of steps that the application must take before it eventually settles back into waiting in the event loop for more events to be generated. I'll go into more detail about the Main Event Loop of C64 OS in a future post.
However, sometimes you want some code to execute after a certain time delay. This could be a lengthy timeout: if the user doesn't interact with a network service for more than 5 minutes, you may want to warn them that the service is going to log them out. Or, on the other hand, you may want a very short delay: to show some simple animation, a routine might want to run twice a second for several seconds. The question is, how do you get your code to run after a certain delay period, when the main event loop is geared around checking for user input events?
Okay, so that was the introduction to my post. It is still pretty much exactly what I wanted to say. Below, however, in part two of this post, was a discussion of my implementation, which I have since radically changed and simplified. I will take this opportunity to discuss where I went wrong, and then talk about how I've reimplemented timers.
SetTimer, ChkTimer and the JiffyClock
This was my first mistake. The reason I was unwilling to go into the details of how a queuing system might work is because I wasn't able to figure out a few key problems with how it would work. I'll describe those problems in a moment, and how my rewrite fixes them. Suffice it to say, I have kept the idea that C64 OS provides the ability to have one time-oriented callback, which can be extended easily if multiple timers are required by the application.
The JiffyClock is the magic that makes it all work. The JiffyClock is a very simple concept, which the KERNAL itself also implements. There are three bytes down in system memory, which C64 OS defines as jiffyhi, jiffymi and jiffylo. Every time the interrupt service routine executes, every 1/60th of a second, jiffylo is incremented. When jiffylo rolls over, jiffymi is incremented, and when jiffymi rolls over, jiffyhi is incremented. A Jiffy is a measure of time equal to 1/60th of a second, and there are 5,184,000 of them (or $4F1A00 in hex) per day. If any program wants to know how much time has passed between two events, it can copy the JiffyClock at one event and compare the copy to the JiffyClock at a later event. Bingo.
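The increment itself is tiny. A sketch of what runs on each interrupt, using the label names above (any end-of-day reset is omitted):

```asm
; Increment the 3-byte JiffyClock, low byte first.
; INC sets the Z flag when a byte rolls over from $FF
; to $00, which is when the next byte up must also be
; incremented.
updjiffy
         INC jiffylo
         BNE done      ; no rollover, we're finished
         INC jiffymi
         BNE done
         INC jiffyhi
done     RTS
```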
What I have said here about the JiffyClock is more or less true. There are some problems with using it to track real time, however, that led me to go on a wild question-asking spree in #c64friends on IRC Freenode. What is the JiffyClock actually used for? Well, as it happens, it is pretty handy for a number of odd uses. Tracking real time is not one of them. The CIAs have a TOD (Time Of Day) feature. In C64 OS, I'm using the booter to configure one of the CIAs' TOD registers from an RTC. Then the clock in C64 OS, which I'll discuss in some future post I'm sure, is drawn by reading from that. The JiffyClock is unreliable at tracking time, because it relies on the 60th-of-a-second IRQ routine to keep it up to date, but there are many routines that temporarily disable interrupts. C64 OS does this too, of course, which can lead to the JiffyClock missing some ticks and falling behind.
Despite some of these issues, it is still a useful feature to have around. In fact, I'm already using it for something else in C64 OS, where it really makes sense. However, where it does not make sense is using it the way I am about to describe below. I thought the way I was going to use it made sense, but in hindsight, it does not. So the following discussion of SetTimer and ChkTimer is now completely dead. I've stripped out the code.
C64 OS provides a Jump Table routine called SETTIMER. You place a pointer to a routine at $FB,$FC in zero page, put a Jiffy offset into .X and .Y (lo, hi), and then JSR to SETTIMER. Two bytes for an offset means you can set a time delay of up to 65535 Jiffies, which works out to a maximum delay of just over 18 minutes. I believe this will be more than adequate for most use cases. SETTIMER copies $FB,$FC to an inline JSR pointer. Then it writes .X and .Y to the low two bytes of an internal 3-byte Jiffy Time variable. To these three bytes, it adds the current Jiffy Clock's time, thus storing some Jiffy Time in the future.
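Since I've stripped the actual code out, here is a rough sketch of what SETTIMER did, reconstructed from the description above. The labels timejsr (the inline JSR's operand bytes) and timertgt (the internal 3-byte Jiffy Time) are my own names:

```asm
settimer
         LDA $FB          ; copy the callback pointer from
         STA timejsr      ; zero page into the inline JSR's
         LDA $FC          ; operand bytes
         STA timejsr+1

         CLC              ; target = JiffyClock + offset
         TXA              ; .X holds the offset's low byte
         ADC jiffylo
         STA timertgt
         TYA              ; .Y holds the offset's high byte
         ADC jiffymi
         STA timertgt+1
         LDA #$00         ; propagate the final carry into
         ADC jiffyhi      ; the target's third byte
         STA timertgt+2
         RTS
```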
Ugh. I thought this was a good idea. Mostly because I was fixated on the idea that the JiffyClock must be useful for something. This is not it. The limit of 2 bytes worth of Jiffies, or just 18 minutes, is not that great. But there is also the niggling problem that if the JiffyClock is close to its maximum value, $4F1A00, adding an offset to it could result in a number bigger than the clock ever gets to. This just added to other complications that I'll discuss in a moment.
CHKTIMER is an internal routine that will not normally be called by your application. Instead it is called at the bottom of the Main Event Loop, once every time it loops. CHKTIMER first checks to see if the timer pointer's high byte is zero. A zero high byte would mean that the code to execute when the timer goes off would reside inside zero page. Since this is not possible to do in C64 OS, CHKTIMER assumes the timer is not active and immediately returns. Thus, the check for an elapsed timer is incredibly lightweight when nothing is scheduled.
Next it checks each of the three bytes, starting with the most significant. If the timer's most significant byte is greater than the Jiffy Clock's, it hasn't elapsed yet and CHKTIMER returns. If they are equal, it checks the second most significant byte, and then the third. If the timer's time is found to be less than or equal to the Jiffy Clock's time, the JSR through the pointer is executed. After returning from the JSR, if the Carry is clear, the JSR pointer's high byte is cleared, thus clearing the timer and preventing it from firing again. And that is just about it. It's very simple.
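And a matching sketch of CHKTIMER, again reconstructed from the description, with timejsr and timertgt as hypothetical names for the inline JSR's operand and the stored target time:

```asm
chktimer
         LDA timejsr+1    ; a zero high byte means no
         BEQ done         ; timer is set
         LDA jiffyhi      ; compare clock to target,
         CMP timertgt+2   ; most significant byte first
         BCC done         ; clock < target, not elapsed yet
         BNE fire         ; clock > target, elapsed
         LDA jiffymi
         CMP timertgt+1
         BCC done
         BNE fire
         LDA jiffylo
         CMP timertgt
         BCC done         ; falls through when clock >= target
fire     JSR $0000        ; operand patched by SETTIMER
         BCS done         ; Carry set, leave the timer armed
         LDA #$00
         STA timejsr+1    ; clear the pointer's high byte
done     RTS

timejsr = fire+1          ; the inline JSR's operand bytes
```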
It is fairly simple. But it's too simple in the wrong way, has irresolvable issues such as the one mentioned above, and is not sophisticated enough in other ways. The main problem I was having in my head was how to use this simple mechanism to build out an extended multi-timer queue system in application space, should an application have need of greater complexity. The idea of a queue of timers is that you want only one thing measuring time, with the timed processes sorted into the order they will execute, each offset from the previous one by the relative difference in when they will fire. In other words, imagine I queue two timers: one for 20 seconds from now, one for 5 seconds from now. The 5-second one will fire first, so it's the only one the computer needs to worry about on each interrupt. After the 5-second one fires, the second one becomes the only one queued, and it will fire 15 seconds after that (20 - 5), so the timer code only needs to worry about a single timer again.
I know in my head that this is a scalable way of doing timers, especially if there are hundreds or thousands of them as is supported by modern computers and OSes. It remains efficient because the system only ever worries about computing when the next one needs to fire.
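The interrupt-side tick of such a delta queue might be sketched like this. Only the head entry's offset is ever counted down. The labels qcount, qdelta and firehead are hypothetical:

```asm
qtick    LDA qcount
         BEQ done          ; nothing queued
         LDA qdelta        ; 16-bit decrement of the head
         BNE skip          ; entry's remaining Jiffies
         DEC qdelta+1
skip     DEC qdelta
         LDA qdelta        ; did it reach zero?
         ORA qdelta+1
         BNE done
         JSR firehead      ; run the head's callback, shift
                           ; the next entry's delta into
                           ; qdelta, and decrement qcount
done     RTS
```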
One decision driving me to use the JiffyClock is that the interrupt routine already increments it, so backending on it for timers would mean I would not have to extend the work that the interrupt has to do. This was a mistake. Converting the offset into a Jiffy Time in the future makes it much more complicated to figure out how to queue things. Then I read in the GEOS reference guide that it simply decrements a number on each interrupt until the number hits zero. Then it sets a bit flag saying the timer has expired, resets the counter and continues to tick it down again.
Not using the JiffyClock is just so much easier. The way GEOS does things is very static, though. You don't register and unregister individual timers at run time. You initialize the entire set of all timers for the application at the same time, from what they call a table, but what in C parlance is effectively an array of structs. The timers are each individually ticked down, which in my opinion is less efficient than using queued offsets. Plus, timers are always recurring: each timer resets automatically and starts ticking down again. There are extra flags that can be set with special control routines to mark a timer as "frozen", "blocked", or "runnable."
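Stripped of the freeze/block flag handling, that per-timer tick boils down to something like this (a sketch only; I've used a single-byte counter for brevity, and the labels are hypothetical):

```asm
tick     LDA count
         BEQ done        ; inactive timer, nothing to do
         DEC count
         BNE done        ; still ticking down
         LDA #$01
         STA expired     ; flag the expiry for the main loop
         LDA #RELOAD     ; reset the counter so it starts
         STA count       ; ticking down again
done     RTS
```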
There are a few things to note. If, for some reason, you want to clear the timer before it fires, that is very easy to do. Just stick a $00 into $FC (the pointer's high byte) and call SETTIMER. The other values don't even matter, because when the $00 is copied into the JSR's inline pointer, that effectively causes CHKTIMER to consider the timer inactive.
The second thing to note is that little bit about the Carry being clear when the timer's JSR is returned from. If you want the timer to be called repeatedly, then the last thing the code that the timer calls should do is call SETTIMER again. This will reset the time offset to some new future time. Then you set the Carry to indicate that the timer should not be cleared after you RTS. This is a close enough approximation of setInterval.
This is another problem with my implementation. As it turns out, developers often do want routines to be run directly out of zero page. In fact BASIC itself sets up a routine inside zero page. So relying on checking just the high byte to see if it's zero was not that great. And having the timer automatically dequeue itself was a bit of a hack. Below I will explain how I have changed the implementation, and what the advantages are.
One advantage to my first implementation is that there is no additional work done by the interrupt routine at all. The mainLoop calls your code, your code calls SETTIMER which configures some future JiffyTime and sets the callback pointer. But it has to do this just once. The interrupt doesn't do anything special, it just increments the JiffyClock as per usual. Then added to the bottom of the mainLoop is a call to CHKTIMER which compares the JiffyClock times and jumps through your routine. Done.
The new implementation is much, much simpler, but simultaneously much more flexible. In the interrupt routine there are a few subroutines being called. One of them, updatetime, is used to update the JiffyClock, blink the seconds of the onscreen clock, blink the cursor and a couple of other things. At the end of this subroutine was an RTS. I replaced that RTS with an indirect jump, JMP (), through a vector somewhere in system memory. The vector is initialized to a constant which points at an RTS. So, by default, it costs just one extra JMP () worth of execution time. The RTS already had to happen; the byte required to hold it has merely moved to a different place, plus there are the two bytes for the vector that the user can change. This actually feels really clean, because the KERNAL and BASIC have several vectors stored down in system memory that are meant to be modified by applications to extend their use.
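The change amounts to very little code. A sketch (updatetimer is the vector's name from above; the other labels, and the .word directive, will vary by assembler):

```asm
updatetime
         ; ... update the JiffyClock, blink the onscreen
         ; clock's seconds, blink the cursor, etc. ...
         JMP (updatetimer) ; this used to be an RTS

defrts   RTS               ; the default target, just an RTS

updatetimer
         .word defrts      ; the vector, initialized to
                           ; point at the RTS above
```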
All you do is change this vector to point to your own updatetimer routine, and it is called on every interrupt. This gives you huge flexibility in how to implement your timer countdown. In the simplest case, you just have your routine decrement a local variable until it hits zero.

Next, the Main Loop has a stage at the end which checks the value of a chkTimer vector. A little trick I learned on IRC that day was how to check multiple bytes for non-zero very quickly, and how to JSR through a vector:
```asm
     LDA addr1
     ORA addr2
     BEQ skip
     JSR jumpvec

skip ... Do something else ...
     ... Do something else ...
     RTS

jumpvec
     JMP (addr1)
```
This is pretty much exactly what happens at the bottom of the Main Loop, except instead of that RTS, it loops back up to the top of the main loop. It loads in the first byte, then ORs in the second byte; if the result is still zero, then both bytes are zero. Then, to do an indirect JSR, you JSR to an indirect JMP () that has been set to jump through the vector.
Inside your chkTimer routine, in the simplest case described above, you just check to see if the local variable is zero. And then do something. If you want the timer to continue, the chkTimer routine can optionally set the local variable back to some offset in Jiffies, or if it was a one-shot timer, the chkTimer routine can set the updatetimer vector back to its original address, which just points at an RTS, and then zero out the chkTimer vector.
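Pulling the simplest case together, here is a sketch of a one-shot, 10-second application timer under this scheme. The names myticks, mytick and myfired are mine; timervec and chktimervec stand in for the system's updatetimer and chkTimer vectors, and defrts for the default RTS target:

```asm
myticks  .word 600          ; 600 Jiffies = 10 seconds

; Call once to arm the timer: point the two system
; vectors at our routines.
settmr   LDA #<mytick
         STA timervec       ; interrupt-side countdown
         LDA #>mytick
         STA timervec+1
         LDA #<myfired
         STA chktimervec    ; main-loop-side check
         LDA #>myfired
         STA chktimervec+1
         RTS

; Called on every interrupt, via the updatetimer vector.
mytick   LDA myticks
         ORA myticks+1
         BEQ tdone          ; already at zero
         LDA myticks        ; 16-bit decrement
         BNE skip
         DEC myticks+1
skip     DEC myticks
tdone    RTS

; Called at the bottom of the Main Loop, via the
; chkTimer vector.
myfired  LDA myticks
         ORA myticks+1
         BNE fdone          ; hasn't elapsed yet
         ; ... do the timed work here ...
         LDA #<defrts       ; one-shot: restore the default
         STA timervec       ; interrupt vector...
         LDA #>defrts
         STA timervec+1
         LDA #$00           ; ...and zero out the chkTimer
         STA chktimervec    ; vector so the Main Loop stops
         STA chktimervec+1  ; calling us
fdone    RTS
```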
It seems primitive, but calling a custom routine to update the timer gives ultimate freedom. You could use an arbitrarily large number of bytes as the countdown variable, so you aren't limited to 18-minute timers; you could have timers that last days. And the chkTimer routine can be similarly flexible. It can reset the timer to a different length each time it resets. Or you could use a second variable as a count of how many times to reset, for an animation with a fixed number of frames. The simplest case is very simple: only a few lines of code are necessary to recreate what I had tried to provide with my original SETTIMER and CHKTIMER routines. But it has none of the drawbacks.
And what's more, if you really need a more complex timer queuing system, or something like how GEOS works, you can point the two vectors at a custom timer queue system. I suspect that most applications will not need anything so complex, and putting such complexity into the C64 OS "kernal" would be a waste of precious space.
What I thought was an amazingly short implementation, just 88 bytes, has been replaced literally by maybe just 4 or 5 lines added to the bottom of the Main Loop, and one RTS in updatetime (part of the existing interrupt routine) has been replaced by a JMP (updatetimer).
There is another kinda neat way to think about how event loop timers work. Since C64 OS is Event Driven, does it violate the principle to have code that runs as the result of a timer elapsing? Not at all. It merely expands the definition of what an "Event" is. There are keyboard events and mouse events, and now there are also timer events. They are nestled right into the Main Event Loop together. This also sets out a potential pattern. In the future, there may well be "Network Activity Events." I haven't got that far into the development yet, but this certainly feels like a sensible way to weave network data processing into the single-tasking execution model. As always, I'd love to hear your comments and get feedback.