WAITFR and Scheduling

In this article, we are going to cover the WAITFR system service. However, this is a very simple service and, on its own, would make for a very short article. So, we will actually spend most of our time looking at process scheduling, which is intimately connected with the state of a waiting process. First, the documentation on WAITFR.

WAITFR
Wait for Event Flag

Place process into a wait state until the specified event flag is set.

Format

SYS$WAITFR efn

Parameters

efn
A 64-bit integer that contains the number of the event flag that will cause the process to exit the wait state when it is set.

Description

If the specified event flag is already set, the service returns immediately. Otherwise, the process is placed into a wait state until that event flag is set. If an AST routine interrupts the wait, that routine is processed as normal and, upon return, the event flag is checked again: if it is set, the process continues; otherwise, the process is placed back into a wait state until the flag is set.

Required Privileges

None.

Required Quotas

None.

Condition Codes

SS$_NORMAL The service completed successfully.
SS$_ILLEFC An illegal event flag number was specified.


function SYS_WAITFR( efn : int64 ) : int64 ;

var Status : int64 ;
    SysRequest : TInteger_Request ;

begin
    Status := 0 ;
    fillchar( SysRequest, sizeof( SysRequest ), 0 ) ;
    SysRequest.Request.Subsystem :=  UOS_Subsystem_USC ;
    SysRequest.Request.Request := UOS_USC_Wait_For ;
    SysRequest.Request.Length := sizeof( SysRequest ) - sizeof( TSystem_Request ) ;
    SysRequest.Request.Status := integer( @Status ) ;
    SysRequest.Int := efn ;

    Call_To_Ring0( integer( @SysRequest ) ) ;
    Result := Status ;
end ;
This new service is standard fare so we will let the code speak for itself.
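
As a quick illustration of the application side, here is a minimal usage sketch. This is hypothetical code, not from the UOS sources - in particular, the SS_NORMAL constant name is an assumption standing in for whatever success code the condition-code definitions provide.

var Status : int64 ;

begin
    // Wait until event flag 1 is set - for example, by a timer queued
    // with SYS_SETIMR in the previous article.
    Status := SYS_WAITFR( 1 ) ;
    if( Status <> SS_NORMAL ) then // Placeholder for the SS$_NORMAL condition code
    begin
        // Handle SS$_ILLEFC (illegal event flag number) or other errors here
    end ;
end ;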

        UOS_USC_Wait_For:
            begin
                UE := Enter_System_Call( Request, SReq, PID, MMC, sizeof( TInteger_Request ) - 
                    sizeof( SReq ), Address ) ;
                if( UE <> nil ) then
                begin
                    Set_Last_Error( UE ) ;
                    exit ;
                end ;
                try
                    Integer_Request := PInteger_Request( Address ) ;
                    Wait_For( Integer_Request.Int ) ;
                finally
                    Exit_System_Call( Request, PID, MMC, sizeof( TInteger_Request ) - 
                        sizeof( SReq ) ) ;
                end ;
            end ;
This code is added to the API method of the USC. Again, this is like other API handlers that we have covered.

procedure TUSC.Wait_For( efn : integer ) ;

var Bit, EFC, EF : integer ;
    PID : TPID ;
    Process : TProcess ;

begin
    // Setup...
    PID := Kernel.PID ;
    Process := Get_Process( PID ) ;
    if( Process = nil ) then
    begin
        Generate_Exception( UOSErr_Nonexistent_Process ) ;
        exit ;
    end ;

    if( ( efn < 0 ) or ( efn > 127 ) ) then // Event flags are numbered 0..127
    begin
        Generate_Exception( UOSErr_Illegal_Event_Flag ) ;
        exit ;
    end ;
    EFC := efn shr 5 ; // Event flag cluster
    EF := efn and 31 ; // Event flag within cluster
    Bit := 1 shl EF ; // Convert to bit flag
    if( ( Process.Event_Flags[ EFC ] and Bit ) = Bit ) then
    begin
        exit ; // Already set
    end ;
    Process.CEF := efn ;
    Block( PID, SCH_C_CEF, nil ) ;
end ;
This service routine gets the current process and generates an error if it isn't found. Frankly, this should never be able to happen, but it never hurts to perform a sanity check - especially if the check is a single pointer comparison. Then we validate the event flag number passed to us and exit with an error if it is invalid. Next we parse the event flag, as we did in the SETIMR service in the previous article. Finally we see if the event flag has already been set. If it is currently set, we immediately return to the caller. Otherwise, we set the (newly added) CEF instance data in the process, which indicates the event flag that we are waiting for, and block the process with the SCH_C_CEF state, which indicates that the process is in a wait state for an event flag.
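
To make the cluster/bit decomposition concrete, here is a small worked example (illustration only, not part of the UOS source):

// Suppose efn = 37:
//   EFC := 37 shr 5  = 1    - the flag lives in the second cluster of 32
//   EF  := 37 and 31 = 5    - it is bit 5 within that cluster
//   Bit := 1 shl 5   = $20  - the mask tested against Process.Event_Flags[ 1 ]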


Process Scheduling
All multiprocess operating systems have to share the CPU bandwidth between simultaneously running processes. There are many ways of accomplishing this. The simplest is to let one process run to completion, and then allow the next one to run, and so on. This approach may be simple, but it is also inefficient in most cases. For instance, any I/O operations tend to take a long time - as measured in CPU cycles - and so the CPU sits idle until the operation completes. Heavily I/O-bound processes result in the CPU doing nothing useful for most of the time.

As a consequence, most operating systems will allow another process to run while the current process is waiting for I/O completion. Chances are that that process will also block on an I/O operation, so the operating system will allow yet another process to run. As a result, many processes appear to run simultaneously even though they only run one at a time. But when the first I/O operation completes, the first process needs to continue running (until its next I/O). One way to handle this is to switch back to that process when the current process blocks on I/O. That is, upon blocking, the operating system checks for the next process that isn't blocked. The disadvantage of this is that a compute-intensive process (one with few I/Os) will tend to dominate CPU usage, while other processes wait for that process to block, even though the I/O operations for those other processes may have completed long ago. This is handled by checking the current process at regular intervals and, when it is determined that it has had enough run time, blocking it so that other processes can run. This check is usually done when a hardware timer fires (sometimes via a real-time clock, sometimes via a dedicated timer circuit).

The next question is how much time to allow each process. This is referred to as the process quantum, which is the minimum amount of time that a process is allowed to run before it may be blocked to allow another process to run. Typically, this is some multiple of the hardware timer period. Switching processes can be a fairly expensive operation, so we don't want to switch every time the timer fires. On the other hand, if we wait too long, the entire system can feel "sluggish" to users. The calculation of the proper quantum can be simple or complex, and many schemes have been used. The more complex ones are adaptive and adjust the process quantum and/or priority based on numerous considerations. The approach chosen depends upon the usage of the system. For example, on a system that runs unattended with no users, it might not matter if a process dominates the CPU until it finishes. In fact, in such a situation, reducing the number of expensive process switches results in less overhead, and more work gets done in the long term. That is not the most common case, however.

Another question is how (or if) real-time processes are handled. A "real-time" process is one that runs in response to a specific event - usually signalled by a hardware interrupt. The process must run immediately without delay, so other processes should be suspended until the real-time process finishes. It is possible that there could be multiple different real-time events that can be triggered. So, we could end up with one real-time event interrupting the execution of another real-time process. In such cases, each possible real-time event is given a priority. A higher priority event is allowed to interrupt the handling of a lower priority event, but not vice versa. The concept of priority can also be applied to non-real-time processes. For instance, we may want some processes to only operate in the background - that is, when nothing else is running.

All of these considerations are part of process scheduling. UOS uses a pre-emptive, priority-driven, round-robin timesharing scheduling algorithm. It is pre-emptive, meaning that a compute-intensive process cannot prevent other processes from running. Round-robin refers to the fact that the system iterates through the processes so that each has an equal chance to run. For example, if there are three processes, it gives process 1 a chance to run, then process 2, then process 3, and then back to process 1, and so on. If processes 1 and 2, between them, were always executable, then without a round-robin approach process 3 would never get a chance to execute. UOS processes also have priorities, which means that the round-robin algorithm takes process priority into consideration. Thus, it is only round-robin within each priority level. Only if all of the processes at one priority are blocked does the system consider running lower-priority processes. Timesharing means that the processes share the CPU, each receiving a share of that resource, as defined by the foregoing considerations. Note that the term "timesharing" has come to mean the sharing of a computer by multiple interactive users. But that was not the original meaning of the term. It just happens that this kind of process scheduling became a necessity for those kinds of environments, and the two became synonymous.

The default UOS scheduling algorithm is fairly straightforward (i.e., not adaptive). Adaptive scheduling algorithms require more overhead, which we try to avoid here. Some are built into the executive of the operating system, while others may run as a process that monitors things and adjusts aspects of running processes over time (such as changing the quanta or priorities of those processes). Some adaptive schedulers may also take into account such things as soft limits on the number of I/Os or on CPU usage. When a soft limit is reached, the process priority or quantum might be lowered slightly, for instance. The possibilities are limitless, so we won't try for an algorithm that could satisfy everyone. Rather, third parties can address that; UOS itself uses a serviceable - if not optimal - scheduling algorithm together with hard quotas. To be clear: a soft quota is one that enforces a penalty by a change in configuration when the quota is reached, whereas a hard quota is one that prevents the user/process from exceeding it. For most UOS quotas, exceeding them results in an error. However, CPUTIM is special. Giving an error for that would just use more CPU time. So, when CPUTIM is exceeded, the process is killed (now that's a hard limit!).

In practical terms, UOS accomplishes scheduling by maintaining a queue for each possible priority level. When a process is created, it is assigned a priority level and its PID is inserted into the corresponding queue. On each system timer interrupt, the currently executing process is checked to see if it has exceeded its quantum. If it has, it is blocked and a new process is selected to run. This is done by checking the highest-priority queue that has any processes and then running the next process in that queue ("next" here meaning the one after the last process from that queue that executed). Now, it may turn out that the next process to run is the current process, in which case it is allowed to continue running. On an I/O, timer, or other blocking situation, the scheduler treats the process as if it had run out of quantum and selects the next process to run.

That is the basic algorithm, but there are some modifications:

  • If a process runs out of quantum but nothing else can run, it is allowed to continue (as described above); however, its quantum is not reset, so that as soon as another process of its priority or higher is ready to run, that process will start immediately.
  • Otherwise, when a process runs out of quantum or blocks on something, its quantum is reset so that when the scheduler comes back to it, it will potentially get its entire quantum to run.
  • If a process is blocked by an I/O, its priority is increased by 1 to allow it to, ideally, start immediately after the I/O completes (sketched below). The idea is that the I/O interrupted its full quantum, and if it had to wait until all other processes in the priority queue finished, it wouldn't really get its full share of quantum relative to the other processes. If this only happens occasionally, the impact on the system and on that process is negligible. But a process that is I/O-bound would chronically get less time than less I/O-bound processes and thus would be penalized CPU time because of its I/O. (This is the only "adaptive" aspect of the UOS scheduler.) However, we do not want to leave the process with an elevated priority - that would be unfair to other processes, and every process would eventually end up at the highest priority.
  • So, any odd priority is immediately reduced by 1 when the process begins its quantum. Thus, only even priorities are "stable".
  • Finally, we have to consider real-time processes. These need to interrupt a lower-priority process even if it hasn't completed its full quantum. We divide the range of priorities (0-255) in half: priorities 128-255 are considered real-time. They always pre-empt a lower-priority process. In fact, when they start, they may block a process even before the scheduling check that is triggered by the system timer. Normal processes (priorities 0-127) only block a lower-priority process when that process uses up its quantum. Normal processes should therefore not be assigned real-time priorities; otherwise, much of the scheduling algorithm is bypassed and you will probably end up with a system that exhibits inconsistent, "jerky" response.
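
The priority boost and decay in the third and fourth points boil down to two single-bit operations. The following is only a sketch of the idea; the decay line matches the Schedule code shown later, while the boost in the actual Block code uses inc(), which is equivalent when the stable priority is even.

// Boost after an I/O block: set the low bit, making the priority odd.
Process.Priority := Process.Priority or 1 ;

// Decay when the process next begins a quantum: clear the low bit,
// returning the priority to its stable, even value.
Process.Priority := Process.Priority and ( not 1 ) ;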

By default, processes start out with a priority of 64. Real-time processes must be assigned a priority of 128 or higher after they start. Certain operations may be assigned a temporary priority of 254 for emergency purposes - think of the Control-Alt-Delete task manager on Windows - but no process should be manually assigned a priority that high. Because of the effect on the system's performance, changing a process priority requires special privileges.

I cannot neglect to point out that things are more complicated when there are multiple processors (such as on a multicore CPU). But we will leave that whole subject to the future.

You may ask what all of this has to do with the current topic of timers. The answer is: quite a lot. First we must understand that, unlike hardware timers, which operate independently of (and asynchronously to) the CPU, UOS timers are software constructs. Thus, UOS must check for expired timers on a regular basis. This is done as part of the handling of the line clock interrupt. But when a UOS timer expires, we don't want to preempt the currently running process if its quantum isn't used up. Therefore, we mark the process associated with the timer as ready to run. This is indicated with a process state of SCH_C_COM (computable). When the scheduler is searching for the next process to run, processes with this state are the ones eligible to run. Thus, an expired timer unblocks the process, but unblocking a process does not immediately make it run - it only marks it as runnable. It is the scheduler that controls which processes are running.
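
Conceptually, the per-tick timer check does something like the following. This is purely an illustrative sketch - Timer_List, TTimer_Entry, Expiration, EFN, and Set_Event_Flag are placeholder names, not the actual SSC code from the SETIMR article.

// Illustrative sketch only - not the actual SSC implementation.
for I := Timer_List.Count - 1 downto 0 do
begin
    Timer := TTimer_Entry( Timer_List[ I ] ) ; // Placeholder timer record
    if( HAL.Timestamp >= Timer.Expiration ) then // Timer has expired
    begin
        // Setting the flag makes any process waiting on it computable
        // (SCH_C_COM); the process does not run until the scheduler selects it.
        Set_Event_Flag( Timer.PID, Timer.EFN ) ;
        Timer_List.Delete( I ) ; // One-shot timer: remove the entry
    end ;
end ;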

    if( ( Value and $FFFF ) = Interrupt_ST ) then // System timer
    begin
        if( not Unisys ) then
        begin
            USC.Schedule() ;
        end ;
        exit ;
    end ;
This code is added to the kernel's interrupt responder. Interrupt_ST is the interrupt that comes from the system timer (hence "ST"). Upon this interrupt, we call the USC component's Schedule method. I had originally decided to put the scheduling code in the kernel, but upon reflection I realized that it should be in the USC. Scheduling code is integrally involved with processes, which are handled by the USC. Thus, putting the scheduler in the USC makes logical sense - and the code is simpler than it would otherwise be.

Of note here is the use of the Unisys flag: if it is set, scheduling is bypassed. The flag is used when a process needs to have sole control of the system. The process that sets the flag is the one that gains control. In essence, such a process has infinite priority and cannot be preempted. We will cover the system service that sets/resets the flag in a future article. One might ask what possible purpose this serves. One example would be a disk defragmenter that needs to modify the on-disk structure without other processes modifying that structure at the same time. Or it might be used when swapping one of the executive components. Obviously, use of such a feature should be limited to times when the system is otherwise idle and, even then, to very short intervals - and only when necessary. And it requires elevated privileges, for obvious reasons.

procedure TKernel.Idle ;

begin
    FIP.Idle ;
    SSC.Check_Timers ;
    if( Unisys ) then
    begin
        HAL.Idle ;
    end else
    if( not USC.Schedule ) then
    begin
        HAL.Idle ;
    end ;
end ;
We've updated the kernel's Idle method to also check for scheduling. If we're idle, it means that no processes are currently running. If one becomes computable between system timer "ticks", we want to give it a chance to start right away rather than wait for the next interrupt. Thus, we minimize unnecessary idle CPU time.

function TUSC.Schedule : boolean ;

var PID : integer ;
    Current_Process, Process : TProcess ;

begin
    Result := False ;

    // Check for pending real-time processes...
    Current_Process := Get_Process( Kernel.PID ) ;
    PID := Next_Computable_RT_Process( 0 ) ;
    if( ( PID > 0 ) and ( PID <> Kernel.PID ) ) then
    begin
        Process := Get_Process( PID ) ;
        if(
            ( Current_Process = nil )
            or
            ( Process.Priority > Current_Process.Priority )
          ) then
        begin
            // Preempt current process
            if( Current_Process <> nil ) then
            begin
                Block( Kernel.PID, SCH_C_COM, nil ) ;
            end ;
            Unblock( PID ) ;
            Process := Get_Process( PID ) ;
            Process.Priority := Process.Priority and ( not 1 ) ;
            Result := True ;
            exit ;
        end
    end ;
This new method in the USC schedules the next process to run. First, we check for any real-time processes that are ready to run and have a higher priority than the current process. If one is found, it will always preempt the currently running process. In such a case, we "block" the current process, but set its state to computable. This suspends the process but leaves it eligible to continue when the opportunity presents itself. Then we unblock the new real-time process. Note that it doesn't matter whether the current process has used up its quantum or not: real-time processes always start as soon as possible. Of course, a higher-priority real-time process will take precedence over a lower-priority real-time process.

Note also that we trim out the low priority bit (converting any odd priorities to even). The low priority bit, as mentioned before, is used to give a process a slight boost after an I/O block. Since we've chosen that process above any of lower priority, the increased priority has done its job and we can now return it to normal. Of course, if the priority is already even, the low bit is already 0 and the operation effectively does nothing.

If a process is scheduled, the method returns true. Otherwise we return false.

    PID := Next_Computable_Process( 0 ) ;
    if(
        ( Current_Process = nil )
        or
        ( HAL.Timestamp - Current_Process.Start_Time >= Quantum )
      ) then // Current process has met or exceeded quanta
    begin
        if( ( PID > 0 ) and ( PID <> Kernel.PID ) ) then
        begin
            // Different process running than the next one scheduled
            if( Current_Process <> nil ) then
            begin
                // Block the current process...
                Block( Kernel.PID, SCH_C_COM, nil ) ;
            end ;
            Kernel.PID := PID ;
            Unblock( PID ) ;
            Process := Get_Process( PID ) ; // The newly scheduled process
            Process.Priority := Process.Priority and ( not 1 ) ;
        end ;
        Result := True ;
    end ;
end ; // TUSC.Schedule
If no real-time processes are ready, we turn to checking for normal processes. If one is found, we stop the current process and start the new one, as above. However, we only do this if the current process has reached or exceeded its quantum. Note that one effect of this code is that if the next scheduled process is the current one, nothing changes - the current process just keeps running. This is the case if there are no processes of the same, or higher, priority that are in the computable state.

function TUSC.Next_Computable_RT_Process( cpuid : integer ) : TPID ;

var I : integer ;
    Q : TInteger_List ;

begin
    Result := 0 ; // Default

    for I := High_Priority downto 128 do
    begin
        Q := Priority_Queues[ I ] ;
        if( ( Q <> nil ) and ( Q.Count > 0 ) ) then
        begin
            Result := Q[ 0 ] ;
            exit ;
        end ;
    end ;
end ;
This function locates the next real-time process to schedule. We limit the lower bound of the check to the lowest real-time priority of 128. There are a couple of important things to take note of. First, the High_Priority and Low_Priority variables exist to improve performance. Since this routine is called on each system timer tick, we want to do the least amount of work possible. Thus, we keep track of the highest and lowest priorities that have been assigned to processes and limit our queue iteration to that range. If no real-time processes have ever existed, the highest priority will be less than 128 and thus the loop will immediately short-circuit, returning 0. Together, the low and high priorities provide a smaller "window" into the entire range of 256 priorities.

The next thing to note is that our implementation of the scheduling algorithm, as mentioned above, works this way: only processes ready to run (computable) are kept in the queues. Any blocked process is removed from the queues. When unblocked, a process is put back into the queue corresponding to the process' priority. If a priority is never used, the queue isn't created, thus the check for nil queues.

Finally, note that the queues are actually lists (TInteger_List instances). Processes are added to the end of a list and removed from the beginning - thus each list behaves like a queue, which supports our round-robin scheduling.
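
Although the Unblock method itself is not shown in this article, the queue side of it is conceptually just the reverse of the removal we will see in the Block code later on. Here is a sketch under that assumption (the real method also restores the process state and context):

// Conceptual sketch of the queue manipulation implied by Unblock.
Process.State := SCH_C_COM ; // The process is now computable
if( Priority_Queues[ Process.Priority ] = nil ) then
begin
    Priority_Queues[ Process.Priority ] := TInteger_List.Create ; // Create the queue on first use
end ;
Priority_Queues[ Process.Priority ].Add( Process._PID ) ; // Append to the end - round-robin order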

function TUSC.Next_Computable_Process( cpuid : integer ) : TPID ;

var H, I : integer ;
    Q : TInteger_List ;

begin
    Result := 0 ; // Default

    H := High_Priority ;
    if( H > 127 ) then
    begin
        H := 127 ;
    end ;
    for I := H downto Low_Priority do
    begin
        Q := Priority_Queues[ I ] ;
        if( ( Q = nil ) or ( Q.Count = 0 ) ) then
        begin
            if( I = High_Priority ) then
            begin
                if( I > 0 ) then
                begin
                    High_Priority := I - 1 ;
                end ;
            end ;
        end else
        begin
            Result := Q[ 0 ] ;
            exit ;
        end ;
    end ;
end ;
Similar to the previous method, this one iterates through the priority queues, starting at the highest assigned priority (capped at 127, since we don't want to check the real-time priority queues again). If the queue at the current high-priority mark is empty or nil, we decrement High_Priority to reduce the range for the next call.

    _Quantum := Kernel.System_Parameters.Quantum ;
This code is added to the USC Setup method. It obtains the process quantum value for the system. We will discuss system parameters in a future article. We make a local copy because it will be faster to access than calling the kernel each time. Quantum is a system parameter because it allows the system administrator to alter the default in order to customize scheduler behavior. Note that the default quantum (subject to future review) is set at 50 ms, which means that the scheduler could switch between CPU-intensive processes up to 20 times per second - and far more often for processes that are not CPU-intensive. On a dedicated non-interactive system, the quantum could be set much higher to reduce the overhead of task switching by reducing the number of those operations per second.
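
As a rough illustration of the tuning trade-off (these numbers are examples; only the 50 ms default comes from the text above):

// Approximate upper bound on context switches per second for CPU-bound work:
//   switches per second ~ 1000 ms / quantum in ms
//   quantum =  50 ms  ->  up to 20 switches per second (the default)
//   quantum = 200 ms  ->  up to  5 switches per second (a batch-oriented setting)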

procedure TUSC.Set_Process_Priority( P : TProcess ; Value : integer ;
    Force : boolean = False ) ;

begin
    Value := Value and $FF ; // Limit to 0-255
    if( ( Value <> P.Priority ) or Force ) then // Changing priority
    begin
        if( ( P.State = SCH_C_CUR ) or ( P.State = SCH_C_RUN ) or ( P.State = SCH_C_COM ) ) then
        begin
            // Currently running or computable process...
            if( Priority_Queues[ P.Priority ] <> nil ) then
            begin
                Priority_Queues[ P.Priority ].Remove( P._PID ) ; // Remove from old queue
            end ;
            if( Priority_Queues[ Value ] = nil ) then
            begin
                Priority_Queues[ Value ] := TInteger_List.Create ;
            end ;
            Priority_Queues[ Value ].Add( P._PID ) ; // Add to new queue
        end ;
        P.Priority := Value ; // Alter process priority

        // Adjust active priority window...
        if( Value < Low_Priority ) then
        begin
            Low_Priority := Value ;
        end ;
        if( Value > High_Priority ) then
        begin
            High_Priority := Value ;
        end ;
    end ;
end ; // TUSC.Set_Process_Priority
This method is used to set a process priority. We only do something if the new priority differs from the previous priority or the Force parameter is True. If the process is in the old priority queue, we remove it. Then we add it to the new priority queue, creating the queue if it doesn't exist. Finally, we adjust the high and low priority values.
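
As a usage illustration (a hypothetical call, not from the UOS sources), a process could be given the default priority of 64 like this:

Set_Process_Priority( New_Process, 64 ) ; // New_Process is a placeholder TProcess reference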

    if( ( Process.State = SCH_C_CUR ) and ( State <> SCH_C_CUR ) ) then
    begin // No longer running
        // Update process time...
        T := HAL.Timestamp - Process.Start_Time ; // Time accumulated by process
        Process.Accumulated_Time := Process.Accumulated_Time + T ; // Accumulate time
        T := Process.Accumulated_Time div One_Second ; // Number of CPU seconds
        Process._Usage.CPUTIM := Process._Usage.CPUTIM + T ;
        T := T * One_Second ;
        Process.Accumulated_Time := Process.Accumulated_Time - T ; // Subtract out full seconds
        if( ( Process._Quotas.CPUTIM > 0 ) // Process time quota is limited
            and
            ( Process._Usage.CPUTIM > Process._Quotas.CPUTIM ) // Exceeded quota
          ) then
        begin
            _Delete_Process( PID ) ;
            Process := nil ;
        end else
        begin
            if( Priority_Queues[ Process.Priority ] <> nil ) then
            begin
                Priority_Queues[ Process.Priority ].Remove( Process._PID ) ;
            end ;
            Process.State := State ;
            Priority := Process.Priority ;
            if( ( State = SCH_C_IOW ) or ( State = SCH_C_IOR ) ) then
            begin
                if( Priority < 255 ) then
                begin
                    inc( Priority ) ;
                end ;
                Set_Process_Priority( Process, Priority, True ) ;
            end ;
        end ;
    end ; // if( ( Process.State = SCH_C_CUR ) and ( State <> SCH_C_CUR ) )
    if( Process <> nil ) then // Process not deleted
    begin
        Process.State := State ;
        Process.Context := Context ;
    end ;
    Kernel.Yield ;
This code is added to the USC Block method. First, we only do the work in this section if we are converting from the running state to some other state. If the process is staying in the running state, there is no need for all of this work. Next, we calculate the amount of time from now back to when the process started this quantum and add that to the accumulated time. Then we determine how many whole seconds that amounts to, add those seconds to the process' CPUTIM usage, and subtract them back out of the accumulated time (leaving whatever fraction of a second remains). Thus CPUTIM is always in whole seconds. Then, if the usage exceeds the process' CPUTIM quota, we terminate the process. Note: if the CPUTIM quota is 0, it means unlimited. One might wonder why we bother calculating the CPUTIM usage if there is no quota. The reason is that it is needed for accounting purposes, which we will cover in the future. Also note that a process could run past its CPUTIM quota if it is the only process running on the system (an unlikely scenario even for a single-user system). Even on a busy system, it might slightly exceed the quota, since the check is only made when the process runs out of quantum or is preempted. However, even with a large quantum, the overrun will almost always be less than a full second.

If the process was not terminated for exceeding its quota, we remove it from its priority queue, if it is present. Also, if the blocking state indicates an I/O (IOR for an I/O read, IOW for an I/O write), we increment the priority by 1, as discussed earlier.
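
A quick worked example of the whole-second accounting (the One_Second value here is purely illustrative - it is whatever number of HAL timestamp units make up one second):

// Illustration only: assume One_Second = 1000000 and that the process has
// accumulated 2300000 units (2.3 seconds) at the moment it is blocked.
//   T := 2300000 div 1000000 ;              // T = 2 whole seconds
//   CPUTIM := CPUTIM + 2 ;                  // charge two seconds of CPU time
//   Accumulated := 2300000 - 2 * 1000000 ;  // 300000 units (0.3 s) carry over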

Stepping back a bit, an example sequence that would happen with a given process would look something like this:

  • Process requests a wait for a timer using event flag 0.
  • Process is blocked and put into a LEF state.
  • Process is removed from its priority queue.
  • Process is suspended.
  • ... other system activity happens and time passes ...
  • System timer sends an interrupt to the kernel.
  • Kernel checks for expired timers and finds the one requested in the first step.
  • Process is unblocked and set to a COM state.
  • Process is added to the queue corresponding to its current priority.
  • ... eventually the process is scheduled ...
  • Process is set to a CUR state.
  • Process is unsuspended and continues to run.
  • Each time the system timer fires, the kernel calls the scheduler.
  • Eventually the process reaches or exceeds its quantum and the scheduler finds that another process is due to run.
  • Process is blocked with a COM state.
  • Process is suspended.
And so forth, with the process switching from the running (CUR) state to the computable (COM) state as its quantum is used up, and back again when it is scheduled once more. Of course, there may be I/O or timer waits along the way as well.

In the next article, we will look at the REPLY utility.