Copyright © 2016 by Alan Conroy. This article may be copied in whole or in part as long as this copyright is included.


1 Introduction
2 Ground Rules

Building a File System
3 File Systems
4 File Content Data Structure
5 Allocation Cluster Manager
6 Exceptions and Emancipation
7 Base Classes, Testing, and More
8 File Meta Data
9 Native File Class
10 Our File System
11 Allocation Table
12 File System Support Code
13 Initializing the File System
14 Contiguous Files
15 Rebuilding the File System
16 Native File System Support Methods
17 Lookups, Wildcards, and Unicode, Oh My
18 Finishing the File System Class

The Init Program
19 Hardware Abstraction and UOS Architecture
20 Init Command Mode
21 Using Our File System
22 Hardware and Device Lists
23 Fun with Stores: Partitions
24 Fun with Stores: RAID
25 Fun with Stores: RAM Disks
26 Init wrap-up

The Executive
27 Overview of The Executive
28 Starting the Kernel
29 The Kernel
30 Making a Store Bootable
31 The MMC
32 The HMC
33 Loading the components
34 Using the File Processor
35 Symbols and the SSC
36 The File Processor and Device Management
37 The File Processor and File System Management
38 Finishing Executive Startup

Users and Security
39 Introduction to Users and Security
40 More Fun With Stores: File Heaps
41 File Heaps, part 2
42 SysUAF
43 TUser
44 SysUAF API

Terminal I/O
45 Shells and UCL
46 UOS API, the Application Side
47 UOS API, the Executive Side
48 I/O Devices
49 Streams
50 Terminal Output Filters
51 The TTerminal Class
52 Handles
53 Putting it All Together
54 Getting Terminal Input
55 QIO
56 Cooking Terminal Input
57 Putting it all together, part 2
58 Quotas and I/O

UCL
59 UCL Basics
60 Symbol Substitution
61 Command execution
62 Command execution, part 2
63 Command Abbreviation
64 ASTs
65 Expressions, Part 1
66 Expressions, Part 2: Support code
67 Expressions, part 3: Parsing
68 SYS_GETJPIW and SYS_TRNLNM
69 Expressions, part 4: Evaluation

UCL Lexical Functions
70 PROCESS_SCAN
71 PROCESS_SCAN, Part 2
72 TProcess updates
73 Unicode revisited
74 Lexical functions: F$CONTEXT
75 Lexical functions: F$PID
76 Lexical Functions: F$CUNITS
77 Lexical Functions: F$CVSI and F$CVUI
78 UOS Date and Time Formatting
79 Lexical Functions: F$CVTIME
80 LIB_CVTIME
81 Date/Time Contexts
82 SYS_GETTIM, LIB_Get_Timestamp, SYS_ASCTIM, and LIB_SYS_ASCTIM
83 Lexical Functions: F$DELTA_TIME
84 Lexical functions: F$DEVICE
85 SYS_DEVICE_SCAN
86 Lexical functions: F$DIRECTORY
87 Lexical functions: F$EDIT and F$ELEMENT
88 Lexical functions: F$ENVIRONMENT
89 SYS_GETUAI
90 Lexical functions: F$EXTRACT and F$IDENTIFIER
91 LIB_FAO and LIB_FAOL
92 LIB_FAO and LIB_FAOL, part 2
93 Lexical functions: F$FAO
94 File Processing Structures
95 Lexical functions: F$FILE_ATTRIBUTES
96 SYS_DISPLAY
97 Lexical functions: F$GETDVI
98 Parse_GetDVI
99 GetDVI
100 GetDVI, part 2
101 GetDVI, part 3
102 Lexical functions: F$GETJPI
103 GETJPI
104 Lexical functions: F$GETSYI
105 GETSYI
106 Lexical functions: F$INTEGER, F$LENGTH, F$LOCATE, and F$MATCH_WILD
107 Lexical function: F$PARSE
108 FILESCAN
109 SYS_PARSE
110 Lexical Functions: F$MODE, F$PRIVILEGE, and F$PROCESS
111 File Lookup Service
112 Lexical Functions: F$SEARCH
113 SYS_SEARCH
114 F$SETPRV and SYS_SETPRV
115 Lexical Functions: F$STRING, F$TIME, and F$TYPE
116 More on symbols
117 Lexical Functions: F$TRNLNM
118 SYS_TRNLNM, Part 2
119 Lexical functions: F$UNIQUE, F$USER, and F$VERIFY
120 Lexical functions: F$MESSAGE
121 TUOS_File_Wrapper
122 OPEN, CLOSE, and READ system services

UCL Commands
123 WRITE
124 Symbol assignment
125 The @ command
126 @ and EXIT
127 CRELNT system service
128 DELLNT system service
129 IF...THEN...ELSE
130 Comments, labels, and GOTO
131 GOSUB and RETURN
132 CALL, SUBROUTINE, and ENDSUBROUTINE
133 ON, SET {NO}ON, and error handling
134 INQUIRE
135 SYS_WRITE Service
136 OPEN
137 CLOSE
138 DELLNM system service
139 READ
140 Command Recall
141 RECALL
142 RUN
143 LIB_RUN
144 The Data Stream Interface
145 Preparing for execution
146 EOJ and LOGOUT
147 SYS_DELPROC and LIB_GET_FOREIGN

CUSPs and utilities
148 The I/O Queue
149 Timers
150 Logging in, part one
151 Logging in, part 2
152 System configuration
153 SET NODE utility
154 UUI
155 SETTERM utility
156 SETTERM utility, part 2
157 SETTERM utility, part 3
158 AUTHORIZE utility
159 AUTHORIZE utility, UI
160 AUTHORIZE utility, Access Restrictions
161 AUTHORIZE utility, Part 4
162 AUTHORIZE utility, Reporting
163 AUTHORIZE utility, Part 6
164 Authentication
165 Hashlib
166 Authenticate, Part 7
167 Logging in, part 3
168 DAY_OF_WEEK, CVT_FROM_INTERNAL_TIME, and SPAWN
169 DAY_OF_WEEK and CVT_FROM_INTERNAL_TIME
170 LIB_SPAWN
171 CREPRC
172 CREPRC, Part 2
173 COPY
174 COPY, part 2
175 COPY, part 3
176 COPY, part 4
177 LIB_Get_Default_File_Protection and LIB_Substitute_Wildcards
178 CREATESTREAM, STREAMNAME, and Set_Contiguous
179 Help Files
180 LBR Services
181 LBR Services, Part 2
182 LIBRARY utility
183 LIBRARY utility, Part 2
184 FS Services
185 FS Services, Part 2
186 Implementing Help
187 HELP
188 HELP, Part 2
189 DMG_Get_Key and LIB_Put_Formatted_Output
190 LIBRARY utility, Part 3
191 Shutting Down UOS
192 SHUTDOWN
193 WAIT
194 SETIMR
195 WAITFR and Scheduling
196 REPLY, OPCOM, and Mailboxes
197 REPLY utility
198 Mailboxes
199 BRKTHRU
200 OPCOM
201 Mailbox Services
202 Mailboxes, Part 2
203 DEFINE
204 CRELNM
205 DISABLE
206 STOP
207 OPCCRASH and SHUTDOWN
208 APPEND

Glossary/Index


Downloads

The MMC

Loading the MMC
After a brief visit back to Init land in the last article, we now return to the Kernel startup routine. We left off with the loading of the file system. After that, we attach the store to the file system.

    M := TUOS_Managed_Store.Create ;
    M.Store := S ;
    FS.Store := M ;

Before we can proceed further, we will need a heap. But before we can create a heap, we need a Memory Management Component from which the heap can request chunks of memory. The following code reads the bootstrap header from the store, grabs the MMC position and length from it, and then calls Get_Image to load the component.

    // Load MMC and HMC components...
    S.Read_Data( Buffer^, Bootstrap_Fixed.Header, S.Min_Storage, E ) ;
    if( E <> nil ) then
    begin
        if( H.Console <> nil ) then
        begin
            H.Console.Output( PChar( E.Error_Text( _S, _T ) + CRLF ), -1 ) ;
        end ;
        H.Halt ;
    end ;
    move( Buffer[ 0 ], FSH, sizeof( FSH ) ) ;
    _MMC := TUOS_Memory_Manager( Get_Image( H, S, FSH.MMC_Position, FSH.MMC_Length, 1 ) ) ;
    if( _MMC = nil ) then // None specified on boot device
    begin
        if( H.Console <> nil ) then
        begin
            H.Console.Output( 'Could not load MMC', -1 ) ;
        end ;
        H.Halt ;
    end ;

The MMC
The MMC interfaces directly with the HAL to manage the computer's memory. The HAL provides a somewhat abstracted interface to the memory management hardware. The MMC provides a view of the HAL memory interface that is compatible with the way that UOS uses memory. There are many memory management schemes. Being platform-agnostic, UOS has to be able to handle any of them. We will discuss more advanced memory management in a future article. For now, we will address the simplest scheme. That is, we will view memory as a large buffer that is accessible in its entirety from any program. Note that such a scheme is inherently insecure since any program can access the memory used by any other program (or even UOS itself). In such a situation, all UOS can do is help to keep honest programs honest. It is wholly unable to stop malicious - or even erroneous - code from interfering with the operation of the system.
However, such a simple management scheme will be helpful in understanding the operation of the MMC since we can present a simplified view of MMC operations. Of course, UOS will run on such hardware. Why, you ask, would anyone want to run a computer with such severe security risks? As a general rule, you wouldn't. But in the case of some embedded systems, where the only programs that run are known to be safe and are burned into ROM, such CPU capabilities are not needed and older, less costly, CPUs can be used.

The purpose of the MMC is to manage memory by responding to requests to allocate RAM for a process. But the MMC also needs memory for its own operations. During startup, this creates a bit of a chicken-and-egg scenario: the MMC needs to allocate memory for its control structures, but without those structures it cannot allocate memory. To get around this, it reserves a chunk of RAM large enough to meet its initial needs. Until startup is finished, all requests for RAM simply allocate from this reserved space, keeping track of how much is used with a high-water mark.

Here is the definition of our MMC descendant of the abstract base TUOS_Memory_Manager class.

type TMMC = class( TUOS_Memory_Manager )
                public // Constructors and destructors...
                    constructor Create ;

                private // Components...
                    _Kernel : TUOS_Kernel ;
                    __HAL : THAL ;
                    _USC : TUOS_User_Security ;

                private // Cached HAL data...
                    RAM_Page_Size : integer ;
                    Allocation_Types : string[ 16 ] ;
                    Maximum_Allocation : int64 ;
                    Demand_Paging : boolean ;
                    Virtual_Memory : boolean ;

                private // Page support...
                    _Page_Table : TInteger_List ; // Owner PID and flags for each page
                    _Free_Pages : TAT64 ;
                    _Kernel_Pages : TUOS_Page_List ; // Kernel page list (see process page lists description)

                private // Other instance data...
                    Highest_Address : int64 ;
                    Reserved : int64 ; // High-water mark (highest reserved RAM address)
                    _Startup : boolean ;

                protected // Internal utility methods...
                    function HAL : THAL ;
                    function Page_Table : TInteger_List ;
                    function Free_Pages : TAT64 ;
                    function Kernel_Pages : TUOS_Page_List ;
                    function USC : TUOS_User_Security ;
                    function Space_For_Page_Tables : int64 ;
                    procedure Ensure_Pages ;

                public // API...
                    function Is_Class( N : Pchar ) : boolean ; override ;
                    procedure Set_Error( E : longint ) ;
                    procedure Set_Kernel( K : TUOS_Kernel ) ; override ;
                    function Create_Process_Page_List : TUOS_Page_List ;
                        override ;
                    function Allocate( PID : cardinal ; var Size : int64 ;
                        Typ : char ; Index, Flags : integer ) : int64 ;
                        override ;
                    procedure Set_Aside( Buffer : PChar ;
                        Buffer_Size : longint ) ; override ;
                    procedure Release_Allocation( PID : cardinal ; Typ : char ;
                        Index : integer ) ; override ;
                    function MemAvailable( PID : cardinal ; Typ : Char ;
                        Index : integer ) : int64 ; override ;
                    procedure End_Startup ; override ;
            end ; // TMMC

And here is the constructor:
// Constructors and destructors...

constructor TMMC.Create ;

begin
    inherited Create ;

    _Startup := True ;
end ;

All our constructor does is set the _Startup flag. We don't do anything else because part of the design philosophy for this component is to defer allocation of memory until it is absolutely required.

Here are a couple of the internal utility routines for the MMC:

// Internal utility methods...

function TMMC.HAL : THAL ;

begin
    if( __HAL = nil ) then
    begin
        __HAL := _Kernel.HAL ;
        // NOTE: The following values are assumed not to change while we are running
        RAM_Page_Size := __HAL.RAM_Page_Size ;
        Allocation_Types := __HAL.Allocation_Types ;
        Maximum_Allocation := __HAL.Maximum_Allocation ;
        Demand_Paging := __HAL.Demand_Paging ;
        Virtual_Memory := __HAL.Virtual_Memory ;
    end ;
    Result := __HAL ;
end ;


function TMMC.USC : TUOS_User_Security ;

begin
    if( _USC = nil ) then
    begin
        _USC := _Kernel.USC ;
    end ;
    Result := _USC ;
end ;


// API...

function TMMC.Is_Class( N : Pchar ) : boolean ;

var _N : string ;

begin
    _N := lowercase( string( N ) ) ;
    Result := _N = 'tmmc' ;
end ;


procedure TMMC.Set_Error( E : longint ) ;

begin
    Set_Last_Error( Create_Error( E ) ) ;
end ;

None of these does much. The HAL method returns the current instance of the HAL. If this is the first time, we request it from the Kernel and then cache some of the memory-related settings so that we reduce the number of far calls to that component. As mentioned in the comments, we assume that none of these values will change while we are running because they are all CPU hardware-dependent and the CPU won't be changing out underneath us. If this kind of capability exists in future hardware, then we'll have to revisit this, but the implications of such a theoretical hardware platform are far-reaching.
The USC method returns the USC component from the Kernel, caching it on first use in the same way. Is_Class and Set_Error are self-explanatory.

One of the first things that the Kernel does after creating the MMC, is to tell it about itself. This results in a call to Set_Kernel:

procedure TMMC.Set_Kernel( K : TUOS_Kernel ) ;

begin
    _Kernel := K ;
    if( ( K <> nil ) and _Startup ) then {ELSE:UNTESTED}
    begin
        HAL ; // Make sure we have data initialized
        __HMC := _Kernel.HMC ;
        if( __HMC <> nil ) then
        begin
{$WARNINGS OFF}
            SetMemoryManager( MMC_MM ) ;
{$WARNINGS ON}
        end ;
        Ensure_Pages ; // Set up page arrays
        Kernel_Pages ; // Allocate the kernel page list asap
    end ;
end ;

We assign our internal Kernel pointer, and if the passed value isn't nil and we are still in startup mode (which will be true), we make sure the HAL data is cached and then ask for a copy of the executive heap (HMC). If that exists, we hook our memory management calls into it. In fact, the HMC will not yet exist at this point in the way we are using the code in the Kernel, so the hook is skipped. Next we call Ensure_Pages and Kernel_Pages to set up our basic control structures. Before we look at those routines, we need to discuss what a page is.
Although memory can be accessed at the byte level, for the sake of the memory management hardware it is broken into larger chunks called "pages". The size of these pages varies depending on hardware. On the PDP-11, the page size was 8Kb. On newer Intel CPUs, it is 4Kb. On CPUs without memory management, it is an arbitrary value set by the HAL. Pages in memory are like clusters on a store: they are the minimum allocation/deallocation units. Thus, when you need even one byte, an entire page must be allocated. Sub-allocation of a page is one of the things that a heap does for us. We will get to that in another article.
What makes memory different from disk stores is that the same locations in memory are shared between different programs. This requires us to be able to swap memory contents out for one program and use them for another. Memory swapping is a topic for a future article, so for now we will ignore that aspect of memory management. But because we will eventually need to concern ourselves with it, the MMC must keep track of which pages belong to which process (which is how UOS tells running programs apart). We also have to keep track of which pages are not in use and therefore available when requests for memory come in. A simple allocation table would suffice to track free/allocated RAM, but because we also have to track other things (such as the currently owning process ID), we need something more extensive than the simple allocation tables we used for our stores. Further, each process needs a local list of the pages allocated to it. We can't just scan the master page table when we need to know, for two reasons: 1) it would be very slow, and 2) when we get to virtual memory support, the local list of pages doesn't necessarily correspond to the system page table. We will save all that for later - for now, just realize that there is a master page table and a per-process page list.
Finally, since the executive does operations on behalf of the whole system, it operates as if it were its own process, separate from all other processes running on the system. So, we reserve the process ID (PID) of 0 for executive-specific operations. As you will see, many of the MMC operations take a PID as one of the parameters. In the case of the executive, the PID will be 0.
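To make the page bookkeeping concrete, here is a minimal sketch of the idea - this is not UOS code; the program name, flag values, page size, and table size are all invented for illustration:

program Page_Math_Sketch ;

const Page_Size = 4096 ; // Bytes per page (hardware-dependent in the real MMC)
      Flag_Locked = 1 ;  // Hypothetical flag bits, for illustration only
      Flag_No_RAM = 2 ;

var Page_Table : array[ 0..1023 ] of integer ; // One flags entry per physical page
    Address, Page : integer ;

begin
    // Any byte address falls in exactly one page...
    Address := 123456 ;
    Page := Address div Page_Size ; // Page 30 for this address
    writeln( 'Address ', Address, ' lies in page ', Page ) ;

    // ...and a page's flags describe the entire page, not individual bytes.
    Page_Table[ Page ] := Flag_Locked ;
    if( ( Page_Table[ Page ] and Flag_No_RAM ) = 0 ) then
    begin
        writeln( 'Page ', Page, ' maps real RAM' ) ;
    end ;
end.

The real page table stores the owning PID as well as the flags, but the address-to-page arithmetic is the same.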

Here is the code for Ensure_Pages:

procedure TMMC.Ensure_Pages ;

var Flags : integer ;
    Highest : int64 ;
    Info : TMemory_Info ;
    Index : integer ;

begin
    // Find highest usable physical RAM address...
    if( Highest_Address = 0 ) then
    begin
        Index := 0 ;
        while( true ) do
        begin
            Info := HAL.Memory( Index ) ;
            if( Info.Memory_Type = MT_Invalid ) then
            begin
                break ;
            end ;
            Highest_Address := Info.High ;
            inc( Index ) ;
        end ; // while( true )
    end ; // if( Highest_Address = 0 )

The first thing we do is determine the highest usable address in memory and assign it to the Highest_Address instance variable. If the variable is already set, we don't need to do it again.

    // Setup...
    if( Page_Table.Count >= ( Highest_Address + 1 ) div RAM_Page_Size ) then
    begin
        exit ; // Early out
    end ;
    if( _Page_Table.Capacity < ( ( Highest_Address + 1 ) div RAM_Page_Size ) + 1 ) then
    begin
        _Page_Table.Capacity := ( Highest_Address + 1 ) div RAM_Page_Size + 1 ;
    end ;

_Page_Table is an instance variable that is an instance of an integer list class. It operates much like a dynamic array and is based on the Delphi TList class. Page_Table is a function that wraps this variable, creating it if needed. Having used it once in this routine, we can safely use the _Page_Table variable directly, knowing that the class has been created. _Page_Table is our list of memory pages, starting with page 0 at RAM offset 0. On most systems, the maximum possible memory address is far beyond the amount of installed RAM, so we can ignore any pages beyond the end of physical memory. We may end up with some wasted (unused) items in the page table if there are large extents of non-existent memory before the end of physical memory, but this is something we will live with for the sake of simple code. Besides, this would be an unusual situation in practice. RAM_Page_Size is the size of memory pages, in bytes. We take the highest usable address and divide by the page size to determine the total number of pages represented by our physical RAM and its addresses. We then set the capacity of the page table to this number of pages, unless it is already large enough. The capacity of a list is the physical space reserved for it; the count is the logical size of the list. If we simply add items to the end of the list, the memory for the list may have to be reallocated since it is stored contiguously. This can cause memory fragmentation, so we set the capacity once so there is a single allocation for the list data. For example, with 4 Gb of usable RAM and 4 Kb pages, the table needs 1,048,576 entries, so reserving that capacity up front avoids repeated reallocations as entries are added. The logical size (count) will still be 0 at this point. Remember that the first item in a list is index 0, which corresponds to page 0 for this list.

    // Create page entries...
    Index := 1 ;
    Highest := -1 ;
    Info := HAL.Memory( 0 ) ;
    Info.Low := ( Info.Low + RAM_Page_Size - 1 ) and ( not ( RAM_Page_Size - 1 ) ) ;
    while( Highest <= Highest_Address ) do
    begin
        // Get next memory segment, if needed...
        if( Info.Memory_Type = MT_Invalid ) then
        begin
            break ; // End of available memory
        end ;
        if( Highest > Info.High ) then // Need next segment
        begin
            Info := HAL.Memory( Index ) ;
            Info.Low := ( Info.Low + RAM_Page_Size - 1 ) and ( not ( RAM_Page_Size - 1 ) ) ;
            inc( Index ) ;
        end ;

        // Process next page in segment...
        Flags := 0 ;
        if(
            ( Highest < Info.Low - 1 )
            or
            ( Highest + RAM_Page_Size > Info.High + 1 )
          ) then // Non-existent RAM
        begin
            Flags := Flags or Page_Flag_No_RAM or Page_Flag_Locked ;
        end else
        if( Highest < Reserved ) then // Reserved RAM
        begin
            Flags := Flags or Page_Flag_Locked ;
        end else
        begin
            case Info.Memory_Type of
                MT_ROM : Flags := Flags or Page_Flag_Read_Only ;
                MT_WOM : Flags := Flags or Page_Flag_Write_Only ;
            end ;
        end ;
        if( Flags <> 0 ) then
        begin
            Free_Pages.Allocate_At( Highest + 1, RAM_Page_Size ) ;
        end ;
        _Page_Table.Add( Flags ) ;
        Highest := Highest + RAM_Page_Size ;
    end ; // while( Highest <= Highest_Address )

    // Make sure any left-over bits in the AT are set
    while( Free_Pages.Allocate_At( Highest + 1, RAM_Page_Size ) ) do
    begin
        Highest := Highest + RAM_Page_Size ;
    end ;
end ; // TMMC.Ensure_Pages

The remainder of the function makes sure that the individual pages are set up in the page table. On systems with large amounts of RAM, scanning the page table for available (unused) memory would be slow. So, we will use an allocation table called _Free_Pages to allow us to quickly locate contiguous chunks of unallocated memory, whereas the page table is something we can look at to get specifics about a given page. Each page in the page list is an integer that contains flags that tell us something about the page.
What this code does is go through the memory chunks provided by the HAL and assign flags for each page that indicate whether the page maps non-existent RAM, or read-only (or write-only) RAM. In essence, it maps the arbitrary chunks of RAM given by the HAL into a linear set of pages. Note that since pages have to be treated as atomic units of RAM, we can't have a page where only part is read-only or part is write-only. Although that may be true of the actual RAM, we will mark the whole page as read-only (Page_Flag_Read_Only) or write-only (Page_Flag_Write_Only), as appropriate. Likewise, if only some of the memory covered by a page exists, the whole page is treated as non-existent (Page_Flag_No_RAM). Finally, non-existent RAM pages, and those marked as reserved by the HAL, have the locked flag set (Page_Flag_Locked). Reserved RAM is memory that is set aside for something and cannot be reused later. One example is the memory that was allocated to load the HAL and Kernel code back in the Kernel startup routine. A locked page is one which cannot be adjusted later (such as being allocated to a specific process). Note the use of the Reserved instance variable. This is different from the memory reserved by the HAL, but it is reserved nonetheless. We will explain it when we talk about the Allocate method.
As mentioned above, there are two kinds of bookkeeping that we need for memory management, and we must be able to do both quickly. Memory management is one of the fundamental services that any operating system provides, and it is used frequently, so anything that slows down the MMC will slow down the entire system. Hence we have the page table to quickly get information on any given page, and the free-pages table to quickly search for unused pages and mark them as used or unused.
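As a rough illustration of the kind of search the free-pages table makes possible, here is a toy sketch - the names and the naive linear scan are invented for the example, and the real TAT64 allocation table is far more compact and efficient - that finds a run of contiguous free pages and marks it allocated:

program Free_Page_Sketch ;

const Total_Pages = 16 ;

var In_Use : array[ 0..Total_Pages - 1 ] of boolean ;

function Find_Contiguous( Count : integer ) : integer ;

var Fill, Page, Run, Start : integer ;

begin
    Result := -1 ; // -1 = no run of that many free pages
    Run := 0 ;
    Start := 0 ;
    for Page := 0 to Total_Pages - 1 do
    begin
        if( In_Use[ Page ] ) then
        begin
            Run := 0 ; // Run broken by an allocated page
            Start := Page + 1 ;
        end else
        begin
            inc( Run ) ;
            if( Run = Count ) then // Found a big-enough run
            begin
                for Fill := Start to Start + Count - 1 do
                begin
                    In_Use[ Fill ] := True ; // Mark the run as allocated
                end ;
                Result := Start ;
                exit ;
            end ;
        end ;
    end ;
end ;

begin
    In_Use[ 2 ] := True ; // Pretend page 2 is already reserved
    writeln( 'First run of 3 free pages starts at page ', Find_Contiguous( 3 ) ) ;
end.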

Here are the Page_Table, Free_Pages, and Kernel_Pages functions:

function TMMC.Page_Table : TInteger_List ;

begin
    if( _Page_Table = nil ) then
    begin
        // Create page lists...
        _Page_Table := TInteger_List.Create ;
    end ; // if( _Page_Table = nil )

    Result := _Page_Table ;
end ; // TMMC.Page_Table


function TMMC.Free_Pages : TAT64 ;

begin
    if( _Free_Pages = nil ) then
    begin
        Page_Table ; // Make sure the page table exists first
        _Free_Pages := TAT64.Create( RAM_Page_Size ) ;
        _Free_Pages.Set_Size( ( Highest_Address + 1 ) div RAM_Page_Size div 8 ) ;
    end ;
    Result := _Free_Pages ;
end ;


function TMMC.Kernel_Pages : TUOS_Page_List ;

begin
    if( _Kernel_Pages = nil ) then
    begin
        _Kernel_Pages := Create_Process_Page_List ;
    end ;
    Result := _Kernel_Pages ;
end ;

In all three cases, we create the object instance if it doesn't exist and return it.

The main purpose of the MMC at this point is to allocate and deallocate pages of memory. Here is the Allocate method:

function TMMC.Allocate( PID : cardinal ; var Size : int64 ; Typ : char ;
    Index : integer ; Flags : integer ) : int64 ;

var APages : TList ;
    I : integer ;
    Info : TMemory_Info ;
    Pages : TUOS_Page_List ;
    T : integer ;

begin // TMMC.Allocate
    // Setup...
    Result := 0 ;
    if( Size = 0 ) then
    begin
        Free_Pages ; // Force initialization
        exit ;
    end ;


Requesting a chunk of memory that is zero bytes long would logically do nothing. In such a case, we return 0, indicating nothing was allocated, and also call Free_Pages to force an initialization (nothing will happen if we already initialized the free pages).

    if( _Startup ) then
    begin
        if( Size < RAM_Page_Size + Space_For_Page_Tables ) then
        begin
            Size := RAM_Page_Size + Space_For_Page_Tables ;
        end ;
        Info := HAL.Memory( 0 ) ;
        I := 1 ;
        while( Info.Memory_Type <> MT_Invalid ) do
        begin
            if( Info.Memory_Type = MT_RAM ) then
            begin
                 if( not HAL.Reserved_RAM( Info.Low, Info.High ) ) then // Not reserved by HAL
                 begin
                     if( ( Reserved > Info.Low ) and ( Reserved < Info.High ) ) then
                     begin
                         Info.Low := Reserved + 1 ;
                     end ;
                     if( Reserved < Info.Low ) then
                     begin
                         if( Info.High - Info.Low + 1 >= Size ) then // Room in this segment
                         begin
                             Result := Info.Low ;
                             break ;
                         end ;
                     end ; // if( Reserved < Info.Low )
                 end ; // if( not HAL.Reserved_RAM( Info.Low, Info.High ) )
            end ; // if( Info.Memory_Type = MT_RAM )

            Info := HAL.Memory( I ) ;
            inc( I ) ;
        end ; // while( Info.Memory_Type <> MT_Invalid )
        Reserved := Result + Size - 1 ;
        exit ;
    end ; // if( _Startup )

While we are in Startup mode (_Startup = true), we do not operate in the way we normally do. This flag is only reset after a heap manager is available for the executive's use. Prior to that, we still may have requests for memory, but we treat them differently than requests that come after we have a heap. What we do is iterate through the HAL's memory list, looking for available memory. At this point, the Kernel may not have yet informed the MMC about itself, so there are no page lists, etc. Because the memory allocated prior to having a heap is not managed, it is just allocated as if the HAL reserved it for some purpose. Our instance variable Reserved indicates the highest reserved address under this scenario. Once we find free space for the requested memory, we return the first available address and update the Reserved value to include the memory we just "allocated".
Note that we reserve memory on byte boundaries instead of page boundaries. This is because any pages that are mapped by these reservations are locked and are never used for other purposes. So we can allocate arbitrary chunks of memory even if they do not correspond to page boundaries. For this reason, the calling code should request a minimal amount of memory during MMC startup, as that memory will be removed from any other uses in the future.
In practice, as we will see, this reserved memory will contain primarily the control structures for the executive heap (HMC) and our own MMC page control structures. As it turns out, this is exactly what we want - we don't want the memory control structures to be swapped out or otherwise unavailable. Rather, they must always be available for the MMC and HMC. Otherwise the executive will likely enter a deadlock situation. To reduce the amount of memory that is reserved, we defer as much processing as possible in both the MMC and HMC (and, thus, the memory allocations that accompany such processing) so that the only reserved memory is that which is essential to their basic operation.
To summarize, some small portion of memory is reserved for memory control structures. Once allocated, this memory is reserved and not used for anything else until UOS is rebooted. To leave as much memory as possible available for programs, we defer as much processing as possible for as long as possible. Until the HMC is set up, the MMC is in setup mode which allocates the reserved memory and exits at this point. All code beyond this point executes only after we are out of startup mode.
As a sidenote, you might wonder what memory the HAL, itself, reserves. Typically, this will include hardware control structures unique to the platform. For instance, most CPUs reserve some amount of the lowest memory addresses for interrupt routine addresses. The HAL deals with such platform-specific issues so that UOS doesn't have to. All UOS knows, or cares about, is that the HAL says that memory is reserved.
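To make the high-water-mark idea concrete, here is a stripped-down sketch. The names and addresses are invented, and it assumes a single block of usable RAM, whereas the real code walks the HAL's memory list and honors HAL reservations:

program Watermark_Sketch ;

var Reserved : int64 = $1FFFF ;  // Highest reserved address so far (e.g. HAL/Kernel image)
    RAM_Top : int64 = $7FFFFF ;  // Highest usable RAM address in this example

function Startup_Allocate( Size : int64 ) : int64 ;

begin
    if( Reserved + Size > RAM_Top ) then
    begin
        Result := 0 ; // No room
        exit ;
    end ;
    Result := Reserved + 1 ;      // Hand out the first unreserved byte...
    Reserved := Reserved + Size ; // ...and move the watermark up
end ;

begin
    writeln( 'First request of 4096 bytes starts at ', Startup_Allocate( 4096 ) ) ;
    writeln( 'Second request of 8192 bytes starts at ', Startup_Allocate( 8192 ) ) ;
    writeln( 'Watermark is now ', Reserved ) ;
end.

Continuing with the Allocate method: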
    // Get appropriate virtual page list...
    if( PID = 0 ) then
    begin
        Pages := Kernel_Pages ;
    end else
    begin
        Pages := USC.Page_List( PID ) ;
        if( Pages = nil ) then // Not a valid PID
        begin
            exit ;
        end ;
    end ;

The first step, when not in startup mode, is to get a pointer to the appropriate page list. In the case of the executive asking for memory, the process ID (PID) will be 0. In that case, we use the Kernel_Pages instance data. Otherwise, we ask the USC (User and Security Component) to give us a pointer to the current process' page list. We will discuss the USC in a later article, but its purpose is to manage users and processes. During startup, we will only be using the executive's page list.

    // Determine allocation index...
    T := Resolve_Allocation_Type( Typ, Index, Allocation_Types ) ;
    if( T < 0 ) then
    begin
        exit ;
    end ;

    if( Flags = 0 ) then // No access flags specified
    begin
        if( Typ = 'I' ) then
        begin
            Flags := Page_Flag_Allow_Execute ;
        end else
        begin
            Flags := Page_Flag_Allow_Read or Page_Flag_Allow_Write ;
        end ;
    end ;

Once we have the appropriate page list, we have to determine the correct allocation type. On most modern computers, memory management associates specific characteristics with parts of memory. For instance, some memory may be marked as executable (if not, a program cannot be run from that memory), or for use solely as data, or for use in a stack. On a computer without memory management, all memory is available for any use - data, stack, executable, etc. UOS uses "I" (Instruction) for executable access, "D" for data, and "S" for stack. This also happens to match the types of allocations available on the venerable PDP-11 computers as well as modern Intel x86 CPUs. The type of memory access is passed in the Typ parameter of the routine. The HAL is responsible for indicating what kinds of memory allocation types are available on the hardware platform. The Resolve_Allocation_Type function takes the requested allocation type and resolves it to one that is supported by the HAL (this may be the type requested, or something compatible with it). If page flags are passed to the Allocate function, we use those. Otherwise we set them based on the actual allocation type.

    // Make sure there is room....
    Size := ( Size + RAM_Page_Size - 1 ) and ( not ( RAM_Page_Size - 1 ) ) ;
    APages := TList( Pages[ T ] ) ;
    if( Pages_Max_Allocation( APages ) + Size >= Maximum_Allocation ) then
    begin
        exit ; // No room left in virtual address space
    end ;

The next step is to check to make sure that the requested amount of memory, plus the memory already allocated to the process, doesn't exceed the maximum memory that can be allocated to a program on this hardware platform. For instance, on most 8-bit computers, the maximum amount of memory that can be accessed by a program is 64K bytes. An attempt to exceed this hard limit results in a failure and we exit with a return value of 0.

    Result := Get_Contiguous_Pages( APages ) ;
end ; // TMMC.Allocate

Finally, we try to allocate the memory, adding it to the page list we selected above. As mentioned earlier, we are addressing only a simple memory model, which requires that all allocations be in contiguous pages if more than one page is needed. In the future, we will talk about segmentation and on-demand paging. For now, we call the local function Get_Contiguous_Pages and return its result.

Here is the local Get_Contiguous_Pages function:

function Get_Contiguous_Pages( APages : TList ) : int64 ;

var I : integer ;
    Segment, This_Segment : TSegment ;
    Starting_Address : int64 ;

begin
    // Setup...
    Result := 0 ;
    Segment := nil ;

    // Find matching existing segment...
    for I := 0 to APages.Count - 1 do
    begin
        This_Segment := TSegment( APages[ I ] ) ;
        if( This_Segment.Typ = Typ ) then
        begin
            Segment := This_Segment ;
            break ;
        end ;
    end ;

    if( Segment = nil ) then // First allocation for this allocation type
    begin
        Result := Free_Pages.Allocate( Size ) ;
        if( Result <> 0 ) then // Success
        begin
            Segment := TSegment.Create ;
            Segment.Physical := Result ;
            Segment.Length := Size ;
            Segment.Flags := Flags ;
            Segment.Typ := Typ ;
            Segment.Index := Index ;
            APages.Add( Segment ) ;
        end ;
        exit ;
    end ;

    // Allocate immediately after current data...
    Starting_Address := Segment.Physical + Segment.Length ;
    if( Free_Pages.Allocate_At( Starting_Address, Size ) ) then // Success
    begin
        Result := Segment.Physical + Segment.Length ;
        Segment.Length := Segment.Length + Size ;
    end ;
end ; // Get_Contiguous_Pages

The code may look more complex than is necessary for the simple memory model we've discussed. That is because UOS has to be able to support more complex memory schemes in the future. Rather than write simple code now and completely rewrite it in the future, we've chosen to write more complicated code that will work with both simple and complex memory models. As we mentioned before, there can be multiple memory allocation types. On systems that support it, each type of memory can be supported simultaneously. In the MMC we refer to these as segments, although it's not the same thing as segmented memory (which we will discuss in the future). A page list consists of a list of lists. Each allocation type has an index (the first one is 0), and the corresponding index in the page list is a list of segments for that allocation type. In our simple memory scheme, there is only one segment for each allocation type, so the segment list for each allocation type has only one item in it - an instance of type TSegment. If the memory type isn't found in the page list, we create a new segment and add it - assuming, that is, that we can allocate that much contiguous memory.
If a matching segment is found, we try to extend it. We use Allocate_At to make sure the allocated memory is contiguous with the existing segment, since this is one of the requirements of the simplistic memory model. If we cannot extend the segment, we return 0 to indicate a failure.

The TSegment structure looks like this:

type TSegment = class
                    public // API...
                        Physical : int64 ; // Starting Physical address
                        Length : int64 ; // Length of segment
                        Flags : integer ; // See Page flags
                        Typ : char ; // A = Any/all, S = Stack, D = Data, I = Instruction
                        Index : integer ; // Type index
                end ;
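As a quick illustration of that list-of-lists layout, here is a small sketch. TToy_Segment is a cut-down stand-in for the TSegment above, and the rest of the names are invented for the example:

program Page_List_Sketch ;

uses Classes ; // For TList

type TToy_Segment = class // Cut-down stand-in for TSegment
         Physical : int64 ;
         Length : int64 ;
         Typ : char ;
     end ;

var Pages : TList ;         // Outer list: one entry per allocation type
    Data_Segments : TList ; // Inner list of segments for the data type
    Segment : TToy_Segment ;

begin
    Pages := TList.Create ;
    Pages.Add( TList.Create ) ;  // Index 0: instruction ('I') segments, empty here
    Data_Segments := TList.Create ;
    Pages.Add( Data_Segments ) ; // Index 1: data ('D') segments

    // In the simple memory model there is at most one segment per type...
    Segment := TToy_Segment.Create ;
    Segment.Physical := $10000 ;
    Segment.Length := $4000 ;
    Segment.Typ := 'D' ;
    Data_Segments.Add( Segment ) ;

    writeln( 'Data segment starts at ', TToy_Segment( Data_Segments[ 0 ] ).Physical,
             ' and is ', TToy_Segment( Data_Segments[ 0 ] ).Length, ' bytes long' ) ;
end.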

Here is the Resolve_Allocation_Type function:

function Resolve_Allocation_Type( Typ : char ; Index : integer ;
    const Allocation_Types : string ) : integer ;

var I : integer ;

begin
    Result := pos( Typ, Allocation_Types ) ;
    if( ( Result = 0 ) and ( Typ = 'S' ) ) then // Not found, stack
    begin
        Result := pos( 'D', Allocation_Types ) ; // Default to data
    end ;
    if( Result = 0 ) then // Not found
    begin
        Result := pos( 'A', Allocation_Types ) ;
    end ;
    if( Result = 0 ) then // Not found
    begin
        Result := -1 ;
        exit ;
    end ;
    I := Index ;
    while( I > 0 ) do
    begin
        inc( Result ) ;
        if( Result > length( Allocation_Types ) ) then // Not found
        begin
            Result := pos( 'A', Allocation_Types ) ; // Use first any/all type
            if( Result = 0 ) then // Not found
            begin
                Result := -1 ;
                exit ;
            end ;
            break ;
        end ;
        while(
               ( Allocation_Types[ Result ] <> Typ )
               and
               ( Allocation_Types[ Result ] <> 'A' )
             ) do
        begin
            inc( Result ) ;
            if( Result > length( Allocation_Types ) ) then // Index not found
            begin
                Result := pos( 'A', Allocation_Types ) ; // Use first any/all type
                if( Result = 0 ) then // Not found
                begin
                    Result := -1 ;
                    exit ;
                end ;
                break ;
            end ;
        end ; // while
        dec( I ) ;
    end ; // while( Index > 0 )
    dec( Result ) ;
end ;

The purpose of this routine is to either verify that the requested allocation type is supported by the HAL, or to fall back to a compatible type that is supported. If stack (S) is requested but not supported, we fall back to data (D). If data is requested, or a request for stack fell back to data, and the data allocation type isn't supported, we fall back to all (A). The Index parameter is always 0 at this point, but the MMC supports the potential of multiple instances of memory types. Perhaps some CPU in the future will support multiple stacks in hardware. Or consider the current 64-bit Intel architecture, which has multiple data allocation types. Index, in conjunction with an allocation type, allows allocation of any one of the allocation types that have multiple instances. For instance, a type of "D" and an index of 1 would indicate the second data allocation type. The function iterates through the types supported by the HAL (Allocation_Types) and returns a zero-based index that corresponds to the matching type/index. For example, if the HAL reported Allocation_Types of 'IDD', a request for type 'D' with index 1 would skip the first data type and return 2 (the second data type), while a request for 'S' would fall back to the first 'D' and return 1. If no match is found, even after fallback, -1 is returned.

Now that we've addressed allocation of memory, let's turn our attention to the release of allocated memory. This is done via the Release_Allocation method.

procedure TMMC.Release_Allocation( PID : cardinal ; Typ : char ; Index : integer ) ;

var APages : TList ;
    Loop : integer ;
    Pages : TUOS_Page_List ;
    Segment : TSegment ;

begin // TMMC.Release_Allocation
    // Get appropriate virtual page list...
    if( PID = 0 ) then
    begin
        Pages := Kernel_Pages ;
    end else
    begin
        Pages := USC.Page_List( PID ) ;
        if( Pages = nil ) then // Not a valid PID
        begin
            exit ;
        end ;
    end ;
    if( Typ = NUL ) then
    begin
        _Release( Pages ) ; // Release all memory
        exit ; 
    end ;

    // Determine which allocation set...
    for Loop := 0 to Pages.Count - 1 do
    begin
        APages := TList( Pages[ Loop ] ) ;
        if( APages.Count > 0 ) then
        begin
            Segment := TSegment( APages[ 0 ] ) ;
            if( ( Segment.Typ = Typ ) and ( Segment.Index = Index ) ) then
            begin
                Release_Segment( APages ) ; // Release the allocation(s)
                exit ;
            end ;
        end ;
    end ;
end ; // TMMC.Release_Allocation

In this function, we grab the appropriate page list for our PID. If the type that was passed in was a NUL (ASCII 0), we call the local _Release function to clear all memory allocated to the PID. Otherwise, we loop through the allocation-type lists in the page list, looking for one whose segments match the requested type and index. If found, we call the local Release_Segment function.

Here is the local _Release function:

procedure _Release( APages : TUOS_Page_List ) ;

var Loop : integer ;

begin
    for Loop := 0 to APages.Count - 1  do
    begin
        Release_Segment( TList( APages[ Loop ] ) ) ;
    end ;
end ;

This function simply iterates through the segments in the page list and calls Release_Segment for each one.

Here is the local Release_Segment function:

procedure Release_Segment( APages : TList ) ;

var Count, Loop : integer ;
    Segment : TSegment ;
    Page : int64 ;

begin
    for Loop := 0 to APages.Count - 1 do
    begin
        Segment := TSegment( APages[ Loop ] ) ;
        Count := Segment.Length div RAM_Page_Size ;
        Page := Segment.Physical div RAM_Page_Size ;
        while( Count > 0 ) do
        begin
            Page_Table[ Page ] := // Clear all but read-only flag
                Page_Table[ Page ] and Page_Flag_Read_Only ;
            inc( Page ) ;
            dec( Count ) ;
        end ;
        Free_Pages.Deallocate( Segment.Physical, Segment.Length ) ;
        HAL.Release_Segment( Segment.Index, Segment.Physical, Segment.Length, Segment.Typ ) ;
        Segment.Free ;
        APages[ Loop ] := nil ;
    end ;
    APages.Clear ;
end ; // Release_Segment

This function releases the memory allocated to the segments in the list that is passed in. For each segment, we calculate the starting page associated with the segment and the number of pages in the segment. Then we iterate through the page table, clearing everything except the read-only flag (if that flag is set). We then deallocate the page range in the Free_Pages allocation table and tell the HAL that we are done with this segment (what the HAL does with this information is up to the HAL - it may do nothing at all). Next we free the segment instance and clear its item in the list, and finally we clear the list itself.

Here is the code for the End_Startup method:

procedure TMMC.End_Startup ;

var I : int64 ;

begin
    Page_Table ;
    _Startup := False ;
    I := 0 ;
    Allocate( 0, I, 'D', 0, 0 ) ; // Force MMC to be set up
end ;

This method is called by the Kernel when the HMC is set up. This causes the MMC to set up the page table, exit startup mode, and then force the initialization by calling the Allocate method with a zero length, as we discussed earlier. Note that the order of operations here is quite intentional - the page table must be set up before we clear the _Startup flag. This is so that the page table is allocated from reserved memory, for the reasons we discussed earlier. If we cleared the flag before setting up the page table, the component would probably die a horrible death.

Here is the Set_Aside method:

procedure TMMC.Set_Aside( Buffer : PChar ; Buffer_Size : longint ) ;

begin
    Reserved := integer( Buffer ) + Buffer_Size - 1 ;
end ;

The code merely extends the Reserved region to cover the passed buffer. This can be safely called before the MMC is set up. In fact, it is only useful at that point.

Finally, let's look at the MemAvailable method:

function TMMC.MemAvailable( PID : cardinal ; Typ : Char ;
    Index : integer ) : int64 ;

var APages : TList ;
    Pages : TUOS_Page_List ;
    Segment : TSegment ;
    T : integer ;

begin // TMMC.MemAvailable
    // Setup...
    Result := 0 ;

    // Get appropriate virtual page list...
    if( PID = 0 ) then
    begin
        Pages := Kernel_Pages ;
    end else
    begin
        Pages := USC.Page_List( PID ) ;
        if( Pages = nil ) then // Not a valid PID
        begin
            exit ;
        end ;
    end ;

    // Determine allocation index...
    T := Resolve_Allocation_Type( Typ, Index, Allocation_Types ) ;
    if( T < 0 ) then
    begin
        exit ;
    end ;

    APages := TList( Pages[ T ] ) ;

    // Map the virtual to the physical...
    if( APages.Count = 0 ) then
    begin
        Result := Free_Pages.MaxSpace ;
    end else
    begin
        Segment := TSegment( APages[ APages.Count - 1 ] ) ;
        Result := Free_Pages.Space_At( Segment.Physical + Segment.Length ) ;
    end ;
    Result := Result * RAM_Page_Size ;
end ; // TMMC.MemAvailable

This method returns the number of bytes available to be allocated for the passed allocation type. First, we get the page list for the passed PID, then we resolve the allocation type - the same steps we took in the Allocate method. If nothing has been allocated for the PID, then the amount of memory available is how much free space is in Free_Pages. Otherwise, we see how much contiguous space is available immediately after the end of the existing segment - remember that segments must be contiguous. For example, if the last data segment ends just before page 100, and pages 100 through 109 are free but page 110 is allocated, the method reports 10 pages' worth of bytes, even if more free memory exists elsewhere.

This diagram illustrates the layout of a process page list on a computer with the following allocation types defined: S, D, and I:

That wraps up the basic functionality of the MMC for now. We will come back to it in the future. In the next article, we will discuss the HMC.