Copyright © 2016 by Alan Conroy. This article may be copied in whole or in part as long as this copyright is included.


1 Introduction
2 Ground Rules

Building a File System
3 File Systems
4 File Content Data Structure
5 Allocation Cluster Manager
6 Exceptions and Emancipation
7 Base Classes, Testing, and More
8 File Meta Data
9 Native File Class
10 Our File System
11 Allocation Table
12 File System Support Code
13 Initializing the File System
14 Contiguous Files
15 Rebuilding the File System
16 Native File System Support Methods
17 Lookups, Wildcards, and Unicode, Oh My
18 Finishing the File System Class

The Init Program
19 Hardware Abstraction and UOS Architecture
20 Init Command Mode
21 Using Our File System
22 Hardware and Device Lists
23 Fun with Stores: Partitions
24 Fun with Stores: RAID
25 Fun with Stores: RAM Disks
26 Init wrap-up

The Executive
27 Overview of The Executive
28 Starting the Kernel
29 The Kernel
30 Making a Store Bootable
31 The MMC
32 The HMC
33 Loading the components
34 Using the File Processor
35 Symbols and the SSC
36 The File Processor and Device Management
37 The File Processor and File System Management
38 Finishing Executive Startup

Users and Security
39 Introduction to Users and Security
40 More Fun With Stores: File Heaps
41 File Heaps, part 2
42 SysUAF
43 TUser
44 SysUAF API

Terminal I/O
45 Shells and UCL
46 UOS API, the Application Side
47 UOS API, the Executive Side
48 I/O Devices
49 Streams
50 Terminal Output Filters
51 The TTerminal Class
52 Handles
53 Putting it All Together
54 Getting Terminal Input
55 QIO
56 Cooking Terminal Input
57 Putting it all together, part 2
58 Quotas and I/O

UCL
59 UCL Basics
60 Symbol Substitution
61 Command execution
62 Command execution, part 2
63 Command Abbreviation
64 ASTs
65 Expressions, Part 1
66 Expressions, Part 2: Support code
67 Expressions, part 3: Parsing
68 SYS_GETJPIW and SYS_TRNLNM
69 Expressions, part 4: Evaluation

UCL Lexical Functions
70 PROCESS_SCAN
71 PROCESS_SCAN, Part 2
72 TProcess updates
73 Unicode revisited
74 Lexical functions: F$CONTEXT
75 Lexical functions: F$PID
76 Lexical Functions: F$CUNITS
77 Lexical Functions: F$CVSI and F$CVUI
78 UOS Date and Time Formatting
79 Lexical Functions: F$CVTIME
80 LIB_CVTIME
81 Date/Time Contexts
82 SYS_GETTIM, LIB_Get_Timestamp, SYS_ASCTIM, and LIB_SYS_ASCTIM
83 Lexical Functions: F$DELTA_TIME
84 Lexical functions: F$DEVICE
85 SYS_DEVICE_SCAN
86 Lexical functions: F$DIRECTORY
87 Lexical functions: F$EDIT and F$ELEMENT
88 Lexical functions: F$ENVIRONMENT
89 SYS_GETUAI
90 Lexical functions: F$EXTRACT and F$IDENTIFIER
91 LIB_FAO and LIB_FAOL
92 LIB_FAO and LIB_FAOL, part 2
93 Lexical functions: F$FAO
94 File Processing Structures
95 Lexical functions: F$FILE_ATTRIBUTES
96 SYS_DISPLAY
97 Lexical functions: F$GETDVI
98 Parse_GetDVI
99 GetDVI
100 GetDVI, part 2
101 GetDVI, part 3
102 Lexical functions: F$GETJPI
103 GETJPI
104 Lexical functions: F$GETSYI
105 GETSYI
106 Lexical functions: F$INTEGER, F$LENGTH, F$LOCATE, and F$MATCH_WILD
107 Lexical function: F$PARSE
108 FILESCAN
109 SYS_PARSE
110 Lexical Functions: F$MODE, F$PRIVILEGE, and F$PROCESS
111 File Lookup Service
112 Lexical Functions: F$SEARCH
113 SYS_SEARCH
114 F$SETPRV and SYS_SETPRV
115 Lexical Functions: F$STRING, F$TIME, and F$TYPE
116 More on symbols
117 Lexical Functions: F$TRNLNM
118 SYS_TRNLNM, Part 2
119 Lexical functions: F$UNIQUE, F$USER, and F$VERIFY
120 Lexical functions: F$MESSAGE
121 TUOS_File_Wrapper
122 OPEN, CLOSE, and READ system services

UCL Commands
123 WRITE
124 Symbol assignment
125 The @ command
126 @ and EXIT
127 CRELNT system service
128 DELLNT system service
129 IF...THEN...ELSE
130 Comments, labels, and GOTO
131 GOSUB and RETURN
132 CALL, SUBROUTINE, and ENDSUBROUTINE
133 ON, SET {NO}ON, and error handling
134 INQUIRE
135 SYS_WRITE Service
136 OPEN
137 CLOSE
138 DELLNM system service
139 READ
140 Command Recall
141 RECALL
142 RUN
143 LIB_RUN
144 The Data Stream Interface
145 Preparing for execution
146 EOJ and LOGOUT
147 SYS_DELPROC and LIB_GET_FOREIGN

CUSPs and utilities
148 The I/O Queue
149 Timers
150 Logging in, part one
151 Logging in, part 2
152 System configuration
153 SET NODE utility
154 UUI
155 SETTERM utility
156 SETTERM utility, part 2
157 SETTERM utility, part 3
158 AUTHORIZE utility
159 AUTHORIZE utility, UI
160 AUTHORIZE utility, Access Restrictions
161 AUTHORIZE utility, Part 4
162 AUTHORIZE utility, Reporting
163 AUTHORIZE utility, Part 6
164 Authentication
165 Hashlib
166 Authenticate, Part 7
167 Logging in, part 3
168 DAY_OF_WEEK, CVT_FROM_INTERNAL_TIME, and SPAWN
169 DAY_OF_WEEK and CVT_FROM_INTERNAL_TIME
170 LIB_SPAWN
171 CREPRC
172 CREPRC, Part 2
173 COPY
174 COPY, part 2
175 COPY, part 3
176 COPY, part 4
177 LIB_Get_Default_File_Protection and LIB_Substitute_Wildcards
178 CREATESTREAM, STREAMNAME, and Set_Contiguous
179 Help Files
180 LBR Services
181 LBR Services, Part 2
182 LIBRARY utility
183 LIBRARY utility, Part 2
184 FS Services
185 FS Services, Part 2
186 Implementing Help
187 HELP
188 HELP, Part 2
189 DMG_Get_Key and LIB_Put_Formatted_Output
190 LIBRARY utility, Part 3
191 Shutting Down UOS
192 SHUTDOWN
193 WAIT
194 SETIMR
195 WAITFR and Scheduling
196 REPLY, OPCOM, and Mailboxes
197 REPLY utility
198 Mailboxes
199 BRKTHRU
200 OPCOM
201 Mailbox Services
202 Mailboxes, Part 2
203 DEFINE
204 CRELNM
205 DISABLE
206 STOP
207 OPCCRASH and SHUTDOWN
208 APPEND

Glossary/Index



Native File Class

Before we begin, it is important to note that the class we are writing is not going to be the class that UOS applications use to access files. At least, not directly. There are several reasons for this that we'll eventually get to, but for now it is important to understand that this class will simply provide an interface to our file header and, through that, to our various data streams.

Our native file class performs two basic functions: meta data stream management (create/delete/read/write), and converting arbitrary data offsets to data clusters/offsets. In essence, this class allows us to access the data by offset and length, without having to concern ourselves with a series of discontiguous clusters of data.
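To make that translation concrete, here is a minimal sketch of the arithmetic involved (the procedure name is illustrative only - it isn't part of the UOS sources): a linear stream position breaks down into a cluster index and an offset within that cluster, and the cluster index is then resolved to an actual store address.

```pascal
// Sketch: split a linear stream position into a cluster index and an
// offset within that cluster. The class resolves the cluster index to a
// store address via the header pointers or the allocation chain.
procedure Position_To_Cluster( Position, Clustersize : int64 ;
    var Cluster_Index, Offset : int64 ) ;

begin
    Cluster_Index := Position div Clustersize ;
    Offset := Position - Cluster_Index * Clustersize ;
end ;
```

For example, with 1024-byte clusters, position 3000 falls in cluster 2 at offset 952.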

Tradeoffs
We discussed the file header in our last article, but there are some changes we should make. As discussed, we left room for 6 data stream headers. UOS has 12 different pre-defined file attributes that will be stored as meta data. Few files will have all of them, and many files will have none. But they will be used enough that it is important to have room for some in the file header to avoid additional I/O operations just to get to the headers. It would be nice, however, if we could also include some file cluster information in the header to reduce the possibility of turns. How do we include those in the file header? One way would be to make the file header longer, and thus able to hold more data. The tradeoff is more storage overhead for each file. Adding a few cluster pointers might not be a big deal, but unless we make the file header size a multiple of the store's cluster size, we run the risk of having to do multiple I/O operations to read in a single header, since the header may span clusters. To avoid that, we'd have to increase the header size from 256 bytes to a minimum of 512, so that headers don't span clusters on most disk drives. Granted, if the file header clusters were all contiguous, we could still read a spanning header in a single operation. However, as we will see, this is not an assumption we can make. Further, one of the goals of UOS is to make an operating system that will run on as many hardware platforms as possible. This means it is important to minimize the footprint of UOS, especially for embedded systems. Note: "footprint" means the amount of space required on disk and in memory to run UOS.

So, we will constrain ourselves to 256-byte headers, which means we have to somehow find space in our existing header. Everything in the header is required, so we can't drop anything. But perhaps we can reduce the size of some of the items. First, we will combine the Extended Flags into the Flags field. With 64 flags possible, we should be able to manage. That saves us 4 bytes. Next, the Creator and Owner really don't need to be more than 32 bits each - it is unlikely that a single UOS system will have anywhere near a combined 4 billion users and groups, much less 4 billion times 4 billion. That saves us 8 bytes. Then we will reduce the number of stream headers in the file header by 1, saving us 16 bytes. Then we reduce the file cluster size and record size to 32 bits, saving us another 8 bytes. Granted, that means that we can't have a cluster size or record size larger than 4 gigabytes, but I don't think that is an unreasonable limitation. Finally, the likelihood of more than 4 billion distinct file names on a given store is remote, so we will reduce the Name to 4 bytes, saving us another 4 bytes. Since the value for the name is indirect, our file system can make use of a 32-bit value just as easily as a 64-bit value. All told, that reclaims 40 bytes, which gives us room for 5 cluster pointers. No turns will be required if the file's data fits into 5 clusters, which also means that small files will not need additional space for allocation chains. Further, we can optimize a file by simply dividing its size by 5, rounding up to the next store cluster size, and setting the file's cluster size to that value, which will ensure that all file allocation pointers are kept within the header. In fact, this is a strategy that we can use in our on-line disk defragmenter (which we will discuss at some future date).
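That cluster-size optimization can be sketched as follows (this helper is illustrative only - the article doesn't define it):

```pascal
// Sketch: pick a file cluster size such that the file's data fits in the
// 5 cluster pointers in the header. We divide the size by 5 (rounding up)
// and then round up to a multiple of the store's cluster size.
function Optimal_Clustersize( Size, Store_Clustersize : int64 ) : int64 ;

begin
    Result := ( Size + 4 ) div 5 ; // Size divided by 5, rounded up
    Result := ( ( Result + Store_Clustersize - 1 ) div Store_Clustersize )
              * Store_Clustersize ; // Round up to next store cluster size
    if( Result < Store_Clustersize ) then
    begin
        Result := Store_Clustersize ;
    end ;
end ;
```

For example, a 10,000-byte file on a store with 512-byte clusters gets a file cluster size of 2,048, and 5 such clusters (10,240 bytes) cover the whole file with no allocation chain at all.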

This is how our file header now looks:

const Max_Header_Data_Stream = 4 ;
      Max_Header_Cluster = 4 ;

type TData_Stream = packed record
                        Name : int64 ;
                        Pointer : TStore_Address64 ;
                    end ;

     TUOS_File_Header = packed record
                                // Name and sizes...
                                Name : longint ;
                                Size : int64 ; // Size on disk
                                EOF : int64 ; // Logical size
                                Uncompressed_Size : int64 ;
                                Clustersize : cardinal ;
                                Record_Size : cardinal ;

                                // Dates...
                                Creation : int64 ; // Creation date
                                Last_Modified : int64 ; // Last write date
                                Last_Backup : int64 ;
                                Last_Access : int64 ; // Last read/open date
                                Expiration : int64 ;

                                // Security...
                                Creator : cardinal ;
                                Owner : cardinal ;
                                ACL : int64 ;

                                // Misc...
                                Flags : int64 ;
                                Version_Limit : longint ;
                                Extension : TStore_Address64 ;

                                // Data streams...
                                Streams : array[ 0..Max_Header_Data_Stream ] of TData_Stream ;
                                Data_Stream : TStore_Address64 ; // Overflow pointer
                                Clusters : array[ 0..Max_Header_Cluster ] of TStore_Address64 ;

                                // File-system data...
                                Parent : int64 ;
                                File_System : int64 ; // Reserved for file system
                        end ;

Native File Class
Now we turn our attention to the class that implements our native file data and meta data interface. First we will define a descendant of our Allocation Cluster Manager class, whose sole purpose is to add a name and index to the class:

type TNative_File_ACM = class( TCOM_Allocation_Cluster_Manager64 )
                            public // Instance data...
                                Index : longint ; // Index of data stream name/pointer
                                Name : longint ;
                        end ;

The reason for these two items will become apparent shortly. Here is the first part of our file class definition:
type TUOS_Native_File = class( TCommon_COM_Interface )
                             public // Constructors and destructors...
                                 constructor Create ;
                                 destructor Destroy ; override ;

                             private // Instance data...
                                 _Buffer : PChar ; // Data stream buffer
                                 _Buffer_Size : longint ; // Size of allocated buffer
                                 _Store : TCOM_Managed_Store64 ;

                                 _Data_Stream_ACM : TNative_File_ACM ;
                                 _Max_Stream : longint ;
                                 _Streams : TList ; // List of TNative_File_ACMs for accessed streams

                             public // API...
                                 Dirty : boolean ;
                                 Header : TUOS_File_Header ;

_Buffer is a dynamically-allocated buffer used to read and write allocation clusters and stream data. The size of the buffer is contained in _Buffer_Size, which is calculated from the store cluster size and the file cluster size. _Store is a pointer to our store. The rest of the data has to do with our data streams. We are storing all streams as allocation cluster lists, which will be maintained by our old friend, the TCOM_Allocation_Cluster_Manager64 class. Since each stream is independent of the others, we will create an instance of our ACM class for each stream that we need to access. Note that we only create a given instance if the user requests access to that stream. _Streams is a dynamic list of the ACM instances for the streams that we have accessed. It has an entry for each stream index, up to the highest index accessed; unused (or unaccessed) streams have a nil in the corresponding index. _Max_Stream indicates the highest valid stream index for the file, which may be larger than the highest index used in _Streams. Note that this does not indicate the number of actually defined streams in the file - just the highest index reserved for stream header data.

Since we only have room in the header for 5 streams, what happens if we want more than 5 streams for a file? The Data_Stream item in the header points to a chain of clusters that will hold data stream headers. A chain of clusters? Wait. That sounds like what our ACM class handles. Indeed, we will use the ACM class to essentially manage another meta data stream. This stream will contain a series of data stream headers, but since it isn't a stream that the user has access to, we manage it separately. _Data_Stream_ACM is the instance of the ACM class that manages our stream of data stream headers.
Header is the file header for the file that our class instance is working with. Of note is the Dirty flag. We have a copy of the file header that we modify after certain operations. However, our class has no idea where this header comes from or how to update it on whatever store it exists on. That is the responsibility of the File System class that we will be writing next. The Dirty flag is our means of letting that class know when it needs to update the file header on the store.

Here's our constructor and destructor:

constructor TUOS_Native_File.Create ;

begin
    inherited Create ;

    _Max_Stream := -1 ;
end ;


destructor TUOS_Native_File.Destroy ;

var Loop : integer ;

begin
    if( _Streams <> nil ) then
    begin
        for Loop := 0 to _Streams.Count - 1 do
        begin
            if( _Streams[ Loop ] <> nil ) then
            begin
                TCOM_Allocation_Cluster_Manager64( _Streams[ Loop ] ).Detach ;
                _Streams[ Loop ] := nil ;
            end ;
        end ;
        _Streams.Free ;
    end ;
    if( _Data_Stream_ACM <> nil ) then
    begin
        _Data_Stream_ACM.Detach ;
    end ;
    Set_Store( nil ) ;

    inherited Destroy ;
end ;

We set _Max_Stream to -1 to indicate that we haven't calculated it yet, because we don't want to calculate it until it is needed for some operation. The destructor code should be obvious and/or familiar.
We also don't create the Data Stream ACM until it is requested, which won't be unless there are more than 5 data streams associated with the file. We use the following method to return the class instance, and create it if needed:
function TUOS_Native_File.Data_Stream_ACM : TNative_File_ACM ;

begin
    if( _Data_Stream_ACM = nil ) then
    begin
        _Data_Stream_ACM := TNative_File_ACM.Create ;
        _Data_Stream_ACM.Index := -1 ;
        _Data_Stream_ACM.Set_Store( _Store ) ;
        _Data_Stream_ACM.Set_Root( Header.Data_Stream ) ;
        _Data_Stream_ACM.Set_Clustersize( 128 ) ; // Minimum clustersize for streams data
    end ;
    Result := _Data_Stream_ACM ;
end ;

We set the minimum cluster size for the data stream ACM to 128, which allows up to 8 data stream headers per cluster.
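Each TData_Stream is 16 bytes (an int64 name plus a 64-bit pointer), so a 128-byte cluster holds 8 of them. For stream indexes beyond the header's slots, the stream header lives at a fixed offset in this overflow chain. The class computes this inline, but the mapping can be sketched as (an illustrative helper, not in the sources):

```pascal
// Sketch: byte offset of the header for stream Index within the overflow
// chain managed by Data_Stream_ACM. Streams 0..Max_Header_Data_Stream
// live in the file header itself and never reach this calculation.
function Overflow_Offset( Index : longint ) : int64 ;

begin
    Result := int64( Index - Max_Header_Data_Stream - 1 ) * sizeof( TData_Stream ) ;
end ;
```

So the first overflow stream (index 5) is at offset 0, index 6 is at offset 16, and so on - the same offsets that Find_Data_Stream and _Write use when reading and updating these headers.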
Next we have a method that will return (and create, if needed) the ACM class for a given data stream index.
type TData_Stream_Array = array[ 0..Max_Memory div sizeof( TData_Stream ) ] of TData_Stream ;
     PData_Stream_Array = ^TData_Stream_Array ;

// Load Buffer with buffer for specified stream, and create/return ACM for it

function TUOS_Native_File.Find_Data_Stream( Index : longint ) : TNative_File_ACM ;

var DS : TData_Stream ;
    I : TStore_Address64 ;
    Original : longint ;

begin
    // Setup...
    Result := nil ; // Assume failure
    if( ( Index < 0 ) or ( Index > Max_Stream ) ) then
    begin
        exit ;
    end ;
    if( ( Streams.Count > Index ) and ( Streams[ Index ] <> nil ) ) then
    begin
        Result := TNative_File_ACM( Streams[ Index ] ) ;
        exit ;
    end ;
    Original := Index ;
    I := _Store.Min_Storage ;
    if( I < sizeof( TData_Stream ) * 4 ) then
    begin
        I := sizeof( TData_Stream ) * 4 ;
    end ;

    // Extend stream list...
    while( Streams.Count <= Index ) do
    begin
        Streams.Add( nil ) ;
    end ;

    if( Index <= Max_Header_Data_Stream ) then // Data in the header
    begin
        if( ( Index > 0 ) and ( Header.Streams[ Index ].Name = 0 ) ) then
        begin
            exit ; // Not a valid stream
        end ;
        Result := TNative_File_ACM.Create ;
        Result.Name := Header.Streams[ Index ].Name ;
        Result.Index := Index ;
        Result.Set_Store( _Store ) ;
        Result.Set_Clustersize( I ) ;
        Result.Set_Root( Header.Streams[ Index ].Pointer ) ;
        Streams[ Index ] := Result ;
        exit ;
    end ;

    Index := Index - Max_Header_Data_Stream - 1 ;
    _Read( Data_Stream_ACM, Index * sizeof( TData_Stream ), sizeof( DS ), @DS ) ;
    Result := TNative_File_ACM.Create ;
    Result.Name := DS.Name ;
    Result.Index := Original ;
    Result.Set_Store( _Store ) ;
    Result.Set_Clustersize( I ) ;
    Result.Set_Root( DS.Pointer ) ;
    Streams[ Original ] := Result ;
end ; // TUOS_Native_File.Find_Data_Stream

Next we have a method that will look for a stream with a given name and return its index. The index given to a stream when it is created is as arbitrary as the order in which the streams were created, so the user will reference streams by name. But we need to reference them by index. Thus, this routine converts from a name to an index, returning -1 if the named stream does not exist.
function TUOS_Native_File.IndexOf( Name : int64 ) : longint ;

var B : array[ 0..31 ] of TData_Stream ; // 512 bytes (max)
    Loop : integer ;
    P : TStore_Address64 ;

begin
    // Setup...
    Result := -1 ; // Assume not found...
    Set_Last_Error( nil ) ;

    // Check header...
    for Loop := 0 to Max_Header_Data_Stream do
    begin
        if( Name = Header.Streams[ Loop ].Name ) then
        begin
            Result := Loop ;
            exit ;
        end ;
    end ;

    // Check data stream chain...
    P := Header.Data_Stream ;
    if( P = 0 ) then
    begin
        exit ; // No more streams in file
    end ;

    P := 0 ;
    Result := Max_Header_Data_Stream + 1 ;
    while( P < Data_Stream_ACM.Get_Size ) do
    begin
        Loop := _Read( Data_Stream_ACM, P, sizeof( B ), @B ) div sizeof( TData_Stream ) - 1 ;
        if( Last_Error <> nil ) then
        begin
            exit ;
        end ;
        while( Loop >= 0 ) do
        begin
            if( Name = B[ Loop ].Name ) then
            begin
                Result := Result + Loop ;
                exit ;
            end ;
            dec( Loop ) ;
        end ;
        P := P + sizeof( B ) ;
        Result := Result + sizeof( B ) div sizeof( TData_Stream ) ;
    end ;

    Result := -1 ; // Not found...
end ; // TUOS_Native_File.IndexOf

Since some of the cluster pointers for the data (stream 0) are in the header, we can't use the ACM class without a little bit of "magic". For any of the other streams (which don't have cluster pointers in the header), we can call the Offset_To_Pointer and Get_Size methods directly. But for the data stream, we have to handle the header pointers ourselves and only pass the call on to the ACM when we are dealing with clusters beyond the first 5. So, we have methods that handle this situation, and we call these methods instead of the ACM methods elsewhere in our class.

function TUOS_Native_File.Offset_To_Pointer( ACM : TNative_File_ACM ;
    Position : TStore_Address64 ) : TStore_Address64 ;

var H : TStore_Address64 ;

begin
    if( ACM = Streams[ 0 ] ) then // Data stream
    begin
        H := Position div ACM.Get_Clustersize ; // Cluster index
        if( H <= Max_Header_Cluster ) then
        begin
            Result := Header.Clusters[ H ] ;
            exit ;
        end ;
        Position := Position - ( Max_Header_Cluster + 1 ) * ACM.Get_Clustersize ;
    end ;
    Result := ACM.Offset_To_Pointer( Position ) ;
end ;


function TUOS_Native_File.Get_Size( ACM : TNative_File_ACM ) : TStore_Size64 ;

begin
    if( ACM = Streams[ 0 ] ) then // Data stream
    begin
        Result := Header.Size ;
    end else
    begin
        Result := ACM.Get_Size ;
    end ;
end ;


procedure TUOS_Native_File.Set_Size( ACM : TNative_File_ACM ;
    Size : TStore_Size64 ) ;

var C : cardinal ;
    H : TStore_Address64 ;
    I : longint ;

begin
    if( ACM = Streams[ 0 ] ) then // Data stream
    begin
        C := ACM.Get_Clustersize ;
        H := Size div C ; // Cluster index of last cluster of new size
        if( H > Max_Header_Cluster ) then
        begin
            H := Max_Header_Cluster ;
        end ;
        while( H >= 0 ) do
        begin
            if( Header.Clusters[ H ] = 0 ) then
            begin
                Header.Clusters[ H ] := _Store.Allocate( C ) ;
                Dirty := True ;
                dec( H ) ;
            end else
            begin
                break ;
            end ;
        end ;
        H := Size div C + 1 ; // Cluster index of cluster after end of file
        while( H <= Max_Header_Cluster ) do
        begin
            if( Header.Clusters[ H ] <> 0 ) then
            begin
                _Store.Deallocate( Header.Clusters[ H ], C ) ; // WARNING: Dangerous
                Header.Clusters[ H ] := 0 ;
                Dirty := True ;
                inc( H ) ;
            end else
            begin
                break ;
            end ;
        end ;
        if( ( Size = 0 ) and ( Header.Clusters[ 0 ] <> 0 ) ) then
        begin
            _Store.Deallocate( Header.Clusters[ 0 ], C ) ; // WARNING: Dangerous
            Header.Clusters[ 0 ] := 0 ;
        end ;
        Size := Size - ( Max_Header_Cluster + 1 ) * C ;
        if( Size < 0 ) then
        begin
            Size := 0 ;
        end ;
    end ;
    ACM.Set_Size( Size ) ;
    if( ACM = Streams[ 0 ] ) then // Data stream
    begin
        Header.Size := 0 ;
        for I := 0 to Max_Header_Cluster do
        begin
            if( Header.Clusters[ I ] <> 0 ) then
            begin
                Header.Size := Header.Size + C ;
            end ;
        end ;
        Header.Size := Header.Size + ACM.Get_Size ;
        Dirty := True ;
    end ;
end ; // TUOS_Native_File.Set_Size

Also note that we keep track of the data stream size in the header, so these methods ensure that the file header Size field is properly updated.
Next we have the workhorse routines of the class: the read/write methods. As mentioned at the start of the article, one of the purposes of this class is to translate arbitrary positions and lengths into the chains of allocation clusters managed by the ACM class instances. The read is simpler than the write because it doesn't modify the file data or meta data.
function TUOS_Native_File._Read( ACM : TNative_File_ACM ;
    Position : TStore_Address64 ; Length : TStore_Size64 ;
    Buff : pointer ) : TStore_Size64 ;

var Clustersize : TStore_Size64 ;
    Offset, P : TStore_Address64 ;
    This_Length : longint ;

begin
    // Setup...
    Result := 0 ;
    if( ( ACM = nil ) or ( Length < 1 ) ) then
    begin
        exit ;
    end ;
    if( Position >= Get_Size( ACM ) ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Read_Past_End, nil ) ) ;
        exit ;
    end ;
    if( Position + Length - 1 >= Get_Size( ACM ) ) then
    begin
        Length := Get_Size( ACM ) - Position ;
    end ;
    P := Offset_To_Pointer( ACM, Position ) ;
    if( P = 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Data_Structure_Error, nil ) ) ;
        exit ;
    end ;
    Result := Length ;

    // Convert from position to offset from start of cluster
    Clustersize := ACM.Get_Clustersize ;
    Offset := Position - ( Position div Clustersize ) * Clustersize ;
    while( Length > 0 ) do
    begin
        This_Length := Length ;
        if( This_Length > Clustersize - Offset ) then
        begin
            This_Length := Clustersize - Offset ;
        end ;
        _Store.Read( P, Clustersize, Buffer^ ) ;
        if( _Store.Last_Error <> nil ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_Read_Failure, _Store.Last_Error ) ) ;
            exit ;
        end ;
        move( Buffer[ Offset ], Buff^, This_Length ) ;
        Length := Length - This_Length ;
        if( Length = 0 ) then
        begin
            break ;
        end ;
        Offset := 0 ; // Start at beginning of next cluster
        Position := Position + This_Length ;
        P := Offset_To_Pointer( ACM, Position ) ;
        if( P = 0 ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_Data_Structure_Error, nil ) ) ;
            exit ;
        end ;
        Buff := PAnsichar( Buff ) + This_Length ; // Move forward in destination buffer
    end ; // while( Length > 0 )
end ; // TUOS_Native_File._Read

The method takes the ACM class for the stream that we are reading from, the position and length, and the buffer to read data into. It handles starting and ending at arbitrary positions, reading from the appropriate clusters and allowing the data to span clusters. The write method works similarly, but has a couple of twists to it.
function TUOS_Native_File._Write( ACM : TNative_File_ACM ;
    Position : TStore_Address64 ; Length : TStore_Size64 ;
    Buff : pointer ) : TStore_Size64 ;

var Clustersize : TStore_Size64 ;
    DS : TData_Stream ;
    Offset, Old, P : TStore_Address64 ;
    This_Length : longint ;

begin
    // Setup...
    Result := 0 ;
    Set_Last_Error( nil ) ;
    if( ( ACM = nil ) or ( Length < 1 ) ) then
    begin
        exit ;
    end ;
    Old := ACM.Get_Root ;
    if( Position + Length > Get_Size( ACM ) ) then
    begin
        Set_Size( ACM, Position + Length ) ;
        if( Position + Length > Get_Size( ACM ) ) then
        begin
            Set_Last_Error(
		Create_Exception( UOS_File_Error_Write_Failure, _Store.Last_Error ) ) ;
            exit ;
        end ;
    end ;
    P := Offset_To_Pointer( ACM, Position ) ;
    if( P = 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Data_Structure_Error, nil ) ) ;
        exit ;
    end ;
    Result := Length ;

    // Convert from position to offset from start of cluster
    Clustersize := ACM.Get_Clustersize ;
    Offset := Position - ( Position div Clustersize ) * Clustersize ;
    while( Length > 0 ) do
    begin
        This_Length := Length ;
        if( This_Length > Clustersize - Offset ) then
        begin
            This_Length := Clustersize - Offset ;
        end ;
        if( ( Offset > 0 ) or ( This_Length < Clustersize ) ) then // Not a whole cluster
        begin
            _Store.Read( P, Clustersize, Buffer^ ) ; // Read in cluster for modification
            if( _Store.Last_Error <> nil ) then
            begin
                Set_Last_Error(
		    Create_Exception( UOS_File_Error_Write_Failure, _Store.Last_Error ) ) ;
                exit ;
            end ;
        end ;
        move( Buff^, Buffer[ Offset ], This_Length ) ;
        _Store.Write( P, Clustersize, Buffer^ ) ;
        if( _Store.Last_Error <> nil ) then
        begin
            Set_Last_Error(
		Create_Exception( UOS_File_Error_Write_Failure, _Store.Last_Error ) ) ;
            exit ;
        end ;
        Length := Length - This_Length ;
        if( Length = 0 ) then
        begin
            break ;
        end ;
        Offset := 0 ; // Start at beginning of next cluster
        Position := Position + This_Length ;
        P := Offset_To_Pointer( ACM, Position ) ;
        if( P = 0 ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_Data_Structure_Error, nil ) ) ;
            exit ;
        end ;
        Buff := PAnsiChar( Buff ) + This_Length ; // Move forward in destination buffer
    end ; // while( Length > 0 )
    if( Old <> ACM.Get_Root ) then
    begin
        if( ACM.Index <= Max_Header_Data_Stream ) then
        begin
            Dirty := True ;
            Header.Streams[ ACM.Index ].Pointer := ACM.Get_Root ;
        end else
        begin
            DS.Pointer := ACM.Get_Root ;
            _Write( Data_Stream_ACM,
		( ACM.Index - Max_Header_Data_Stream - 1 ) * sizeof( TData_Stream ) + sizeof( DS.Name ),
		sizeof( DS.Pointer ), @DS.Pointer ) ;
        end ;
    end ;
end ; // TUOS_Native_File._Write

Similar to the read routine, data is written in chunks that are no larger than a single cluster. However, in the case where only part of a cluster is being updated, we read the cluster into a buffer, modify part of it, and write it back out. This is because the store cannot write data in chunks smaller than the store cluster size, so if we are only changing part of a cluster, we have to write the whole cluster. If we didn't read in the cluster before modifying it, we would corrupt the part of the cluster that shouldn't change. The other twist is that, unlike the read routine, the user can write beyond the end of the stream. That is, writing to the stream can extend it.

Remember that we only allocate data when requested (called deferred processing). So here is a method that returns the pointer to the buffer, and one that returns a pointer to the stream list (each creating the object if it doesn't already exist):

function TUOS_Native_File.Buffer : PChar ;

var I : int64 ;

begin
    if( _Buffer = nil ) then
    begin
        I := Data_Stream_ACM.Get_Clustersize ;
        if( I < sizeof( TData_Stream ) * 4 ) then
        begin
            I := sizeof( TData_Stream ) * 4 ;
        end ;
        if( I < Header.Clustersize ) then
        begin
            I := Header.Clustersize ;
        end ;
        Reallocmem( _Buffer, I ) ;
        _Buffer_Size := I ;
    end ;
    Result := _Buffer ;
end ;


function TUOS_Native_File.Streams : TList ;

begin
    if( _Streams = nil ) then
    begin
        _Streams := TList.Create ;
    end ;
    Result := _Streams ;
end ;
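The deferred-allocation pattern behind both methods is the same: nothing is created until first use, and the buffer is sized to the largest of the stream cluster size, the header cluster size, or four stream headers. A small Python model of that sizing rule (the header size constant is an assumption, not the actual sizeof(TData_Stream)):

```python
STREAM_HEADER_SIZE = 16  # assumed size of one stream header

class NativeFileModel:
    """Sketch of deferred allocation: the buffer and stream list
    are only created on first use."""
    def __init__(self, acm_clustersize, header_clustersize):
        self._acm_clustersize = acm_clustersize
        self._header_clustersize = header_clustersize
        self._buffer = None
        self._streams = None

    def buffer(self):
        if self._buffer is None:
            # Large enough for a data cluster, a header cluster,
            # or at least four stream headers, whichever is biggest.
            size = max(self._acm_clustersize,
                       STREAM_HEADER_SIZE * 4,
                       self._header_clustersize)
            self._buffer = bytearray(size)
        return self._buffer

    def streams(self):
        if self._streams is None:
            self._streams = []
        return self._streams
```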

Now we come to the public interface for the class, which will be used by the File System class. First, some getters/setters and the Is_Class override:
function TUOS_Native_File.Is_Class( Name : PChar ) : boolean ;

begin
    Result := lowercase( Name ) = 'tuos_native_file' ;
end ;


function TUOS_Native_File.Get_Store : TCOM_Managed_Store64 ;

begin
    Result := _Store ;
end ;


procedure TUOS_Native_File.Set_Store( Value : TCOM_Managed_Store64 ) ;

begin
    if( Value <> nil ) then
    begin
        Value.Attach ;
    end ;
    if( _Store <> nil ) then
    begin
        _Store.Detach ;
    end ;
    _Store := Value ;
end ;

We've seen this in our previous class work, so there is no need to go over it here.
The Create_Stream method is used to create a new stream in the file.
function TUOS_Native_File.Create_Stream( Name : int64 ) : longint ;

var DS : TData_Stream ;
    Loop : integer ;
    S, P : TStore_Address64 ;

begin
    Result := -1 ;
    if( Name = 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Duplicate_Name, nil ) ) ;
        exit ;
    end ;
    if( Indexof( Name ) <> -1 ) then // Already exists
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Duplicate_Name, nil ) ) ;
        exit ;
    end ;
    Set_Last_Error( nil ) ;
    for Loop := 1 to Max_Header_Data_Stream do
    begin
        if( Header.Streams[ Loop ].Name = 0 ) then
        begin
            Header.Streams[ Loop ].Name := Name ;
            Dirty := True ; // Header needs an update
            Result := Loop ;
            if( Result > _Max_Stream ) then
            begin
                _Max_Stream := Result ;
            end ;
            exit ;
        end ;
    end ;

    // If we get here, we have no room in the file header for a new stream...
    Result := Max_Header_Data_Stream + 1 ;
    DS.Name := Name ;
    DS.Pointer := 0 ;
    if( Data_Stream_ACM.Get_Size = 0 ) then // First data stream after header
    begin
        fillchar( Buffer^, _Buffer_Size, 0 ) ;
        move( DS, Buffer^, sizeof( DS ) ) ;
        _Write( Data_Stream_ACM, 0, _Buffer_Size, Buffer ) ;
        _Max_Stream := Max_Header_Data_Stream + 1 ;
        Header.Data_Stream := Data_Stream_ACM.Get_Root ;
        Dirty := True ;
        exit ;
    end ;

    P := 0 ;
    while( true ) do
    begin
        fillchar( Buffer^, _Buffer_Size, 0 ) ;
        S  := _Read( Data_Stream_ACM, P, _Buffer_Size, Buffer ) ;
        if( S = 0 ) then // Need to write at end of data stream
        begin
            Set_Last_Error( nil ) ;
            move( DS, Buffer^, sizeof( DS ) ) ;
            _Write( Data_Stream_ACM, P, _Buffer_Size, Buffer ) ;
            if( Result > _Max_Stream ) then
            begin
                _Max_Stream := Result ;
            end ;
            exit ;
        end ;
        for Loop := 0 to S div sizeof( TData_Stream ) - 1 do
        begin
            if( PData_Stream_Array( Buffer )[ Loop ].Name = 0 ) then
            begin
                Result := Result + Loop ;
                if( _Max_Stream < Result ) then
                begin
                    _Max_Stream := Result ;
                end ;
                PData_Stream_Array( Buffer )[ Loop ].Name := Name ;
                PData_Stream_Array( Buffer )[ Loop ].Pointer := 0 ;
                _Write( Data_Stream_ACM, P, _Buffer_Size, Buffer ) ;
                if( Result > _Max_Stream ) then
                begin
                    _Max_Stream := Result ;
                end ;
                exit ;
            end ;
        end ;
        Result := Result + S div sizeof( TData_Stream ) ;
        P := P + S ;
    end ; // while( true )
end ; // TUOS_Native_File.Create_Stream

The data stream itself has no name, per se, so it has a name value of 0. The user cannot create a stream with a name of 0, since there is effectively already one with that name. We use IndexOf to verify that the name isn't already used by another stream in this file, since each stream must have a unique name (otherwise, how would the user tell them apart?). Next we see if there is room for the new stream header in the file header (slots 1 through Max_Header_Data_Stream). If so, we set up the stream header and return the index for the stream. Otherwise, we need to add it to the data stream header stream, which requires searching through the headers for an unused one (a name of 0). If one is found, we update it with the new stream name. If we run out of stream headers before finding an unused one, we must extend the stream to make room for new headers: we write a cluster's worth of blank headers and then update the first of them with the new stream name. The pointer for the new stream remains 0 unless, or until, the user extends that stream. Obviously, creating a lot of streams and extending them over and over is less efficient than extending the file data, but the metadata streams should remain fairly stable once they are created, except in certain cases, so they can afford to be a little less efficient.
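The slot-search order (header slots first, then the extension stream) can be sketched as follows. This is a toy Python model under stated assumptions: Max_Header_Data_Stream is taken to be 4, and the on-store extension stream is simplified to a list of names.

```python
MAX_HEADER_STREAMS = 4  # assumed value of Max_Header_Data_Stream

class StreamTable:
    """Model of Create_Stream's slot search: header slots first,
    then the data stream header stream (here just a list)."""
    def __init__(self):
        # Index 0 is the unnamed file data stream; 1..4 are header slots.
        self.header = [0] * (MAX_HEADER_STREAMS + 1)
        self.extension = []  # names of streams beyond the header

    def index_of(self, name):
        for i in range(1, MAX_HEADER_STREAMS + 1):
            if self.header[i] == name:
                return i
        for i, n in enumerate(self.extension):
            if n == name:
                return MAX_HEADER_STREAMS + 1 + i
        return -1

    def create_stream(self, name):
        if name == 0 or self.index_of(name) != -1:
            return -1  # duplicate (stream 0 is the unnamed file data)
        for i in range(1, MAX_HEADER_STREAMS + 1):
            if self.header[i] == 0:  # free header slot
                self.header[i] = name
                return i
        for i, n in enumerate(self.extension):
            if n == 0:  # reuse a deleted slot in the extension
                self.extension[i] = name
                return MAX_HEADER_STREAMS + 1 + i
        self.extension.append(name)  # extend the header stream
        return MAX_HEADER_STREAMS + len(self.extension)
```

Note how deleted slots (name 0) in the extension are reused before the stream is grown, mirroring the scan in the while loop above.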

The Max_Stream method returns the maximum possible stream index. The first time it is called, it will calculate the value by looking at the file header and the data stream header stream. This is used to perform quick validation of stream indexes.

function TUOS_Native_File.Max_Stream : longint ;

var Loop : integer ;

begin
    if( _Max_Stream < 0 ) then // Need to figure out the max stream
    begin
        if( Header.Data_Stream = 0 ) then
        begin
            _Max_Stream := 0 ;
            for Loop := Max_Header_Data_Stream downto 1 do
            begin
                if( Header.Streams[ Loop ].Name <> 0 ) then
                begin
                    _Max_Stream := Loop ;
                    break ;
                end ;
            end ;
            Result := _Max_Stream ;
            exit ;
        end ;
        _Max_Stream := Max_Header_Data_Stream + Data_Stream_ACM.Get_Size div sizeof( TData_Stream ) ;
    end ;

    Result := _Max_Stream ;
end ;
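The two cases in Max_Stream can be modeled directly: with no extension stream, the maximum is the highest occupied header slot; otherwise every slot in the extension counts toward the maximum, occupied or not. A Python sketch with assumed constants:

```python
MAX_HEADER_STREAMS = 4   # assumed value of Max_Header_Data_Stream
STREAM_HEADER_SIZE = 16  # assumed size of one stream header

def max_stream(header_names, extension_bytes):
    """Maximum possible stream index. `header_names` is indexed
    1..MAX_HEADER_STREAMS (index 0 unused); `extension_bytes` is the
    size of the data stream header stream, or 0 if there isn't one."""
    if extension_bytes == 0:
        # No extension: highest occupied header slot wins.
        for i in range(MAX_HEADER_STREAMS, 0, -1):
            if header_names[i] != 0:
                return i
        return 0
    # Extension present: all of its slots count, used or not.
    return MAX_HEADER_STREAMS + extension_bytes // STREAM_HEADER_SIZE
```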

The Stream_Name and Stream_Pointer methods simply find the stream header for the passed index and return the respective value.
function TUOS_Native_File.Stream_Name( Index : longint ) : int64 ;

var ACM : TNative_File_ACM ;

begin
    ACM := Find_Data_Stream( Index ) ;
    if( ACM = nil ) then
    begin
        Result := 0 ;
    end else
    begin
        Result := ACM.Name ;
    end ;
end ;


function TUOS_Native_File.Stream_Pointer( Index : longint ) : TStore_Address64 ;

var ACM : TNative_File_ACM ;

begin
    ACM := Find_Data_Stream( Index ) ;
    if( ACM = nil ) then
    begin
        Result := 0 ;
    end else
    begin
        Result := ACM.Get_Root ;
    end ;
end ;

The Read method is a thin layer over the internal _Read method: it simply validates the parameters and obtains the ACM instance for the referenced stream. The Write method, likewise, is a thin layer over the _Write method.

function TUOS_Native_File.Read( Stream : longint ; Position : TStore_Address64 ;
    Length : TStore_Size64 ; var Buff ) : TStore_Size64 ;

var ACM : TNative_File_ACM ;

begin
    Result := 0 ;
    if( Stream < 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Invalid_Stream_Index, nil ) ) ;
        exit ;
    end ;
    Set_Last_Error( nil ) ;
    ACM := Find_Data_Stream( Stream ) ;
    if( ACM = nil ) then
    begin
        if( _Store.Last_Error <> nil ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_Read_Failure, _Store.Last_Error ) ) ;
        end else
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_No_Such_Stream, nil ) ) ;
        end ;
        exit ;
    end ;

    Result := _Read( ACM, Position, Length, @Buff ) ;
end ;


function TUOS_Native_File.Write( Stream : longint ; Position : TStore_Address64 ;
    Length : TStore_Size64 ; var Buff ) : TStore_Size64 ;

var ACM : TNative_File_ACM ;

begin
    Result := 0 ;
    if( Stream < 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Invalid_Stream_Index, nil ) ) ;
        exit ;
    end ;
    Set_Last_Error( nil ) ;
    ACM := Find_Data_Stream( Stream ) ;
    if( ACM = nil ) then
    begin
        if( _Store.Last_Error <> nil ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_Write_Failure, _Store.Last_Error ) ) ;
        end else
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_No_Such_Stream, nil ) ) ;
        end ;
        exit ;
    end ;

    Result := _Write( ACM, Position, Length, @Buff ) ;
end ; // TUOS_Native_File.Write

The Get_Stream_Size and Set_Stream_Size methods are simple as well. They validate the parameters, look up the ACM instance, and then call the internal Get_Size and Set_Size methods.

function TUOS_Native_File.Get_Stream_Size( Stream : longint ) : TStore_Size64 ;

var ACM : TNative_File_ACM ;

begin
    Result := 0 ;
    if( Stream < 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Invalid_Stream_Index, nil ) ) ;
        exit ;
    end ;
    Set_Last_Error( nil ) ;
    if( Stream = 0 ) then
    begin
        Result := Header.Size ;
        exit ;
    end ;
    ACM := Find_Data_Stream( Stream ) ;
    if( ACM = nil ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_No_Such_Stream, nil ) ) ;
        exit ;
    end ;
    Result := Get_Size( ACM ) ;
    if( ACM.Last_Error <> nil ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_IO_Error, _Store.Last_Error ) ) ;
    end ;
end ;


procedure TUOS_Native_File.Set_Stream_Size( Stream : longint ; Value : TStore_Size64 ) ;

var ACM : TNative_File_ACM ;

begin
    if( Stream < 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Invalid_Stream_Index, nil ) ) ;
        exit ;
    end ;
    Set_Last_Error( nil ) ;
    ACM := Find_Data_Stream( Stream ) ;
    if( ACM = nil ) then
    begin
        if( _Store.Last_Error <> nil ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_Write_Failure, _Store.Last_Error ) ) ;
        end else
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_No_Such_Stream, nil ) ) ;
        end ;
        exit ;
    end ;
    Set_Size( ACM, Value ) ;
    if( ACM.Last_Error <> nil ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_IO_Error, _Store.Last_Error ) ) ;
    end ;
end ;

Finally, we have Delete_Stream, which deletes the specified stream. Note that we don't allow the caller to delete the data stream (stream index 0). The only way to delete stream 0 is to delete the file itself. One can set its size to 0, but as long as there is a file, there is always stream 0.

procedure TUOS_Native_File.Delete_Stream( Name : int64 ; Index : longint ) ;

var ACM : TNative_File_ACM ;
    DS : TData_Stream ;

begin
    // Sanity checks...
    if( Index < 0 ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Invalid_Stream_Index, nil ) ) ;
        exit ;
    end ;
    if( ( Name = 0 ) and ( Index = 0 ) ) then // Cannot delete file data via Delete_Stream
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_Invalid_Operation, nil ) ) ;
        exit ;
    end ;
    if( Name > 0 ) then
    begin
        Index := IndexOf( Name ) ;
        if( Index < 0 ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_No_Such_Stream, nil ) ) ;
            exit ;
        end ;
    end ;

    // Get ACM...
    ACM := Find_Data_Stream( Index ) ;
    if( ACM = nil ) then
    begin
        if( _Store.Last_Error <> nil ) then
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_Write_Failure, _Store.Last_Error ) ) ;
        end else
        begin
            Set_Last_Error( Create_Exception( UOS_File_Error_No_Such_Stream, nil ) ) ;
        end ;
        exit ;
    end ;

    // Remove stream from file...
    if( Index <= Max_Header_Data_Stream ) then
    begin
        Header.Streams[ Index ].Name := 0 ;
        Header.Streams[ Index ].Pointer := 0 ;
        Dirty := True ; // WARNING: Dangerous
    end else
    begin
        DS.Name := 0 ;
        DS.Pointer := 0 ;
        _Write( Data_Stream_ACM, ( Index - Max_Header_Data_Stream - 1 ) * sizeof( DS ), 
	    sizeof( DS ), @DS ) ;
        if( Last_Error <> nil ) then
        begin
            exit ;
        end ;
    end ;

    // Delete in-memory pointer to ACM...
    _Streams[ Index ] := nil ;

    // Deallocate the ACM and its allocated space...
    Set_Size( ACM, 0 ) ;
    if( ACM.Last_Error <> nil ) then
    begin
        Set_Last_Error( Create_Exception( UOS_File_Error_IO_Error, _Store.Last_Error ) ) ;
    end ;
    ACM.Detach ;
end ; // TUOS_Native_File.Delete_Stream

After validating the parameters, we find the ACM for the stream and zero out the data stream's header. If the header is in the file header, this is very simple; otherwise we have to alter the data stream header stream. Then we set the stream's size to 0, which deallocates all of the clusters for the stream.

At this point, we have a working class. It has been tested and the stress test shows that it has roughly the same performance as the ACM class, which indicates that the overhead of this class is minimal. However, there is one issue. You may have noticed a few comments in the source code ("WARNING: Dangerous"). As we discussed in previous articles, to prevent the file system from becoming corrupted in the event of an unforeseen situation (such as a power failure at the wrong moment, or even - heaven forbid - a bug in the code), we want to remove pointers to allocated data before we actually deallocate the data. However, since the file header is not updated on the store by this class, but it does deallocate data, we run the risk of having left-over pointers that are invalid. If we don't know how to update the file header on the store, and the file system won't update it until after we deallocate data, we have code that can potentially corrupt data on the store. How do we address this? Well, we can include a call-back function that we call whenever we update the file header. Since we set the Dirty flag when we update the header, we can simply replace those assignments with a call to an internal routine that looks like this:

procedure TUOS_Native_File.Update_Header ;

begin
    Dirty := True ;
    if( assigned( On_Dirty ) ) then
    begin
        On_Dirty( self ) ;
    end ;
end ;

This requires new instance data for the class:
    On_Dirty : TDirty_Notice ;

And the type definition for the instance data:
type TDirty_Notice = procedure( Sender : TObject ) of object ;

Note that not every instance of setting the Dirty flag needs to go through this new method - only those that have to do with updating pointers that now point to deallocated data. It also requires that we make sure we clear the pointer and call the method before we deallocate the data. Why wouldn't we call the method in every case of setting the dirty flag? When it only involves allocating space, the worst thing that can happen is that some space on the store is "wasted" (until we clean it up), but no data gets corrupted. Since a given file system operation may involve multiple file header updates, if they don't involve deallocating data, we can "batch" them together and do a single write operation rather than potentially multiple writes. This gives us better performance.
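The crash-safety ordering - clear the pointer and flush the header before deallocating - can be captured in a toy model. All names here are hypothetical; this is only a sketch of the ordering rule, not the UOS code.

```python
class SafeDeleter:
    """Model of the ordering rule: clear the on-store pointer (and
    flush the header via the dirty callback) BEFORE deallocating the
    data it pointed to. A crash between the two steps then leaves at
    worst some leaked space, never a dangling pointer."""
    def __init__(self, on_dirty):
        self.on_dirty = on_dirty  # callback that writes the header out
        self.pointer = 1234       # pretend cluster address
        self.log = []             # records the order of operations

    def update_header(self):
        self.log.append('header-flushed')
        if self.on_dirty is not None:
            self.on_dirty(self)

    def delete_stream_data(self):
        self.pointer = 0               # 1. remove the pointer...
        self.update_header()           # 2. ...and push the header out
        self.log.append('deallocated') # 3. only now free the clusters
```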

In the next article, we will start working on our File System class.