This concept map, created with IHMC CmapTools, has information related to the following slides on distributed file systems and Sun NFS.

Storage systems and their properties
- In the first generation of distributed systems (1974-95), file systems (e.g. NFS) were the only networked storage systems.
- With the advent of distributed object systems (CORBA, Java) and the web, the picture has become more complex.

What is a file system?
- Persistent stored data sets.
- Hierarchic name space visible to all processes.
- API with the following characteristics: access and update operations on persistently stored data sets; a sequential access model (with additional random-access facilities).
- Sharing of data between users, with access control.
- Concurrent access: certainly for read-only access.
- Other features: mountable file stores.

File service requirements
- Transparency, concurrency, replication, heterogeneity, fault tolerance, consistency, security, efficiency.

File Group
- A collection of files that can be located on any server or moved between servers while maintaining the same names. Similar to a UNIX filesystem.
- Helps with distributing the load of file serving between several servers.
- File groups have identifiers which are unique throughout the system (and hence, for an open system, they must be globally unique); these identifiers are used to refer to file groups and to files.
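File group identifiers and file handles are opaque to clients. One plausible layout, assumed here purely for illustration (it is typical of classic Sun-derived NFS servers rather than anything stated on the slides), combines the file group identifier with an i-node number and an i-node generation number:

    # Illustrative sketch only: the real layout of an NFS file handle is
    # implementation-defined and is never interpreted by clients.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FileHandle:
        filesystem_id: int   # globally unique file group identifier
        inode_number: int    # identifies the file within the file group
        generation: int      # incremented when the i-node is reused, so that
                             # stale handles held by clients can be detected

    root = FileHandle(filesystem_id=0x2A51, inode_number=2, generation=1)
    print(root)

The generation number shows why handles can become 'stale': if a file is deleted and its i-node reused, handles that clients obtained earlier no longer match.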
Case Study: Sun NFS
- An industry standard for file sharing on local networks since the 1980s.
- An open standard with clear and simple interfaces.
- Closely follows the abstract file service model defined above.
- Supports many of the design requirements already mentioned: transparency, heterogeneity, efficiency, fault tolerance.
- Limited achievement of: concurrency, replication, consistency, security.

NFS server operations (simplified)
- read(fh, offset, count) -> attr, data
- write(fh, offset, count, data) -> attr
- create(dirfh, name, attr) -> newfh, attr
- remove(dirfh, name) -> status
- getattr(fh) -> attr
- setattr(fh, attr) -> attr
- lookup(dirfh, name) -> fh, attr
- rename(dirfh, name, todirfh, toname) -> status
- link(newdirfh, newname, dirfh, name) -> status
- readdir(dirfh, cookie, count) -> entries
- symlink(newdirfh, newname, string) -> status
- readlink(fh) -> string
- mkdir(dirfh, name, attr) -> newfh, attr
- rmdir(dirfh, name) -> status
- statfs(fh) -> fsstats

NFS access control and authentication
- Stateless server, so the user's identity and access rights must be checked by the server on each request. In the local file system they are checked only on open().
- Every client request is accompanied by the userID and groupID (not shown in Figure 8.9 because they are inserted by the RPC system).
- The server is exposed to imposter attacks unless the userID and groupID are protected by encryption.
- Kerberos has been integrated with NFS to provide a stronger and more comprehensive security solution. Kerberos is described in Chapter 7; integration of NFS with Kerberos is covered later in this chapter.
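Two consequences of this interface are easy to miss: pathnames are resolved one component at a time with lookup(), and the userID/groupID accompany every call. The toy in-memory server below sketches both points; ToyNfsServer and its methods are stand-ins invented for this example, not the real RPC interface.

    # Toy stand-in for an NFS server: one lookup() per pathname component and
    # a credential check on every request, because the server keeps no state.

    class AccessError(Exception):
        pass

    class ToyNfsServer:
        def __init__(self, tree, owner_uid):
            self.tree = tree            # file handle -> directory dict or file bytes
            self.owner_uid = owner_uid

        def _check(self, uid, gid):
            # Identity and access rights are re-checked on each request,
            # not just at open() as in a local file system.
            if uid != self.owner_uid:
                raise AccessError("permission denied")

        def lookup(self, uid, gid, dirfh, name):
            self._check(uid, gid)
            return self.tree[dirfh][name]        # file handle for 'name'

        def read(self, uid, gid, fh, offset, count):
            self._check(uid, gid)
            return self.tree[fh][offset:offset + count]

    server = ToyNfsServer(
        tree={"fh-root": {"usr": "fh-usr"},
              "fh-usr": {"motd": "fh-motd"},
              "fh-motd": b"welcome to NFS\n"},
        owner_uid=503)

    def read_file(srv, uid, gid, root_fh, path, count):
        fh = root_fh
        for component in path.strip("/").split("/"):   # one lookup() per component
            fh = srv.lookup(uid, gid, fh, component)
        return srv.read(uid, gid, fh, 0, count)

    print(read_file(server, uid=503, gid=20, root_fh="fh-root",
                    path="/usr/motd", count=8192))

Resolving components at the client keeps the server stateless and allows a pathname to cross mount points at the client, at the cost of extra round trips.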
NFS optimization - server caching
- Similar to UNIX file caching for local files: pages (blocks) from disk are held in a main-memory buffer cache until the space is required for newer pages. Read-ahead and delayed-write optimizations.
- For local files, writes are deferred to the next sync event (30-second intervals).
- This works well in the local context, where files are always accessed through the local cache, but in the remote case it doesn't offer the necessary synchronization guarantees to clients.
- NFS v3 servers offer two strategies for updating the disk:
  - write-through: altered pages are written to disk as soon as they are received at the server. When a write() RPC returns, the NFS client knows that the page is on the disk.
  - delayed commit: pages are held only in the cache until a commit() call is received for the relevant file. This is the default mode used by NFS v3 clients. A commit() is issued by the client whenever a file is closed.

NFS optimization - client caching
- Server caching does nothing to reduce RPC traffic between client and server; further optimization is essential to reduce server load in large networks.
- The NFS client module caches the results of read, write, getattr, lookup and readdir operations.
- Synchronization of file contents (one-copy semantics) is not guaranteed when two or more clients are sharing the same file.
- A timestamp-based validity check reduces inconsistency but doesn't eliminate it. The validity condition for a cache entry at the client is

      (T - Tc < t) or (Tm_client = Tm_server)

  where T is the current time, Tc is the time when the cache entry was last validated, Tm is the time when the block was last modified (as recorded at the client and at the server), and t is the freshness interval.
- t is configurable (per file) but is typically set to 3 seconds for files and 30 seconds for directories.
- It remains difficult to write distributed applications that share files with NFS.
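A minimal sketch of that check, assuming each cache entry records Tc and Tm_client, and using fetch_server_mtime as a placeholder for the getattr() call that obtains Tm_server:

    # Sketch of the client-side freshness test:
    #   valid  <=>  (T - Tc < t)  or  (Tm_client == Tm_server)
    import time
    from dataclasses import dataclass

    FRESHNESS_INTERVAL = 3.0   # t: typically 3 s for files, 30 s for directories

    @dataclass
    class CacheEntry:
        tc: float          # time this entry was last validated (Tc)
        tm_client: float   # server modification time recorded with the cached block

    def entry_is_valid(entry, fetch_server_mtime, t=FRESHNESS_INTERVAL):
        now = time.time()                         # T
        if now - entry.tc < t:                    # validated recently: assume fresh,
            return True                           # no RPC needed
        tm_server = fetch_server_mtime()          # placeholder for a getattr() RPC
        if entry.tm_client == tm_server:          # block unchanged at the server
            entry.tc = now                        # record the successful validation
            return True
        return False                              # stale: the block must be re-fetched

    entry = CacheEntry(tc=time.time() - 10.0, tm_client=1000.0)
    print(entry_is_valid(entry, fetch_server_mtime=lambda: 1000.0))   # True

Note that the freshness interval only postpones getattr() traffic; it is exactly this window that allows two clients to see different versions of a shared file.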
Automounter
- The NFS client catches attempts to access 'empty' mount points and routes them to the Automounter.
- The Automounter has a table of mount points with several candidate servers for each; it sends a probe message to each candidate server and then uses the mount service to mount the filesystem at the first server to respond (a sketch of this selection step appears at the end of this section).
- Keeps the mount table small.
- Provides a simple form of replication for read-only filesystems: e.g. if there are several servers with identical copies of /usr/lib, each server will have a chance of being mounted at some clients.

Kerberized NFS
- The Kerberos protocol is too costly to apply on each file access request, so Kerberos is used only in the mount service, to authenticate the user's identity.
- The user's UserID and GroupID are stored at the server with the client's IP address.
- For each file request, the UserID and GroupID sent must match those stored at the server, and the IP addresses must also match (a sketch of this check appears at the end of this section).
- This approach has some problems: it can't accommodate multiple users sharing the same client computer, and all remote filestores must be mounted each time a user logs in.

NFS summary 1
- An excellent example of a simple, robust, high-performance distributed service.
- Achievement of transparencies (see Section 1.4.7):
  - Access: Excellent; the API is the UNIX system call interface for both local and remote files.
  - Location: Not guaranteed but normally achieved; naming of filesystems is controlled by client mount operations, but transparency can be ensured by an appropriate system configuration.
  - Concurrency: Limited but adequate for most purposes; when read-write files are shared concurrently between clients, consistency is not perfect.
  - Replication: Limited to read-only filesystems; for writable files, the Sun Network Information Service (NIS) runs over NFS and is used to replicate essential system files (see Chapter 14).

NFS summary 2
- Achievement of transparencies (continued):
  - Failure: Limited but effective; service is suspended if a server fails. Recovery from failures is aided by the simple stateless design.
  - Mobility: Hardly achieved; relocation of files is not possible, and relocation of filesystems is possible but requires updates to client configurations.
  - Performance: Good; multiprocessor servers achieve very high performance, but for a single filesystem it is not possible to go beyond the throughput of a multiprocessor server.
  - Scaling: Good; filesystems (file groups) may be subdivided and allocated to separate servers. Ultimately, the performance limit is determined by the load on the server holding the most heavily used filesystem (file group).

Recent advances in file services
- NFS enhancements:
  - WebNFS: the NFS server implements a web-like service on a well-known port. Requests use a 'public file handle' and a pathname-capable variant of lookup(). This enables applications to access NFS servers directly, e.g. to read a portion of a large file.
  - One-copy update semantics (Spritely NFS, NQNFS): include an open() operation and maintain tables of open files at servers, which are used to prevent multiple writers and to generate callbacks to clients notifying them of updates. Performance was improved by a reduction in getattr() traffic.
- Improvements in disk storage organisation:
  - RAID: improves performance and reliability by striping data redundantly across several disk drives.
  - Log-structured file storage: updated pages are stored contiguously in memory and committed to disk in large contiguous blocks (~1 Mbyte). File maps are modified whenever an update occurs. Garbage collection is used to recover disk space.
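Sketch of the Automounter's server-selection step, under stated assumptions: probe() and mount_filesystem() are placeholders for the real probe message and mount-service RPC, and the candidates are probed in turn here for simplicity rather than in parallel as a real automounter would.

    # Automounter sketch: a table of mount points, several candidate servers
    # per mount point, and a probe-then-mount step for 'empty' mount points.
    import random

    MOUNT_TABLE = {
        "/usr/lib": ["serverA:/export/lib", "serverB:/export/lib", "serverC:/export/lib"],
    }

    def probe(server):
        # Placeholder for a probe message; pretend some servers are down or slow.
        return random.random() > 0.3

    def mount_filesystem(mount_point, export):
        # Placeholder for the mount-service RPC.
        print(f"mounting {export} at {mount_point}")

    def handle_empty_mount_point(mount_point):
        for export in MOUNT_TABLE[mount_point]:
            server = export.split(":")[0]
            if probe(server):                     # first responder wins
                mount_filesystem(mount_point, export)
                return
        raise RuntimeError(f"no server available for {mount_point}")

    handle_empty_mount_point("/usr/lib")

Because different clients may get their first response from different servers, identical read-only copies such as /usr/lib end up mounted from different servers at different clients, giving the simple form of replication described above.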
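A sketch of the per-request check used with Kerberized NFS, assuming the mount service has already authenticated the user with Kerberos; the class and method names are illustrative, not taken from any real implementation.

    # Kerberized NFS sketch: Kerberos authenticates the user only at mount
    # time; the server then stores (UserID, GroupID) against the client's IP
    # address and requires every file request to match those stored values.

    class KerberizedMountTable:
        def __init__(self):
            self._entries = {}                  # client IP -> (uid, gid)

        def register_mount(self, client_ip, uid, gid):
            # Called by the mount service after Kerberos authentication.
            self._entries[client_ip] = (uid, gid)

        def check_request(self, client_ip, uid, gid):
            # UserID, GroupID and IP address must all match on every request.
            return self._entries.get(client_ip) == (uid, gid)

    table = KerberizedMountTable()
    table.register_mount("10.0.0.7", uid=503, gid=20)
    print(table.check_request("10.0.0.7", uid=503, gid=20))    # True: accepted
    print(table.check_request("10.0.0.7", uid=999, gid=20))    # False: rejected

Storing a single (UserID, GroupID) pair per client IP address is exactly why this scheme cannot accommodate multiple users sharing the same client computer.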