You've probably been wondering how openMosix handles things like file reads and writes when a process migrates to another node. For example, suppose a process needs to read some data from the file /etc/test.conf on our local machine; if that process migrates to another node, how will openMosix read the file? The answer is the openMosix File System, or oMFS.
oMFS does several things. First, it shares each node's disk with all the other nodes in the cluster, allowing them to read and write the relevant files. It also provides what is known as Direct File System Access (DFSA), which lets a migrated process run many system calls locally on the node it is running on, rather than wasting time forwarding them to the home node. oMFS works somewhat like NFS, but has features that clustering requires.
If you installed openMosix from the RPMs, oMFS should already be set up and automatically mounted. Have a look in /mfs and you will see a subdirectory for every node in the cluster, named after its node ID. Each of these directories contains the shared disk of that particular node.
You will also see some symlinks like the following:
here -> the node your process is currently running on
home -> your home node
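In practice, a file on any node's disk can be reached by prefixing its path with /mfs and the node ID (or one of the symlinks above). A small helper sketches the idea; the node ID and file name here are only illustrative:

```shell
# Build the oMFS path for a file on a given node.
# "$1" is a node ID (or a symlink name such as "here" or "home"),
# "$2" is the path to the file on that node's own disk.
mfs_path() {
    echo "/mfs/$1$2"
}

# The file /etc/test.conf on node 2's disk, as seen from any node:
mfs_path 2 /etc/test.conf

# The same file on whichever node the process currently runs on:
mfs_path here /etc/test.conf
```

So a migrated process (or you, at the shell) can read /mfs/2/etc/test.conf exactly as if it were a local file.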
If the /mfs directory has not been created, you can make it and mount the file system manually, matching the fstab entry shown below:

mkdir -p /mfs
mount -t mfs -o dfsa=1 mfs_mnt /mfs
If you want it mounted automatically at boot time, add the following entry to your /etc/fstab:
mfs_mnt /mfs mfs dfsa=1 0 0
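If you script the setup, it is worth making the append idempotent so running it twice does not duplicate the entry. A sketch (FSTAB defaults to a demo file here so it is safe to try; point it at /etc/fstab in real use):

```shell
# Append the oMFS entry to the fstab file only if it is not already
# present, so the script can safely be run more than once.
FSTAB="${FSTAB:-fstab.demo}"      # use FSTAB=/etc/fstab for real
ENTRY="mfs_mnt /mfs mfs dfsa=1 0 0"

touch "$FSTAB"
grep -qxF "$ENTRY" "$FSTAB" || echo "$ENTRY" >> "$FSTAB"
```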
Bear in mind that this entry has to be present on every node in the cluster. Lastly, you can turn the openMosix file system off by unmounting it:

umount /mfs
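Since the entry must exist on every node, a loop over the cluster can save some typing. The hostnames below are placeholders for your own node names, and the loop only prints the commands; remove the leading "echo" to actually run them over ssh:

```shell
# Print (dry-run) the command that would add the oMFS fstab entry
# on each node. Drop the "echo" before "ssh" to execute for real.
ENTRY="mfs_mnt /mfs mfs dfsa=1 0 0"
for node in node1 node2 node3; do
    echo ssh root@"$node" "grep -qxF '$ENTRY' /etc/fstab || echo '$ENTRY' >> /etc/fstab"
done
```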
Now that we've got oMFS covered, it's time to take a look at how you can make the ssh login process less time-consuming, allowing you to take control of all your cluster nodes whenever you need to, and also helping the cluster system execute special functions.