		     Internet Rover 3.0 Environment

Chapter 1.  Interesting Operating Environments

  The rover system is entirely open, and the simple flat-file nature of
its IPC makes applying rover to network management tasks relatively
straightforward.  Further, the rover system allows use of the standard
unix utilities, programming, and operating system environment.  This
allows the use of the remote execution commands, X-windows toolkits,
scripts, debugging tools, communications systems including NFS and RPC,
etc.  This flexibility has led to several interesting rover operating
environments.

1.1  Rover Satellite System

  We wanted to allow any of our "customers" to run graphical and text
displays, but did not necessarily want to burden the data collection
machines with additional tasks.  Further, the security folks demanded
accountability, so providing a single userid shared by NOC folks, local
engineers, foreign engineers, VPs, etc. was deemed unacceptable.

  The solution was to rdist the data files to a "satellite" machine.
This machine is administered locally, has its own set of binaries, and
has plenty of horsepower to run many displays.  There is, of course, a
3 minute delay before the data is rdisted to the satellite system, but
this limitation was deemed acceptable for passive viewing applications.
This system is currently in production at Merit.


	********************************************
	*
	*  graphics -- Cannot represent as text.
	*
	********************************************

     

1.2  Rover over NFS

  Another mechanism that we experimented with for a while was to have the
data collection machine "share" the data files by making them available
over an NFS mount.   This way, any machine that mounts the partition
has free and transparent access to the data.  The down side turned out
to be the reliability of NFS, and the dependence on the single machine
exporting the NFS partition.
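  A sketch of such a setup is below.  The export syntax varies between
unix flavors, and the hostnames and paths are invented for illustration:

```shell
# On the data collection machine, export the data read-only
# (/etc/exports entry; exact syntax differs by vendor):
#
#   /usr/rover/data  -ro,access=noc1:noc2
#
# On each viewing machine, mount it read-only; a soft mount keeps
# a dead server from hanging the displays indefinitely:
#
#   mount -o ro,soft collector:/usr/rover/data /usr/rover/data
```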


	********************************************
	*
	*  graphics -- Cannot represent as text.
	*
	********************************************

1.3  Multiple-userid NOC

  The folks at Rice are running the displays under multiple userids, and
we have adopted some of Rice's code in this release for this purpose.
The benefits include being able to view the log files and see who
updated problems.   I believe the mechanism they use involves creating a
rover group, with the operators as members of that group.
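  A hypothetical version of that group setup might look like the
following.  The group name, userids, and paths are guesses for
illustration, and the commands (run as root) vary by unix flavor:

```shell
# Illustrative shared-group setup; not Rice's actual procedure.
groupadd rover                       # the shared rover group
usermod -a -G rover operator1        # add each NOC operator to it
chgrp -R rover /usr/rover/data       # group-own the data and log files
chmod -R g+rw /usr/rover/data        # operators may update problems
chmod g+s /usr/rover/data            # new files inherit the rover group
```

Because each operator keeps an individual userid, the log files record
who updated which problem, satisfying the accountability requirement.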


	********************************************
	*
	*  graphics -- Cannot represent as text.
	*
	********************************************

1.4  Separate but equal NOC

  In this scenario, the requirement is that separate groups of rover
collectors be logically separated.   This can easily be accomplished
by setting the $PINGKYDIR environment variable to one data directory
before starting a select set of rovers, and then changing $PINGKYDIR to
point elsewhere before starting the next set of rovers.
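  The sequence above can be sketched as a few lines of Bourne shell.
The directory names are illustrative:

```shell
# Start one cluster of rovers against its own data directory...
PINGKYDIR=/usr/rover/data/backbone; export PINGKYDIR
roverd &

# ...then repoint $PINGKYDIR before starting the next, logically
# separate cluster.  Each set reads and writes only its own directory.
PINGKYDIR=/usr/rover/data/campus; export PINGKYDIR
roverd &
```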

  This strategy has several advantages.  First off, it allows similar
hardware/software to be clustered together on one text alert screen
without affecting the other screens, and the strategy can be extended
to n screens.  This also provides a political firewall - if the rover
breaks in one place, it doesn't affect the other rovers/displays.
Secondly, these rovers and displays can run on other machines.  As long
as the NOC can get to the rover or display machine, they can display
alerts.  This is particularly important if there is CPU resource
contention.


	********************************************
	*
	*  graphics -- Cannot represent as text.
	*
	********************************************

1.5  One collector - multiple status files

  This environment is designed to allow a total network map view as
well as a breakdown by area.   Since roverd is a script, it can be
altered to, after polling, grep out certain portions of the status
file into additional STATUS files.  Each of these other STATUS files
can then be used with its own MAP file.  Finally, you can configure the
menus so that clicking on a node launches another xmap application,
with that invocation displaying the corresponding sub-net.
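  The grep step might look like the following.  The STATUS line format
here is invented for illustration; the real fields differ:

```shell
# Stand-in for the full STATUS file that roverd writes after polling
# (field layout is illustrative only):
cat > STATUS <<'EOF'
backbone node1 up
campus node2 down
backbone node3 up
EOF

# Carve out per-area STATUS files, each to be paired with its own
# MAP file for a sub-net xmap display:
grep '^backbone' STATUS > STATUS.backbone
grep '^campus' STATUS > STATUS.campus
```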

