This file can be read from the man pages for PETSc by doing

     ./bin/install -libs g
     ./bin/toolman

(toolman is a script that runs xman, the X11 window system client, using
the man directory built with install.)

/*D
    TOOLSintro - this is the introductory manual page for the entire
tools directory.

   Issues

$     Portability
$     Easy to get started with
$     Efficiency
$     Self generating documentation
$     Object oriented
$     Not intended for toy problems.

D*/


/*D
	InstallingTools - procedure for installing the tools 

    Notes:
    All pathnames in this document are relative to the root directory for
    the entire PETSc library (usually something like "tools" or "tools.core").
    You should set the environment variable TOOLSDIR to the location of
    the root directory of PETSc, for instance
$   setenv TOOLSDIR /home/bsmith/tools

    Also, if you wish to use pvm or p4 (or both), you should set the
    environment variables PVMDIR and P4DIR to point to their root
    directories, for instance
$   setenv PVMDIR /home/bsmith/pvm
$   setenv P4DIR /home/bsmith/p4

    You need to generate a "hosts" file in ./comm .  This file contains
    the names of machines, their "owners", and some resource limits.
    This lets people make their machines available in the evening
    but not during the day, or for short periods during the day
    and longer ones in the evening.  Update $TOOLSDIR/comm/hosts to include
    machines at your site.

    First Installation:
    Run $TOOLSDIR/bin/install. This will compile the code.  You must run
    this from $TOOLSDIR (the root of the tools directory).  The option
    -complete will speed the process (this option must only be used when
    building all of the libraries; otherwise, the object libraries may be
    damaged).  The usual commands are

$   cd $TOOLSDIR
$   bin/install -complete >&tools.log &

    This will build the object libraries and the man pages.
    You should check the output of this build for any errors.  If you redirect
    the output to a file, say "tools.log", then you can look for error
    messages with
$        $TOOLSDIR/bin/finderrors tools.log
    This looks for compile and archive errors in the output by searching for
    special character patterns in the log file.

    Special note:
    For the Intel Delta, you must do
$        bin/install intelnx inteldelta <other options>
    This distinguishes between the Delta (a mesh) and the iPSC/860
    (a hypercube).

    Building Documentation:
    $TOOLSDIR/bin/buildman will generate the man pages in ./man , as well
    as build the "whatis" file (used by man -k).  This needs to build
    a program in ./doc .  Running $TOOLSDIR/bin/toolman under X-windows will
    bring up an interface to all the manual pages for PETSc.
    You can pass toolman additional arguments; for example, 
$   $TOOLSDIR/bin/toolman -pagesize 630x760 
    will change the size of the manual page.     

    In addition, this will build a summary file (in ./docs).  This is a LaTeX
    file that may be used to generate a table of routine names, calling
    sequences, and short descriptions.  "make functions.dvi" in ./docs will
    build the dvi file for this listing; "make print" will print a copy
    (assuming that you have dvips; otherwise, use your dvi-to-ps converter).

    Reference manuals for the entire library can also be built.  First,
    run bin/buildref from the $TOOLSDIR directory.  The reference manual
    for the entire library may be built with "make allrefs"; for SLES, with
    "make slesref"; and for Chameleon, with "make parref".  All of these are
    run from $TOOLSDIR/docs .  A simple "make" in this directory will build all
    of the documents (once buildref has been run).

    You must run bin/buildman from $TOOLSDIR (the root of the PETSc directory).

    Building examples:
    $TOOLSDIR/bin/buildex [-g, -Opg, or -O] [COMM=p4 or COMM=pvm] will
    build and test run all the examples.  (Don't be surprised if not all
    of the examples run.)  Before running the examples, make sure that
    the p4 and pvm libraries for the architecture are built.

    You must run this from $TOOLSDIR (the root of the PETSc directory).
    
    Possible problems:

$   "Can not determine architecture type"
$   The install script could not find the architecture type for your machine
    by using /bin/arch or /bin/uname.  This should not happen for any
    of the supported architectures.

$   "<name> is not a supported architecture"
$   PETSc uses a series of makefile includes to customize the makes for 
    each architecture; install could not find the ones that it needs.
    Look at bmake/sun4* for some examples; the name used should be
    the same as that returned by /bin/arch or /bin/uname.

$   install did not build the intelnx (hypercube and delta) versions.
$   PETSc looks for the presence of the cross-compiler before attempting
    this.  Currently, it looks in $PGI/i860/bin.<local-architecture> .
    You may need to change this test for different locations of the
    cross-compiler.  In addition, the environment variable PGI needs
    to be set appropriately (this means that it needs to be set
    to the correct directory, and should be set in your .cshrc file
    to ensure that the correct value is propagated to the shell that
    executes the compiler).

$   install seems hung.
$   Install uses a mechanism for ensuring that only one install program is
    processing a directory at any time.  This allows multiple install 
    scripts, each running on a different architecture, to run in parallel.
    This is managed by adding files with names INSTALL* to the directory
    being processed.  If an install aborts for any reason, these files
    may be left in the directories.  To remove them all, use the command
$   find . -name 'INSTALL*' -exec /bin/rm \{\} \;
$   The file contains the name of the architecture that generated it.

    Note that by default, this mechanism is not used.  However, if TOOLSLOCK
    has the value "YES", then it will attempt to lock access to the
    directories. 
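The lock-file idea described above can be sketched in a few lines of shell.
This is an illustration of the scheme, not the actual install code; the
function names and the exact test-then-create logic are assumptions (and,
as in any test-then-create scheme, there is a small race window between
checking for and creating the lock file):

```shell
# Sketch of the INSTALL* lock-file scheme described above; function
# names and details are illustrative assumptions, not the real script.
lock_dir() {            # usage: lock_dir <directory> <architecture>
    dir=$1 arch=$2
    for f in "$dir"/INSTALL.*; do
        [ -e "$f" ] && return 1    # directory already being processed
    done
    echo "$arch" > "$dir/INSTALL.$arch"    # file records who locked it
}
unlock_dir() {          # usage: unlock_dir <directory> <architecture>
    rm -f "$1/INSTALL.$2"
}

mkdir -p demo
lock_dir demo sun4   && echo "locked by $(cat demo/INSTALL.sun4)"
lock_dir demo rs6000 || echo "demo is busy"   # second locker is refused
unlock_dir demo sun4
```

If an install aborts without reaching the unlock step, the stale INSTALL*
file makes every later lock attempt fail, which is exactly why an install
can appear hung until the files are removed with the find command above.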

$   Make always makes every component in the library.
$   Unfortunately, early versions of "make" did not support libraries very
    well, and different vendors added different extensions.  We have tried to
    accommodate these through the make patterns in the ./bmake directory,
    but on some systems we were unable to convince make (even gnumake!)
    to operate as we would have liked.  The Alliant fx2800 is one of these
    systems.

$   Can not build routines in xtools.
$   If the include files or libraries are not where they are expected, 
    these routines (naturally) can not be built.  To get these files
    to compile, add 
$        X11INCLUDEDIR = -Idirectory_name
$   to bmake/ARCH.  For example, if your X11 include files are in 
    /usr/local/include, and you are building these for a sun4, then add
$        X11INCLUDEDIR = -I/usr/local/include
$   to $TOOLSDIR/bmake/sun4 .  You will also have to modify bin/buildlib
    to change the test for the X11 include libraries.

$   Need to build the libraries without modifying existing ones.
$   You can use the -finaldir <directoryname> argument to bin/install to
    give the name of the directory where the code will eventually be
    installed.  For example, if the installed directory is /home/me/tools.core,
    then to install a new version with the minimum impact on current users,
    do:
$       unpack the new tools.core into, e.g., tools.new
$       cd tools.new
$       bin/install -finaldir /home/me/tools.core >& install.log 
$       <when done>
$       cd ..
$       mv tools.core tools.core.old
$       mv tools.new  tools.core

D*/

/*D
    Linking - Linking with the PETSc libraries

    Organization of libraries:
    The object libraries for the PETSc package are organized as

$   libs/libs[g,O,Opg]/<architecture>/*.a

    For example, the libraries for production (optimized) runs for the sun4
    are in

$   libs/libsO/sun4/*.a

    The libraries that you need to link with are

$   tools.a system.a

    If you are using p4 or pvm for communications in a parallel environment,
    you will need

$   tools<comm>.a tools.a system.a

    where <comm> is p4 or pvm.  In addition, you'll need to link in the
    appropriate communications libraries (such as libp4.a or libpvm.a);
    see ToolsMake for how to do this.

    There can be a conflict when using tools<comm>.a and tools.a .  All 
    of the routines in tools<comm>.a have counterparts (for a single
    processor) in tools.a .  Parallel programs that seem to hang may
    be doing so as a result of linking with tools.a instead of tools<comm>.a .
    If, for some reason, tools<comm>.a did not get built correctly, the
    same problem can occur.

    When using Fortran, additional libraries are required.  These contain
    routines that translate from Fortran to the PETSc libraries, and must
    be placed before them on the link line.  These libraries are

$   tools.core/fort/<architecture>/fort<comm>.a 
$   tools.core/fort/<architecture>/fort.a

    As above, the "comm" version is required only when using the parallel 
    routines.  For example, the complete set of libraries to link with when
    using a Fortran program to call routines from SLES (the linear solvers) 
    on a sun4 is

$   tools.core/fort/sun4/fort.a tools.core/libs/libsg/sun4/tools.a \
$   tools.core/libs/libsg/sun4/system.a

    See also:
    ToolsMake
D*/

/*D
    ToolsMake - Using make with tools

    Introduction:
    PETSc provides a variety of compile-time and link-time options.
    In order to simplify the use of PETSc, a standardized makefile format
    has been adopted.  This allows easy switching between different
    architectures and environments.

    Description:
    When using make with the makefiles in PETSc, up to three arguments
    may be passed to make to determine the compile-time version to use.
    These are
.   ARCH=<architecture> - (Mandatory) <architecture> should be one of the
                          supported architectures, such as sun4, intelnx,
	    		  IRIX, etc.
.   BOPT=<optimization> - (optional) <optimization> should be the level of 
			  optimization.  Valid values are
$                         g (debugging)
$                         O (production)
$                         Opg (profiling)
.   COMM=<communication> - (optional) <communication> should be the name of
			 an alternate communication library, such as
$                        p4 - p4 communications
$                        pvm - (Parallel Virtual Machine)

    To use these options in a makefile, you must include the lines

$   LDIR      = $(ITOOLSDIR)/libs/libs$(BOPT)$(PROFILE)/$(ARCH)
$   LIBS      = $(LDIR)/tools$(COMM).a $(LDIR)/tools.a \
$               $(LDIR)/tools$(COMM).a $(LDIR)/system.a
$
$   LIBNAME = dummy
$
$   include $(ITOOLSDIR)/bmake/$(ARCH).$(COMM)
$   include $(ITOOLSDIR)/bmake/$(ARCH).$(BOPT)$(PROFILE)
$   include $(ITOOLSDIR)/bmake/$(ARCH)

    in your makefile.  ITOOLSDIR should be set to the top level of the tools
    directory.  In addition, a line to link a program should look like

$ example1: example1.o
$	$(CLINKER) -o example1 $(CFLAGS) $(BASEOPT) example1.o \
$                    $(LIBS) $(CLIB) $(SLIB) -lm

    Here, CLIB (defined by one of the included files) gives the required
    communication libraries and SLIB gives any required system libraries.
    Non-parallel programs may omit CLIB.  CLINKER is the linker to use for
    C main programs; use FLINKER for Fortran main programs.  (In most cases
    these are the same as CC and FC, respectively; however, they may include
    special options or, on the CM-5, be different programs entirely.)

    By default, rules are provided for compiling C and Fortran programs.
    These use the values of $(BASEOPT) and $(CFLAGS) (for C) and $(BASEOPTF)
    and $(FFLAGS) (for Fortran).  For example, this is the rule for C 
    programs on a sun4

$ .c.o: 
$         $(CC) -pipe -c $(CFLAGS) $(BASEOPT) $*.c

    The usual definition of CFLAGS is

$  CFLAGS   = -I$(ITOOLSDIR) $(OPT)

    where OPT is used as a hook for user-defined options (that is, it is 
    usually null).  For example, to use OPT to pass an option to the 
    compilation step, use

$   make BOPT=g ARCH=sun4 OPT=-DMydefine

    Parallel code needs to use

$  CFLAGS   = -I$(ITOOLSDIR) $(COPT) $(OPT)

   The COPT is essential; this ensures that the appropriate version is built.

   If you redefine CFLAGS, make sure it includes the -I$(ITOOLSDIR) $(COPT) 
   values.

    The line "LIBNAME = dummy" is not always needed.  If you are building
    your own library, say mylib.a, set "LIBNAME = mylib" and use 
    
$   SOURCE = mysource.c mysource2.c ...
$   SOURCEC = $(SOURCE)
$   OBJS   = mysource.o mysource2.o ...

    Fortran source files should be specified with "SOURCEF = mysrc.f ..." .
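Putting the fragments above together, a complete minimal user makefile might
look like the following.  This is assembled from the pieces shown earlier in
this section; the path assigned to ITOOLSDIR is an invented example.

```make
# Minimal user makefile assembled from the fragments above.
# The ITOOLSDIR path is an invented example -- set it to your tree.
ITOOLSDIR = /home/me/tools.core
LIBNAME   = dummy
CFLAGS    = -I$(ITOOLSDIR) $(COPT) $(OPT)

LDIR      = $(ITOOLSDIR)/libs/libs$(BOPT)$(PROFILE)/$(ARCH)
LIBS      = $(LDIR)/tools$(COMM).a $(LDIR)/tools.a \
            $(LDIR)/tools$(COMM).a $(LDIR)/system.a

include $(ITOOLSDIR)/bmake/$(ARCH).$(COMM)
include $(ITOOLSDIR)/bmake/$(ARCH).$(BOPT)$(PROFILE)
include $(ITOOLSDIR)/bmake/$(ARCH)

example1: example1.o
	$(CLINKER) -o example1 $(CFLAGS) $(BASEOPT) example1.o \
	    $(LIBS) $(CLIB) $(SLIB) -lm
```

It would then be invoked as, for example, "make ARCH=sun4 BOPT=g example1".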
D*/

/*D 
     Porting - This attempts to provide an outline of how to port PETSc
  to another architecture.  Warning: since Unix is far from standardized,
  and parallel computers are even more so, porting PETSc to another
  architecture may be very easy or quite difficult.

  1): Choose a name for the architecture.  This should be short, likely
  to be unique (for instance, don't use RISC or IBM), and related to
  the actual machine.  Use only numbers and letters; no special
  characters.  All machines that are binary compatible should have the
  same arch; for instance, the Sparc1, Sparc1+, and Sparc2 all have the
  arch sun4.  The standard unix command uname -m may be a good place to
  get the name from.
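Following the guideline above, one way to turn uname -m output into a
candidate arch name containing only letters and digits is sketched below.
This is an illustration, not part of PETSc (the actual detection for PETSc
lives in bin/tarch, modified in step 3):

```shell
# Derive a candidate architecture name from uname -m, keeping only
# letters and digits, per the naming guideline above.  Illustrative
# sketch only; PETSc's own detection is done by bin/tarch.
arch=$(uname -m | tr -cd 'A-Za-z0-9')
echo "candidate arch name: $arch"
```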

  2): Create files in bmake called arch, arch., arch.O, arch.g, arch.Opg, 
  containing the compiler names; locations of system libraries, etc.
  You can start by modifying the sun4 versions.

  3): Modify bin/tarch to deal with the new arch.

  4): Add the arch to comm/hosts.h

  5): Add the new arch to comm/hosts.c (two places)

  6): Add the new arch to system/arch.c

  7): If the new architecture supports PVM, determine the PVM name 
      for the architecture and add the correspondence between your 
      arch and the PVM arch into comm/initpvm.c and bin/install

  8): If the new architecture supports p4, determine the p4 name
      for the architecture and add it to bin/install

  9): Try installing PETSc with
$         bin/install myarch -libs g >& myarch.log
      where "myarch" is the new architecture.

  Different Unix versions have different include files, different ways
  of dealing with floating point exceptions, etc.  All you can do
  now is try a build and fix any problems that come up during compilation.

  Message passing systems: PETSc can also be modified to run with
  different message passing systems.  This is a bigger project; most of
  the changes, however, are restricted to the comm directory, with some
  in the system directory.

  We would be very interested in hearing about any of your successful
  or unsuccessful ports.
D*/

/*D
   BLAS - Basic Linear Algebra Subroutines 

   PETSc contains a copy of the BLAS.  Some systems provide their own 
   BLAS; these can be accessed with the variables BLAS, BLAS1, BLAS2, and
   BLAS3 in the makefiles (see bmake/<ARCH> , for example, bmake/sun4 or
   bmake/intelnx).  If your system has these libraries, you may remove
   the "blas" directory.  Make sure you edit the bmake/<ARCH> file to
   reflect the location of the blas.
D*/

/* This page is not yet ready */
/* D
   RunningExamples - How to run the examples in PETSc

   PETSc contains a number of example programs.  These are in directories
   of the name "examples" in the various directories.

   To run these (if you are the owner of the PETSc directory), cd to
   the directory, type "make ARCH=<name> BOPT=O".  This will build
   the examples for that directory.

   All <not yet> of the examples accept the argument -help ; this will
   print a usage summary of the program.
D */

/*D
   Overview - Goals and contents of PETSc.

   PETSc (Portable, Extensible Tools for Scientific computing) is a package
   that provides a flexible and uniform framework for methods for solving
   some important problems in scientific computing.  This section describes 
   some of the concepts and organization behind these routines and the
   contents of the package.  Since subsets of this package are available,
   you may not have the entire package.


   Concepts:
   The PETSc package uses object-oriented design to provide a consistent
   and extensible interface to numerical methods and algorithms.  This allows,
   for example, an application to keep its own data structures and to use,
   with little or no code modification, a wide variety of methods.  

   Many of the routines in PETSc are arranged around a "context"; this is
   a way to hold all of the data that describes both the problem to be
   solved and the method used to solve it.  The usual approach is
$
$  ctx = CreateNewContext( method, problem )
$  SetOption( ctx, option, value )
$  SetOption( ctx, option2, value2 )
$  ...
$  SolveProblem( ctx )
$  FreeContext( ctx )
$
   (The routine names here are generic; each part of PETSc uses its own
   set of routines.)

   

   Contents:
   This list gives the contents of PETSc by file directory, organized into
   some broad categories.

$
$
$  Parallel Processing
.  blog	   - Event logging.  Used by comm and -event options
.  comm    - Chameleon message passing system
.  blkcm   - BlockComm package for sending blocks of data
$
$
$  Linear Systems
.  sparse  - General sparse code plus routines for some popular formats
.  solvers - SLES (Simplified Linear Equation Solvers) package
.  iter    - Generalized iterative accelerators.  May be used on parallel 
	     computers.
.  ilu     - Some specialized incomplete factorization routines
.  fd      - Finite differences in "sparse" format
$
$
$  Nonlinear Systems
.  nonlin  - Nonlinear version of SLES; currently under development
$
$
$  Support and Miscellaneous
.  inline  - Macros for inlining popular operations.  Should ONLY be used
	     by experts
.  set     - Simple index set operations; used in SLES and BlockComm
.  vectors - Generalized vectors.  Used in SLES, iter, and nonlin
.  system  - System support routines (space allocation, timers, file 
	     management)
.  c2fort  - Contains program to construct Fortran interfaces
.  lint    - Contains program to construct lint libraries, and the libraries
	     themselves
$  
$
$  Graphics
.  xtools  - X11 Window System routines (xlib level)
$
$
$  Documentation
.  ref     - LaTeX versions of man pages
.  man     - man pages
.  doc     - Contains programs to produce man pages
.  docs    - Documentation on PETSc
.  docs/tutorial - Tutorials on SLES (solvers.tex) and Chameleon (parallel.tex)
.  docs/talk - Overview of PETSc (in Slitex)
.  tex     - Some useful TeX macros 
$
$
$  Scripts and Makefiles
.  bmake   - Makefile includes for supported architectures
.  bin     - Scripts for installing and using PETSc.


   Examples:
   Example programs may be found in these directories

$  comm/examples       
$  comm/examples/angst - Contains programs for performance estimates of 
       			 parallel computers
$  sparse/examples     
$  solvers/examples    - Contains examples for solving linear systems, 
                         including some in Fortran.	              
$  iter
$  xtools/examples
$  xtools/papps/examples
$  blkcm/examples
$  nonlin/examples
$  nonlin/examples/mintest

   
D*/
