
How to set up the dynamic feed system:

o)	Unpack the distribution

o)	Edit "where.h" to locate various files and directories

o)	make all

o)	Create a directory 'feed' in your News lib directory
	(usually /usr/lib/news) -- the FEED directory.

o)	Create a directory for the various binaries.  Users will
	never have to run these programs, so it's really up to
	you where to put them.

	cp feed arbit setfeed listrc rcmerge $BINDIR

	If you are a leaf site, you only need arbit and listrc.  If
	you are not receiving a dynamic feed, you only need feed,
	setfeed and rcmerge.

o)	Feeding sites -- decide on an owner for the setfeed program and
	how it is to be invoked (uux, mail alias, etc.)

o)	Read the man pages for specific details on the programs and
	files.  Look at the sample files for examples.

o)	In the FEED directory:

	Feeding sites:
		Create a 'sitepasswd' file, readable only by the owner
		of the setfeed program.  Use our sample as a starting
		point.   Put in site names and passwords.
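
		For instance (a sketch only -- the demo path stands in
		for your real FEED directory, and ownership of the file
		should match whatever owner you chose for setfeed):

```shell
# Sketch: create a protected sitepasswd in a stand-in FEED directory.
# In real life FEED would be /usr/lib/news/feed (or wherever you put it).
FEED=${FEED-/tmp/feed.demo.$$}
mkdir -p $FEED
: > $FEED/sitepasswd          # then fill in site names and passwords
chmod 600 $FEED/sitepasswd    # readable only by setfeed's owner
```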

		Create a 'global_rules' file that excludes local groups
		and private distributions, and enforces whatever global
		feeding rules you need.  This is the input to rcmerge(8).

		Create directories for the sites you wish to feed with
		the dynamic feed mechanism.  Each directory's name is the
		site name.

		In each directory place:
			local_rules -- additions and deletions
				specific to this site, as per rcmerge(8).

			newsrc -- initial newsrc file, if desired.
					(This is not necessary; the first
					call to rcmerge will make one.)

			new_feed -- initial subscription request, if
					desired -- or wait for the site to
					send one in.

			options -- optional file with options to be added
					to the 'feed' command line.

			subscription -- optional permanent version of
					'new_feed'.  (Run 'subscribers'
					from cron regularly.)
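
		The per-site setup above might be sketched like this;
		the site name 'unido' and the demo path are made up,
		and the contents of the rule file are per rcmerge(8):

```shell
# Sketch: create a site directory under a stand-in FEED directory.
FEED=${FEED-/tmp/feed.demo.$$}        # really /usr/lib/news/feed
site=unido                            # illustrative site name
mkdir -p $FEED/$site
: > $FEED/$site/local_rules           # per-site rules, see rcmerge(8)
# newsrc, new_feed, options and subscription are all optional;
# the first rcmerge run will create a newsrc if none exists.
```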

	Receiving sites:
		Create a 'feedpass' file, readable only by the user that
		will collect statistics (probably root).  Include the sites
		that feed you, and the unique password for each site,
		as shown in our example feedpass file.

		local_dels -- optional sed script to remove groups from
			(or otherwise edit) the subscription list produced
			by arbit.
		
		add_default -- default additions to make to the output
			before sending it off.
		add_sitename -- additions to make to any subscription
			requests for that site name.
		extrarc -- optional list of extra .newsrc files to scan
			for subscriptions.
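
	As a concrete (hypothetical) use of local_dels, assuming the
	list from arbit carries one group name per line, a couple of
	sed deletions will strip test groups and junk before the
	request goes out:

```shell
# Demonstrated on a three-line stand-in list; the patterns are the
# kind of thing a local_dels script would hold.
printf 'comp.mail.maps\nalt.test\njunk\n' > /tmp/sub.$$
kept=`sed -e '/\.test$/d' -e '/^junk$/d' /tmp/sub.$$`
echo "$kept"                          # prints only comp.mail.maps
rm -f /tmp/sub.$$
```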

o)	Fix up the feeding shell script (dofeeds) and the subscription
	building shell script (collect) to match how you have set things
	up on your site.  The dofeeds script takes optional site name
	arguments, or does all sites if no arguments are provided.  The
	collect script takes one argument, the feed site to receive the
	subscription request.

	In most cases, you will have to edit the variables at the front
	of each script.  In the case of collect, also check the final
	command that sends the request off, and pick the method you
	will use to reach your feed site.
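
	The dofeeds argument handling described above can be sketched
	like this (written as a function so it can be tried inline;
	the real script would run feed and rcmerge where the echo is,
	and FEEDDIR is an assumed variable name):

```shell
# Sketch of dofeeds: named sites, or every directory under FEEDDIR
# if no arguments are given.
dofeeds () {
	for site in ${*:-`ls $FEEDDIR`}
	do
		echo "feeding $site"            # really: feed | rcmerge etc.
		rm -f $FEEDDIR/$site/new_feed   # consume the request
	done
}
```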

o) 	Feeding sites should install the setfeed program and make it
	setuid.  It must be able to read the sitepasswd file and create
	the new_feed file in any site's directory.  The dofeeds shell
	script must be able to delete that new_feed file.

o)	Feeding sites should arrange to run dofeeds, or a similar script
	that calls feed and rcmerge, on a regular basis.  dofeeds needs
	no special permission, *except* it should be able to remove the
	new_feed file from each site's directory.  You may also wish to
	run the 'subscribers' program once a day, if you have sites that
	have a permanent subscription file rather than a fully
	dynamic subscription.
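
	For example, a feeding site's crontab might carry entries like
	these (times and the binary path are only illustrative):

```
# run all the dynamic feeds early each morning, and refresh the
# permanent-subscription sites once a day
0 4 * * *	/usr/lib/newsbin/dofeeds
30 4 * * *	/usr/lib/newsbin/subscribers
```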

o)	Receiving sites should arrange to call collect on a regular basis --
	at least once/day and possibly 3-4 times, depending on how dynamic
	you want your feed to be.

	You could even arrange to have setfeed call collect when it is
	done (although it has to become root somehow) to pass along new
	requests up the chain immediately.
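
	A receiving site that wants a fairly dynamic feed might, for
	instance, run collect four times a day (the feed-site name and
	path are illustrative):

```
0 1,7,13,19 * * *	/usr/lib/newsbin/collect feedsite
```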

------------------------------------------------------------

Notes:

The directory structure and file structure above are not fixed.  You
can arrange things as you like by editing the shell scripts and passing
different options.  All the defaults can be changed.

Feeding sites:
	Be sure to feed control (in case they forget).
	Be sure to exclude your local groups if you want to.
	Be sure to exclude special groups like "clari" groups if
		the site is not entitled to get them.

Receiving sites:
	(Some notes on things to put in add_default and add_(sitename))

	Be sure to include control in your request list.
	Only request 'junk' if you really want groups your feed doesn't get.
	It's a good idea to request all new groups in the major hierarchies,
		even if you don't want them down the road.
	Be sure to request rec.humor.funny  :-)
	While you can put in exclusion patterns, delete obviously wrong
		groups (like test) with a sed script before you send the
		request out.
	Be sure to request groups that nobody reads, but which are needed
		in your 'sys' file, like comp.mail.maps.
	Be sure to request all distributions you want.  "world" is no longer
		the default distribution, for example, but some people may
		use it anyway.
	Request the 'trial' hierarchy to support a quieter way of creating
		groups.  (Ok, this is not required, I'm just plugging it.)

	Tell your users about how dynamic feeding works.  They may get
	confused if they subscribe to a new group and there is nothing
	there.  Don't worry, usually within a day it will be full of
	back articles.  So make it clear in your intro docs or man pages
	that if you subscribe to a group and nothing's there, to leave
	it subscribed, and then go back and read it the next day.  If
	a group is only found many hops up the line, it may take a
	few days.

	If you have NNTP users, be sure to do something that records
	all groups requested recently via NNTP, and append that on the
	end of your request, or turn it into a fake newsrc type file.
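
	One simple way to fake such a newsrc, assuming you can produce
	a file with one recently requested group per line, is to append
	a colon to each name so it reads as a subscribed .newsrc entry
	(the file names here are made up):

```shell
# Turn a one-group-per-line NNTP request log into a fake .newsrc.
printf 'news.software.b\ncomp.sources.unix\n' > /tmp/nntp.groups.$$
sed 's/$/:/' /tmp/nntp.groups.$$ > /tmp/nntp.newsrc.$$
cat /tmp/nntp.newsrc.$$
rm -f /tmp/nntp.groups.$$
```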


In addition:
	Think about using 'arbit' to send arbitron reports to Brian
	Reid's collection project at decwrl.

	Read the READ.ME file for terms on the use of this software.
