/nsfnet/linkletter/linkletter.9101-02

Vol. 3 No. 6                                        January/February 1991

L I N K L E T T E R
The Merit/NSFNET Backbone Project

NSF ANNOUNCES ADDITIONAL FUNDS
REMAINING EIGHT NODES WILL MOVE TO T3

The National Science Foundation (NSF) has announced a $6 million addition to Merit Network, Inc.'s NSFNET cooperative agreement to upgrade eight additional NSFNET T1 backbone end nodes to T3 service. Merit, along with its partners, Advanced Network and Services (ANS); IBM Corporation; MCI Communications Corporation; and the State of Michigan, will begin work immediately on the installations. The upgrades will bring the NSFNET T3 backbone nodes to a total of sixteen.

"This comprehensive expansion of the NSFNET to T3 capacity represents an unprecedented advance in the technological capacity of national computer networking and further demonstrates our commitment to maintaining the NSFNET as the world's leading computer network for the support of research and education," said Dr. Stephen S. Wolff, Division Director, Division of Networking and Communications Research and Infrastructure, at the NSF.

Total funding now at $28 million

A $7.9 million addition to the agreement to fund the first eight end nodes on the T3 backbone was announced by NSF last May. Details on the current state of the T3 upgrade may be found on page 4 of this issue.

The new award will provide expansion to T3 service for all of the current NSFNET T1 backbone sites not already part of the T3 backbone, and brings the Foundation's funding for the NSFNET project to $28 million. Addition of the eight new sites in Atlanta, GA; Boulder, CO; College Park, MD; Houston, TX; Lincoln, NE; Princeton, NJ; Salt Lake City, UT; and Seattle, WA, will make the NSFNET, which now links nearly 2,300 university, industry and government research networks, the nation's largest and fastest research and education computer network.
"NSFNET is significantly expanding the networking capability of our nation's researchers with this T3 expansion by involving more people in more places, and though it is difficult to predict what the most exciting use of the new bandwidth will be, the effect of connecting more users to this level of computing power is inherently synergistic," said Eric M. Aupperle, President of Merit Network, Inc.

"We at Merit are very pleased with the outstanding advances of the NSFNET backbone," said Dr. Douglas E. Van Houweling, member of the Merit Network, Inc. Board of Directors and Vice-Provost for Information Technology at the University of Michigan. "T3 services will enrich the already outstanding high-speed technology of the NSFNET and the new sites will extend access to this critical data superhighway."

Upgrading the entire NSFNET to T3 bandwidth will make possible many new applications that were not previously available to researchers. "This extension of T3 bandwidth capacity is extremely important. Each quantum jump in the NSFNET's capacity has qualitatively changed the methods of connecting humans to computers and computers to computers," said Larry Smarr, Director, National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign. "We can expect to see interaction at remote sites increase in three major areas as a result of T3 services throughout the NSFNET: interactive computing, visualization, and collaboration; all of which are crucial to fostering important research and progress in many disciplines and technologies," added Smarr.

New technology

The model developed for high-speed backbone transmission involves a new generation of Nodal Switching Subsystem technology developed by IBM. Advanced circuit technology for the T3 upgrade is being provided by MCI.
The architecture for the T3 network utilizes a collection of Core Nodal Switching Subsystems (C-NSS) within the MCI infrastructure, forming a cloud of co-located packet switching capability. Exterior Nodal Switching Subsystems (E-NSS) are located at client sites and connect into a C-NSS cloud.

An early glimpse of this new technology was provided by the NSFNET partnership last March at the National Net '90 conference, when it transmitted information over a T3 clear channel link between Washington, D.C. and Ann Arbor, MI, in the first demonstration of high-speed networking on a public access network.

The upgrade of the remaining sites to T3 capacity is scheduled to be completed over the next few months. At that time, it is anticipated that the existing T1 backbone will no longer be maintained at operational status.

- Ken Horning, Merit/NSFNET

MERIT TO PRESENT NETWORK SEMINAR IN MAY

Following the success of last November's seminar, Merit/NSFNET Information Services continues its commitment to providing current information on national networking by sponsoring a two-day seminar in Ann Arbor, Michigan, on May 20 and 21, 1991. "Making Your NSFNET Connection Count" will focus on issues of interest to campus computing leaders, information systems and networking administrators, educational liaisons, librarians, and educators who want to learn more about national networking.

K-12 applications

Some of the topics to be featured include network applications at the K-12 level, ways in which university libraries are making use of the Internet, the development of the Institutional File Server at the University of Michigan, a presentation of exciting image data available from NASA, and an overview of the NSFNET and the Internet.

NOC tools and procedures

Participants will also have the option of touring Merit's state-of-the-art Network Operations Center after hearing presentations on NOC tools and NOC procedures.
The seminar will be held at the Tenneco Automotive Training and Development Center in Ann Arbor. Microcomputers connected to the Internet will be available onsite so that attendees may access network resources discussed in the presentations.

Save with early registration

The registration fee is $395. An early-bird fee of $345 will be charged for registrations received before April 1, 1991. The registration fee includes the two-day seminar, a reception on Sunday evening, lunch on Monday and Tuesday, all seminar material, and an optional tour of the Network Operations Center. For further information send an electronic message to seminar@merit.edu or telephone 1-800-66-MERIT.

-Pat Smith, Merit/NSFNET

NETWORKING "DOWN UNDER" WITH AARNet

Accompanying the rapid growth of the Internet within the continental U.S. has been a comparable increase in networking on the international scene. NSFNET's international connections have grown to nearly 700 and include an increasing number of links to the Pacific Rim countries through the Pacific Communications Network (PACCOM). PACCOM is a cooperative program for research communications infrastructure which includes Australia, Japan, Korea, New Zealand and Hawaii. In this issue of the Link Letter we are pleased to spotlight Australia's national network, the Australian Academic and Research Network (AARNet).

Smooth implementation

The major portion of the AARNet installation project began during the last week of April 1990 and was completed in four weeks. "It is a tribute to the advanced state of the Internet network technology that this very tight national implementation was achieved without major problem, and all sites were fully functional within the Australian IP network immediately after the physical connection was established," commented Geoff Huston, Network Manager for AARNet.
"From initial design to completion of the network, the activity has spanned just 14 months, and has been achieved through the work of a single full time staff member for the majority of this period."

Topology

AARNet is based on a star topology with the center of the star located in Melbourne. Links radiate to eight regional network hubs within the states. In turn, each of the regional hubs is the center of another star. In all, some 36 major sites are linked in this fashion. These sites include the 32 higher education institutions within Australia, together with multiple connections into the Commonwealth Scientific and Industrial Research Organisation.

Plans to offer 2 Mbps service

The initial backbone is operating at 48 Kbps, although the Eastern States will be interconnected at a speed of 2 Mbps by the end of 1990 in order to meet anticipated demands for communications services.

"Currently we do our research on Argonne [Argonne National Labs] machines (for example, the Connection Machine). Access to this size and power of machine within Australia is non-existent," remarks John Barlow at the Parallel Computing Research Facility, Australian National University in Canberra. Mr. Barlow explains further that the AARNet/Internet link assists with using the Argonne machines, and has opened up a new area previously impossible, that of X Windows graphics: "It is a little slow, but getting graphical results as they are computed . . . helps immensely to debug, tune and analyse my programs."

International link

The major international link is accomplished by a connection to the Internet in the United States via a 128 Kbps satellite service between Melbourne and NASA-Ames in Palo Alto, CA. With the sharp increase in traffic over this link there are plans to further increase bandwidth during 1991. In particular, such an increase will offer significant support for collaborative U.S./Australian research programs.
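The star-of-stars arrangement described above keeps paths short: any site reaches any other through at most its own regional hub, Melbourne, and the destination's hub. A minimal sketch in Python (all hub and site names are invented for illustration, not AARNet's actual node names) that models such a topology and checks the four-link bound:

```python
# Hypothetical sketch of a star-of-stars topology like AARNet's.
# Melbourne is the national hub; each regional hub is the centre
# of its own star of member sites.
links = set()

def link(a, b):
    links.add(frozenset((a, b)))

hubs = ["NSW", "QLD", "SA", "WA", "TAS", "ACT", "VIC", "NT"]
for hub in hubs:
    link("Melbourne", hub)        # spokes of the national star
    for i in range(1, 4):         # a few invented member sites per hub
        link(hub, f"{hub}-site{i}")

def hops(src, dst):
    """Breadth-first search: number of links on the shortest path."""
    frontier, seen, n = {src}, {src}, 0
    while dst not in frontier:
        frontier = {b for a in frontier for pair in links if a in pair
                    for b in pair if b not in seen}
        seen |= frontier
        n += 1
    return n

# Sites in different states are at most four links apart:
# site -> hub -> Melbourne -> hub -> site.
assert hops("NSW-site1", "WA-site2") == 4
assert hops("NSW-site1", "NSW-site2") == 2
```

The same shape explains why the planned 2 Mbps interconnection of the Eastern States mattered: every inter-state path funnels through the central links, so their capacity bounds the whole network.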
"I use AARNet extensively as part of a collaborative effort with MIT in NASA's crustal dynamics program," notes Dr. Gary Willgoose at the University of Newcastle. "I ship about 100 megs [of data] a week across the wires."

Several network protocols supported

The network design allows a number of different network protocols to coexist within a single infrastructure of physical communications links. The initial phase of the network supports only the Internet Protocol suite. Plans call for the addition of routing support for the DECnet Phase IV protocol in the near future, and later for OSI CLNS and DECnet Phase V, but there is no specific timetable for this work at present. Attention will also be given to the appropriate mechanisms to support access into the Packet Switched Networks using the X.25 interface protocol.

The implementation within Australia is based on the use of cisco Gateway Servers acting as the interface between each site LAN and AARNet, and the use of larger configurations of the same equipment as the major switching points within AARNet.

"We are convinced that this network will become an essential tool in all areas of research and its use will increase dramatically over the next years. It will bring Australian research closer to other research centers all over the world," said Professor Heiko Schroeder and Dr. Bryan Beresford-Smith in a joint statement from the University of Newcastle.

- Pat Smith, Merit/NSFNET

LATEST DEVELOPMENTS ON THE NSFNET SCENE

The NSFNET partnership of Merit, ANS, IBM, and MCI has been building an eight-node T3 NSFNET backbone over the last several months. In December 1990, half of the T3 nodes were in place and the introduction of production traffic onto the installed backbone was begun. T3 hardware and circuit installations are now complete at Ann Arbor, MI; Cambridge, MA; Palo Alto, CA; Pittsburgh, PA; San Diego, CA; and Urbana-Champaign, IL.
Installation of the T3 circuit into Ithaca, NY is anticipated by press time, and the proposed completion date for the end node at Argonne is early March. Installations at these two sites have been delayed due to a combination of local site issues and circuit build complications. Planning for the recently announced upgrades to all eight remaining T1 backbone sites is also in progress.

Status of the T3 network

The process of allowing more production traffic on the T3 network is proceeding. Full production routing on the deployed backbone nodes was initially restrained because of complications at the external interface of the Exterior Nodal Switching Subsystems (E-NSS). It is anticipated that full production routing will be phased in across the completed nodes of the T3 backbone within the next few weeks.

As often occurs with new technology deployment, several challenges were presented by the Ethernet interface being used in the E-NSSs, which handle routing tables and IP forwarding "on card." Through dedicated effort on the part of all team members, these problems have been resolved. The situation continues to be closely monitored in an effort to assure reliability and refine the overall effectiveness of the new system.

T3 FDDI interface

The T3 FDDI interface, which has been under development, is currently undergoing interoperability testing. The equipment of two vendors has been successfully tested to interoperate with the interface. Testing continues, and deployment of FDDI capability is now expected by the end of February.

T1 backbone improvements

The T1 NSFNET backbone has continued to experience growth in traffic, and the partnership continues to monitor the effects of this load on individual backbone nodes. A change has been made to the external interface of the College Park, MD backbone node to improve performance and split the traffic load.
In addition, the external interfaces to Ithaca, NY and Palo Alto, CA have been upgraded to improve performance of these backbone nodes.

-Ken Horning, Merit/NSFNET

NSFNET PARTNERS SUPPORT TWO RESEARCH NETWORKS

In addition to managing the NSFNET's production T1 and T3 backbones, Merit Network, Inc. and its partners, Advanced Network and Services (ANS), IBM Corporation, and MCI Communications Corporation, created and support two Research Networks. They are operated and maintained by the partnership without support from the National Science Foundation.

Like the production NSFNET, the two Research Networks operate multiple backbone nodes. The sites included in these networks are located at Merit's Network Operations Center (NOC) in Ann Arbor, MI; at IBM facilities in Milford, NY; Yorktown, NY; and Gaithersburg, MD; and at an MCI facility in Richardson, TX. The Research Networks facilitate development work by helping to isolate problems and verify fixes, as well as being used for expediting the distribution of software needed to operate the NSFNET backbones.

MCI's research net node moves to Texas

MCI consolidated a large part of its engineering organization last year by moving from the Washington, DC area to Richardson, TX. MCI's Advanced Technology Lab was one of the many groups to move from Reston, VA. This facility plays a key role in MCI's contributions to both the T1 and T3 research networks for the NSFNET Project.

The Richardson lab supports one of the five Nodal Switching Subsystems (NSSs) that comprise the project's T1 Research Network and one of the four Exterior Nodal Switching Subsystems (E-NSSs) on the T3 Research Network. These networks are used as test beds to fully test, evaluate, and integrate all hardware and software enhancements for the NSFNET. Further, the lab evaluates all new hardware that will interface with transmission facilities and tests it for compatibility with the other transmission equipment which comprises MCI's communications network.
Conformity to industry standards is also verified.

On the road

Along with the movement of more than 1,700 pieces of test equipment and computers, as well as some 400 racks of transmission equipment and systems (nine trailer truckloads of hardware), the move necessitated installation of more than 90 miles of cable, and the relocation of 35 engineers and their families. Despite the logistics involved, the lab's NSS was moved with minimal impact on service to Research Network operations.

In addition, the relocation required the expansion of transmission capacity into the new Texas facility, the rerouting of existing circuits, and the installation of new telco tail circuits. Further, a duplicate NSS was created and installed in the new facility while the original node in Virginia remained operational.

A new home

Once all the pieces were in place, a cutover was performed that involved personnel at each node of the T1 Research Network. Through an outstanding effort of technical personnel from each of the NSFNET partners, the cutover and verification of the new node were accomplished in less than two hours. All of these activities were completed with minimal impact to the users of the Research Network, and they occurred coincidentally with the installation of the Richardson node on the T3 Research Network. Development and testing required for the installation of the T3 Research Network was not impacted by the move.

-Ken Horning, Merit/NSFNET and Ken Zoscak, MCI

MERIT ADOPTS NEW NAME AND LOGO

The Merit Board of Directors has approved a name change and a new logo for the Merit statewide network. The Merit Computer Network is now called MichNet, and the corporation itself has undergone a name change to Merit Network, Inc., rather than Michigan Educational Research Information Triad, Inc. The name MichNet represents both the state Merit serves, Michigan, and the fact that it is reaching out to encompass the state with a modern computer communications network.
Change in direction

The new name is an outgrowth of changes in direction, outreach, technology and services that have occurred during the past year and have expanded Merit's focus beyond the state-supported universities to other institutions in the state of Michigan.

Goals defined

Goals include finding new ways to fulfill Merit's research, education, and economic development mission, in particular the computer networking needs of the K-12 educational community, business and industry in the state, community colleges, and other educational and research institutions.

-MichNet Technical Support Group

UNIDATA AND NSFNET KEEP AN EYE ON THE WEATHER

Editor's note: Information contained in this article was obtained from several Unidata publications.

Unidata is a nationwide program which aids university departments in the acquisition and use of atmospheric data in near real time. The Unidata Program Center (UPC) is managed by the University Corporation for Atmospheric Research (UCAR) in Boulder, Colorado, and is funded by the National Science Foundation's Atmospheric Science (ATM) Division. Personnel and other resources required for university participation are provided by the institutions themselves. The NSF and university resources are complemented by contributions from private industry.

NSFNET plays a significant role

Overall, NSFNET plays a significant role in expediting development, testing, and distribution of the products developed by UPC. The NSFNET is also a key resource in allowing the atmospheric science community to share their experiences and expertise in the use of Unidata systems. When concurrent development efforts take place at different locations, the work is expedited through use of the national network. One example is a project aimed at enhancing the Purdue Weather Processor (WXP) software system, porting it to IBM workstations, and integrating it with other Unidata products.
From West Lafayette, IN, Purdue staff were able to complete the implementation on computers in Boulder and test all components, including graphics output, using X Windows displays via the NSFNET. In turn, UPC personnel experiment with the software and incorporate it into the central configuration management and distribution system at the UPC.

During one especially productive period, staff in West Lafayette, Boulder, and the NSF offices in Washington, D.C. all developed, tested, demonstrated, and revised software simultaneously on computers via NSFNET. Different portions of this "U.S. Campus" demonstration system were running on computers at each location. Some of the data were being captured in West Lafayette, some in Boulder, and analysis and display were being done at all three sites.

UPC has been involved in similar remote development and implementation efforts at the University of Wisconsin-Madison Space Science and Engineering Center (SSEC), to develop software connecting the Unidata Local Data Management (LDM) system with the Wisconsin Man Machine Interactive Data Analysis System (McIDAS), and at the National Center for Atmospheric Research (NCAR), to implement a prototype "Campus Weather Display" system.

Remote testing and troubleshooting

The national network has also helped streamline troubleshooting and testing of products at remote sites. Versions of the Scientific Data Management system (SDM) were tested at the University of Illinois and on demonstration computer systems at the National Science Foundation in Washington, D.C. In one case, remote installation and testing of software at the Massachusetts Institute of Technology (MIT) uncovered some hardware problems which were then corrected. UPC staff are currently helping with a similar remote debugging effort at the University of Alaska, where the satellite antenna points a mere three degrees above the horizon.
More efficient distribution

Most of the Scientific Data Management sites now obtain new releases of the SDM software over the networks. This allows electronic announcement and release of the systems and enables more efficient user access by eliminating the need to ship tapes or diskettes in the mail, as well as alleviating media compatibility problems. Similarly, the Unidata Program Center (UPC) and Unidata sites can easily access software from other institutions. For example, the National Center for Supercomputing Applications' image analysis tools and MIT's X Windows package are obtained via networks from the provider institutions.

Export weather displays

By using the X Window System with the latest release of WXP (SDM version 2.0), one can export the weather displays over the network, i.e., have the display appear on another computer. The display computer does not need to be running Unidata SDM; it only needs to function as an X Windows display server. Automated programs that display up-to-date WXP weather maps on DOS, Macintosh, UNIX, and VMS computers have been set up within the UPC offices and at a number of other sites that are without complete Unidata systems. This provides an excellent example of the power of networking, and is also convenient for obtaining the latest weather displays.

With the distributed LDM system it is possible to run parts of the SDM on one computer and have it call subroutines on a remote computer to complete its work. This capability may be useful at sites where the Zephyr antenna and receiver must be located away from the main computing systems.

Unidata services and software

While reception of "real time" weather data requires a satellite dish at the university location, many other Unidata services are now available over the NSFNET, including free consultation and ongoing support for the Unidata community of users, as well as training workshops.
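The display-export idea described above relies on standard X behavior: an X client reads the DISPLAY environment variable to decide which display server to draw on, so the machine showing a weather map needs only a running X server, not the weather software itself. A minimal sketch in Python (the host name "wxstation" and the program name "wxp_map" are hypothetical, for illustration only):

```python
import os

def remote_display_env(x_server: str) -> dict:
    """Return a copy of the environment that points X clients at x_server.

    A program launched with this environment executes locally but renders
    on the named display server, so the viewing machine needs nothing
    beyond an X server.
    """
    return dict(os.environ, DISPLAY=x_server)

# "wxstation:0" is an invented name for screen 0 of a remote display host.
env = remote_display_env("wxstation:0")
assert env["DISPLAY"] == "wxstation:0"

# A map-drawing program would then be started with something like:
#   subprocess.run(["wxp_map"], env=env)   # "wxp_map" is hypothetical
```

This is the same mechanism Mr. Barlow describes using between Australia and Argonne earlier in this issue: the computation runs remotely while the graphics appear on the local display.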
The NSFNET and gateways to SPAN, BITNET and OMNET are important factors in the UPC effort to encourage interaction among sites through a set of centrally-supported special interest group mailing lists that reflect mail to users on different networks.

Unidata systems are available to all universities. University scientists and computer experts are instrumental in shaping the Unidata program through their involvement in various committees and working groups. Over 90 universities are currently using Unidata products or services, and many contribute time and resources to the program as well. Unidata software, training, and support are provided free of charge. Group discount rates for data services have been arranged. Universities assume the costs of purchasing equipment, subscribing to data services, and all travel and accommodations associated with training workshops.

For further information contact:

UCAR/Unidata Program Center
P.O. Box 3000
Boulder, CO 80307-3000
(303) 497-8644
support@unidata.ucar.edu

-Ben Domenici, Unidata, and Pat Smith, Merit/NSFNET

MERIT/NSFNET WILL BRING T3 CONNECTIVITY TO NET '91

National NET '91, "Towards a National Information Infrastructure," will take place on March 20, 21, and 22 at the Loews L'Enfant Plaza Hotel in Washington, D.C. During this event, the NSFNET project team will provide T3 connectivity for a number of exciting demonstrations. The incoming T3 circuit will interface with both an Ethernet and an FDDI network to which several workstations will be attached.
Scheduled demonstrations include: "UC Berkeley Image Database Project," from UC-Berkeley; "Distributed Visualization of 3-D Tomographic Images using High Speed Networking Protocols," Lawrence Berkeley Laboratory; "Interaction with a Remote Digital Library," National Center for Supercomputing Applications; "Astronomical Image Processing System," University of Virginia; and "Run your own application: Shared X Windows," Communication for North Carolina Education, Research and Technology (CoNCert).

Further information may be obtained from EDUCOM at 202-872-4200.

- Pat Smith, Merit/NSFNET

CONTINUING THE ADVENTURE IN SAN DIEGO

Due to a desire to focus more on advanced research and technology networking issues, Hans-Werner Braun has decided to leave his position as Principal Investigator of the NSFNET project at Merit and move to the San Diego Supercomputer Center (SDSC). At SDSC he will initially work on the CASA very-high-speed communications network research project, which is part of the Corporation for National Research Initiatives (CNRI) gigabit testbed initiative.

"The past several years of the Internet evolution have been extremely exciting, and it was not easy to imagine something that comes even close to the excitement level of developing, deploying and managing national infrastructure," commented Braun.

Participated in NSFNET design

Braun joined the University of Michigan's Merit Computer/UMnet group in April 1983. As one of the true "pioneers of networking" he remembers fondly the early days of IP and the first fuzzballs. Prior to and following the award of the NSFNET expansion project to Merit in 1987, he participated in the design, implementation, and operation of the NSFNET backbone project as well as that of the various mid-level networks attached to the backbone.
"Hans-Werner's leadership in the evolution of Merit's NSFNET backbone initiative was instrumental in its success and Merit's enhanced national reputation," stated Eric Aupperle, President, Merit Network, Inc.

In addition to his responsibilities as Principal Investigator of the NSFNET backbone project, Braun managed the Internet Engineering (IE) group at Merit. Aupperle has now temporarily assumed administrative responsibility for that group. "Fortunately, and again thanks to Hans-Werner's leadership, the IE staff are well trained and positioned to carry on their assignments during this transition," continued Aupperle.

Farewell to Michigan winters

Hans-Werner commented that the thing he will miss most about working at Merit is the people. "I will miss the very bright, intelligent, and hard-working people associated with the NSFNET project." He added jokingly, "I will not miss the fact that Merit is located in a place that has a winter."

Braun was instrumental in helping facilitate the evolution of the backbone from 56 Kbps to T1 and now to T3. He said that the greatest challenge at Merit was working to create the environment we have today. "You don't bring up a network. You build up an environment. This environment includes people, technology, and even politics," he remarked.

Braun will continue to stay in close touch with Merit and the NSFNET project to facilitate further network evolution. When asked if he had any final comments for the Link Letter audience, he remarked, "The adventure continues with the next generation of networking."

-Laura Kelleher, Merit/NSFNET