Network and inter-network infrastructure is a critical building block for high-performance, data-intensive research. The ability to move large quantities of data to and from data centers, laboratory research instrumentation, and both local and remote data repositories can make or break a research workflow. RCC works continually with the University of Chicago's IT Services office to ensure the highest levels of network connectivity to our resources.
RCC’s Midway supercomputing cluster is connected to the University of Chicago network backbone through a 10 Gbps network uplink. The University of Chicago network connects to Internet2 at 10 Gbps and has 1 Gbps connections to other commercial networks. All Midway file transfer nodes and login nodes are connected at 10 Gbps Ethernet to the University of Chicago campus network backbone.
The research networks at the University of Chicago are deployed at the two campus core distribution points, with connectivity via two 10-Gigabit Ethernet circuits to MREN and the CIC OmniPoP. The connectivity to MREN runs over an I-WIRE fiber ring that uses DWDM optical equipment to multiplex circuits onto individual wavelengths, allowing multiple circuits to co-exist on the same fiber strands. The CIC OmniPoP connectivity is a single 10-Gigabit Ethernet circuit provisioned across dark fiber. This circuit carries multiple VLANs that allow direct peering with other institutions and networks, such as Internet2, ESnet, Argonne National Laboratory, and Fermilab.
Both of these solutions give the University great flexibility and capacity to connect to other institutions and to share research data and resources. This strategy leverages the existing environment to enhance the capability and capacity of the University of Chicago network infrastructure. With an NSF grant (award number 1246019), the University of Chicago has built a Science DMZ: a network infrastructure that is distinct from the general-purpose campus network and purpose-built for data-intensive science. The Science DMZ includes support for virtual circuits, software-defined networking, and 100 Gigabit Ethernet. The RCC compute resources are now connected to the UChicago Science DMZ via a 40 Gbps Ethernet connection, and tests are underway to ensure that network traffic is properly segregated. RCC has observed sustained real-world file transfers to long-distance Internet2 locations exceeding 3 Gbps.
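As a rough illustration of what these rates mean in practice, the following back-of-the-envelope Python sketch (not part of the original text; the 1 TB dataset size is an arbitrary example) estimates ideal transfer times, ignoring protocol overhead and congestion:

```python
# Back-of-the-envelope transfer-time estimates (illustrative only).
# Link rates are in bits per second; dataset sizes are in bytes.

def transfer_time_seconds(size_bytes: float, rate_bps: float) -> float:
    """Ideal transfer time, ignoring protocol overhead and congestion."""
    return size_bytes * 8 / rate_bps

ONE_TB = 1e12  # 1 terabyte in bytes

# A 1 TB dataset at the 3 Gbps sustained rate observed over Internet2:
print(f"{transfer_time_seconds(ONE_TB, 3e9) / 60:.1f} minutes")   # ~44.4 minutes

# The same dataset at the full 40 Gbps Science DMZ link rate:
print(f"{transfer_time_seconds(ONE_TB, 40e9) / 60:.1f} minutes")  # ~3.3 minutes
```

Real transfers fall short of the nominal link rate, which is why the sustained 3 Gbps figure, rather than the 40 Gbps line rate, is the more useful planning number for long-distance workflows.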