BEGIN:VCALENDAR
PRODID:-//Microsoft Corporation//Outlook MIMEDIR//EN
VERSION:1.0
BEGIN:VEVENT
DTSTART:20121115T155000Z
DTEND:20121115T161000Z
LOCATION:155-F2
DESCRIPTION;ENCODING=QUOTED-PRINTABLE:ABSTRACT: This year at SuperComputing 2012, Caltech's HEP team intends to build a 100G testbed between four LHC data centers, with the goal of demonstrating highly efficient transfers of LHC data between these sites. Each end site will host a set of storage servers with 40GE NICs and PCIe Gen3 motherboards.=0A=0AAfter the successful SC11 demonstration, we continued our work on data transfer systems and have seen very good, consistent results, in particular in terms of single-node performance, transfer stability, and system robustness. We therefore intend to extend last year's demonstration, scaling it up to multiple 100G links interconnecting several sites. The core link capacities will be 100Gbps, with 40Gbps used where necessary. On the server side, the emphasis during SC12 will be on using 40GE network interfaces throughout.=0A=0AAccording to the researchers, these efforts will help establish new ways to transport the increasingly large quantities of data that traverse continents and oceans via global networks of optical fibers. These new methods are needed for the next generation of network technology, which allows true 40 and 100 Gbps transfer rates.
SUMMARY:Efficient LHC Data Distribution Across 100Gbps Networks
PRIORITY:3
END:VEVENT
END:VCALENDAR