Comparison of Network Emulation Methods
Abstract
Dedicated network impairment devices are an accepted method of emulating network latency and loss, but they are expensive and unavailable to most users. This project will compare the performance of a dedicated network impairment device with that of network emulation performed within the Linux kernel, emulating the same latency and loss parameters for TCP traffic.
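Kernel-based emulation of this kind is typically configured with the netem queuing discipline via the tc utility. The sketch below is a minimal illustration, assuming an interface named eth0 and root privileges; neither detail is taken from the project's actual configuration.

#!/usr/bin/env python3
# Minimal sketch: emulate latency with the Linux kernel's netem qdisc.
# The interface name "eth0" used below is an illustrative assumption.
import subprocess

def set_netem_delay(interface, delay_ms):
    # Replace the root qdisc with netem, delaying egress packets by
    # delay_ms; measured RTT rises by roughly the same amount.
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", interface,
         "root", "netem", "delay", "%gms" % delay_ms],
        check=True,
    )

def clear_netem(interface):
    # Remove the netem qdisc, restoring the interface's default qdisc.
    subprocess.run(
        ["tc", "qdisc", "del", "dev", interface, "root"],
        check=True,
    )

if __name__ == "__main__":
    set_netem_delay("eth0", 4.0)  # e.g., 4 ms of one-way delay

Packet loss can be emulated the same way with netem's loss parameter.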
Intellectual Merit
This testing will serve as validation of WAN impairment testing of TCP and LNET over a single 100 Gigabit Ethernet circuit being conducted in Germany through a partnership with Technische Universität Dresden.
Broader Impact
Results will be widely disseminated through a joint paper with Technische Universität Dresden and presented to Internet2 members in April.
Use of FutureGrid
For this project, the FutureGrid Spirent Network Impairment Device will need to be relocated by IU GRNOC staff to Bloomington, IN, and utilized there for a period of 2-4 weeks beginning March 7, 2011.
Scale Of Use
We will use only the network impairment device, connected to dedicated servers and storage in the IU Bloomington data center.
Publications
Results

The direct host-to-host link saw an average delay of 0.040 ms, while the path through the XGEM (0.004 ms) and Nexus (0.026 ms) averaged 0.080 ms.
Two five-minute unidirectional TCP Iperf tests were conducted, one across the direct path and one across the switched path. The tests were initiated independently at approximately the same start time, within +/- 3 seconds of one another, and results were gathered for each direct (D) and switched (S) test. Follow-up tests increased the number of parallel streams Iperf transmitted (command-line option -P): 1, 16, 32, 64, and 96. Delay was added via the Spirent in increments of the default (0.080 ms), 4.00 ms, 8.00 ms, 16.00 ms, 32.00 ms, 64.00 ms, 96.00 ms, and 128.00 ms RTT, yielding a matrix of forty data points. The experiments were then repeated under two additional kernel tuning profiles, raising the total to 80 and then 120 data points. The data points and graph (switched path only) show that as delay increased, overall TCP performance improved as the number of parallel streams was increased.
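As an illustration of how such a test matrix might be driven, the sketch below runs the parallel-stream sweep for a single delay setting; the server hostname is a placeholder, and the delay is assumed to be set externally (via the Spirent, or netem) before each sweep.

#!/usr/bin/env python3
# Minimal sketch of the parallel-stream sweep described above.
# SERVER is a hypothetical hostname; the delay setting is applied
# externally (Spirent or netem) before this script is run.
import subprocess

STREAMS = [1, 16, 32, 64, 96]    # parallel TCP streams (-P), per the text
DURATION = 300                   # five-minute tests, per the text
SERVER = "iperf-server.example"  # hypothetical receiver host

def run_iperf(streams):
    # One unidirectional TCP test; returns Iperf's raw report.
    result = subprocess.run(
        ["iperf", "-c", SERVER, "-t", str(DURATION), "-P", str(streams)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    for n in STREAMS:
        print("--- %d parallel stream(s) ---" % n)
        print(run_iperf(n))

The kernel tuning profiles mentioned above would typically adjust TCP buffer sysctls such as net.ipv4.tcp_rmem and net.ipv4.tcp_wmem; the specific values used are not given here.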
Detailed results can be found in the attached text and Excel files.