TCP could only tolerate a segment loss probability of 2*10⁻¹⁰ to reach an average bandwidth of 10Gbps. Derive the equation for the segment loss probability for RTT=100ms and MSS=1500 bytes. What would be the tolerable loss if TCP had to support a 500 Gbps connection?

asked by Karl Li (8.1k points)

1 Answer


Final answer:

The segment loss probability equation for a TCP connection with a given bandwidth, RTT, and MSS can be derived from the TCP throughput approximation formula. For a desired bandwidth of 500Gbps, the same rearranged formula gives the tolerable loss needed to support the higher bandwidth requirement.

Step-by-step explanation:

The student's question deals with the relationship between TCP performance, segment loss probability, bandwidth, round-trip time (RTT), and maximum segment size (MSS). The mathematical derivation of TCP throughput as a function of segment loss probability comes from TCP's congestion control algorithm, including the effect of retransmission timeouts. Bandwidth (BW) is limited by the segment loss probability, since loss determines the rate at which TCP can successfully transmit data without triggering retransmissions and window reductions.

In order to derive an equation for the segment loss probability that allows a TCP connection to reach a bandwidth of 10Gbps with an RTT of 100ms and MSS of 1500 bytes, we can use the TCP throughput approximation formula: BW = (MSS / RTT) * sqrt(3/2) / sqrt(LossProbability). Rearranging the formula for LossProbability yields: LossProbability = (3/2) * (MSS / (BW * RTT))².
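As a quick sanity check, the rearranged formula can be evaluated numerically (MSS converted to bits so the units match a bandwidth in bits per second):

```python
# Tolerable segment-loss probability from the TCP throughput approximation:
#   BW = (MSS / RTT) * sqrt(3/2) / sqrt(p)   =>   p = (3/2) * (MSS / (BW * RTT))**2
# Units: MSS in bits, BW in bits/second, RTT in seconds.

MSS = 1500 * 8   # 1500 bytes -> 12,000 bits
RTT = 0.100      # 100 ms
BW = 10e9        # 10 Gbps

p = 1.5 * (MSS / (BW * RTT)) ** 2
print(f"tolerable loss probability: {p:.2e}")  # prints 2.16e-10
```

The result, about 2.16*10⁻¹⁰, matches the ~2*10⁻¹⁰ figure quoted in the question.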

For a desired bandwidth of 500Gbps with the same RTT and MSS, the tolerable loss is found by plugging the new bandwidth into the rearranged loss probability formula, giving roughly 8.6*10⁻¹⁴. Since the loss probability scales with 1/BW², a 50-fold increase in bandwidth requires a tolerable loss 2,500 times smaller. This is the maximum segment loss probability the TCP connection could sustain while maintaining the higher bandwidth.
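The 500 Gbps case works out the same way; only the bandwidth changes:

```python
# Same formula, p = (3/2) * (MSS / (BW * RTT))**2, with BW = 500 Gbps.
MSS = 1500 * 8   # bits
RTT = 0.100      # seconds
BW = 500e9       # 500 Gbps

p = 1.5 * (MSS / (BW * RTT)) ** 2
print(f"tolerable loss probability at 500 Gbps: {p:.2e}")  # prints 8.64e-14
```

A loss rate this small (fewer than one lost segment per ~10¹³ sent) is why very high-speed links rely on variants like CUBIC rather than the classic AIMD throughput model assumed here.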

answered by Sanya Tobi (7.6k points)