Final answer:
With no network congestion and no queuing delays, the number of server stalls would be n = ⌈b/s⌉ (one stall per segment), and the total latency would be n × rtt + b/r.
Step-by-step explanation:
Under a simple 'stop-and-wait' protocol with no congestion and no queuing delays, the server stalls each time it must wait for an acknowledgment before sending the next segment. For a file of b bytes split into segments of s bytes each, this happens n times, where n is the number of segments: n = b/s. If b is not exactly divisible by s, the last partial segment also triggers a stall, so we round up, giving n = ⌈b/s⌉.
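The stall count above can be sketched in a few lines of Python; the file and segment sizes used here are hypothetical values chosen only for illustration.

```python
import math

def num_stalls(b: int, s: int) -> int:
    """Number of stop-and-wait stalls for a b-byte file sent in s-byte segments.

    The server stalls once per segment while waiting for its ACK, so the
    count is b/s rounded up to cover a final partial segment.
    """
    return math.ceil(b / s)

# Hypothetical example: a 1,000,000-byte file in 1,460-byte segments.
print(num_stalls(1_000_000, 1_460))  # 684 full segments + 1 partial -> 685
```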
The total latency is the sum of the time spent transmitting the bits and the time spent waiting on round trips. Ignoring processing and queuing delays, the time to transmit the object is b/r, where b is the object size in bytes and r is the transmission rate in bytes/second. Each of the n segments also incurs one round-trip time rtt while the server waits for that segment's acknowledgment, contributing n × rtt in total; the per-segment transmission times sum to n × (s/r) = b/r. The total latency is therefore n × rtt + b/r.
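The latency formula can be checked numerically with a short sketch; the link rate, RTT, and sizes below are hypothetical numbers picked only to make the arithmetic easy to follow.

```python
import math

def total_latency(b: float, s: float, r: float, rtt: float) -> float:
    """Total stop-and-wait transfer latency: n RTT stalls plus transmission time.

    b: file size (bytes), s: segment size (bytes),
    r: transmission rate (bytes/second), rtt: round-trip time (seconds).
    """
    n = math.ceil(b / s)      # number of segments, hence number of stalls
    return n * rtt + b / r    # n round-trip waits + b/r seconds on the wire

# Hypothetical example: 1 MB file, 1 kB segments, 125,000 B/s link, 100 ms RTT.
# 1000 segments -> 1000 * 0.1 s of stalls + 8 s of transmission = 108 s.
print(total_latency(1_000_000, 1_000, 125_000, 0.1))  # 108.0
```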