Can someone explain how data is transmitted over a network using TCP/IP? Say I currently receive 500 Mbps of data over a 1 Gbps line. If I upgrade the line to 10 Gbps, would my throughput increase to 5 Gbps because of serialization (more lanes for data to move through, even if each lane still moves at the same speed)?
I'm really confused about what actually determines throughput and why it differs from the labeled link speed. I believe throughput is limited by round-trip time, but to achieve 10 Gbps, wouldn't that require sub-millisecond ping? Or is that the throughput per lane, so depending on the link speed you multiply that transfer rate by the appropriate number of lanes?
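To make my mental model concrete, here's a back-of-the-envelope sketch of how RTT and window size interact (the numbers — a 64 KB window and 20 ms RTT — are just hypothetical values I picked to see the effect; TCP throughput per connection is bounded by window size divided by RTT, the bandwidth-delay product, rather than by RTT alone):

```python
# Rough illustration of TCP's window-limited throughput:
# a sender can have at most one window of unacknowledged data
# "in flight" per round trip, so
#   max throughput ~= window_size / RTT.

def max_throughput_bps(window_bytes: float, rtt_seconds: float) -> float:
    """Upper bound on single-connection TCP throughput in bits/sec."""
    return window_bytes * 8 / rtt_seconds

# A classic 64 KB window at 20 ms RTT caps out around 26 Mbps,
# no matter whether the link is 1 Gbps or 10 Gbps.
print(max_throughput_bps(64 * 1024, 0.020) / 1e6)  # ~26.2 Mbps

# Filling a 10 Gbps link at 20 ms RTT instead needs a window of
# roughly the bandwidth-delay product: 10e9 * 0.020 / 8 bytes = 25 MB,
# which TCP window scaling makes possible.
print(10e9 * 0.020 / 8 / 1e6)  # 25.0 MB
```

So, if I understand correctly, you don't need sub-millisecond ping for 10 Gbps; you need a window large enough to keep the pipe full across one round trip. Is that right?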