13

Can YouTube (for example) send a video file once and have multiple users stream it? Or does YouTube need to send the file to each person individually, even though all of the users live in the same region?

If it is the second case, is there any way, apart from P2P, in which the ISP handles the parallelism or something like that?

(EDIT) Apart from YouTube, is it otherwise possible to send a file once and have it downloaded multiple times, possibly at the same time (live)? I mean, can we send a file to multiple IPs while only needing to upload it once? (Not talking about cloud services here.)

Peter Cordes
  • 6,345
Someone
  • 157

4 Answers

25

YouTube sends a separate copy to each user. It's pretty much the only option available over the Internet.

is there any way, apart from P2P, in which the ISP handles the parallelism or something like that?

No, there isn't. ISPs don't have any way they could handle it. There kind of used to be two ways – multicast and ISP-provided caching proxies – but neither of them works these days.

While IP "multicast" is a thing, it has never worked on Internet scale (they tried and gave up), and it couldn't work for VODs anyway. (Some ISPs do use multicast internally for their IPTV offers, but it only works for real-time, TV/radio style broadcast streams – not for "on-demand" video, where different viewers want to start the playback at different times.)

A few decades ago, some ISPs provided HTTP proxies that would locally cache any requested file on behalf of their clients (sometimes opt-in, sometimes enforced). However, such proxies no longer work now that practically all websites use HTTPS – the proxy can't see inside HTTPS, so it doesn't know what is being requested (and most people wouldn't want their ISP to know that anyway). They also offer far less benefit now that 100 Gbps WAN links are an option (as opposed to a smaller ISP having to share a single 1 Mbps link for the entire country, back in the day).

That being said, YouTube itself most likely does local caching (I'm sure Netflix does, and I believe YouTube does as well). YouTube isn't hosted in a single place – it has storage and caching nodes in various regions, interconnected through Google's private WAN links. So when multiple users in the same region watch the same video, they all request it from their regional YouTube servers – which will most likely fetch the video just once and cache it for subsequent requests.
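
The caching logic itself is conceptually simple – a pull-through cache that fetches on a miss and serves locally afterwards. Here's a toy sketch of the idea (the origin URL and function name are made up for illustration, not anything YouTube actually exposes):

```python
# Toy pull-through cache: the first request for a video fetches it from
# the origin; later requests for the same video are served from the
# regional copy. Names and URLs are illustrative only.
import urllib.request

ORIGIN = "https://origin.example.com/videos/"   # hypothetical origin server
_cache = {}

def get_video(video_id):
    if video_id not in _cache:                  # cache miss: one origin fetch
        with urllib.request.urlopen(ORIGIN + video_id) as resp:
            _cache[video_id] = resp.read()
    return _cache[video_id]                     # cache hit: served locally

# Three viewers in the same region -> one transfer from the origin,
# three transfers over the (short) local path to the users.
```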

So you could say that YouTube itself partially handles the parallelism, by bringing the data closer to ISPs and caching it.

The rest of the path (from the local YouTube nodes, via ISPs, to the end users) is still a separate copy for each user – can't really do it any other way – but it's a fairly short path.

grawity
  • 501,077
8

Consider that if three people are viewing the same 50-minute-long video, one client might be at the beginning, another at the middle and a third at the end. One could even decide to backtrack 15 minutes into the video. Though receiving devices do buffer a few seconds, in this situation three separate streams are required, possibly via TCP.

However, for simultaneous broadcasts, e.g., live video of a conference or a satellite launch, far fewer User Datagram Protocol (UDP) streams might serve millions of viewers. Why more than one stream? Some might be viewing at 640x480 pixel resolution, and others at 1920x1080 (1080p), so each resolution requires its own UDP stream.

6

Answer: That depends on the video you're watching.

One-to-many casting is called multicast while one-to-one casting is called unicast. In multicast, all receivers receive exactly the same video at the same time, while in unicast the receiver may have control of the position within the video.

Multicast is suitable for large public events that are cast to a large audience, since it allows the broadcaster to economize on the bandwidth being transmitted, so it's used for large live events. Unicast is suitable for individual watching on YouTube.

Although in multicast the receiver cannot choose their position within the video, this is offset by the browser storing the received video in its temporary internet cache, which allows the watcher to pause and go backward/forward within the stream, up to the live point.

For multicast to work, every router between the recipient and the source must be multicast-enabled. In order for a computer to join multicast groups, it must support Internet Group Management Protocol (IGMP). This was once a problem, but today most hardware supports it.
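
Joining a group is done on the receiving host itself; the operating system then sends the IGMP membership report on its behalf. A minimal receiver sketch in Python (the group address and port are arbitrary example values, and this only works where the path from the sender is multicast-enabled):

```python
# Minimal multicast receiver sketch: joining a group makes the OS emit an
# IGMP membership report, so multicast-enabled routers start forwarding
# that group's traffic to this host. Group/port are example values.
import socket
import struct

GROUP = "239.1.2.3"
PORT = 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface
# (0.0.0.0 lets the OS pick one); this is what triggers the IGMP join.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(2048)
    print(f"{sender[0]} -> {data!r}")
```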

A well-known use of multicast happened on May 18, 2000, when over two million internet users watched the Victoria's Secret Fashion Show. The event was broadcast in 56 and 100 kbps unicast, and two multicast streams at 300 kbps and 700 kbps. If everyone had been using 56 kbps unicast, that's over 100 Gbps of streamed data, requiring prohibitively costly bandwidth. The majority of users at the time couldn't use multicast, but if all had been using it, the maximum bandwidth Victoria's Secret would have needed for the event would have been about 1 Mbps (the two multicast streams combined), regardless of the number of viewers.
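
A quick back-of-the-envelope check of those figures:

```python
# Back-of-the-envelope check of the numbers above (all values approximate).
viewers = 2_000_000
unicast_kbps = 56
print(viewers * unicast_kbps / 1_000_000)   # 112.0 Gbps of aggregate unicast traffic

multicast_kbps = [300, 700]                  # the two multicast streams
print(sum(multicast_kbps))                   # 1000 kbps ~ 1 Mbps, independent of audience size
```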

For more information see the article Global IP Network Multicast FAQ.

harrymc
  • 498,455
6

The current best solution is brute force, plus local proxy servers distributed around the world (a CDN, or Content Delivery Network).


Multicast (https://en.wikipedia.org/wiki/IP_multicast) technically exists, but AFAIK it's not supported over the public Internet. (In the early days of video on the Internet, I recall reading about multicast getting used over a part of the Internet ("mbone", the multicast backbone) that connected some North American universities, especially for video conferencing with many-to-many connections.)

Using it for video could basically get routers to do the job that CDN servers (content delivery networks) do for live feeds: sending the data once to a city, and having all users in that city stream from a nearby machine, so there isn't one machine in the world sending out all that traffic many times over backbone links. (Maybe not "city" but "internet hub".)

For a live stream with huge numbers of viewers (e.g. the Olympics, World Cup soccer) where most networks that have any viewers of the stream have multiple viewers, it's a potentially interesting idea, if we had networks that supported multicast. Or for updates to major software (like Windows) right after they're released, you could imagine having a server multicast the update files a few times for everyone to get a copy.


But there are major technical challenges.

If a packet gets dropped, multicast doesn't have a good way for a single client to request a retransmit. With low-latency video conferencing suitable for two-way chats, you just accept the glitch in your stream as the price to pay for keeping latency low. But when streaming a "live" event, we normally don't mind a few seconds of delay, which gives enough of a buffer for TCP retransmits. (YouTube, and the web in general, operates over TCP/IP, a protocol that retransmits dropped packets.)

The routers don't keep old copies of the packets, and it doesn't scale for every client in the world to send the server a request to retransmit a dropped packet to them (perhaps over unicast UDP). (This would also open up a way for malicious users to mount DoS (denial-of-service) attacks and consume a huge amount of the server's bandwidth.)

So perhaps we could use some forward error correction (FEC, like ECC) so receivers could reconstruct the correct data if a few packets were dropped. That would add some overhead all the time, even when no packets are dropped, but could potentially avoid many clients needing to retry. But dropped packets often come in bursts, so we'd need a large "chunk size" for FEC to help (like a few % overhead over many seconds or maybe a minute of video, requiring large buffers on each client to be able to reconstruct data with error correction).
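
To illustrate the principle, here's a toy FEC scheme in Python: one XOR parity packet per chunk of data packets, which lets the receiver rebuild any single lost packet in that chunk without asking for a retransmit. (Real systems use much stronger codes such as Reed-Solomon or fountain codes; this is just the simplest possible example.)

```python
# Toy forward error correction: one XOR parity packet per chunk of N data
# packets lets the receiver rebuild any single missing packet in the chunk.
from functools import reduce

def make_parity(packets):
    """XOR equal-length packets together into one parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity, chunk_size):
    """Rebuild at most one missing packet by XOR-ing everything that did arrive with the parity."""
    missing = [i for i in range(chunk_size) if i not in received]
    if len(missing) == 1:
        received[missing[0]] = make_parity(list(received.values()) + [parity])
    return received  # two or more losses in one chunk: this simple code can't help

# Example: a chunk of 4 data packets; packet 2 is dropped in transit.
chunk = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = make_parity(chunk)
arrived = {0: chunk[0], 1: chunk[1], 3: chunk[3]}       # packet 2 never arrived
print(recover(arrived, parity, 4)[2])                    # b'CCCC', rebuilt without a retransmit
```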

This would be much less of a problem for software distribution via multicast (on release day when everyone's downloading it), since ECC codes could cover the whole thing, like Par2 or the recovery data that can be embedded in some archive formats like RAR.

But if we wanted to cover the worst case for a viewer, comparable to what YouTube buffering can absorb currently (multiple seconds of lost data), that would make the error-correction overhead way too high.

So we'd want clients to be able to fall back to unicast requests to fill gaps, presumably over TCP (for its congestion-friendliness). That means we still need a high-bandwidth server or CDN, especially if people abuse this to always stream over TCP instead of multicast (e.g. to work around being on an ISP that doesn't support multicast) – or if we don't consider that "abuse", and just treat multicast as saving some bandwidth.
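
A rough sketch of what that hybrid client might look like – receive numbered packets over multicast, notice the gaps, and fill them with ordinary unicast requests over HTTPS/TCP. Everything here (the repair URL, the fetch_range helper) is a hypothetical stand-in, not a real API:

```python
# Rough sketch of the hybrid idea: packets arrive (unreliably) via multicast,
# and the client fills any gaps with ordinary unicast requests over TCP.
import urllib.request

REPAIR_URL = "https://repair.example.com/stream/{start}-{end}"  # hypothetical

def fetch_range(start, end):
    """Unicast fallback: fetch the missing byte range over HTTPS/TCP."""
    with urllib.request.urlopen(REPAIR_URL.format(start=start, end=end)) as resp:
        return resp.read()

def fill_gaps(buffer, packet_size, expected):
    """After a multicast burst, request whatever sequence numbers never arrived."""
    for seq in range(expected):
        if seq not in buffer:   # a gap: dropped somewhere along the multicast tree
            buffer[seq] = fetch_range(seq * packet_size, (seq + 1) * packet_size - 1)
```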

This is obviously much more technically complex than receiving a TCP stream, and may just let your CDN be cheaper, with fewer machines or less capacity.

And building around this architecture doesn't let people pause and rewind, which some streams do want to support. (Unless their client had already received and locally saved the video they're rewinding into.)

Peter Cordes
  • 6,345