
This is a rather general question about hardware and standards:

Why do standards bodies place limits on data transfer rates, and disallow manufacturers from exceeding those rates? (E.g. 54 Mbit/s for Wireless G, 150 Mbit/s per stream for Wireless N, ...)

Why not allow for some sort of handshake protocol, whereby the devices being connected agree upon the maximum throughput that they each support, and use that value instead? Why does there have to be a hard-coded limit, which would require a new standard for every single improvement to a data rate?
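To make the proposal concrete, here is a minimal sketch in Python (the devices and rates are invented for illustration) of the handshake the question describes: each side advertises the fastest rate it supports, and the link simply runs at the lower of the two. The answers below explain why real standards don't leave the rate this open-ended.

    def negotiate_rate(rate_a_mbps: float, rate_b_mbps: float) -> float:
        """Naive handshake: each device advertises the fastest raw rate it
        can drive, and the link runs at the slower of the two."""
        return min(rate_a_mbps, rate_b_mbps)

    # Hypothetical devices: an older 54 Mbit/s adapter talking to a 600 Mbit/s one.
    print(negotiate_rate(54.0, 600.0))  # -> 54.0; the slower device sets the pace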

user541686

3 Answers

4

In general, when a new communications technology is invented, the inventors make it as fast as they can: they don't know of any feasible way to make it faster at an acceptable cost. They then specify the speed at which the technology operates, so that users can be sure that equipment from different manufacturers will interoperate.

Take Ethernet as an example: after experimenting with lower speeds, the group that defined the standard settled on 10 Mbps over thick coaxial cable. If they'd known then how to get 10 Gbps over twisted-pair cable at a marketable price, I'm sure they would have done it.

If you've worked out how to get 10 Mbps over thick coaxial cable, you probably don't know how you could get 10 Gbps over the same cable, and it would be pointless to specify how all the nodes should negotiate speed if you don't yet know how future high-speed devices might interoperate with low-speed devices.

A kind of exception exists for low-speed, low-cost systems such as USB. It was known that keyboards need lower I/O rates than storage devices, so the designers built in a way to negotiate between low and full speeds. Yet even higher speeds had to be retrofitted later; they were not anticipated in the original standard. It is better to issue a usable standard now than to wait until you can work out what speeds might be possible in twenty or thirty years' time.
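As a rough sketch (this is not the actual USB protocol; the tier names and rates are only illustrative), the point is that negotiation happens among speed tiers the standard already defines, which is why a later, faster tier needed a revised standard before either end could use it:

    # Speed tiers defined by a hypothetical v1 of a standard (Mbit/s).
    V1_TIERS = {"low": 1.5, "full": 12.0}

    def link_speed(host_tiers, device_tier):
        """A v1 host can only run a device at a tier it already knows about;
        any other tier looks like an unknown, incompatible device."""
        return host_tiers.get(device_tier)

    print(link_speed(V1_TIERS, "low"))   # keyboard-class device -> 1.5
    print(link_speed(V1_TIERS, "full"))  # storage-class device  -> 12.0
    print(link_speed(V1_TIERS, "high"))  # tier added in a later revision -> None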

2

In general, one of the advantages of a standard is that if your adapters support it and your cables support it, the link will work. With that in mind, most IEEE standards tend to be conservative, slightly overengineered, and will generally work as advertised.

There's nothing stopping a manufacturer from extending the standard to increase speed (which, in practice, didn't always deliver the advertised rates) or from using a non-standard speed or interface. By following a standard, manufacturers ensure that their products aren't returned because they're incompatible.

There's nothing forcing this; standards make sense for everyone involved, since all gear conforming to a standard will work together, and you don't need to worry about whether gear from company A and company B uses different, incompatible approaches. That's one reason you can use an Ethernet interface (10 Mbps) with almost any sort of Ethernet cable, and why it can coexist, to an extent, with Fast Ethernet (100 Mbps) and Gigabit Ethernet (1 Gbps) adapters.
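A much-simplified model of that coexistence (glossing over how Ethernet autonegotiation really signals its capabilities on the wire): each adapter advertises the standard rates it implements, and the link runs at the fastest rate both sides share.

    def common_rate(rates_a, rates_b):
        """Run the link at the fastest rate both adapters implement,
        or refuse the link if they share no standard rate at all."""
        shared = set(rates_a) & set(rates_b)
        return max(shared) if shared else None

    legacy_nic = {10}            # an old 10 Mbps-only adapter
    gige_nic = {10, 100, 1000}   # a modern adapter that also speaks the older rates
    print(common_rate(legacy_nic, gige_nic))  # -> 10: they interoperate, just slowly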

It's just like the rest of networking: there's nothing stopping someone from running an alternative domain name system or replacing HTTP with a different protocol. The standards just make things simpler for everyone involved.

Journeyman Geek
0

The point of standards is to ensure interoperability between devices that conform to them.

If I manufacture a FooStand v2 device that emits data 20% faster than FooStand v2 devices are required to accept, that breaks the interoperability guarantee, which is bad.