Transmission time can be defined, per Wikipedia, as (for a digital signal):
"The time from the first bit until the last bit of a message has left the transmitting node."
Thinking about digital signals, the transmission time corresponds to the total duration of the whole packet. The data is pushed onto the wire from the send buffer. The time that the bits of a packet spend waiting in the send queue (buffer) is a different kind of delay, called queuing delay, so I am excluding it here.
My question is this: what exactly is transmission time a function of? For example, we can say that the propagation delay is a function of the distance between the connected nodes and the signal propagation speed (some fraction of the speed of light), and that the processing delay is a function of the processing speed of the CPUs, FPGAs, etc. The transmission time, however, seems to be only loosely defined in terms of such functional relations.
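To make the kind of functional relation I mean concrete, here is a small illustrative sketch (the link length, packet size, and bit rate are made-up example values); the transmission-time line is exactly the textbook relation I ask about further below:

```python
# Illustrative numbers only: a 2,000 km fibre link and a 1,500-byte packet
# on a 1 Gbit/s interface (my own example values, not from any source).
distance_m = 2_000_000          # link length in metres
signal_speed = 2e8              # ~2/3 of c in fibre, metres per second
packet_bits = 1500 * 8          # packet size in bits
bit_rate = 1e9                  # interface bit rate, bits per second

propagation_delay = distance_m / signal_speed   # function of distance and signal speed
transmission_time = packet_bits / bit_rate      # the "textbook" relation I refer to below

print(f"propagation delay : {propagation_delay * 1e3:.2f} ms")   # 10.00 ms
print(f"transmission time : {transmission_time * 1e6:.2f} us")   # 12.00 us
```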
Since queuing delay is a separate kind of delay, we can assume that the data to be pushed onto the wire has already made it through the queue, so the only remaining work is to push the data onto the wire. My question can also be phrased as: "What physical reason(s) prevent the data from being pushed onto the wire instantaneously?"
Clearly it cannot be pushed onto the wire instantaneously, but the reason is not intuitive to me. Why not? Is it the distance from the physical location of the send buffer to the near end of the wire? Is it some limitation that the PHY protocols enforce through various mechanisms (e.g. "...Many Ethernet adapters and switch ports support multiple speeds by using auto-negotiation to set the speed and duplex for the best values supported by both connected devices.")? If so, what are these mechanisms at the hardware level? Is it the sampling rate of the node's clock? Which of these would likely have the biggest impact on the transmission time?
I know that not every PHY protocol can handle high transmission speeds (e.g. Comer, D. E. (2008). Computer Networks and Internets, 5th Edition: "Asynchronous serial interfaces commonly will support bit transmission speeds of up to 230.4 kbit/s."). As far as I understand, there is a limitation imposed by the cabling and the physical layer itself; however, I would assume the answer lies in a combination of the properties of the sending node and the physical connection interface on the NIC.
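Just to illustrate the scale of the difference the PHY makes, here is a rough sketch comparing how long a single bit occupies the line, and how long a whole frame takes to serialize, on the asynchronous serial interface Comer mentions versus a 1 Gbit/s Ethernet PHY (the 1500-byte frame size and the Gigabit figure are my own illustrative choices):

```python
# Rough comparison of per-bit and per-frame serialization time on two PHYs.
# 230.4 kbit/s is the asynchronous serial figure quoted from Comer;
# 1 Gbit/s and the 1500-byte frame are illustrative choices of mine.
frame_bits = 1500 * 8

for name, rate in [("async serial, 230.4 kbit/s", 230_400),
                   ("Gigabit Ethernet, 1 Gbit/s", 1_000_000_000)]:
    bit_time = 1 / rate                 # how long one bit occupies the line
    frame_time = frame_bits / rate      # serialization time of the whole frame
    print(f"{name}: {bit_time * 1e6:.3f} us per bit, "
          f"{frame_time * 1e3:.3f} ms per frame")
# async serial    : ~4.340 us per bit, ~52.083 ms per frame
# Gigabit Ethernet:  0.001 us per bit,   0.012 ms per frame
```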
By the way, please do not tell me that it is a function of the packet size and bit rate, since that eventually comes down to saying "the period is high because the frequency is low," which tells me nothing.
I may have missed sources that cover this, but I could not find any.

