I'm doing a system design in which many flash-memory devices are connected into a USB tree of hubs, and I need to estimate how long it will take for them all to complete their downloads ... which is complicated by my lack of in-depth understanding of USB! I'm hoping to find a simulation that will let me explore some alternatives, but some experience-based wisdom would sure help!
I have anywhere from 1 to 200 USB 2.0 devices, each with 1 GB to offload as quickly as possible. The scenario is that someone sets up the tree of hubs - a laptop with 2 USB ports feeding a couple of hubs, which each branch N-way until I get to the 200 leaves. (Fewer is certainly possible; there's no guarantee they'll arrive in full-size batches.) Then we start plugging in devices. As soon as a device is available, we want to start an automatic download of the full 1 GB memory from that device.
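
To anchor the raw numbers, here's my per-device back-of-envelope; the ~1 MB/s effective Full Speed bulk rate is my assumption (derived from the theoretical 19 x 64-byte packets per 1 ms frame), not a measured figure:

```python
# Rough per-device arithmetic (assumptions flagged in comments).
GIB = 1024**3                     # bytes per GiB -- assuming "1GB" really means ~1 GiB

# Assumption: Full Speed bulk tops out at 19 x 64-byte packets per 1 ms frame
# (~1.2 MB/s theoretical), so call it ~1 MB/s after MSD/protocol overhead.
fs_effective_Bps = 1.0e6

per_device_s = GIB / fs_effective_Bps
print(f"one device, unshared link: {per_device_s / 60:.0f} min")   # ~18 min

# If every device really got that rate in parallel, 200 devices would still
# finish in ~18 min -- so the open question is how much of that parallelism
# the hub tree and the laptop's host controller actually allow.
```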
These will be "generation2" versions of some existing devices, so there's a mix of design freedom and legacy constraint... The devices will have both HID and MSD functions. They are USB 2.0 but have very limited CPU so I expect each device is limited to Full speed (12Mbps). The devices need to pick up charge while plugged in and downloading, so I gather I need to ensure the devices and hubs implement Battery Charging Spec v1.2 (hubs being "downstream charging"). I get to specify the hubs to be used, so they can be USB 3.0 or whatever. I think I can specify the peripheral devices will use Link Power Management though I'm not yet certain of the implications.
I don't know how bandwidth divides in a many-device USB setup like this, what impact the plugging and unplugging has, whether it would be better to split the devices 100/100 vs. 127/73 ... that sort of question. So how do I guesstimate how long it will take (wall-clock time) to pull the 200 GB into the laptop?
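
For what it's worth, here is the kind of guesstimate script I've been playing with; the single-TT vs. multi-TT sharing behaviour and the ~35 MB/s effective High Speed figure are my assumptions about how the bandwidth divides, which is exactly what I'd like confirmed or corrected:

```python
# Crude wall-clock model; every number here is an assumption I'd like checked.
def estimate_hours(n_devices,
                   gib_per_device=1.0,
                   fs_rate=1.0e6,      # assumed effective Full Speed bulk rate per device, B/s
                   hs_rate=35.0e6,     # assumed effective High Speed bulk rate per root port, B/s
                   devices_per_tt=4,   # Full Speed devices sharing one Transaction Translator
                   multi_tt=True):
    """Model: Full Speed devices behind a single-TT hub share that TT's 12 Mbps
    budget; with multi-TT hubs each port gets its own Full Speed budget, and the
    ceiling becomes the High Speed link from the hub tree into the laptop."""
    per_device = fs_rate if multi_tt else fs_rate / devices_per_tt
    aggregate = min(n_devices * per_device, hs_rate)      # can't exceed the upstream link
    total_bytes = n_devices * gib_per_device * 1024**3
    return total_bytes / aggregate / 3600

# Compare splitting the 200 devices across the laptop's two ports.
for split in [(200,), (100, 100), (127, 73)]:
    hours = max(estimate_hours(n) for n in split)         # trees assumed to drain in parallel
    print(split, f"~{hours:.1f} h")
```

That naive model suggests the split barely matters once the High Speed upstream link is the bottleneck, but it assumes each laptop port gets its own bandwidth budget (which may be wrong if both ports share one host controller) and it ignores enumeration, plug/unplug churn, and filesystem overhead entirely.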

