Servers offered by regular 'Speed test' tools have one thing in common: they all publish HTTP URLs which consistently serve a large amount of data to the client – large enough that a measurement can be done over at least several seconds (giving TCP flow control time to settle, etc).

The same does not apply to 'any random public server', as most URLs hosting ordinary HTML webpages are tiny in comparison to the server's upload speed – when serving 'only' 200 kB, you'd end up measuring the webapp's processing time and network latency more than the actual throughput.

So if you want reliable results, you can only measure servers that you're in control of (so that you could make the server publish a 1 GB file via HTTP at a known URL that the test service could download).

(And although it'd still be technically easy for a 'speed test' service to set up its servers to offer a mode where the test server downloads an external URL, I kind of suspect that this would expose it to a few legal risks.)
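To make the idea concrete, here's a rough sketch of what such a measurement boils down to – time the download of a large file you host yourself and derive throughput from it. The `TEST_URL` below is a placeholder for a big file published on a server you control, not any real test service:

```python
# Minimal sketch: measure download throughput of a large file on your own server.
# TEST_URL is a hypothetical placeholder – publish your own big file there.
import time
import urllib.request

TEST_URL = "https://example.com/testfile-1GB.bin"
CHUNK = 64 * 1024  # read in 64 KiB chunks

start = time.monotonic()
total_bytes = 0
with urllib.request.urlopen(TEST_URL) as resp:
    while True:
        chunk = resp.read(CHUNK)
        if not chunk:
            break
        total_bytes += len(chunk)
elapsed = time.monotonic() - start

print(f"Downloaded {total_bytes / 1e6:.1f} MB in {elapsed:.1f} s "
      f"= {total_bytes * 8 / elapsed / 1e6:.1f} Mbit/s")
```

If the transfer finishes in well under a few seconds, the resulting figure mostly reflects latency and TCP ramp-up rather than sustained throughput – which is exactly why the test file needs to be large.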