If most Web users have one HTTP/TCP connection open at a time, I would
expect that current lower-layer techniques (such as TCP congestion
avoidance) and likely-to-be-employed techniques (such as some
form of fair queueing in the routers) should result in fairness.
That is, everybody will get a roughly equal share of the shared
links. Then it's up to each client (software, not human) to decide
how to allocate its share between prefetching and demand fetching.
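As a concrete illustration of that allocation decision, here is a
minimal Python sketch (my own construction; nothing here is specified
by HTTP): the client services demand fetches first, and prefetches
consume only whatever is left of its per-tick byte budget. The class
name and the budgets are invented for the example.

    import heapq
    import itertools

    DEMAND, PREFETCH = 0, 1   # lower number = higher priority

    class ShareAllocator:
        # Spends the client's fair share of link bandwidth; demand
        # fetches always drain first, prefetches get the leftovers.
        def __init__(self, bytes_per_tick):
            self.bytes_per_tick = bytes_per_tick
            self.queue = []               # (priority, seq, url, size)
            self.seq = itertools.count()  # FIFO tie-break within a class

        def request(self, url, size, priority):
            heapq.heappush(self.queue,
                           (priority, next(self.seq), url, size))

        def tick(self):
            # Service queued requests until this tick's budget runs out.
            budget, serviced = self.bytes_per_tick, []
            while self.queue and self.queue[0][3] <= budget:
                _, _, url, size = heapq.heappop(self.queue)
                budget -= size
                serviced.append(url)
            return serviced

    alloc = ShareAllocator(bytes_per_tick=10_000)
    alloc.request("/today.html", 4_000, DEMAND)       # user clicked
    alloc.request("/tomorrow.html", 4_000, PREFETCH)  # predicted click
    alloc.request("/index.html", 4_000, DEMAND)       # user clicked
    print(alloc.tick())  # ['/today.html', '/index.html']; prefetch waits

A second tick would pick up the prefetch; the point is that
prefetching rides inside the client's own share instead of grabbing
extra link capacity.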
Whether or not HTTP supports prefetching, there is still a danger
of a tragedy of the commons. You can waste bandwidth by prefetching,
or by downloading enormous MPEGs, RealAudio streams, or whatever.
Either the net will ultimately adopt usage pricing, or the current
flat-rate model will generate enough income for ISPs to maintain
a reasonable supply of bandwidth. Dave Clark has good arguments
to support the latter model, and so far it seems to be working.
(Dave's arguments are based on studies by respectable economists;
usage pricing is not inevitable.)
The point is that it may be reasonable to adopt, as a design goal
for a prefetching scheme, the requirement that the total amount of
bandwidth used should decrease, even measured locally. Given a
sufficiently accurate prediction system, persistent connections, and
good caching, this seems possible (barely).
Not so. Think about the set of objects retrieved by prefetching plus
demand fetching. It must be a superset of the set of objects retrieved
by demand alone (in the absence of dire failures); it can't be a
subset. And if any of the predictions are wrong, it will be a proper
superset. Since we can't expect a particularly accurate mechanical
prediction of what any human being will do next (if I had one, I'd
play the stock market with it), prefetching will definitely increase
bandwidth requirements.
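To put rough numbers on that (the figures below are illustrative
assumptions, not measurements): suppose objects are uniformly sized,
and a correct prefetch replaces its demand fetch byte-for-byte, so the
only extra traffic is mispredicted prefetches. Then in Python:

    def traffic_multiplier(prefetches_per_page, hit_rate):
        # Total (prefetch + demand) bytes relative to demand-only
        # bytes; waste comes solely from mispredicted prefetches.
        return 1.0 + prefetches_per_page * (1.0 - hit_rate)

    for hit_rate in (1.0, 0.8, 0.5, 0.2):
        print("hit rate %.0f%%: %.1fx demand-only traffic"
              % (hit_rate * 100, traffic_multiplier(3, hit_rate)))

With three prefetches per page this prints 1.0x, 1.6x, 2.5x, and 3.4x;
the multiplier never drops below 1.0, and it reaches 1.0 only with a
perfect predictor. That is the superset argument in arithmetic form.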
On the other hand, it might result in more efficient use of server,
router, proxy, and client resources ... by increasing temporal
locality. Or maybe not.
-Jeff