Re: two ideas...

Jeffrey Mogul
Thu, 07 Dec 95 17:03:07 PST

> (4) We would modify HTTP to add a new response header
> to carry predictions. Ideally, these should come AFTER

> We believe that predictions need to be signalled not only to HTTP,
> but to the client's IP processing mechanism, and to the network.
> The problem is that predictions are an opportunistic use of available
> bandwidth, which must not interfere with other "real" traffic.

You can believe this as much as you want, but I don't think you
will be able to insist on it. For one thing, HTTP already uses
non-reserved bandwidth, and it's not possible to decide what is
"real". (Should the posting of sports scores get priority over
downloading high-resolution nude images? That depends on whether you
are a pervert, a dermatologist, or a sports fan.)

My intuition is that, at the moment, the primary contributor to
delay for the average web user (on a 14.4 or 28.8 modem) is the
long transmission time over the "tail circuit" dialup link.
Prefetching is therefore mostly a way of hiding this part of
the latency. Since these links are typically private to a
given client, and are paid for by the minute, not by the packet,
it makes sense to try to use as much of their bandwidth as possible.
This assumes that the available bandwidth between ISPs is large
enough to cover N*28.8K bits/sec when N users are actively using
the Web, but presumably one would not prefetch infinitely into
the future; that is, there would still be plenty of "think time"
per active user.
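To put rough numbers on the tail-circuit claim, here is a minimal
back-of-the-envelope sketch (the page size is my own assumption, not a
figure from this thread): serialization time alone for a modest page
over a 14.4 or 28.8 modem runs to many seconds, which is exactly the
latency that prefetching during "think time" could hide.

```python
# Hypothetical arithmetic sketch: serialization delay on a dialup
# tail circuit. The 30 KB page size is an assumed example value.

def transfer_seconds(size_bytes: float, link_bps: float) -> float:
    """Time to push size_bytes over a link of link_bps bits/sec."""
    return size_bytes * 8 / link_bps

PAGE = 30_000  # bytes: assumed HTML page plus inline images

print(round(transfer_seconds(PAGE, 28_800), 1))  # 28.8K modem -> 8.3
print(round(transfer_seconds(PAGE, 14_400), 1))  # 14.4K modem -> 16.7
```

Even with generous ISP-side bandwidth, the per-user dialup link stays
the bottleneck, which is why filling its idle minutes with prefetched
data is attractive when the link is billed by the minute.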

You write:
> As a result, we send them:
> - to a different PORT on the client-side cache
> - flagged as "droppable" (ABR user-flagged 'red', in ATM terms)
but then you also write:
> The other advantage is that this requires NO modification to HTTP.

This seems inconsistent to me. Use of a different port and
requiring network-level priority settings definitely means
changes to the HTTP spec, and almost certainly would require
major changes to proxies, routers, etc.

Of course, one could get into an argument over whether prefetching
or presending is more wasteful of bandwidth (for a given reduction
in latency), but I'll leave that for later.