Re: two ideas...

Fri, 1 Dec 1995 15:00:55 -0800

> From: Jeffrey Mogul <>
> I grabbed a copy of the Touch and Farber paper cited earlier in
> this thread, which seemed to deal with FTP. This described a
> pre-send selection algorithm of sending everything in the currently
> selected directory. The Boston University system used a simple
> 1-level probabilistic model to pick the best candidates for
> pre-send, and used far less extra bandwidth, though with a higher
> probability of a cache-miss. There's lots of stuff to tune with
> speculation.
> The model that Venkata Padmanabhan and I had been working on is
> a little different from the BU and Touch/Farber models (as far as
> I have been able to learn).
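The 1-level probabilistic model mentioned in the quote above can be sketched roughly as follows. This is our reconstruction, not the BU code; the class name, the threshold parameter, and its value are illustrative assumptions:

```python
# Sketch of a 1-level (first-order) probabilistic model like the BU one
# described above: count which object tends to follow which, and mark
# only the likeliest successors as candidates for pre-send.
from collections import defaultdict

class OneLevelPredictor:
    def __init__(self, threshold=0.5):
        # counts[a][b] = number of times b was requested right after a
        self.counts = defaultdict(lambda: defaultdict(int))
        self.threshold = threshold   # assumed tunable cutoff
        self.prev = None

    def observe(self, url):
        """Feed the predictor one request from the reference stream."""
        if self.prev is not None:
            self.counts[self.prev][url] += 1
        self.prev = url

    def predict(self, url):
        """Successors of `url` whose conditional probability exceeds
        the threshold -- the candidates worth pre-sending."""
        succ = self.counts[url]
        total = sum(succ.values())
        if total == 0:
            return []
        return [u for u, n in succ.items() if n / total >= self.threshold]
```

Raising the threshold trades bandwidth for a higher miss probability, which is the tuning knob the quote alludes to.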

Most of our model is similar, except as follows:

> (4) We would modify HTTP to add a new response header
> to carry predictions. Ideally, these should come AFTER

We believe that predictions need to be signalled not only to HTTP,
but also to the client's IP processing mechanism and to the network.
The problem is that predictions are an opportunistic use of available
bandwidth, and must not interfere with other "real" traffic.

As a result, we send them:
- to a different PORT on the client-side cache
- flagged as "droppable" (ABR user-flagged 'red', in ATM terms)
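In rough IP terms, that amounts to something like the following sketch (not our actual code; the prediction port number and the TOS value are illustrative assumptions):

```python
# Sketch: a server-side presender pushing a predicted object to a
# separate "prediction" port on the client-side cache, marked with a
# low-priority TOS so the network may discard it first -- the IP
# analogue of an ATM user-flagged 'red' cell.
import socket

PREDICTION_PORT = 8081   # hypothetical port reserved for pre-sent data
IPTOS_MINCOST = 0x02     # "minimize cost" TOS bit (RFC 1349), illustrative

def presend(host, payload, port=PREDICTION_PORT):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Flag the traffic as droppable relative to "real" requests.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_MINCOST)
    s.connect((host, port))
    s.sendall(payload)
    s.close()
```

Because the pre-sent data arrives on its own port, the client-side cache can distinguish speculative pushes from replies to real requests without any change to the HTTP exchange itself.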

> As far as I can tell, ours differs from the other "prefetch"
> approaches in that we split up the prediction mechanism
> from the control mechanism, since it's not possible to optimize
> both in the same place. The server knows the behavior of clients

Our technique does this too - we have a client-side cache, and a
server-side presender. The client is unaware of the actions of the
client-side cache, and the server is unaware of the actions of the
server-side presender, resulting in transparent operation.
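The client-side half of that transparency can be sketched as below. This is an assumed design for illustration, not our implementation; the class and method names are invented:

```python
# Sketch: a client-side cache that serves pre-sent objects transparently.
# The client issues ordinary requests; if the server-side presender has
# already pushed the object, the cache answers locally and the client
# never knows speculation occurred.
class ClientSideCache:
    def __init__(self, fetch):
        self.store = {}      # objects pushed by the server-side presender
        self.fetch = fetch   # fallback: a real network fetch

    def accept_presend(self, url, body):
        """Called when a pre-sent object arrives on the prediction port."""
        self.store[url] = body

    def get(self, url):
        """The client's normal request path; hits avoid the network."""
        if url in self.store:
            return self.store[url]
        body = self.fetch(url)
        self.store[url] = body
        return body
```

The server-side presender is the mirror image: it watches replies go past and pushes predicted objects, while the server just answers requests as usual.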

The system looks like:

If you split the long line at different places, this yields:

prefetching client caches:

prefetching server caches:

prefetching intra-net proxies:


The other advantage is that this requires NO modification to HTTP.