Most of our model is similar, except as follows:
> (4) We would modify HTTP to add a new response header
> to carry predictions. Ideally, these should come AFTER
We believe that predictions need to be signalled not only to HTTP,
but also to the client's IP processing mechanism and to the network.
The problem is that predictions are an opportunistic use of available
bandwidth, which must not interfere with other "real" traffic.
As a result, we send them:
- to a different PORT on the client-side cache
- flagged as "droppable" (ABR user-flagged 'red', in ATM terms), as
  sketched below
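To make the "different port, droppable" idea concrete, here is a
minimal Python sketch of the presender's send path. The port number,
the TOS value, the framing, and the presend() helper are all
illustrative assumptions, with a low-priority IP TOS byte standing in
for ATM's red-flagged ABR cells:

  import socket

  PREDICTION_PORT = 8181    # assumed: port the cache reserves for predictions
  LOW_PRIORITY_TOS = 0x02   # TOS "minimize monetary cost": shed this first

  def presend(cache_host, url, body):
      # Predictions ride UDP datagrams marked low priority in the IP
      # header, so a congested router can drop them before real traffic.
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, LOW_PRIORITY_TOS)
      try:
          # Trivial framing (assumed): URL, blank line, object bytes.
          sock.sendto(url.encode() + b"\r\n\r\n" + body,
                      (cache_host, PREDICTION_PORT))
      finally:
          sock.close()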
> As far as I can tell, ours differs from the other "prefetch"
> approaches in that we split up the prediction mechanism
> from the control mechanism, since it's not possible to optimize
> both in the same place. The server knows the behavior of clients
Our technique does this too: we have a client-side cache and a
server-side presender. The client is unaware of the actions of the
client-side cache, and the server is unaware of the actions of the
server-side presender, so operation is transparent at both ends (a
sketch of the cache side follows the diagrams below).
The system looks like:
client--cache-------------------------presender--server
If you split the long line in different places, this looks like:
prefetching client caches:
client--cache--presender-------------------------server
prefetching server caches:
client-------------------------cache--presender--server
prefetching intra-net proxies:
client-------------cache--presender--------------server
etc.
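To illustrate the transparency claim, here is a matching sketch of
the client-side cache's half, under the same assumed port and framing
as above: it collects whatever predictions survive the network and
consults them from its ordinary HTTP path, so neither the client nor
the server sees anything new.

  import socket
  import threading

  PREDICTION_PORT = 8181    # must match the presender's assumed port
  store = {}                # URL -> present object bytes

  def prediction_listener():
      # Losing a droppable datagram just means a future cache miss.
      sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
      sock.bind(("", PREDICTION_PORT))
      while True:
          datagram, _ = sock.recvfrom(65535)
          url, _, body = datagram.partition(b"\r\n\r\n")
          store[url.decode()] = body

  def lookup(url):
      # Called from the cache's normal HTTP request path: serve a hit
      # from the prediction store, else return None and forward the
      # request to the server untouched.
      return store.pop(url, None)

  threading.Thread(target=prediction_listener, daemon=True).start()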
The other advantage is that this requires NO modification to HTTP.
Joe