Although the HTTP working group hasn't addressed this yet, it
presumably will fairly soon, so I think one can assume that
HTTP/1.1 will have preemption whether or not it has prefetching.
Of course, there's an interesting tradeoff if the "preemption"
mechanism adopted is "close and reopen the TCP connection": how often
does this increase latency for the user vs. how often does a successful
prefetch reduce it? But I think one can do a reasonably fast
preemption using the TCP Urgent Pointer (cf. the Telnet protocol).
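For what it's worth, here is a minimal sketch (Python, over a loopback
connection) of the urgent-pointer trick; the single "!" preempt code is
made up for illustration, not part of any proposal:

```python
import select
import socket

# Hypothetical sketch of a fast preempt signal using the TCP urgent
# pointer, much as Telnet signals interrupt-process.  The one urgent
# byte is flagged ahead of any queued speculative data in the stream.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

# Client signals "preempt the speculative response now":
cli.send(b"!", socket.MSG_OOB)

# Server side: urgent data shows up as an "exceptional condition" on
# select(), so it can be noticed without draining the ordinary stream.
_, _, exceptional = select.select([], [], [conn], 5.0)
urgent = conn.recv(1, socket.MSG_OOB)

for s in (cli, conn, srv):
    s.close()
```

Whether this is fast enough in practice depends on how routers and
intermediate stacks handle the urgent pointer, of course.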
> direct requests preempt speculative responses
>   both at the server and at the client
Yup. Preempt may mean "abort" or it may mean "suspend".
> direct requests and responses preempt speculative packets
>   in the network - i.e., this is why you need
>   red-flagged ABR ATM packets, or some sort of
>   similar flag in integrated-services IP. It doesn't
>   work at all with current IP.
Since our aim is to make HTTP + current TCP + current IP +
current link layers (i.e., not ATM) work better within the next
year or so, this is moot. As you observe, it can't be done.
Also, it's a micro-optimization; one or two packets of latency
(if MTUs are set according to Van Jacobson's advice in RFC 1144)
should not materially affect interactivity.
> cache hits are forwarded to the server presender
>   so that the server presender can update
>   its speculation set
This won't be popular with the HTTP community, since it adds
server load. And it's not clear to me that the server's "speculation
set" (predictive model) should be updated to reflect cache-hit
behavior, since (in our approach, at least) its purpose is
to predict cache-miss behavior!
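To make the distinction concrete, here is a rough sketch (names and the
first-order Markov structure are mine, not anything agreed on) of a
speculation set trained only on the direct requests the server actually
sees:

```python
from collections import defaultdict, Counter

class SpeculationSet:
    """Hypothetical sketch: a first-order Markov predictor of the next
    cache-miss request.  Only direct requests (i.e., client cache
    misses) update the model; forwarding cache hits to the server
    would skew it toward pages the client already holds."""

    def __init__(self, threshold=0.3):
        self.counts = defaultdict(Counter)  # prev_url -> next_url tallies
        self.threshold = threshold

    def record_direct_request(self, prev_url, url):
        # Called only for direct (non-speculative) requests.
        self.counts[prev_url][url] += 1

    def predict(self, url):
        # URLs whose estimated follow-on probability clears the threshold.
        total = sum(self.counts[url].values())
        return [u for u, n in self.counts[url].items()
                if total and n / total >= self.threshold]
```

A server presender would call predict() after answering a direct
request and presend (or hint at) whatever it returns.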
> speculative responses are dropped if
>   no bandwidth in the net
>   server busy with other direct requests or responses
In our model, servers can stop sending prefetch predictions if they
are overloaded (or use a feedback control system to adjust their
prediction threshold to maintain slightly under 100% loading).
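As a rough illustration (the 90% target, the gain, and the function name
are mine, purely for exposition), such a feedback loop might look like:

```python
def adjust_threshold(threshold, load, target=0.9, gain=0.05):
    """Hypothetical proportional controller: when measured server load
    exceeds the target, raise the prediction threshold so only
    higher-probability prefetch hints are sent; when load drops,
    lower it again.  Clamped to the probability range [0, 1]."""
    threshold += gain * (load - target)
    return min(max(threshold, 0.0), 1.0)

# An overloaded server backs off ...
t_hot = adjust_threshold(0.30, load=1.2)
# ... and a lightly loaded one speculates more aggressively.
t_cool = adjust_threshold(0.30, load=0.5)
```

Anything along these lines keeps the server's speculative work strictly
subordinate to its direct-request load, which is the point of the rule.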
> Note - the server rules imply that cache updates
>   arrive on a different TCP port than direct requests,
>   and that the cache loads come on different TCP ports
>   than direct responses.
This might be an optimization, but it's not necessary. And if
you don't insist on this optimization, the changes to HTTP are
quite minimal (and hence easy to get into the standard).
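For instance (purely illustrative; no such header exists, and the name
"Prefetch-Hint" is made up), the minimal change could be a single
advisory response header carrying the server's predictions, which
HTTP's ignore-unknown-headers rule lets unmodified clients skip:

```python
# Hypothetical advisory header a presending server might attach to a
# direct response.  Clients that understand it may prefetch the listed
# URLs; clients that don't simply ignore the header, so no other
# protocol change is required.
def add_prefetch_hints(headers, predicted_urls):
    headers = dict(headers)  # don't mutate the caller's dict
    if predicted_urls:
        headers["Prefetch-Hint"] = ", ".join(predicted_urls)
    return headers

resp = add_prefetch_hints({"Content-Type": "text/html"},
                          ["/next.html", "/images/logo.gif"])
```

Keeping the change down to one optional header is exactly the sort of
thing that makes a proposal easy to get into the standard.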