> Well, it certainly sounds like the protocol needs a time-to-live, now
> doesn't it? Or at least a digest of the file to be compared against short
> fetches every few hours. -- Darren
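For what it's worth, Darren's two ideas could be combined into a single
freshness check: expire entries after a time-to-live, and otherwise compare
a digest of the cached copy against a fresh fetch. This is only an
illustrative sketch; the names, the 4-hour TTL, and the `fetch_origin`
callable are hypothetical, not part of any actual protocol or server:

```python
import hashlib
import time

MAX_AGE = 4 * 3600  # hypothetical time-to-live: revalidate every few hours


def digest(data: bytes) -> str:
    """Digest of a document body, used to detect changes at the origin."""
    return hashlib.md5(data).hexdigest()


def needs_refetch(cached_at: float, cached_body: bytes, fetch_origin) -> bool:
    """Return True if the cached entry should be discarded and refetched.

    `fetch_origin` is a hypothetical callable that retrieves the origin
    server's current copy so its digest can be compared (the "short
    fetch" in Darren's suggestion would fetch only the digest itself).
    """
    if time.time() - cached_at > MAX_AGE:  # time-to-live has expired
        return True
    # TTL still valid: refetch only if the origin's content has changed.
    return digest(fetch_origin()) != digest(cached_body)
```

Of course, as argued below, no such mechanism helps if the cache operator
simply declines to deploy or configure it.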
I don't care what technological muscle we throw at this problem.
It won't provide a solution. The problem is not the design of the
protocol or the servers. It is the human component.
Dan accurately pointed out that the HENSA server is misconfigured. But who
is responsible for providing the proper configuration? And how can it be
tracked? It looks like the only reason O'Reilly found out that HENSA
was misconfigured was a table in a limited-distribution publication.
I suggest that the only reason caching servers exist is to improve
network resource usage and, by doing so, the responsiveness of
retrievals. This benefits the caching server's own network and works
against the best interests of the publisher. There is no incentive on
the cache manager's part to accede to the wishes of the publisher. And how
many publishers will state "All of our information is timely, so
don't cache any of it" simply as an expedient?
Caching servers treat the symptom, not the illness.