Sending an HTTP/1.0 request to an HTTP0 server can result in problems
when the client and server are sufficiently far apart on the network,
apparently because of the following:
(a) Client sends 'GET /foobar HTTP/1.0' CR LF.
(b) Server says, aha, I'm supposed to send data, and starts sending.
(c) Client sends the rest of the HTTP/1.0 request, including the
Accept fields, etc., followed by the HTTP/1.0 terminating (end-of-
request) blank line.
The result is that the socket gets confused and the client only ends
up getting the first chunk of data (usually 1024 bytes).
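The write pattern in (a)-(c) can be sketched like so (a hypothetical
illustration, not Mosaic's actual code; the path and Accept value are
made up):

```python
# Write #1: the request line alone.  An HTTP0 server treats this as a
# complete request and starts sending data immediately.
request_line = b"GET /foobar HTTP/1.0\r\n"

# Write #2: the rest of the HTTP/1.0 request -- Accept fields etc.,
# then the blank line that terminates the request.  By the time this
# arrives, the HTTP0 server is already mid-response.
rest_of_request = (b"Accept: text/html\r\n"
                   b"\r\n")

full_request = request_line + rest_of_request
```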
Lots of people are seeing this with Mosaic 2.0p3, and Bill Perry's
Emacs client hits the same situation in some cases. So it seems to be
general enough to be a real pain in the butt...
Any ideas?
Possibilities I see:
(a) Everyone upgrade to HTTP/1.0 pronto, and everyone forget about
HTTP0 forever.
(b) Change all existing HTTP0 servers to catch the HTTP/1.0 request
field when it's sent and wait for the rest of the HTTP/1.0 request
to come through (but ignore it).
(c) Change HTTP/1.0 to only send a single CR LF in the entire
request datastream.
None of these seem palatable to me. Thoughts?
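For concreteness, the second possibility above (changing existing HTTP0
servers to wait for the full request) amounts to something like the
following server-side sketch. This is my own illustration, assuming a
line-oriented read loop over the connection; `read_request` is a
made-up name:

```python
import io

def read_request(f):
    # Read the request line; if it carries an HTTP/1.0 version tag,
    # drain the remaining header lines up to the blank line (ignoring
    # their contents) before letting the server respond.
    request_line = f.readline()
    if request_line.rstrip(b"\r\n").endswith(b"HTTP/1.0"):
        while True:
            line = f.readline()
            if line in (b"", b"\n", b"\r\n"):  # blank line = end of request
                break
    return request_line

# An HTTP/1.0 request: the headers get consumed but ignored.
req10 = io.BytesIO(b"GET /foobar HTTP/1.0\r\nAccept: text/html\r\n\r\n")
# An old HTTP0 request: nothing extra to wait for.
req09 = io.BytesIO(b"GET /foobar\r\n")
```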
Thanks,
Marc