Scott> The reasons I would propose a method that allows for all requests to
Scott> be sent, then all responses received are:
Scott> It would encourage clients to "get their act together" before making
Scott> a connection, thus discouraging a "fetch, parse, fetch some more"
Scott> method that would cause the connection (and server instance) to stay
Scott> open any longer than necessary.
This is not always possible. For instance, the Interpedia project basically
boils down to complicated database queries with the responses returned as
virtual HTML documents. In many cases, it is _not possible_ for the client
to predict what its next query will be until it receives the response from
the last query.
While I understand and agree with the basic HTTP model being lightweight,
stateless, and clean, the fact is that complicated operations often require
some complexity to implement properly. You said it yourself:
Scott> This doesn't exist yet, but could be requested in the future--let's
Scott> think ahead!
I agree! Think ahead! Further modifications to HTTP should include the
ability to perform multiple queries and transfers without having to specify
them all in advance.
I'm currently using another port to perform the database queries using an
intermediate script (i.e., the client asks for something, a gateway then makes
a complicated set of queries on another port, and returns the results to the
client). This will likely turn out pretty well, but I'm looking down the
road and thinking it would be cleaner to move some of the stuff directly
into HTTP. For instance, the client requests some information, and gets too
much information returned. It then wants to apply a URC (or SOAP) to the
previous query to narrow the search.
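To make the gateway arrangement concrete, here is a minimal sketch of the pattern I described: the client makes one simple request, the gateway fans it out into database queries (stubbed out here) and returns the results as a virtual HTML document. All of the names and the fake data are illustrative, not from any real system.

```python
def query_database(term):
    # Stand-in for the complicated set of queries made on the other port.
    fake_db = {
        "interpedia": ["article-1", "article-2"],
        "http": ["rfc-draft", "www-talk-archive"],
    }
    return fake_db.get(term, [])

def gateway(request_term):
    # Run the queries, then render the hits as a virtual HTML document
    # to hand back to the client.
    hits = query_database(request_term)
    body = "\n".join("<li>%s</li>" % h for h in hits)
    return "<html><body><ul>\n%s\n</ul></body></html>" % body

print(gateway("http"))
```

The point of the sketch is only the shape of the data flow: one client request, many back-end queries, one synthesized response.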
If the protocol is completely stateless (as any MIME-oriented MGET proposal
is), the client will have to resend the entire query, forcing the server to
redo all the processing. Only in a stateful protocol can this be implemented
cleanly and efficiently.
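A toy sketch of the contrast, with a hypothetical stateful session object: the server keeps the previous result set between requests, so a narrowing step works on the cached results instead of re-running the original query. The records and the narrowing criterion are made up for illustration.

```python
RECORDS = [
    {"id": 1, "topic": "http", "year": 1993},
    {"id": 2, "topic": "http", "year": 1994},
    {"id": 3, "topic": "gopher", "year": 1993},
]

def run_query(topic):
    # The expensive full query; a stateless server must repeat this
    # every time the client refines its request.
    return [r for r in RECORDS if r["topic"] == topic]

class StatefulSession:
    # The server remembers the last result set between requests.
    def __init__(self):
        self.last_result = None

    def query(self, topic):
        self.last_result = run_query(topic)
        return self.last_result

    def narrow(self, year):
        # Refine the cached results; the original query is not resent
        # and not re-executed.
        self.last_result = [r for r in self.last_result
                            if r["year"] == year]
        return self.last_result

session = StatefulSession()
session.query("http")        # full query runs once
print(session.narrow(1994))  # refinement reuses the cached set
```

In the stateless version there is no `session`: each narrowing request must carry the whole original query plus the refinement, and `run_query` runs again from scratch every time.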