Re: HTTP-NG: status report
Gary Adams - Sun Microsystems Labs BOS (email@example.com)
Tue, 22 Nov 1994 08:56:07 +0500
> From firstname.lastname@example.org Tue Nov 22 07:47:51 1994
> Looked great!
It's a good time to lift up the rock and ask a few what, why, when, who,
how, and where questions. (And I liked it, too.)
What are the key areas that need to be addressed in the next rev of the
HTTP protocol? (performance, security, extensibility, xxx)
Why does the protocol need to be enhanced? (or why have we left it the way it
is for such a long time?)
When could a new version be specified/deployed? (summer 95?)
Who's ready, willing, and able to make it happen? (W3O)
How do we make a smooth transition to the new capabilities?
Where can I get it :-)
Actually, I'm suggesting that an HTTP working group be formed to flesh
out the requirements and details of a next-generation HTTP spec, similar
to the excellent work that has been done in the HTML working group for
a verifiable and well-documented HTML 2.0.
> The act of fetching inlined images could be made faster if the server
> were able to tell the client directly in its response headers "I think
> you'll want to get these items next" - that way the client doesn't have
> to parse the document as it's coming down the pipe looking for inlined
> images. I.e., in HTTP/1.0 terms (since I haven't seen the HTTP-NG terms)
> C: GET / HTTP/1.0
> S: ...
> Inlined-objects-URI: /Images/yes.gif, /Images/no.gif
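A client could consume the proposed header with something like the
sketch below. This is illustrative only: Inlined-objects-URI is the
hypothetical header from the quoted message, not part of HTTP/1.0, and
the comma-separated format is assumed from the example above.

```python
# Sketch: turn the proposed (hypothetical) Inlined-objects-URI response
# header into a list of URIs the client could start prefetching before
# it has parsed the document body.

def parse_inlined_objects(headers):
    """Return the URIs named in an Inlined-objects-URI header, if any."""
    value = headers.get("Inlined-objects-URI", "")
    return [uri.strip() for uri in value.split(",") if uri.strip()]

headers = {"Inlined-objects-URI": "/Images/yes.gif, /Images/no.gif"}
print(parse_inlined_objects(headers))
# -> ['/Images/yes.gif', '/Images/no.gif']
```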
This is where I always get confused: separating the responsibility of the
HTTP protocol on the wire from the features I expect an HTML document
authoring system to support. To follow from your example above, the next
objects transmitted over the wire would look like this:
(important to display these as soon as possible -- warm fuzzies for users)
In other words, a document header is accessible with a minimal amount of
processing and may contain equally valuable processing instructions at
an application level. If this information is in the HTML HEAD, it could be
possible to achieve similar savings over other transport schemes (ftp, gopher,
etc.).
> The server just has to spend the effort to derive this info once and can
> cache it in memory or to disk from then on, doing quick if-modified
> checks to see if that info may have changed. The client then still has
> the option of getting the inlined object from a cache somewhere.
When I divide up the processing, I'd like to see the document authoring
system take responsibility for as much of the content-specific processing
as possible. E.g., a list of inline images with their timestamps and sizes
could easily be produced when a document is "published" into a particular
document server's 'database'. (In today's terms this means a user can
manually add the directives to the header of their document, or run a
script to update the header information when they copy it to the publicly
accessible file system.)
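The publish-time script described above could be as small as the sketch
below. It assumes HTML-ish input and emits the hypothetical
Inlined-objects-URI line from the earlier example; the function names
and the header format are my assumptions, not anything standardized.

```python
import re

def inline_image_list(html_text):
    """Collect the SRC of each IMG tag, in document order."""
    return re.findall(r'<img[^>]*\ssrc="([^"]+)"', html_text, re.IGNORECASE)

def publish_header(html_text):
    """Derive the proposed (hypothetical) header line once, at publish time."""
    return "Inlined-objects-URI: " + ", ".join(inline_image_list(html_text))

doc = '<HTML><BODY><IMG SRC="/Images/yes.gif"><IMG SRC="/Images/no.gif"></BODY></HTML>'
print(publish_header(doc))
# -> Inlined-objects-URI: /Images/yes.gif, /Images/no.gif
```

The point of doing this at publish time is that the cost is paid once,
by the author, rather than on every request.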
If we intend for the server to parse documents and take action before the
client has determined what they will do with the information, there will need
to be some additional dialog upfront where the client can communicate its
"intentions" about the information it has requested. e.g. I'd rather have
10,000 clients parse the document and determine what actions to take than
to have the server parse the same document 10,000 times while the next 10,000
requests are waiting. Best of all I'd rather have the document author wait
while their document is checked for errors and optimized for rapid retrieval
just once so all 10,000 of us could enjoy the benefits.
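The trade-off above can be put in rough numbers. The figures below are
made up purely for illustration; only the ratio matters.

```python
# Illustrative arithmetic for parse-once vs. parse-per-request.
# All numbers are hypothetical.
parse_cost_ms = 50            # assumed cost to parse one document
requests = 10_000

server_side_total = parse_cost_ms * requests   # server parses on every request
publish_time_total = parse_cost_ms             # author pays the cost once

print(server_side_total, publish_time_total)
# -> 500000 50
```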
> Your slick hype/tripe/wipedisk/zipped/zippy/whine/online/sign.on.the.ish/oil
> pill/roadkill/grease.slick/neat.trick is great for what it is. -- Wired Fan #3
> email@example.com firstname.lastname@example.org http://www.hotwired.com/Staff/brian/