I for one often wish WinMosaic had a much deeper cache for inline
images, especially as I page up and down just a few layers in a server
and go "oops, downloading that lovely but huge mess of icons that I just
saw three pages ago." By contrast, the Gopher user very rarely feels any
pain when cruising menus. Only a few menus out there are tremendously
long.
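To make that concrete, here's a rough sketch in Python of the sort of
least-recently-used image cache I have in mind; the class name, the
URL keys, and the 2 MB budget are all invented for illustration, not
anything WinMosaic actually does:

from collections import OrderedDict

class ImageCache:
    # Toy LRU cache for inline images, keyed by URL. Purely
    # illustrative; not how any real browser manages its cache.
    def __init__(self, max_bytes=2_000_000):  # arbitrary 2 MB budget
        self.max_bytes = max_bytes
        self.used = 0
        self.images = OrderedDict()           # url -> raw image bytes

    def get(self, url):
        if url in self.images:
            self.images.move_to_end(url)      # mark as recently used
            return self.images[url]
        return None                           # miss: caller re-downloads

    def put(self, url, data):
        if url in self.images:
            self.used -= len(self.images.pop(url))
        self.images[url] = data
        self.used += len(data)
        while self.used > self.max_bytes:     # evict oldest entries first
            _, old = self.images.popitem(last=False)
            self.used -= len(old)

With a deep enough budget, paging back three screens would hit the
cache instead of the network.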
You could probably do a pretty good simple study by analyzing logs and
coming up with a statistical profile of the various files delivered --
i.e. keeping track of bytes of text delivered versus bytes of images,
sounds, etc.
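As a first cut, assuming a made-up log format of one "path bytes" pair
per line (real server logs differ, so the parsing would need
adjusting), a Python tally might look like:

import sys
from collections import defaultdict

# Map file extensions to crude categories; the table is illustrative.
TYPES = {".txt": "text", ".html": "text",
         ".gif": "image", ".xbm": "image", ".au": "sound"}

totals = defaultdict(int)
for line in sys.stdin:
    path, nbytes = line.rsplit(None, 1)  # assumed "path bytes" format
    ext = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
    totals[TYPES.get(ext, "other")] += int(nbytes)

for kind, total in sorted(totals.items()):
    print(f"{kind:>6}: {total} bytes")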
You'd have to define terms if you undertook a more in-depth study. What
do you mean by "efficiency"? By browsing 200 Gopher servers I might find
a document I'm looking for, but to an external observer it appears I'm
harvesting a lot of useful information. But an index tool might've saved
me 199 scans.
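To put invented numbers on that: say a typical Gopher menu runs 2K and
the document I want is 20K. Scanning 200 menus by hand versus one
indexed lookup:

menu_bytes = 2_000            # invented figures, purely illustrative
doc_bytes = 20_000

browsing = 200 * menu_bytes + doc_bytes  # scan 200 menus, then fetch
indexed  =   1 * menu_bytes + doc_bytes  # one lookup via an index tool

print("browsing:", doc_bytes / browsing)  # useful/total, about 0.05
print("indexed: ", doc_bytes / indexed)   # about 0.91

Both sessions deliver the same useful document, but the byte counts an
external observer sees differ by nearly a factor of twenty.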
This is one reason why taking the NSFnet traffic stats and graphing them
can be misleading -- the value of information delivered is not
proportional to the quantity of bytes sent out. (I'm guilty of
distributing such charts myself.) And of course network
bandwidth isn't the only issue -- painting those inline images also
takes real time for the client program.
/Rich Wiggins, CWIS Coordinator, Michigan State U
> Has anyone done any recent studies of the efficiency of the various
>common information sharing protocols (HTTP, Gopher, WAIS, etc)? I'd
>like to know how each compares to the others and also where the majority
>of the bandwidth is being consumed (transferring images, I assume).
> Is anyone planning on doing such studies? Does anyone have any suggestions
>and/or experience regarding measurement techniques? Chris