When a user installs Gnutella, or on a configuration screen while running Gnutella, they will be able to pick what level of promotion they are willing to let their client attempt. They will also be able to limit how much memory they are willing to dedicate to Gnutella. There will be at least three levels of promotion:
Slave
At this level a client computer will behave much like it does in the current Gnutella network. It will attach itself to a few computers and start performing searches.
It will have the option of being cached by a 1st level caching computer, though. If this happens it will upload its complete file index to the 1st level caching computer and stop performing searches itself, because the searches will be conducted on the 1st level caching computer. It will send updates on its file system to the 1st level caching
computer through delta updates.
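As an illustration, a delta update could be as simple as the set difference between the index snapshot the cache already holds and the slave's current index. This is a minimal sketch; the message format and function name are assumptions, not part of the proposal.

```python
def delta_update(cached_index, current_index):
    """Return the changes since the last sync with the 1st level cache."""
    cached, current = set(cached_index), set(current_index)
    return {
        "added": sorted(current - cached),    # files the cache must start indexing
        "removed": sorted(cached - current),  # files the cache must drop
    }
```

Only the two change lists travel over the wire, instead of the complete file index every time.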
The only information a slave has to maintain is whether a computer is caching it and, if so, the IP address of that computer.
The slave will get requests from 1st level caches for permission to cache it, and it will have the opportunity to accept or decline. The reason the caches ask instead of the slaves asking is to prevent the caches from being inundated with caching requests.
When a slave drops:
When a basic slave computer drops from the network
there is nothing that has to be done. It is just gone.
1st Level Cache
A client computer can become a 1st level cache simply if the user allows for it and if the computer accepts incoming connections.
I am not sure of the best method for deciding which slave clients should be offered the option of being cached, but criteria such as persistence (length of uptime), a relatively small file index and a decent ping time may be sufficient.
Large file indices should not be cached. A large file index would be more than 250 files or so. The savings in traffic are the same whether a large file index or a small one is cached, so small, persistent file indices should be cached before larger ones.
A computer will be asked whether it can be cached once it has been connected to a possible/actual 1st level cache for n hours. I believe that 1 - 5 hours may be enough.
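The eligibility criteria above could be checked as follows. The thresholds here are only illustrative defaults drawn from the numbers suggested in this section; nothing in the proposal fixes them.

```python
def cacheable(uptime_hours, index_size, ping_ms,
              min_uptime=1, max_files=250, max_ping=500):
    """Persistent, small-index, responsive slaves are offered caching first.

    min_uptime mirrors the 1 - 5 hour connection requirement, max_files the
    ~250-file index limit; max_ping is a hypothetical "decent ping" cutoff.
    """
    return (uptime_hours >= min_uptime
            and index_size <= max_files
            and ping_ms <= max_ping)
```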
I estimate that a cache should maintain no more than 4 - 5 client indices. This will keep the delta updates from clients down to a minimum. If the clients are not updating their files very much, the number of mirrored clients can increase. Clients that change their files too often can be dropped.
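One hypothetical way a cache could apply this policy is to count delta updates per cached client over some window, drop the clients that change their files too much, and keep the quietest ones up to capacity. The limits are assumptions for illustration.

```python
def prune_chatty_clients(update_counts, max_clients=5, max_updates=20):
    """Return the set of cached clients to keep.

    update_counts maps a client id to the number of delta updates it sent
    in the current window; clients over max_updates are dropped outright,
    and if the cache is still over capacity the quietest clients win.
    """
    keep = {c: n for c, n in update_counts.items() if n <= max_updates}
    quietest = sorted(keep, key=keep.get)[:max_clients]
    return set(quietest)
```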
When bringing on a new client to be cached, the file index is requested and stored on the 1st level cache. The slave then drops all its other connections to the network, since it no longer performs searches itself.
When a cache drops:
When a cache drops from the network the clients will notice, since they must receive a message from their cache every [n] seconds. If they do not receive a message they will ask the cache about its status. If it does not respond within [n] seconds they will reconnect to other machines as uncached slave computers.
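The slave-side liveness check above can be sketched as a small monitor. The class name and the injectable clock are assumptions for illustration; the protocol only specifies "a message every [n] seconds".

```python
import time

class CacheMonitor:
    """Tracks when the slave last heard from its 1st level cache."""

    def __init__(self, heartbeat_interval, clock=time.monotonic):
        self.interval = heartbeat_interval
        self.clock = clock
        self.last_seen = clock()

    def on_message(self):
        # Any traffic from the cache counts as a heartbeat.
        self.last_seen = self.clock()

    def cache_silent(self):
        """True once the cache has missed its heartbeat window; the slave
        should then probe the cache and, failing that, rejoin the network
        as an uncached slave."""
        return self.clock() - self.last_seen > self.interval
```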
2nd Level Mirror
If the user allows for it, once a computer has become a 1st level cache it will be able to promote itself to a 2nd level mirror. In this stage
the computer will be able to copy the contents of other 1st level caches.
All connections would be typed according to the computer on the other end. Thus a computer with the resources to become a 2nd level mirror would know which 1st level caches it was connected to.
Mirrors would want to form clusters of 2 - 4 computers (more if there are few changes in the connectivity structure or file indices). This would result in load sharing and a reduction in traffic to
½ - ¼. (Assuming each cache maintains 5 clients, the total number of clients mirrored by a cluster of 4 computers would be about 25. Assuming each client offers about 200 files, that would be a combined file index of about 5000.)
A mirror will share with its incoming links the other computers that are its co-mirrors. Thus the incoming links will automatically balance the load by rotating the computer to which they send search requests. Also, each computer in a mirror cluster will have to
share its outgoing links, so the network retains the same connectivity no matter which mirror handles a particular search request.
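The rotation an incoming link performs could be plain round-robin over the mirror list it was given. A minimal sketch, with assumed names:

```python
def route_searches(mirrors, requests):
    """Assign each search request to a mirror in round-robin rotation,
    so load spreads evenly across the mirror cluster."""
    return [(req, mirrors[i % len(mirrors)]) for i, req in enumerate(requests)]
```

Any mirror can answer any request because the cluster shares both the cached file indices and the outgoing links.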
Again, mirror connections would be asked for. The criteria would be based on ping time and persistence of the connection.
When a mirror drops:
A mirror must talk to its co-mirrors every [n] seconds. Usually there will be file system updates or connectivity changes to pass around; if there are no messages, a simple status message should be exchanged.
When a mirror drops from a mirror cluster it will stop responding to all messages. Its status should be queried, and if it does not respond the mirror should be considered gone. All incoming connections should be sent an updated mirror list. Also, the file indices contributed by the lost mirror should be deleted from the other caches, because the slave computers attached to that mirror will now find some other way to reconnect to the network.
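The cleanup a surviving mirror performs can be summarized in one step. The data structures here (a mirror list and a map from mirror to the slave indices it contributed) are assumptions for illustration, not part of the proposal.

```python
def handle_mirror_loss(lost, mirror_list, indices_by_mirror):
    """Remove a dead co-mirror and the slave indices it contributed.

    The returned mirror list should be broadcast to all incoming links;
    the lost mirror's slaves are not re-adopted here, since they will
    find their own way back onto the network as uncached slaves.
    """
    mirror_list = [m for m in mirror_list if m != lost]
    indices_by_mirror.pop(lost, None)  # drop that mirror's cached indices
    return mirror_list, indices_by_mirror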