staindrocks Posted November 10, 2006

What does "spamming the tracker" mean, and does BitComet 0.70 do it? I belong to a private tracker community (B*tS*up) that just recently decided to add clients to its ban list. What doesn't make sense to me is that many other trackers which have always banned all releases of BitComet have now unbanned the 0.70 release. BS has always allowed its members to use BitComet (I'm not sure which releases, but I know 0.70 because I've used it ever since I joined 20 weeks ago), and now all of a sudden they ban ALL BitComet releases. The big reason for banning it, from what I've heard, is that it spams the tracker. I know that many µTorrent and Azureus users like to tell people a few bad things about BitComet, things that have since been fixed. But they keep saying them, and the more of them that post these things, the more people believe them. So I need to know, from somebody who definitely knows: is this true, and how does BitComet compare to µTorrent and Azureus in this respect?
kluelos Posted November 10, 2006

A hefty part of the problem is that people seem to just invent behaviour that no BitTorrent client has, so it wouldn't be surprising if somebody invented something new and decided to call it spamming.

Just about all BitTorrent clients behave the same way. The behaviour is defined in the BitTorrent spec, which you can learn about in the developers section at bittorrent.org. Note that this is a static definition: bittorrent.org no longer controls, or even leads, BitTorrent development, and has become increasingly irrelevant. Any future changes to that spec won't reflect conditions in the field, only the bittorrent.org client itself (also increasingly irrelevant).

That said: a client opens a torrent and connects to the tracker to announce its membership and readiness to join the swarm. The tracker acknowledges and sends back a list of the current swarm members, plus an interval telling the client how long to wait before announcing again. The format of these messages is precisely defined (they're bencoded, the same encoding used in .torrent metafiles).

That's if everything goes right. The tracker may time out before responding. If you leave the client alone, it will gently try again and again, at increasing intervals, until it gets a successful announce. However, it is possible for a client, whether Azureus, µTorrent or BitComet, to manually re-announce to the tracker on command at any time. This is not recommended, and just adds to the load on the tracker. That behaviour has, at some time in the past, been described as "spamming the tracker" by somebody who didn't really understand what they were talking about. (Not that this is exactly novel in the field.)

Any of the big three can be told to manually re-announce, and it's just as easy in any of them. It's not news, and not surprising, for someone to decide to blame one of them for all evil. If rational minds prevail, good. If not, well, irrational minds aren't that much of a loss to the community.
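For what it's worth, the normal exchange described above is just an HTTP GET with a handful of query parameters, answered by a bencoded dictionary containing the peer list and an "interval" key. Below is a minimal sketch in Python of that single request. The parameter names come from the spec, but the function itself is purely illustrative and is not taken from any of the clients mentioned; the tracker URL, info_hash and peer_id you would pass in are placeholders.

```python
import urllib.parse
import urllib.request

def announce(tracker_url, info_hash, peer_id, port, left, event=None):
    """Send one announce to the tracker and return its raw bencoded reply."""
    params = {
        "info_hash": info_hash,   # 20-byte SHA-1 of the torrent's info dictionary
        "peer_id": peer_id,       # 20-byte identifier the client chooses for itself
        "port": port,             # port the client is listening on
        "uploaded": 0,
        "downloaded": 0,
        "left": left,             # bytes still needed to complete the download
        "compact": 1,             # ask for the compact peer-list format
    }
    if event:                     # "started", "stopped" or "completed"
        params["event"] = event
    url = tracker_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as response:
        # The reply is a bencoded dictionary; the keys that matter here are
        # "peers" (current swarm members) and "interval" (seconds to wait
        # before the next announce).
        return response.read()
```

A well-behaved client decodes that reply, remembers the "interval" value, and simply waits that long before announcing again.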
staindrocks Posted November 10, 2006 Author (edited)

[Mod. Edit - You do not have to quote a long post if it is directly above yours.]

Sorry about that. So, excluding manual re-announces, do you know how BC's announce behaviour compares to µTorrent and/or Azureus? Are you saying that there is no difference?

Edited November 11, 2006 by staindrocks
kluelos Posted November 12, 2006

There are two separate situations.

1) The client did not get a response from the tracker, usually because of a timeout. Retry behaviour here is programmed into the client, not user-settable. It isn't well defined in the spec, but it is generally the same from one client to the next. If you're bored enough, you can watch a client do this after a tracker timeout: it will tell you when it's going to try again, and the retries come at gradually longer intervals until one finally succeeds. So you can see for yourself that the behaviour is comparable.

2) The client got a response. That response includes an interval from the tracker, telling the client how long to wait before announcing again. All three clients obey it, so the interval is determined by the tracker. This is the normal behaviour for a working client.

There is no advantage to be gained by announcing more often. Swarm composition doesn't vary all that much over short periods (20 minutes or so), and clients don't react to changes in the peer list very quickly either. A fresher peer list matters a little more for connections that have dropped -- but even then it's not especially beneficial. All that happens is that clients which learn a peer has dropped stop trying to communicate with it. Those communications weren't going anywhere or doing anything anyway, and since the peer has dropped, it's not receiving them. The benefit, if any, is a nebulous one to the network itself -- an earlier cessation of futile traffic. But notice that the tracker isn't involved in any of that communication. It's basically a wash either way.
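To make the two situations concrete, here is a minimal sketch in Python (again, not code from any of the three clients) of an announce loop that handles both. The `do_announce` and `get_interval` callables are placeholders, standing in for the request shown earlier and for decoding the bencoded reply's "interval" key; the backoff numbers are made up for illustration, since each client picks its own schedule.

```python
import time

def announce_loop(do_announce, get_interval):
    """Keep a torrent announced, covering both situations described above."""
    retry_delay = 30                       # seconds; illustrative starting value
    while True:
        try:
            reply = do_announce()
        except OSError:
            # Situation 1: no response (e.g. a timeout). Wait and try again,
            # backing off gradually so a struggling tracker isn't hammered.
            time.sleep(retry_delay)
            retry_delay = min(retry_delay * 2, 1800)
            continue
        retry_delay = 30                   # reset the backoff after a success
        # Situation 2: the tracker answered. Obey the interval it sent back
        # instead of re-announcing on a whim.
        time.sleep(get_interval(reply))
```

The point of the sketch is only that the re-announce timing is driven by the tracker (case 2) or by a fixed, gradually lengthening retry schedule (case 1), in any of the three clients; nothing in normal operation lets the client flood the tracker.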