I kind of agree that it may be a good idea to increase the max limit for that option, so that one can tweak it further up and see what results they get. Perhaps even the default value could be reconsidered, given that average speeds are a couple of orders of magnitude higher than they were a few years ago.
But I’m not so sure about enforcing a higher lower limit than the current one.
It’s true that, apparently, most clients trade blocks of about 16KB in size, so at first glance it would stand to reason that this should be the minimum amount the client checks for within the allotted time.
But I’m not sure this approach wouldn’t miss some particular situations.
Take a case when your LAN and/or your ISP’s network is congested. As we well know, the average MTU on the Internet is below 1500 bytes, therefore a 16KB block will be divided into several IP packets. Let’s say your TCP RWIN value is 63712 bytes; this means more than one block would have to be received in order for the TCP process to fill the window. In times of congestion, not all of the packets may reach your PC, so your PC will have to wait to fill the TCP receive window before it can acknowledge all the TCP segments in the window and pass them in bulk to the upper application layer. Your client may not even have a single complete block received from that peer, but that may not be the remote peer’s fault if downstream network congestion caused it.
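To put some rough numbers on that scenario (these are just the illustrative values from above, not measurements):

```python
# Back-of-the-envelope arithmetic for the congestion example above.
# All values are illustrative, not taken from any real client.
BLOCK_SIZE = 16 * 1024   # typical BitTorrent block request size, in bytes
MSS = 1460               # typical TCP payload per packet with a 1500-byte MTU
RWIN = 63712             # the example TCP receive window from the text

packets_per_block = -(-BLOCK_SIZE // MSS)   # ceiling division
blocks_per_window = RWIN // BLOCK_SIZE

print(packets_per_block)   # 12 packets must all arrive for one 16KB block
print(blocks_per_window)   # the window spans 3 full blocks
```

So a single lost packet out of a dozen is enough to leave a block incomplete, through no fault of the peer sending it.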
So, as long as at least some data arrives, that’s a pretty good sign that the peer is probably not a leech, and you CANNOT know for sure for **which** reason you received a low amount of data.
OTOH, I think you may be mistaking this option for the peer selection algorithm that BitComet uses to select and keep the best peers.
I can’t speak for the team, as I don’t exactly know how the guts of BC work, but I think that the AL (anti-leech) algorithm and the peer selection algorithm are distinct, or at least distinct parts of one more complex algorithm.
That is more or less verified by the fact that, if there are many available peers, BC will connect to several of them, keep the best ones, and then, by rotation, try to connect to others, until it has tried most or all of them and established which are the “best of the best” that roughly match its speed and want to keep a steady connection with it.
You can check that by watching the Peers tab and expanding the bt_connecting and bt_disconnect categories. This is the “good peer selection algorithm”, the part which deals with getting the best possible peers.
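A toy sketch of that rotation idea, as I’m describing it (this is purely my guess at the logic; the function and field names are made up, not BitComet’s code):

```python
import random

def rotate_peers(connected, candidates, keep_n):
    """Keep the fastest peers, drop the slowest, and try fresh candidates.
    A conceptual sketch of the 'good peer selection' behavior described
    above -- NOT BitComet's actual implementation."""
    # Rank currently connected peers by their measured download rate.
    ranked = sorted(connected, key=lambda p: p["rate"], reverse=True)
    keep = ranked[:keep_n]          # the "best of the best" so far
    dropped = ranked[keep_n:]       # slots freed up for rotation
    # Fill the freed slots with peers from the swarm we haven't tried yet.
    untried = [p for p in candidates if p not in connected]
    fresh = random.sample(untried, min(len(dropped), len(untried)))
    return keep + fresh
```

Run repeatedly, something like this would eventually cycle through most of the swarm while never letting go of the peers that perform well, which matches what the bt_connecting/bt_disconnect categories seem to show.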
When there are lots of good peers, I don’t even think the AL algorithm gets to do much active work, because most of the peers that are unchoking you will send you data way above the AL thresholds. Even then, though, the AL algorithm may have to keep an eye on the peers gotten through optimistic unchoking.
But when there aren’t lots of peers, or not many with a good upload speed or with much of the torrent data, you may be connected to several peers which will not send you much data (be it because they don’t have any pieces you want, or because they have a small upload bandwidth spread too thin, or whatever). Think of one of those cases when the overall download speed you get for a torrent stays constantly below 10-15KB/s, but it’s steady and doesn’t stall.
I guess this is where AL comes into play and tries to get the “best among the worst” or, to put it more correctly, to “keep the really bad ones out”.
When there isn’t much to choose from, setting such high values (as you speak of) would only grind the torrent to a halt and hurt you.
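As I understand it, the AL check is basically just a lower-bound filter, something along these lines (the threshold values and names are invented for illustration):

```python
def anti_leech_check(bytes_received, elapsed_s, min_rate=1024):
    """Return True if the peer's average rate over the check window is
    acceptable, False if it should be disconnected. A guess at the shape
    of the logic; the real thresholds and timing are whatever BC uses."""
    avg_rate = bytes_received / elapsed_s
    return avg_rate >= min_rate

# With a very high minimum, even an honest-but-slow swarm gets emptied out:
slow_swarm = [3000, 8000, 12000]   # bytes each peer sent over a 10 s window
print(sum(anti_leech_check(b, 10, min_rate=16384) for b in slow_swarm))  # 0 survive
print(sum(anti_leech_check(b, 10, min_rate=256) for b in slow_swarm))    # all 3 survive
```

That’s the sense in which a too-high floor hurts you: in a slow swarm it disconnects everyone, and you end up with nothing instead of a slow-but-steady trickle.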
If you check the Peers tab, you will occasionally see peers with red icons (“bad peers”), which are the ones who sent too much bad/corrupted data. Those really are banned for a certain period of time.
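That mechanism is presumably the usual one: verify each piece against the hash from the torrent and ban a peer after too many failures. A toy illustration of that idea (the counters, thresholds, and ban duration are all invented; only the hash check itself is standard BitTorrent behavior):

```python
import hashlib, time

def check_piece(data, expected_sha1, peer, bad_counts, bans,
                max_bad=3, ban_seconds=3600):
    """Verify a downloaded piece against its SHA-1 from the .torrent file;
    track peers that send corrupted data and ban repeat offenders.
    A sketch of the 'red icon' banning described above, not BC's code."""
    if hashlib.sha1(data).digest() == expected_sha1:
        return True                                  # piece is good
    bad_counts[peer] = bad_counts.get(peer, 0) + 1   # one more bad piece
    if bad_counts[peer] >= max_bad:
        bans[peer] = time.time() + ban_seconds       # banned for a while
    return False
```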
I don’t know if those disconnected by the AL algorithm also get banned for a period or just disconnected, but as you can see, there are several distinct layers in how BC treats peers and connects to and disconnects from them.
Again, this is my take, I don’t speak for the team, but it sort of stands to reason. From empirical observation, BC always seems to manage to find the best peers in the swarm (the ones that will want to “talk” to it, that is), so the AL algorithm doesn’t seem to be the only thing employed in selecting peers. In fact, AL is not employed at all in “selecting” peers; it only “de-selects” the bad ones, while something else “chooses” the best ones.
That is because, from the way it’s constructed, the AL algorithm seems to establish only the “lower acceptable limit”: it has no provisions for choosing good peers, only means of eliminating the really bad ones. So, to explain BC’s current behavior, there would still have to be a different algorithm focused on getting the best peers possible from the remaining pool.
Meaning that, while BC has AL, which says: “this is the minimum and if you don’t have it you don’t get into the club”, it also has another algorithm which says: “OK, let’s see who the best peers are and let’s constantly select and try to get them”. The second one runs the “better peers chase”, so to speak, and is the second part of the equation, in my opinion. That is the real “meat” of the peer selection algorithm, as far as good speeds are concerned.
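Putting the two together, the division of labor I’m describing would look roughly like this (again, a conceptual sketch with made-up names, not anything from BC’s source):

```python
def select_peers(peers, min_rate, slots):
    """Two-stage peer handling as described above:
    stage 1 (the AL-like bouncer) drops anyone below the floor;
    stage 2 (the 'better peers chase') keeps the fastest of the rest."""
    passed = [p for p in peers if p["rate"] >= min_rate]   # minimum to get in
    ranked = sorted(passed, key=lambda p: p["rate"], reverse=True)
    return ranked[:slots]                                  # best of the rest
```

The point being that the first stage only filters; all the actual “choosing” happens in the second stage, which is why cranking the first stage’s floor up doesn’t make it select better peers.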
Thus, setting a very high “lower” limit for the AL algorithm would mean trying to make the AL routine perform the other one’s job, and I’m not sure that would be a wise choice.