Hi there.
(My post seemed to be overly quoted, so I’ve decided to use different colors for my old text (blue) and your text (light brown) instead of quotes and comments. Sorry for the inconvenience.)
When I said “We have almost the same idea being discussed nearby”, I meant exactly what you said later: “both features require identical file identification between different torrents”. 
Well, there are lots of features that could be added to BitTorrent clients & trackers based on a mechanism for identifying single files across different .torrents. And that mechanism is what we are all talking about. Now let me be a little more personal 
However, a solution would be to reconstruct blocks from one torrent when you have two consecutive blocks from the other.
Example: You just downloaded blocks 123 & 124 from torrent A.
Blocks 123 & 124 in torrent A belong to file_a.txt.
I assume file_a.txt is divided between only two pieces (let’s use BitComet terms), and then it is 100% completed.
In torrent B, that section of the file (both blocks from A) is divided among blocks 456, 457, & 458.
Therefore, by using offsets, you should be able to reconstruct & hash-check block 457 in torrent B, and (with more work) the end of block 456 (so you can stop that block’s download early).
Piece #457 must be OK, but #456 can’t be verified yet: we need its first part before we can hash-check it. We have no idea when we’ll get that part, so we’ll have to store its last part in the cache without being able to send it out or use it in any other way.
Since you have block 457, the download of torrent B just progressed without needing to obtain that block from torrent B’s swarm (although my suggestion would be to treat them as one download and construct files from blocks of both torrents).
No objections. Just some data (the last part of #456 & the first part of #458) sitting uselessly in the cache, for now and possibly indefinitely.
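To make the offset math above concrete, here’s a rough Python sketch. All names are mine, and I’m simplifying: a single shared file, torrent B’s pieces aligned to the file’s start, and SHA-1 piece hashes as in the standard metainfo. A real client would also have to handle pieces spanning file boundaries.

```python
import hashlib

def covered_pieces(start, end, piece_len):
    """Indices of torrent-B pieces lying entirely inside byte range [start, end)."""
    first = -(-start // piece_len)  # ceiling division: first piece starting at or after `start`
    pieces = []
    i = first
    while (i + 1) * piece_len <= end:
        pieces.append(i)
        i += 1
    return pieces

def verify_cross(data, file_offset, piece_len, piece_hashes):
    """Hash-check every torrent-B piece fully covered by `data`,
    where `data` sits at `file_offset` within the shared file.
    Returns the indices that verified OK."""
    ok = []
    for idx in covered_pieces(file_offset, file_offset + len(data), piece_len):
        lo = idx * piece_len - file_offset  # piece start, relative to `data`
        piece = data[lo:lo + piece_len]
        if hashlib.sha1(piece).digest() == piece_hashes[idx]:
            ok.append(idx)
    return ok
```

With blocks 123 & 124 of torrent A in hand, `verify_cross` would confirm #457 (fully covered) and simply skip #456 and #458, whose missing parts keep them unverifiable, exactly the cached-but-useless situation described above.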
It may not be 100% efficient, but only downloading from one swarm (the current functionality) is more like 50% comparative efficiency.
…assuming both swarms have an equal number of seeds & leechers, equal U/D speeds, and an equal distribution of pieces among peers. Not of much relevance to real life, IMHO… Never mind.
The point is the trade-off between CPU & memory usage and the overall change in traffic. Each downloaded piece from one torrent (and there may, and most likely will, be hundreds of them) has to be compared… with what? Here we are…
How can BC identify identical files in different torrents?
THIS seems to be the problem to solve first. (Well, let’s skip it for a while. Let’s assume WE tell BitComet that “these two” files are identical. And let’s even assume we’re right (which is not too obvious, to be precise…).)
A possible approach would be to construct the file with the data you get as you get it.
And how would we verify that the received data is correct? And how much of which torrent’s file data do we drop if there are (and there will be) errors?
Check what you have before requesting blocks.
Every, say, 1 second? Checking each of the xxx (in-)complete pieces of one xxx-MB torrent against the yyy (in-)complete pieces of another, yyy-MB one? (I still assume the files within torrents x and y ARE identical.)
Example: (X means downloaded, - means not)
File 1:
Hashes (A then B respectively):
|—1—|—2—|—3—|-
—|----6----|----7----|—
Blocks 6 & 7 of B are received.
—XXXXXXXXXXXXXXXXXXXXX—
|—1—|—2—|—3—|-
—|----6----|----7----|—
Block 2 of A is no longer needed, nor is the end of block 1, nor block 3 if the first part of block 8 of B is received first. Block priority is also affected by completion percentage (block 8 of B is favored over block 3 of A). More complicated than I originally envisioned, but it still seems possible and helpful.
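The “block 2 of A is no longer needed” logic can be sketched as a simple coverage check. This is a toy model (my own names and byte ranges, not BitComet’s internals): given the verified byte ranges already obtained via torrent B, find which of torrent A’s pieces have become redundant.

```python
def redundant_pieces(a_piece_len, a_num_pieces, b_done_ranges):
    """Pieces of torrent A whose entire byte range is already covered
    by a verified (start, end) byte range obtained from torrent B."""
    redundant = []
    for i in range(a_num_pieces):
        lo, hi = i * a_piece_len, (i + 1) * a_piece_len
        if any(s <= lo and hi <= e for s, e in b_done_ranges):
            redundant.append(i)
    return redundant
```

In the diagram above, the range covered by B’s blocks 6 & 7 would make A’s block 2 redundant, while blocks 1 and 3 survive because only part of each is covered.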
So here’s a question: how much will it cost in terms of resources? What CPU should I have, and how much memory (=what cache size) would be sufficient for this to work 24/7? Even if everything above is written into BitComet’s code and works flawlessly (they say, BTW, “the simpler, the better (=more reliable)”)…
On the contrary, if this isn’t supposed to work 24/7, how often will it actually be needed? Isn’t it possible that this feature will be too “heavy” to be widely used?
It’s not useless. It can be used to “repair” some “dead” torrents at the cost of high CPU usage; this would (partially) duplicate “alive” torrents. It can be used to receive files from more peers at the cost of high CPU/RAM/HDD usage.
What should private trackers’ users do if one file is in both a private & a public torrent?
It seems to me that the number of questions exceeds the number of answers… So my overall rating is “very questionable”.
You ask what I’d suggest myself? You already know it 
Add checksums of single files into the .torrent first. Then anyone will be able to post their suggestions in the appropriate topic 
E.g. it would become possible to find & download a single needed file across the whole BitTorrent community…
Any more ideas? I’m just wondering whether programmers can hear their users…