#1626
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
Quote:
That said, it doesn’t line up with my own experience. I once cracked and resubmitted a non-serial-numbered card that I had previously graded with TAG, and they actually identified it as the same card.
__________________
BO Resident TAG Grading shill
Last edited by rfgilles; 11-11-2025 at 06:25 PM.

#1627
Member
Join Date: Oct 2024
Location: North Carolina
Posts: 27
Quote:
Ultimately, the reports are more of a technical proof of concept than an actual productized version that works for the masses. I threw something together last year (when I was a “TAG Advocate” myself and was hoping I could contribute to development for them) that simply had a “Grading Summary” that pointed out the key areas and explained them from an easy-to-understand perspective. TAG’s DIG/DIG+ Reports today are just a lot of unnecessary information that doesn’t contribute to your actual grade or score, which only obscures things for the end user. As for my earlier comment about their reports being “as useful as PSA’s Grader Notes,” I’ll continue to stand by it. If you have to pay $150 to get “subgrades” that only slightly help with the “per area” understanding, then all of the other reports (DIG) are fundamentally useless. They look like they’re giving you a lot of information, but they’re really bombarding you with unnecessary detail that obscures the contributing factors to the score/grade as a whole (even with the DINGs being pointed out). It isn’t rocket science: if Jesse Peng is claiming that “TAG is a tech company and not a grading company” (as he did on Discord), then consumers should hold them to the bare minimum expected of a tech company. TAG is selling a Minimum Viable Product (MVP), not one that’s been productized into much more than that.

Quote:
As for the consistency: that’s an entirely different scenario that I’m not going to get into much, mostly because it’s a slippery slope of assumptions without enough verifiable testing on my end. My real concern is that they keep changing the scanning technology to make it better, without explaining that doing so practically invalidates any cards graded prior to that hardware/software upgrade. This isn’t a TAG-specific issue; any “machine learning” grading company needs to make this more transparent to the consumer.
__________________
Appearances are often deceiving - Aesop
Last edited by Ataraxia; 11-12-2025 at 09:17 AM. Reason: Didn’t want to double post — added additional quote and response to this response

#1628
Member
Join Date: Mar 2023
Posts: 141
Quote:
#1629
Member
Join Date: Oct 2024
Location: North Carolina
Posts: 27
Quote:
Versioning of the software and hardware + the model becoming more tuned to the flaws and issues of a card they’ve graded repeatedly = the grade/score of a card gets invalidated over time. It’s the principle of the entire system, the algorithm and model tied to it, and the underlying concept of the technology that they’re relying on. But they just hand-wave it into oblivion, likely because they don’t fully understand the technology themselves OR it isn’t using any sort of machine learning capability at all. I’d argue that even upgrading their hardware (i.e., the scanning tech) is going to cause a significant shift in their grading model, far more than something like PSA/CGC changing their grading rubric.
__________________
Appearances are often deceiving - Aesop

#1630
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
Quote:
https://my.taggrading.com/card/V9775940

The grading had to be consistent enough for them to identify it as a resubmit.
__________________
BO Resident TAG Grading shill

#1631
Member
Join Date: Jan 2022
Posts: 235
So this TAG card I have was graded a 974. I wonder what my chances are of crossing it over to a PSA 10.
#1632
Member
Join Date: Oct 2024
Location: North Carolina
Posts: 27
Quote:
I’ve done fairly extensive testing with TAG, primarily focused on QC/damage (since they had damaged a number of my cards). I had a few cards flagged and more cards ignored (and still regraded). It’s a numbers game, essentially, which amounts to “exploiting” the logic behind machine-learning grading. At the same time, I’m not going to put exact numbers out there as a way of “tarnishing their reputation,” since I do like their slab and want them to succeed in the long run. But they do have to do better than they are right now and be genuinely transparent.
__________________
Appearances are often deceiving - Aesop

#1633
Member
Join Date: Oct 2024
Posts: 504
#1634
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
Quote:
Also, there’s no vague “threshold” here. The Juan Soto I resubmitted has a POP 2.
__________________
BO Resident TAG Grading shill

#1635
Member
Was the card flagged by the grading process itself, or was it flagged because you sent them the same card from the same account?
__________________
Always looking for rarer Rik Smits cards and cards from the 2014-15 Spectra Global Icons set. Send me a message!

#1636
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
The card was correctly flagged as a resubmission because the same defects were identified both times, meaning the grading was consistent.
__________________
BO Resident TAG Grading shill

#1637
Member
So the card was graded 100% the same? Doubt that. Other submitters’ cards weren’t flagged, but yours were? Did you submit the cards through the same account?
__________________
Always looking for rarer Rik Smits cards and cards from the 2014-15 Spectra Global Icons set. Send me a message!

#1638
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
Quote:
It was graded consistently enough that TAG identified it as a resubmit. And of course it was the same account; people submit multiple copies of the same card all the time.
__________________
BO Resident TAG Grading shill
Last edited by rfgilles; 11-14-2025 at 11:58 AM.

#1639
Member
Join Date: Oct 2024
Location: North Carolina
Posts: 27
Quote:
Admittedly, I’ve only submitted ~650 cards total to TAG to experiment with (give or take). I focused mostly on QC (since they damaged a number of my PC cards and I wanted to understand whether I should keep sending them $100-300 cards or just keep it to “this would look nice in a TAG slab, but it’s less than $50 in value”), so my sample size for the fingerprinting threshold is less than 40, which isn’t large enough to give you anything above roughly 60% confidence with an 8-10% margin of error. That said… I won’t be sending them any card worth over $50 any time soon. Lol.
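For anyone wondering where rough numbers like that come from, here’s a quick back-of-envelope sketch using the standard normal-approximation margin of error for a proportion (the sample size and confidence levels below are purely illustrative, not my actual test data):

Code:
import math

# Normal-approximation margin of error for a proportion,
# using the worst case p = 0.5. Purely illustrative numbers.
def margin_of_error(n: int, z: float, p: float = 0.5) -> float:
    return z * math.sqrt(p * (1 - p) / n)

n = 40  # roughly the size of the fingerprinting sample
for label, z in [("60% confidence", 0.84), ("80% confidence", 1.28), ("95% confidence", 1.96)]:
    print(f"n={n}, {label}: +/- {margin_of_error(n, z):.1%}")
# n=40, 60% confidence: +/- 6.6%
# n=40, 80% confidence: +/- 10.1%
# n=40, 95% confidence: +/- 15.5%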
__________________
Appearances are often deceiving - Aesop

#1640
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
Quote:
Quote:
So which is it?
__________________
BO Resident TAG Grading shill

#1641
Member
Join Date: Oct 2024
Location: North Carolina
Posts: 27
Both.
If the card you’re submitting has too low of a population, the system will easily detect the card because the same dings/impacts show up. That even holds for ever-so-faint modifications or adjustments (like fixing a corner or a dent).

If the card you’re submitting has too high of a population, the system will still be able to detect the card because of the dings unique to your copy within that population.

What I was saying is that I’m not going to give the exact numbers for either, since the system can be abused (which goes against the whole purpose of the “fingerprinting technology” they’ve implemented). The reason the check exists is to counter the “please submit again” model (graders actually benefit financially from resubs/regrades). That said, even with my testing, I can’t give a definite number on the high end anyway. I’m not saying anything I didn’t already say before: both “too much” and “too little” information can make their system more or less consistent. There’s a murky gray area in there that’s exploitable and inconsistent.
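To make the idea concrete, here’s a toy sketch of the kind of defect-fingerprint matching I’m describing (Python, purely illustrative; this is not TAG’s actual algorithm, and the coordinates, defect types, and threshold are made up):

Code:
# A "fingerprint" here is just a set of (x, y, defect_type) features pulled from a scan.
def jaccard(a: frozenset, b: frozenset) -> float:
    """Similarity between two defect sets (1.0 = identical patterns)."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def flag_resubmission(new_fp, database, threshold=0.75):
    """Return IDs of previously graded cards whose defect pattern matches the new scan."""
    return [cid for cid, fp in database.items() if jaccard(new_fp, fp) >= threshold]

database = {
    # Low pop: this copy's defect pattern is effectively unique.
    "V001": frozenset({(12, 40, "ding"), (210, 8, "wear"), (95, 150, "scratch")}),
    # High pop: two physically different copies sharing common factory flaws.
    "V002": frozenset({(50, 50, "print_line"), (300, 20, "edge_chip")}),
    "V003": frozenset({(50, 50, "print_line"), (300, 20, "edge_chip"), (7, 7, "ding")}),
}
resub = frozenset({(12, 40, "ding"), (210, 8, "wear"), (95, 150, "scratch")})
print(flag_resubmission(resub, database))  # ['V001'] -- the resubmit is caught
print(round(jaccard(database["V002"], database["V003"]), 2))  # 0.67 -- distinct copies already look similar

Where you set that threshold is exactly the gray area: too loose and distinct copies of a high-pop card collide, too strict and rescans of the same card slip through.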
__________________
Appearances are often deceiving - Aesop

#1642
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
Quote:
__________________
BO Resident TAG Grading shill

#1643
Member
Join Date: Oct 2024
Location: North Carolina
Posts: 27
Quote:
The assumption is that the pattern of micro-defects is unique to each card, so comparing two photos verifies the card’s identity. But that system can be “broken” (exploited/bypassed) within the gray space where there’s enough variation that more than one card has similar defects, yet not so much that your card becomes “obviously unique” again. My problem is that I don’t know the exact number, since I’ve submitted fewer than 1,000 cards total (and only a few were deliberately resubmitted, with a 70/30 pass/fail rate). I also only tested TAG X, fwiw (but that shouldn’t matter, since they tend to use the same scan data for DIG (TAG X) vs DIG+ (Express); it’s just presented differently). This all goes back to the nature of machine learning: even something as simple as changing the scanning hardware to a higher resolution could be enough for the system to fail the fingerprinting check, depending on how much the scan data varies.
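Here’s a small illustration of that last point (again purely hypothetical; a naive position-based check, not TAG’s actual implementation): if defect locations are compared in raw pixel coordinates, a scanner upgrade that doubles the resolution makes the very same card fail the check unless the coordinates are normalized.

Code:
# Defect locations in raw pixel coordinates; tolerance is in pixels.
def matches(fp_a, fp_b, tol: float = 3.0) -> bool:
    """True if every defect in fp_a has a counterpart in fp_b within tol pixels."""
    return all(
        any(abs(ax - bx) <= tol and abs(ay - by) <= tol for bx, by in fp_b)
        for ax, ay in fp_a
    )

old_scan = [(12, 40), (210, 8), (95, 150)]        # defects found on the old scanner
new_scan = [(x * 2, y * 2) for x, y in old_scan]  # same card, rescanned at 2x resolution

print(matches(old_scan, new_scan))                               # False: same card fails the check
print(matches(old_scan, [(x / 2, y / 2) for x, y in new_scan]))  # True once coordinates are normalized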
__________________
Appearances are often deceiving - Aesop
Last edited by Ataraxia; 11-14-2025 at 08:56 PM. Reason: Fixed a few typos — typed too fast without double-checking.

#1644
Member
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302
Quote:
You will have better luck finding the Loch Ness Monster than a “gray space”.
__________________
BO Resident TAG Grading shill
Last edited by rfgilles; 11-14-2025 at 11:31 PM.