Blowout Cards Forums
GRADING For all grading talk - PSA, BGS, SGC, etc

Old 11-11-2025, 06:21 PM   #1626
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by SLGSports View Post
This new video debunks the myth that TAG grading is consistent. 7 cards resubmitted to TAG and got wildly different grades (and identified issues).

AI models are clearly having trouble identifying surface issues and even coming up with consistent centering measurements (which is super easy tech-wise).

https://www.youtube.com/watch?v=LC1Vr2Y4j94
This video has already been posted in the thread. To me, it is the most damning thing to come out about TAG.

That said, it doesn’t line up with my own experience. I once cracked and resubmitted a non-serial-numbered card that I had previously graded with TAG, and they actually identified it as the same card.
__________________
BO Resident TAG Grading shill

Last edited by rfgilles; 11-11-2025 at 06:25 PM.
Old 11-12-2025, 09:12 AM   #1627
Ataraxia
Member
 
Join Date: Oct 2024
Location: North Carolina
Posts: 27

Quote:
Originally Posted by rfgilles View Post
To you...

You have the DINGs and get individual grades for centering, surface, etc. Not sure what more you want or how much simpler you want it.
There are a handful of issues with the DIG/DIG+ Reports. First, the exact report you’re talking about (DIG+) is sitting at a comfy $150 per card right now. Second, there’s simply a lot of irrelevant data that leads the average user to believe something contributed to the score when it really did not; that’s more a criticism of the DIG (not DIG+) report, which is the more accessible of the two at this time.

Ultimately, the reports are more a technical proof of concept than an actual productized version that works for the masses. I threw something together last year (back when I was a “TAG Advocate” myself and was hoping I could contribute to development for them) that simply had a “Grading Summary” pointing out the key areas and explaining them in easy-to-understand terms. TAG’s DIG/DIG+ Reports today are just a lot of unnecessary information that doesn’t contribute to your actual grade or score, which only obscures things for the end user.

As for my earlier comment about their reports being “as useful as PSA’s Grader Notes,” I’ll continue to stand by it. If you have to pay $150 to get “subgrades” that only slightly help with the per-area understanding, then all of the other reports (DIG) are fundamentally useless. They look like they’re giving you a lot of information, but they’re really bombarding you with unnecessary information that obscures the factors actually contributing to the score/grade as a whole (even with the DINGs being pointed out). It isn’t rocket science: if Jesse Peng is claiming that “TAG is a tech company and not a grading company” (as he did on Discord), then consumers should hold them to the bare minimum of a tech company. TAG is selling a Minimum Viable Product (MVP), not one that’s been productized to be much more than that.

Quote:
Originally Posted by rfgilles View Post
This video has already been posted in the thread. To me, it is the most damning thing to come out about TAG.

That said, it doesn’t line up with my own experience. I once cracked and resubmitted a non-serial-numbered card that I had previously graded with TAG, and they actually identified it as the same card.
That’s because the “trick” with bypassing their fingerprinting technology is to make sure you send a card that has less than a certain threshold of cards already graded. The more variations they have, the more distinct your card is. We can send in 5,000 McDonald’s Promo Pikachu cards and if I try to resub one, it’ll most likely get dinged/flagged. Do the same with a much more rare/less graded card (e.g., Pyotr Kochetkov or any other hockey card that has less than the defined amount) and you won’t get flagged as a resub.
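To put some toy numbers behind the pop-count intuition, here's a quick birthday-problem sketch in Python. Every number here is invented for illustration (nobody outside TAG knows how many distinguishable micro-defect patterns their scans can resolve); the point is only that the chance of two cards sharing a near-identical fingerprint grows quickly with population size:

```python
from math import exp

def collision_prob(n_cards: int, n_buckets: float) -> float:
    """Approximate probability that at least two of n_cards share a
    near-identical defect fingerprint, assuming fingerprints fall
    uniformly into n_buckets distinguishable patterns (birthday bound)."""
    return 1.0 - exp(-n_cards * (n_cards - 1) / (2.0 * n_buckets))

BUCKETS = 1e6  # assumed (made-up) number of distinguishable defect patterns
for pop in (2, 100, 5_000):
    print(pop, round(collision_prob(pop, BUCKETS), 4))
```

With those assumed numbers, a POP 2 card is essentially guaranteed to be unique, while a 5,000-count promo is nearly guaranteed to contain look-alikes.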

As for the consistency: that’s an entirely different scenario that I’m not going to get into much, mostly because it’s a slippery slope of assumptions without enough verifiable testing on my end. My real concern is that they keep changing the scanning technology to make it better, without explaining that doing so practically invalidates any cards graded prior to that hardware/software upgrade. This isn’t a TAG-specific issue; it’s something any “machine learning” grading company needs to make more transparent to the consumer.
__________________
Appearances are often deceiving - Aesop

Last edited by Ataraxia; 11-12-2025 at 09:17 AM. Reason: Didn’t want to double post — added additional quote and response to this response
Old 11-12-2025, 01:44 PM   #1628
SLGSports
Member
 
Join Date: Mar 2023
Posts: 141

Quote:
Originally Posted by Ataraxia View Post
My real concern is that they keep changing the scanning technology to make it better, without explaining that doing so practically invalidates any cards graded prior to that hardware/software upgrade. This isn’t a TAG-specific issue; it’s something any “machine learning” grading company needs to make more transparent to the consumer.
As someone who's been in AI for 25+ yrs... I can confirm that this versioning issue is a big deal. AI-based solutions can either provide consistency (by never updating their processes or algorithms), or can have continual improvement. Since AI teams will want to continually improve their work, grading consistency simply cannot be expected with AI. In this regard, AI grading may well be LESS consistent than human-based grading -- which is what the video above appears to be highlighting. At some point in the future, AI improvements may plateau, or grading may become a "solved problem", in which case the algorithms and processes could theoretically be frozen in time. But we're years away from that ever happening.
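The consistency-vs-improvement tradeoff can be made concrete with a deliberately silly two-version sketch (every name and number here is invented, not anything from TAG):

```python
# Toy "model versions": the same scan, but a retrained model applies a
# different learned surface-defect threshold. All values are made up.
SCAN = {"scratch_depth_um": 3.2, "centering": 0.47}

def grade_v1(scan):
    # v1 was trained to ignore scratches shallower than 4 microns
    return 10 if scan["scratch_depth_um"] < 4.0 else 9

def grade_v2(scan):
    # v2, retrained on higher-res scans, now penalizes anything over 3 microns
    return 10 if scan["scratch_depth_um"] < 3.0 else 9

print(grade_v1(SCAN), grade_v2(SCAN))  # same card, different grade
```

Freezing v1 forever keeps grades reproducible; shipping v2 makes grading "better" while quietly re-grading every card that sits near the old boundary.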
Old 11-12-2025, 03:02 PM   #1629
Ataraxia
Member
 
Join Date: Oct 2024
Location: North Carolina
Posts: 27

Quote:
Originally Posted by SLGSports View Post
As someone who's been in AI for 25+ yrs... I can confirm that this versioning issue is a big deal. AI-based solutions can either provide consistency (by never updating their processes or algorithms), or can have continual improvement. Since AI teams will want to continually improve their work, grading consistency simply cannot be expected with AI. In this regard, AI grading may well be LESS consistent than human-based grading -- which is what the video above appears to be highlighting. At some point in the future, AI improvements may plateau, or grading may become a "solved problem", in which case the algorithms and processes could theoretically be frozen in time. But we're years away from that ever happening.
Yeah, this is something that TAG has continuously “hand-waved” away when it’s been brought up on their Discord or through their CS pipeline. “It doesn’t matter, it won’t invalidate previous grades.” (to paraphrase) — the reality is that any machine learning-based system’s entire goal is to get “better” (over time).

Versioning of the software and hardware + the model becoming more tuned to the flaws and issues of a card they’ve graded repeatedly = Invalidates the Grade/Score of a Card over time

It’s the principle of the entire system, the algorithm and model tied to it, and the underlying concept of the technology they’re relying on. But they just hand-wave it into oblivion, likely because they don’t fully understand the technology themselves OR it isn’t using any sort of machine learning capability at all. Either way, I’d argue that even upgrading their hardware (i.e., the scanning tech) is going to shift their grading model significantly, far more than something like PSA/CGC changing their grading rubric.
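As a toy illustration of the hardware point, assuming nothing more than whole-pixel quantization of border widths (all dimensions made up), the same physical card can measure differently on two scanner generations:

```python
def measured_centering(left_mm: float, right_mm: float, px_per_mm: int) -> float:
    """Centering ratio (left border / total border) as a scanner would
    measure it, with border widths quantized to whole pixels."""
    left_px = round(left_mm * px_per_mm)
    right_px = round(right_mm * px_per_mm)
    return left_px / (left_px + right_px)

# Same physical card (2.86 mm vs 3.14 mm borders), two scanner generations:
old = measured_centering(2.86, 3.14, px_per_mm=10)   # ~0.483
new = measured_centering(2.86, 3.14, px_per_mm=100)  # ~0.477
print(old, new)
```

Nothing about the card changed; only the resolution did, and the measured centering (and anything downstream of it) moved.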
__________________
Appearances are often deceiving - Aesop
Old 11-13-2025, 12:27 PM   #1630
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by Ataraxia View Post
That’s because the “trick” with bypassing their fingerprinting technology is to make sure you send a card that has less than a certain threshold of cards already graded. The more variations they have, the more distinct your card is. We can send in 5,000 McDonald’s Promo Pikachu cards and if I try to resub one, it’ll most likely get dinged/flagged. Do the same with a much more rare/less graded card (e.g., Pyotr Kochetkov or any other hockey card that has less than the defined amount) and you won’t get flagged as a resub.
The card I cracked and reslabbed was a baseball card and "below the threshold"
https://my.taggrading.com/card/V9775940

The grading had to be consistent enough for them to identify it as a resubmit.
__________________
BO Resident TAG Grading shill
Old 11-13-2025, 01:28 PM   #1631
Boostedwrx
Member
 
Join Date: Jan 2022
Posts: 235

So this TAG card I have was graded a 974, I wonder what my chances are to cross it to a PSA 10
Old 11-13-2025, 09:25 PM   #1632
Ataraxia
Member
 
Join Date: Oct 2024
Location: North Carolina
Posts: 27

Quote:
Originally Posted by rfgilles View Post
The card I cracked and reslabbed was a baseball card and "below the threshold"
https://my.taggrading.com/card/V9775940

The grading had to be consistent enough for them to identify it as a resubmit.
Well, the "threshold" wasn't clearly defined. But you explained the scenario yourself quite well: it isn't that the grading has to be consistent enough, it's that the system has to have a large enough sample size. The "fingerprinting technology" they use is really just confirming that the same issues show up at the same x/y coordinates of the scan (and that nothing else gets picked up).
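A minimal sketch of that kind of coordinate matching, with the tolerance and defect positions invented for illustration (TAG's actual matcher isn't public):

```python
def same_card(defects_a, defects_b, tol=2.0):
    """Naive fingerprint check: every defect in scan A must appear in
    scan B at (roughly) the same x/y position, and neither scan may
    contain extra defects. Purely illustrative."""
    if len(defects_a) != len(defects_b):
        return False
    unmatched = list(defects_b)
    for (xa, ya) in defects_a:
        hit = next(((xb, yb) for (xb, yb) in unmatched
                    if abs(xa - xb) <= tol and abs(ya - yb) <= tol), None)
        if hit is None:
            return False
        unmatched.remove(hit)
    return True

scan1 = [(12.1, 40.3), (55.0, 8.2)]  # corner ding + edge chip
scan2 = [(12.4, 40.1), (54.8, 8.5)]  # same card, rescanned
scan3 = [(12.4, 40.1)]               # rescan that misses one defect
print(same_card(scan1, scan2))
print(same_card(scan1, scan3))
```

Note how the second rescan fails the check just because one defect wasn't picked up, which is exactly the "it has to match AND not pick up anything else" behavior described above.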

I've done fairly extensive testing with TAG, primarily focused on QC/damage (since they had damaged a number of my cards). I had a few cards flagged and more cards ignored (and still regraded). It's a numbers game, essentially, which amounts to "exploiting" the logic behind machine learning grading. At the same time, I'm not going to put exact numbers out as a means of "tarnishing their reputation," since I do like their slab and hope they succeed in the long run.

But they do have to do better than they are right now and be genuinely transparent.
__________________
Appearances are often deceiving - Aesop
Old 11-13-2025, 09:35 PM   #1633
RKH916
Member
 
Join Date: Oct 2024
Posts: 504

Quote:
Originally Posted by Boostedwrx View Post
So this TAG card I have was graded a 974, I wonder what my chances are to cross it to a PSA 10
lol@1000 point scale

Complete sham.
Old 11-13-2025, 10:15 PM   #1634
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by Ataraxia View Post
Well, the "threshold" wasn't clearly defined. But you explained the scenario yourself quite well: it isn't that the grading has to be consistent enough, it's that the system has to have a large enough sample size. The "fingerprinting technology" they use is really just confirming that the same issues show up at the same x/y coordinates of the scan (and that nothing else gets picked up).
What are you talking about? In my case, the card was correctly flagged as a resubmission because the same defects were identified both times — meaning the grading was consistent.

Also, there’s no vague “threshold” here. The Juan Soto I resubmitted has a POP 2.
__________________
BO Resident TAG Grading shill
Old 11-14-2025, 12:10 AM   #1635
discodanman45
Member
 
Join Date: Jun 2020
Location: CA
Posts: 9,796

Quote:
Originally Posted by rfgilles View Post
What are you talking about? In my case, the card was correctly flagged as a resubmission because the same defects were identified both times — meaning the grading was consistent.

Also, there’s no vague “threshold” here. The Juan Soto I resubmitted has a POP 2.
Was the card flagged by the grading or was it flagged since you sent them the same card from the same account?
__________________
Always looking for rarer Rik Smits cards and cards from the 2014-15 Spectra Global Icons set. Send me a message!
Old 11-14-2025, 12:39 AM   #1636
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by discodanman45 View Post
Was the card flagged by the grading or was it flagged since you sent them the same card from the same account?
the card was correctly flagged as a resubmission because the same defects were identified both times — meaning the grading was consistent.
__________________
BO Resident TAG Grading shill
Old 11-14-2025, 08:36 AM   #1637
discodanman45
Member
 
Join Date: Jun 2020
Location: CA
Posts: 9,796

Quote:
Originally Posted by rfgilles View Post
the card was correctly flagged as a resubmission because the same defects were identified both times — meaning the grading was consistent.
So the card was graded 100% the same? Doubt that. Other submitters' cards weren't flagged, but yours was? Did you submit the cards through the same account?
__________________
Always looking for rarer Rik Smits cards and cards from the 2014-15 Spectra Global Icons set. Send me a message!
Old 11-14-2025, 09:02 AM   #1638
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by discodanman45 View Post
So the card was graded 100% the same? Doubt that. Other submitters' cards weren't flagged, but yours was? Did you submit the cards through the same account?
Yes.........

It was graded consistently enough that TAG identified it as a resubmit. And of course it was the same account; people submit multiple copies of the same card all the time.
__________________
BO Resident TAG Grading shill

Last edited by rfgilles; 11-14-2025 at 11:58 AM.
Old 11-14-2025, 09:26 AM   #1639
Ataraxia
Member
 
Join Date: Oct 2024
Location: North Carolina
Posts: 27

Quote:
Originally Posted by rfgilles View Post
What are you talking about? In my case, the card was correctly flagged as a resubmission because the same defects were identified both times — meaning the grading was consistent.

Also, there’s no vague “threshold” here. The Juan Soto I resubmitted has a POP 2.
I meant that I did not define the exact threshold I was talking about. The fewer submissions there are of the same card, the more obvious a resubmission is going to be (e.g., if you have a POP 2 card, only two copies have been scanned and “identified,” so you’re undoubtedly going to get flagged). The more submissions there are (within that undefined threshold), the more likely you are to be able to resubmit without getting flagged by their fingerprinting technology.

Admittedly, I’ve only submitted ~650 cards total to TAG to experiment (give or take). I focused mostly on QC (they damaged a number of my PC cards, and I wanted to understand whether I should keep sending them $100-300 cards or stick to “this would look nice in a TAG slab, but it’s less than $50 in value”), so my sample size for the fingerprinting threshold is fewer than 40 (which isn’t a large enough sample to give you anything above 60% confidence with an 8-10% margin of error).
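For context, here's the textbook normal-approximation margin of error at roughly that sample size (the 70% rate and n=40 are just stand-ins for the kind of testing described, not actual TAG data):

```python
from math import sqrt

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an observed proportion p_hat over n trials
    (normal approximation)."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# Rough check on a ~70% observed rate over ~40 trials:
moe = margin_of_error(0.7, 40)
print(f"+/- {moe:.1%}")
```

At n = 40 the margin works out to roughly +/- 14%, which is why a sample that small can only suggest a trend rather than pin down any threshold.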

That said… I won’t be sending them any card worth over $50 any time soon. Lol.
__________________
Appearances are often deceiving - Aesop
Old 11-14-2025, 09:40 AM   #1640
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by Ataraxia View Post
I meant that I did not define the exact threshold I was talking about. The fewer submissions there are of the same card, the more obvious a resubmission is going to be (e.g., if you have a POP 2 card, only two copies have been scanned and “identified,” so you’re undoubtedly going to get flagged). The more submissions there are (within that undefined threshold), the more likely you are to be able to resubmit without getting flagged by their fingerprinting technology.
Quote:
Originally Posted by Ataraxia View Post
That’s because the “trick” with bypassing their fingerprinting technology is to make sure you send a card that has less than a certain threshold of cards already graded. The more variations they have, the more distinct your card is. We can send in 5,000 McDonald’s Promo Pikachu cards and if I try to resub one, it’ll most likely get dinged/flagged. Do the same with a much more rare/less graded card (e.g., Pyotr Kochetkov or any other hockey card that has less than the defined amount) and you won’t get flagged as a resub.

So which is it?
__________________
BO Resident TAG Grading shill
Old 11-14-2025, 03:48 PM   #1641
Ataraxia
Member
 
Join Date: Oct 2024
Location: North Carolina
Posts: 27

Quote:
Originally Posted by rfgilles View Post
So which is it?
Both.

If the card you’re submitting has too low of a population — the system will easily detect the card due to the same dings/impact. This even goes for ever-so-faint modifications or adjustments (like fixing a corner or a dent).

If the card you’re submitting has too high of a population — the system will be able to detect the card due to the population having dings unique to that card.

What I was saying is that I’m not going to give the exact numbers for either, as the system can be abused (which defeats the whole purpose of the “fingerprinting technology” they’ve implemented). The reason it exists is to shut down the “please submit again” crowd (people who financially benefit from resubs/regrades).

That said, even with my testing, I can’t give a definite number on the high end anyway. I’m not saying anything I didn’t already say before: both “too much” and “too little” information can make their system more or less consistent. There’s a murky gray area in between that’s exploitable and inconsistent.
__________________
Appearances are often deceiving - Aesop
Old 11-14-2025, 03:56 PM   #1642
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by Ataraxia View Post
Both.

If the card you’re submitting has too low of a population — the system will easily detect the card due to the same dings/impact. This even goes for ever-so-faint modifications or adjustments (like fixing a corner or a dent).

If the card you’re submitting has too high of a population — the system will be able to detect the card due to the population having dings unique to that card.

What I was saying is that I’m not going to give the exact numbers for either, as the system can be abused (which defeats the whole purpose of the “fingerprinting technology” they’ve implemented). The reason it exists is to shut down the “please submit again” crowd (people who financially benefit from resubs/regrades).

That said, even with my testing, I can’t give a definite number on the high end anyway. I’m not saying anything I didn’t already say before: both “too much” and “too little” information can make their system more or less consistent. There’s a murky gray area in between that’s exploitable and inconsistent.
This applies to any card regardless of POP report. It doesn't magically become important after a certain number of cards are submitted.
__________________
BO Resident TAG Grading shill
Old 11-14-2025, 08:54 PM   #1643
Ataraxia
Member
 
Join Date: Oct 2024
Location: North Carolina
Posts: 27

Quote:
Originally Posted by rfgilles View Post
This applies to any card regardless of POP report. It doesn't magically become important after a certain number of cards are submitted.
Except it does. When there's enough deviation from the norm that your card isn't unique enough, the system will fail to detect it as a card it's graded before. It's not that some cards are "magically important"; it's a systemic failure mode of how machine learning works. See, machine learning systems will often fingerprint cards based on the following:
  1. Unique edge chips
  2. Ink defects
  3. Subtle surface scratches
  4. Micro-creases (normally invisible to the naked eye or without a microscope)
  5. Exact centering ratio
  6. Color speckling
  7. Foil pattern irregularities
  8. Border asymmetry
  9. Dust or Fiber artifacts

The assumption is that the pattern of micro-defects is unique to each card, so comparing two photos verifies the card's identity.

But this system can be "broken" (exploited/bypassed) in the gray space where there's enough variation that more than one card has similar defects, but not so much variation that your card becomes "obviously unique" again. My problem is that I don't know the exact number, since I've submitted fewer than 1,000 cards total (and only a few were deliberately resubmitted, with a 70/30 pass/fail rate). I also only tested TAG X, fwiw (but this shouldn't matter, since the scan data tends to be the same for DIG (TAG X) vs. DIG+ (Express); it's just presented differently).

This all goes back to the nature of machine learning. Even something as simple as changing the scanning hardware to a higher resolution could be enough for the system to fail the fingerprinting check, depending on how much the scan data varies.
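Pulling the feature list above into a sketch: if a fingerprint is just a feature vector compared against a distance threshold, then anything that nudges every feature at once (like a higher-resolution scanner) can push a true match past the cutoff. All names, numbers, and the threshold here are invented, not TAG's:

```python
from math import dist

# Hypothetical fingerprint: one normalized value per feature listed above
# (edge chips, ink defects, scratches, creases, centering, speckling,
# foil irregularities, border asymmetry, dust artifacts).
def is_match(fp_a, fp_b, threshold=0.05):
    """Declare two scans the same card if their feature vectors are
    closer than a fixed Euclidean distance. Illustrative only."""
    return dist(fp_a, fp_b) < threshold

card      = [0.12, 0.03, 0.41, 0.00, 0.48, 0.22, 0.07, 0.05, 0.01]
rescanned = [0.13, 0.03, 0.40, 0.00, 0.48, 0.21, 0.07, 0.05, 0.02]  # same card
other     = [0.30, 0.10, 0.05, 0.02, 0.52, 0.40, 0.00, 0.11, 0.00]  # different copy
print(is_match(card, rescanned))
print(is_match(card, other))
```

Shift every feature of `rescanned` by even a few hundredths (say, after a hardware upgrade) and the distance can cross the threshold, turning a true resubmission into a "new" card.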
__________________
Appearances are often deceiving - Aesop

Last edited by Ataraxia; 11-14-2025 at 08:56 PM. Reason: Fixed a few typos — typed too fast without double-checking.
Old 11-14-2025, 11:12 PM   #1644
rfgilles
Member
 
 
Join Date: Sep 2019
Location: Long Island, NY
Posts: 4,302

Quote:
Originally Posted by Ataraxia View Post
Except it does. When there's enough deviation from the norm that your card isn't unique enough, the system will fail to detect it as a card it's graded before. It's not that some cards are "magically important"; it's a systemic failure mode of how machine learning works. See, machine learning systems will often fingerprint cards based on the following:
  1. Unique edge chips
  2. Ink defects
  3. Subtle surface scratches
  4. Micro-creases (normally invisible to the naked eye or without a microscope)
  5. Exact centering ratio
  6. Color speckling
  7. Foil pattern irregularities
  8. Border asymmetry
  9. Dust or Fiber artifacts

The assumption is that the pattern of micro-defects is unique to each card, so comparing two photos verifies the card's identity.

But this system can be "broken" (exploited/bypassed) in the gray space where there's enough variation that more than one card has similar defects, but not so much variation that your card becomes "obviously unique" again. My problem is that I don't know the exact number, since I've submitted fewer than 1,000 cards total (and only a few were deliberately resubmitted, with a 70/30 pass/fail rate). I also only tested TAG X, fwiw (but this shouldn't matter, since the scan data tends to be the same for DIG (TAG X) vs. DIG+ (Express); it's just presented differently).

This all goes back to the nature of machine learning. Even something as simple as changing the scanning hardware to a higher resolution could be enough for the system to fail the fingerprinting check, depending on how much the scan data varies.
The more cards submitted, the higher the probability that another card will be similar. You aren't making any sense and are overcomplicating things. What you call fingerprinting is just cataloging the defects identified during the grading process.

You'll have better luck finding the Loch Ness Monster than a "gray space."
__________________
BO Resident TAG Grading shill

Last edited by rfgilles; 11-14-2025 at 11:31 PM.