11-12-2025, 03:02 PM   #1629
Ataraxia

Quote:
Originally Posted by SLGSports
As someone who's been in AI for 25+ yrs... I can confirm that this versioning issue is a big deal. AI-based solutions can either provide consistency (by never updating their processes or algorithms), or can have continual improvement. Since AI teams will want to continually improve their work, grading consistency simply cannot be expected with AI. In this regard, AI grading may well be LESS consistent than human-based grading -- which is what the video above appears to be highlighting. At some point in the future, AI improvements may plateau, or grading may become a "solved problem", in which case the algorithms and processes could theoretically be frozen in time. But we're years away from that ever happening.
Yeah, this is something TAG has continually “hand-waved” away whenever it’s been brought up on their Discord or through their CS pipeline: “It doesn’t matter, it won’t invalidate previous grades” (to paraphrase). The reality is that the entire goal of any machine-learning-based system is to get “better” over time, and “better” necessarily means its outputs change for at least some inputs.

Versioning of the software and hardware + the model becoming more tuned to the flaws and quirks of cards it has graded repeatedly = a grade/score that drifts over time. Run the same card back through a newer version and you can get a different number.
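
To make that concrete, here’s a toy sketch in Python. The two-feature “grader” and its weights are entirely made up (this is not TAG’s actual model or anything close to it); grade_v1 and grade_v2 just stand in for two checkpoints of a continually retrained system:

Code:
# Purely hypothetical two-feature "grader" -- NOT any real company's model.
# grade_v1/grade_v2 stand in for two checkpoints of a continually retrained system.

def grade_v1(centering: float, surface: float) -> float:
    # assumed v1 weights, learned from the original training data
    return round(10 * (0.5 * centering + 0.5 * surface), 1)

def grade_v2(centering: float, surface: float) -> float:
    # assumed v2 weights after a retrain: surface flaws now penalized harder
    return round(10 * (0.4 * centering + 0.6 * surface), 1)

card = {"centering": 0.95, "surface": 0.85}  # the identical scan of the same card

print(grade_v1(**card))  # 9.0 under the v1 checkpoint
print(grade_v2(**card))  # 8.9 under v2 -- same card, same scan, new grade

Every retrain moves the decision boundary somewhere, and whichever cards happen to sit near it flip.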

It’s the principle of the entire system: the algorithm, the model tied to it, and the underlying concept of the technology they’re relying on. But they just hand-wave it into oblivion, likely because they don’t fully understand the technology themselves OR because it isn’t actually using any machine-learning capability at all. I’d argue that even upgrading their hardware (i.e., the scanning tech) will shift their grading output significantly, far more than something like PSA/CGC changing their grading rubric, because the model gets fed different inputs even if its weights never change (sketched below).
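
Same kind of toy sketch (made-up numbers and features again): freeze the weights entirely, swap in a sharper scanner, and the grade still moves, because the model is being shown a different picture of the same card:

Code:
# Hypothetical again: the weights are FROZEN, only the capture hardware changes.
# A sharper scanner resolves surface flaws the old optics blurred over.

FROZEN_WEIGHTS = {"centering": 0.5, "surface": 0.5}  # assumed, never retrained

def grade(features: dict) -> float:
    return round(10 * sum(FROZEN_WEIGHTS[k] * v for k, v in features.items()), 1)

old_scan = {"centering": 0.95, "surface": 0.89}  # old scanner misses micro-scratches
new_scan = {"centering": 0.95, "surface": 0.81}  # same card through sharper optics

print(grade(old_scan))  # 9.2 from the old capture
print(grade(new_scan))  # 8.8 -- same card, same frozen model, new hardware

No rubric changed, no model changed, and the number still moved.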
__________________
Appearances are often deceiving - Aesop