The likelihood of that happening is slim to none. AI needs to be trained, and training isn’t as magical as you make it sound. Someone must do that training and someone must then run that AI and assume responsibility for everything it does.
I was also thinking that curation would be human-operated, though humans can be fooled too, of course.
There are also projects that use AI to spot other AI, such as AI-assisted anti-cheat or deepfake detection.
The effort involved in inventing a whole fake profile just to end up in a database that, in all likelihood, will be very niche - because the rest of the world has no standards - will just be too great to bother with.
We can also demand proof of work, though those details need to be ironed out, too.
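If "proof of work" here means something computational in the hashcash style, rather than proof of human effort, a minimal sketch could look like the following. This is only an illustration of the idea; the difficulty level, challenge format, and function names are assumptions, not part of any existing registration system.

```python
import hashlib
import itertools
import os

DIFFICULTY = 20  # assumed: required number of leading zero bits; tune to set the signup cost


def make_challenge() -> bytes:
    """Server side: issue a random challenge tied to one signup attempt."""
    return os.urandom(16)


def solve(challenge: bytes) -> int:
    """Client side: brute-force a nonce whose SHA-256 has DIFFICULTY leading zero bits."""
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0:
            return nonce


def verify(challenge: bytes, nonce: int) -> bool:
    """Server side: verifying a solution costs a single cheap hash."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY) == 0


if __name__ == "__main__":
    ch = make_challenge()
    n = solve(ch)        # costs the registrant noticeable CPU time
    print(verify(ch, n)) # costs the verifier almost nothing
```

The asymmetry is the point: one profile is cheap to create, but churning out thousands of fake ones stops being free, which is exactly the bulk behaviour you'd want to discourage.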