"Justice is that which exists when all laws are enforced." - R. Daneel Olivaw
Asimov dealt with AI justice in The Caves of Steel, where the robot Olivaw gives the definition of justice quoted above. Olivaw is sentient, but he is still an AI, and over the course of the story he has to learn a more human definition.
But that is not the biggest problem with an AI judge:
AIs take on the biases of the people who build them and of the data they are trained on. Look up the Google Photos "gorillas" incident some time.
An AI judge is likely to be just as racist, sexist, etc. as the system around it, because the bias is baked into its training data.
Here's an example of how it might go wrong: you feed your AI judge a large set of past court cases to teach it how cases are resolved.
Some of those cases come from a smallish town in the American South where white people routinely get off and Black people are routinely jailed for the same offense.
The AI is going to learn that "Black means criminal" unless you are very careful what data you feed it.
One AI for screening resumes (Amazon's experimental recruiting tool is the best-known case) learned to screen out women because it was trained to predict "success," and women showed less success in the historical data because of sexism. So the AI became sexist.
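The failure mode above can be sketched with a toy model. All of the data here is invented for illustration, and the "model" is just conditional-probability counting, but it shows how a system trained on biased verdicts learns the protected attribute rather than the facts of the case:

```python
from collections import defaultdict

# Hypothetical "court records": (race, offense, verdict) tuples.
# Identical offense, but the historical verdicts differ by race --
# mirroring the biased small-town data described above.
records = (
    [("white", "theft", "acquitted")] * 80 + [("white", "theft", "jailed")] * 20 +
    [("black", "theft", "acquitted")] * 20 + [("black", "theft", "jailed")] * 80
)

# "Training": estimate P(jailed | race, offense) by counting.
counts = defaultdict(lambda: [0, 0])   # key -> [jailed, total]
for race, offense, verdict in records:
    counts[(race, offense)][0] += verdict == "jailed"
    counts[(race, offense)][1] += 1

def p_jailed(race, offense):
    jailed, total = counts[(race, offense)]
    return jailed / total

# Same offense, very different predicted outcome: the model has
# faithfully reproduced the bias in its training data.
print(p_jailed("white", "theft"))  # 0.2
print(p_jailed("black", "theft"))  # 0.8
```

A real AI judge would use a far more complex model, but the mechanism is the same: if race predicts the verdict in the training data, the model will use race to predict the verdict.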
I'm very uncomfortable with this. Until we have a Daneel Olivaw who can learn empathy, AIs should not be judging court cases.
(They are, however, highly useful for finding points of law and doing legal research.)