Still kinda learning about AI, but after building a few neural nets and slogging through activation functions, gradient descent, loss functions, etc., it makes sense why these models (whatever architecture they're using) can handle all the dimensions and chaos involved: it's pure math and stats under the hood. The challenge will be dealing with the black-box nature of deep learning, though explainable AI is becoming a thing. Computational cost is another concern too.
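
For anyone wondering what "pure math and stats under the hood" actually looks like, here's a minimal sketch in plain NumPy (toy OR-gate data and all parameter values made up for illustration): a single sigmoid neuron trained with gradient descent on a mean-squared-error loss.

```python
import numpy as np

# Toy, linearly separable data (OR gate): 4 samples, 2 features each
X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, 1.0, 1.0, 0.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias
lr = 0.5                 # learning rate (arbitrary choice for this sketch)

def sigmoid(z):
    # Activation function: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(1000):
    # Forward pass: linear combination -> activation
    z = X @ w + b
    pred = sigmoid(z)

    # Loss: mean squared error between predictions and targets
    loss = np.mean((pred - y) ** 2)

    # Backward pass: chain rule gives gradients of the loss w.r.t. w and b
    d_pred = 2 * (pred - y) / len(y)     # dL/dpred
    d_z = d_pred * pred * (1 - pred)     # times dpred/dz (sigmoid derivative)
    d_w = X.T @ d_z                      # times dz/dw
    d_b = d_z.sum()                      # times dz/db

    # Gradient descent: step opposite the gradient to reduce the loss
    w -= lr * d_w
    b -= lr * d_b

print(f"final loss: {loss:.4f}, weights: {w}, bias: {b:.3f}")
```

Real frameworks (PyTorch, TensorFlow, etc.) handle the backward-pass bookkeeping automatically via autodiff, but scaled up it's the same chain-rule math as above, just with millions of parameters instead of three.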