I personally think there is absolutely nothing inherently wrong with using AI for code, as long as the result is not trash, which unfortunately AI still generates quite often. So I frequently use AI to generate a 'proof of concept', but I make it a rule to never copy-paste code from AI directly; instead I use it as a reference to write the code myself. Not because of some prejudice, but because, by the nature of LLMs, AI is exceptionally adept at generating 'almost correct' code, with subtle errors that are very difficult to debug, especially if you did not take the time to understand it. FWIW, I do the same with all the code I get online, even from comparatively reliable sources like Stack Overflow.
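To illustrate the kind of 'almost correct' output I mean (a hypothetical example of my own, not a quote of any actual model's output), here is a binary search that reads fine and passes a casual test, yet hangs on some inputs:

```python
def binary_search(xs, target):
    """Return the index of target in sorted list xs, or -1."""
    lo, hi = 0, len(xs) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid  # BUG: must be mid + 1; when hi == lo + 1 and
                      # xs[lo] < target, mid == lo and the loop stalls
        else:
            hi = mid
    return lo if xs and xs[lo] == target else -1

# Looks fine: binary_search([1, 2, 3], 1) returns 0.
# Hangs forever: binary_search([1, 3], 3) never advances lo.
```

A one-token difference (mid vs mid + 1) turns a correct algorithm into one that never terminates, and nothing about the code's shape hints at it -- which is exactly why I retype and understand rather than paste.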
The same goes for learning -- advanced AI models perform very well in the teacher role, with infinite patience and boundless erudition. However, they can easily and convincingly mislead you, simply because of a random fluctuation in one of a hundred billion weights. So when I use AI to learn something, I chat with it to get a quick understanding, and then go look at the original source (in your example, that would be the A* paper or some textbook on algorithms) to make sure there are no mistakes in my understanding.
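As a concrete case of the kind of nuance worth verifying against the source: chat explanations of A* often gloss over exactly when it is optimal. A minimal sketch (my own names and structure, assuming the graph is given as a neighbors function), with the subtle condition spelled out:

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, h):
    """neighbors(n) yields (next_node, step_cost); h(n) estimates cost to goal.
    The nuance a chat answer can get subtly wrong: optimality requires h to be
    admissible (never overestimate). Variants that keep a strict closed set
    and never re-expand nodes additionally require h to be consistent --
    the original 1968 paper by Hart, Nilsson and Raphael settles this."""
    tie = count()  # tie-breaker so the heap never has to compare nodes
    open_heap = [(h(start), next(tie), 0, start, [start])]
    best_g = {start: 0}
    while open_heap:
        _, _, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale entry; a cheaper path was already found
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                # re-pushing improved entries (instead of a strict closed
                # set) means plain admissibility of h suffices here
                best_g[nxt] = ng
                heapq.heappush(open_heap,
                               (ng + h(nxt), next(tie), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```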
I believe a few more years of progress will diminish those concerns, but we are certainly not there yet.