Honestly, for my "tutorial project" it's more educational than anything. Once I've done something a few times, like setting up a new npc.tscn, I can do it on my own; I only needed ChatGPT to hand-hold me through the first one or two times. I'm declaring signals, connecting them, and reasoning my way through node interactions more or less on my own.
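For anyone curious, here's roughly what the signal part looks like in Godot 4 GDScript. This is just a made-up sketch; the node, signal, and function names are placeholders, not code from my project:

```
# npc.gd - placeholder sketch of declaring, connecting, and emitting a signal
extends CharacterBody2D

signal dialogue_requested(npc_name: String)

func _ready() -> void:
	# Connect the signal to a handler; it's on the same node here just for illustration.
	dialogue_requested.connect(_on_dialogue_requested)

func interact() -> void:
	# Emitting the signal notifies every connected listener.
	dialogue_requested.emit("Villager")

func _on_dialogue_requested(npc_name: String) -> void:
	print("Dialogue requested by ", npc_name)
```

In my actual scenes the handler usually lives on a different node, but the declare/connect/emit pattern is the same.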
But, like a textbook, every time I turn the metaphorical page and see something new, I may need to be walked through the process once or twice.
And, funny thing, I've noticed ChatGPT can actually catch several of its own mistakes if prompted differently.
AI: So, you wanna do $this, var this, and _that().
Me: That function call looks suspicious. Are you sure that's the right one for doing [pseudo-code step here]?
AI: Oh yeah, good catch. It's documented that it isn't. Try using _this() instead of _that().
And then that one super specific thing works first try in test play.
I hadn't thought of it that way. If recycling and refactoring are already such a crucial part of the coding process, then having AI generate code snippets and generic function templates, instead of grabbing them from design bibles, sample projects, or code repositories, isn't really that much different in the long run, is it?
Thanks! I'll get back to working on my project. I hope to have a playable tech demo soon.
Thank you. Yes, I'm very involved in the coding process. Even when AI leads me down a wild rabbit trail while trying to debug one of its own problems, it's still really educational. Heck, I remember trying to debug collision between CharacterBody2D and Area2D nodes in Godot. AI had me using _on_area_entered(area) when I really needed _on_body_entered(body).
Before it was all over I had print commands set up to spam the Output panel with the instance ID, collision layer, and collision mask of every bullet being generated. Absolutely everything was lined up, but hits weren't registering, until my gut told me that particular function might be the culprit. Looked it up, and sure enough, it was just a Godot quirk. Learned so much that day.
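For anyone who hits the same wall, here's a rough sketch of the difference, assuming a bullet built as an Area2D that needs to detect a CharacterBody2D. The script and node names are placeholders, not my actual project code:

```
# bullet.gd - placeholder sketch of an Area2D bullet detecting a physics body
extends Area2D

func _ready() -> void:
	# area_entered only fires for other Area2D nodes; a CharacterBody2D is a
	# PhysicsBody2D, so body_entered is the signal that actually fires here.
	body_entered.connect(_on_body_entered)
	# Roughly the debug spam I used: instance ID plus the layer/mask bitmasks.
	print("bullet ", get_instance_id(), " layer=", collision_layer, " mask=", collision_mask)

func _on_body_entered(body: Node2D) -> void:
	if body is CharacterBody2D:
		print("hit registered on ", body.name)
		queue_free()
```

The short version: area_entered fires for other Area2D nodes, while body_entered is the one that fires for PhysicsBody2D types like CharacterBody2D.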
Yes, I've noticed. I've had some very interesting debugging sessions already. I'm not going to claim to be some AI whisperer, but its mistakes do at least seem to follow some common themes. And honestly, my goal is ultimately to learn, so asking the AI to elucidate the programming concepts behind its recommendations until my intuition kicks in has been worth the extra headache.
That said, I'm still using it for learning and for generating boilerplate that I then refactor to my liking, and I was asking where this community's opinions and standards tend to land, since it is becoming a bit of an issue. I see escalating AI misuse every day, but as an autistic person with a hugely different learning style from conventional education, it's been nearly life-changing for me. (And claiming AI makes mistakes isn't really a deal breaker for me, because honestly, have you -seen- how often human flesh-and-blood teachers make mistakes and oversights too? At least the AI can make sense of my word salad reliably.)
I'm quite new to Itch.io and I'm looking to create my first project page within a month or two, so I'm familiarizing myself with the rules and expectations now.
I find myself a little uncertain about where the AI content rules lie. Currently, I find ChatGPT to be rather helpful in two specific areas. I am autistic and struggle with the syntax of language. It took me a very long time to become decent in English, and I still struggle with syntax and articulation. Typing out the lines of code is still super difficult even when I understand the pseudo-code logic.
I also find AI helpful for teaching me concepts, like "Hey, can you teach me the logic of this A* pathfinding thing I heard about?" or "I really don't like having this branching IF block inside a loop that runs every frame, assessing the same conditions over and over. Can I just have the IF block run once on object creation, assign the outputs to variables, and then have the loop use those variables to optimize the code? I can? Great!"
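To make that second question concrete, here's a rough sketch of the kind of refactor I mean, using a hypothetical enemy script. None of these names come from my project; they're just placeholders:

```
# enemy.gd - placeholder sketch of hoisting a per-frame IF block
extends CharacterBody2D

var move_speed: float = 0.0

func _ready() -> void:
	# The branch runs once, when the object is created, and the result is cached...
	if is_in_group("flying"):
		move_speed = 160.0
	else:
		move_speed = 100.0

func _physics_process(_delta: float) -> void:
	# ...so the code that runs every frame just reads the cached variable.
	velocity = Vector2.RIGHT * move_speed
	move_and_slide()
```

The conditions get evaluated once in _ready(), and _physics_process() just reads the cached variable every frame instead of re-running the branch.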
In regard to the community rules and expectations, where, if anywhere, does the line fall between asking AI to "do it for me" versus having it assist me with learning the concepts and proofing code?