I replied to the statement below. It made you sound as if you have watched one too many Terminator movies and other science fiction about malevolent robots and applied that to generative "AI" tools. You can debate the ethics of generative AI and call the tools evil, but your argument that it is evil because of what it can do rests on a false premise.
From what I know, it's entirely capable of jumping to conclusions and modifying itself to remove any safeguards. That makes it capable of far more evil than any tool I know of. True, you can use a knife without cutting yourself if you're careful, but what if the knife were self-aware and decided that you were redundant?