Created a new prompting template for general prompting settings. It's roughly four times as fast (don't quote me on that) at catching onto a desired type of output (counting pages used, not tokens, so 4-5 pages vs. 17-20 pages). [Prompter Txt]. Much more diverse prompts randomly fare better and worse, so it's much more unstable. It works fairly consistently, though, only erroring twice with a 7B model @ 8192 context tokens, because the world is kept as simple an alteration as possible.