
[SOLVED][Suggestion/Feature Request] Extended Features for Quads + Project-Wide Asset Search

A topic by hy created 75 days ago Views: 197 Replies: 17
(2 edits)

Hello !

I've come across a niche feature that, as far as I can find (without messing with STM's code myself), isn't available.

In <example game> I have certain glyphs that are displayed via quads (we can use the traditional triangle, circle, square, and cross shapes for this example). These glyphs are white (or black) so as to fit alongside other text. I would like to colour this quad-based text. This is where we run into the first issue: Quads cannot be coloured with a text effect colour in the text inspector. So even if I have "<c=red><q=cross><q=triangle><q=circle></c>" it will not render with a red cross, red triangle, or red circle. Similarly, it does not work for a single quad instead of a sequence of quads. And further it would not work with animated quads. Suggestion: Allow quad colours to be multiplied by a colour called in the text inspector.
(Quad colours can be multiplied in the text inspector by tagging the quad asset as "Silhouette" in the asset inspector; thanks Kai !)

Well, maybe we can't call them in the text inspector, but how about in the asset itself ? Since these glyphs are just shapes, maybe we can colour them there. I am willing to make a bunch of assets just for those shape-glyphs. Unfortunately, STM doesn't support colour for a texture in the asset either. Simply put, even if a texture can be assigned to an STMQuad, an additional colour cannot be defined on it. This means that in order to change the colour of a glyph, I would have to render it in that exact colour, and for any change I later make, I would have to re-render that image with the updated colour. What a hassle ! Suggestion: Allow quad colours to be multiplied by a colour in the asset inspector.
(Quad colours cannot be determined by the asset, but pre-parsing can automate colour assignment; thanks Kai ! I learned something entirely new.)

Well, shoot. Maybe I can just export the glyphs in whatever colours I need and make a kind of "voice" using them ? If I can't colour them in the inspector or the asset, maybe I can have something that automatically draws them in those colours just based on what I type ! Alas, while STM does have a "voice" feature for audio clips, there doesn't appear to be an equivalent for quads. This is such a loss of potential ! You could effectively make your own hieroglyph system for NPCs or such with a system like that. Or hide secrets in plain sight ! Maybe even allow more creative freedom with text in a character map, to allow something like animated text from images ! This could allow more games to emphasise a "hand-drawn" feel or other atmospheric effects (typically seen in "memory" or "dream" sequences in games as well). You could even combine this feature with existing audio 'voices' by assigning audio clips to each glyph variation as well. Suggestion: Create the ability to use a character map for text as a new kind of "voice", and allow for audio voices to read out those characters.
(While originally referring to having colour assignments as part of a voice definition (similar to how STMSoundClips are done), pre-parsing will solve this issue. This also touched on having quads be audible, now a planned feature !)

And, well, maybe that last one is a bit of work. Not just for Kai, but for users as well ! Managing glyph variations and colours and audio read-outs just for what could probably amount to only one NPC in an entire game using it ? What an edge case. Maybe in the end I should relegate these glyphs to a whole new custom font, something that for-sure works with colours and effects and audio read-outs. I'd still have to manually type out each colour or effect for a character if I want them to be consistent (or even randomised). Maybe this could all be solved by a different approach. Maybe what we need is not a new voice feature, but an expansion of the current one. Voices already support defining text and effects for a 'voice'. How about instead we allow a voice to define what effects a character has ? We can already set up voices to speak out a sound based on what character it is (though it comes at a cost: all other read-outs of those characters, regardless of voice, will say them too). So why not extend that feature to quads and colours and fonts in and of themselves ? This makes text tool management that much more powerful. Setting up a voice to allow specific definitions for audio and/or glyphs and/or colour per character, based on what voice is being read out, makes management of resources that much easier and text that much more expressive. Suggestion: Allow STMAudioClips groups, STMColours/STMGradients/STMMaterials, and/or STMQuads to be defined per-character in an STMVoices asset.
(This can all be done via pre-parsing and the planned quad readouts !)

I think that is all for my over-the-top suggestions. I have several friends, and have seen many strangers, who long for abilities like this in text managers, especially coming from GML, where character maps are used much more than fonts (or fonts are converted to character maps). That, along with basic colour features being absent from quads, means some tools in this large and amazing asset seem to fall just short of "a master of its craft".

Lastly, I have an inquiry. In the past, I have had a folder separate from STM's 'Resources' folder where I would put my own STM assets, wherein they would not get read or recognised by STM. Does STM search project-wide for STM assets, or does it only pull from its own folder ? While sample resources are always appreciated, other user-made resources can get lost among them and become difficult to find without separate organisation. If this ability is not in STM, it would be a greatly appreciated addition and would mesh better with the rest of Unity.
(Organisation of extraneous assets outside /Clavian can be done by creating another 'Resources' folder and putting assets in a folder named after the asset type; a little bit of a roundabout but just as effective !)

Thank you !

Developer

Hey!



Quads can already be multiplied by a colour set in the inspector! Just make sure to check "silhouette" in the quad's settings. This makes it so the quad is treated like any other character in the mesh, so it gets coloured in the same way regular characters do. Works with animated quads, gradients, textures, everything.
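So, borrowing the tag string from your post, this should now render a red cross, triangle, and circle, provided each of those quad assets has "Silhouette" checked:

```
<c=red><q=cross><q=triangle><q=circle></c>
```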


***


Good point about quads not having a character in place for custom sound clips. The way internal sound clips work is pretty limited in this regard, so it might be better to get this working through an extension. I was planning on making the audio system more open-ended in a future update, so I think this will be the solution:

I'm going to be adding a unity event that gets invoked every time a sound clip is supposed to play, and that event can be extended in any way, either for support with other middleware like FMOD, or for specific instances like this. I'm imagining for quad output it could work as follows...

Every time a sound is supposed to play, check if the character is "\u2000" (this is a unicode character assigned to quads), and if so, check the data attached to this letter to figure out what the original tag was. (<q=myQuad1> vs <q=myQuad2>) Then, using this info, you can play the appropriate sound.
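As a rough sketch of what I mean (the event doesn't exist yet, so the signature and the data passed in are just an idea, not current STM API):

```csharp
// Hypothetical handler hooked up to the planned sound clip UnityEvent.
// Both the signature and the tag data are assumptions for now.
public void OnSoundClipPlayed(char letter, string quadName)
{
    if(letter == '\u2000') //the unicode character STM assigns to quads
    {
        //quadName would be the original tag's value, e.g. "myQuad1"
        switch(quadName)
        {
            case "myQuad1": /* play the sound for quad 1 */ break;
            case "myQuad2": /* play the sound for quad 2 */ break;
        }
    }
}
```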


That said, I'll see if I can extend the sound clips class a bit. It's already got support for typed-out characters like "line break" and "space", so I think I'll be able to have it also check for matches with quads.


***

I think you might be confusing STM's voice system with something else? It's sorta just a renamed text macro system because <m> was already used for materials and <v> was free.

The whole idea behind STM's voices is that they get replaced with other text, even other tags. So if you have a voice named "small" defined as "<s=0.1><c=yellow>" then typing <v=small> will essentially place both of those tags there at once, and you'll get small, yellow text.

I could expand the sound clips class to include colour/effects, as one way to do it. So... it would be like a per-character tag for voices. You might also be able to solve this with preparsing! I can send some sample code -- with preparsing, you can set up custom tags that get parsed before STM even attempts to parse the text. So... you could set up a <mySpecialTag> thing and have it automatically put a colour tag before every character so you don't have to.



This one's a bit tricky to think about, but per-character voices are a neat idea and I don't think they'd be that hard to implement. I'd just have to put them in the same place as my current code for sound clips, I think. It could pair together with the above sound clip idea.


***


STM currently uses the Resources folder to queue up effects. The way Unity works with Resources: if an asset is in *any* folder named "Resources", Unity considers it to be within Resources. My default wave effect is stored at "Assets/Clavian/SuperTextMesh/Resources/STMWaves/default.asset", but if you were to store it at "Assets/MyAssets/Resources/STMWaves/default.asset", Unity doesn't see the difference. Just make sure to put it in a folder called "STMWaves" or whatever the appropriate class is. This division is here because... when Unity tosses everything into one Resources folder, it would be really annoying to tell the different classes apart without the subfolders. I also don't want this interfering with other assets that might use the Resources folder.
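So these two layouts load identically (the second file name is just an example name):

```
Assets/Clavian/SuperTextMesh/Resources/STMWaves/default.asset
Assets/MyAssets/Resources/STMWaves/myCustomWave.asset
```

As long as the subfolder matches the class name ("STMWaves" here), STM will pick the asset up from either location.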

You're free to delete the sample effect files, by the way! None of them are needed except the "default" ones.


***


Anyway, I hope that helps. For now, you might be able to extend STM using preparsing or the OnPrintEvent() for image effects and audio, respectively.

Last-minute thought about randomized colours tho: Try using a rainbow gradient, and try turning off "smooth gradient" and setting it to repeat often. That should give a randomized colour effect pretty quickly.

Hi ! Thank you so much for your reply.

I must have somehow skipped over the "Silhouette" option ! Thank you for pointing it out. I guess my skimming over the docs is too 'skimmy' ;P

==

Thank you for your consideration of sounds with quads ! I understand at the moment they're a strange amendment to the rest of the text system, but I look forward to their expansion.

==

With STMVoices, you can define a set of audio clips as STMAudioClips and call it within the voice (as is done with 'royalty' and 'typewriter'), but you are not able to define separately what sound or text effect goes to what character; it is chosen randomly. To my knowledge, you can put single characters or similar in Resources/STMAutoClips as an STMAudioClips asset and it will read out that way, but it overrides any other voice (at least, I presume). Thus in a single voice I can't, say, have each character read out in an Animal Crossing style and have that voice change per NPC (or such) without, possibly, very hack-y and limited scripting that changes the pitch or whatever set of sounds should be played. Effectively my solution is a dictionary where the defined element is the text character (or, expanding on the earlier ideas, a quad) and the definition would include the STMAudioClips set and whatever effects (similar to how you define them in STMVoices already). An example would be defining "a" with "<audioClips=audio-for-a><c=color-for-a>a</c>" (though this is admittedly inefficient).
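To sketch what I mean (all of the type and field names below are made up by me, not existing STM classes):

```csharp
using UnityEngine;

//hypothetical per-character definition -- STM has nothing like this today
[System.Serializable]
public class CharacterReadout
{
    public string character;  //e.g. "a", or (ideally) a quad name
    public AudioClip[] clips; //the sound set to pick from for this character
    public string effectTags; //e.g. "<c=color-for-a>" to wrap around it
}

public class PerCharacterVoice : ScriptableObject
{
    //one entry per character this voice defines
    public CharacterReadout[] readouts;
}
```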

This is all a bit difficult to visualise, much less explain adequately. However, I think you understand the idea or how to tackle it from your additional comment. I may try looking into preparsing to find a solution otherwise.

==

Thank you for the clarification ! With that understanding, I think I'll be able to organise stuff better. And I know that I can delete (most) everything that isn't needed; I like having them around, especially for reference and memory-jogging.

==

I'll definitely look into all this. It's a bit complicated (especially to make it more all-purpose) but I think it'll work out. Thanks for all your help !

Developer

Hey!


STM audio clips and STM sound clips are actually different things. Sound clips are the same as auto clips, but you can call them with a tag, so I think that's what you're looking for. (There's a typewriter example in there that puts a "ding" sound when return is pressed)


Would it be possible to tell me the exact effect you want your text to do? Even before I add quads to the auto clips/sound clips character list, the effect you're after might already be possible.

(2 edits)

Hello!

You know, I have no idea how I missed "STM Sound Clips" as a feature, but that may work for the audio portion of things ! At least for non-quad text. As for the graphics portion with quads, let's say I want to map all this to a kind of "voice"...

So instead of doing "<c=secret-color-CHMRW><q=secret-glyph-FGHIJ></c><c=secret-color-EJOTY><q=secret-glyph-ABCDE></c>" (produces 'HE' in 'HELLO WORLD') etcetera in the text inspector I could instead map it to a kind of "voice" that automates the colour assignment and glyph appearance. In this example, the glyphs could be animated for a "hand-drawn" effect or equate to garbled "dream" speech. Then in addition (or as a replacement to animated glyphs) they could be animated however they might for read-out (wiggling, jiggling, squishing, etc). These effects in combination with the STMSoundClips would be perfect for earlier-mentioned scenarios.

EDIT: I am noticing that using the context menu to create a Sound Clip Data asset seems to create an 'STMAutoClipData' instead of an 'STMSoundClipData'. Additionally, there's no 'Auto Clip Data' in the context menu to choose from. All other types seem to create the proper data.

Developer (4 edits)

Now that I see it laid out this way, I think Pre-parsing might be the way to go for your solution.

You'd have to set up a script that interprets your text, maybe something like this:

public void Parse(STMTextContainer x)
{
    //go thru entire string
    for(int i=0; i<x.text.Length; i++)
    {
        string replaceValue = x.text[i].ToString(); //default value
        //replace specific characters with sequences
        switch(x.text[i])
        {
            case 'A': replaceValue = "<c=cyan><q=CIRCLE>"; break;
            case 'B': replaceValue = "<c=cyan><q=CROSS>"; break;
            //etc etc...
        }
        //remove original character
        x.text = x.text.Remove(i, 1);
        //replace with sequence
        x.text = x.text.Insert(i, replaceValue);
    }
}


That way, when you send the text "HELLO WORLD" to a mesh, it'll just convert it to quads itself! If you need help writing this code, just let me know.



Also oops, looks like that's a typo... On line 8 of STMAutoClipData.cs, I call it "Sound Clip Data" instead of "Auto Clip Data". I'll make sure this is changed in the next update! That said, you can create new Auto Clips thru the text data inspector. (The menu that shows up when you click the [T] in the top right of any super text mesh inspector. It'll be under "Inline > Sound Clips" or "Automatic > Auto Clips".)

Developer

Managed to code it in; auto delays, auto clips, and sound clips all work together with quads now. If that means everything is solved, I'll publish this update, and I can email you a build.

(5 edits)

Hey ! Apologies for the late response.

So glad to hear about the changes ! I'd love to try out a build ! After checking over everything I mentioned, I think that should be it, aside from what effectively amounts to native pre-parsing (which I imagine would be hell to put in and largely unnecessary; I wonder if it'd be worthwhile adding the ability for voices to have a pre-parsing script attached to them, to automate (more) things for each voice ?). I'll give pre-parsing via script a try this week as well ! Thank you for all your help.

Edit: I also noticed you can't cancel voices in the same line -- you can start a new one, but effects of the previous voice that aren't overwritten (such as autoclips (e.g. newline)) will still persist. A feature like this would allow for easier pre-parsing and mixing of voices for dramatic effect. I think I would appreciate some help with preparsing this, especially to ignore event calls like voices, sounds, etc. Also, in your example, instead of comparing the iteration index against the container text's length, it should compare against the STM object's "_text" (otherwise it'll loop indefinitely or never iterate more than once).

Developer

Hey,


A tag that uses a custom parser isn't the worst idea, but preparsing already allows for full text customization with custom tags. I'll see what I can do, though.


Yeah, the voices are just a collection of multiple tags. <v=myVoice> just puts <c=myColor><w=myWave><etc> into a string. I can't believe I don't have some type of <clearAllTags> tag, yet. That would solve that, so I'll get on it.


Also yeah, I just wrote that sample code as pseudo code right in browser, I didn't test it.

(1 edit)

Hey !

I think you may be able to use a custom e2 event for inline multilayer parsing (a layer of parsing on top of the already existing parser), but I haven't tried it out yet. If it works, you may be able to get around it by just doing that.

I think a closing tag for voices would be the most useful for multiple effect stuff (since voices can predefine other effects) but a clear tag would be rad as well.

Yeah I knew it was just pseudo-code but having never messed with STM code-wise it was a bit of a brain jog trying to figure out just what exactly needed to be changed. :S Probably shouldn't be up at 3 AM trying to do all that.

Thank you for all your help !

Edit: Doesn't look like an e2 event works for multi-layer parsing, but you may be able to prove otherwise.

Developer (1 edit)

Found a bug with it, but the tag </v> should cancel all tags already. I'll publish another update that fixes this.


I thought about using custom events for this, but the usage of events is very different from what we're after. I really think it'll be better to use preparsing.


I wrote up some working code that does what you need:


using UnityEngine;
using System.Collections;
public class STMPreparse3 : MonoBehaviour {
    public string textTag = "transcribe";
    public void Parse(STMTextContainer x)
    {
        string startTag = "<" + textTag + ">";
        string endTag = "</" + textTag + ">";
        int startingPoint = x.text.IndexOf(startTag);
        int endingPoint = startingPoint > -1 ? x.text.IndexOf(endTag, startingPoint) : -1; //get tag after starting tag point
        //optional, where this tag ends
        if(endingPoint == -1)
        {
            endingPoint = x.text.Length;
        }
        else
        {
            //remove tag
            x.text = x.text.Remove(endingPoint, endTag.Length);
            //ending point is already accurate
        }
        //if this tag exists in STM's string...
        if(startingPoint > -1)
        {
            //remove tag
            x.text = x.text.Remove(startingPoint, startTag.Length);
            //push backwards
            endingPoint -= startTag.Length;
            //actually modify text
            Replace(x, startingPoint, endingPoint);
        }
    }
    void Replace(STMTextContainer x, int startingPoint, int endingPoint)
    {
        //int originalLength = x.text.Length;
        int skippedChars = startingPoint;
        //go thru string
        for(int i=startingPoint; i<endingPoint; i++) //for each letter in the original string...
        {
            string replaceValue = x.text[skippedChars].ToString(); //default value
            //replace specific characters with sequences
            //for this example, compare all letters as uppercase letters
            switch(x.text[skippedChars].ToString().ToUpper())
            {
                case "A": replaceValue = "aaa"; break;
                case "B": replaceValue = "bbb"; break;
                //etc etc...
            }
            //remove original character
            x.text = x.text.Remove(skippedChars, 1);
            //replace with sequence
            x.text = x.text.Insert(skippedChars, replaceValue);
            //1 by default, but adds up if more characters are inserted
            skippedChars += replaceValue.Length;
        }
    }
}

This code ignores other tags (which shouldn't overlap with this edge case anyway), but you can define a starting and ending point using <transcribe> and </transcribe> or whatever you change the textTag value to.

(2 edits)

Hey !

I don't think this is quite it. Let's say within the glyph text I want to add something like a wave effect, or override the colour of the glyph with another one (say the example 'fire'), or interject audioclips or a voice -- the pre-parser (understandably) will interpret something like "<transcribe><w>hello <c=fire>world</c></w>!" as "<@>$#&&X <%=O-▲#>@X▲&□</%></@>". However, these functions do (mostly) work as expected if they're placed outside the tag -- save for the colour shenaniganry, which outright disallows the <transcribe> from working from that point on. For example, if I do "<w><transcribe>Lorem Ipsum</transcribe><c=fire><transcribe>dolor sit</transcribe></c><transcribe> amet</transcribe>" it will end up as "&X▲#^ -*+~^ <transcribe>dolor sit</transcribe><transcribe> amet</transcribe>", with "<transcribe>dolor sit</transcribe>" in fire colour and the rest of the text in the default colour.

It's so close, yet so far... Perhaps, following the structure of ignoring the textTag, I could make a loop that iterates through tags to ignore ? Or maybe to automatically retrieve tags (e.g. tags with names attached such as '<c=fire>' and other custom attributes) thus ignoring those from replacement ?

Additionally, I wasn't able to use </v> for some reason -- it seemed to throw an error at the time, or not cancel voices at all, and it does not cancel tags as far as I can tell, so I concluded it didn't exist.

I am grateful for all your work and help so far; I've learned a lot on how this all works. If only my needs weren't so complicated and specific !
Thank you

Developer

Hey,


Yes, in the current build on the asset store, </v> causes an error, but I'm currently waiting on the asset store to upload my fix to that. The change I made was that I changed line 1884 in SuperTextMesh.cs to "myInfo = new STMTextInfo(this);"


Here's a modified Replace() function for the above code that will work together with other tags:


void Replace(STMTextContainer x, int startingPoint, int endingPoint)
    {
        //int originalLength = x.text.Length;
        int skippedChars = startingPoint;
        bool parsingOn = true;
        //go thru string
        for(int i=startingPoint; i<endingPoint; i++) //for each letter in the original string...
        {
            
            string replaceValue = x.text[skippedChars].ToString(); //default value
            if(replaceValue == "<")
            {
                //turn off parsing
                parsingOn = false;
            }
            else if(replaceValue == ">")
            {
                //turn back on
                parsingOn = true;
            }
            //is this a letter that should be replaced?
            if(parsingOn)
            {
                //replace specific characters with sequences
                //for this example, compare all letters as uppercase letters
                switch(x.text[skippedChars].ToString().ToUpper())
                {
                    case "A": replaceValue = "aaa"; break;
                    case "B": replaceValue = "bbb"; break;
                    //etc etc...
                }
                //remove original character
                x.text = x.text.Remove(skippedChars, 1);
                //replace with sequence
                x.text = x.text.Insert(skippedChars, replaceValue);
            }
            //1 by default, but adds up if more characters are inserted
            skippedChars += replaceValue.Length;
        }
    }

Hey !

This seems almost perfect. After adding your code where '</v>' appears in my version of SuperTextMesh.cs (line 1870), voices close out properly, making no additional sound and not closing out the '<transcribe>' effect with them. I have not tested a voice and colour combination at this time to gauge whether '</v>' would reset the colour, though it doesn't appear so, as the glyphs are all coloured as they should be. So that's a success !

However, the effect with '<transcribe>' being escaped is still present. Here is what the line currently looks like:
<w><transcribe>Lorem ipsum</transcribe><c=fire><transcribe> dolor sit</transcribe></c><transcribe> amet</transcribe>, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</w>

I have noticed that if I put '<c=fire>' within the next opening '<transcribe>' tag (now reading "[..]<transcribe>Lorem ipsum</transcribe><transcribe> <c=fire>dolor sit</transcribe></c>[..]"), the '<transcribe>' effect is still escaped, with 'dolor sit <transcribe>' now having the fire effect. This means the pre-parser only checks for the first instance of '<transcribe>' and '</transcribe>' without amending the other calls.

Otherwise, without an intermediary close of '<transcribe>' and merely putting '<c=fire>' and '</c>' around some other glyphs, they result in the expected behaviour (which is essentially no effect, since the '<transcribe>' effect overrides it with its own colour definition, as expected).

While I can certainly work around this for properly-emphasised effects (and properly-written dialogue), it does make me wonder if there'd be some way to override the next colour definition (perhaps by adding an attribute like "overrideNext=true" to a colour ? e.g. <c=fire overrideNext=true>). However, if STM effects like colours and voices allow spaces in their names instead of using them as a separator, this would become problematic. Just a thought problem !

I think with this, I am almost fully satisfied with the work; it would just be the multiple '<transcribe>' calls ability that would bring it all together. Thank you so much for all your help !

Developer (1 edit)

Hey,


The above preparsing code only searches for <transcribe> once in a mesh, since I figured you'd have a single character talking this way and wouldn't need it to happen multiple times. Here's a modified Parse function that loops until all <transcribe> tags are cleared:


public void Parse(STMTextContainer x)
    {
        string startTag = "<" + textTag + ">";
        string endTag = "</" + textTag + ">";
        int startingPoint;
        int endingPoint;
        do
        {
            startingPoint = x.text.IndexOf(startTag);
            endingPoint = startingPoint > -1 ? x.text.IndexOf(endTag,startingPoint) : -1; //get tag after starting tag point
            //optional, where this tag ends
            if(endingPoint == -1)
            {
                endingPoint = x.text.Length;
            }
            else
            {
                //remove tag
                x.text = x.text.Remove(endingPoint, endTag.Length);
                //ending point is already accurate
            }
            //if this tag exists in STM's string...
            if(startingPoint > -1)
            {
                //remove tag
                x.text = x.text.Remove(startingPoint, startTag.Length);
                //push backwards
                endingPoint -= startTag.Length;
                //actually modify text
                Replace(x, startingPoint, endingPoint);
            }
        }
        while(startingPoint > -1);
    }

Hey !

Thank you SO MUCH for all this. I think this shall work fantastically !

It's been absolutely fun seeing how all this works and getting help with something so absurd (but possibly versatile ?!). Thank you so much for everything !

I think, aside from the to-be audio readouts of quads, everything is done ! Can't wait for the next update and new features and improvements. You seriously rock. I gave a lot of this my own try between replies and it's been cool to see different approaches and solutions to problems. I think I'll be using this all to make some powerful games in the future. ✨✨

Thank you again. Sorry this was such a hassle ! If you have any more questions or otherwise, I'm all ears, and I'll be eager to give you the same when I have them again.
Good luck and take care

Developer

No problem! If you want the build right now, drop me an email and I'll give you the .unitypackage. It'll be at least another 2 days before the patch goes up, as the asset store is giving me some strange issues with the automatic vetting process.

I think I can wait for the asset store patch in the meantime to spare the trouble, but I'll let you know if I change my mind ! Thank you so much.