
Recent community posts

Very cute novel. Absolutely in love with all the characters. Every scene I had a new favourite !!!

This is a very lovely game with a wonderful concept -- definitely kept me engaged. Played all the way to the end ! My only criticism for this as a jam game is that the difficulty between puzzles isn't exactly consistent or linear -- certain boards were just too easy despite the implied challenge of increased tiles. Otherwise, it's well-balanced -- it never felt like I was being cheated of a win or wandering aimlessly. The commentary and flavour text were also very endearing and provide just enough insight into each puzzle to give you an idea without outright hinting at what should be done to solve it.

Overall, I'd rate it a 4.7/5 as a puzzle game !!

I think I can wait for the asset store patch in the meantime to spare the trouble, but I'll let you know if I change my mind ! Thank you so much.

Hey !

Thank you SO MUCH for all this. I think this shall work fantastically !

It's been absolutely fun seeing how all this works and getting help with something so absurd (but possibly versatile ?!). Thank you so much for everything !

I think, aside from the to-be audio readouts of quads, everything is done ! Can't wait for the next update and new features and improvements. You seriously rock. I gave a lot of this my own try between replies and it's been cool to see different approaches and solutions to problems. I think I'll be using this all to make some powerful games in the future. ✨✨

Thank you again. Sorry this was such a hassle ! If you have any more questions or otherwise, I'm all ears, and I'll be eager to give you the same when I have them again.
Good luck and take care

Hey !

This seems almost perfect. After adding your code where '</v>' appears in my version of SuperTextMesh.cs (line 1870), voices close out properly, making no additional sound and without closing out the '<transcribe>' effect with them. I have not tested a voice and colour combination at this time to gauge whether '</v>' would reset the colour, though it doesn't appear to, as the glyphs are all coloured as they should be. So that's a success !

However, the issue with '<transcribe>' being escaped is still present. Here is what the line currently looks like:
<w><transcribe>Lorem ipsum</transcribe><c=fire><transcribe> dolor sit</transcribe></c><transcribe> amet</transcribe>, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.</w>

I have noticed that if I put '<c=fire>' within the next opening '<transcribe>' tag (now reading "[..]<transcribe>Lorem ipsum</transcribe><transcribe> <c=fire>dolor sit</transcribe></c>[..]"), the '<transcribe>' effect is still escaped, with 'dolor sit <transcribe>' now having the fire effect. This means the pre-parser only checks for the first instance of '<transcribe>' and '</transcribe>' without amending the other calls.
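To illustrate what I'd expect from that step -- this is only a loose sketch of the idea, not based on STM's actual internals, and the helper names are made up -- every '<transcribe>'/'</transcribe>' pair would get processed rather than only the first one:

using System.Text.RegularExpressions;

static class TranscribeSketch
{
    // Replace EVERY <transcribe>...</transcribe> segment with transcribed text,
    // leaving everything outside those segments (including other tags like <c=fire>) untouched.
    public static string PreParse(string text)
    {
        return Regex.Replace(
            text,
            @"<transcribe>(.*?)</transcribe>",
            m => Transcribe(m.Groups[1].Value),
            RegexOptions.Singleline);
    }

    // Stand-in for the actual glyph substitution.
    static string Transcribe(string inner)
    {
        var sb = new System.Text.StringBuilder();
        foreach (char c in inner)
            sb.Append(char.IsLetter(c) ? '#' : c);
        return sb.ToString();
    }
}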

Otherwise, without an intermediary close of '<transcribe>' and merely putting '<c=fire>' and '</c>' around some other glyphs, the result is the expected behaviour (which is essentially no effect, since the '<transcribe>' effect overrides it with its own colour definition as expected).

While I can certainly work around this for properly-emphasised effects (and properly-written dialogue), it does make me wonder if there'd be some way to override the next colour definition (perhaps by adding an attribute like "overrideNext=true" to a colour ? e.g. <c=fire overrideNext=true>). However, if STM effects like colours and voices allow spaces in their names instead of using them as a separator, this would become problematic. Just a thought problem !

I think with this, I am almost fully satisfied with the work; it's just the ability to make multiple '<transcribe>' calls that would bring it all together. Thank you so much for all your help !


Hey !

I don't think this is quite it. Let's say within the glyph text I want to add something like a wave effect, or override the colour of the glyph with another one (say the example 'fire'), or interject audioclips or a voice -- the pre-parser (understandably) will interpret something like "<transcribe><w>hello <c=fire>world</c></w>!" as "<@>$#&&X <%=O-▲#>@X▲&□</%></@>". However, these functions do work (mostly) as they would be expected to if they're placed outside the tag -- save for the colour shenaniganry, which outright disallows the <transcribe> from working from that point on. For example, if I do "<w><transcribe>Lorem Ipsum</transcribe><c=fire><transcribe>dolor sit</transcribe></c><transcribe> amet</transcribe>" it will end up as "&X▲#^ -*+~^ <transcribe>dolor sit</transcribe><transcribe> amet</transcribe>", with "<transcribe>dolor sit</transcribe>" in fire colour and the rest of the text in the default colour.

It's so close, yet so far... Perhaps, following the structure of ignoring the textTag, I could make a loop that iterates through tags to ignore ? Or maybe automatically retrieve tags (e.g. tags with names attached such as '<c=fire>' and other custom attributes) and thus exclude those from replacement ? Something like the rough sketch below is what I have in mind.
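Purely as a sketch of the idea (not anything from STM; the names and the stand-in substitution are made up), the replacement could copy anything inside angle brackets through untouched and only substitute the plain characters:

static class TagAwareTranscribe
{
    public static string Run(string input)
    {
        var output = new System.Text.StringBuilder();
        bool insideTag = false;
        foreach (char c in input)
        {
            if (c == '<') insideTag = true;

            // Tags (and punctuation) are copied through as-is; only plain
            // letters get swapped for a stand-in "glyph" character.
            if (insideTag || !char.IsLetter(c))
                output.Append(c);
            else
                output.Append('#');

            if (c == '>') insideTag = false;
        }
        return output.ToString();
    }
}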

Additionally, I wasn't able to use </v> for some reason -- it seemed to either throw an error at the time or not cancel voices at all, and it doesn't cancel tags as far as I can tell, so I concluded it didn't exist.

I am grateful for all your work and help so far; I've learned a lot about how this all works. If only my needs weren't so complicated and specific !
Thank you


Hey !

I think you may be able to use a custom e2 event for inline multilayer parsing (a layer of parsing on top of the already existing parser), but I haven't tried it out yet. If it works, you may be able to get around it just by doing that.

I think a closing tag for voices would be the most useful for multiple effect stuff (since voices can predefine other effects) but a clear tag would be rad as well.

Yeah, I knew it was just pseudo-code, but having never messed with STM code-wise, it was a bit of a brain jog trying to figure out just what exactly needed to be changed. :S Probably shouldn't be up at 3 AM trying to do all that.

Thank you for all your help !

Edit: Doesn't look like an e2 event works for multi-layer parsing, but you may be able to prove me wrong.


Hey ! Apologies for the late response.

So glad to hear about the changes ! I'd love to try out a build ! After checking over everything I mentioned, I think that should be it, aside from what effectively amounts to native pre-parsing (which I imagine would be hell to put in and largely unnecessary; I wonder if it'd be worthwhile adding the ability for voices to have a pre-parsing script attached to them to automate (more) things for each voice ?). I'll give pre-parsing via script a try this week as well ! Thank you for all your help.

Edit: I also noticed you can't cancel voices in the same line -- you can start a new one, but effects of the previous voice that aren't overwritten (such as autoclips (e.g. newline)) will still persist. A feature like this would allow for easier pre-parsing and mixing of voices for dramatic effect. I think I would appreciate some help with pre-parsing this, especially to ignore event calls like voices, sounds, etc. Also, in your example, instead of comparing against the iteration length of the container text, it should compare to the STM object's "_text" (otherwise it'll loop indefinitely or never iterate more than once).


Hello!

You know, I have no idea how I missed "STM Sound Clips" as a feature, but that may work for the audio portion of things ! At least for non-quad text. As for the graphics portion with quads, let's say I want to map all this to a kind of "voice"...

So instead of typing out "<c=secret-color-CHMRW><q=secret-glyph-FGHIJ></c><c=secret-color-EJOTY><q=secret-glyph-ABCDE></c>" (produces 'HE' in 'HELLO WORLD') and so on in the text inspector, I could instead map it to a kind of "voice" that automates the colour assignment and glyph appearance. In this example, the glyphs could be animated for a "hand-drawn" effect or equate to garbled "dream" speech. Then, in addition (or as a replacement for animated glyphs), they could be animated however they might be for read-out (wiggling, jiggling, squishing, etc.). These effects in combination with the STMSoundClips would be perfect for the earlier-mentioned scenarios.
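To sketch what that automation might look like via pre-parsing -- purely illustrative, not an existing STM feature, and the helper name is made up (the asset names just follow the example above) -- a per-character lookup could expand plain text into the colour + quad tags STM already understands:

using System.Collections.Generic;
using System.Text;

public static class SecretGlyphPreParser
{
    // Hypothetical per-character mapping to whatever colour/quad assets exist.
    static readonly Dictionary<char, string> glyphTags = new Dictionary<char, string>
    {
        { 'H', "<c=secret-color-CHMRW><q=secret-glyph-FGHIJ></c>" },
        { 'E', "<c=secret-color-EJOTY><q=secret-glyph-ABCDE></c>" },
        // ...one entry per supported character...
    };

    // Expands plain text into tagged text before handing it to STM.
    public static string Expand(string plainText)
    {
        var sb = new StringBuilder();
        foreach (char c in plainText)
            sb.Append(glyphTags.TryGetValue(c, out var tag) ? tag : c.ToString());
        return sb.ToString();
    }
}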

EDIT: I am noticing that using the context menu to create a Sound Clip Data asset seems to create an 'STMAutoClipData' instead of an 'STMSoundClipData'. Additionally, there's no 'Auto Clip Data' in the context menu to choose from. All other types seem to create the proper data.

Hi ! Thank you so much for your reply.

I must have somehow skipped over the "Silhouette" option ! Thank you for pointing it out. I guess my skimming over the docs is too 'skimmy' ;P

==

Thank you for your consideration of sounds with quads ! I understand at the moment they're a strange amendment to the rest of the text system, but I look forward to their expansion.

==

With STMVoices, you can define a set of audio clips as STMAudioClips and call it within the voice (as is done with 'royalty' and 'typewriter'), but you are not able to define separately what sound or text effect goes with what character; it is chosen randomly. To my knowledge, you can put single characters or similar in Resources/STMAutoClips as an STMAudioClips asset and it will read out that way, but it overrides any other voice (at least, I presume). Thus, in a single voice, I can't, say, have each character read out in an Animal Crossing style and have that voice change per NPC (or such) without, possibly, very hack-y and limited scripting that changes the pitch or whatever set of sounds should be played. Effectively, my solution is a dictionary where the key is the text character (or, expanding on the earlier ideas, a quad) and the value would include the STMAudioClips set and whatever effects (similar to how you define them in STMVoices already). An example would be defining "a" with "<audioClips=audio-for-a><c=color-for-a>a</c>" (though this is admittedly inefficient).
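As a loose sketch of that dictionary (only illustrative; the wrapper type and helper are made up around the example above, and the asset names are placeholders), the per-character definition might look like:

using System.Collections.Generic;

public class CharacterReadout
{
    public string audioClips; // name of the STMAudioClips set, e.g. "audio-for-a"
    public string color;      // name of the colour/effect, e.g. "color-for-a"
}

public class ReadoutMap
{
    // Key: the text character (or a quad, per the earlier idea); value: its readout definition.
    readonly Dictionary<char, CharacterReadout> map = new Dictionary<char, CharacterReadout>
    {
        { 'a', new CharacterReadout { audioClips = "audio-for-a", color = "color-for-a" } },
    };

    // Wraps 'a' into "<audioClips=audio-for-a><c=color-for-a>a</c>", as in the example above.
    public string Wrap(char c) =>
        map.TryGetValue(c, out var r)
            ? $"<audioClips={r.audioClips}><c={r.color}>{c}</c>"
            : c.ToString();
}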

Even with a rough sketch, this is all a bit difficult to visualise, much less explain adequately. However, I think you understand the idea or how to tackle it from your additional comment. I may otherwise try looking into pre-parsing to find a solution.

==

Thank you for the clarification ! With that understanding, I think I'll be able to organise stuff better. And while I know I can delete (most) everything that isn't needed, I like having them around, especially for reference and memory jogging.

==

I'll definitely look into all this. It's a bit complicated (especially to make it more all-purpose) but I think it'll work out. Thanks for all your help !


Hello !

I've come across a niche feature that, from what I can find (without messing with STM's code myself), isn't available.

In <example game> I have certain glyphs that are displayed via quads (we can use the traditional triangle, circle, square, and cross shapes for this example). These glyphs are white (or black) so as to fit alongside other text. I would like to colour this quad-based text. This is where we run into the first issue: Quads cannot be coloured with a text effect colour in the text inspector. So even if I have "<c=red><q=cross><q=triangle><q=circle></c>" it will not render with a red cross, red triangle, or red circle. Similarly, it does not work for a single quad instead of a sequence of quads. And further it would not work with animated quads. Suggestion: Allow quad colours to be multiplied by a colour called in the text inspector.
(Quad colours can be multiplied in the text inspector by tagging the quad asset as "Silhouette" in the asset inspector; thanks Kai !)

Well, maybe we can't call them in the text inspector, but how about in the asset itself ? Since these glyphs are just shapes, maybe we can colour them there ? I am willing to make a bunch of assets just for those shape-glyphs. Unfortunately, STM doesn't support colour for a texture in the asset either. Simply put, even if a texture can be assigned to an STMQuad, an additional colour cannot be defined for it. This means that in order to change the colour of a glyph, I would have to render it in that exact colour, and for any change I later make I would have to re-render that image with the updated colour. What a hassle ! Suggestion: Allow quad colours to be multiplied by a colour in the asset inspector.
(Quad colours cannot be determined by the asset, but pre-parsing can automate colour assignment; thanks Kai ! I learned something entirely new.)

Well, shoot. Maybe I can just export the glyphs in whatever colours I need and make a kind of "voice" using them ? If I can't colour them in the inspector or the asset, maybe I can have something that automatically draws them in those colours just based on what I type ! Alas, while STM does have a "voice" feature for audio clips, there doesn't appear to be an equivalent for quads. This is such a loss of potential ! You could effectively make your own hieroglyph system for NPCs or such with a system like that. Or hide secrets in plain sight ! Maybe even give more creative freedom with text in a character map to allow something like animated text from images ! This could let more games emphasise a "hand-drawn" feel or other atmospheric effects (typically seen in "memory" or "dream" sequences in games as well). You could even combine this feature with existing audio 'voices' by assigning audio clips to each glyph variation as well. Suggestion: Create the ability to use a character map for text as a new kind of "voice", and allow for audio voices to read out those characters.
(While originally referring to having colour assignments as part of a voice definition (similar to how STMSoundClips are done), pre-parsing will solve this issue. This also touched on having quads be audible, now a planned feature !)

And, well, maybe that last one is a bit of work. Not just for Kai, but for users as well ! Managing glyph variations and colours and audio read-outs just for what could probably amount to only one NPC in an entire game using it ? What an edge case. Maybe in the end I should relegate these glyphs to a whole new custom font, something that for-sure works with colours and effects and audio read-outs. I'd still have to manually type out each colour or effect for a character if I want them to be consistent (or even randomised). Maybe this could all be solved by a different approach. Maybe what we need is not a new voice feature, but an expansion of the current one. Voices already support defining text and effects for a 'voice'. How about instead we allow a voice to define what effects a character has ? We can already set up voices to speak out a sound based on what character it is (though it comes at the cost that all other read-outs of those characters, regardless of voice, will say them too). So why not extend that feature to quads and colours and fonts in and of themselves ? This makes text tool management that much more powerful. Setting up a voice to allow specific definitions for audio and/or glyphs and/or colour per character, based on what voice is being read out, makes management of resources that much easier and text that much more expressive. Suggestion: Allow an STMAudioClips group, STMColours/STMGradients/STMMaterials, and/or STMQuads to be defined per-character in an STMVoices asset.
(This can all be done via pre-parsing and the planned quad readouts !)

I think that is all for my over-the-top suggestions. I have several friends (and have seen many strangers) who long for abilities like this in text managers, especially coming from GML, where character maps are used much more than fonts, or fonts are converted to character maps. With that, along with basic colour features being absent from quads, some tools in this large and amazing asset seem to fall just short of "a master of its craft".

Lastly, I have an inquiry. In the past, I have had a folder separate from STM's 'Resources' folder where I would put my own STM assets, but they would not get read or recognised by STM there. Does STM search project-wide for STM assets, or does it only pull from its own folder ? While sample resources are always appreciated, other user-made resources can get lost among them and become difficult to find without separate organisation. If this ability is not in STM, it would be greatly appreciated and would mesh better with the rest of Unity.
(Organisation of extraneous assets outside /Clavian can be done by creating another 'Resources' folder and putting assets in a folder named after the asset type; a bit of a roundabout, but just as effective !)

Thank you !

Oh, this is very cute ! Very sweet message. The music feels a bit out of place, but does emphasise the melancholy and bittersweetness. Thank you !

Oh ! Currently, both are set to 'true'.

If Ignore Time Scale is false, but Remember Read Position is true...
-Text readout pauses with editor pause.

If Ignore Time Scale is true, but Remember Read Position is false...
-Behaviour similar to when both are true occurs
-All sounds play rapidly when catching up to current readout position

If Ignore Time Scale and Remember Read Position are false...
-Text readout pauses with editor pause.

When toggling them during play mode...
-Works as described for each situation
-When Ignore Time Scale is toggled, the sounds from readout will re-play rapidly to catch up with the current readout position, but continue from then on as described for that situation

So it looks like it's mostly a quirk of sorts from Ignore Time Scale. Ideally, I think it should pause readout (as when Ignore Time Scale is false) when it's an editor pause, but respect the Time Scale reference otherwise. As it's not a major bug, it's low priority and entirely optional.

However, another quirk I've noticed is that when pausing, switching tasks, then returning, the text renderer fails for a frame, giving a null material. This appearance is kept for the duration of the pause when Ignore Time Scale is true. When switching back and forth again, the material is restored as if unpaused. No settings or data are lost during this, so it's also low priority imo.


Thank you ! I'll consider this 'solved' but also sort of as a bug report ? Good luck ! Thanks for all your help. <3



The gif attached above demonstrates the effect. (Alternative link here: https://i.imgur.com/PYTdVmI.gif)

While a line of text from the Dialogue Sample (with the readout controls script attached) is being read, pausing the game in the editor does not halt the readout. While the graphics and sound halt, the text is still being read normally in the background. Unpausing the game in the editor will update, and the graphics will load at the expected 'point', as if the readout had been continuous. Everything else behaves as expected.

<3