<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Octanta Studio</title>
    <description>The latest articles on DEV Community by Octanta Studio (@octanta_studio).</description>
    <link>https://dev.to/octanta_studio</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2601387%2F62c2909b-014e-4c43-b7b2-4515772cee5c.png</url>
      <title>DEV Community: Octanta Studio</title>
      <link>https://dev.to/octanta_studio</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/octanta_studio"/>
    <language>en</language>
    <item>
      <title>Touch Effects in Mobile Games. Implementation via Unity Shaders + AI</title>
      <dc:creator>Octanta Studio</dc:creator>
      <pubDate>Tue, 07 Oct 2025 11:49:18 +0000</pubDate>
      <link>https://dev.to/octanta_studio/touch-effects-in-mobile-games-implementation-via-unity-shaders-ai-19f9</link>
      <guid>https://dev.to/octanta_studio/touch-effects-in-mobile-games-implementation-via-unity-shaders-ai-19f9</guid>
      <description>&lt;p&gt;Recently, our team at Octanta Studio released the &lt;a href="https://u3d.as/3C7x" rel="noopener noreferrer"&gt;Touch Effect System&lt;/a&gt; asset. Initially, we were looking for a way to implement a heat trail from touches, and the most optimized way to do this in Unity for mobile turned out to be through UI shaders.&lt;br&gt;


  &lt;iframe src="https://www.youtube.com/embed/rPmN9Ty4b8A"&gt;
  &lt;/iframe&gt;


&lt;/p&gt;

&lt;p&gt;The mechanism is as follows: the shader fully controls the behavior of the graphic effect, the material from the shader is applied to a UI particle, and the particle is reused at touch locations. Later, we realized that this solution is not only for trails but also for regular touch effects that do not require drawing, are flexible in customization, and reduce CPU load.&lt;/p&gt;
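&lt;p&gt;The reuse pattern described above can be sketched outside Unity. The following Python sketch is only an illustration of the idea (a fixed pool of particles carrying shader-style start-time/lifetime parameters); the class and field names are hypothetical, not the asset's actual API:&lt;/p&gt;

```python
# Hedged sketch of the particle-reuse mechanism described above.
# TouchParticlePool and its fields are illustrative names, not the asset's API.
class TouchParticlePool:
    def __init__(self, size, lifetime=2.0):
        self.lifetime = lifetime
        # each "particle" stands in for a pooled UI particle whose material
        # exposes _StartTime/_Lifetime, as in the shader later in the article
        self.pool = [{"start_time": -lifetime, "pos": (0, 0)} for _ in range(size)]
        self.next_index = 0

    def spawn(self, pos, now):
        # reuse the next slot in round-robin order instead of instantiating
        p = self.pool[self.next_index]
        p["start_time"] = now
        p["pos"] = pos
        self.next_index = (self.next_index + 1) % len(self.pool)
        return p

    def age(self, particle, now):
        # normalized age in [0, 1); values outside that range mean "invisible"
        return (now - particle["start_time"]) / self.lifetime
```

&lt;p&gt;The point is that the CPU only writes a position and a start time per touch; all visual animation is derived from that normalized age inside the shader.&lt;/p&gt;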

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5d9lcp7pwls168so1vw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5d9lcp7pwls168so1vw.png" alt="Shader-based touch effects"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Touch Effects are Needed
&lt;/h2&gt;

&lt;p&gt;At first glance, touch effects seem purely decorative, like a bow on a gift. In practice, they are one of the important details for improving UX, like any other VFX: the user gets immediate, tangible feedback on their actions. According to our observations, touch effects are especially common in Japanese F2P mobile games as a means of increasing engagement. In some games, monetization is built around "skins" for touch effects (e.g., Fruit Ninja). In others, they serve as a pleasant little addition that enhances immersion (e.g., touching the ground in Gwent). Here are some interesting studies on this topic:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;On how tactile feedback (haptics + visual effects) solves the problem of the "incorporeality" of games, reducing the distance between the person and the screen. &lt;a href="https://proa.ua.pt/index.php/jdmi/article/download/30771/21294" rel="noopener noreferrer"&gt;Link 1&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On the positive impact of tactile feedback on reward perception and behavioral motivation. Although the article is more about vibration as a means of encouraging sales, this also applies to visual effects. &lt;a href="https://academic.oup.com/jcr/advance-article/doi/10.1093/jcr/ucaf025/8120234" rel="noopener noreferrer"&gt;Link 2&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On how the audio-visual experience increases engagement and can stimulate clicks/purchases - a common practice in live-service/gacha games. &lt;a href="https://www.researchgate.net/publication/333993897_Exploring_the_game-of-chance_elements_in_Japanese_F2P_mobile_games_Qualitative_analysis_of_paying_and_non-paying_player%27s_emotions" rel="noopener noreferrer"&gt;Link 3&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;On how visual effects adjust other methods of responding to user input. &lt;a href="https://arxiv.org/pdf/1902.07071" rel="noopener noreferrer"&gt;Link 4&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zc9awndbrs26j2vlibn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4zc9awndbrs26j2vlibn.png" alt="Immersive touch effects"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Besides direct dialogue with the user through the screen, visual feedback can be used for other development purposes: for example, for recording mobile screens during testing; for creating tutorial hints that simulate touches on necessary buttons; for replacing heavy world effects.&lt;/p&gt;

&lt;p&gt;It is important that touch effects do not slow down the game or interfere with UI readability, so priority #1 when creating the Touch Effect System was optimization, and #2 was flexibility in customization. In addition to the prepared shaders and settings, we provide a guide for other developers on how to independently implement any touch effect using AI.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why UI Shaders are the Best Solution for Touch Effects in Unity
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/lFnJ8sLvVpc"&gt;
  &lt;/iframe&gt;


 &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Visual Flexibility&lt;/strong&gt;. Effects - gradients, glows, animations - are calculated mathematically. Colors, highlights, blur, or outlines can be adjusted with sliders. There is no need to draw textures or keep packs of them in the project.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Minimal Size&lt;/strong&gt;. The shader for the circle example weighs ~6 KB, the material ~1.3 KB. That's all that's needed for the effect.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;. Animations are computed inside the shader, without changing Transform or Canvas components. This reduces CPU load. Relevant for mobile.&lt;/li&gt;
&lt;li&gt;Since it's UI, the effects work on top of any graphics, regardless of the pipeline (URP/HDRP/Built-in).&lt;/li&gt;
&lt;/ol&gt;
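&lt;p&gt;As an illustration of point 1, the glow layer used later in the article reduces to a few lines of slider-driven math. Here is a Python transcription of that smoothstep falloff (same formulas and default values as the circle shader below; purely a sketch, not shipped code):&lt;/p&gt;

```python
def smoothstep(edge0, edge1, x):
    # HLSL-style smoothstep: clamped cubic Hermite interpolation
    if edge0 == edge1:
        return 1.0 if x >= edge0 else 0.0
    t = min(max((x - edge0) / (edge1 - edge0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def glow_alpha(dist, glow_blur=0.3, glow_opacity=0.3, size_scale=1.0):
    # Soft radial falloff controlled entirely by two slider values;
    # no texture lookup is involved.
    glow_radius = 0.35 * size_scale
    glow_softness = 0.1 * glow_blur
    glow = 1.0 - smoothstep(glow_radius - glow_softness,
                            glow_radius + glow_softness, dist)
    return glow * glow_opacity
```

&lt;p&gt;Changing the blur or opacity slider only changes two constants in this formula, which is why no texture assets are needed for the effect.&lt;/p&gt;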
&lt;h2&gt;
  
  
  Writing Your Own Shader for a Touch Effect
&lt;/h2&gt;

&lt;p&gt;

  &lt;iframe src="https://www.youtube.com/embed/jkmtc2J5JRs"&gt;
  &lt;/iframe&gt;


 &lt;/p&gt;

&lt;p&gt;Below is an example shader for a basic circular touch effect. The repository also includes a more complex stain shader with random shape generation, and a complex animated shader that simulates touching a matrix; the last two are shown in the video.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Shader "UI/TouchPointCircle"
{
    Properties
    {
        // Core system properties - managed by TouchGlowUI script
        [HideInInspector] _MainTex ("Texture", 2D) = "white" {}
        [HideInInspector] _TimeNow ("Time Now", Float) = 0
        [HideInInspector] _StartTime ("Start Time", Float) = 0
        [HideInInspector] _Lifetime ("Lifetime", Float) = 2.0

        // Core animation properties
        [Toggle] _Scaling ("Disappearing Scaling", Float) = 0
        [Toggle] _Fading ("Disappearing Fading", Float) = 1

        // Core layer properties - solid center of the particle
        _CoreColor ("Core Color", Color) = (1, 1, 1, 1)
        _CoreSize ("Core Size", Range(0.1, 0.4)) = 0.25
        _CoreOpacity ("Core Opacity", Range(0.0, 1.0)) = 1.0

        // Glow layer properties - soft outer illumination
        _GlowColor ("Glow Color", Color) = (1, 1, 1, 1)
        _GlowBlur ("Glow Blur", Range(0.0, 1.0)) = 0.3
        _GlowOpacity ("Glow Opacity", Range(0.0, 1.0)) = 0.3

        // Ring layer properties - optional outer ring structure
        _RingColor ("Ring Color", Color) = (1, 1, 1, 1)
        _RingThickness ("Ring Thickness", Range(0.01, 0.1)) = 0
        _RingOpacity ("Ring Opacity", Range(0.0, 1.0)) = 0

        // Platform optimization and touch behavior properties
        [HideInInspector] _UseMobileOptimization ("Use Mobile Optimization", Float) = 0
        _OneTouchHoldAge ("OneTouch Hold Age", Range(0.0, 1.0)) = 0
        [Toggle] _HoldingForbidden ("Holding Forbidden", Float) = 0

    }

    SubShader
    {
        // UI transparency rendering configuration
        Tags { "RenderType"="Transparent" "Queue"="Transparent" "IgnoreProjector"="True" }
        Blend SrcAlpha OneMinusSrcAlpha
        Cull Off
        ZWrite Off
        ZTest LEqual

        Pass
        {
            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 2.0
            #include "UnityCG.cginc"

            struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; };
            struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

            // Shader property declarations
            sampler2D _MainTex; float4 _MainTex_ST;
            float _TimeNow, _StartTime, _Lifetime;
            fixed4 _CoreColor, _GlowColor, _RingColor;
            float _CoreSize, _CoreOpacity, _GlowBlur, _GlowOpacity, _RingThickness, _RingOpacity;
            float _Scaling, _Fading;
            float _UseMobileOptimization;

            v2f vert (appdata v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                // === PARTICLE LIFETIME MANAGEMENT ===
                float age = (_TimeNow - _StartTime) / _Lifetime;
                if (age &amp;lt; 0.0 || age &amp;gt;= 1.0) return float4(0, 0, 0, 0);

                // === COORDINATE SYSTEM AND DISTANCE CALCULATION ===
                // Circle uses standard Euclidean distance from center
                float distance = length(i.uv - 0.5);

                // === SIZE SCALING OVER LIFETIME ===
                float sizeScale = (_Scaling == 0) ? 1.0 : 1.0 - age;

                // === LAYER GENERATION SYSTEM ===
                // Core layer - solid center with sharp edges
                float core = step(distance, _CoreSize * sizeScale);

                // Glow layer - soft outer illumination with blur control
                float glowRadius = 0.35 * sizeScale;
                float glowSoftness = 0.1 * _GlowBlur;
                float glow = 1.0 - smoothstep(glowRadius - glowSoftness, glowRadius + glowSoftness, distance);

                // Ring layer - optional outer ring structure
                float ringRadius = min(0.42 * sizeScale, 0.42);
                float ringInner = ringRadius - _RingThickness;
                float ringOuter = ringRadius;
                float ring = smoothstep(ringInner - 0.02, ringInner, distance) * 
                           (1.0 - smoothstep(ringOuter, ringOuter + 0.02, distance));

                // === FADE EFFECTS ===
                // Edge fade prevents artifacts at texture boundaries
                float2 edgeDistance = min(i.uv, 1.0 - i.uv);
                float edgeFade = smoothstep(0.0, 0.05, min(edgeDistance.x, edgeDistance.y));

                // Time fade creates natural particle death animation
                float timeFade = (_Fading == 0) ? 1.0 : 1.0 - age;

                // === LAYER COMPOSITION ===
                // Calculate alpha values for each layer
                float coreAlpha = core * _CoreOpacity * timeFade * edgeFade;
                float glowAlpha = glow * _GlowOpacity * timeFade * edgeFade;
                float ringAlpha = ring * _RingOpacity * timeFade * edgeFade;

                // === OPTIMIZED COLOR BLENDING ===
                // Determine layer dominance without branching using step functions
                float finalAlpha = max(max(coreAlpha, glowAlpha), ringAlpha);
                float coreWeight = step(max(glowAlpha, ringAlpha), coreAlpha);
                float ringWeight = step(glowAlpha, ringAlpha) * (1.0 - coreWeight);
                float glowWeight = (1.0 - coreWeight) * (1.0 - ringWeight);

                // Blend colors based on layer weights
                float3 finalColor = _CoreColor.rgb * coreWeight + 
                                  _RingColor.rgb * ringWeight + 
                                  _GlowColor.rgb * glowWeight;

                return float4(finalColor, finalAlpha);
            }
            ENDHLSL
        }
    }
    Fallback "UI/Default"
}

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
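&lt;p&gt;The branch-free layer blending at the end of the fragment shader can be verified with plain math. This Python transcription of the step()-based dominance selection uses the same formulas and tie-breaking as the shader (offered as a sanity check, not production code):&lt;/p&gt;

```python
def step(edge, x):
    # HLSL step(): 1.0 when x is at or above edge, else 0.0
    return 1.0 if x >= edge else 0.0

def layer_weights(core_a, glow_a, ring_a):
    # Mirrors the shader: exactly one weight ends up at 1.0, selecting the
    # layer with the highest alpha (core wins ties, then ring, then glow).
    core_w = step(max(glow_a, ring_a), core_a)
    ring_w = step(glow_a, ring_a) * (1.0 - core_w)
    glow_w = (1.0 - core_w) * (1.0 - ring_w)
    return core_w, glow_w, ring_w
```

&lt;p&gt;The weights always sum to 1, so the final color is a selection rather than a blend, which is why the shader can avoid if/else chains entirely.&lt;/p&gt;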



&lt;p&gt;&lt;a href="https://github.com/octantast/octantastudio_as_publisher/tree/main/TouchEffectSystem_examples" rel="noopener noreferrer"&gt;Get all examples via GitHub →&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To create your own touch effect based on these, provide all of them to your AI model as examples, then use a prompt like the one below, adding a description of the touch effect you need:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We are developing a Unity UI touch effect system. Your task is to create a new particle type based on the existing architecture. Work will be done sequentially, task by task, and must follow the established system design. The new particle must maintain the existing lifecycle, including initialization, update, and destruction logic. It should support movement disabling, edge margins, smooth transitions, and holding behavior. Visual disappearance must be controlled by the variables disappearing scaling/fading, which should directly affect the fade-out and scaling during disappearance. The _HoldingForbidden and _OneTouchHoldAge logic must be implemented identically to the samples (if true, the particle disappears at the place of appearance, if false, follows the input). All other variables should control visual parameters visible in the Inspector. For internal animations or randomness, if needed, use simple mathematical operations optimized for mobile devices. If the effect is initially invisible, it’s better to create a preview similar to the impact effect to make editing in the Inspector more visual and convenient. Be very careful with internal variables. Follow the existing samples as a reference. Start with a simple implementation. The effect should look like: [...]&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The general architecture is important:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The life cycle animates the touch effect from appearance to complete fade-out. Moving this logic into the shader means there is no need to animate (scale/fade) objects in the scene or to adjust complex and diverse material properties: only the lifetime is updated, and the particle animates itself according to the shader formula.&lt;/li&gt;
&lt;li&gt;The ability to disable movement determines whether the touch effect will follow the touch or appear and fade in one place independently of it.&lt;/li&gt;
&lt;li&gt;Margins from edges are needed so that the effect fits within the square UI particle and is fully visible.&lt;/li&gt;
&lt;/ul&gt;
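&lt;p&gt;The edge-margin idea from the last point can be illustrated in isolation. This Python sketch reproduces the shader's edgeFade computation: alpha drops to zero within a small margin of the square particle's UV borders, so the effect never clips at the quad's edge:&lt;/p&gt;

```python
def edge_fade(u, v, margin=0.05):
    # distance from the nearest edge of the unit UV square
    d = min(u, 1.0 - u, v, 1.0 - v)
    # smoothstep(0, margin, d) written out explicitly
    t = min(max(d / margin, 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)
```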

&lt;h2&gt;
  
  
  Testing
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfyawt6tof19b6iwgamo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjfyawt6tof19b6iwgamo.png" alt="Shader material for touch effects in Unity"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create a material from your new shader (Right-click &amp;gt; Create &amp;gt; Material).&lt;/li&gt;
&lt;li&gt;Assign this material to the GlowMaterial slot of the TouchGlowUI script (e.g., on a prefab like TouchCircle).&lt;/li&gt;
&lt;li&gt;Test in the scene.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The AI might not get it perfect on the first try. Iteration is key. Tweak your prompt and the description based on the results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat3z3b7v87hcv6v4y4uk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat3z3b7v87hcv6v4y4uk.png" alt=" "&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once the visual is right, explore the Input Controller's advanced settings:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Trigger effects only on UI elements or objects with specific tags.&lt;/li&gt;
&lt;li&gt;Call effects manually from your own scripts for specific game events.&lt;/li&gt;
&lt;li&gt;Pair the effect with a sound.&lt;/li&gt;
&lt;li&gt;Transform a one-touch effect into a Trail by emitting multiple particles in sequence - this is exactly how our heat trail is implemented.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg2z6vkj9pmism3jht5g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffg2z6vkj9pmism3jht5g.png" alt="Trail effects via mobile UI shaders"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Contacts
&lt;/h2&gt;

&lt;p&gt;Touch Effect System Asset page: &lt;a href="https://u3d.as/3C7x" rel="noopener noreferrer"&gt;https://u3d.as/3C7x&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you need:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A custom touch effect developed specifically for your project.&lt;/li&gt;
&lt;li&gt;Technical support while building your own.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Reach out to us:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Email: &lt;a href="mailto:octantastudio@gmail.com"&gt;octantastudio@gmail.com&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Discord: Join the Octanta Studio server &lt;a href="https://discord.gg/6SPxKpFZFC" rel="noopener noreferrer"&gt;https://discord.gg/6SPxKpFZFC&lt;/a&gt; and use the ‘to-developers’ channel. We will provide an answer or create a dedicated YouTube tutorial.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>unity3d</category>
      <category>vfx</category>
      <category>mobile</category>
      <category>shader</category>
    </item>
    <item>
      <title>Tutorial. The sworn enemy of players and developers</title>
      <dc:creator>Octanta Studio</dc:creator>
      <pubDate>Thu, 27 Feb 2025 14:55:44 +0000</pubDate>
      <link>https://dev.to/octanta_studio/tutorial-the-sworn-enemy-of-players-and-developers-93h</link>
      <guid>https://dev.to/octanta_studio/tutorial-the-sworn-enemy-of-players-and-developers-93h</guid>
      <description>&lt;p&gt;If you explore the internet, you'll discover an interesting fact: usually both players hate tutorials, and developers hate making them. Sources: [&lt;a href="https://www.reddit.com/r/gamedev/comments/1829yqj/anyone_feels_making_tutorial_for_your_game_is/" rel="noopener noreferrer"&gt;1&lt;/a&gt;], [&lt;a href="https://medium.com/gaminglinkmedia/tutorials-you-need-them-heres-why-3d5b62b181c5" rel="noopener noreferrer"&gt;2&lt;/a&gt;], [&lt;a href="https://stackoverflow.blog/2024/12/19/developers-hate-documentation-ai-generated-toil-work/?utm_source=chatgpt.com" rel="noopener noreferrer"&gt;3&lt;/a&gt;].&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw1pksilkcw139293j8q.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frw1pksilkcw139293j8q.jpg" alt="Tutorial slides design" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's a thankless but necessary thing for mutual understanding. The highest skill is a game that guides itself - "show, don't tell" - encourages experimentation, and doesn't need those unfortunate pointing fingers or popup windows. And when they do exist, it's better for them to be well thought out.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0b4y7h2phj4tvru2ns5.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fb0b4y7h2phj4tvru2ns5.jpg" alt="Game tutorial making hate" width="625" height="367"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not like when a hint freezes the game, forcing you to watch as text is leisurely typed one letter at a time. Not like when a hint appears and an accidental touch closes it, never to be seen again. And not like when an interaction hint only appears when approaching in a bent-over position from a certain side.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foabu29s62p3v61wdggrd.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foabu29s62p3v61wdggrd.jpg" alt="Worst tutorial you've ever experienced as a player" width="688" height="907"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In &lt;a href="https://u3d.as/3tsL" rel="noopener noreferrer"&gt;Automatic Tutorial Maker&lt;/a&gt; for Unity we've already thought through most of these nuances. For example, solving the problems above:&lt;/p&gt;

&lt;p&gt;⚙️ Choose zoom/fade/slide animation for hint appearance and a duration of 0.5 sec.&lt;br&gt;
⚙️ Set a minimum time for the hint to exist - 1.5 sec. Or add a mandatory confirmation button.&lt;br&gt;
⚙️ Activate the hint asynchronously when approaching the target through Vector3.Distance.&lt;/p&gt;
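&lt;p&gt;The third setting boils down to a simple proximity check. Here is a hedged Python equivalent of the Vector3.Distance comparison (the function name and default radius are illustrative, not ATM's API):&lt;/p&gt;

```python
import math

def within_activation_radius(player, target, radius=3.0):
    # equivalent to Unity's Vector3.Distance(player, target) being under radius;
    # player and target are (x, y, z) positions
    return radius > math.dist(player, target)
```

&lt;p&gt;The hint is activated asynchronously the first time this check passes.&lt;/p&gt;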

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8pdjd5w1ljoii6xfsn3.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fm8pdjd5w1ljoii6xfsn3.jpg" alt="Hating in-game tutorials" width="800" height="282"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;And here are two paragraphs from the book "The Art of Game Design: A Book of Lenses". Both explore how developers can better explain things to players.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The ability to think like a player. "Where does this location intuitively lead me, and where might I get stuck?"&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Playtests and adaptation. If explanations become necessary, write accompanying phrases in a notebook. This is the potential tutorial that needs to be beautifully integrated into the game. Either in the words of characters or in the geometry of the scene.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Actually, at the core of ATM are these two things: the developer sits down and walks the player's path while demonstrating it. This generates the basic hints, and it also lets you review the game with fresh eyes one more time - testing and improving the tutorial logic while staying calm about the technical part.&lt;/p&gt;

&lt;p&gt;🎮 This is reminiscent of King of Thieves: a mobile game where before saving traps in your dungeon, you need to complete them yourself.&lt;/p&gt;

&lt;p&gt;👀 And it also somewhat resembles hunting. To capture the player, you need to think like a player, and also spy on them from your developer bushes, again and again.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk7udk62voo6d6za0klf.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqk7udk62voo6d6za0klf.jpg" alt="Making game tutorial is harder than making the gameplay" width="640" height="649"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymbjqgoda0tkeuagn0oj.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fymbjqgoda0tkeuagn0oj.jpg" alt="Hate making game tutorial Unity" width="640" height="311"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://u3d.as/3tsL" rel="noopener noreferrer"&gt;Automatic Tutorial Maker (ATM)&lt;/a&gt; is a tool that not only automates routine tasks but also rethinks the approach to creating tutorial systems. By taking care of the technical implementation (animations, timings, trigger conditions), it frees the developer from "coding for the sake of coding," allowing them to focus on what truly matters — &lt;strong&gt;logic that doesn’t frustrate the player&lt;/strong&gt;. No more spending hours debugging pop-ups or calculating trigger distances for hints. Instead, you can immerse yourself in the player’s role: analyze where they might get stuck, how to guide them without words, and what level metaphors can make the learning process feel natural. ATM transforms tutorial creation into a dialogue with the player: you design the experience, the tool brings it to life, and the resulting mechanics work like an invisible guide — unobtrusive yet reliable. This isn’t just about saving time; it’s an opportunity to turn a mandatory step into a chance to surprise: when the tutorial becomes part of the game rather than an interruption, players don’t rush to skip hints, and developers don’t dread creating them.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>unity3d</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Automatic Tutorial Maker for Unity: How to Speed Up In-Game Guide Creation</title>
      <dc:creator>Octanta Studio</dc:creator>
      <pubDate>Mon, 17 Feb 2025 16:41:06 +0000</pubDate>
      <link>https://dev.to/octanta_studio/automatic-tutorial-maker-for-unity-how-to-speed-up-in-game-guide-creation-5g9d</link>
      <guid>https://dev.to/octanta_studio/automatic-tutorial-maker-for-unity-how-to-speed-up-in-game-guide-creation-5g9d</guid>
      <description>&lt;p&gt;Hello! Our team at Octanta Studio introduces Automatic Tutorial Maker (ATM), a Unity tool designed to simplify in-game tutorial creation by automating the process. Just demonstrate actions once, and ATM generates a customizable, step-by-step guide for players. Good for freelancers, outsourced projects, and developers who frequently build casual or unconventional games.&lt;/p&gt;

&lt;p&gt;ATM is available on the Unity Asset Store:&lt;br&gt;
☝️ &lt;a href="https://u3d.as/3tsL" rel="noopener noreferrer"&gt;https://u3d.as/3tsL&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/ChIdHeiNlUQ"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;p&gt;This asset is optimized for 2D, 3D, UI, and supports desktop (mouse/keyboard) and mobile (touch input). Whether you’re teaching players how to open an inventory, swipe, or perform drag-and-drop, ATM handles it all.&lt;/p&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;ATM records your actions in real-time during gameplay and converts them into tutorial steps. Each step includes visual hints (pointers, graphics, animations) and tracks player progress. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Pressing the "I" key to open the inventory → Generates a step with a text graphic element on Canvas: "Press I."&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Dragging a pear into a basket → Creates a step with a drag pointer, a target highlight, and a text tip.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgloemh3kifhwmhtjx4ei.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgloemh3kifhwmhtjx4ei.png" alt="Unity tutorial automation" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5f10lv47b82qxgdyaao.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft5f10lv47b82qxgdyaao.png" alt="In-game tutorial engine" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Quick Setup
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzx4lt4cixkatirvttxu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkzx4lt4cixkatirvttxu.png" alt="step by step tutorial" width="800" height="365"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import the ATM package into your Unity project.&lt;/li&gt;
&lt;li&gt;Add the TutorialSystem prefab to your scene and unpack it.&lt;/li&gt;
&lt;li&gt;Assign your scene’s Main Camera and UI Canvas to TutorialSceneReferences.&lt;/li&gt;
&lt;li&gt;Enter Play Mode and press "Start Recording" in the ATM component. Maximize the Game window (recommended) and perform the actions you want to teach.&lt;/li&gt;
&lt;li&gt;Press "Stop Recording" to save the tutorial.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Done! The system auto-generates visual hints and tracks player input. If you enter Play Mode again, you will see the generated tutorial as a player would.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvi6gb3ckbp5p8rp5dyr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffvi6gb3ckbp5p8rp5dyr.png" alt="Designing good game tutorial" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;strong&gt;Feature&lt;/strong&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Action Recognition&lt;/strong&gt;: Supports clicks, holds, swipes, drag-and-drop, key presses (WASD, etc.), and more. Detects UI, 2D, and 3D targets automatically.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Customizable Visuals&lt;/strong&gt;: UI/World Pointers: Arrows, hands, geotags. UI/World Graphics: Popups, sidebars, swipe animations, text. Animations: Fade, slide, pulse, or create your own.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Progress Saving&lt;/strong&gt;: Player progress is saved via JSON between sessions. To clear memory for testing, there is a special "Reset Tutor" button in SSP component in the inspector.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Advanced Step Control&lt;/strong&gt;: Run steps in parallel. Trigger steps manually via script. Localize hint text dynamically.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;
&lt;strong&gt;Optimized Performance&lt;/strong&gt;: Minimal CPU/GPU impact. Animations use lightweight math operations. Caching targets for pointer hints and clearing memory after steps are executed.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Supported Visuals: UI &amp;amp; World Hints
&lt;/h2&gt;

&lt;p&gt;ATM offers a wide range of visual hints to guide players. These visuals are divided into UI and World, each with customizable animations and behaviors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UI Visuals&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;UI Pointers: Arrows, hands, or mouse icons on the Canvas that point to target elements (2D/3D/UI) and follow them dynamically. Example: a hand pointer over a UI button with the text "Click here to open the menu".&lt;/li&gt;
&lt;li&gt;UI Graphics: Static or animated elements on the Canvas like popups, sidebars, or swipe animations. Unlike pointers, these elements are not tied to a specific target. Example: a popup with the text "Click anywhere to continue."&lt;/li&gt;
&lt;li&gt;UI Hovers: Additional pointers that highlight the final target (e.g., an inventory slot).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gk3yfa4xyoywqg99mkk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5gk3yfa4xyoywqg99mkk.png" alt="Game learning interface" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyegubrpaj3vyxeocf1oh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fyegubrpaj3vyxeocf1oh.png" alt="Unity tutorial system" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;World Visuals&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;World Pointers: Arrows, geotags, or frames in world coordinates that point to target elements (2D/3D/UI) and follow them dynamically. Example: a 3D arrow pointing to a door: "Go here to exit."&lt;/li&gt;
&lt;li&gt;World Graphics: Text or particle effects placed in world coordinates. Example: a highlighted static area the player must enter, with floating text facing the camera: "Enter the void."&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqa3wlkpoz70mfcwifty.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feqa3wlkpoz70mfcwifty.png" alt="Unity tutorial maker" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;All visuals support custom animations (fade, slide, pulse) and can be easily replaced with your own prefabs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Code Snippets for Advanced Use
&lt;/h2&gt;

&lt;p&gt;For example, trigger steps via script:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[SerializeField] private TutorialSceneReferences sceneReferences;  
void StartTutorial() {  
    sceneReferences.StartTutorialStep(0); // Start step 0  
}  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;or run a step asynchronously without disabling the others:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[SerializeField] private TutorialSceneReferences sceneReferences;  
void StartTutorial() {  
    sceneReferences.AsyncStartTutorialStep(0); // Async start step 0  
}  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Translate hints at runtime:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceneReferences.ChangeStepVisualText(0, "Presiona I", TextToChange.PointerText);  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Force-complete a step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;sceneReferences.ForceCompleteStep(2); // Skip step 2  
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step Customization: Tailor Each Tutorial Step
&lt;/h2&gt;

&lt;p&gt;In addition to recording, a Unity developer can manually use the flexible step system to customize the tutorial.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxlbzzhtcp97b9g0s7dq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foxlbzzhtcp97b9g0s7dq.png" alt="Unity step tutorial system" width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Customizing a Click/Touch Step&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Set Interaction to Click.&lt;/li&gt;
&lt;li&gt;Set Check Interaction to By GameObject.&lt;/li&gt;
&lt;li&gt;Add the target object to the GameObjects list and set its type (UI/2D/3D) in the ObjectTypes list.&lt;/li&gt;
&lt;li&gt;Assign a UI Pointer to highlight the target object.&lt;/li&gt;
&lt;li&gt;Add text: "Click the object X."&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;For dynamic or procedurally generated content, use Check Interaction: ByTag or ByLayer, or set targets manually via the StartTutorialStepWithTargets method.&lt;/p&gt;
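&lt;p&gt;A hypothetical sketch of feeding a runtime-spawned target into a step. The exact StartTutorialStepWithTargets signature is not shown in this article, so treat the parameter list here as an assumption:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[SerializeField] private TutorialSceneReferences sceneReferences;

void OnEnemySpawned(GameObject enemy) {
    // Assumed signature: step index plus the spawned target object(s).
    sceneReferences.StartTutorialStepWithTargets(3, new GameObject[] { enemy });
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;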

&lt;h2&gt;
  
  
  Comprehensive Documentation with AI Support
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhheuwovk4bc08y1et25r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhheuwovk4bc08y1et25r.png" alt="Unity tutorial asset" width="800" height="443"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Automate repetitive tasks by recording a tutorial in minutes instead of hours. Quickly resolve misunderstandings with prefabs and a gallery of templates, such as “Tap to continue” popups and keyboard hints. For full asset support, use the PDF documentation and the AI Helper file to save time on research.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2962mwym42lofjtzklbr.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2962mwym42lofjtzklbr.png" alt="Unity tutorial asset" width="800" height="474"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Freelancers and studios can quickly onboard clients with clear tutorials. Casual developers can reuse tutorial logic across similar projects, while innovative developers can explain unconventional mechanics without frustration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Documentation
&lt;/h2&gt;

&lt;p&gt;Full Docs: &lt;a href="https://octanta-studio.gitbook.io/automatic-tutorial-maker-for-unity-in-game-tips/" rel="noopener noreferrer"&gt;GitBook&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62tbtmxtevwjgz0x8hw7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F62tbtmxtevwjgz0x8hw7.png" alt="Unity tutorial asset" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>unity3d</category>
      <category>gamedev</category>
      <category>freelance</category>
      <category>csharp</category>
    </item>
    <item>
      <title>In-Game Photo Camera in Unity: Tutorial</title>
      <dc:creator>Octanta Studio</dc:creator>
      <pubDate>Mon, 23 Dec 2024 06:58:35 +0000</pubDate>
      <link>https://dev.to/octanta_studio/in-game-photo-camera-in-unity-tutorial-26gk</link>
      <guid>https://dev.to/octanta_studio/in-game-photo-camera-in-unity-tutorial-26gk</guid>
      <description>&lt;p&gt;Hello! Our team at Octanta Studio recently launched the Dynamic Photo Camera asset, enabling in-game photography mechanics with rich interactivity and integration options. This article will guide you through how it works and what can be achieved using this asset.&lt;/p&gt;

&lt;p&gt;Dynamic Photo Camera is available on the Unity Asset Store: &lt;a href="https://u3d.as/3qTN" rel="noopener noreferrer"&gt;https://u3d.as/3qTN&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The asset is optimized for PC, mobile, 2D, and 3D projects. It simplifies the development of quests, interactive educational projects, and mechanics involving user-generated content. As a developer, you save time, while the user gains greater freedom of action.&lt;/p&gt;

&lt;p&gt;🔰 Beginner friendly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyspi6zazhx17qdv7ls9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkyspi6zazhx17qdv7ls9.jpg" alt="How we imagine the player using this Camera asset" width="800" height="566"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Basic Photography
&lt;/h2&gt;

&lt;p&gt;This feature allows the player to take screenshots of any part of the screen during gameplay and save them to the game’s memory for later use. This way, the player can leave hints for themselves instead of taking notes in a real notebook or in a phone’s notes or camera app.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Hold a click or touch anywhere for 1 second to activate photo capture. The captured photo is saved to the in-game collection. This works by capturing a screenshot from the specified camera view, cropping it around the click point, and creating a texture from it for the photo Image. Textures and photo data are saved to Application.persistentDataPath on disk.&lt;/p&gt;
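&lt;p&gt;The crop step can be sketched roughly like this (a simplified illustration with hypothetical names, not the asset's actual MakeScreenshot code in PhotoController.cs):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Texture2D CropAroundPoint(Texture2D source, Vector2 clickPos, int cropSize) {
    // Clamp so the crop rectangle stays inside the screenshot bounds.
    int x = Mathf.Clamp((int)clickPos.x - cropSize / 2, 0, source.width - cropSize);
    int y = Mathf.Clamp((int)clickPos.y - cropSize / 2, 0, source.height - cropSize);

    // Copy the pixel block around the click point into a new texture
    // that the photo Image can display.
    Color[] pixels = source.GetPixels(x, y, cropSize, cropSize);
    Texture2D photo = new Texture2D(cropSize, cropSize);
    photo.SetPixels(pixels);
    photo.Apply();
    return photo;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;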

&lt;p&gt;&lt;strong&gt;How to set it up (~1 minute)&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Import the Dynamic Photo Camera package to your Unity project. Import the TextMeshPro package if required.&lt;/li&gt;
&lt;li&gt;Add the PhotoController Prefab to your game scene.&lt;/li&gt;
&lt;li&gt;In the InputController inspector, assign your active camera to the Current Camera field and disable the prefab's camera.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funub7xcyosbeszbp8iss.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Funub7xcyosbeszbp8iss.png" alt="Easy setup" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Recommended: unpack the prefab and move the UICanvas content to your existing Canvas (or vice versa) so UI elements don't overlap. The UI elements and photo prefab can be customized to match your desired visual style.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Done! Run your scene, capture a photo by holding the interaction for 1 second, and verify the photo appears in the collection.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the Photo Settings scriptable object to adjust both the size of the photo cards and the capture area. A larger crop size will allow more content to be captured within each frame. A smaller crop size will zoom and may produce low-quality pixelated images.&lt;/li&gt;
&lt;li&gt;Also experiment with the Camera settings, as it is the source of the image. For example, select Temporal Anti-aliasing for Rendering. You can also use a disabled camera that renders specific layers to capture the invisible. 💡 Thus, for example, the player can take pictures of people and identify vampires among them, as they are not rendered.&lt;/li&gt;
&lt;li&gt;To modify screenshot capture properties via script, use the MakeScreenshot method in PhotoController.cs. The key line is:
&lt;code&gt;targetTexture = RenderTexture.GetTemporary(photoSettings.resWidth, photoSettings.resHeight);&lt;/code&gt;
You can customize it according to your graphics requirements. For example:
&lt;code&gt;targetTexture = RenderTexture.GetTemporary(photoSettings.resWidth, photoSettings.resHeight, 24, RenderTextureFormat.ARGB32);&lt;/code&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;List of all possible formats: &lt;a href="https://docs.unity3d.com/6000.0/Documentation/ScriptReference/RenderTextureFormat.html" rel="noopener noreferrer"&gt;https://docs.unity3d.com/6000.0/Documentation/ScriptReference/RenderTextureFormat.html&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Data Recognition &amp;amp; Photo Description
&lt;/h2&gt;

&lt;p&gt;Every photo captured has a description that can display coordinates, the name of the recognized object, or custom metadata. This feature enriches gameplay by providing detailed information about photographed objects and enabling interaction with in-game content. It allows players to explore and learn about the game world, as well as complete “photo hunt” photography-based quests by capturing specific objects. 💡 For example, create an educational game for children where their task is to learn to identify birds by taking photographs of correct ones in trees.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6rv1lxfdj867yclo1c2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq6rv1lxfdj867yclo1c2.png" alt="In-game photography in Unity game example" width="743" height="512"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How it works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When taking a photo, the system uses a raycast combined with an overlap cube to determine the camera’s focal point. It identifies the intersection with the nearest collider and stores data about the object associated with that collider in the photo memory. By default, this includes the photographed object’s name and its world coordinates. Objects can also be configured to transmit additional custom data.&lt;/p&gt;
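&lt;p&gt;A rough sketch of that focal-point check, assuming a camera reference and a click position. The asset's actual RaycastCheck method in PhotoController.cs differs (it also uses an overlap cube for tolerance):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Cast a ray from the camera through the click point; the nearest
// collider hit becomes the photographed object.
Ray ray = currentCamera.ScreenPointToRay(clickPosition);
if (Physics.Raycast(ray, out RaycastHit hit)) {
    // Default description: the object's name plus its world coordinates.
    // An ObjectToPhoto component on the collider can supply custom data instead.
    string dataString = hit.collider.gameObject.name + " " + hit.point;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;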

&lt;p&gt;By hovering over or touching a photo, the player can view the stored data and gather information about the game world.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to set it up&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ensure all objects you want to recognize in photos have a Collider component and either a visible mesh or the ObjectToPhoto.cs script (for 2D sprites, for example).&lt;/li&gt;
&lt;li&gt;For a custom description, attach the ObjectToPhoto.cs script to the object in the scene. In the inspector, assign PhotoController to the Controller field and fill in the Custom Description field.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1tpzj4ouq47qngj4h7s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1tpzj4ouq47qngj4h7s.png" alt="Image description" width="800" height="545"&gt;&lt;/a&gt;&lt;br&gt;
Done! Run your scene, capture a photo of an object with a custom description, hover over it in the collection, and check the description.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use the Photo Settings scriptable object to adjust Sphere Radius. A smaller value results in a thinner ray, making object recognition more precise.&lt;/li&gt;
&lt;li&gt;Object Recognition can also be configured using a script. The description of the recognized object is passed to the RaycastCheck method in the PhotoController at the time of the photo being taken (dataString). Customize the behavior of the RaycastCheck method in the PhotoController.cs to better fit your specific requirements.&lt;/li&gt;
&lt;li&gt;Change the description's UI on the scene (the DynamicPhotoDescription object) or via script in the ConfigureDescriptionPanel method in PhotoUIManager.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Photo Validation
&lt;/h2&gt;

&lt;p&gt;This feature enables objects in the game world to recognize the description of a photo when the player drags and drops it onto the object. In this way, objects read the content of the photo and can react to it. 💡 For example, a terminal recognizes the correct password, a character recognizes a location based on coordinates, or an in-game directory provides information about a car by its number.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i2yfpz22bp2h3o6gkkj.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8i2yfpz22bp2h3o6gkkj.jpeg" alt="Image description" width="800" height="357"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;How it works&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When a player drags a photo, aims it at an object in the game world, and drops it, a check is triggered for a detector object. If one is found, the description of the photo is passed to it, and the detector object responds with whether it’s what it needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;How to set it up&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure the validator object has a Collider and the PhotoDetector.cs script.&lt;/li&gt;
&lt;li&gt;Fill in the Required Photo Data field. This field must match the Custom Description of the captured object (and therefore the photo description) for validation to succeed. For example, the validator object requires the car number AS 891 RX, and the full description of the photo of the blue car contains exactly “AS 891 RX”.&lt;/li&gt;
&lt;/ol&gt;
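&lt;p&gt;The validation reaction can be pictured with a sketch like this. ValidatePhotoData, OnActivate, and OnError are the method names PhotoDetector.cs exposes; the field name requiredPhotoData is an assumption standing in for the Required Photo Data inspector field:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;[SerializeField] private string requiredPhotoData; // "Required Photo Data" field

public void ValidatePhotoData(string photoDescription) {
    // Hypothetical sketch: the real PhotoDetector.cs also updates
    // its state via SetDetectorState.
    if (photoDescription.Contains(requiredPhotoData)) {
        OnActivate(); // success reaction, e.g. the terminal unlocks
    } else {
        OnError();    // failure reaction, e.g. an error sound
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;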

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsivjiuc9fbse4if3o4r.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdsivjiuc9fbse4if3o4r.jpeg" alt="Image description" width="420" height="840"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Done! Run a scene where there is a recognizable object with the required data and a validator object that needs this data. Take a photo of the object, drag it to the validator, and trigger a reaction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Additional&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Object validation and photo data transfer can be configured via script.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The detector object is searched for in the CheckHitOnDrag method in the PhotoPrefab.cs as follows:&lt;br&gt;
&lt;code&gt;if (photoData != null &amp;amp;&amp;amp; hit.collider.TryGetComponent(out PhotoDetector detector))&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;PhotoDetector.cs is responsible for the logic of recognizing the received data via its ValidatePhotoData, SetDetectorState, OnActivate, and OnError methods. Use them to customize your activation effects.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The three functions described above are the basic, essential features of the Dynamic Photo Camera asset:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Capture in-game photos&lt;/li&gt;
&lt;li&gt;Recognize game objects in photos&lt;/li&gt;
&lt;li&gt;Validate object data in photos&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Use this article to work better with the asset, and don’t forget that the documentation and our support on Discord are always available.&lt;/p&gt;

&lt;p&gt;Thank you.&lt;/p&gt;

</description>
      <category>unity3d</category>
      <category>gamedev</category>
      <category>tutorial</category>
      <category>csharp</category>
    </item>
  </channel>
</rss>
