
Manish

JSON.parse() kept failing with Ollama responses on localhost - spent 3 hours debugging, here's what worked 💪

Integrating llama3.2 into my internal tools has been one of the saner parts of the AI work I've been doing for the past few months, but last week it went off the rails. The API was responding, requests were hitting the right endpoints, and when I checked my console, the JSON responses looked pristine 👌.

But every single time I tried to parse the response, it threw a weird parsing error that made no sense. The API was returning 200 status codes and the content-type header was set correctly, so what else could go wrong? In this post, I'll walk through how I finally tracked down the sneaky JSON parsing bug that was hiding right under my nose.

Summary

While integrating llama3.2 into one of my internal tools, I faced a bizarre JSON.parse() issue:


Expected ',' or '}' after property value in JSON at position 350 (line 11 column 23)


At first glance, I thought it would be a simple fix, but I was wrong. That mindset led me to spend endless hours debugging, starting with logging the raw output I was receiving from Ollama.
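Before touching the prompt, a small wrapper around JSON.parse() makes this kind of failure much easier to diagnose, because it dumps the raw string (hidden characters and all) the moment parsing fails. A minimal sketch; the `debugParse` name is my own:

```javascript
// Log the raw model output before parsing, so invisible problems
// (trailing prose, code fences, stray commas) become visible.
function debugParse(raw) {
  try {
    return JSON.parse(raw);
  } catch (err) {
    // JSON.stringify exposes hidden characters such as \n or backticks
    console.error("Failed to parse:", err.message);
    console.error("Raw output was:", JSON.stringify(raw));
    throw err;
  }
}
```

Seeing the stringified raw output side by side with the error position is usually enough to spot what the model actually sent.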

*(Screenshot: the error description)*

Solution

Glancing at the response (without reading it closely), I assumed a simple JSON.parse() call would do the job.

But then I noticed something else: two syntactically different outputs for two different inputs. But why?

*(Screenshot: first output)*

*(Screenshot: second output)*

That's when I started digging into the prompt I was giving llama3.2:


## Assessment Guidelines:

    1. **Be Conservative**: Err on the side of caution - if something could be interpreted as a violation, factor that into the risk score
    2. **Context Matters**: Consider the subreddit's culture and typical moderation practices
    3. **Cumulative Risk**: Multiple minor issues can compound into a higher risk
    4. **Moderator Discretion**: Remember that human moderators may interpret rules differently
    5. **Recent Trends**: Consider if the content type has been problematic recently

    ## Special Considerations:

    - **New Accounts**: Higher scrutiny for promotional content
    - **Community Guidelines**: Factor in Reddit's site-wide rules
    - **Subreddit Size**: Larger subreddits often have stricter enforcement
    - **Content Type**: Images, links, and text posts may have different risk profiles
    - **Timing**: Some content may be temporarily restricted during certain events

    **Subreddit Rules:**
    ${
      Array.isArray(subredditRules) && subredditRules.length > 0
        ? subredditRules
            .map((rule, index) => {
              // Handle different possible rule structures
              const ruleNumber = rule.number || rule.priority + 1 || index + 1;
              const ruleTitle =
                rule.title ||
                rule.short_name ||
                rule.violation_reason ||
                `Rule ${ruleNumber}`;
              const ruleDescription =
                rule.description ||
                rule.description_html?.replace(/<[^>]*>/g, "") ||
                "No description provided";

              return `${ruleNumber}. ${ruleTitle} - ${ruleDescription}`;
            })
            .join("\n")
        : "No specific subreddit rules provided"
    }

    **Post Title:** ${postTitle || "No title provided"}

    **Post Body:** ${postBody || "No body content provided"}

    Now analyze the provided content and respond with a JSON assessment.`;


Notice the last line, where I say: "Now analyze the provided content and respond with a JSON assessment."

It seemed okay the first time I wrote it. But once I zoomed in and tested it with different inputs, it hit me. 💥

Basically, instructing llama3.2 to "respond with a JSON assessment" only meant it would output something JSON-shaped, whether or not that JSON was actually valid.
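Even with a better prompt, it's worth guarding the parse, since a model can still wrap its JSON in markdown fences or surround it with prose. A defensive sketch under that assumption; `extractJson` is my own helper, not part of any Ollama SDK:

```javascript
// Strip markdown fences and pull out the first {...} block
// before handing the text to JSON.parse.
function extractJson(text) {
  const cleaned = text.replace(/```(?:json)?/g, "").trim();
  const start = cleaned.indexOf("{");
  const end = cleaned.lastIndexOf("}");
  if (start === -1 || end === -1) {
    throw new Error("No JSON object found in model output");
  }
  return JSON.parse(cleaned.slice(start, end + 1));
}
```

This doesn't fix malformed JSON, but it does survive the common "Sure! Here is your JSON:" preamble that otherwise breaks JSON.parse() outright.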

So, tweaking the prompt, rather than abandoning JSON.parse(), solved the problem. Here is the final version of the prompt I used:


    ## Assessment Guidelines:

    1. **Be Conservative**: Err on the side of caution - if something could be interpreted as a violation, factor that into the risk score
    2. **Context Matters**: Consider the subreddit's culture and typical moderation practices
    3. **Cumulative Risk**: Multiple minor issues can compound into a higher risk
    4. **Moderator Discretion**: Remember that human moderators may interpret rules differently
    5. **Recent Trends**: Consider if the content type has been problematic recently

    ## Special Considerations:

    - **New Accounts**: Higher scrutiny for promotional content
    - **Community Guidelines**: Factor in Reddit's site-wide rules
    - **Subreddit Size**: Larger subreddits often have stricter enforcement
    - **Content Type**: Images, links, and text posts may have different risk profiles
    - **Timing**: Some content may be temporarily restricted during certain events

    **Subreddit Rules:**
    ${
      Array.isArray(subredditRules) && subredditRules.length > 0
        ? subredditRules
            .map((rule, index) => {
              // Handle different possible rule structures
              const ruleNumber = rule.number || rule.priority + 1 || index + 1;
              const ruleTitle =
                rule.title ||
                rule.short_name ||
                rule.violation_reason ||
                `Rule ${ruleNumber}`;
              const ruleDescription =
                rule.description ||
                rule.description_html?.replace(/<[^>]*>/g, "") ||
                "No description provided";

              return `${ruleNumber}. ${ruleTitle} - ${ruleDescription}`;
            })
            .join("\n")
        : "No specific subreddit rules provided"
    }

    **Post Title:** ${postTitle || "No title provided"}

    **Post Body:** ${postBody || "No body content provided"}

    Now analyze the provided content and respond with ONLY a valid JSON assessment. Fix any JSON formatting issues if there are any`;

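Prompt wording aside, Ollama's generate endpoint also accepts a `format: "json"` field that constrains the model to emit valid JSON, which pairs nicely with the prompt fix as a second line of defense. A sketch assuming the article's localhost setup; the helper names are mine:

```javascript
// Build the request body for Ollama's /api/generate endpoint.
function buildOllamaRequest(prompt) {
  return {
    model: "llama3.2",
    prompt,
    format: "json", // ask Ollama to emit only valid JSON
    stream: false,  // return one complete response, not chunks
  };
}

// Send the prompt to a local Ollama instance and parse the assessment.
async function assessPost(prompt) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaRequest(prompt)),
  });
  const data = await res.json();
  return JSON.parse(data.response); // the model's JSON assessment
}
```

Belt and braces: the prompt tells the model what you want, and the API option enforces it at decode time.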

Key takeaway

Always double-check your prompt before building on top of it, because a sloppy prompt can lead to hours of debugging problems that never should have existed in the first place.
