Kenichiro Nakamura

Azure ML Prompt flow: Use content safety before sending a request to LLM

Azure Machine Learning (Azure ML) offers an Azure Content Safety connection within its Prompt flow feature. This article walks through how user input can be checked with it before the input is sent to the LLM.

Prerequisites

Setting up Azure AI Content Safety and Establishing Connection

  1. Create an Azure AI Content Safety resource (the free tier is fine).

  2. Create a connection in Prompt flow using the endpoint and key obtained in the previous step. (A quick way to confirm the endpoint and key work is sketched right after this list.)
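Before wiring anything into the flow, it can help to confirm that the endpoint and key are valid. Below is a minimal sketch using the azure-ai-contentsafety Python package; the resource name and key are placeholders, and this standalone check is not part of the Prompt flow connection itself.

# pip install azure-ai-contentsafety
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Placeholders: use the endpoint and key from your Content Safety resource.
endpoint = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
key = "<your-key>"

client = ContentSafetyClient(endpoint, AzureKeyCredential(key))

# Analyze a harmless sentence; the response reports a severity per harm category.
result = client.analyze_text(AnalyzeTextOptions(text="Hello, how are you?"))
print(result)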

Constructing the Flow

The flow itself is simple and consists of the following steps:

  • Send the input to content safety
  • Parse the result
  • If the input is deemed safe, call the LLM
  • Consolidate the results and pass them to the output

[Flow diagram: the LLM node shows 'Bypassed' when the input is considered unsafe.]

Content Safety Node

I use the default sensitivity for all categories.

[Screenshot: Content Safety node settings]
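For reference, the node returns a small object, and only the suggested_action field is relied on later in this flow. The exact shape below, including action_by_category and the category names, is an assumption for illustration; check your own node's output.

# Illustrative output of the Content Safety node (shape is an assumption;
# only "suggested_action" is used downstream in this flow).
safety_result = {
    "suggested_action": "Accept",  # "Accept" or "Reject"
    "action_by_category": {
        "Hate": "Accept",
        "SelfHarm": "Accept",
        "Sexual": "Accept",
        "Violence": "Accept",
    },
}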

Result Extraction

Since the content safety node returns an object, I use a Python node to parse it and extract the suggested action.

[Screenshot: the extract node]

from promptflow import tool

@tool
def my_python_tool(safety_result: dict) -> str:
    # Pass through the suggested action ("Accept" or "Reject") from the
    # content safety node so downstream nodes can branch on it.
    return safety_result["suggested_action"]
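Since the tool is just a Python function, it can be sanity-checked locally with a hand-made input; the dict below is a minimal stand-in for the content safety node output.

# Minimal stand-in for the content safety node output.
sample = {"suggested_action": "Reject"}
print(my_python_tool(sample))  # -> Reject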

LLM Node

A standard LLM prompt is used in this node.

[Screenshot: the LLM node]

The node's activate config is used so that the LLM runs only when the extracted content safety result is "Accept"; otherwise the node is bypassed, as shown in the flow diagram above.

[Screenshot: activate config on the LLM node]

system:
You are an AI assistant reading the transcript of a conversation between an AI and a human. Given an input question and conversation history, infer user real intent.

The conversation history is provided just in case of a coreference (e.g. "What is this?" where "this" is defined in previous conversation).

{% for item in chat_history %}
user:
{{item.inputs.question}}
assistant:
{{item.outputs.answer}}
{% endfor %}

user:
{{question}}
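For the template above to render, chat_history is expected to be a list of turns whose keys match item.inputs.question and item.outputs.answer, roughly like the illustrative example below.

# Illustrative chat_history entry matching the Jinja template above.
chat_history = [
    {
        "inputs": {"question": "What is Azure Content Safety?"},
        "outputs": {"answer": "A service that detects harmful content in text and images."},
    },
]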

Final Output

The final output is produced by combining the results from content safety and the LLM.

  • If the input is unsafe, the LLM node is bypassed and produces no answer, so None is used as the default value for the LLM output.

[Screenshot: the final output node]

from promptflow import tool

@tool
def my_python_tool(safety_result: dict, llm_answer: str = None) -> str:
    # Return the LLM answer only when content safety accepted the input;
    # otherwise return the safety result so the caller can see why it was blocked.
    if safety_result["suggested_action"] == "Accept":
        return llm_answer
    else:
        return safety_result
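Again, both branches can be checked locally; the inputs below are illustrative stand-ins for the upstream node outputs.

# Accepted input: the LLM answer is passed through.
print(my_python_tool({"suggested_action": "Accept"}, llm_answer="The user wants to know about Azure."))

# Rejected input: the LLM node was bypassed, so llm_answer stays None
# and the safety result is returned instead.
print(my_python_tool({"suggested_action": "Reject"}))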

Result

I prefer not to share the unsafe test sentences here, but the behavior is straightforward: when content safety flags the input as inappropriate, the flow never sends it to the LLM and returns the safety result instead.

Conclusion

It's advisable to apply a content safety check to the LLM output as well, to prevent unwanted responses from the LLM. The content filter feature built into Azure OpenAI (AOAI) can also be used for this purpose.
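As a sketch only, an output-side check could mirror the input-side one: add a second Content Safety node that analyzes the LLM answer, then gate the final output on its suggested action. The node name answer_safety_result and the replacement message below are hypothetical, not part of the flow shown above.

from promptflow import tool

# Hypothetical final node: gate the LLM answer on a second content safety check.
@tool
def guard_output(answer_safety_result: dict, llm_answer: str = None) -> str:
    if answer_safety_result["suggested_action"] == "Accept":
        return llm_answer
    # Replace the answer when the output-side check rejects it.
    return "The response was blocked by the content safety check."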
