
Alain Airom

How to simply set all the “sampling parameters” (or “generation parameters”) for applications using watsonx?


Introduction

A question that comes up very often from users of the watsonx.ai LLMs is: “How do we set the sampling parameters?” These parameters (temperature, top_k, top_p, repetition penalty, and so on) control how the model samples each new token and how much text it generates.

Actually, it is quite easy.

Sampling Parameters (or generation parameters)

  • Access your watsonx.ai instance.


  • Click on “Open Prompt Lab”. Once in the Prompt Lab, in either tab, click on the parameters icon (the icon on the far right, as shown).


You can also change the LLM that is selected (either the one used previously or the one set by default).

  • Once the parameters dialog box is open, set the values as needed (for instance, the “Sampling” decoding mode exposes temperature, top_k, top_p, and the random seed).


  • Once the parameters are set, choose “View code </>” from the same set of tool icons.


The interface then provides the code, with your parameters embedded, in three flavors: cURL, Node.js, and Python, as in the samples below.

cURL:

curl "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29" \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -H "Authorization: Bearer ${YOUR_ACCESS_TOKEN}" \
  -d '{
  "input": "<|start_of_role|>system<|end_of_role|>You are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>",
  "parameters": {
    "decoding_method": "sample",
    "max_new_tokens": 200,
    "min_new_tokens": 100,
    "random_seed": 42,
    "stop_sequences": [],
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 1,
    "repetition_penalty": 1
  },
  "model_id": "ibm/granite-3-8b-instruct",
  "project_id": "the one you get"
}'
Node.js:
export const generateText = async () => {
 const url = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29";
 const headers = {
  "Accept": "application/json",
  "Content-Type": "application/json",
  "Authorization": "Bearer YOUR_ACCESS_TOKEN"
 };
 const body = {
  input: "<|start_of_role|>system<|end_of_role|>You are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.<|end_of_text|>\n<|start_of_role|>assistant<|end_of_role|>",
  parameters: {
   decoding_method: "sample",
   max_new_tokens: 200,
   min_new_tokens: 100,
   random_seed: 42,
   stop_sequences: [],
   temperature: 0.7,
   top_k: 50,
   top_p: 1,
   repetition_penalty: 1
  },
  model_id: "ibm/granite-3-8b-instruct",
  project_id: "the-one-you-get"
 };

 const response = await fetch(url, {
  headers,
  method: "POST",
  body: JSON.stringify(body)
 });

 if (!response.ok) {
  throw new Error("Non-200 response");
 }

 return await response.json();
}
Python:
import requests

url = "https://us-south.ml.cloud.ibm.com/ml/v1/text/generation?version=2023-05-29"

body = {
 "input": """<|start_of_role|>system<|end_of_role|>You are Granite, an AI language model developed by IBM in 2024. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.<|end_of_text|>
<|start_of_role|>assistant<|end_of_role|>""",
 "parameters": {
  "decoding_method": "sample",
  "max_new_tokens": 200,
  "min_new_tokens": 100,
  "random_seed": 42,
  "temperature": 0.7,
  "top_k": 50,
  "top_p": 1,
  "repetition_penalty": 1
 },
 "model_id": "ibm/granite-3-8b-instruct",
 "project_id": "the-one-you-get"
}

headers = {
 "Accept": "application/json",
 "Content-Type": "application/json",
 "Authorization": "Bearer YOUR_ACCESS_TOKEN"
}

response = requests.post(
 url,
 headers=headers,
 json=body
)

if response.status_code != 200:
 raise Exception("Non-200 response: " + str(response.text))

data = response.json()
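All three samples return the same JSON payload. Continuing from the Python sample above, the generated text can typically be read like this (the "results" and "generated_text" field names follow the watsonx.ai text generation response shape; verify them against your API version):

# Extract the model output from the response payload parsed above.
generated_text = data["results"][0]["generated_text"]
print(generated_text)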

The only information the developer needs to adjust is the access token (plus the project ID placeholder, which comes from your watsonx.ai project).
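For reference, here is a minimal sketch of obtaining that bearer token from IBM Cloud IAM with an API key. The endpoint and grant type below are IBM Cloud's standard IAM flow; adjust if your account authenticates differently:

import requests

# Exchange an IBM Cloud API key for a short-lived IAM bearer token
# (standard IBM Cloud IAM endpoint; tokens typically expire after about an hour).
iam_response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "YOUR_IBM_CLOUD_API_KEY",  # placeholder, like YOUR_ACCESS_TOKEN above
    },
)
iam_response.raise_for_status()
access_token = iam_response.json()["access_token"]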

Et voilà 😉

Conclusion

The watsonx.ai platform makes it very easy for application developers to adjust the full set of LLM sampling parameters.
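If you prefer to keep these parameters in code rather than copy the generated snippets, the same dictionary can also be passed to the ibm-watsonx-ai Python SDK. Below is a minimal sketch assuming the ibm-watsonx-ai package (version 1.x) is installed; check the SDK documentation for your version, as the client API has evolved:

from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference

# The same sampling parameters as in the generated snippets above.
params = {
    "decoding_method": "sample",
    "max_new_tokens": 200,
    "min_new_tokens": 100,
    "random_seed": 42,
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 1,
    "repetition_penalty": 1,
}

model = ModelInference(
    model_id="ibm/granite-3-8b-instruct",
    params=params,
    credentials=Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="YOUR_IBM_CLOUD_API_KEY",  # placeholder
    ),
    project_id="the-one-you-get",  # placeholder, as in the samples above
)

print(model.generate_text(prompt="Hello!"))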
