This uses Python version 3.8.5.
You need to install torch and transformers (the code below imports both):
pip install torch transformers
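If you want to confirm that both libraries are available before running the chatbot, a quick version check like the following can help (a minimal sanity check; any recent versions of both libraries should work):

import torch
import transformers

# Print the installed versions to confirm the environment is ready
print("torch:", torch.__version__)
print("transformers:", transformers.__version__)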
Next, find the model you want to use on Hugging Face. Here, that is the ssmadha/gpt2-finetuned-scientific-articles model.
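Before writing the full chat loop, a one-off generation with the pipeline API is a quick way to confirm the model downloads and runs (the prompt text here is just an example):

from transformers import pipeline

# Download the model on first use and run a single test generation
generator = pipeline("text-generation", model="ssmadha/gpt2-finetuned-scientific-articles")
print(generator("Deep learning is", max_length=30)[0]["generated_text"])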
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("ssmadha/gpt2-finetuned-scientific-articles")
model = AutoModelForCausalLM.from_pretrained("ssmadha/gpt2-finetuned-scientific-articles")
# Start the chat loop
while True:
    # Get user input from the terminal
    user_input = input("You: ")

    # Tokenize the input and generate a reply
    input_ids = tokenizer.encode(user_input, return_tensors='pt')
    output = model.generate(
        input_ids,
        max_length=50,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=False,  # greedy decoding; top_p/top_k only take effect when do_sample=True
    )

    # Decode only the newly generated tokens so the reply does not echo the prompt
    response_text = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    print("ChatBot:", response_text)