<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Wong Voon Wong</title>
    <description>The latest articles on DEV Community by Wong Voon Wong (@wongvw1).</description>
    <link>https://dev.to/wongvw1</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F827514%2Fe68d5fe4-4136-4ebf-8c17-1307626b8b6c.png</url>
      <title>DEV Community: Wong Voon Wong</title>
      <link>https://dev.to/wongvw1</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/wongvw1"/>
    <language>en</language>
    <item>
      <title>NLP@AWS Newsletter 08/2022 (Summer Edition)</title>
      <dc:creator>Wong Voon Wong</dc:creator>
      <pubDate>Mon, 22 Aug 2022 13:40:00 +0000</pubDate>
      <link>https://dev.to/aws/nlpaws-newsletter-082022-summer-edition-1n73</link>
      <guid>https://dev.to/aws/nlpaws-newsletter-082022-summer-edition-1n73</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz9dzrqo6nb50uchugdc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdz9dzrqo6nb50uchugdc.png" alt=" " width="800" height="440"&gt;&lt;/a&gt;&lt;br&gt;
Hello world. This is the summer edition of the AWS Natural Language Processing (NLP) newsletter covering everything related to NLP at AWS. Feel free to leave comments &amp;amp; share it on your social networks.&lt;/p&gt;

&lt;h2&gt;
  
  
  NLP@AWS Customer Success Story
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/how-mantium-achieves-low-latency-gpt-j-inference-with-deepspeed-on-amazon-sagemaker/" rel="noopener noreferrer"&gt;How Mantium achieves low-latency GPT-J inference&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2re4cpf7p5xg4dll29tn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2re4cpf7p5xg4dll29tn.png" alt=" " width="800" height="296"&gt;&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
Mantium helps customers build AI applications that incorporate state-of-the-art language models in minutes and manage them at scale through its low-code cloud platform.  Mantium supports access to model APIs from AI providers as well as open-source models like GPT-J that are trained using the SageMaker distributed model parallelism library.  To ensure users get best-in-class performance from such open-source models, Mantium used DeepSpeed to optimize inference for GPT-J deployed on SageMaker inference endpoints.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/how-emagazines-utilizes-amazon-polly-to-voice-articles-for-school-aged-kids/" rel="noopener noreferrer"&gt;How eMagazines utilizes Amazon Polly to voice articles for school-aged kids&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
Research has shown that developing brains need to hear language even before children learn to talk, and that hearing language is a prerequisite for learning to read.  eMagazines used Amazon Polly to help TIME For Kids automate audio synthesis as content was added daily, without involving an audio artist.  With Amazon Polly, they were also able to support new features like text highlighting and scrolling as an article is read aloud, as well as collecting and analysing usage data in real time.&lt;/p&gt;

&lt;h2&gt;
  
  
  AI Language Services
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/break-through-language-barriers-with-amazon-transcribe-amazon-translate-and-amazon-polly/" rel="noopener noreferrer"&gt;Break through language barriers with AWS AI Services&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5xu5mk09mgsf34t8ro3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fy5xu5mk09mgsf34t8ro3.png" alt=" " width="800" height="390"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The ability to communicate directly in a multi-lingual context on demand, without the need for a human translator, can be applied in many areas, such as media, medicine, education, and hospitality.  This blog post shows how you can combine three fully managed AWS AI services (Amazon Transcribe, Amazon Translate, and Amazon Polly) to create a near-real-time speech-to-speech translator that quickly turns a source speaker’s live voice input into accurate spoken output in the target language.&lt;/p&gt;
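&lt;p&gt;The translate-then-speak half of that chain can be sketched roughly as below (the Transcribe streaming step is omitted for brevity). The service clients are injected so the flow can be exercised with stubs; with real boto3 clients the same calls apply, and the default Polly voice ID is an assumption.&lt;/p&gt;

```python
# Sketch of the Translate -> Polly half of the speech-to-speech chain.
# Clients are injected so the flow can be tested offline with stubs; in a
# real deployment pass boto3.client("translate") and boto3.client("polly").
def translate_and_speak(text, source_lang, target_lang,
                        translate_client, polly_client, voice_id="Lupe"):
    # Translate the transcribed text (Amazon Translate call shape).
    translated = translate_client.translate_text(
        Text=text,
        SourceLanguageCode=source_lang,
        TargetLanguageCode=target_lang,
    )["TranslatedText"]
    # Synthesize the translation to audio (Amazon Polly call shape).
    audio = polly_client.synthesize_speech(
        Text=translated, OutputFormat="mp3", VoiceId=voice_id,
    )["AudioStream"]
    return translated, audio


# Minimal stubs mimicking the boto3 response shapes, for offline testing.
class StubTranslate:
    def translate_text(self, Text, SourceLanguageCode, TargetLanguageCode):
        return {"TranslatedText": f"[{TargetLanguageCode}] {Text}"}


class StubPolly:
    def synthesize_speech(self, Text, OutputFormat, VoiceId):
        return {"AudioStream": b"mp3-bytes-for:" + Text.encode()}


translated, audio = translate_and_speak(
    "Hello, how can I help?", "en", "es", StubTranslate(), StubPolly())
```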

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/how-service-providers-can-use-natural-language-processing-to-gain-insights-from-customer-tickets-with-amazon-comprehend/" rel="noopener noreferrer"&gt;Using NLP to gain insights from customer tickets&lt;br&gt;
&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw9nne44p3knfvxn21jb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fiw9nne44p3knfvxn21jb.png" alt=" " width="800" height="297"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Amazon Comprehend is a natural language processing (NLP) service that uses machine learning (ML) to uncover valuable insights and connections in text.  This blog shares how Amazon Managed Services (AMS) used Amazon Comprehend custom classification to categorise inbound requests by resource and operation type according to the customer’s description of the issue.  This allowed AMS to build workflows that recommend automated solutions for the issues raised in tickets and to generate classification analysis reports using Amazon QuickSight.&lt;/p&gt;
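&lt;p&gt;A minimal sketch of consuming a custom-classifier result, assuming the documented ClassifyDocument response shape ({'Classes': [{'Name', 'Score'}]}); the class names and confidence threshold here are hypothetical.&lt;/p&gt;

```python
# Pick the most confident class from an Amazon Comprehend ClassifyDocument
# response; fall back to None (manual triage) below a confidence threshold.
def top_ticket_category(classify_response, min_score=0.5):
    classes = classify_response.get("Classes", [])
    if not classes:
        return None
    best = max(classes, key=lambda c: c["Score"])
    return best["Name"] if best["Score"] >= min_score else None


# Example in the shape returned by
# comprehend.classify_document(Text=..., EndpointArn=...);
# the class names here are hypothetical.
classify_response_example = {"Classes": [
    {"Name": "EC2-Restart", "Score": 0.91},
    {"Name": "S3-Permissions", "Score": 0.06},
]}
```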

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/live-call-analytics-and-agent-assist-for-your-contact-center-with-amazon-language-ai-services/" rel="noopener noreferrer"&gt;Enable your contact centre with live call analytics and agent assist using AI Services&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwtzti2o7y2g402tfscu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffwtzti2o7y2g402tfscu.png" alt=" " width="" height=""&gt;&lt;/a&gt;&lt;br&gt;
One way to raise the bar on caller experience in your contact centre is to give supervisors the ability to assess the quality of caller experiences through call analytics and respond decisively before the call ends.  Furthermore, assisting agents with proactive, contextual guidance can greatly enhance their ability to deliver a great caller experience.  This blog shows how to build a solution for live call analytics and real-time agent assist using AWS AI services like Amazon Comprehend, Amazon Lex, Amazon Kendra, and Amazon Transcribe.&lt;/p&gt;

&lt;h2&gt;
  
  
  NLP on SageMaker
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/text-classification-for-online-conversations-with-machine-learning-on-aws/" rel="noopener noreferrer"&gt;Text classification for online conversations with machine learning on AWS&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
The explosion of online conversation in modern digital life has led to widespread non-traditional usage of language.  One phenomenon is the use of constantly evolving, domain-specific vocabularies.  Another is the adoption, both intentional and accidental, of spellings that deviate from standard English.  Traditional NLP techniques do not perform well in analysing such online conversations.  The authors of this blog post from the Amazon ML Solutions Lab discuss two model approaches to predicting toxicity and subtype labels such as obscene, threat, insult, identity attack, and sexually explicit, and test them on the Jigsaw Unintended Bias in Toxicity Classification dataset.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/text-summarization-with-amazon-sagemaker-and-hugging-face/" rel="noopener noreferrer"&gt;Text summarization with Amazon SageMaker and Hugging Face&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0e14evxssuqwpwtrt6a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi0e14evxssuqwpwtrt6a.png" alt=" " width="800" height="289"&gt;&lt;/a&gt;&lt;br&gt;
The use of modern digital services and communication generates data that is growing at zettabyte scale.  Text summarization helps in understanding large amounts of text data because it creates a subset of contextually meaningful information from source documents.  Hugging Face and AWS have a partnership to seamlessly integrate the Hugging Face libraries into Amazon SageMaker, enabling developers and data scientists to get started with NLP on AWS more easily.  Hugging Face’s 400 pretrained text summarization models can be deployed easily using the Transformers summarization pipeline API.  This blog post is a good starting point to learn how to quickly experiment with, select, and deploy suitable Hugging Face text summarization models on Amazon SageMaker.&lt;/p&gt;
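&lt;p&gt;Summarization models have limited input windows, so long documents are usually chunked before being fed to the pipeline. A minimal word-based chunker is sketched below; the 400-word cap is an assumption, since real limits are token-based and model-specific.&lt;/p&gt;

```python
# Word-bounded chunker so long documents fit a summarization model's
# input window.
def chunk_for_summarizer(text, max_words=400):
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Each chunk can then be fed to a Hugging Face summarization pipeline
# (shown for reference only, since it downloads a model):
# from transformers import pipeline
# summarizer = pipeline("summarization")
# summaries = [s["summary_text"] for s in summarizer(chunk_for_summarizer(doc))]
```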

&lt;p&gt;&lt;strong&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/build-a-news-based-real-time-alert-system-with-twitter-amazon-sagemaker-and-hugging-face/" rel="noopener noreferrer"&gt;Build a news-based real-time alert system with Twitter, Amazon SageMaker, and Hugging Face&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmydj7u0imem9j7uzgg7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdmydj7u0imem9j7uzgg7.png" alt=" " width="587" height="543"&gt;&lt;/a&gt;&lt;br&gt;
In fields like insurance, law enforcement, first response, and government, the ability to process news and social media feeds in near real time allows teams to respond immediately as events unfold.  If you are thinking about such a use case, this blog post can guide you through building a real-time alert system on AWS that consumes news alerts from social media and classifies them using a pretrained model from the Hugging Face Hub deployed on Amazon SageMaker.&lt;/p&gt;

&lt;h2&gt;
  
  
  NLP@Community
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://bigscience.huggingface.co/blog/bloom" rel="noopener noreferrer"&gt;The World’s Largest Open Multilingual Language Model&lt;/a&gt;&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5qc13w42gk34z3f8ju8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fp5qc13w42gk34z3f8ju8.png" alt=" " width="764" height="326"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BLOOM is the first multilingual LLM (large language model) trained in complete transparency, with the goal of making LLMs accessible to academia, nonprofits, and smaller research labs.  With its 176 billion parameters, BLOOM can generate text in 46 natural languages and 13 programming languages; for almost all of these languages, it is the very first LLM with over 100B parameters.  The hope is that BLOOM will be the seed for a living family of models that the community will grow over time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;a href="https://www.prindleinstitute.org/2022/06/the-curious-case-of-lamda-the-ai-that-claimed-to-be-sentient/" rel="noopener noreferrer"&gt;The Curious Case of LaMDA, the AI that Claimed to Be Sentient&lt;/a&gt;&lt;/strong&gt; &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sjqlghyzuxrhdtdluf5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7sjqlghyzuxrhdtdluf5.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In this article, the author discusses the recent controversial claim that Google’s LaMDA may be sentient by examining the suppositions behind the claim.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>nlp</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>NLP@AWS Newsletter 04/2022</title>
      <dc:creator>Wong Voon Wong</dc:creator>
      <pubDate>Mon, 04 Apr 2022 16:06:27 +0000</pubDate>
      <link>https://dev.to/aws/nlpaws-newsletter-042022-178d</link>
      <guid>https://dev.to/aws/nlpaws-newsletter-042022-178d</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xz162psn7s26fz97rqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3xz162psn7s26fz97rqm.png" alt="NLP@AWS" width="800" height="454"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hello world. This is the monthly AWS Natural Language Processing (NLP) newsletter covering everything related to NLP at AWS. Feel free to leave comments &amp;amp; share it on your social networks.&lt;/p&gt;

&lt;h1&gt;
  
  
  NLP@AWS Customer Success Story
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=dAQhNjwkOX8" rel="noopener noreferrer"&gt;&lt;strong&gt;How to Build a Scalable Chatbot and Deploy to Europe’s Largest Airline&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytfk8y0b5nsc45p10xq6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fytfk8y0b5nsc45p10xq6.png" alt="Build a Scalable Chatbot" width="800" height="407"&gt;&lt;/a&gt;Ryanair had a customer care improvement initiative to enable customers to access support 24x7 and yet allow agents to spend more time with customers with challenging needs that require a human touch.  In this video, Cation Consulting CTO explains the architecture which uses Amazon Translate, Amazon Lex and Amazon SageMaker to help Ryanair implement a multi-lingual and multi-channel Chatbot to optimise customer responses while using Amazon Comprehend for sentiment analysis of customer interactions.&lt;/p&gt;

&lt;h1&gt;
  
  
  AI Language Services
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/extract-granular-sentiment-in-text-with-amazon-comprehend-targeted-sentiment/" rel="noopener noreferrer"&gt;&lt;strong&gt;Extract granular sentiment in text&lt;/strong&gt;&lt;/a&gt; &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj8rjdjf1079xdnxfjmv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmj8rjdjf1079xdnxfjmv.png" alt="Extract granular sentiment in text" width="468" height="208"&gt;&lt;/a&gt;When using NLP to perform sentiment analysis on an input document, typically the dominant sentiment is determined even though the document may contain nuanced sentiment referring to multiple entities .  This level of granularity can be exploited by businesses using Amazon Comprehend Targeted Sentiment feature to understand which specific attributes of their products or services were well received by customers and therefore should be retained or strengthen, and which attributes needs to be improved upon.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/automate-email-responses-using-amazon-comprehend-custom-classification-and-entity-detection/" rel="noopener noreferrer"&gt;&lt;strong&gt;Automate email responses using Amazon Comprehend custom classification and entity detection&lt;/strong&gt;&lt;/a&gt; &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flaos4c0t2asxpp24b0f5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flaos4c0t2asxpp24b0f5.png" alt="Automate email responses using Amazon Comprehend" width="468" height="246"&gt;&lt;/a&gt;Organizations and businesses that provide customer care spend a lot of resources and manpower to be responsive to customer’s needs and to ensure a good customer experience.  Using an automated approach to answer customer queries can often improve the customer care experience in addition to lowering costs. Many organisations have the requisite data assets but are often hindered by lack of AI/ML expertise. This blog post shows how you can use Amazon Comprehend to identify the intent of customer care email and automate the response using AWS services. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/build-a-traceable-custom-multi-format-document-parsing-pipeline-with-amazon-textract/" rel="noopener noreferrer"&gt;&lt;strong&gt;Build a traceable, custom, multi-format document parsing pipeline with Amazon Textract&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld5janslhcn3pxzin83c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fld5janslhcn3pxzin83c.png" alt="Document parsing pipeline with Amazon Textract" width="468" height="395"&gt;&lt;/a&gt;Many businesses today still rely on using forms as a necessary business tool due to compliance reasons or where digital data capture is not possible.&lt;br&gt;
Very often these businesses have to release new versions of their forms which can impact traditional OCR systems and impact downstream processing tools.&lt;br&gt;
Amazon Textract is a machine learning (ML) service that automatically extracts text or  handwriting data from forms in minutes.  You can build a serverless event-driven, multi-format document parsing pipeline using Amazon Textract.  This post demonstrates how to design a robust pipeline that can handle multiple versions of forms easily.&lt;/p&gt;
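&lt;p&gt;Textract's FORMS output pairs KEY and VALUE blocks through relationships. The sketch below stitches them into a simple dict; it is a simplified version that assumes WORD children only and ignores selection elements, and the sample blocks are synthetic.&lt;/p&gt;

```python
# Pair KEY and VALUE blocks from an Amazon Textract AnalyzeDocument
# (FeatureTypes=['FORMS']) response into {key text: value text}.
def extract_form_fields(blocks):
    by_id = {b["Id"]: b for b in blocks}

    def child_text(block):
        # Join the WORD children referenced by CHILD relationships.
        words = []
        for rel in block.get("Relationships", []):
            if rel["Type"] == "CHILD":
                words += [by_id[i]["Text"] for i in rel["Ids"]
                          if by_id[i]["BlockType"] == "WORD"]
        return " ".join(words)

    fields = {}
    for b in blocks:
        if b["BlockType"] == "KEY_VALUE_SET" and "KEY" in b.get("EntityTypes", []):
            for rel in b.get("Relationships", []):
                if rel["Type"] == "VALUE":
                    for vid in rel["Ids"]:
                        fields[child_text(b)] = child_text(by_id[vid])
    return fields


# Tiny synthetic block list in the documented shape.
form_blocks = [
    {"Id": "k1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["KEY"],
     "Relationships": [{"Type": "VALUE", "Ids": ["v1"]},
                       {"Type": "CHILD", "Ids": ["w1"]}]},
    {"Id": "v1", "BlockType": "KEY_VALUE_SET", "EntityTypes": ["VALUE"],
     "Relationships": [{"Type": "CHILD", "Ids": ["w2"]}]},
    {"Id": "w1", "BlockType": "WORD", "Text": "Name:"},
    {"Id": "w2", "BlockType": "WORD", "Text": "Alice"},
]
```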

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/enable-conversational-chatbots-for-telephony-using-amazon-lex-and-the-amazon-chime-sdk/" rel="noopener noreferrer"&gt;&lt;strong&gt;Enable conversational chatbots for telephony using Amazon Lex and the Amazon Chime SDK&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
Conversational AI can deliver powerful, automated, and interactive experiences through voice and text.  A common use is AI-powered self-service, such as conversational interactive voice response (IVR) systems that handle voice calls by automating informational responses. This blog post shows how you can easily build such an application in a serverless manner without needing an expensive IVR platform. It uses an Amazon Lex-driven chatbot with audio powered by Amazon Chime SDK Public Switched Telephone Network (PSTN) and natively integrates with Amazon Polly’s text-to-speech capabilities to convert text responses into speech.&lt;/p&gt;

&lt;h1&gt;
  
  
  NLP on Amazon SageMaker
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://github.com/aws/amazon-sagemaker-examples/blob/main/training/distributed_training/pytorch/model_parallel/gpt-j/train_gptj_smp_notebook.ipynb" rel="noopener noreferrer"&gt;&lt;strong&gt;Train EleutherAI GPT-J using SageMaker&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
EleutherAI released GPT-J 6B as an open-source alternative to OpenAI's GPT-3.  EleutherAI’s goal was to train a model equivalent in size to GPT-3 and make it available to the public under an open license; the model has since gained a lot of interest from researchers, data scientists, and even software developers.  This notebook shows you how to easily train and tune GPT-J using Amazon SageMaker Distributed Training and Hugging Face on NVIDIA GPU instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://towardsdatascience.com/how-to-build-your-own-gpt-j-playground-733f4f1246e5" rel="noopener noreferrer"&gt;&lt;strong&gt;Have fun with your own private GPT text generation playground&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgsfplcbpx8tl4wuc0qb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjgsfplcbpx8tl4wuc0qb.png" alt="Playground" width="694" height="551"&gt;&lt;/a&gt;Want to create your very own text generation playground without having to pay for GPT-3 usage or even having to train a GPT-J model?  This blog shows how you can easily deploy GPT-J on Amazon SageMaker and create a web interface to interact with the model.  Have fun!&lt;/p&gt;

&lt;p&gt;&lt;a href="https://huggingface.co/blog/bert-inferentia-sagemaker" rel="noopener noreferrer"&gt;&lt;strong&gt;Accelerate BERT inference&lt;/strong&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uziifzc457ayudl5x7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1uziifzc457ayudl5x7o.png" alt="Accelerate BERT inference" width="800" height="161"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;BERT and Transformer-based models tend to be relatively large, complex, and slow compared to traditional machine learning algorithms.  One way of accelerating these models is to deploy them on AWS Inferentia-based EC2 instances.  AWS Inferentia is a chip designed to deliver up to 2.3x higher throughput and up to 70% lower cost per inference than comparable current-generation GPU-based Amazon EC2 instances.  This post shows you how to use the AWS Neuron SDK to convert a BERT-based model to run on AWS Inferentia-powered EC2 Inf1 instances.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/blogs/machine-learning/part-1-set-up-a-text-summarization-project-with-hugging-face-transformers/" rel="noopener noreferrer"&gt;&lt;strong&gt;A practical guide to assessing text summarisation models&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63jeps6ib70iqkj6aylh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F63jeps6ib70iqkj6aylh.png" alt="assessing text summarisation" width="700" height="150"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Many organisations have huge amounts of text documents that often need to be summarised, and NLP-based text summarisation is one way of automating such tasks.  This two-part series proposes a practical approach to assessing summarisation models at scale.  The &lt;a href="https://aws.amazon.com/blogs/machine-learning/part-1-set-up-a-text-summarization-project-with-hugging-face-transformers/" rel="noopener noreferrer"&gt;first part&lt;/a&gt; introduces the dataset and metric and uses them to evaluate a simple heuristic approach.  The &lt;a href="https://aws.amazon.com/blogs/machine-learning/part-2-set-up-a-text-summarization-project-with-hugging-face-transformers/" rel="noopener noreferrer"&gt;second part&lt;/a&gt; uses a zero-shot learning model and discusses an approach to comparing the two models that can be applied to many other models during the experimentation phase.&lt;/p&gt;
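&lt;p&gt;Summaries in such evaluations are typically scored with an overlap metric like ROUGE. As a minimal illustration (not the series' exact setup, which would normally use a full ROUGE implementation), a simplified unigram ROUGE-1 F1 looks like this:&lt;/p&gt;

```python
from collections import Counter


# Simplified ROUGE-1 F1: unigram overlap between a candidate summary and a
# reference, without the stemming or normalisation full implementations apply.
def rouge1_f1(candidate, reference):
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```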

&lt;p&gt;&lt;a href="https://www.youtube.com/watch?v=80ix-IyNnQI" rel="noopener noreferrer"&gt;&lt;strong&gt;Introduction to Hugging Face on Amazon SageMaker&lt;/strong&gt;&lt;/a&gt; &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv4pfdde2yvboll4ny9r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsv4pfdde2yvboll4ny9r.png" alt="Introduction to Hugging Face on Amazon SageMaker" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Transformers are state-of-the-art NLP models that display near-human performance on NLP tasks like text generation, text classification, and question answering.  You can take pretrained transformer models and fine-tune them on your data to improve your NLP models.  This video introduces you to Hugging Face and shows how you can easily build, train, and deploy state-of-the-art NLP models using Amazon SageMaker.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/aws-samples/hugging-face-workshop" rel="noopener noreferrer"&gt;&lt;strong&gt;Hugging Face on Amazon SageMaker and AWS Workshop&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
Interested in using NLP to generate text in the style of your favourite poets?  This workshop shows you how to fine-tune text generation models to do exactly that using Hugging Face on Amazon SageMaker.&lt;/p&gt;

&lt;h1&gt;
  
  
  NLP@AWS Community Content
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://www.projectceti.org/" rel="noopener noreferrer"&gt;&lt;strong&gt;Decoding Sperm Whale language using NLP models&lt;/strong&gt;&lt;/a&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfd4crvdlibzhdx5tbvs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkfd4crvdlibzhdx5tbvs.png" alt="Decoding Sperm Whale language" width="800" height="371"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Project &lt;a href="https://www.projectceti.org/" rel="noopener noreferrer"&gt;CETI&lt;/a&gt; was selected for an &lt;a href="https://www.amazon.science/research-awards" rel="noopener noreferrer"&gt;Amazon Research Award&lt;/a&gt; and is an &lt;a href="https://aws.amazon.com/government-education/nonprofits/aws-imagine-grant-program/" rel="noopener noreferrer"&gt;Imagine Grant&lt;/a&gt; winner. AWS ML teams are working with the Open Data program team to help decode sperm whale language. By building NLP models to parse and interpret sperm whales’ voices and hopefully develop frameworks for communicating back, CETI aims to show that today’s most cutting-edge technologies can be used to not only drive business impact, but can enable a deeper understanding of other species on this planet. CETI’s work is taking place in the Caribbean, off the coast of Dominica, and remotely with AWS via Chime.&lt;/p&gt;

&lt;h1&gt;
  
  
  Upcoming Events
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://app.livestorm.co/hugging-face/accelerate-bert-inference-with-knowledge-distillation-and-aws-inferentia?type=detailed" rel="noopener noreferrer"&gt;&lt;strong&gt;Workshop:  Techniques to accelerate BERT Inference&lt;/strong&gt;&lt;/a&gt;&lt;br&gt;
Learn how to apply knowledge distillation to compress a large BERT model into a small one, and then into an optimised Neuron model with AWS Inferentia. By the end of this process, the model goes from 100ms+ to 5ms+ latency - a 20x improvement!  &lt;a href="https://app.livestorm.co/hugging-face/accelerate-bert-inference-with-knowledge-distillation-and-aws-inferentia?type=detailed" rel="noopener noreferrer"&gt;Register here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://aws.amazon.com/events/summits/?awsf.events-location=*all&amp;amp;awsf.events-series=*all" rel="noopener noreferrer"&gt;&lt;strong&gt;AWS Summits around the globe&lt;/strong&gt;&lt;/a&gt; &lt;br&gt;
AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn through technical breakout sessions, demonstrations, interactive workshops, labs, and team challenges. Summits are held in major cities around the world, both virtually and in person, starting April 2022.  &lt;a href="https://aws.amazon.com/events/summits/?awsf.events-location=*all&amp;amp;awsf.events-series=*all" rel="noopener noreferrer"&gt;Register&lt;/a&gt; for one near you!&lt;/p&gt;

&lt;h1&gt;
  
  
  Stay in touch with NLP on AWS
&lt;/h1&gt;

&lt;p&gt;Our contact: &lt;a href="mailto:aws-nlp@amazon.com"&gt;aws-nlp@amazon.com&lt;/a&gt;&lt;br&gt;
Email us to (1) tell us about your awesome NLP-on-AWS project, (2) let us know which post in the newsletter helped your NLP journey, or (3) suggest other things you would like us to cover in the newsletter. Talk to you soon.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>nlp</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
  </channel>
</rss>
