
Matt Martz

Originally published at Medium

Are You a Jerk?

Toxicity Analysis using a Chrome Extension built with TensorflowJS and Angular Elements

I went to ng-conf (the BIG Angular conference) this week and got inspired by the TensorflowJS workshop done by Asim Hussain and all of the Angular Elements presentations (mostly by Manfred Steyer). So I decided to combine the two and create a POC Google Chrome extension that uses Angular Elements and TensorflowJS to determine if what you write makes you sound like a jerk.

The Chrome Extension in Action

This isn’t meant to be an in-depth post, but I’ll go over some basics on:

  • Using a pre-trained model in TensorflowJS
  • Creating Angular Elements
  • Creating a Google Chrome extension

Code for this story can be found here:


The Chrome extension scans the sites you browse for textarea inputs (for large blocks of text) and wraps them with a custom Angular Element. When you enter text into the textarea and click the injected button, it runs the text through a pre-trained toxicity model that uses Natural Language Processing and Deep Learning to classify the text along seven dimensions: identity attack, insult, obscenity, severe toxicity, sexual explicitness, threat, and overall toxicity. This is all done in the browser; nothing is sent to a backend for analysis.

Alternatively, a user can highlight text on a page (in an input or not) and click the "toxicity" / hazmat icon of the Chrome extension to get the analysis in a pop-up.

Note: Right now the injection mode of the extension only looks for textarea inputs… but not all sites use those. For example, Gmail uses a custom div tag with the role of textarea. Medium uses something else entirely. See the "Great… now what?" section at the bottom for possible improvements…

Using a Pre-Trained Model in TensorflowJS


Tensorflow is a popular, Google-backed Machine Learning framework. TensorflowJS is a re-write of Tensorflow in JavaScript (hence the JS). This effectively allows browsers to run Machine Learning models without any backend interaction, which is great for privacy concerns, and there are many interesting applications of it. It's not ideal for training new models due to resource constraints, but in this case we don't need to train anything: there's already a model that can be used to detect the toxicity of text. That means all I have to do is import it and use it to classify the text I pass in.

The code is pretty simple…

Those code snippets were copied from the service file created as part of the Angular library (described below)
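In rough strokes, the flow looks like this — a sketch assuming the @tensorflow-models/toxicity package (the `load()`/`classify()` calls and the result shape follow that package's docs; the function names here are mine, not the service's):

```javascript
// Sketch of the classify flow using @tensorflow-models/toxicity.
// load(threshold) fetches the pre-trained model; classify() returns one
// entry per label: [{ label, results: [{ probabilities, match }] }].
async function loadAndClassify(text, threshold = 0.9) {
  const toxicity = await import('@tensorflow-models/toxicity');
  const model = await toxicity.load(threshold);
  return model.classify([text]);
}

// Pure helper: reduce classify() output to just the labels that matched.
function matchedLabels(predictions) {
  return predictions
    .filter(p => p.results.some(r => r.match === true))
    .map(p => p.label);
}
```

The threshold (0.9 here) is the minimum confidence before `match` flips to true; below the complementary bound it is false, and in between it stays null.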

How does the model work?

The model uses Supervised Learning to classify text as toxic or not. Supervised Learning trains a model on a set of pre-labeled data. The trained model can then classify content it hasn't seen before using the weights it learned.

This particular model uses a Universal Sentence Encoder, which converts sentences into numeric embeddings using a deep neural network. More information on using the toxicity classifier in TensorflowJS can be found in TensorFlow's announcement post:

Creating Angular Elements

What are Angular Elements?

Angular elements are Angular components packaged as custom elements, a web standard for defining new HTML elements in a framework-agnostic way.


That means that you can export a combination of Angular components and inject them into a non-Angular page. This is fairly key to how the extension works.

Manfred Steyer has some great articles on how to build Angular Elements here.

There are three Angular projects in the code repo: a library with the main code; a dummy application that helps bundle the code into something the Chrome extension can use; and a "docs" project that I used for debugging and may eventually turn into proper documentation.

Library: toxicity-library

The library contains the main code: it creates a shadow DOM that wraps the textarea input and adds a button to check the toxicity using the TensorflowJS classifier. The toxicity-lib.service.ts file has the TensorflowJS code discussed above.

The components are fairly basic… There's a button component that indicates the status of the analysis, a component for the results, a component for the popup, and a container with a slot that transcludes the textarea input into it and registers a handler on the textarea's oninput event…

That’s it.

The popup component works similarly… it uses an @Input decorator that the Chrome extension passes selected text to.

I couldn't get zone.js to work in the Chrome extension… so I had to disable it and trigger change detection manually with ChangeDetectorRef. To make that workable, I created several BehaviorSubjects in the toxicity-lib.service.ts file; the components subscribe to them and use the emissions to trigger change detection.
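The pattern is easiest to see with a minimal BehaviorSubject stand-in rather than the real RxJS class (the mechanics below match RxJS behavior: replay the latest value to new subscribers, push every update; the `results$` name is my own illustration):

```javascript
// Minimal stand-in for RxJS's BehaviorSubject: it holds the latest value,
// replays it to each new subscriber, then pushes every subsequent update.
class MiniBehaviorSubject {
  constructor(initial) {
    this.value = initial;
    this.observers = new Set();
  }
  subscribe(fn) {
    fn(this.value); // replay the current value immediately
    this.observers.add(fn);
    return { unsubscribe: () => this.observers.delete(fn) };
  }
  next(value) {
    this.value = value;
    for (const fn of this.observers) fn(value);
  }
}

// The service exposes subjects like this; with zone.js disabled, each
// component's subscribe callback updates state and then calls
// ChangeDetectorRef.detectChanges() itself.
const results$ = new MiniBehaviorSubject(null);
```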

Application: toxicity-output

The output application imports the library components and actually creates the elements for use by the Chrome extension. ngx-build-plus does the actual bundling.

In the main.ts file I needed to disable zone.js… which was done like this:
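The gist of it is Angular's documented `ngZone: 'noop'` bootstrap option (a sketch — the surrounding file contents are my reconstruction, and you'd also drop the zone.js import from polyfills.ts):

```javascript
// main.ts, roughly: bootstrap with ngZone: 'noop' so Angular never needs
// zone.js. Wrapped in a function here because it only runs in a browser;
// AppModule is the output app's root module.
function bootstrap() {
  const { platformBrowserDynamic } = require('@angular/platform-browser-dynamic');
  const { AppModule } = require('./app/app.module');
  platformBrowserDynamic()
    .bootstrapModule(AppModule, { ngZone: 'noop' })
    .catch(err => console.error(err));
}
```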

You'll see there are no other components in the toxicity-output app folder. It simply imports the library's container and popup components and registers (defines) them as custom elements.
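Sketched, that registration looks something like this (`createCustomElement` is the real @angular/elements API; the component and tag names are my guesses at the repo's):

```javascript
// Roughly what the output app does on bootstrap (names assumed).
// createCustomElement wraps an Angular component as a browser custom
// element class, which customElements.define then registers by tag name.
function defineElements(injector) {
  const { createCustomElement } = require('@angular/elements');
  const { ToxicityContainerComponent, ToxicityPopupComponent } =
    require('toxicity-library');
  customElements.define(
    'toxicity-container',
    createCustomElement(ToxicityContainerComponent, { injector })
  );
  customElements.define(
    'toxicity-popup',
    createCustomElement(ToxicityPopupComponent, { injector })
  );
}
```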

Application: Docs

I may turn this app into a github page with documentation on the code… but for now it was just used for debugging and simply has a textarea element and a header.

The Docs Application Running without the Chrome Extension

Creating a Google Chrome extension

Getting Started Tutorial

Google Chrome extensions are fairly straightforward. They're effectively a manifest.json file that lists which JavaScript files should be included, plus assets (HTML, etc.) as needed.
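A minimal manifest along those lines might look like this (file names and icon paths here are illustrative, not the repo's exact manifest):

```json
{
  "manifest_version": 2,
  "name": "Toxicity Checker",
  "version": "0.0.1",
  "permissions": ["activeTab"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["main.js", "content-script.js"]
    }
  ],
  "browser_action": {
    "default_icon": "assets/hazmat.png",
    "default_popup": "popup.html"
  },
  "options_page": "options.html"
}
```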

When Angular builds the output application (discussed above), it puts the code in the Chrome extension's dist folder. The manifest.json then imports the bundled output application (which contains the custom elements) along with a separate script that detects textareas and wraps them with the custom element… like this:
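A hedged reconstruction of that content script (function names are mine; the `shouldAppend` check mirrors the description that follows):

```javascript
// Only wrap a textarea whose parent isn't already a toxicity-container.
function shouldAppend(textarea) {
  const parent = textarea.parentElement;
  return !parent || parent.tagName.toLowerCase() !== 'toxicity-container';
}

// Find every bare textarea and transclude it into the custom element.
function wrapTextareas() {
  for (const ta of document.querySelectorAll('textarea')) {
    if (!shouldAppend(ta)) continue;
    const container = document.createElement('toxicity-container');
    ta.replaceWith(container);
    container.appendChild(ta); // the element's <slot> transcludes the textarea
  }
}

// A MutationObserver re-runs the scan as the page keeps loading:
// new MutationObserver(wrapTextareas)
//   .observe(document.body, { childList: true, subtree: true });
```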

The shouldAppend method makes sure the parent element of the textarea isn't already a toxicity-container element, and a mutation observer keeps track of things loading on the screen. This is fairly inefficient but gets the job done for the POC. I added an options page for the Chrome extension that enables/disables this behavior.

I ended up adding an additional mode that lets a user highlight any text on screen and click the extension icon to run the analysis on it. This mode is always enabled.
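One way to wire that up (the `chrome.tabs.executeScript` call is the real manifest-v2 extension API; the rest is my guess at the approach — the popup grabs the active tab's selection and hands it to the custom element, whose property maps to the @Input):

```javascript
// Sketch of the extension popup's script: read the selection from the
// active tab, then pass it to the <toxicity-popup> Angular Element.
// (Requires the activeTab permission; property name 'text' is assumed.)
function analyzeSelection() {
  chrome.tabs.executeScript(
    { code: 'window.getSelection().toString()' },
    ([selection]) => {
      const popup = document.querySelector('toxicity-popup');
      if (popup && selection) popup.text = selection;
    }
  );
}
```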

Highlighting some text and clicking the "toxicity" icon (hazmat symbol): an Angular Element runs the toxicity analysis and the results are displayed in the popup.

Great… now what?

I'm not packaging this as an actual Chrome extension. If I end up making the improvements below I may publish it. If you want to try it out, you can clone the repo and load the dist/chrome folder as an unpacked extension (you'll need developer mode turned on in Chrome).

I don’t really plan on extending this much further… but there are a few improvements that I’d make if I did:

  • Have the chrome extension detect entry into text inputs and only add the element where the focus is
  • Expand the types of text inputs (divs with textarea role, inputs, etc)
  • Flesh out the documentation

With the addition of the text-highlighting analysis some of these seem unnecessary.
