Abusive Content Classifier

Protect your forums and portals from abusive and offensive language. This API identifies offensive language with 98% accuracy and helps you fight online abuse and spam.
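For a sense of how the API is typically consumed, here is a minimal request sketch in Python. The endpoint URL, API key, parameter names, and response fields shown are illustrative assumptions rather than the documented contract; consult the API reference for the exact schema.

```python
# Minimal sketch of calling the classifier over HTTP with `requests`.
# The endpoint and response shape below are assumptions for illustration.
import requests

API_KEY = "your-api-key"                          # placeholder credential
ENDPOINT = "https://api.example.com/v1/abuse"     # hypothetical endpoint

response = requests.post(ENDPOINT, data={
    "api_key": API_KEY,
    "text": "this is a sample comment to check",
})

# Assumed response shape: a label plus a confidence score, e.g.
# {"label": "non_offensive", "confidence": 0.97}
print(response.json())
```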


Why our Abusive Content Classifier API?

Accurate

High accuracy in classifying the abusive content commonly found in social media conversations and chatbot applications. Achieves 98% accuracy on our internal test set.

Fast

The Komprehend abuse detection solution is built for the most demanding requirements and is already in use across industries, from social media monitoring to chatbots.

Flexible Deployment

The Komprehend abuse detection API supports private cloud deployment via Docker containers or on-premise deployment, ensuring that no data leaves your infrastructure.

How does our Abusive Content Classifier API work?

It uses a Long Short-Term Memory (LSTM) network to classify a text as offensive or non-offensive. LSTMs model sentences as a chain of forget-remember decisions based on context. The model is trained separately on social media data and news data so it can handle both casual and formal language. We have also trained it on custom datasets for various clients.
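To make the approach concrete, the following is a minimal sketch of a binary (offensive / non-offensive) LSTM classifier in Keras. It illustrates the general architecture described above, not the production model; the layer sizes, vocabulary size, and toy training data are placeholder assumptions.

```python
# Sketch of an LSTM text classifier: Embedding -> LSTM -> sigmoid.
# All hyperparameters and data here are illustrative placeholders.
import numpy as np
from tensorflow.keras.layers import Dense, Embedding, LSTM
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing.text import Tokenizer

# Toy labelled data (1 = offensive, 0 = non-offensive).
texts = ["you are an idiot", "have a great day",
         "shut up loser", "thanks for the help"]
labels = np.array([1, 0, 1, 0])

# Convert text to padded integer sequences.
tokenizer = Tokenizer(num_words=10_000)
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=50)

# The LSTM's gates make the "forget-remember" decisions over the tokens.
model = Sequential([
    Embedding(input_dim=10_000, output_dim=64),
    LSTM(64),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x, labels, epochs=5, verbose=0)

# Score a new comment: probability that it is offensive.
new = pad_sequences(tokenizer.texts_to_sequences(["you are a moron"]),
                    maxlen=50)
print(model.predict(new))
```

In practice the embedding layer would be initialized from pre-trained word vectors and the model trained on a large labelled corpus of social media and news text.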

Use Cases

Moderation of public forums

The current best practice of letting users flag inappropriate content is unreliable and time-consuming: it requires a team of human moderators to check each flagged item and take action accordingly. Using our abuse classifier, forum operators can moderate content automatically and ban repeat offenders.
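As a sketch of what this automation can look like, the snippet below wires the classifier into a simple moderation loop with a three-strike ban rule. The `classify` stub, the 0.9 threshold, and the strike policy are illustrative assumptions, not part of the product; in a real deployment `classify` would call the API as in the request example above.

```python
# Toy moderation loop: classify each post, track strikes per user,
# and ban repeat offenders. Thresholds and policy are assumptions.
from collections import Counter

strikes = Counter()

def classify(text: str) -> float:
    # Stand-in for the API call: a trivial keyword check used only so
    # this sketch runs end to end. Replace with the real classifier.
    return 1.0 if any(w in text.lower() for w in ("idiot", "loser")) else 0.0

def moderate(user_id: str, text: str) -> str:
    """Decide what happens to a post and whether the author is banned."""
    if classify(text) < 0.9:          # assumed confidence threshold
        return "published"
    strikes[user_id] += 1
    if strikes[user_id] >= 3:         # assumed three-strike ban policy
        return "banned"
    return "hidden"

print(moderate("user42", "you are an idiot"))   # -> hidden (strike 1)
print(moderate("user42", "what a loser"))       # -> hidden (strike 2)
print(moderate("user42", "idiot"))              # -> banned (strike 3)
```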

Comment Moderation

You can use the Abusive Content Classifier to keep the comments sections of blogs, forums, and similar sites free from inappropriate content. News media websites covering sensitive topics such as immigration, terrorism, and unemployment struggle to keep their comment sections abuse-free. Such media houses can therefore benefit from moderating this content automatically, protecting their brand integrity.

Get Started

Custom Solutions

Want to train your own custom model? Contact Sales to get started.

Contact Sales