ABUSIVE CONTENT CLASSIFIER

Detect abusive and offensive language in your forums or portals. This API identifies offensive language with 98% accuracy and helps you fight online abuse and spam.


Ready to Integrate? Check out the API wrappers below

Java: For setup and installation instructions, please visit our GitHub page.
import paralleldots.ParallelDots;

// Get your API key here
ParallelDots pd = new ParallelDots("<YOUR_API_KEY>");
String abuse = pd.abuse("you f**king a$$hole");
System.out.println(abuse);
//Response
{
	"sentence_type": "Abusive",
	"confidence_score": 0.953125
}
Python: For setup and installation instructions, please visit our GitHub page.
from paralleldots import set_api_key, get_api_key, abuse

# Get your API key here
set_api_key("<YOUR_API_KEY>")
get_api_key()

abuse("you f**king a$$hole")
#Response
{
	"sentence_type": "Abusive",
	"confidence_score": 0.953125
}
Ruby: For setup and installation instructions, please visit our GitHub page.
require 'paralleldots'

# Get your API key here
set_api_key("<YOUR_API_KEY>")
get_api_key()

abuse('you f**king a$$hole')
#Response
{
	"sentence_type": "Abusive",
	"confidence_score": 0.953125
}
C#: For setup and installation instructions, please visit our GitHub page.
using ParallelDots;

// Get your API key here
ParallelDots.api pd = new ParallelDots.api("<YOUR_API_KEY>");
var abuse = pd.abuse("Is this content Abusive?");
Console.WriteLine(abuse);
// Response
{
	"sentence_type": "Abusive",
	"confidence_score": 0.953125
}
PHP: For setup and installation instructions, please visit our GitHub page.
require(__DIR__ . '/vendor/paralleldots/apis/autoload.php');

# Get your API key here
set_api_key("<YOUR_API_KEY>");
get_api_key();

abuse('you f**king a$$hole');
#Response
{
	"sentence_type": "Abusive",
	"confidence_score": 0.953125
}
HOW DOES OUR ABUSIVE CONTENT CLASSIFIER API WORK?

It uses Long Short-Term Memory (LSTM) networks to classify text as abusive or non-abusive. LSTMs model sentences as a chain of forget-remember decisions based on context. The model is trained separately on social media data and on news data, so it handles both casual and formal language. We have also trained this algorithm on custom datasets for various clients.
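To make the "chain of forget-remember decisions" concrete, here is a toy, scalar sketch of a single LSTM cell step. The weights and inputs are made up purely for illustration; a real model uses learned weight matrices over word embeddings, not these numbers.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM cell step on scalar inputs: the forget gate decides how much
    of the previous cell state to keep, the input gate how much new
    information to write, and the output gate what to expose."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])          # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])          # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])          # output gate
    c_tilde = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])  # candidate state
    c = f * c_prev + i * c_tilde   # the "forget-remember" decision
    h = o * math.tanh(c)           # new hidden state, passed to the next word
    return h, c

# Toy weights (illustrative only, not a trained model)
w = {k: 0.5 for k in ["wf", "uf", "bf", "wi", "ui", "bi",
                      "wo", "uo", "bo", "wc", "uc", "bc"]}

h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:  # a tiny "sentence" of per-word scores
    h, c = lstm_step(x, h, c, w)
```

Running the same step over each word in sequence is what lets the model carry context forward: an insult early in a sentence can still influence the classification decision at the end.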

ABUSIVE CONTENT CLASSIFICATION USE CASES

There are abundant platforms for people to interact and voice their opinions on. As the number of these platforms grows, the amount of textual data posted on them grows too. Filtering out abusive content is important for a pleasant user experience. Such platforms also generally appeal to all age groups, so detecting and filtering out abusive content keeps them safe for everyone.
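A typical moderation pipeline applies a confidence threshold to the classifier's response before hiding a comment. The sketch below builds on the `{"sentence_type", "confidence_score"}` response shape shown in the examples above; `fake_classify` is a stand-in stub for illustration only — in production you would call the abuse API instead.

```python
def should_hide(result, threshold=0.8):
    """Hide a comment when the classifier is confident it is abusive."""
    return (result["sentence_type"] == "Abusive"
            and result["confidence_score"] >= threshold)

def moderate(comments, classify, threshold=0.8):
    """Split comments into (visible, hidden) lists using a classifier
    that returns the API's response shape for each comment."""
    visible, hidden = [], []
    for text in comments:
        (hidden if should_hide(classify(text), threshold) else visible).append(text)
    return visible, hidden

# Stubbed classifier, for demonstration only
def fake_classify(text):
    abusive = "f**" in text or "a$$" in text
    return {"sentence_type": "Abusive" if abusive else "Non Abusive",
            "confidence_score": 0.95 if abusive else 0.90}

visible, hidden = moderate(["nice post!", "you f**king a$$hole"], fake_classify)
# visible -> ["nice post!"], hidden -> ["you f**king a$$hole"]
```

The threshold is a product decision: a lower value hides more borderline content, a higher value lets more through but reduces false positives.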

WHY OUR ABUSIVE CONTENT CLASSIFIER?
Accurate

Highly accurate classification of unstructured textual data.

Real Time

State-of-the-art technology to provide accurate results in real time.

Customizable

Can be trained on a custom dataset to obtain similar accuracy and performance.