Darshan Chauhan

Originally published at Medium

Fake News Detection Using Python | Learn Data Science in 2020

1. Introduction
In today's world of social media and internet-driven communication, people depend heavily on online sources, yet not every news story or article we read describes something that actually happened. Social media platforms have the largest user bases and carry a mix of real and fake news. This project identifies fake and real news using data science techniques, trained on a large dataset of news articles.

What is FAKE NEWS?

A type of yellow journalism, fake news encapsulates pieces of news that may be hoaxes and are generally spread through social media and other online media. This is often done to further or impose certain ideas, frequently with political agendas. Such news items may contain false and/or exaggerated claims, may be amplified by recommendation algorithms, and can trap users in a filter bubble.

What is TfidfVectorizer?

· TF (Term Frequency): The number of times a word appears in a document is its term frequency. A higher value means the term appears more often, so the document is a good match when the term is part of the search terms.
· IDF (Inverse Document Frequency): Words that occur many times in a document, but also occur many times in many other documents, may be irrelevant. IDF is a measure of how significant a term is in the entire corpus.
The TfidfVectorizer converts a collection of raw documents into a matrix of TF-IDF features.
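
As a quick illustration (the three sentences and settings below are invented for demonstration only, not part of the project's dataset), TfidfVectorizer can be run on a tiny corpus to see the resulting feature matrix:

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus, purely for illustration
docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "dogs and cats make good pets"]

vectorizer = TfidfVectorizer(stop_words='english')
matrix = vectorizer.fit_transform(docs)
print(matrix.shape)               # (3 documents, number of distinct terms kept)
print(matrix.toarray().round(2))  # each row holds one document's TF-IDF weights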

What is PassiveAggressiveClassifier?

The passive-aggressive algorithms are a family of algorithms for large-scale learning. They are similar to the Perceptron in that they do not require a learning rate. However, contrary to the Perceptron, they include a regularization parameter C.
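
As a minimal sketch (the texts and labels below are made up for illustration), the classifier is fit and used like any other scikit-learn estimator:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier

# Tiny made-up example, not the project dataset
toy_texts = ["breaking: miracle cure found", "council approves new budget",
             "aliens endorse presidential candidate", "court rules on tax appeal"]
toy_labels = ["FAKE", "REAL", "FAKE", "REAL"]

features = TfidfVectorizer().fit_transform(toy_texts)
clf = PassiveAggressiveClassifier(C=1.0, max_iter=50)  # C is the regularization parameter
clf.fit(features, toy_labels)
print(clf.predict(features[:1]))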

Software Requirements
IDE — Jupyter Notebook (IPython programming environment)

Step-1: Download First Dataset of news to work with real-time data
The dataset we'll use for this Python project, which we'll call news.csv, has a shape of 7796×4. The first column identifies the news, the second and third are the title and text, and the fourth column has labels denoting whether the news is REAL or FAKE.

Step-2: Make Necessary Imports
import numpy as np
import pandas as pd
import itertools
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
Step-3: Now, let's read the data into a DataFrame, and get the shape of the data and the first 5 records.

df = pd.read_csv('E://news/news.csv')
df.shape
df.head()

Step-4: Get the labels from the DataFrame.
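
A minimal sketch, continuing from the DataFrame read in Step-3 and assuming the fourth column is named label as described in Step-1:

labels = df.label   # the REAL/FAKE column described in Step-1
labels.head()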

Step-5: Split the dataset into training and testing sets.
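
One possible split, continuing from the code above; the text column name, the 80/20 ratio, and the random_state are illustrative choices rather than requirements:

# Hold out 20% of the articles for testing; random_state fixed for reproducibility
x_train, x_test, y_train, y_test = train_test_split(df['text'], labels, test_size=0.2, random_state=7)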

Step-6: Let's initialize a TfidfVectorizer with English stop words and a maximum document frequency of 0.7 (terms that appear in more than 70% of documents will be discarded). Stop words are the most common words in a language, which are filtered out before processing the natural-language data. The TfidfVectorizer turns a collection of raw documents into a matrix of TF-IDF features.

Initialize a TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(stop_words='english', max_df=0.7)

Fit and transform train set, transform test set

tfidf_train=tfidf_vectorizer.fit_transform(x_train)
tfidf_test=tfidf_vectorizer.transform(x_test)

Initialize a PassiveAggressiveClassifier

pac=PassiveAggressiveClassifier(max_iter=50)
pac.fit(tfidf_train,y_train)

Predict on the test set and calculate accuracy

y_pred=pac.predict(tfidf_test)
score=accuracy_score(y_test,y_pred)
print(f'Accuracy: {round(score*100,2)}%')
As shown above, the vectorizer is fit and transformed on the train set and only transformed on the test set.

Step-7: Next, we initialize the PassiveAggressiveClassifier and fit it on tfidf_train and y_train.

Then we predict on the test-set TF-IDF features and calculate the accuracy with accuracy_score() from sklearn.metrics.

Step-8: After computing the accuracy, we build a confusion matrix, as sketched below.
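
A minimal sketch, assuming the two classes are the strings 'FAKE' and 'REAL' as in Step-1:

# Rows and columns follow the order passed in labels=
print(confusion_matrix(y_test, y_pred, labels=['FAKE', 'REAL']))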

So with this model, we have 589 true positives, 585 true negatives, 44 false positives, and 49 false negatives.

Conclusion
This system detects fake and real news in the given dataset with an accuracy of 92.82%, and in doing so helps tackle the problem of yellow journalism.
