
Sentiment analysis using the Naive Bayes method and logistic regression on film review data. The goal is to classify positive and negative reviews with these two shallow algorithms and to reach at least 75-80% accuracy.


Sentiment analysis using Naive Bayes and Logistic regression

  • Idea

    • Analyze film review data and classify each review as positive or negative
  • Environment

    • Google colaboratory server
    • Python 3
  • Dataset

    • Film reviews dataset
      • rt-polarity.pos contains 5331 positive snippets
      • rt-polarity.neg contains 5331 negative snippets
  • Models

    • Naive Bayes
    • Logistic regression
  • Results

    • Naive Bayes:
      • Mean test accuracy: 77.60 %
      • f1-score for positive class: 0.79
      • f1-score for negative class: 0.79
      • execution time with GridSearch: 0.277s
      • execution time without GridSearch: 0.013s
      • execution time calculating from scratch: 0.008s
    • Logistic regression:
      • Mean test accuracy: 74.53 %
      • f1-score for positive class: 0.78
      • f1-score for negative class: 0.78
      • execution time with GridSearch: 2.51s
      • execution time without GridSearch: 0.073s
  • Questions

1. Describe the text processing pipeline you have selected.

  1. Because the models do not work with plain text, I needed to convert it to a numerical representation, which first requires splitting the text into units (tokens) to be encoded. In a first attempt I used the Keras tokenizer, but concluded that with one-hot encoding I lose the frequency of each word within a sentence (only the presence of unique words is kept) and get no information about how common or rare each word is across the whole dataset.
  2. I then switched to scikit-learn's CountVectorizer, which solves the frequency problem but does not cover uniqueness: it works like one-hot encoding except that the integer value of each token is its frequency within the sentence.
  3. Finally I settled on scikit-learn's TfidfVectorizer, which covers both problems and therefore preserves more information. With TfidfVectorizer I encoded the tokenized data into a sparse matrix of tf-idf scores (efficient and storage-friendly, since most entries in each example are zeros), where each score reflects both the frequency and the uniqueness of a token (word). In short, a higher tf-idf score means a word occurs often within a single example but is rare across the whole dataset, while very common words get low scores.
    Using TfidfVectorizer improved the f1-scores of both methods compared with the previous two attempts; a minimal encoding sketch is shown below.
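A minimal sketch of this encoding step, assuming the snippets are already loaded into a list of strings (the example texts and vectorizer parameters below are illustrative, not the exact settings used in the notebook):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative snippets; in the project they come from
# rt-polarity.pos and rt-polarity.neg
reviews = [
    "a gripping, well acted drama",
    "dull, predictable and far too long",
]

# TfidfVectorizer tokenizes the text, counts term frequency and weights it
# by inverse document frequency, producing a sparse matrix of tf-idf scores
vectorizer = TfidfVectorizer(lowercase=True)
X = vectorizer.fit_transform(reviews)

print(X.shape)   # (n_documents, vocabulary_size)
print(type(X))   # scipy sparse matrix: most entries are zero, so little storage is wasted
```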

2. Why have you selected these two classification methods?

They are easy to implement, fast to train, and give reasonably good performance on this task.

3. Compare selected classification methods. Which one is better? Why?

Both methods show similar results (the same f1-score), but Naive Bayes has fewer hyperparameters to tune. Overall the two algorithms run in roughly the same time, but on closer inspection Naive Bayes is somewhat faster, and faster still when the calculations are done from scratch without any hyperparameter search. A rough sketch of how such a comparison can be timed is given below.
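A minimal sketch of comparing the two classifiers on tf-idf features with GridSearch, assuming the texts and labels are already loaded (the parameter grids and variable names here are hypothetical, not necessarily those used in the notebook):

```python
from time import perf_counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Hypothetical parameter grids; the actual notebook may tune different values
candidates = {
    "naive_bayes": (MultinomialNB(), {"clf__alpha": [0.1, 0.5, 1.0]}),
    "logistic_regression": (LogisticRegression(max_iter=1000),
                            {"clf__C": [0.1, 1.0, 10.0]}),
}

def compare(texts, labels):
    """Fit each candidate with a small grid search and report accuracy and time."""
    for name, (model, grid) in candidates.items():
        pipeline = Pipeline([("tfidf", TfidfVectorizer()), ("clf", model)])
        search = GridSearchCV(pipeline, grid, cv=5, scoring="accuracy")
        start = perf_counter()
        search.fit(texts, labels)
        elapsed = perf_counter() - start
        print(f"{name}: best CV accuracy {search.best_score_:.4f} "
              f"in {elapsed:.3f}s with {search.best_params_}")
```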

4. How would you compare the selected classification methods if the dataset were imbalanced?

By inspecting the confusion matrix and the classification report. The per-class f1-score is a reasonable metric for imbalanced data because, unlike plain accuracy, it is built from the precision and recall of each class and is therefore much less distorted by the ratio of positive to negative examples. An illustrative evaluation snippet is shown below.
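A short illustration of that kind of check using scikit-learn's evaluation utilities; y_test and y_pred below are placeholders standing in for the real held-out labels and model predictions:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Toy imbalanced example: far more positive (1) than negative (0) labels
y_test = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 1, 0]

# Per-class precision, recall and f1 expose weaknesses on the minority class
# that a single accuracy number would hide
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred, target_names=["negative", "positive"]))
```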
