State-of-the-art Aspect Based Sentiment Analysis

The state of the art in deep learning reaches new levels every year, thanks to the availability of powerful hardware and frameworks from Nvidia and Google, and to a growing open source community.

In Natural Language Processing (NLP) in particular, 2018 and the years since have seen breakthroughs from large pre-trained language models such as BERT and GPT-3, along with the ability to apply transfer learning to custom problems using transformer models. The potential of the transformer architecture introduced in the ‘Attention Is All You Need’ paper is only now being fully realized, and its range of applications keeps growing.

This article focuses on the sentiment analysis problem, particularly Aspect Based Sentiment Analysis (ABSA), and shows how a state-of-the-art deep learning model can be used for ABSA to produce high quality results.

About Aspect Based Sentiment Analysis (ABSA)

Aspect based sentiment analysis detects the sentiment expressed towards a specific target or aspect within a text, whereas non-aspect-based approaches detect only the overall tone of a sentence or document.
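
To make the distinction concrete, here is a minimal illustration in plain Python (not the article's model): a sentence-level classifier assigns one label to the whole text, while ABSA attaches a polarity to each aspect term it finds. The review text and labels below are made up for illustration.

```python
# Illustration only: the same text can carry different polarities for
# different aspects, which a sentence-level classifier collapses into
# a single label.
review = "The battery life is great, but the camera is disappointing."

# Sentence-level sentiment: one label for the whole text.
sentence_level = {"text": review, "polarity": "mixed"}

# Aspect-based sentiment: one polarity per detected aspect term.
aspect_based = [
    {"aspect": "battery life", "polarity": "positive"},
    {"aspect": "camera", "polarity": "negative"},
]

print(sentence_level)
for prediction in aspect_based:
    print(prediction)
```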

Results

The model implemented in this post raised the state-of-the-art performance benchmarks in a few ABSA task categories.

It is able to automatically pick out the sentiment target or aspect within a tweet, and also to predict the sentiment polarity towards it.

So what results can you expect after implementing a state-of-the-art model? See some examples below:

Example tweet 1: lol , yes , Sarah Jessica Parker , you look amazing . I wear Sirwal and Faneela to bed . Want to be friends !!!

Aspect term picked: sarah jessica parker

Aspect term polarity: positive

Example tweet 2: President santos urges to denounce on timely crimes like extortion & kidnapping

Aspect term picked: president santos

Aspect term polarity: neutral

Example tweet 3: So Cameron screws up the whole of Europe just to save his own job …

Aspect term picked: cameron

Aspect term polarity: negative
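
Under the hood, aspect term extraction is commonly framed as token-level sequence labeling (BIO tags), with a polarity then predicted for each extracted span. The sketch below only illustrates that output format on one of the tweets above; the tags are written by hand for illustration, not produced by the article's model.

```python
# Illustration only: aspect extraction framed as token-level BIO tagging,
# with one tag per token; the tags here are hand-written, not predicted.
tweet = "So Cameron screws up the whole of Europe just to save his own job"
tokens = tweet.split()

bio_tags = ["O", "B-ASP", "O", "O", "O", "O", "O",
            "O", "O", "O", "O", "O", "O", "O"]

# Recover aspect term spans from the BIO tags.
aspects, current = [], []
for token, tag in zip(tokens, bio_tags):
    if tag == "B-ASP":
        if current:
            aspects.append(" ".join(current))
        current = [token]
    elif tag == "I-ASP" and current:
        current.append(token)
    elif current:
        aspects.append(" ".join(current))
        current = []
if current:
    aspects.append(" ".join(current))

print(aspects)  # ['Cameron']
```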

The Implementation in this article

Introduction

The implementation in this article scratches the surface of applying a modern deep learning model to solve ABSA. There is a lot more fine-tuning required to achieve the best possible accuracy for a custom task.

The following are not discussed in this post, but they are essential to getting the full benefit of the model for your problem:

  • Fine-tuning the language model on your domain dataset
  • Transfer learning from SemEval task data to the prediction problem
  • Network confidence calibration (a brief sketch follows this list)
  • Inference model deployment and MLOps
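
As a taste of the calibration step, the sketch below shows temperature scaling (Guo et al., 2017): dividing the logits by a temperature learned on a held-out validation set softens over-confident predictions without changing the predicted class. The logit values are made up for illustration.

```python
import torch

def temperature_scale(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    """Soften (or sharpen) the softmax distribution with a scalar temperature."""
    return torch.softmax(logits / temperature, dim=-1)

# Raw (made-up) model scores for the three polarity classes of one example.
logits = torch.tensor([[4.2, 0.3, -1.1]])

print(temperature_scale(logits, temperature=1.0))  # uncalibrated, over-confident
print(temperature_scale(logits, temperature=2.5))  # softer, better-calibrated probabilities
```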

We cover the implementation steps to get a deep learning ABSA model up and running and to see predictions in action on Twitter data.
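
As a preview of what that looks like, here is a minimal inference sketch using Hugging Face Transformers, assuming a BERT-style checkpoint fine-tuned for aspect polarity classification that takes the tweet and the aspect as a sentence pair. The checkpoint name and label order are placeholders, not the exact model used in this article.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "your-absa-checkpoint"            # placeholder: substitute your fine-tuned model
LABELS = ["negative", "neutral", "positive"]   # assumed label order

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

tweet = "So Cameron screws up the whole of Europe just to save his own job"
aspect = "cameron"

# Encode the tweet and the aspect as a sentence pair, as many ABSA models expect.
inputs = tokenizer(tweet, aspect, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

print(LABELS[int(logits.argmax(dim=-1))])  # expected: negative
```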

The step-by-step how-to post will be added here soon. If you are keen to follow it sooner, please ping me and I can share the unformatted steps.

The model selection: A crucial step

This step is important because the model you select should match the nature of the problem you are trying to solve. In the ABSA deep learning model space alone, there are hundreds if not thousands of models and techniques available online. The important factors to consider are:

  • Recency of the model: given the pace of innovation, especially in NLP, you want a model that was published reasonably recently.
  • Performance on benchmark datasets that are similar to the problem you are solving.
  • Hardware and framework requirements: they should match what you are comfortable with or can comfortably obtain. Very good results are achievable with readily available GPU machines these days.
  • Ease of implementation: check the available documentation and the GitHub community around the model for resources and support. This can make things considerably easier.
