Article

Enhancing Legal Sentiment Analysis: A Convolutional Neural Network–Long Short-Term Memory Document-Level Model

1 Department of Computer Science, University of Oviedo, 33003 Oviedo, Spain
2 Faculty of Science and Technology, Athabasca University, 1 University Drive, Athabasca, AB T9S 3A3, Canada
* Author to whom correspondence should be addressed.
Mach. Learn. Knowl. Extr. 2024, 6(2), 877-897; https://doi.org/10.3390/make6020041
Submission received: 10 February 2024 / Revised: 6 April 2024 / Accepted: 7 April 2024 / Published: 19 April 2024
(This article belongs to the Section Learning)

Abstract

This research investigates the application of deep learning in sentiment analysis of Canadian maritime case law. It offers a framework for improving maritime law and legal analytic policy-making procedures. The automation of legal document extraction takes center stage, underscoring the vital role sentiment analysis plays at the document level. Therefore, this study introduces a novel strategy for sentiment analysis in Canadian maritime case law, combining sentiment case law approaches with state-of-the-art deep learning techniques. The overarching goal is to systematically unearth hidden biases within case law and investigate their impact on legal outcomes. Employing Convolutional Neural Network (CNN)- and long short-term memory (LSTM)-based models, this research achieves a remarkable accuracy of 98.05% for categorizing instances. In contrast, conventional machine learning techniques such as support vector machine (SVM) yield an accuracy rate of 52.57%, naïve Bayes at 57.44%, and logistic regression at 61.86%. The superior accuracy of the CNN and LSTM model combination underscores its usefulness in legal sentiment analysis, offering promising future applications in diverse fields like legal analytics and policy design. These findings mark a significant advance for AI-powered legal tools, presenting more sophisticated and sentiment-aware options for the legal profession.

1. Introduction

In today’s dynamic and interconnected world, the significance of information spans various critical domains, including legal, political, commercial, and personal perspectives, among others. Recognizing the pivotal role that opinions play in shaping decisions and influencing outcomes, there is a growing need for automated tools to analyze sentiments effectively. Here, sentiment analysis emerges as a key enabler. Sentiment mining, or sentiment analysis, is a comprehensive natural language processing approach that can identify and classify textual data’s emotional tone and subjective content. People now share their opinions more quickly and in greater volume than ever, making the manual processing of so many viewpoints impractical. Therefore, sentiment analysis has proven extremely useful in this field [1,2,3]. By employing the sentiment analysis technique, stakeholders can also navigate the intricate layers of precedents and decisions, enhancing their capacity for nuanced interpretation and contributing to more informed decision making and policy formulation [2].
Recently, a significant amount of research has been conducted on opinion mining and sentiment analysis by applying machine learning and deep learning in various domains [4,5,6]. Opinion and sentiment analysis activities have been improved with the application of several neural networks, such as convolutional neural networks (CNNs), gated recurrent unit (GRU) or long short-term memory (LSTM), and recurrent neural networks (RNNs) [7]. Additionally, machine learning and deep learning models excel in analyzing short texts, leveraging abundant datasets from social networks to identify opinions quickly. However, tackling longer documents presents a more intricate challenge, given the higher word count and complex semantic links between sentences. Researchers are increasingly invested in developing advanced analysis techniques to extract nuanced points of view on specific subjects from this substantial data mass. Navigating through the intricacies of longer documents, they aim to enhance sentiment analysis accuracy and gain deeper insights into complex topics, reflecting the evolving landscape of text analysis. From a legal perspective, there is a discernible trend toward integrating cutting-edge technologies such as machine learning and sentiment analysis to enhance the analytical capabilities of legal practitioners. Rhanoui, et al. [8] utilized the CNN-BiLSTM model to analyze press articles and blog posts and reported an accuracy of 90.66%. Similarly, Tripathy, et al. [9] employed a hybrid machine learning model to classify document-level sentiment and reported favorable results. Hence, this technological infusion holds particular promise in Canadian maritime case law, where the complexities of legal texts and the need for precise forecasting of court decisions pose significant challenges [10].

2. Research Significance

This study aimed to use a novel combination of deep learning techniques, namely convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, to glean emotional insights from Canadian maritime case law papers. To the best of the authors’ knowledge, there is a wealth of research on sentiment analysis using deep learning but not much on the combined impact of CNN and LSTM techniques. This technique fills a gap in the literature by providing a novel strategy for using sentiment analysis in the law, and it does so by focusing on the application of deep learning to the study of Canadian maritime case law [11]. Legal sentiment analysis has the potential to completely alter how lawyers and judges examine massive collections of case law. The document’s emotional tone, judgments, and sentiment dynamics are insightful for attorneys, judges, politicians, and academics. This study introduces deep learning models for examining Canadian maritime case law, which may pick up on subtleties of feeling that more conventional approaches would otherwise miss. The potential effects of these cutting-edge computational methods on legal analytics, policy formation, and the creation of AI-powered legal instruments were also explored.
This paper begins by presenting the case law, followed by a sentiment-oriented evaluation of the findings. An extensive literature review explores the topic of sentiment analysis and its relevance to the legal profession. Data were then gathered to develop the machine learning model, and the experimental outcomes were analyzed in light of prior research [12].

3. Background and Context: Sentiment Analysis

3.1. Sentiment Analysis

Sentiment analysis (SA) is a technological evaluation of people’s thoughts, attitudes, and feelings toward a given object, which can be positive, negative, or neutral. Therefore, in this research, deep learning methods were used to solve the problem of extracting sentiment insights from Canadian maritime case law texts. Through the adept training of convolutional neural networks (CNNs) and long short-term memory (LSTM) networks, this novel approach opens new avenues for comprehending established legal doctrine and case law, facilitating better legal analysis and judgment. The implications extend far beyond the legal field, as deep learning models and algorithms can modernize the analysis of massive legal documents by revealing hidden emotions and how they impact the final judgment. With more significant implications in areas like legal analytics, policy design, and AI-powered legal tools, this study can potentially shape a more nuanced and well-informed legal environment [13].

3.1.1. Levels of Sentiment Analysis

Sentiment analysis can be carried out at several levels. Document-level analysis gauges the overall tone of a text, while sentence-level analysis assesses individual sentences, providing broad and detailed insights, respectively. Aspect-based analysis focuses on specific elements or features, uncovering positive or negative feedback. Concurrently, sentence-level analysis is crucial for detecting and evaluating views directed at particular entities, offering a more intricate understanding of the sentiments expressed within the text.
Comparative analysis involves assessing multiple entities or characteristics to ascertain their respective influences. Temporal analysis explores opinion evolution, trends, and the repercussions of events over time. In contrast, multilingual analysis uses various data formats, including text, images, audio, and video, to examine different points of view across multiple languages. Contextual analysis considers the contextual nuances that may alter words’ meaning. The extent of opinion analysis depends on a particular application’s specific objectives and conditions, which dictate the degree of opinion analysis used.

3.1.2. Word Embedding

Word embedding in natural language processing (NLP) is a remarkable strategy for enhancing sentiment analysis. Effectively unraveling the sentiments, attitudes, and views articulated in legal documents demands the deployment of sentiment analysis to discern the emotional tone of a text. The unparalleled ability of deep learning models, notably CNNs and LSTM networks, to detect intricate patterns within text data has positioned them as the gold standard for sentiment analysis. Word embedding methods like Word2Vec, GloVe, and FastText play a critical role by mapping specialized legal lexicons into numerical vectors to transmute the semantic richness of words while translating them into numerical representations. Deep learning algorithms then harness these embeddings to decode feelings, proficiently capturing emotionally charged phrases and the nuanced deployment of language within context. The seamless integration of word embedding and deep learning methodologies is indispensable in advancing sentiment analysis within Canadian maritime case law. The consequential insights from this amalgamation hold immense value for legal practitioners, policymakers, and researchers, furnishing a nuanced comprehension of the emotional dimensions embedded in legal discourse [14].
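As a minimal sketch of this step (assuming the gensim library and a generic pretrained GloVe model rather than the exact resources used in this study), the snippet below maps a small, hypothetical legal vocabulary to numerical vectors and packs them into an embedding matrix that a downstream CNN/LSTM model could consume.

```python
# Minimal sketch: building an embedding matrix from pretrained GloVe vectors.
# The vocabulary and the "glove-wiki-gigaword-100" model are illustrative assumptions.
import numpy as np
import gensim.downloader as api

word_vectors = api.load("glove-wiki-gigaword-100")  # 100-dimensional pretrained vectors

# Hypothetical legal vocabulary (word -> integer index); index 0 is reserved for padding.
vocab = {"plaintiff": 1, "defendant": 2, "appeal": 3, "negligence": 4}

embedding_dim = word_vectors.vector_size
embedding_matrix = np.zeros((len(vocab) + 1, embedding_dim))
for word, index in vocab.items():
    if word in word_vectors:               # out-of-vocabulary words stay as zero vectors
        embedding_matrix[index] = word_vectors[word]

# embedding_matrix can now initialize the weights of an Embedding layer in a CNN/LSTM model.
print(embedding_matrix.shape)
```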

3.2. Deep Learning

Deep learning, a subfield of machine learning and artificial intelligence (see Figure 1), focuses on training artificial neural networks to excel in tasks like data processing, pattern recognition, and decision making. These networks, mirroring the structure and function of the human brain, consist of interconnected layers of artificial neurons. One of the remarkable strengths of these models is their ability to automatically extract features and patterns, rendering them invaluable for applications such as sentiment analysis. Noteworthy designs within the realm of deep learning include CNNs, RNNs, and LSTMs. Particularly adept at deciphering intricate patterns and uncovering interdependencies in data, these networks find their effectiveness amplified in domains marked by complex terminology and nuanced relationships.
The following are some of the sentiment analysis aspects that deep learning models can handle:
  • Feature Extraction: Deep learning models can automatically deduce word-to-word associations, the sentiments conveyed by individual words, and the overall context.
  • Context Understanding: These models can comprehensively capture the contextual details essential for gauging sentiment in complicated fields such as maritime law.
  • Sequential Information Modeling: Architectures such as RNNs and LSTMs efficiently model sequential information, which is essential for tasks where text order affects sentiment. This is of utmost significance in legal documents, where the structure and flow of information are critical.
  • Scalability: The scale and complexity of Canadian maritime case law are well within the capabilities of deep learning models. These models can handle vast datasets and be trained for specialized tasks.
While grappling with computational complexity and the necessity for fine-tuned hyperparameter adjustment, deep learning models are remarkably effective tools in sentiment analysis within legal domains. As underscored in reference [16], these models extract subtle insights, offering a valuable conduit to elevate legal analytics. Their nuanced capabilities empower a more profound comprehension of sentiment within legal analytics and furnish a robust foundation for refining policy choices.

3.2.1. CNN

Convolutional neural networks (CNNs) are robust computational frameworks capable of decoding complex patterns within visual and textual data, particularly in sentiment analysis. These networks leverage spatial hierarchies to uncover subtle nuances in data, making them ideal choices for extracting important characteristics and patterns from textual data. CNNs apply filters or kernels to input data segments through strategic convolutional layers, precisely detecting sentiment-related words or phrases. This systematic feature extraction enables CNNs to uncover complex legal language.
A pooling layer following the convolutional layer efficiently reduces input dimensionality, capturing essential textual characteristics for sentiment analysis. By pinpointing critical sentiment-associated words, max pooling selectively extracts pertinent information. CNNs leverage this interplay to learn sentiment parameters through iterative training on labeled text–sentiment data and bridge the gap between predictions and accurate sentiment labels. Pretrained word embeddings such as Word2Vec and GloVe are employed to capture semantic relationships and aid in interpreting legal contexts. When analyzing legal texts for sentiment, it is important to adjust hyperparameters such as the CNN architecture, kernel size, and number of filters to achieve optimum results [17].
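To make the convolution-and-pooling mechanics described above concrete, the following NumPy sketch (toy dimensions and random values, not the study's implementation) slides one filter over a sequence of word embeddings and then max-pools the resulting feature map.

```python
# Illustrative 1D convolution over word embeddings followed by max pooling (toy example).
import numpy as np

rng = np.random.default_rng(0)
seq_len, embed_dim, kernel_size = 10, 8, 3          # 10 tokens, 8-dim embeddings, 3-word window
sentence = rng.normal(size=(seq_len, embed_dim))    # stand-in for an embedded sentence
kernel = rng.normal(size=(kernel_size, embed_dim))  # one convolutional filter
bias = 0.1

# Convolution: dot product of the filter with every window of three consecutive word vectors.
feature_map = np.array([
    np.sum(kernel * sentence[t:t + kernel_size]) + bias
    for t in range(seq_len - kernel_size + 1)
])
feature_map = np.maximum(feature_map, 0.0)          # ReLU nonlinearity

# Max pooling keeps the strongest response, i.e., the most sentiment-relevant window.
pooled = feature_map.max()
print(feature_map.shape, pooled)
```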

3.2.2. RNN-LSTM

Recurrent neural networks (RNNs) with long short-term memory (LSTM) units are powerhouse tools for processing sequential data, shining in areas like language translation, speech recognition, and sentiment analysis [18]. Their secret weapon is the ability to tackle the vanishing gradient problem, a hiccup that often plagues traditional neural networks, which makes them particularly adept at parsing complex sequences found in texts, such as legal documents [19]. LSTMs excel at understanding the nuances of language thanks to their design, which captures long-distance dependencies within text [20]. This capability is amplified by bidirectional LSTMs, which look at text from both directions, ensuring a robust context grasp for accurate sentiment detection. By utilizing pretrained word embeddings, these networks can also handle the specialized vocabulary demanded by tasks requiring extensive text analysis.
Training these networks involves meticulously adjusting their architecture, including the layers and units specific to LSTMs, and optimizing them for tasks that demand an understanding of extended sequences. This makes RNN-LSTMs particularly valuable for projects like sentiment analysis in Canadian maritime case law, offering insights into the shifting tones within legal documents over time.

3.2.3. RNN-BiLSTM

Bidirectional long short-term memory (BiLSTM) networks significantly advance recurrent neural networks, particularly in processing natural language [18]. Unlike traditional LSTMs, BiLSTM incorporates two hidden layers, enabling the simultaneous processing of data in both the forward and backward directions. This BiLSTM approach enhances the network’s ability to capture context and dependencies in sequential data. This architecture has demonstrated notable effectiveness in various natural language processing tasks, showcasing its prowess in sentiment analysis, named entity recognition, and machine translation. The bidirectional nature of BiLSTM allows it to capture nuanced patterns and relationships within language structures, making it a valuable tool in the ever-evolving landscape of deep learning applications for natural language understanding.
A powerful tool in natural language processing (NLP), BiLSTM combines the advantages of LSTM with bidirectional processing [19]. Placing words within both their preceding and following contexts helps clarify their meanings. The BiLSTM network has many potential uses, including machine translation, text categorization, and named entity identification, and bidirectional context modeling also underlies advanced designs like BERT, which achieve benchmark NLP performance. On the other hand, longer sequences pose challenges for BiLSTM because of the amount of computational work involved. Architectures based on transformers, such as BERT and GPT, are often preferred in natural language processing owing to their parallelism and scalability [20].

3.2.4. Semantic-Oriented Approach (SOA)

The sentiment analysis method known as SOA is dictionary-based. With dictionary-based methods, sentiment analysis uses premade dictionaries listing the polarity of various words and phrases. A lexicon-based system for analyzing blogs and news was developed by Godbole, et al. [21] and was based on the Lidia text analysis system. They suggest using WordNet to extend candidate seed lists of opinion words. Baccianella, et al. [22] designed SentiWordNet, a lexical resource to facilitate sentiment classification and opinion mining applications based on WordNet synsets.
Many academics have proposed SOA and machine learning-based methods for the sentiment analysis of news headlines. For more subjective data, SOA-based solutions outperform machine-learning-based methods. Machine learning models operate with multiple domains and enormous datasets. Many studies have suggested that machine-learning-based classification is better suited for large domains. Denecke [23] demonstrated that machine learning approaches perform better on multidomain data than SOA-based methods.

4. Related Works

4.1. Short Text Sentiment Analysis

Understanding the emotions conveyed in short posts such as tweets, product reviews, comments, and status updates is the goal of the specialized discipline of short-text sentiment analysis. Since more and more of our digital communication consists of concise sentences, this area has attracted a great deal of research. While lengthier documents with more context may be analyzed using standard sentiment analysis techniques, brief texts with condensed and constrained character counts introduce new obstacles. Extracting and analyzing feelings from brief writings, particularly in social media and online reviews, is crucial since they provide vital information about the author’s emotional tone and viewpoints.
Difficulties arise when analyzing the tone of short texts due to factors such as the absence of context, the use of informal language, background noise and abbreviations, and an unequal distribution of classes. Since words and phrases in text messages may have multiple meanings, which can change depending on the context in which they are used, context is essential for deciphering emotions. Slang, conversational phrases, and emoticons/emojis pose unique challenges to emotion analysis because of their informal nature. Noise and abbreviations can likewise distort sentiment analysis findings in brief text conversations [11].
Sentiment analysis models may be biased if there is a significant racial or ethnic minority in the population. Emoticon and emoji analysis, deep learning models like recurrent neural networks and convolutional neural networks, and transfer learning approaches are just a few specialized methods researchers and data scientists have created for brief text sentiment analysis. These methods can be utilized in various contexts, such as social media monitoring, customer service, and legislation, to gauge public opinion, consumer satisfaction, and new trends.

4.2. Document Level Sentiment Analysis

Document-level sentiment analysis focuses on analyzing sentiment in lengthy texts such as articles, reviews, and reports, providing deep insight into emotional nuances and context. Unlike short-text analysis, which deals with concise texts, document-level analysis benefits from more comprehensive information to understand complex emotions in large texts, which is crucial for detailed sentiment comprehension applications [24]. Additionally, through a combination of lexical, syntactic, and semantic analyses, this process involves identifying sentiment-bearing elements such as words, phrases, and context cues, discerning their polarity and aggregating them to form an understanding of the document’s sentiment. It incorporates aspect-based evaluation for in-depth opinions on specific topics, necessitating algorithms capable of effectively handling sarcasm, ambiguity, and complex expressions. Machine learning models, including SVM, naïve Bayes, and RNNs, excel in this domain by capturing contextual nuances, which are vital for dissecting sentiments in product reviews and other detailed documents. Document-level analysis plays a significant role in natural language processing, supporting decision making across various sectors by analyzing sentiments in product evaluations, financial reports, and social media [25]. In legal texts, it assists in identifying positive or negative sentiments, with neural networks offering advantages over traditional models by eliminating the need for explicit feature definitions [26]. Integrating AI and machine learning in law transforms traditional practices, enhancing document analysis and prediction accuracy. However, with the growing integration of AI in legal processes, ethical considerations and potential biases require careful management, particularly concerning data privacy and the ethical use of AI in judicial decisions [20]. Table 1 provides an overview of the related work.

5. Proposed Model: CNN-LSTM and Doc2vec for Document-Level Sentiment Analysis

Cutting-edge methods in document-level sentiment analysis, like CNN-LSTM and Doc2Vec (see Figure 2), leverage advanced techniques to extract valuable insights and sentiment information from extensive texts like reviews, articles, and reports. These methods aim to decipher the text’s underlying meaning and emotional nuances by employing deep learning and vector representations.
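For the Doc2Vec component, a minimal gensim sketch is given below; the two case-law-style snippets and the hyperparameters are placeholders, not the corpus or settings used in this study.

```python
# Minimal Doc2Vec sketch with gensim (placeholder texts; hyperparameters are illustrative only).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

raw_docs = [
    "the appeal is dismissed with costs to the respondent",
    "the court finds the carrier liable for the damaged cargo",
]
tagged = [TaggedDocument(words=doc.split(), tags=[i]) for i, doc in enumerate(raw_docs)]

model = Doc2Vec(vector_size=100, min_count=1, epochs=40)
model.build_vocab(tagged)
model.train(tagged, total_examples=model.corpus_count, epochs=model.epochs)

# Fixed-length vector for an unseen ruling, usable as input to a downstream classifier.
vector = model.infer_vector("the lower court decision is affirmed".split())
print(vector.shape)  # (100,)
```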
Convolution layer: Although some complexities like time and space complexity are associated with the size of the input image (or feature maps), the number of convolutional layers, and the size of the filters with image processing, CNNs can also be effectively trained for text analysis. Additionally, CNNs are highly efficient for processing grid-like data such as images (see Figure 3).
In this context, CNNs are crucial because their local receptive fields capture the sentiment of particular segments as well as the global feeling, aided by max-pooling layers that prevent excessive feature loss. This approach benefits Canadian marine case law, providing a more accessible understanding of rulings and accommodating diverse perspectives. “LeNet” and “AlexNet,” two prominent CNNs, share linear neuron model principles. CNNs, unlike traditional MLPs, incorporate weight sharing and restricted connectivity in convolutional layers.
Conv1D: The 1D forward propagation (1D-FP) expression in each CNN layer is as follows:
$x_k^l = b_k^l + \sum_{i=1}^{N_{l-1}} \mathrm{conv1D}\left(w_{ik}^{l-1}, s_i^{l-1}\right)$
$x_k^l$ denotes the input, whereas $b_k^l$ denotes the bias of the $k$th neuron in layer $l$. Similarly, $s_i^{l-1}$ denotes the output of the $i$th neuron at layer $l-1$, and $w_{ik}^{l-1}$ is the kernel from the $i$th neuron at layer $l-1$ to the $k$th neuron at layer $l$.
$y_k^l = f(x_k^l)$ and $s_k^l = y_k^l \downarrow ss$, where $\downarrow ss$ denotes down-sampling (pooling) by a factor of $ss$.
With $l = 1$ as the input layer and $l = L$ as the output layer, the back-propagation procedure begins at the MLP (output) layer. There are $N_L$ classes in the dataset. In the output layer, the mean squared error (MSE) between the target and output vectors, $t^p$ and $[y_1^L, \ldots, y_{N_L}^L]$, for an input vector $p$ is expressed as
$E_p = \mathrm{MSE}\left(t^p, [y_1^L, \ldots, y_{N_L}^L]\right) = \sum_{i=1}^{N_L} \left(y_i^L - t_i^p\right)^2$
The derivative of $E_p$ with respect to each network parameter can be calculated using the delta error, $\Delta_k^l = \partial E_p / \partial x_k^l$. To be more precise, the chain rule of derivatives can be used to update not just the bias of the current neuron but also the weights of all of the neurons in the layer above.
CNNs with several layers use both back-propagation and forward propagation (see Figure 4).
Through forward and reverse propagation, the last hidden CNN layer is linked to the first hidden MLP layer (see Figure 5).
(1) Initialize the weights and biases of the network (e.g., randomly, ~U(−0.1, 0.1)).
(2) For each BP iteration, DO:
  • For each training sample in the dataset, DO:
    FP: Forward-propagate from the input layer to the output layer to find the output of every neuron in each layer, $s_i^l$, $i \in [1, N_l]$, $l \in [1, L]$.
    BP: Compute the delta error at the output layer and back-propagate it to the first hidden layer to compute the delta errors, $\Delta_k^l$, $k \in [1, N_l]$, $l \in [1, L]$.
    PP: Postprocess to compute the weight and bias sensitivities.
    Update: Update the weights and biases with the (accumulated) sensitivities scaled by the learning factor.
LSTM layer: Although the sequence length (T), hidden state size (h), and number of LSTM layers can raise computational concerns, LSTMs’ strength in modeling sequential data makes them excellent at capturing the natural flow and context of text. LSTMs are therefore crucial for extracting word and sentence dependencies in document-level sentiment analysis, allowing them to capture evolving attitudes throughout lengthy texts. Their ability to selectively retain and forget information over extended sequences ensures consistent and nuanced sentiment analysis, making LSTMs indispensable in natural language processing.
Gates:
LSTMs use three types of gates: (i) the forget gate $f_t$, (ii) the input gate $i_t$, and (iii) the output gate $o_t$. These gates control the flow of information into and out of the cell state $C_t$.
a. Cell state ($C_t$): The cell state represents the memory of the LSTM. It is updated and modified using the forget gate, the input gate, and a new candidate cell state $\tilde{C}_t$.
b. Hidden state ($h_t$): The hidden state carries information about the current time step’s input and the previous hidden state. It is used to make predictions and is updated using the output gate.
c. Forget gate ($f_t$):
$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$
Here, $\sigma$ represents the sigmoid activation function.
d. Input gate ($i_t$):
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i)$
e. Candidate cell state ($\tilde{C}_t$):
$\tilde{C}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)$
f. Cell state update ($C_t$):
$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$
This equation combines the old cell state and the new candidate cell state according to the forget and input gates.
g. Output gate ($o_t$):
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o)$
h. Hidden state ($h_t$):
$h_t = o_t \odot \tanh(C_t)$
The output gate controls the information that is passed to the hidden state. Here, $x_t$ represents the input at time step $t$, and $h_{t-1}$ represents the hidden state at time step $t-1$. Similarly, $W_f, W_i, W_c, W_o$ and $b_f, b_i, b_c, b_o$ represent the weight matrices and the bias vectors for the gates, respectively. Finally, $\sigma$ stands for the sigmoid activation function, whereas $\tanh$ denotes the hyperbolic tangent activation function.
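The gate equations above can be traced step by step in code; the NumPy sketch below computes a single LSTM time step with randomly initialized toy weights (an illustration, not trained parameters).

```python
# One LSTM time step implementing the gate equations above (toy, randomly initialized weights).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
input_dim, hidden_dim = 8, 4
W_f, W_i, W_c, W_o = (rng.normal(size=(hidden_dim, hidden_dim + input_dim)) for _ in range(4))
b_f, b_i, b_c, b_o = (np.zeros(hidden_dim) for _ in range(4))

def lstm_step(x_t, h_prev, C_prev):
    z = np.concatenate([h_prev, x_t])     # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)          # forget gate
    i_t = sigmoid(W_i @ z + b_i)          # input gate
    C_tilde = np.tanh(W_c @ z + b_c)      # candidate cell state
    C_t = f_t * C_prev + i_t * C_tilde    # cell state update
    o_t = sigmoid(W_o @ z + b_o)          # output gate
    h_t = o_t * np.tanh(C_t)              # hidden state
    return h_t, C_t

h, C = np.zeros(hidden_dim), np.zeros(hidden_dim)
x = rng.normal(size=input_dim)
h, C = lstm_step(x, h, C)
print(h.shape, C.shape)
```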
Activation layer: In the context of sentiment analysis applied to Canadian marine case law texts using CNN + LSTM architecture, the activation layer, also known as the activation function, is a pivotal element. Adding this nonlinear layer greatly enhances the model’s ability to capture complicated connections and produce precise predictions. By introducing nonlinearity, the model becomes adept at discerning complex patterns, enabling more accurate predictions and nuanced insights into sentiment from the nuanced language of legal documents. One of the most important parts of deep learning models is the activation layer, which helps to interpret complex legal documents’ sentiment patterns and other nuanced emotional expressions in the data they include. This essential layer is the linchpin for capturing and learning from recurring structures, decision making, and controlling gradient flow within the model. Despite its undeniable significance, the activation layer is not without challenges, with the specter of saturation looming as a potential impediment to the deep learning model’s learning speed and overall effectiveness. Nevertheless, its indispensability remains unassailable, as the success of deep learning models in the nuanced domain of sentiment analysis within legal texts is intricately tied to the adept functioning of the activation layer, as underscored by empirical evidence [27].
Regularization: Combining deep learning methods, such as CNN and LSTM models, for sentiment analysis in Canadian marine case law requires heavy use of regularization techniques to counteract overfitting. The complexity of legal language patterns makes accurate representation critical. Overfitting occurs when a model performs exceptionally well on training data but poorly on new data. Regularization therefore plays a crucial role in sentiment analysis, ensuring that the model can handle the vast range of legal text patterns and the intricacies of the training data.

5.1. Detailed Model Architecture and Training Procedure

Model Architecture and Hyperparameters

Our convolutional neural network (CNN)–long short-term memory (LSTM) architecture was meticulously designed to harness the strengths of both models for the sentiment analysis of Canadian maritime case law documents. The CNN component focuses on extracting salient features from textual data. At the same time, the LSTM part captures temporal dependencies, making the model particularly adept at understanding the context and sequence within a text.
CNN architecture: The CNN part of our model consists of two convolutional layers. The first layer has 32 filters with a kernel size of 3 × 3, followed by a max-pooling layer with a pool size 2 × 2 to reduce dimensionality and capture the most relevant features. The second convolutional layer increases the depth with 64 filters, enhancing the model’s ability to recognize more complex patterns in the data. Each convolutional layer is followed by a ReLU activation function to introduce nonlinearity.
LSTM architecture: Following the CNN layers, we integrated a bidirectional LSTM (BiLSTM) layer with 100 units to process the sequence data forward and backward, thus capturing context from both directions. This bidirectionality is crucial for understanding the nuanced legal language present in maritime case law documents.
Combination and output: The output from the CNN layers is flattened and then passed to the BiLSTM layer. The final output layer employs a softmax activation function to classify the sentiment into categories, reflecting the multiclass nature of our sentiment analysis task.
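A hedged Keras sketch of this architecture follows. The layer sizes mirror the description above (32 and 64 filters, kernel size 3, pooling size 2, a BiLSTM with 100 units, softmax output), but the convolutions are interpreted as 1D operations over the token sequence, and vocab_size, max_len, embedding_dim, and num_classes are placeholder values rather than the study’s actual settings.

```python
# Hedged Keras sketch of the described CNN-BiLSTM architecture (placeholder dimensions).
from tensorflow.keras import layers, models

vocab_size, max_len, embedding_dim, num_classes = 20000, 500, 100, 3  # illustrative values

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embedding_dim),
    layers.Conv1D(32, kernel_size=3, activation="relu"),   # first convolutional block
    layers.MaxPooling1D(pool_size=2),                      # keep the most relevant features
    layers.Conv1D(64, kernel_size=3, activation="relu"),   # deeper, more complex patterns
    layers.MaxPooling1D(pool_size=2),
    layers.Bidirectional(layers.LSTM(100)),                # context from both directions
    layers.Dense(num_classes, activation="softmax"),       # multiclass sentiment output
])
model.summary()
```

In this sketch, the pooled convolutional sequence is passed directly to the BiLSTM rather than flattened, which is one common way to realize the CNN-to-LSTM hand-off described above.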

5.2. Document Representation

Training Procedure

Loss Function: Given the multiclass classification problem, we employed the categorical cross-entropy loss function. This choice was made because it effectively measures the discrepancy between the predicted sentiment distribution and the actual distribution in the training data.
Optimizer algorithm: We opted for the Adam optimizer for its adaptive learning rate capabilities, setting an initial learning rate of 0.001. Adam combines the benefits of two other extensions of stochastic gradient descent, adaptive gradient algorithm (AdaGrad) and root mean square propagation (RMSProp), making it well suited for our complex model architecture.
Training process: The dataset was meticulously divided into training (70%) and test (30%) sets, ensuring a balanced distribution of sentiments across both partitions. Our model underwent training for 50 epochs utilizing a batch size of 64, and early stopping with a patience of 5 epochs on validation loss was implemented to mitigate overfitting. Throughout the training process, model performance was consistently assessed using accuracy and loss metrics on both the training and validation datasets. The final model was chosen based on achieving the highest validation accuracy, ensuring robust performance across diverse sentiment representations.
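Expressed in code, the training configuration described here might look like the sketch below, which assumes the Keras model and placeholder dimensions from the previous sketch and uses randomly generated dummy arrays in place of the actual dataset.

```python
# Training setup mirroring the described configuration (dummy data; model from the sketch above).
import numpy as np
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

# Dummy stand-ins for the tokenized documents and one-hot sentiment labels.
X_train = np.random.randint(1, vocab_size, size=(128, max_len))
y_train = np.eye(num_classes)[np.random.randint(0, num_classes, 128)]
X_val = np.random.randint(1, vocab_size, size=(32, max_len))
y_val = np.eye(num_classes)[np.random.randint(0, num_classes, 32)]

model.compile(
    optimizer=Adam(learning_rate=0.001),     # adaptive learning rate, initial value 0.001
    loss="categorical_crossentropy",         # multiclass sentiment targets (one-hot encoded)
    metrics=["accuracy"],
)

early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)

history = model.fit(
    X_train, y_train,
    validation_data=(X_val, y_val),
    epochs=50,
    batch_size=64,
    callbacks=[early_stop],
)
```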
Hyperparameter tuning: Preliminary experiments were conducted to determine the optimal architecture and training configurations. We experimented with different numbers of CNN and LSTM layers, kernel sizes, and filter counts, ultimately selecting the configuration that maximized validation accuracy while minimizing overfitting.

6. Experimental Results

6.1. Dataset

Legal documents in Canada are organized into several types, with maritime law legislation being just one example. Many techniques are employed for data classification, including text mining, document clustering, and machine learning algorithms. CNN is one of the models that is often used in the document classification process. Such tools are mainly used to predict a judge or jury’s decisions and to examine previous cases and rulings. In some instances, machine learning algorithms can also make it easier to consider releasing a suspect on bail. This study examined two thousand cases from the Federal High Court’s website to find patterns in Canadian maritime law (see Table 2). The final decision rendered in the case was categorized as either affirmed or reversed. An affirmed judgment indicates that the higher court upheld the lower court’s decision, while a reversed judgment signifies that the decision was overturned. The datasets were divided into training and test sets to evaluate the model’s performance on unseen data. Additionally, the data were collected manually, without using anonymization, from both the plaintiffs and defendants.
To enhance sentiment analysis within maritime law, this research strategically used the filter tool available on the Federal High Court website. This tool facilitated the identification of pertinent legislation and precedents from court rulings. By analyzing the most-used words and key phrases in the input text, outputs were generated that maintained relevance to the legal context while ensuring coherence and accuracy. Additionally, considering the input text’s length and structure, the generated outputs were tailored to meet the specific requirements of legal professionals, judges, and other stakeholders within the Canadian maritime law domain. A meticulous augmentation process was undertaken to bolster the sentiment analysis model, generating an additional 98,000 new samples through a random sampling technique. This method deliberately addressed demographic disparities, ensuring a more even distribution of examples across emotion categories. The resultant effect was a marked improvement in the model’s precision and consistency. Notably, the model’s accuracy in categorizing emotions was fortified by incorporating Canadian marine case law. After data collection and training were completed, the model’s predictions were evaluated using a held-out test set. Specificity, representing the true negative rate, is calculated by dividing the number of correctly identified negative sentiments by the total number of actual negative sentiments.
In contrast, sensitivity, representing the true positive rate, is calculated by dividing the number of correctly identified positive sentiments by the total number of actual positive sentiments. These metrics provide insights into how well the model could distinguish between positive and negative sentiments. Data augmentation and postprocessing methods were also used to balance representation across different groups and adjust the model’s performance. This deliberate and rigorous approach to data augmentation contributed significantly to the overall trustworthiness and precision of the sentiment analysis methodology employed in this study [28].
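For a binary positive/negative split, specificity and sensitivity as defined above can be read off a confusion matrix; the scikit-learn sketch below uses small illustrative label arrays rather than the study's predictions.

```python
# Specificity and sensitivity from a confusion matrix (illustrative labels only).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = positive sentiment, 0 = negative sentiment
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)   # true negative rate
sensitivity = tp / (tp + fn)   # true positive rate
print(f"specificity={specificity:.2f}, sensitivity={sensitivity:.2f}")
```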

6.2. Results

This study on case adjudication in Canadian maritime law revealed intriguing insights into the outcomes of trials based on the number of judges involved. When a case was assigned to a single judge, the likelihood of a guilty judgment stood at 46%, while the likelihood of a not guilty result was approximately 51%. Strikingly, this indicated a remarkably even distribution of judgments, with approximately 3% of cases remaining undecided. Surprisingly, the incidence of indecisive verdicts did not significantly change when three judges were involved, increasing only marginally to 5%. These findings suggest that additional judges in the trial process did not substantially alter the proportion of undecided cases, highlighting a noteworthy consistency in judgment outcomes across varying judicial scenarios in the realm of Canadian maritime law.
Accuracy is the rate at which a model makes accurate predictions.
$\mathrm{Accuracy} = \dfrac{\text{Correct Predictions}}{\text{Total Predictions}}$
In Canadian maritime law, a significant shift occurred in citation practices, revealing 41% of citations in single-judge trials and 46% in multijudge cases. This evolving trend underscores the dynamic nature of the legal landscape. We employed advanced techniques for sentiment analysis of Canadian marine case law papers, including deep learning and traditional machine learning models. This analytical approach extends beyond statistics, offering valuable insights for informed decision making in judge selection and jury verdicts [29]. Integrating technology into legal scholarship reflects a proactive response to contemporary challenges, enhancing the adaptability of legal practices.
Figure 6 is a comprehensive visual representation of a bar chart, elucidating the distribution of judgments and statuses throughout the dataset. Each bar’s height succinctly encapsulates the number of instances within its corresponding category, offering a clear and insightful overview of the dataset’s composition. This visualization lays a robust foundation for forthcoming legal sentiment analysis studies and provides vital insights into the dataset’s composition, knowing the predominance of judgments in Canadian marine case law [30].
In the initial stages of model assessment (see Figure 7a,b), the dataset was carefully split into training and test sets, with nonpredictive columns removed from the feature matrix X. The target variable y was appropriately labeled “target” for the subsequent binary classification task. To ensure reproducibility, 30 percent of the dataset was reserved for testing. Preceding sentiment analysis, the ‘Opinion’ text input underwent tokenization to achieve consistent sequence lengths for the CNN-LSTM model. The largest sequence in the dataset (max_len) was found, and the vocabulary size, which included all unique words in the ‘Opinion’ text data, was computed. These preprocessing steps were vital for the success of the sentiment analysis performed on the Canadian marine case law materials [30].
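The tokenization and padding steps described here correspond roughly to the Keras sketch below; the two placeholder 'Opinion' strings stand in for the actual case-law texts.

```python
# Tokenizing and padding the 'Opinion' texts (placeholder strings shown here).
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

opinions = [
    "the appeal is dismissed and the judgment of the lower court is affirmed",
    "the decision is reversed and the matter is remitted for a new trial",
]

tokenizer = Tokenizer()
tokenizer.fit_on_texts(opinions)
sequences = tokenizer.texts_to_sequences(opinions)

max_len = max(len(seq) for seq in sequences)    # longest sequence in the dataset
vocab_size = len(tokenizer.word_index) + 1      # unique words, plus padding index 0

X = pad_sequences(sequences, maxlen=max_len, padding="post")
print(max_len, vocab_size, X.shape)
```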

6.3. Comparison

This section compares CNN, LSTM, BiLSTM, and CNN-LSTM to the CNN-BiLSTM model.
This study examined the sentiment analysis of Canadian maritime case law using two families of machine learning models: deep learning (CNN + LSTM) and more traditional methods (logistic regression, multinomial naïve Bayes, linear support vector machine). This study’s success is credited to the use of CNN and LSTM models to extract sentiment information from judicial documents. Gamage, et al. [31] used different machine learning models for maritime surveillance to detect abnormal maritime vessels and reported 91% accuracy for the CNN model. Syed and Ahmed [32] conducted research employing CNN, LSTM, BiLSTM, and CNN-LSTM models on marine surveillance, distinguishing between normal and abnormal vessel movement patterns, and reported that the CNN-LSTM exhibited the most accurate results (89%).

6.3.1. CNN Model

The ability of CNN models to extract local patterns and features from text input makes them particularly well suited to tasks that require recognizing nearby signals or characteristics. They can spot terms, phrases, or clauses in legal papers that convey emotion. CNNs provide computational efficiency during training because of their ability to learn local patterns quickly by utilizing shared weights across several input areas. The training time is drastically reduced, making them particularly suitable for big legal text datasets. In this case, CNNs are powerful feature extractors that can glean important information from texts, including patterns, structures, or even individual words. They quickly capture the local context of brief pieces of text and excel at identifying patterns within such sections. The effectiveness of a CNN model in detecting emotions in Canadian maritime case law papers was demonstrated by its 98% accuracy rate on the tests [33].

6.3.2. LSTM Models

Long short-term memory (LSTM) models excel in understanding the context and sequence of words in text data, making them excellent for tasks requiring such an understanding. A thorough comprehension of the complex textual environment is essential in legal sentiment analysis. LSTMs are well suited to the level of detail needed to comprehend the nuanced sentiment patterns and intricate interconnections common in legal writings. LSTM models are more complex and have a larger number of parameters than CNN models. However, they still achieved high accuracy rates, reflecting how well they read sentiment dynamics in Canadian maritime case law.

6.3.3. CNN-LSTM Model

This study assessed the efficacy of the CNN and LSTM models over 50 training epochs using visual representations of loss and accuracy measurements. The loss graph reflects the model’s skill in minimizing prediction mistakes, whereas the accuracy graph depicts its skill in assigning the correct labels to opinions. By making it more straightforward to visualize how the model evolved during training on documents from Canada’s marine case law, these visuals add to the broader discussion on sentiment analysis in the law [34].
For each CNN + LSTM model, we display loss and accuracy graphs across 50 iterations during training.
In a groundbreaking study of Canada’s maritime sector, convolutional neural network (CNN) and long short-term memory (LSTM) models were employed to analyze case law and identify patterns of emotion. The impressive successes in emotion categorization, as depicted in Table 3, underscore the complexity of emotion in this intricate area of law. This research also showcases the effectiveness of advanced machine learning in navigating the challenging landscape of maritime law, where understanding and addressing emotions add a layer of complexity for legal professionals.
CNN and LSTM Model 1 achieved an impressive 98.01% test accuracy rate (see Figure 8a), showcasing its dominance in sentiment categorization and understanding of the intricacies of Canadian maritime case law texts. On the other hand, Model 2 (see Figure 8b), a descendant of Model 1, highlighted the robustness of the CNN + LSTM architecture with a test accuracy of 97.94%, proving its efficacy in extracting sentiment information from dense legal texts.
Similarly, Model 3 (see Figure 8c) achieved a test accuracy rate of 98.05%, demonstrating the approach’s resilience in predicting sentiment dynamics within Canadian maritime case law and confirming the consistently high accuracy rates of the CNN and LSTM models.
This research illustrates the effectiveness of CNN + LSTM models in analyzing legal sentiment analysis. It demonstrates how these models can more accurately detect sentiment patterns in Canadian maritime case law papers and successfully grasp the nuances of legal language. Legal analytics, policymaking, and the creation of AI-powered legal tools all stand to benefit significantly from this breakthrough [35]. Feizollah, et al. [36] utilized CNN and LSTM algorithms to extract Twitter text and claimed 93.78% accuracy.
The sentiment analysis of Canadian maritime case law was conducted using multiple machine learning methods. With an average accuracy of 0.9805, CNN + LSTM models exhibited excellent precision in interpreting the nuances of legal documents (see Figure 9). Logistic regression, multinomial naïve Bayes, and linear support vector machine (SVM) are classic models that have significantly contributed to our knowledge of sentiment analysis by emphasizing the trade-offs between complexity, interpretability, and performance.
Multinomial naïve Bayes is practical with text data, whereas logistic regression sheds light on the effect of model complexity. In linear SVM, the emphasis is on parameterization and dataset dimensionality. These additional findings will help us choose more appropriate models for legal analytics and policy development [37].

6.3.4. Precision and Recall Metrics

Upon re-examining our CNN-LSTM model’s performance on the sentiment analysis of Canadian maritime case law documents, we present additional evaluation metrics—precision and recall. These metrics are particularly informative for understanding the model’s performance across different sentiment classes, providing insights into its ability to minimize false positives (precision) and false negatives (recall).
Precision measures the model’s accuracy in predicting a specific sentiment class, calculated as the number of true positive predictions divided by the total number of positive predictions (true positives + false positives). On the other hand, recall measures the model’s ability to detect all relevant instances of a sentiment class, calculated as the number of true positive predictions divided by the total number of actual positives (true positives + false negatives).
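Per-class precision and recall, as defined above, can be computed directly with scikit-learn; the snippet below uses illustrative affirmed/reversed labels rather than the model's actual predictions.

```python
# Per-class precision and recall (illustrative labels, not the study's predictions).
from sklearn.metrics import precision_recall_fscore_support

y_true = ["affirmed", "reversed", "affirmed", "affirmed", "reversed", "reversed"]
y_pred = ["affirmed", "reversed", "affirmed", "reversed", "reversed", "affirmed"]

labels = ["affirmed", "reversed"]
precision, recall, _, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, zero_division=0
)
for label, p, r in zip(labels, precision, recall):
    print(f"{label}: precision={p:.2f}, recall={r:.2f}")
```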
Including these metrics addresses a critical aspect of model evaluation, especially in legal sentiment analysis, where the cost of misclassification can significantly impact the interpretation of legal documents and the subsequent legal analytics and policymaking processes.
The following table (Table 4) summarizes the precision and recall metrics for our CNN-LSTM model across the identified sentiment categories:
These results demonstrate the model’s strong performance in accurately classifying sentiments (as previously evidenced by the accuracy metrics) and its precision and recall across different sentiment categories. The high precision indicates a low rate of false positives, while the high recall reflects the model’s effectiveness in identifying all relevant instances of each sentiment class.
By incorporating precision and recall metrics into our evaluation, we offer a more detailed and nuanced understanding of our CNN-LSTM model’s performance in Canadian maritime case law sentiment analysis. This comprehensive evaluation underscores the model’s efficacy and reliability, reinforcing its potential utility in legal analytics and policy formulation. We believe these additional metrics address the previous omission and enhance the manuscript’s contribution to the field.

6.4. Discussion

The CNN-BiLSTM with Doc2vec, a pretrained sentence/paragraph representation model, stood out when we compared its performance to that of other deep learning models.
Accuracy ratings for Doc2Vec document embeddings combined with several neural network topologies, namely convolutional neural networks (CNNs), long short-term memory (LSTM) networks, bidirectional LSTM (BiLSTM) networks, and the combined CNN-LSTM and CNN-BiLSTM architectures, are reported. Document classification using Doc2Vec achieves 90% with CNN, 88% with LSTM, 86.40% with BiLSTM, 91% with CNN-LSTM, and 93% with CNN-BiLSTM. Higher accuracy levels imply superior performance in sentiment analysis and text categorization [38].
This research explored the integration of deep learning, specifically convolutional neural network (CNN) and long short-term memory (LSTM) models, for analyzing public opinion on maritime law in Canada. With a precision rate of 98%, this research highlights the revolutionary influence of artificial intelligence in the legal domain, stressing the mechanization of processes, interpretation of lengthy legal documents, and enhanced judgment.
The findings highlight both the benefits and drawbacks of these technologies, offering crucial insights for future applications. Significantly, sentiment analysis emerges as a valuable tool in various legal activities, including researching the law, investigating potential outcomes, preparing for court, interpreting precedent, and developing policies [39]. This research is a foundational step toward enhanced AI integration in legal practices, paving the way for further exploration and refinement in maritime law and beyond.

7. Conclusions

This study used advanced deep learning methods, including convolutional neural networks (CNNs) and long short-term memory (LSTM) architectures, to unearth the nuanced feelings behind Canadian maritime case law. These results shed light on the subtleties of Canadian marine case law and the complex interplay between public opinion and judicial decisions. With an average accuracy of 98.05% across several examples, the CNN and LSTM models proved their ability to identify nuanced emotions in legal writing. This research shows that convolutional neural networks (CNNs) and long short-term memory (LSTM) networks are helpful for sentiment analysis of maritime law in Canada [40].
The models showed impressive accuracy ratings, with some reaching 98%. Convolutional neural networks (CNNs) effectively recognized local textual patterns, while long short-term memory (LSTM) models captured long-range relationships and sequential information. These models could provide valuable information for lawyers, enhancing investigations, evaluations, policymaking, legal analysis, and court strategy. AI-driven tools can provide fresh insights into complex issues and improve legal procedures [41].
This research highlighted the significance of parameter tuning and dataset dimensionality by comparing deep learning outcomes with traditional machine learning models. Among the traditional models, logistic regression achieved the highest accuracy (61.86%), multinomial naïve Bayes reached 56.44%, and linear support vector machine reached 52.5%.
This research explored deep learning methods, with a primary focus on analyzing sentiment in Canadian maritime case law, and applied these techniques to legal analytics and policy creation. It highlights the importance of AI in legal practice and policy development and compares various machine learning models. The outcomes suggest that AI can produce a more complex and well-informed legal environment, demonstrating its potential in legal practice [42].

Author Contributions

Conceptualization, B.A. and Q.T.; methodology, B.A.; validation, B.A., Q.T. and E.d.L.C.M.; formal analysis, B.A.; investigation, B.A.; resources, B.A. and Q.T.; writing—original draft preparation, B.A.; writing—review and editing, B.A., Q.T. and E.d.L.C.M.; visualization, B.A.; supervision, Q.T. and E.d.L.C.M.; project administration, B.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data available at https://doi.org/10.17632/s3r4thpy95.1 (accessed on 9 February 2024).

Acknowledgments

The authors thank Jose Villar for fruitful discussions and the anonymous reviewers for their feedback. B.A. would like to thank Kola Abimbola and Qing Tan for their inspiration.

Conflicts of Interest

The authors declare no conflicts of interest.

List of Notations

$C_t$	Cell state
$h_t$	Hidden state
$f_t$	Forget gate
$\sigma$	Sigmoid activation function
$i_t$	Input gate
$\tilde{C}_t$	Candidate cell state
$o_t$	Output gate
$x_t$	Input at time step $t$
$h_{t-1}$	Hidden state at time step $t-1$
$W_f, W_i, W_c, W_o$	Weight matrices for the gates
$b_f, b_i, b_c, b_o$	Bias vectors for the gates
$\tanh$	Hyperbolic tangent activation function
$x_k^l$	Input data
$b_k^l$	Bias of the $k$th neuron at layer $l$
$s_i^{l-1}$	Output of the $i$th neuron at layer $l-1$
$w_{ik}^{l-1}$	Kernel from the $i$th neuron at layer $l-1$ to the $k$th neuron at layer $l$
$p$	Input vector
$t^p$	Target vector
$y_1^L, \ldots, y_{N_L}^L$	Output vector

References

  1. Liu, B. Sentiment Analysis And Opinion Mining; Springer Nature: Berlin/Heidelberg, Germany, 2022. [Google Scholar]
  2. Nasukawa, T.; Yi, J. Sentiment analysis: Capturing favorability using natural language processing. In Proceedings of the 2nd International Conference on Knowledge Capture, Sanibel Island, FL, USA, 23–25 October 2003; pp. 70–77. [Google Scholar]
  3. Bai, X. Predicting consumer sentiments from online text. Decis. Support. Syst. 2011, 50, 732–742. [Google Scholar] [CrossRef]
  4. Naseem, U.; Razzak, I.; Musial, K.; Imran, M. Transformer based Deep Intelligent Contextual Embedding for Twitter sentiment analysis. Future Gener. Comput. Syst. 2020, 113, 58–69. [Google Scholar] [CrossRef]
  5. Yusof, N.N.; Mohamed, A.; Abdul-Rahman, S. Context Enrichment Model Based Framework for Sentiment Analysis. In Proceedings of the Soft Computing in Data Science: 5th International Conference, SCDS 2019, Iizuka, Japan, 28–29 August 2019; Proceedings 5. Springer: Berlin/Heidelberg, Germany, 2019; pp. 325–335. [Google Scholar]
  6. Vijayaragavan, P.; Ponnusamy, R.; Aramudhan, M. An optimal support vector machine based classification model for sentimental analysis of online product reviews. Future Gener. Comput. Syst. 2020, 111, 234–240. [Google Scholar] [CrossRef]
  7. Mikolov, T.; Karafiát, M.; Burget, L.; Cernocký, J.; Khudanpur, S. Recurrent neural network based language model. Interspeech 2010, 2, 1045–1048. [Google Scholar]
  8. Rhanoui, M.; Mikram, M.; Yousfi, S.; Barzali, S. A CNN-BiLSTM model for document-level sentiment analysis. Mach. Learn. Knowl. Extr. 2019, 1, 832–847. [Google Scholar] [CrossRef]
  9. Tripathy, A.; Anand, A.; Rath, S.K. Document-level sentiment classification using hybrid machine learning approach. Knowl. Inf. Syst. 2017, 53, 805–831. [Google Scholar] [CrossRef]
  10. Newmyer, K.; Zaccagnino, M. Connecticut Law Review Volume 52, February 2021, Number 4, 2021. Available online: https://heinonline.org/ (accessed on 8 February 2024).
  11. Christodoulou, A.; Echebarria Fernández, J. Maritime Governance and International Maritime Organization instruments focused on sustainability in the light of United Nations’ sustainable development goals. In Sustainability in the Maritime Domain: Towards Ocean Governance and Beyond; Springer: Berlin/Heidelberg, Germany, 2021; pp. 415–461. [Google Scholar]
  12. Gavrilov, V.; Dremliuga, R.; Nurimbetov, R. Article 234 of the 1982 United Nations Convention on the law of the sea and reduction of ice cover in the Arctic Ocean. Mar. Policy 2019, 106, 103518. [Google Scholar] [CrossRef]
  13. Undavia, S.; Meyers, A.; Ortega, J.E. A comparative study of classifying legal documents with neural networks. In Proceedings of the 2018 Federated Conference on Computer Science and Information Systems (FedCSIS), Poznan, Poland, 9–12 September 2018; pp. 515–522. [Google Scholar]
  14. Abimbola, B.; Tan, Q.; Villar, J.R. Introducing Intelligence to the Semantic Analysis of Canadian Maritime Case Law: Case Based Reasoning Approach. In Proceedings of the International Workshop on Soft Computing Models in Industrial and Environmental Applications, Salamanca, Spain, 5–7 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 587–595. [Google Scholar]
  15. Abimbola, B.; Marin, E.D.L.C.; Tan, Q. Enhancing Legal Sentiment Analysis: A CNN-LSTM Document-Level Model. Preprints 2024. [Google Scholar] [CrossRef]
  16. Ghorbani, M.; Bahaghighat, M.; Xin, Q.; Özen, F. ConvLSTMConv network: A deep learning approach for sentiment analysis in cloud computing. J. Cloud Comput. 2020, 9, 1–12. [Google Scholar] [CrossRef]
  17. Jin, Z.; Yang, Y.; Liu, Y. Stock closing price prediction based on sentiment analysis and LSTM. Neural Comput. Appl. 2020, 32, 9713–9729. [Google Scholar] [CrossRef]
  18. Schuster, M.; Paliwal, K.K. Bidirectional recurrent neural networks. IEEE Trans. Signal Process. 1997, 45, 2673–2681. [Google Scholar] [CrossRef]
  19. Tai, K.S.; Socher, R.; Manning, C.D. Improved semantic representations from tree-structured long short-term memory networks. arXiv 2015, arXiv:1503.00075. [Google Scholar]
  20. Sadia, A.; Khan, F.; Bashir, F. An overview of lexicon-based approach for sentiment analysis. In Proceedings of the 2018 3rd International Electrical Engineering Conference (IEEC 2018), Karachi, Pakistan, 9–10 February 2018; pp. 1–6. [Google Scholar]
  21. Godbole, N.; Srinivasaiah, M.; Skiena, S. Large-Scale Sentiment Analysis for News and Blogs. ICWSM 2007, 7, 219–222. [Google Scholar]
  22. Baccianella, S.; Esuli, A.; Sebastiani, F. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of LREC 2010, Valletta, Malta, 17–23 May 2010; pp. 2200–2204. [Google Scholar]
  23. Denecke, K. Are SentiWordNet scores suited for multi-domain sentiment classification? In Proceedings of the 2009 Fourth International Conference on Digital Information Management, Ann Arbor, MI, USA, 1–4 November 2009; pp. 1–6. [Google Scholar]
  24. Yeskuatov, E.; Chua, S.-L.; Foo, L.K. Leveraging Reddit for suicidal ideation detection: A review of machine learning and natural language processing techniques. Int. J. Environ. Res. Public Health 2022, 19, 10347. [Google Scholar] [CrossRef] [PubMed]
  25. Tahseen, T.; Kabir, M.M.J. A comparative study of deep learning neural networks in sentiment classification from texts. In Machine Learning and Autonomous Systems: Proceedings of ICMLAS 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 289–305. [Google Scholar]
  26. Sahoo, C.; Wankhade, M.; Singh, B.K. Sentiment analysis using deep learning techniques: A comprehensive review. Int. J. Multimed. Inf. Retr. 2023, 12, 41. [Google Scholar] [CrossRef]
  27. Alghazzawi, D.; Bamasag, O.; Albeshri, A.; Sana, I.; Ullah, H.; Asghar, M.Z. Efficient prediction of court judgments using an LSTM+CNN neural network model with an optimal feature set. Mathematics 2022, 10, 683. [Google Scholar] [CrossRef]
  28. Bramantoro, A.; Virdyna, I. Classification of divorce causes during the COVID-19 pandemic using convolutional neural networks. PeerJ Comput. Sci. 2022, 8, e998. [Google Scholar] [CrossRef] [PubMed]
  29. Watson, J.; Aglionby, G.; March, S. Using machine learning to create a repository of judgments concerning a new practice area: A case study in animal protection law. Artif. Intell. Law 2023, 31, 293–324. [Google Scholar] [CrossRef]
  30. Da Silva, N.C.; Braz, F.; De Campos, T.; Gusmao, D.; Chaves, F.; Mendes, D.; Bezerra, D.; Ziegler, G.; Horinouchi, L.; Ferreira, M. Document type classification for Brazil’s supreme court using a convolutional neural network. In Proceedings of the 10th International Conference on Forensic Computer Science and Cyber Law (ICoFCS), Sao Paulo, Brazil, 29–30 October 2018; pp. 29–30. [Google Scholar]
  31. Gamage, C.; Dinalankara, R.; Samarabandu, J.; Subasinghe, A. A comprehensive survey on the applications of machine learning techniques on maritime surveillance to detect abnormal maritime vessel behaviors. WMU J. Marit. Aff. 2023, 22, 447–477. [Google Scholar] [CrossRef]
  32. Syed, M.A.B.; Ahmed, I. A CNN-LSTM architecture for marine vessel track association using automatic identification system (AIS) data. Sensors 2023, 23, 6400. [Google Scholar] [CrossRef] [PubMed]
  33. Pillai, V.G.; Chandran, L.R. Verdict prediction for Indian courts using bag of words and convolutional neural network. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 676–683. [Google Scholar]
  34. Chen, D.L.; Eagel, J. Can machine learning help predict the outcome of asylum adjudications? In Proceedings of the 16th Edition of the International Conference on Artificial Intelligence and Law, London, UK, 12–16 June 2017; pp. 237–240. [Google Scholar]
  35. Lum, K. Limitations of mitigating judicial bias with machine learning. Nat. Hum. Behav. 2017, 1, 0141. [Google Scholar] [CrossRef]
  36. Feizollah, A.; Ainin, S.; Anuar, N.B.; Abdullah, N.A.B.; Hazim, M. Halal products on Twitter: Data extraction and sentiment analysis using stack of deep learning algorithms. IEEE Access 2019, 7, 83354–83362. [Google Scholar] [CrossRef]
  37. Tasdelen, A.; Sen, B. A hybrid CNN-LSTM model for pre-miRNA classification. Sci. Rep. 2021, 11, 14125. [Google Scholar] [CrossRef]
  38. Muhlenbach, F.; Phuoc, L.N.; Sayn, I. Predicting Court Decisions for Alimony: Avoiding Extra-legal Factors in Decision made by Judges and Not Understandable AI Models. arXiv 2020, arXiv:2007.04824. [Google Scholar]
  39. Alsayat, A. Improving sentiment analysis for social media applications using an ensemble deep learning language model. Arab. J. Sci. Eng. 2022, 47, 2499–2511. [Google Scholar] [CrossRef] [PubMed]
  40. Lam, J.T.; Liang, D.; Dahan, S.; Zulkernine, F.H. The Gap between Deep Learning and Law: Predicting Employment Notice. In Proceedings of the NLLP@KDD, San Diego, CA, USA, 24 August 2020; pp. 52–56. [Google Scholar]
  41. Abimbola, B. Sentiment Analysis of Canadian Maritime Case Law: A Sentiment Case Law and Deep Learning Approach, Version 1; Mendeley Data: Amsterdam, The Netherlands, 2023. [CrossRef]
  42. Alzahrani, M.E.; Aldhyani, T.H.; Alsubari, S.N.; Althobaiti, M.M.; Fahad, A. Developing an intelligent system with deep learning algorithms for sentiment analysis of E-commerce product reviews. Comput. Intell. Neurosci. 2022, 2022, 3840071. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Sentiment polarity categorization using machine learning (above) and deep learning (below) [15].
Figure 2. CNN-LSTM for document-level sentiment analysis [15].
Figure 3. Input image processing [15].
Figure 4. CNN back-propagation and forward propagation [15].
Figure 5. CNN layer linked to the first hidden MLP layer [15].
Figure 6. Dynamics of case adjudication.
Figure 7. (a) Distribution of status in the dataset. (b) Distribution of judgments in the dataset.
Figure 8. Loss and accuracy graphs of CNN + LSTM (a) Model 1, (b) Model 2, and (c) Model 3.
Figure 9. Model performance comparison for all models used.
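Figure 2 summarizes the CNN-LSTM pipeline applied at the document level, and Figure 5 shows the convolutional output feeding a fully connected (MLP) layer. As a point of reference, a minimal Keras-style sketch of such an architecture is given below; the vocabulary size, sequence length, filter count, and LSTM width are illustrative assumptions and do not reproduce the exact hyperparameters of the three models reported in Table 3.

from tensorflow.keras import layers, models

VOCAB_SIZE = 20000    # assumed vocabulary size
MAX_DOC_LEN = 500     # assumed maximum document length in tokens
NUM_CLASSES = 3       # positive / neutral / negative, as in Table 4

model = models.Sequential([
    layers.Input(shape=(MAX_DOC_LEN,)),
    # Token ids are mapped to dense vectors before convolution.
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=128),
    # The convolutional block extracts local n-gram features from each document.
    layers.Conv1D(filters=64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    # The LSTM captures longer-range dependencies across the pooled features.
    layers.LSTM(64),
    layers.Dropout(0.5),
    # Fully connected layer feeding the softmax output, as outlined in Figure 5.
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Integer-encoded, padded documents would be fed to this model during training; the sketch is intended only to make the layer ordering of Figures 2–5 concrete, not to restate the study's preprocessing or tuning choices.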
Table 1. Related works.

Word Embedding | Level | Model | Accuracy
WORD2VEC [24] | Word level; document level; sentence level | CNN-LSTM; BERT; KNN; SSR | 84.9%; 84.7%; 89.0%; 85.01%
GLOVE [25] | Document level; word level; sentence level | CNN-BiLSTM; KNN; CNN | 88.9%; 82.7%; 81.0%; 91.01%
BOMW [26] | Sentence level; word level; document level | BOMW; BERT; CNN; SR-LSTM | 92.9%; 78.7%; 86.0%; 80.01%
Table 2. Features identified in the data.

Case year | The year the case was registered.
Majority opinion | Opinion of the majority of judges engaged in the case.
Minority opinion | Opinion of the minority of judges engaged in the case.
Number of judges | The total number of judges hearing the case.
Court judgment | Final court judgment on the case (whether the decision is affirmed or reversed).
Number of cited documents (court decision legislation data) | The number of laws and judicial jurisprudence cited by the judges to support their decision.
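For readers who wish to work with the released dataset [41], the features in Table 2 can be captured in a simple record type. The sketch below is illustrative only; the field names, types, and example values are assumptions for readability, not the dataset's exact schema.

from dataclasses import dataclass

# Illustrative record mirroring the features listed in Table 2.
# Field names, types, and example values are assumptions, not the dataset's exact schema.
@dataclass
class CaseRecord:
    case_year: int                  # year the case was registered
    majority_opinion: str           # opinion of the majority of judges
    minority_opinion: str           # opinion of the minority of judges
    number_of_judges: int           # total number of judges hearing the case
    court_judgment: str             # e.g., "affirmed" or "reversed"
    number_of_cited_documents: int  # laws and jurisprudence cited to support the decision

# Purely hypothetical example values, for illustration only.
example = CaseRecord(
    case_year=2015,
    majority_opinion="The appeal is dismissed ...",
    minority_opinion="",
    number_of_judges=3,
    court_judgment="affirmed",
    number_of_cited_documents=12,
)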
Table 3. Results obtained for each deep learning model.

Model | Test Accuracy
CNN + LSTM Model 1 | 98.01%
CNN + LSTM Model 2 | 97.94%
CNN + LSTM Model 3 | 98.05%
Table 4. Precision and recall metrics.

Sentiment Category | Precision | Recall
Positive | 0.97 | 0.95
Neutral | 0.93 | 0.90
Negative | 0.95 | 0.96
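The per-class figures in Table 4 follow the standard definitions precision = TP/(TP + FP) and recall = TP/(TP + FN). The short sketch below shows how such per-class metrics can be computed from model predictions; the label arrays are placeholders rather than the study's held-out test set.

from sklearn.metrics import precision_recall_fscore_support

# Placeholder gold labels and predictions; the reported metrics come from the actual test set.
y_true = ["positive", "neutral", "negative", "positive", "negative", "neutral"]
y_pred = ["positive", "neutral", "negative", "neutral", "negative", "neutral"]

labels = ["positive", "neutral", "negative"]
precision, recall, _f1, _support = precision_recall_fscore_support(
    y_true, y_pred, labels=labels, average=None, zero_division=0
)

for label, p, r in zip(labels, precision, recall):
    print(f"{label}: precision = {p:.2f}, recall = {r:.2f}")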