Search Results (66)

Search Parameters:
Keywords = legal text analysis

34 pages, 20058 KiB  
Article
Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks
by Grant Wardle and Teo Sušnjak
Big Data Cogn. Comput. 2025, 9(6), 149; https://doi.org/10.3390/bdcc9060149 - 3 Jun 2025
Abstract
Our study investigates how the sequencing of text and image inputs within multi-modal prompts affects the reasoning performance of Large Language Models (LLMs). Through empirical evaluations of three major commercial LLM vendors—OpenAI, Google, and Anthropic—alongside a user study on interaction strategies, we develop and validate practical heuristics for optimising multi-modal prompt design. Our findings reveal that modality sequencing is a critical factor influencing reasoning performance, particularly in tasks with varying cognitive load and structural complexity. For simpler tasks involving a single image, positioning the modalities directly impacts model accuracy, whereas in complex, multi-step reasoning scenarios, the sequence must align with the logical structure of inference, often outweighing the specific placement of individual modalities. Furthermore, we identify systematic challenges in multi-hop reasoning within transformer-based architectures, where models demonstrate strong early-stage inference but struggle with integrating prior contextual information in later reasoning steps. Building on these insights, we propose a set of validated, user-centred heuristics for designing effective multi-modal prompts, enhancing both reasoning accuracy and user interaction with AI systems. Our contributions inform the design and usability of interactive intelligent systems, with implications for applications in education, medical imaging, legal document analysis, and customer support. By bridging the gap between intelligent system behaviour and user interaction strategies, this study provides actionable guidance on how users can effectively structure prompts to optimise multi-modal LLM reasoning within real-world, high-stakes decision-making contexts.
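The modality-ordering heuristic this abstract describes can be illustrated with a vendor-neutral payload builder. This is a sketch only: the `build_prompt` helper, the message schema, and the field names are invented for illustration and do not correspond to any vendor's actual API.

```python
def build_prompt(text, image_ref, image_first=True):
    """Assemble a multi-modal prompt, making the modality order explicit.

    For single-image tasks the study finds that placement directly affects
    accuracy, so ordering is exposed as a flag rather than hard-coded.
    """
    parts = [
        {"type": "image", "ref": image_ref},
        {"type": "text", "content": text},
    ]
    return parts if image_first else list(reversed(parts))


prompt = build_prompt("Describe the chart.", "chart.png", image_first=True)
print([p["type"] for p in prompt])  # → ['image', 'text']
```

Swapping `image_first` lets an experimenter hold the content fixed while varying only the sequencing, which is the manipulation the study evaluates.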
30 pages, 845 KiB  
Article
A Multimodal Deep Learning Approach for Legal English Learning in Intelligent Educational Systems
by Yanlin Chen, Chenjia Huang, Shumiao Gao, Yifan Lyu, Xinyuan Chen, Shen Liu, Dat Bao and Chunli Lv
Sensors 2025, 25(11), 3397; https://doi.org/10.3390/s25113397 - 28 May 2025
Viewed by 92
Abstract
With the development of artificial intelligence and intelligent sensor technologies, traditional legal English teaching approaches have faced numerous challenges in handling multimodal inputs and complex reasoning tasks. In response to these challenges, a cross-modal legal English question-answering system based on visual and acoustic sensor inputs was proposed, integrating image, text, and speech information and adopting a unified vision–language–speech encoding mechanism coupled with dynamic attention modeling to effectively enhance learners’ understanding and expressive abilities in legal contexts. The system exhibited superior performance across multiple experimental evaluations. In the assessment of question-answering accuracy, the proposed method achieved the best results across BLEU, ROUGE, Precision, Recall, and Accuracy, with an Accuracy of 0.87, Precision of 0.88, and Recall of 0.85, clearly outperforming the traditional ASR+SVM classifier, image-retrieval-based QA model, and unimodal BERT QA system. In the analysis of multimodal matching performance, the proposed method achieved optimal results in Matching Accuracy, Recall@1, Recall@5, and MRR, with a Matching Accuracy of 0.85, surpassing mainstream cross-modal models such as VisualBERT, LXMERT, and CLIP. The user study further verified the system’s practical effectiveness in real teaching environments, with learners’ understanding improvement reaching 0.78, expression improvement reaching 0.75, and satisfaction score reaching 0.88, significantly outperforming traditional teaching methods and unimodal systems. The experimental results fully demonstrate that the proposed cross-modal legal English question-answering system not only exhibits significant advantages in multimodal feature alignment and deep reasoning modeling but also shows substantial potential in enhancing learners’ comprehensive capabilities and learning experiences.

35 pages, 18520 KiB  
Article
Optimizing Legal Text Summarization Through Dynamic Retrieval-Augmented Generation and Domain-Specific Adaptation
by S Ajay Mukund and K. S. Easwarakumar
Symmetry 2025, 17(5), 633; https://doi.org/10.3390/sym17050633 - 23 Apr 2025
Viewed by 1112
Abstract
Legal text summarization presents distinct challenges due to the intricate and domain-specific nature of legal language. This paper introduces a novel framework integrating dynamic Retrieval-Augmented Generation (RAG) with domain-specific adaptation to enhance the accuracy and contextual relevance of legal document summaries. The proposed Dynamic Legal RAG system achieves a vital form of symmetry between information retrieval and content generation, ensuring that retrieved legal knowledge is both comprehensive and precise. Using the BM25 retriever with top-3 chunk selection, the system optimizes relevance and efficiency, minimizing redundancy while maximizing legally pertinent content. A key design feature is the compression ratio constraint (0.05 to 0.5), maintaining structural symmetry between the original judgment and its summary by balancing representation and information density. Extensive evaluations establish BM25 as the most effective retriever, striking an optimal balance between precision and recall. A comparative analysis of transformer-based (Decoder-only) models—DeepSeek-7B, LLaMA 2-7B, and LLaMA 3.1-8B—demonstrates that LLaMA 3.1-8B, enriched with Legal Named Entity Recognition (NER) and the Dynamic RAG system, achieves superior performance with a BERTScore of 0.89. This study lays a strong foundation for future research in hybrid retrieval models, adaptive chunking strategies, and legal-specific evaluation metrics, with practical implications for case law analysis and automated legal drafting.
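The retrieval step described here, BM25 scoring followed by top-3 chunk selection, can be sketched from scratch in a few lines. This is an illustration of the standard Okapi BM25 formula over invented judgment chunks, not the paper's implementation; tokenisation and parameter defaults (k1 = 1.5, b = 0.75) are assumptions.

```python
import math
from collections import Counter


def bm25_top_k(query, docs, k=3, k1=1.5, b=0.75):
    """Score each chunk against the query with Okapi BM25; return top-k indices."""
    tokenised = [d.lower().split() for d in docs]
    n_docs = len(tokenised)
    avgdl = sum(len(d) for d in tokenised) / n_docs
    # Document frequency per term, used for the IDF weight.
    df = Counter()
    for d in tokenised:
        df.update(set(d))
    scores = []
    for i, d in enumerate(tokenised):
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append((score, i))
    return [i for _, i in sorted(scores, reverse=True)[:k]]


chunks = [
    "the appellant filed a petition under section 482",
    "the court held that the contract was void ab initio",
    "weather conditions on the day of the incident",
    "the judgment discusses void contracts and consideration",
]
print(bm25_top_k("void contract judgment", chunks))  # → [3, 1, 2]
```

Only the top-3 chunks would then be passed to the generator, which is how the system bounds redundancy while keeping the legally pertinent material.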

26 pages, 279 KiB  
Article
Aligning National Protected Areas with Global Norms: A Four-Step Analysis of Türkiye’s Conservation Laws
by Arife Eymen Karabulut and Özlem Özçevik
Sustainability 2025, 17(8), 3432; https://doi.org/10.3390/su17083432 - 11 Apr 2025
Viewed by 437
Abstract
The International Union for Conservation of Nature (IUCN) conducts critical international studies and offers recommendations on the sustainable conservation, use, and management of protected areas worldwide by setting targets within the framework of the Nature 2030 goals and the Green List standards. These initiatives are essential for protecting designated areas and encouraging their use through nature-based and community-based solutions. The success of implementing these solutions depends on the effectiveness of the local legal regulations that are currently in place. This article argues that developing a common language and norms between global and national conservation frameworks, along with the efficiency of the national legal framework, plays a crucial role in facilitating the goals of the protection, use, and management of global protected areas. This study evaluates how IUCN’s globally significant targets are reflected and addressed within Türkiye’s national legal framework and at the policy level. The article evaluates global and national legal texts in Türkiye for social, environmental, and economic sustainability, comparing them with the Nature 2030 and Green List standards through methodologies such as word matching, comparison, and compatibility analysis. To develop laws and policies that align Türkiye’s national framework, in both language and normative standards, with global goals for the protection, use, and governance of protected areas, the article highlights the importance of nature- and community-based national policy norms in achieving global protected area targets. The article’s results highlight the absence of community-based norms such as participation, governance, transparency, and equality, despite international consensus on norms like planning, area management, and the rule of law for the effective management of protected areas in Türkiye.
43 pages, 735 KiB  
Systematic Review
Causal Artificial Intelligence in Legal Language Processing: A Systematic Review
by Philippe Prince Tritto and Hiram Ponce
Entropy 2025, 27(4), 351; https://doi.org/10.3390/e27040351 - 28 Mar 2025
Viewed by 1260
Abstract
Recent advances in legal language processing have highlighted limitations in correlation-based artificial intelligence approaches, prompting exploration of Causal Artificial Intelligence (AI) techniques for improved legal reasoning. This systematic review examines the challenges, limitations, and potential impact of Causal AI in legal language processing compared to traditional correlation-based methods. Following the Joanna Briggs Institute methodology, we analyzed 47 papers from 2017 to 2024 across academic databases, private sector publications, and policy documents, evaluating their contributions through a rigorous scoring framework assessing Causal AI implementation, legal relevance, interpretation capabilities, and methodological quality. Our findings reveal that while Causal AI frameworks demonstrate superior capability in capturing legal reasoning compared to correlation-based methods, significant challenges remain in handling legal uncertainty, computational scalability, and potential algorithmic bias. The scarcity of comprehensive real-world implementations and overemphasis on transformer architectures without causal reasoning capabilities represent critical gaps in current research. Future development requires balanced integration of AI innovation with law’s narrative functions, particularly focusing on scalable architectures for maintaining causal coherence while preserving interpretability in legal analysis.
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)

24 pages, 2927 KiB  
Article
Text Mining Approaches for Exploring Research Trends in the Security Applications of Generative Artificial Intelligence
by Jinsick Kim, Byeongsoo Koo, Moonju Nam, Kukjin Jang, Jooyeoun Lee, Myoungsug Chung and Youngseo Song
Appl. Sci. 2025, 15(6), 3355; https://doi.org/10.3390/app15063355 - 19 Mar 2025
Viewed by 1197
Abstract
This study examines the security implications of generative artificial intelligence (GAI), focusing on models such as ChatGPT. As GAI technologies are increasingly integrated into industries like healthcare, education, and media, concerns are growing regarding security vulnerabilities, ethical challenges, and potential for misuse. This study not only synthesizes existing research but also conducts an original scientometric analysis using text mining techniques. To address these concerns, this research analyzes 1047 peer-reviewed academic articles from the SCOPUS database using scientometric methods, including Term Frequency–Inverse Document Frequency (TF-IDF) analysis, keyword centrality analysis, and Latent Dirichlet Allocation (LDA) topic modeling. The results highlight significant contributions from countries such as the United States, China, and India, with leading institutions like the Chinese Academy of Sciences and the National University of Singapore driving research on GAI security. In the keyword centrality analysis, “ChatGPT” emerged as a highly central term, reflecting its prominence in the research discourse. However, despite its frequent mention, “ChatGPT” showed lower proximity centrality than terms like “model” and “AI”. This suggests that while ChatGPT is broadly associated with other key themes, it has a less direct connection to specific research subfields. Topic modeling identified six major themes, including AI and security in education, language models, data processing, and risk management. The analysis emphasizes the need for robust security frameworks to address technical vulnerabilities, ensure ethical responsibility, and manage risks in the safe deployment of AI systems. These frameworks must incorporate not only technical solutions but also ethical accountability, regulatory compliance, and continuous risk management. This study underscores the importance of interdisciplinary research that integrates technical, legal, and ethical perspectives to ensure the responsible and secure deployment of GAI technologies.
(This article belongs to the Special Issue New Advances in Computer Security and Cybersecurity)
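The TF-IDF stage of a scientometric pipeline like the one described above can be sketched as follows. This is a minimal from-scratch illustration over invented titles, not the SCOPUS corpus; the LDA and centrality stages are omitted, and the `tfidf` helper and its smoothing-free weighting are simplifying assumptions.

```python
import math
from collections import Counter


def tfidf(docs):
    """Compute per-document TF-IDF weights: (term freq / doc len) * log(N / df)."""
    tokenised = [d.lower().split() for d in docs]
    n = len(tokenised)
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for d in tokenised:
        df.update(set(d))
    out = []
    for d in tokenised:
        tf = Counter(d)
        out.append({t: (tf[t] / len(d)) * math.log(n / df[t]) for t in tf})
    return out


corpus = [
    "chatgpt security risk in education",
    "generative model security vulnerability",
    "education policy for ai deployment",
]
weights = tfidf(corpus)
# A term confined to one document outweighs a term shared across documents.
print(max(weights[0], key=weights[0].get))  # → chatgpt
```

Terms like "security", which appear in several documents, are down-weighted relative to distinctive terms, which is what lets such an analysis surface discourse-defining keywords.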

21 pages, 286 KiB  
Article
Intellectual Property as a Strategy for Business Development
by Ligia Isabel Beltrán-Urvina, Byron Fabricio Acosta-Andino, Monica Cecilia Gallegos-Varela and Henry Marcelo Vallejos-Orbe
Laws 2025, 14(2), 18; https://doi.org/10.3390/laws14020018 - 19 Mar 2025
Viewed by 969
Abstract
The objective of this research is to examine the role of intellectual property (IP) in fostering business development, particularly focusing on patent management in Ecuador and its alignment with international standards. The study employs a comparative analysis of Ecuadorian legislation against the framework established by the World Intellectual Property Organization (WIPO) to identify challenges and opportunities within the national IP system. Key methods include reviewing existing legal texts, interviewing stakeholders, and analyzing patent registration processes. The findings indicate that while Ecuador has made significant strides in harmonizing its IP laws with international treaties, such as the Patent Cooperation Treaty (PCT) and the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), considerable barriers remain, particularly related to bureaucratic inefficiencies and a lack of technical resources in key institutions like the National Service of Intellectual Rights (SENADI). The conclusions highlight the need for enhanced efficiency and implementation of IP regulations to stimulate sustained innovation growth, attract national and foreign investments, and, ultimately, strengthen Ecuador’s competitiveness in a global economy. This research contributes to the understanding of how effective IP management can serve as a vital tool for economic development and innovation.
10 pages, 554 KiB  
Review
Should Artificial Intelligence-Based Patient Preference Predictors Be Used for Incapacitated Patients? A Scoping Review of Reasons to Facilitate Medico-Legal Considerations
by Pietro Refolo, Dario Sacchini, Costanza Raimondi, Simone S. Masilla, Barbara Corsano, Giulia Mercuri, Antonio Oliva and Antonio G. Spagnolo
Healthcare 2025, 13(6), 590; https://doi.org/10.3390/healthcare13060590 - 8 Mar 2025
Viewed by 673
Abstract
Background: Research indicates that surrogate decision-makers often struggle to accurately interpret and reflect the preferences of incapacitated patients they represent. This discrepancy raises important concerns about the reliability of such practice. Artificial intelligence (AI)-based Patient Preference Predictors (PPPs) are emerging tools proposed to guide healthcare decisions for patients who lack decision-making capacity. Objectives: This scoping review aims to provide a thorough analysis of the arguments, both for and against their use, presented in the academic literature. Methods: A search was conducted in PubMed, Web of Science, and Scopus to identify relevant publications. After screening titles and abstracts based on predefined inclusion and exclusion criteria, 16 publications were selected for full-text analysis. Results: The arguments in favor are fewer in number compared to those against. Proponents of AI-PPPs highlight their potential to improve the accuracy of predictions regarding patients’ preferences, reduce the emotional burden on surrogates and family members, and optimize healthcare resource allocation. Conversely, critics point to risks including reinforcing existing biases in medical data, undermining patient autonomy, raising critical concerns about privacy, data security, and explainability, and contributing to the depersonalization of decision-making processes. Conclusions: Further empirical studies are needed to assess the acceptability and feasibility of these tools among key stakeholders, such as patients, surrogates, and clinicians. Moreover, robust interdisciplinary research is needed to explore the legal and medico-legal implications associated with their implementation, ensuring that these tools align with ethical principles and support patient-centered and equitable healthcare practices.
(This article belongs to the Special Issue Ethics of Well-Done Work and Proposals for a Better Healthcare System)

29 pages, 3281 KiB  
Article
An Automated Repository for the Efficient Management of Complex Documentation
by José Frade and Mário Antunes
Information 2025, 16(3), 205; https://doi.org/10.3390/info16030205 - 5 Mar 2025
Viewed by 664
Abstract
The accelerating digitalization of the public and private sectors has made information technologies (IT) indispensable in modern life. As services shift to digital platforms and technologies expand across industries, the complexity of legal, regulatory, and technical requirement documentation is growing rapidly. This increase presents significant challenges in managing, gathering, and analyzing documents, as their dispersion across various repositories and formats hinders accessibility and efficient processing. This paper presents the development of an automated repository designed to streamline the collection, classification, and analysis of cybersecurity-related documents. By harnessing the capabilities of natural language processing (NLP) models—specifically Generative Pre-Trained Transformer (GPT) technologies—the system automates text ingestion, extraction, and summarization, providing users with visual tools and organized insights into large volumes of data. The repository facilitates the efficient management of evolving cybersecurity documentation, addressing issues of accessibility, complexity, and time constraints. This paper explores the potential applications of NLP in cybersecurity documentation management and highlights the advantages of integrating automated repositories equipped with visualization and search tools. By focusing on legal documents and technical guidelines from Portugal and the European Union (EU), this applied research seeks to enhance cybersecurity governance, streamline document retrieval, and deliver actionable insights to professionals. Ultimately, the goal is to develop a scalable, adaptable platform capable of extending beyond cybersecurity to serve other industries that rely on the effective management of complex documentation.

14 pages, 341 KiB  
Article
The Permanence and Indissolubility of Marriage Against the Background of Deuteronomy 24:1
by Grzegorz Bzdyrak and Przemysław Kubisiak
Religions 2025, 16(3), 292; https://doi.org/10.3390/rel16030292 - 26 Feb 2025
Viewed by 711
Abstract
This article is an interdisciplinary study. The authors (a canon lawyer and a biblical theologian) endeavour to examine the text of the Book of Deuteronomy 24:1 through both canonical and exegetical lenses. They look at whether and to what extent it is aligned with the contemporary Catholic teaching on the permanence and indissolubility of marriage. They frame the research problem through a series of questions: Is the analysed text contrary to the Catholic Church’s position on the inadmissibility of divorce? Does it imply consent to divorce? Or does it permit marital separation but solely under specific conditions? First, the authors discuss the Catholic teaching on the permanence and indissolubility of marriage. They highlight a distinction between the two terms. They seek to expose the process of evolution of the institution of marriage from the Creation, i.e., God’s original intention in relation to marriage, through the Old Testament period of “hardness of heart”, i.e., from the original sin to the time of Jesus, to the third stage since Jesus, who restored the original order destroyed by sin and elevated the conjugal bond of two baptized people to the dignity of a sacrament. The authors then examine the concept of marital separation. By its very nature, it does not sever the marital bond. The authors explain the legal grounds for separation, among them adultery and failure to maintain marital fidelity. Next, they conduct an in-depth semantic analysis of the studied text and discuss divorce proceedings in the light of Deuteronomy 24:1. They close the discussion with conclusions. Due to the interdisciplinary nature of the work, the authors relied on the literature from the domains of biblical studies and canon law.
33 pages, 3827 KiB  
Review
Distinguishing Reality from AI: Approaches for Detecting Synthetic Content
by David Ghiurău and Daniela Elena Popescu
Computers 2025, 14(1), 1; https://doi.org/10.3390/computers14010001 - 24 Dec 2024
Cited by 6 | Viewed by 5991
Abstract
The advancement of artificial intelligence (AI) technologies, including generative pre-trained transformers (GPTs) and generative models for text, image, audio, and video creation, has revolutionized content generation, creating unprecedented opportunities and critical challenges. This paper systematically examines the characteristics, methodologies, and challenges associated with detecting synthetic content across multiple modalities to safeguard digital authenticity and integrity. Key detection approaches reviewed include stylometric analysis, watermarking, pixel prediction techniques, dual-stream networks, machine learning models, blockchain, and hybrid approaches, highlighting their strengths and limitations as well as their detection accuracy: roughly 80% for stylometric analysis alone and up to 92% for hybrid approaches that combine multiple modalities. The effectiveness of these techniques is explored in diverse contexts, from identifying deepfakes and synthetic media to detecting AI-generated scientific texts. Ethical concerns, such as privacy violations, algorithmic bias, false positives, and overreliance on automated systems, are also critically discussed. Furthermore, the paper addresses legal and regulatory frameworks, including intellectual property challenges and emerging legislation, emphasizing the need for robust governance to mitigate misuse. Real-world examples of detection systems are analyzed to provide practical insights into implementation challenges. Future directions include developing generalizable and adaptive detection models, hybrid approaches, fostering collaboration between stakeholders, and integrating ethical safeguards. By presenting a comprehensive overview of AI-generated content (AIGC) detection, this paper aims to inform stakeholders, researchers, policymakers, and practitioners on addressing the dual-edged implications of AI-driven content creation.

29 pages, 8082 KiB  
Article
Charting the Growth of Text Summarisation: A Data-Driven Exploration of Research Trends and Technological Advancements
by Anukriti Kaushal, Chia-Chen Lin, Rishabh Chauhan and Rajeev Kumar
Appl. Sci. 2024, 14(23), 11462; https://doi.org/10.3390/app142311462 - 9 Dec 2024
Cited by 1 | Viewed by 1782
Abstract
Text summarisation plays a pivotal role in efficiently processing large volumes of textual data, making it an indispensable tool across diverse domains such as healthcare, legal, education, and journalism. It addresses the challenge of information overload by condensing or generating concise, meaningful summaries that improve decision-making, enhance accessibility, and save valuable time. Advances in artificial intelligence continue to propel the growth of text summarisation research, particularly with the evolution from traditional extractive approaches to cutting-edge abstractive models like BERT and GPT, as well as emerging innovations in multimodal and multilingual summarisation. To trace the development of this field, this study integrates bibliometric analysis and an in-depth survey, leveraging data from the Web of Science database to explore citation trends, uncover influential contributors, and highlight emerging research areas. Furthermore, bibliometric and critical evaluations are employed to outline strategic pathways and propose future directions for the continued advancement of the field. By incorporating sophisticated visualisation tools such as VOSviewer and RawGraphs, the analysis provides an enriched understanding of the field’s trajectory, identifying significant methodologies, landmark contributions, and existing gaps. This comprehensive exploration not only underscores the progress achieved in text summarisation but also serves as an invaluable resource for shaping forthcoming research endeavours and inspiring innovation in this dynamic area of study.
(This article belongs to the Section Computing and Artificial Intelligence)

28 pages, 615 KiB  
Review
Trustworthy AI: Securing Sensitive Data in Large Language Models
by Georgios Feretzakis and Vassilios S. Verykios
AI 2024, 5(4), 2773-2800; https://doi.org/10.3390/ai5040134 - 6 Dec 2024
Cited by 10 | Viewed by 5419
Abstract
Large language models (LLMs) have transformed Natural Language Processing (NLP) by enabling robust text generation and understanding. However, their deployment in sensitive domains like healthcare, finance, and legal services raises critical concerns about privacy and data security. This paper proposes a comprehensive framework for embedding trust mechanisms into LLMs to dynamically control the disclosure of sensitive information. The framework integrates three core components: User Trust Profiling, Information Sensitivity Detection, and Adaptive Output Control. By leveraging techniques such as Role-Based Access Control (RBAC), Attribute-Based Access Control (ABAC), Named Entity Recognition (NER), contextual analysis, and privacy-preserving methods like differential privacy, the system ensures that sensitive information is disclosed appropriately based on the user’s trust level. By focusing on balancing data utility and privacy, the proposed solution offers a novel approach to securely deploying LLMs in high-risk environments. Future work will focus on testing this framework across various domains to evaluate its effectiveness in managing sensitive data while maintaining system efficiency.
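The interplay of the three components might be sketched as follows. Everything here is an illustrative assumption: the `adaptive_output` helper, the numeric sensitivity levels, and the regex-based detection stand in for the paper's NER, trust-profiling, and access-control machinery, which this sketch does not reproduce.

```python
import re

# Hypothetical sensitivity levels per entity type (higher = more sensitive).
SENSITIVITY = {"ssn": 3, "email": 2}
# Toy stand-in for Information Sensitivity Detection (the paper uses NER).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"),
}


def adaptive_output(text, user_trust):
    """Mask any detected entity whose sensitivity exceeds the user's trust level."""
    for label, pattern in PATTERNS.items():
        if user_trust < SENSITIVITY[label]:
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text


msg = "Contact jane@example.com, SSN 123-45-6789."
print(adaptive_output(msg, user_trust=2))  # email kept, SSN masked
```

A fully trusted user (`user_trust=3` here) would see the text unmodified, while lower trust levels see progressively more redaction, which is the disclosure gradient the framework aims for.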

18 pages, 297 KiB  
Article
AI Accountability in Judicial Proceedings: An Actor–Network Approach
by Francesco Contini, Elena Alina Ontanu and Marco Velicogna
Laws 2024, 13(6), 71; https://doi.org/10.3390/laws13060071 - 23 Nov 2024
Viewed by 2413
Abstract
This paper analyzes the impact of AI systems in the judicial domain, adopting an actor–network theory (ANT) framework and focusing on the accountability issues that emerge when such technologies are introduced. Considering three types of AI applications used by judges, the paper explores how introducing non-accountable artifacts into justice systems influences the actor–network configuration and the distribution of accountability between humans and technology. The analysis discusses the actor–network reconfiguration that emerges when speech-to-text, legal analytics, and predictive justice technologies are introduced into pre-existing settings, and maps the resulting changes in agency and accountability between judges and AI applications. The findings are assessed against the EU legal framework and the EU AI Act to check the fit of the new technological systems with justice system requirements. They reveal a paradox: non-accountable AI can be used without endangering fundamental judicial values when judges can control the system’s outputs by evaluating their correspondence with the inputs. When this requirement is not met, the remedies provided by the EU AI Act fall short, whether in cost or in organizational and technical complexity, and the judge becomes the sole subject accountable for the use and outcome of a non-accountable system. This paper suggests that this occurs regardless of whether the technology is AI-based. The concrete risk emerging from these findings is that such technological innovations can exert undue influence on judicial decision making and endanger the fair-trial principle. Full article
18 pages, 4421 KiB  
Article
Assessing Scientific Text Similarity: A Novel Approach Utilizing Non-Negative Matrix Factorization and Bidirectional Encoder Representations from Transformer
by Zhixuan Jia, Wenfang Tian, Wang Li, Kai Song, Fuxin Wang and Congjing Ran
Mathematics 2024, 12(21), 3328; https://doi.org/10.3390/math12213328 - 23 Oct 2024
Viewed by 1184
Abstract
Patents are a vital component of scientific text, and escalating competition has generated substantial demand for patent analysis in areas such as company strategy and legal services, necessitating fast, accurate, and easily applicable similarity estimators. Applying natural language processing (NLP) to patent content, including titles and abstracts, is an effective way to estimate similarity. However, traditional NLP approaches have disadvantages, such as the need for large amounts of labeled data and the poor explainability of deep-learning model internals, problems exacerbated by the highly compressed nature of patent text. Moreover, most knowledge-based deep learning models require a large volume of additional analysis results as training variables for similarity estimation, and these are limited because the analysis requires human participation. Addressing these challenges, this research introduces a novel estimator that enhances the transparency of similarity estimation. The approach integrates a patent’s content with its international patent classification (IPC), leveraging bidirectional encoder representations from transformers (BERT) and non-negative matrix factorization (NMF). By integrating these techniques, we aim to improve the transparency of knowledge discovery across IPC dimensions and incorporate more background knowledge into contextual similarity estimation. The experimental results demonstrate that the model is reliable, explainable, highly accurate, and practically usable. Full article
(This article belongs to the Special Issue Probability, Stochastic Processes and Machine Learning)
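As a rough illustration of the NMF side of such an estimator (the toy matrix, rank, and function names below are illustrative, not taken from the paper), a small multiplicative-update NMF turns a term-frequency matrix into interpretable topic mixtures; in the full model, these mixtures would be fused with BERT embeddings and IPC context before comparing patents:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=300):
    """Tiny multiplicative-update NMF: V ~ W @ H with non-negative factors."""
    W = rng.random((V.shape[0], k)) + 1e-3
    H = rng.random((k, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Toy term-frequency matrix: rows = patent abstracts, columns = vocabulary terms.
# Patents 0 and 1 share vocabulary; patent 2 uses a disjoint set of terms.
V = np.array([[3., 0., 1., 0.],
              [2., 0., 2., 0.],
              [0., 4., 0., 3.]])

W, _ = nmf(V, k=2)                          # W[i] = topic mixture of patent i
topics = W / (W.sum(axis=1, keepdims=True) + 1e-9)

# In the full model, these interpretable topic mixtures would be concatenated
# with dense BERT sentence embeddings (and IPC-code context) before scoring.
print(cosine(topics[0], topics[1]))  # patents sharing vocabulary: high
print(cosine(topics[0], topics[2]))  # disjoint vocabulary: low
```

The topic mixtures give the estimator its explainable component: each dimension of `topics[i]` corresponds to a nameable term cluster, unlike the opaque coordinates of a raw BERT embedding.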
