Using AI and NLP for Tacit Knowledge Conversion in Knowledge Management Systems: A Comparative Analysis
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
In this 20-page submission technologies-3405984, Seghroucheni et al. attempted to conduct a comparative analysis of using AI and NLP for tacit knowledge conversion in knowledge management systems.
W1. The current submission appears to be closely related to Ref. [1]. However, the authors failed to clearly indicate their relationship.
W2. If the current submission is an extension of Ref. [1], it is unclear what the new contributions are.
W3. The submission is poorly organised and poorly written. For example, Section 2.1 is simply a long list of bullet points.
W4. It is also unclear what the differences are among the bullet points, check marks, and arrows.
W5. Where is Figure 3?
W6. Most parts of Section 3.1 (especially, lines 453-481) are uninterpretable.
W7. The submission also lacks exhaustive comparisons.
The submission in its current form suffers from several serious problems: poor organisation, poor writing (with many bullet points), and a lack of comparative analysis. Hence, rejection without resubmission is suggested.
Author Response
Comment 1. The current submission appears to be closely related to Ref. [1]. However, the authors failed to clearly indicate their relationship.
Response 1: Thank you for pointing this out. In the revised manuscript, we have explicitly clarified the relationship between this work and Ref. [1]. Ref. [1] is a systematic review that comprehensively examines methodologies for tacit knowledge conversion. Building on that foundation, this manuscript extends the discussion by applying advanced technologies, specifically Artificial Intelligence (AI) and Natural Language Processing (NLP), to enhance the tacit knowledge conversion process.
Comment 2. If the current submission is an extension of Ref. [1], it is unclear what the new contributions are.
Response 2: To address this concern, we have added a paragraph to the Introduction that clarifies the relationship between the current work and Ref. [1].
Additionally, we have elaborated on the specific new contribution of this manuscript: the integration of modern NLP techniques, including transformer-based models, for improved tacit knowledge extraction and conversion.
Comments 3 & 4. The submission is poorly organized and poorly written. For example, Section 2.1 is simply a long list of bullet points. It is also unclear what the differences are among the bullet points, check marks, and arrows.
Responses 3 & 4: We agree that Section 2.1 was overly simplified. We have reorganized the content by converting the bullet points into well-structured paragraphs, with detailed explanations and examples to ensure clarity and flow.
Comment 5. Where is Figure 3?
Response 5: We regret the oversight. Figure 3 was inadvertently omitted in the submission process. It has now been included in the revised manuscript, and we have ensured its proper placement in Section 3.
Comment 6. Most parts of Section 3.1 (especially lines 453–481) are uninterpretable.
Response 6: We appreciate your feedback regarding the clarity of Section 3.1. To address this issue, we have revised and clarified the content, particularly focusing on lines 453–481. Specifically, we have provided an interpretation of the SBERT (Sentence-BERT) architecture, explaining its components, functionality, and role in the tacit knowledge conversion process.
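For illustration, a minimal sketch of the SBERT components described in that section is given below, built with the open-source sentence-transformers library: a transformer encoder followed by a mean-pooling layer produces one fixed-size vector per sentence, and two sentence vectors can then be compared with cosine similarity. The bert-base-uncased backbone and the example sentences are illustrative assumptions, not the exact configuration used in the manuscript.
```python
# Sketch of the SBERT bi-encoder architecture (illustrative; assumes the
# sentence-transformers library and a bert-base-uncased backbone).
from sentence_transformers import SentenceTransformer, models, util

# Component 1: a pre-trained transformer that produces contextual token embeddings.
word_encoder = models.Transformer("bert-base-uncased", max_seq_length=128)

# Component 2: mean pooling over token embeddings yields a fixed-size sentence vector.
pooling = models.Pooling(word_encoder.get_word_embedding_dimension(), pooling_mode="mean")

sbert = SentenceTransformer(modules=[word_encoder, pooling])

# Role in tacit knowledge conversion: paraphrased expert advice maps to nearby vectors.
a = sbert.encode("Preheat the extruder before loading a new filament spool.",
                 convert_to_tensor=True)
b = sbert.encode("Bring the print head up to temperature before inserting fresh filament.",
                 convert_to_tensor=True)
print(f"cosine similarity: {util.cos_sim(a, b).item():.3f}")
```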
Reviewer 2 Report
Comments and Suggestions for Authors
I would like to express my gratitude to the editorial team for entrusting me with the review of this manuscript. It is a privilege to evaluate research that delves into the conversion of tacit knowledge using AI and NLP in Knowledge Management Systems (KMS). The topic is both timely and significant, addressing a crucial challenge in modern organizational knowledge management. The following are my concerns about the paper:
· While the abstract provides a reasonable overview, it lacks clarity in explaining the practical implications of the findings. Additionally, specific results or key takeaways are missing.
· The introduction establishes the importance of tacit knowledge conversion and contextualizes the role of AI and NLP effectively, but it does not clearly define the research gap being addressed.
· The paper proposes a structured NLP pipeline and explains the components systematically, but the authors are required to:
i. Include more details on the datasets or text sources used for testing the pipeline.
ii. Provide a justification for the choice of specific NLP techniques over others.
iii. Address scalability and real-world applicability challenges of the proposed methods.
· The comparative analysis of NLP algorithms is thorough and well-categorized, but some concerns need to be addressed: strengthen the explanation of how the selected algorithms were benchmarked or validated, include quantitative performance metrics (e.g., accuracy, runtime) for better comparison, and highlight specific examples or use cases where the proposed pipeline was tested.
Discuss the practical relevance of the findings for industries or organizations looking to enhance their KMS. Acknowledge challenges such as computational costs, dataset quality, and the subjective nature of tacit knowledge. Suggest exploring the integration of multimodal data (e.g., audio, visual) and extending the pipeline for cross-domain knowledge conversion.
Author Response
Comment 1: Abstract lacks clarity in explaining the practical implications and specific results.
Response 1: We have revised the abstract to include specific results and key takeaways from our findings. Additionally, we have highlighted the practical implications of the proposed NLP pipeline, emphasizing its relevance to industries and organizations seeking to optimize their Knowledge Management Systems (KMS).
Comment 2: Introduction does not clearly define the research gap.
Response 2: In the revised Introduction, we have explicitly defined the research gap being addressed, focusing on the limitations of existing tacit knowledge conversion methodologies and the need for AI and NLP integration to overcome these challenges.
Comment 3: Details required for the structured NLP pipeline:
Response 3: The proposed NLP pipeline is designed to be tested on diverse text sources, including general-purpose corpora, domain-specific texts, and user-generated content, to ensure adaptability across domains. Techniques like BERT and BART are chosen for their advanced contextual understanding and summarization capabilities. Scalability is addressed through distributed computing and incremental learning, enabling efficient processing of large datasets and domain-specific fine-tuning. The pipeline’s modular design ensures applicability across industries, with future evaluations planned to benchmark performance and assess real-world usability.
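To make the modular design concrete, the sketch below chains a BERT-based named entity recognition stage with a BART summarization stage, assuming the Hugging Face transformers library; the checkpoints (dslim/bert-base-NER, facebook/bart-large-cnn) and the interview excerpt are illustrative assumptions rather than the pipeline's exact configuration.
```python
# Illustrative two-stage pipeline sketch (assumes the Hugging Face transformers
# library; model checkpoints and the sample text are illustrative only).
from transformers import pipeline

# Stage 1: named entity recognition with a BERT-based model.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

# Stage 2: abstractive summarization with BART.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def convert_tacit_text(raw_text: str) -> dict:
    """Turn a raw expert narrative into structured, explicit knowledge artifacts."""
    entities = ner(raw_text)
    summary = summarizer(raw_text, max_length=60, min_length=15, do_sample=False)
    return {
        "entities": [(e["word"], e["entity_group"]) for e in entities],
        "summary": summary[0]["summary_text"],
    }

# Hypothetical excerpt from an expert interview (invented for illustration).
interview_excerpt = (
    "When the coating line starts to drift, our senior operator lowers the oven "
    "temperature by two degrees and slows the conveyor before the quality "
    "sensors ever raise an alarm."
)
print(convert_tacit_text(interview_excerpt))
```
Because each stage is an independent module, additional components (key phrase extraction, sentiment analysis, and so on) can be added or swapped without changing the rest of the pipeline.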
Comment 4: Comparative Analysis of NLP Algorithms:
Response 4: The comparative analysis in this study is based on general observations and insights from the literature rather than direct experimental testing. The comparisons provide qualitative evaluations to guide the selection of the most suitable algorithms for various NLP tasks. By focusing on criteria such as contextual understanding, scalability, and adaptability, this analysis aims to identify algorithms that ensure optimal performance in real-world applications. These insights serve as a foundation for future experimental validation and practical implementations.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
In this 21-page revision technologies-3405984R1, Seghroucheni et al. attempted to conduct a comparative analysis of using AI and NLP for tacit knowledge conversion in knowledge management systems.
S1. The authors addressed most of the reviewer's previous concerns.
S2. Glad that the authors tracked changes.
S3. The authors provided comparisons amongst a few techniques for text cleaning, tokenization, normalization, stopword removal, named entity recognition, part-of-speech tagging, co-reference resolution, key phrase extraction, sentiment analysis, text summarization, and semantic analysis.
W1. However, the comparisons were shallow.
W2. Terms like "fast" or "slow", "high" or "low" can be subjective. More concrete or quantifiable measures would be better.
W3. In terms of organization, the authors only numbered the top two levels of sections; beyond that, the deeper (sub)sections can be confusing. Some section titles started with right arrows, some with check marks, and some with upper-right arrows.
W4. What is the purpose of Section 3.1 on SBERT?
Author Response
Comments 1 and 2: The comparisons were shallow. Terms like "fast" or "slow", "high" or "low" can be subjective. More concrete or quantifiable measures would be better.
Responses 1 and 2: Thank you for your feedback. In response to your comment about the comparisons being shallow, I have restructured the tables to present the comparison more comprehensively. Rather than relying solely on general observations, I have included more detailed literature references to support the analysis. These references enhance the depth of the comparative evaluation, providing insights into the performance and characteristics of each algorithm in various contexts. This allows for a better understanding of their suitability for different NLP tasks.
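As an illustration of how subjective labels such as "fast" or "slow" could later be replaced with measured values in the planned experimental validation, the sketch below times the fast (Rust-backed) and slow (pure-Python) BERT tokenizers on identical inputs, assuming the Hugging Face transformers library; it is an illustrative harness, not a result reported in the manuscript.
```python
# Illustrative runtime-benchmark harness (assumes the Hugging Face transformers
# library; the fast vs. slow BERT tokenizer comparison is only an example).
import time
from transformers import AutoTokenizer

texts = ["The senior technician adjusts the valve by ear before each shift."] * 2_000

fast_tok = AutoTokenizer.from_pretrained("bert-base-uncased")                  # Rust-backed
slow_tok = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)  # pure Python

def runtime(tokenizer) -> float:
    """Seconds needed to tokenize the whole batch."""
    start = time.perf_counter()
    tokenizer(texts, padding=False, truncation=True)
    return time.perf_counter() - start

print(f"fast tokenizer: {runtime(fast_tok):.2f} s for {len(texts)} sentences")
print(f"slow tokenizer: {runtime(slow_tok):.2f} s for {len(texts)} sentences")
```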
Comment 3: In terms of organization, the authors only numbered the top two levels of sections; beyond that, the deeper (sub)sections can be confusing. Some section titles started with right arrows, some with check marks, and some with upper-right arrows.
Response 3: Thank you for your observation regarding the organization of the sections. I acknowledge that the use of different symbols (right arrows, check marks, and upper-right arrows) may have led to inconsistencies in the document's structure. To address this, I have revised the section titles to ensure uniformity in their formatting, and I have provided a clear hierarchical numbering system for all sections and subsections. This restructuring aims to improve the document’s clarity and readability, making it easier for readers to follow the flow of the content.
Comment 4: What is the purpose of Section 3.1 on SBERT?
Response 4: The purpose of Section 3.1 on SBERT is to highlight its significance as a crucial algorithm for semantic analysis, particularly in the context of tacit knowledge conversion. I included this section because SBERT is especially well-suited for tasks that require an in-depth understanding of text at the sentence or paragraph level. In the case of tacit knowledge conversion, the ability to accurately capture the meaning of sentences is key to transforming implicit knowledge into explicit, shareable information.
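As a brief illustration of this sentence-level use case, the sketch below retrieves the knowledge-base entry most semantically similar to an employee's question, assuming the sentence-transformers library and the public all-MiniLM-L6-v2 checkpoint; the knowledge-base entries and the query are invented examples.
```python
# Semantic search over a small explicit-knowledge base (illustrative; assumes
# the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Explicit knowledge distilled from expert interviews (invented examples).
knowledge_base = [
    "Tighten the spindle bearings only after the machine has warmed up for ten minutes.",
    "Escalate any recurring sensor fault to the maintenance lead within one shift.",
    "Archive project retrospectives in the shared KMS folder within a week of delivery.",
]
kb_embeddings = model.encode(knowledge_base, convert_to_tensor=True)

query = "When is the right moment to adjust the spindle bearings?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Return the single most semantically similar entry with its similarity score.
best = util.semantic_search(query_embedding, kb_embeddings, top_k=1)[0][0]
print(knowledge_base[best["corpus_id"]], f"(score={best['score']:.3f})")
```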
Reviewer 2 Report
Comments and Suggestions for Authors
The authors have done very well in the revised manuscript.
Author Response
Thank you for your positive feedback. I greatly appreciate your recognition of the improvements made in the revised manuscript. Your constructive comments have been instrumental in refining the content, and I am glad to hear that the revisions have enhanced the clarity and quality of the work.
Round 3
Reviewer 1 Report
Comments and Suggestions for Authors
In this 24-page change-tracked revision technologies-3405984R2, Seghroucheni et al. attempted to conduct a comparative analysis of using AI and NLP for tacit knowledge conversion in knowledge management systems.
S1. The authors addressed most of the reviewer's concerns across the two previous rounds.
S2. Glad that the authors tracked changes.
S3. Glad that the authors reorganized some paragraphs to make the final revision more readable.
S4. The authors provided comparisons amongst a few techniques for text cleaning, tokenization, normalization, stopword removal, named entity recognition, part-of-speech tagging, co-reference resolution, key phrase extraction, sentiment analysis, text summarization, and semantic analysis.
W1. Although the comparisons have improved, they were still shallow.
W2. For consistency, please remove the upper-right arrow from line 609.
W3. In terms of organization, it can be further improved by using subsubsections, which is allowed/supported by the journal. It is unclear why the authors refused to use subsubsections.
Author Response
Comment 1. Although the comparisons have improved, they were still shallow.
Response 1: We appreciate your feedback regarding the depth of our comparisons. In response, we have restructured the tables to present the comparison more comprehensively. Rather than relying solely on general observations, we have integrated more detailed literature references to support the analysis. These references provide deeper insights into the performance and characteristics of each algorithm in various contexts, allowing for a more thorough understanding of their suitability for different NLP tasks. We hope these enhancements address your concerns and improve the clarity and depth of our comparative evaluation.
Comment 2. For consistency, please remove the upper-right arrow from line 609.
Response 2: Thank you for pointing this out. We have removed the upper-right arrow from line 609 to ensure consistency throughout the manuscript.
Comment 3. In terms of organization, it can be further improved by using subsubsections, which is allowed/supported by the journal. It is unclear why the authors refused to use subsubsections.
Response 3: We appreciate the suggestion regarding the use of subsubsections. To improve the manuscript’s organization, we have carefully reorganized the content while maintaining clarity and coherence. The revised structure enhances readability and logical flow.