Enhancing E-Recruitment Recommendations Through Text Summarization Techniques
Abstract
1. Introduction
1.1. Recommendation Models
- The type of recommendation approach to be used: collaborative filtering, content-based filtering, or a hybrid approach.
- The type of information to be used: job descriptions, resumes, social media data, or a combination of these.
- The domain that the recommendations will target.
- The evaluation metrics to be used: accuracy, precision, recall, or a combination of these.
- The target audience: job seekers, employers, or both.
- The budget and resources available.
1.2. Transformer Pretrained Large Language Models (LLMs)
- Bidirectional and Auto-Regressive Transformers (BART) [abstractive]: 1.5 GB;
- Text-to-Text Transfer Transformer (T5) [abstractive]: 850 MB;
- Bidirectional Encoder Representations from Transformers (BERT) [extractive]: 1.2 GB;
- Pretraining with Extracted Gap Sentences for Abstractive Summarization (Pegasus) [abstractive]: 2.1 GB.
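For illustration, the following is a minimal sketch of how these four summarizers can be loaded and applied with the Hugging Face transformers library; the checkpoint names and the extractive BERT wrapper are assumptions for demonstration, not the authors' reported configuration.

```python
from transformers import pipeline

# Hypothetical example description; in practice this would be a row of the dataset.
long_description = (
    "We are hiring a Data Engineer to design, build, and maintain scalable data "
    "pipelines on AWS, collaborate with analytics teams, and optimize SQL workloads."
)

# Abstractive summarizers (checkpoint names are assumptions, not the paper's configuration).
abstractive = {
    "bart": pipeline("summarization", model="facebook/bart-large-cnn"),
    "t5": pipeline("summarization", model="t5-base"),
    "pegasus": pipeline("summarization", model="google/pegasus-xsum"),
}
summaries = {
    name: p(long_description, max_length=64, min_length=16, truncation=True)[0]["summary_text"]
    for name, p in abstractive.items()
}

# Extractive BERT summarization via the bert-extractive-summarizer package (an assumed wrapper).
from summarizer import Summarizer
summaries["bert"] = Summarizer()(long_description, num_sentences=2)
```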
1.3. Main Contribution
- Filling the existing research gap in integrating job recommendation systems with text summarization for efficient processing of job descriptions; transformer-based models such as BERT or GPT for state-of-the-art semantic understanding; user behavior and preferences for personalized recommendations; and standardized evaluation metrics and scalability for handling large datasets in real time.
- Constructing a content-based job recommender system with integrated text summarization.
- Employing the LinkedIn Job Postings dataset, which was posted on Kaggle in 2023.
- Using pretrained transformer large language models with the BART, T5, BERT, and Pegasus architectures for text summarization.
- Evaluating text summarization techniques using ROUGE-1, ROUGE-2, and ROUGE-L.
- Generating the top-N recommendation results using Term Frequency–Inverse Document Frequency (TF-IDF) vectorization of the records and cosine similarity.
- Answering the following research question: “Can the use of transformer-based text summarization techniques improve the performance of content-based job recommender systems by reducing noise and enhancing semantic relevance in job descriptions?”
2. Related Work
Job Recommender Systems Comparative Analysis
# | Study | Method | Approach | Limitation |
---|---|---|---|---|
1 | [9] | LLMs, Generative Adversarial Networks (GANs) | Job recommendation using GANs and LLMs | Lack of interpretability in the GAN model |
2 | [10] | Survey of AI techniques | Overview of AI-based job recommendation techniques | Lack of in-depth analysis of specific models and performance |
3 | [11] | LLMs, Explainable AI (XAI) | Job recommendations using explainable LLMs | Limited scalability for large datasets |
4 | [12] | Machine learning algorithms | General machine learning models for job recommendation | Lack of personalization and advanced feature integration |
5 | [13] | Semantic relational recommendation | Relation-based job recommendation using semantic matching | Poor generalization across industries or job types |
6 | [14] | NLP, bidirectional matching | Matching job seekers and recruiters using NLP | Limited user feedback integration and personal preferences |
7 | [15] | Feature fusion, representation learning | Fusion of features to improve job recommendation quality | Insufficient exploration of real-time performance |
8 | [16] | Multi-Criteria Decision Making (MCDM) | Ranking candidates using MCDM methods | Limited to certain industries, lack of dynamic data updates |
9 | [17] | Bidirectional Long Short-Term Memory (BiLSTM), convolutional neural network (CNN) | CNN and BiLSTM for HR recommendations | Lack of integration with external data sources |
10 | [18] | K-Means Clustering | Clustering-based recommendation system | Limited to static clustering, lacks dynamic adjustment of clusters |
11 | [19] | Knowledge graph | Domain-specific knowledge graph for staffing recommendations | Knowledge graph construction can be resource intensive |
12 | [20] | CNN, NLP | CNN-based career recommendation for Pakistani students | Limited to a single demographic (Pakistani engineering students) |
13 | [21] | Automation techniques | Automation in the HR recruiting process | Lack of deep learning integration |
14 | [22] | Deep learning, personalized attention | Personalized prediction model for job applications | Not addressing job context diversity |
15 | [23] | Conversational AI | Job recommendation through conversational agents | Limited to specific public sector use case |
16 | [24] | Content-based filtering | Job recommendations based on content similarity | Limited personalization and context understanding |
17 | [25] | Bidirectional Long Short-Term Memory (BiLSTM), attention mechanism | Job recommendation using LSTM and attention mechanism | Lack of interpretability and real-time feedback |
18 | [26] | Machine learning, data mining, RESTful API | Job recommendation through machine learning and RESTful API | Limited integration with external platforms |
19 | [27] | Collaborative filtering, Bayesian ranking | Collaborative filtering with Bayesian ranking | Limited scalability and not enough real-time data handling |
20 | [28] | Micro-service architecture | Human capital management recommendation using micro-services | Limited to specific platforms and data sources |
21 | [29] | Ensemble learning, gradient boosting | Ensemble learning for hybrid recommendation system | Lack of real-time adaptation and dynamic learning |
3. Job Recommendation Systems Research Gap
- Text Summarization: Although several papers recommend jobs based on an analysis of job descriptions, none claimed or even mentioned any form of output-modifying text summarization as a preprocessing step. Text summarization can remove noise while preserving the context of a job description, allowing jobs to be recommended from their key points rather than from the full text. Processing these shorter summaries is faster, enabling more recommendations, and focusing the recommender on the most salient features of a job description may also produce more relevant job recommendations.
- Adoption of Transformers/Pretrained Models: Some research papers used deep learning models such as CNNs, BiLSTMs, and GANs. However, they did not integrate modern transformer-based models like BERT or GPT, or any pretrained large language models. Transformers have transformed NLP tasks such as summarization and generation, yet their potential remains largely untapped in recommendation systems.
- Use of User Behaviors and Preferences: Most research papers focused on either job descriptions or resumes; only a small percentage of studies included user behaviors, user preferences, or historical data in their models. Incorporating user behaviors, such as past job applications, would allow greater personalization and, in turn, more relevant and accurate recommendations.
- Evaluation Metrics: One of the significant findings was the lack of detailed evaluation metrics in a large number of research papers. Most studies did not clarify how the recommendations were evaluated, whether it was through accuracy, precision, recall, or other rank-aware metrics. Clear evaluation metrics are vital for comparing the effectiveness of different techniques and providing evidence for the success of the proposed model.
- Scalability of Recommendation Systems: Very few studies addressed how recommendation systems scale to large datasets or real-time recommendation environments, yet scalability is vital when deploying recommendation systems at a large scale.
- Explainability of Recommendations: The scarcity of studies addressing Explainable AI (XAI) is clearly noticeable; throughout our survey, only one work, JobRecoGPT [11], embraced XAI. Understanding why a job is recommended or why a candidate is ranked a certain way is crucial for trust and transparency in recommendation systems, so explainability needs to be integrated far more widely to make these systems more trustworthy and reliable.
Potential Research Directions
4. Methodology
4.1. Dataset
- job_postings.csv;
- job_details: benefits.csv, job_industries.csv, job_skills.csv;
- company_details: companies.csv, company_industries.csv, company_specialities.csv, employee_counts.csv.
- Job_title;
- Job_Description;
- BART_Description;
- T5_Description;
- BERT_Description;
- Pegasus_Description.
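As a rough illustration of how such a dataset can be assembled before summarization, the sketch below loads and merges the raw CSV files listed above with pandas; the join keys and raw column names are assumptions, since the paper does not state them.

```python
import pandas as pd

# Paths follow the folder layout listed above; 'job_id' and 'company_id' are
# assumed join keys in the Kaggle release, not guaranteed by the paper.
postings = pd.read_csv("job_postings.csv")
skills = pd.read_csv("job_details/job_skills.csv")
companies = pd.read_csv("company_details/companies.csv")

merged = (
    postings
    .merge(skills, on="job_id", how="left")
    .merge(companies, on="company_id", how="left", suffixes=("", "_company"))
)

# Basic cleaning along the lines of Step 2 of the proposed model (column names assumed).
merged = merged.drop_duplicates(subset=["title", "description"]).fillna({"description": ""})
```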
4.2. Proposed Model
- Step 1: Data Loading: The process begins by loading the LinkedIn Job Postings dataset obtained from Kaggle. This dataset contains job attributes such as titles, descriptions, salaries, and company details.
- Step 2: Data Preprocessing: The data undergo a preprocessing step to understand their context and the relationships between different entities. This step involves merging appropriate tables to combine useful fields, removing duplicates if the same job is posted by different companies, and resolving any missing data by appropriately filling in null fields. This step makes the dataset clean and ready for analysis.
- Step 3: Text Summarization: In the “Text Summarization” module, four pretrained transformer models (BART, T5, BERT, and Pegasus) are used to generate summaries of job descriptions. Each summarization technique adds a separate column, so after summarization a new dataset containing these additional summary columns is produced.
- Step 4: Feedback: The generated summaries are then assessed with standard metrics (ROUGE-1, ROUGE-2, ROUGE-L, and BLEU), together with inference time, as indicators of text summarization effectiveness. This raises additional research questions: Will the transformer with the best summarization scores give the best recommendation results? Are the two directly proportional? Can summarization quality serve as an early indicator of the upcoming recommendations?
- Step 5: Recommendation Generation: This step calculates similarities to generate a list of top-N relevant job recommendations. It receives preprocessed data from two modules: the “Without Summarization” module and the “Text Summarization” module. In the “Without Summarization” module, the merged job details are directly vectorized using TF-IDF, and similarities are calculated with cosine similarity. In the “Text Summarization” module, the new dataset containing the summarized job descriptions from all the pretrained models undergoes merging before vectorization. The job titles and job descriptions are first combined for vectorization. Then, the requested job title that needs a recommendation is searched for in the dataset. Following that, cosine similarity is calculated between the vectorized requested job title and the complete jobs vector containing all job titles and descriptions. Finally, the top-N similar job titles are generated.
- Step 6: Rank-Based Recommendation Evaluation: Two types of evaluation metrics are used to assess the effectiveness of the job recommendations. Rank-unaware metrics, such as precision, recall, and F1-score, evaluate the quality of recommendations without considering their ranking. Rank-aware metrics, such as the Mean Reciprocal Rank (MRR), Mean Average Precision (MAP), Root Mean Square Error (RMSE), and Normalized Discounted Cumulative Gain (NDCG), evaluate the ranking quality of the recommendations, rewarding lists in which the more relevant jobs appear higher up.
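To make the rank-aware metrics of Step 6 concrete, the following is a minimal, self-contained sketch (not the authors' evaluation code) that computes MRR, average precision, and NDCG from a binary relevance list for a single query, where 1 marks a relevant recommendation and 0 an irrelevant one.

```python
import math

def mrr(relevance):
    """Reciprocal rank of the first relevant item (0 if none is relevant)."""
    for i, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / i
    return 0.0

def average_precision(relevance):
    """Mean of precision@k taken at the positions of relevant items."""
    hits, score = 0, 0.0
    for i, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            score += hits / i
    return score / hits if hits else 0.0

def ndcg(relevance):
    """Normalized discounted cumulative gain with binary gains."""
    dcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(relevance, start=1))
    ideal = sorted(relevance, reverse=True)
    idcg = sum(rel / math.log2(i + 1) for i, rel in enumerate(ideal, start=1))
    return dcg / idcg if idcg else 0.0

# Example: binary relevance pattern of one top-10 list (relevant at ranks 1-5 and 7-9).
print(mrr([1, 1, 1, 1, 1, 0, 1, 1, 1, 0]))  # reciprocal rank for this single list
```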
4.3. Content-Based Recommendation System Similarity Measures
4.4. Cosine Similarity
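For reference, cosine similarity between two document vectors A and B is conventionally defined as follows; this standard formulation is presumably the one applied to the TF-IDF vectors here.

```latex
\operatorname{sim}(\mathbf{A}, \mathbf{B}) = \cos\theta
  = \frac{\mathbf{A}\cdot\mathbf{B}}{\lVert\mathbf{A}\rVert\,\lVert\mathbf{B}\rVert}
  = \frac{\sum_{i=1}^{n} A_i B_i}{\sqrt{\sum_{i=1}^{n} A_i^{2}}\;\sqrt{\sum_{i=1}^{n} B_i^{2}}}
```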
Benefits of Cosine Similarity
4.5. Term Frequency–Inverse Document Frequency (TF-IDF)
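For reference, one common formulation of TF-IDF for a term t in a job record d within a corpus D of N records is given below; implementations differ in smoothing (scikit-learn, for instance, adds constants inside the logarithm).

```latex
\mathrm{tfidf}(t, d, D) = \mathrm{tf}(t, d)\times \mathrm{idf}(t, D),
\qquad
\mathrm{idf}(t, D) = \log\frac{N}{\lvert\{d \in D : t \in d\}\rvert}
```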
Benefits of TF-IDF
4.6. Combining TF-IDF with Cosine Similarity
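A minimal sketch of this combination, assuming scikit-learn and the column names listed in Section 4.1 (the function and variable names are illustrative, not the authors' code), is shown below; it vectorizes the combined title and description text, then ranks jobs by cosine similarity to a requested title.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def top_n_jobs(df: pd.DataFrame, text_col: str, query_title: str, n: int = 10) -> pd.DataFrame:
    """Rank jobs by cosine similarity between the query title and TF-IDF job vectors."""
    corpus = (df["Job_title"] + " " + df[text_col]).fillna("")
    vectorizer = TfidfVectorizer(stop_words="english")
    job_matrix = vectorizer.fit_transform(corpus)     # one TF-IDF row per job posting
    query_vec = vectorizer.transform([query_title])   # the requested job title
    scores = cosine_similarity(query_vec, job_matrix).ravel()
    return df.assign(similarity=scores).nlargest(n, "similarity")[["Job_title", "similarity"]]

# e.g. recommendations from the BERT-summarized descriptions (illustrative call):
# top_n_jobs(jobs_df, "BERT_Description", "Data Engineer", n=10)
```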
4.7. Limitations of TF-IDF with Cosine Similarity
5. Results and Discussion
- ROUGE-1: Measures the overlap of unigrams (single words) between the generated summary and the reference summary.
- ROUGE-2: Measures the overlap of bigrams (two consecutive words) between the generated summary and the reference summary.
- ROUGE-L: Measures the longest common subsequence between the generated summary and the reference summary.
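As an illustration, the sketch below scores a candidate summary against a reference with the rouge-score package; the package choice and the example sentences are assumptions, since the paper does not specify its ROUGE tooling.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "Design and maintain scalable data pipelines on AWS using Python and SQL."
candidate = "Build and maintain scalable AWS data pipelines with Python and SQL."

scores = scorer.score(reference, candidate)  # target first, prediction second
for name, result in scores.items():
    print(f"{name}: precision={result.precision:.3f} recall={result.recall:.3f} f1={result.fmeasure:.3f}")
```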
5.1. Text Summarization Evaluation
5.2. Recommendation Results Evaluation
5.3. Ranking Significance in Recommendation
5.4. Job Recommendation Experiment
- Where a job title is ranked: The closer a recommendation is to the top of the list, the more relevant it is.
- How many times a job title is recommended: A job title recommended multiple times can be considered more relevant. Irrelevant titles decrease the effectiveness of the recommendation system.
- Step 1: Define Ground Truth and Weighting Scheme
- Data Engineer;
- Senior Data Engineer;
- Software Data Engineer;
- Lead Data Engineer;
- Principal Data Engineer;
- Azure Data Engineer;
- AWS Data Engineer;
- Implementation Data Engineer;
- Data Engineer with Python AWS;
- Data Engineer/SQL Developer.
- Step 2: Assign Weights to Ground Truth Jobs
- Weight for Rank 1 (Data Engineer): 1/1 = 1.0;
- Weight for Rank 2 (Senior Data Engineer): 1/2 = 0.5;
- Weight for Rank 3 (Software Data Engineer): 1/3 = 0.33;
- Weight for Rank 4 (Lead Data Engineer): 1/4 = 0.25;
- Weight for Rank 5 (Principal Data Engineer): 1/5 = 0.20;
- Weight for Rank 6 (Azure Data Engineer): 1/6 = 0.17;
- Weight for Rank 7 (AWS Data Engineer): 1/7 = 0.14;
- Weight for Rank 8 (Implementation Data Engineer): 1/8 = 0.125;
- Weight for Rank 9 (Data Engineer with Python AWS): 1/9 = 0.11;
- Weight for Rank 10 (Data Engineer/SQL Developer): 1/10 = 0.10.
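A small worked sketch of this weighting scheme (a hypothetical helper, not taken from the paper): each ground-truth title receives weight 1/rank, and a recommended list can then be scored by summing the weights of the titles it retrieves.

```python
# Ground-truth titles from Step 1, in rank order; weight = 1 / rank.
ground_truth = [
    "Data Engineer", "Senior Data Engineer", "Software Data Engineer",
    "Lead Data Engineer", "Principal Data Engineer", "Azure Data Engineer",
    "AWS Data Engineer", "Implementation Data Engineer",
    "Data Engineer with Python AWS", "Data Engineer/SQL Developer",
]
weights = {title: 1.0 / rank for rank, title in enumerate(ground_truth, start=1)}

def weighted_score(recommended):
    """Sum of ground-truth weights over the recommended titles (0 for irrelevant titles)."""
    return sum(weights.get(title, 0.0) for title in recommended)
```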
- Step 3: Compare Each Approach to the Ground Truth
- Approach 1: TF-IDF
Rank | Job Title | Ground Truth Rank | Weight Assigned | Relevance |
---|---|---|---|---|
1 | Analytics Data Solutions Architect | Irrelevant | 0 | Irrelevant |
2 | Lead Data Engineer | 4 | 0.25 | Relevant |
3 | Senior Data Architect | Irrelevant | 0 | Irrelevant |
4 | IT Data Warehouse Analyst | Irrelevant | 0 | Irrelevant |
5 | Data Engineer | 1 | 1.0 | Relevant |
6 | Senior Manager | Irrelevant | 0 | Irrelevant |
7 | Senior Data Engineer | 2 | 0.5 | Relevant |
8 | Data Engineer | 1 | 1.0 | Relevant |
9 | Enterprise Data Management Admin | Irrelevant | 0 | Irrelevant |
10 | Lead Data Engineer | 4 | 0.25 | Relevant |
- Approach 2: BART
Rank | Job Title | Ground Truth Rank | Weight Assigned | Relevance |
---|---|---|---|---|
1 | Data Engineer IV | Irrelevant | 0 | Irrelevant |
2 | Senior Manager | Irrelevant | 0 | Irrelevant |
3 | Data Engineer | 1 | 1.0 | Relevant |
4 | Data Engineer | 1 | 1.0 | Relevant |
5 | Data Engineering Product Lead | Irrelevant | 0 | Irrelevant |
6 | Lead Data Engineer | 4 | 0.25 | Relevant |
7 | Database Engineer | Irrelevant | 0 | Irrelevant |
8 | Senior Data Engineer | 2 | 0.5 | Relevant |
9 | Database Management Analyst | Irrelevant | 0 | Irrelevant |
10 | Data Engineer | 1 | 1.0 | Relevant |
- Approach 3: T5
Rank | Job Title | Ground Truth Rank | Weight Assigned | Relevance |
---|---|---|---|---|
1 | Lead Data Architect | Irrelevant | 0 | Irrelevant |
2 | Senior Manager | Irrelevant | 0 | Irrelevant |
3 | Data Engineer | 1 | 1.0 | Relevant |
4 | Data Governance Specialist | Irrelevant | 0 | Irrelevant |
5 | Database Engineer | Irrelevant | 0 | Irrelevant |
6 | Data Analyst | Irrelevant | 0 | Irrelevant |
7 | Enterprise Data Architect | Irrelevant | 0 | Irrelevant |
8 | Data Analytics Solutions Engineer | Irrelevant | 0 | Irrelevant |
9 | Global Data Insights Analyst | Irrelevant | 0 | Irrelevant |
10 | Senior Data Engineer | 2 | 0.5 | Relevant |
- Approach 4: BERT
Rank | Job Title | Ground Truth Rank | Weight Assigned | Relevance |
---|---|---|---|---|
1 | Data Engineer | 1 | 1.0 | Relevant |
2 | Data Engineer | 1 | 1.0 | Relevant |
3 | Data Engineer | 1 | 1.0 | Relevant |
4 | Lead Data Engineer | 4 | 0.25 | Relevant |
5 | Lead Data Engineer | 4 | 0.25 | Relevant |
6 | Scala Developer | Irrelevant | 0 | Irrelevant |
7 | Data Engineer | 1 | 1.0 | Relevant |
8 | Data Engineer | 1 | 1.0 | Relevant |
9 | Senior Data Engineer | 2 | 0.5 | Relevant |
10 | Senior Information Technology Program Manager | Irrelevant | 0 | Irrelevant |
- Approach 5: Pegasus
Rank | Job Title | Ground Truth Rank | Weight Assigned | Relevance |
---|---|---|---|---|
1 | Data Engineer | 1 | 1.0 | Relevant |
2 | Data Analytics Engineer | Irrelevant | 0 | Irrelevant |
3 | Data Engineer | 1 | 1.0 | Relevant |
4 | Senior Data Engineer | 2 | 0.5 | Relevant |
5 | Sr. Data Engineer | 2 | 0.5 | Relevant |
6 | GCP Data lead/Architect | Irrelevant | 0 | Irrelevant |
7 | Lead Data Engineer | 4 | 0.25 | Relevant |
8 | Data Engineer | 1 | 1.0 | Relevant |
9 | Information Technology Infrastructure Engineer | Irrelevant | 0 | Irrelevant |
10 | Staff Data Engineer and Team Lead | Irrelevant | 0 | Irrelevant |
5.4.1. Rank-Unaware Evaluation
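A minimal sketch of these rank-unaware metrics for a single top-k list, assuming the ground-truth titles from Step 1 as the relevant set (illustrative helpers, not the paper's code):

```python
def precision_at_k(recommended, relevant, k=10):
    """Fraction of the top-k recommendations that appear in the ground truth."""
    top = recommended[:k]
    return sum(1 for title in top if title in relevant) / k

def recall_at_k(recommended, relevant, k=10):
    """Fraction of distinct ground-truth titles retrieved within the top k."""
    top = recommended[:k]
    return len({title for title in top if title in relevant}) / len(relevant)

def f1_at_k(recommended, relevant, k=10):
    """Harmonic mean of precision@k and recall@k."""
    p, r = precision_at_k(recommended, relevant, k), recall_at_k(recommended, relevant, k)
    return 2 * p * r / (p + r) if (p + r) else 0.0
```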
5.4.2. Rank-Aware Evaluation
5.5. Why Did BERT Perform Better Despite Its ROUGE Scores?
6. Limitations
7. Conclusions and Future Work
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Deldjoo, Y.; Schedl, M.; Cremonesi, P.; Pasi, G. Recommender Systems Leveraging Multimedia Content. ACM Comput. Surv. 2020, 53, 1–38.
- Kulkarni, S.; Rodd, S.F. Context Aware Recommendation Systems: A review of the state of the art techniques. Comput. Sci. Rev. 2020, 37, 100255.
- Shokeen, J.; Rana, C. A study on features of social recommender systems. Artif. Intell. Rev. 2019, 53, 965–988.
- Alhijawi, B.; Kilani, Y. A collaborative filtering recommender system using genetic algorithm. Inf. Process. Manag. 2020, 57, 102310.
- Çano, E.; Morisio, M. Hybrid recommender systems: A systematic literature review. Intell. Data Anal. 2017, 21, 1487–1524.
- Aggarwal, C.C. Recommender Systems; Springer: Cham, Switzerland, 2016; Volume 1.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2023, arXiv:1706.03762.
- Lalramhluna, R.; Dash, S.; Pakray, D. MizBERT: A Mizo BERT Model. ACM Trans. Asian Low-Resour. Lang. Inf. Process. 2024, 23, 1–14.
- Du, Y.; Luo, D.; Yan, R.; Wang, X.; Liu, H.; Zhu, H.; Song, Y.; Zhang, J. Enhancing Job Recommendation through LLM-Based Generative Adversarial Networks. Proc. AAAI Conf. Artif. Intell. 2024, 38, 8363–8371.
- Patil, A.; Suwalka, D.; Kumar, A.; Rai, G.; Saha, J. A Survey on Artificial Intelligence (AI) based Job Recommendation Systems. In Proceedings of the 2023 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS), Erode, India, 23–25 March 2023; IEEE: New York, NY, USA, 2023; pp. 730–737.
- Ghosh, P.; Sadaphal, V. JobRecoGPT—Explainable job recommendations using LLMs. arXiv 2023, arXiv:2309.11805.
- Gadegaonkar, S.; Lakhwani, D.; Marwaha, S.; Salunke, P.A. Job Recommendation System using Machine Learning. In Proceedings of the 2023 Third International Conference on Artificial Intelligence and Smart Energy (ICAIS), Coimbatore, India, 2–4 February 2023; IEEE: New York, NY, USA, 2023; pp. 596–603.
- Denis, R.; Peter Jose, P.; Sushma Margaret, A. Performance Analysis of Machine Learning—Semantic Relational Approach based Job Recommendation System. In Proceedings of the 2023 10th International Conference on Computing for Sustainable Global Development (INDIACom), New Delhi, India, 15–17 March 2023; pp. 1478–1486.
- Alsaif, S.A.; Sassi Hidri, M.; Ferjani, I.; Eleraky, H.A.; Hidri, A. NLP-Based Bi-Directional Recommendation System: Towards Recommending Jobs to Job Seekers and Resumes to Recruiters. Big Data Cogn. Comput. 2022, 6, 147.
- He, M.; Zhu, Y.; Lv, N.; He, R. A Feature Fusion-based Representation Learning Model for Job Recommendation. In Proceedings of the 2022 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 14–16 January 2022; IEEE: New York, NY, USA, 2022; pp. 791–794.
- Minhas, A.H.; Shaiq, M.D.; Qureshi, S.A.; Cheema, M.D.A.; Hussain, S.; Khan, K.U. An Efficient Algorithm for Ranking Candidates in E-Recruitment System. In Proceedings of the 2022 16th International Conference on Ubiquitous Information Management and Communication (IMCOM), Seoul, Republic of Korea, 3–5 January 2022; IEEE: New York, NY, USA, 2022; pp. 1–8.
- Xu, G. Human Resource Recommendation Based on Recurrent Convolutional Neural Network. In Proceedings of the 2022 2nd International Conference on Consumer Electronics and Computer Engineering (ICCECE), Guangzhou, China, 14–16 January 2022; IEEE: New York, NY, USA, 2022; pp. 54–58.
- Puspasari, B.D.; Damayanti, L.L.; Pramono, A.; Darmawan, A.K. Implementation K-Means Clustering Method in Job Recommendation System. In Proceedings of the 2021 7th International Conference on Electrical, Electronics and Information Engineering (ICEEIE), Malang, Indonesia, 2 October 2021; IEEE: New York, NY, USA, 2021; pp. 1–6.
- Wang, Y.; Allouache, Y.; Joubert, C. A Staffing Recommender System based on Domain-Specific Knowledge Graph. In Proceedings of the 2021 Eighth International Conference on Social Network Analysis, Management and Security (SNAMS), Gandia, Spain, 6–9 December 2021; IEEE: New York, NY, USA, 2021; pp. 1–6.
- Saeed, T.; Sufian, M.; Ali, M.; Rehman, A.U. Convolutional Neural Network Based Career Recommender System for Pakistani Engineering Students. In Proceedings of the 2021 International Conference on Innovative Computing (ICIC), Lahore, Pakistan, 9–10 November 2021; IEEE: New York, NY, USA, 2021; pp. 1–10.
- Rafiei, G.; Farahani, B.; Kamandi, A. Towards Automating the Human Resource Recruiting Process. In Proceedings of the 2021 5th National Conference on Advances in Enterprise Architecture (NCAEA), Mashhad, Iran, 1–2 December 2021; IEEE: New York, NY, USA, 2021; pp. 43–47.
- Zhu, J.; Viaud, G.; Hudelot, C. Improving Next-Application Prediction with Deep Personalized-Attention Neural Network. In Proceedings of the 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), Pasadena, CA, USA, 13–16 December 2021; IEEE: New York, NY, USA, 2021; pp. 1615–1622.
- Bellini, V.; Biancofiore, G.M.; Di Noia, T.; Sciascio, E.D.; Narducci, F.; Pomo, C. GUapp: A Conversational Agent for Job Recommendation for the Italian Public Administration. In Proceedings of the 2020 IEEE Conference on Evolving and Adaptive Intelligent Systems (EAIS), Bari, Italy, 27–29 May 2020; IEEE: New York, NY, USA, 2020; pp. 1–7.
- Yadalam, T.V.; Gowda, V.M.; Kumar, V.S.; Girish, D.; Namratha, M. Career Recommendation Systems using Content based Filtering. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; IEEE: New York, NY, USA, 2020; pp. 660–665.
- Nigam, A.; Roy, A.; Singh, H.; Waila, H. Job Recommendation through Progression of Job Selection. In Proceedings of the 2019 IEEE 6th International Conference on Cloud Computing and Intelligence Systems (CCIS), Singapore, 19–21 December 2019; IEEE: New York, NY, USA, 2019; pp. 212–216.
- Jain, H.; Kakkar, M. Job Recommendation System based on Machine Learning and Data Mining Techniques using RESTful API and Android IDE. In Proceedings of the 2019 9th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 10–11 January 2019; IEEE: New York, NY, USA, 2019; pp. 416–421.
- Zhou, Q.; Liao, F.; Ge, L.; Sun, J. Personalized Preference Collaborative Filtering: Job Recommendation for Graduates. In Proceedings of the 2019 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), Leicester, UK, 19–23 August 2019; IEEE: New York, NY, USA, 2019; pp. 1055–1062.
- Mehta, M.; Derasari, R.; Patel, S.; Kakadiya, A.; Gandhi, R.; Chaudhary, S.; Goswami, R. A Service-Oriented Human Capital Management Recommendation Platform. In Proceedings of the 2019 IEEE International Systems Conference (SysCon), Orlando, FL, USA, 8–11 April 2019; IEEE: New York, NY, USA, 2019; pp. 1–8.
- Lin, Y.; Huang, Y.; Chen, P. Employment Recommendation Algorithm Based on Ensemble Learning. In Proceedings of the 2019 IEEE 1st International Conference on Civil Aviation Safety and Information Technology (ICCASIT), Kunming, China, 17–19 October 2019; IEEE: New York, NY, USA, 2019; pp. 267–271.
- Almalki, L. BERT-based Job Recommendation System Using LinkedIn Dataset. J. Inf. Syst. Eng. Manag. 2025, 10, 280–291.
- Hickey, P.J.; Erfani, A.; Cui, Q. Use of LinkedIn Data and Machine Learning to Analyze Gender Differences in Construction Career Paths. J. Manag. Eng. 2022, 38, 04022060.
- Panchasara, S.; Gupta, R.K.; Sharma, A. AI Based Job Recommedation System using BERT. In Proceedings of the 2023 7th International Conference On Computing, Communication, Control And Automation (ICCUBEA), Pune, India, 18–19 August 2023; IEEE: New York, NY, USA, 2023; pp. 1–6.
- Li, M.; Chen, X.; Li, X.; Ma, B.; Vitanyi, P. The Similarity Metric. IEEE Trans. Inf. Theory 2004, 50, 3250–3264.
- Lahitani, A.R.; Permanasari, A.E.; Setiawan, N.A. Cosine similarity to determine similarity measure: Study case in online essay assessment. In Proceedings of the 2016 4th International Conference on Cyber and IT Service Management, Bandung, Indonesia, 26–27 April 2016; IEEE: New York, NY, USA, 2016; pp. 1–6.
- Rosnes, D.; Starke, A.D.; Trattner, C. Shaping the Future of Content-based News Recommenders: Insights from Evaluating Feature-Specific Similarity Metrics. In Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, Cagliari, Italy, 1–4 July 2024; ACM: New York, NY, USA, 2024; pp. 201–211.
- Billsus, D. User Modeling for Adaptive News Access. User Model. User-Adapt. Interact. 2000, 10, 147–180.
- Sanchan, N. Comparative Study on Automated Reference Summary Generation using BERT Models and ROUGE Score Assessment. J. Curr. Sci. Technol. 2024, 14, 26.
- Shakil, H.; Farooq, A.; Kalita, J. Abstractive Text Summarization: State of the Art, Challenges, and Improvements. Neurocomputing 2024, 603, 128255.
- Garrido-Merchan, E.C.; Gozalo-Brizuela, R.; Gonzalez-Carvajal, S. Comparing BERT Against Traditional Machine Learning Models in Text Classification. J. Comput. Cogn. Eng. 2023, 2, 352–356.
- Wehnert, S.; Sudhi, V.; Dureja, S.; Kutty, L.; Shahania, S.; De Luca, E.W. Legal norm retrieval with variations of the bert model combined with TF-IDF vectorization. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, São Paulo, Brazil, 21–25 June 2021; ACM: New York, NY, USA, 2021; pp. 285–294.
Pretrained LLM | ROUGE-1 | ROUGE-2 | ROUGE-L | BLEU | Inference Time |
---|---|---|---|---|---|
BART Descriptions | 0.535832 | 0.505921 | 0.510311 | 0.108321 | 472.6042 |
T5 Descriptions | 0.323948 | 0.287648 | 0.292804 | 0.004917 | 267.7768 |
BERT Descriptions | 0.199262 | 0.195287 | 0.199262 | 0.000235 | 228.2956 |
Pegasus Descriptions | 0.181876 | 0.105088 | 0.139619 | 0.000106 | 207.56 |
Technique | Mean Reciprocal Rank | Root Mean Square Error | Mean Average Precision | Normalized Discounted Cumulative Gain |
---|---|---|---|---|
TF-IDF | 0.1543 | 5.099 | 0.4657 | 0.4304 |
BART | 0.1292 | 5.1769 | 0.4667 | 0.4163 |
T5 | 0.0433 | 5.831 | 0.2667 | 0.1737 |
BERT | 0.55 | 3.6056 | 0.9617 | 0.7917 |
Pegasus | 0.106 | 4.062 | 0.4405 | 0.3476 |