Zero-Shot Learning for Accurate Project Duration Prediction in Crowdsourcing Software Development
Abstract
1. Introduction
- To address the limitations of manual duration estimation by ensuring greater consistency, accuracy, and efficiency in managing project timelines, this paper introduces a novel ML approach that automatically predicts the duration of Crowdsourcing Software Development (CSD) projects, leveraging BERT word embeddings to convert project-related textual information into vectors.
- Application of various ML algorithms to the BERT vectors, with the zero-shot classifier demonstrating superior performance: averages of A = 92.76%, P = 92.76%, R = 99.33%, and F1 = 95.93%.
2. Related Work
2.1. Success Prediction
2.2. Developer Recommendation
2.3. CCSD Project Success Factors
2.4. Task Scheduling
2.5. CCSD Quality Assessment
3. Methodology
3.1. Overview
- First, CSD projects are extracted by implementing a Python script that utilizes the TopCoder public API (https://tcapi.docs.apiary.io/, accessed on 25 July 2024).
- Second, NLP technologies are employed to preprocess the available data, with a particular focus on the requirement documents of each project.
- Third, each CSD project is labeled as small, medium, large, or extra-large based on its duration, calculated by subtracting the posting date from the last submission date. This labeling follows the standard project-duration settings introduced in Section 1 (a labeling sketch is given after this list).
- Fourth, the extracted attributes are utilized, and the BERT model is applied for embedding calculations to represent each CSD project as a vector.
- Fifth, ML classifiers are trained for project-duration prediction, and the best-performing model is selected.
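The third step above reduces to a simple date computation and a set of duration buckets. The sketch below illustrates it in Python; only the "small" range (5–7 days) is stated explicitly in Section 3.3, so the remaining bucket boundaries are illustrative assumptions rather than the paper's actual thresholds.

```python
from datetime import datetime

# Size buckets in days. Only the "small" upper bound (7 days) is implied by the
# paper; the medium/large/extra-large cut-offs below are illustrative assumptions.
SIZE_BUCKETS = [(7, "small"), (14, "medium"), (30, "large"), (float("inf"), "extra-large")]

def label_project_size(posting_date: str, last_submission_date: str) -> str:
    """Label a CSD project by duration = last submission date minus posting date."""
    fmt = "%d %B %Y"  # matches dates such as "6 June 2014"
    duration_days = (datetime.strptime(last_submission_date, fmt)
                     - datetime.strptime(posting_date, fmt)).days
    for upper_bound, label in SIZE_BUCKETS:
        if duration_days <= upper_bound:
            return label
    return "extra-large"

# The worked example from Section 3.2: 6 June 2014 -> 11 June 2014 is 5 days.
print(label_project_size("6 June 2014", "11 June 2014"))  # -> "small"
```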
3.2. Detailed Example
- Project name: “Swift-iOS8-User Notification Actions”.
- Posting date: “6 June 2014” is the date of posting of the open call for the development competition.
- Last submission date: “11 June 2014” is the last date for submitting the software solution.
- Detailed Description: (a snippet from the detailed description of the project) “Hone your iOS development skills by implementing a new iOS 8 API in the new programming language Swift. We are challenging you to implement the new UI User Notification Action API, referred to as Quick Actions in the WWDC Keynote. You should showcase two types of actions; foreground actions (ones that take you into the application for further action) and background actions (ones that perform the action and let the user get back to what they were currently doing) with an authenticated service of some sort, like Salesforce Chatter. For this challenge, the following features are required: Native/Universal/iOS 8+, bundle ID is (deleted for privacy purpose). Implementation of background actions, implementation of an authenticated service that will be leveraged in the quick actions. UI Login Should be functional for authenticating with the service you choose to implement. Primary Screen The screen that will be used for performing your foreground action, e.g., replying to a post. User Notification Implement using the Minimal or Default Action context, this isn’t very customizable, but it is where the actual functionality of the challenge happens”.
- Required technologies: “iOS, REST, Swift, R” are the technology constraints that are specified for developing the project.
- Prize money: “1000, 500” are the prize monies awarded to the first- and second-place winners.
- Platforms: “iOS” is the platform constraint (there may be several) on which the finished project will operate.
- Status: “Cancelled-Failed Review” is the ultimate status of the project that signifies whether the project was completed or failed to meet the deadline (the specifics of the various statuses are detailed in Section .....).
3.3. Problem Definition
- Detailed description = the requirement-document snippet quoted in Section 3.2 (“Hone your iOS development skills by implementing a new iOS 8 API …”).
- Total prize money = 1000 + 500 = 1500, the sum of the prize monies of the first and second winners.
- Number of platform constraints = 1.
- Project status = 0, where 0 indicates the project failed for some reason (e.g., zero submissions, failed review, or failed screening) and 1 indicates the project was completed successfully.
- Number of technology constraints = 4, the technologies required to build the software project.
- Project duration = 5 days (the duration varies with the size of the project).
- Project duration in hours = 120, calculated from the number of days (5 × 24).
- Project size = “small”, the label assigned based on the project duration, i.e., between 5 and 7 days (a sketch of this feature representation follows this list).
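A minimal sketch of how one project can be represented with the attributes listed above, using the worked example from Section 3.2. The field names are illustrative; the paper denotes each attribute with a formal symbol that is not reproduced in this extract.

```python
from dataclasses import dataclass

@dataclass
class ProjectFeatures:
    """One CSD project, with the attributes enumerated in Section 3.3."""
    description: str       # snippet of the requirement document
    total_prize: float     # sum of first- and second-place prize monies
    num_platforms: int     # number of platform constraints
    status: int            # 1 = completed, 0 = failed (zero submission, failed review/screening)
    num_technologies: int  # number of technology constraints
    duration_days: int     # last submission date minus posting date
    duration_hours: int    # duration_days * 24
    size_label: str        # "small", "medium", "large", or "extra-large"

example = ProjectFeatures(
    description="Hone your iOS development skills by implementing a new iOS 8 API ...",
    total_prize=1000 + 500,   # = 1500
    num_platforms=1,
    status=0,                 # "Cancelled-Failed Review"
    num_technologies=4,       # iOS, REST, Swift, R
    duration_days=5,
    duration_hours=5 * 24,    # = 120
    size_label="small",       # 5-7 days
)
```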
3.4. Preprocessing
- Tokenization: in this step, the text is split into words, and each word is called a token. Special characters such as punctuation marks are separated out, and all tokens are converted to lowercase.
- Spell correction: in this step, spell correction is performed using the TextBlob module (https://github.com/sloria/TextBlob, accessed on 25 July 2024).
- Stop-words removal: stop-words are commonly used words, e.g., the, a, an, in, and are. NLTK in Python provides a list of stop-words, and in this step all stop-words are removed.
- POS tagging: the process of assigning each word a category corresponding to its part of speech (POS tag). In this step, each tokenized word, particularly from the requirement documents, is assigned a POS tag.
- Replacing emails, phone numbers, and URLs: the clean-text library (https://pypi.org/project/clean-text/, accessed on 25 July 2024) is applied to replace emails, phone numbers, and URLs (if any) with blank spaces.
- Word morphology and lemmatization: word morphology transforms words into their singular forms; for instance, problems becomes problem. Lemmatization transforms nouns and adjectives into their base forms; for example, glasses becomes glass. (A sketch of the full preprocessing pipeline follows this list.)
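The sketch below chains the preprocessing steps described above using NLTK, TextBlob, and clean-text. The exact ordering of the steps is not spelled out in this extract, so treat the ordering and parameter choices as assumptions.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from textblob import TextBlob
from cleantext import clean  # pip install clean-text

# Resources required by the NLTK steps below.
for pkg in ("punkt", "stopwords", "averaged_perceptron_tagger", "wordnet"):
    nltk.download(pkg, quiet=True)

def preprocess(text: str) -> list[str]:
    # Replace emails, phone numbers, and URLs with blank spaces (clean-text).
    text = clean(text, no_emails=True, no_phone_numbers=True, no_urls=True,
                 replace_with_email=" ", replace_with_phone_number=" ",
                 replace_with_url=" ", lower=True)
    # Spell correction (TextBlob).
    text = str(TextBlob(text).correct())
    # Tokenization into lowercase tokens.
    tokens = nltk.word_tokenize(text)
    # Stop-word and punctuation removal.
    stops = set(stopwords.words("english"))
    tokens = [t for t in tokens if t.isalpha() and t not in stops]
    # POS tagging, then lemmatization to base/singular forms.
    lemmatizer = WordNetLemmatizer()
    return [lemmatizer.lemmatize(tok) for tok, _tag in nltk.pos_tag(tokens)]

print(preprocess("Implementing the new UI User Notification Action APIs in Swift."))
```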
3.5. Word Embeddings
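A minimal sketch of vectorizing a project's (preprocessed) text with BERT via the Hugging Face transformers library. The checkpoint (bert-base-uncased) and the mean-pooling of the last hidden layer are assumptions; the paper states only that BERT embeddings are used to represent each project as a vector.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return one fixed-size vector for the given text."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Average the token embeddings of the last hidden layer -> one 768-d vector.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

vector = embed("implement ui user notification action api in swift")
print(vector.shape)  # torch.Size([768])
```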
3.6. Zero-Shot Learning Classifier
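A hedged sketch of applying a zero-shot classifier to a project description with the four duration labels used in this paper. The NLI backbone (facebook/bart-large-mnli) and the hypothesis template are illustrative assumptions, not the paper's configuration.

```python
from transformers import pipeline

zsc = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

candidate_labels = ["small", "medium", "large", "extra-large"]
result = zsc(
    "Implement the new UI User Notification Action API in Swift for iOS 8.",
    candidate_labels=candidate_labels,
    hypothesis_template="The duration of this project is {}.",
)
print(result["labels"][0], result["scores"][0])  # top predicted size bucket and its score
```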
4. Evaluation
4.1. Research Questions
- RQ1 Does the proposed approach work for the project-duration prediction of CCSD projects? If yes, to what extent?
- RQ2 Does embedding influence the proposed method of CCSD project-duration prediction?
- RQ3 Does preprocessing influence the proposed method of CCSD for project-duration prediction?
- RQ4 Does the proposed method outperform the machine learning classifiers for CCSD project-duration prediction?
4.2. Dataset
4.3. Process
Algorithm 1: Training and Evaluation of Classifiers.
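The pseudocode of Algorithm 1 is not reproduced in this extract. The following is a hedged reconstruction of a typical training-and-evaluation loop for the baseline classifiers compared in Section 4.8; the split ratio, hyperparameters, and weighted averaging are assumptions. X denotes the matrix of BERT vectors and y the small/medium/large/extra-large labels; the zero-shot classifier needs no task-specific training and is therefore omitted here.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

def train_and_evaluate(X, y):
    """Train the baseline classifiers and report P, R, F1 on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=42)
    results = {}
    for name, clf in {"SVM": LinearSVC(),
                      "LR": LogisticRegression(max_iter=1000),
                      "RF": RandomForestClassifier()}.items():
        clf.fit(X_tr, y_tr)
        p, r, f1, _ = precision_recall_fscore_support(
            y_te, clf.predict(X_te), average="weighted", zero_division=0)
        results[name] = (p, r, f1)
    return results
```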
4.4. Metrics
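The results are reported as P, R, and F1, presumably with their standard definitions (an assumption, since the formulas themselves are not reproduced in this extract):

```latex
P = \frac{TP}{TP + FP}, \qquad
R = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot P \cdot R}{P + R}
```

where TP, FP, and FN are counted per size class and then averaged across classes (the averaging scheme, e.g., weighting by class frequency, is also an assumption).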
4.5. Results
Effectiveness of the Proposed Approach
- The mean P, R, and F1 of the proposed methodology, random forecasting, and zero rule are (92.76%, 99.33%, 95.93%), (65.23%, 65.64%, 65.43%), and (82.58%, 100.00%, 90.46%), respectively.
- The suggested methodology surpasses the random forecasting and zero rule classifiers.
- Concerning P, the enhancement in performance of the suggested methodology over random forecasting and zero rule is 42.20% = (92.76% − 65.23%)/65.23% and 12.33% = (92.76% − 82.58%)/82.58%, respectively.
- In terms of R, the performance enhancement of the suggested methodology over random forecasting and zero rule is 51.33% = (99.33% − 65.64%)/65.64% and −0.67% = (99.33% − 100.00%)/100.00%, respectively. The reason for the decline in performance of the suggested methodology in R compared to zero rule is that zero rule consistently predicts the majority class.
- Regarding F1, the performance enhancement of the suggested methodology over random forecasting and zero rule is 46.61% = (95.93% − 65.43%)/65.43% and 6.05% = (95.93% − 90.46%)/90.46%, respectively (the snippet after this list reproduces these computations).
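The relative improvements quoted above all follow the same formula, (proposed − baseline)/baseline; the short check below reproduces the reported numbers.

```python
# Relative improvement of the proposed approach over a baseline, in percent.
def rel_improvement(proposed: float, baseline: float) -> float:
    return (proposed - baseline) / baseline * 100

for metric, proposed, random_pred, zero_rule in [
    ("P", 92.76, 65.23, 82.58),
    ("R", 99.33, 65.64, 100.00),
    ("F1", 95.93, 65.43, 90.46),
]:
    print(f"{metric}: vs. random {rel_improvement(proposed, random_pred):.2f}%, "
          f"vs. zero rule {rel_improvement(proposed, zero_rule):.2f}%")
# P: vs. random 42.20%, vs. zero rule 12.33%
# R: vs. random 51.33%, vs. zero rule -0.67%
# F1: vs. random 46.61%, vs. zero rule 6.05%
```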
4.6. Importance of Embedding
4.7. Importance of Preprocessing
- Enabling preprocessing leads to enhanced performance. It boosts the average P, R, and F1 of the proposed methodology by 0.34% = (92.76% − 92.45%)/92.45%, 0.38% = (99.33% − 98.95%)/98.95%, and 0.36% = (95.93% − 95.59%)/95.59%, respectively.
- The likely rationale behind this enhancement is the presence of extraneous and irrelevant content within the textual data of projects, such as stop-words and punctuation. Consequently, feeding such data into the proposed methodology poses an additional burden. Hence, implementing preprocessing may enhance performance and reduce computational expenses.
4.8. Proposed Classifier versus Other Machine Learning Classifiers
- The mean P, R, and F1 of ZSC, SVM, LR, and RF are (92.76%, 99.33%, and 95.93%), (91.93%, 98.72%, and 95.21%), (91.68%, 97.94%, and 94.71%), and (89.57%, 83.64%, and 86.51%), respectively. The application of these classifiers indicates that ZSC provides the most accurate results on the given dataset.
- The ZSC algorithm outperformed SVM, LR, and RF in terms of P, R, and F1. Importantly, we did not use boosting algorithms to correct classification errors due to their additional computational cost. ZSC excels for several reasons. Firstly, it has superior generalization capabilities, allowing it to perform well across domains and datasets without extensive tuning. Secondly, ZSC requires less labeled data for training than SVM, LR, and RF, making it ideal for scenarios where annotated data is limited or expensive. Thirdly, ZSC is more adaptable to new or unseen classes, which is beneficial for tasks involving rapidly changing data environments.
- SVM surpasses LR and RF because it constructs a hyperplane in the feature space that maximizes the margin for most projects, except for outliers. This characteristic helps SVM generalize better on test data compared to distance-based and similarity-based algorithms like RF. Furthermore, linear SVM efficiently explores different feature combinations and performs classification with lower computational complexity than other SVM kernels. SVM also excels in long text classification scenarios, outperforming classifiers like LR, RF, and MNB.
- LR also shows better performance than RF, primarily due to its rapid training capability and effectiveness with sparse features. Although the performance difference between LR and RF is small, LR’s ability to handle high-dimensional data can significantly enhance its performance on larger datasets. In contrast, RF’s complexity makes it less suitable for high-dimensional features, particularly in project-duration prediction tasks.
4.9. Threats to Validity
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Urbaczek, J.; Saremi, R.; Saremi, M.; Togelius, J. Greedy Scheduling: A Neural Network Method to Reduce Task Failure in Software Crowdsourcing. In Proceedings of the 23rd International Conference on Enterprise Information Systems (ICEIS), Volume 1, INSTICC, Virtual, 26–28 April 2021; SciTePress: Setúbal, Portugal, 2021; pp. 410–419. [Google Scholar] [CrossRef]
- Illahi, I.; Liu, H.; Umer, Q.; Zaidi, S.A.H. An Empirical Study on Competitive Crowdsource Software Development: Motivating and Inhibiting Factors. IEEE Access 2019, 7, 62042–62057. [Google Scholar] [CrossRef]
- Wang, R.; Chen, B. A Configurational Approach to Attracting Participation in Crowdsourcing Social Innovation: The Case of Openideo. Manag. Commun. Q. 2023, 37, 340–367. [Google Scholar] [CrossRef]
- Illahi, I.; Liu, H.; Umer, Q.; Niu, N. Machine learning based success prediction for crowdsourcing software projects. J. Syst. Softw. 2021, 178, 110965. [Google Scholar] [CrossRef]
- Zhang, Z.; Sun, H.; Zhang, H. Developer recommendation for Topcoder through a meta-learning based policy model. Empir. Softw. Eng. 2020, 25, 859–889. [Google Scholar] [CrossRef]
- Afridi, H.G. Empirical investigation of correlation between rewards and crowdsource-based software developers. In Proceedings of the 2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C), Buenos Aires, Argentina, 20–28 May 2017; pp. 80–81. [Google Scholar] [CrossRef]
- de Souza, C.R.B.; Machado, L.S.; Melo, R.R.M. On Moderating Software Crowdsourcing Challenges. Proc. ACM Hum.-Comput. Interact. 2020, 4, 1–22. [Google Scholar] [CrossRef]
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
- Patel, C.; Husairi, M.A.; Haon, C.; Oberoi, P. Monetary rewards and self-selection in design crowdsourcing contests: Managing participation, contribution appropriateness, and winning trade-offs. Technol. Forecast. Soc. Chang. 2023, 191, 122447. [Google Scholar] [CrossRef]
- Mazzola, E.; Piazza, M.; Perrone, G. How do different network positions affect crowd members’ success in crowdsourcing challenges? J. Prod. Innov. Manag. 2023, 40, 276–296. [Google Scholar] [CrossRef]
- Rashid, T.; Anwar, S.; Jaffar, M.A.; Hakami, H.; Baashirah, R.; Umer, Q. Success Prediction of Crowdsourced Projects for Competitive Crowdsourced Software Development. Appl. Sci. 2024, 14, 489. [Google Scholar] [CrossRef]
- Yin, X.; Wang, H.; Wang, W.; Zhu, K. Task recommendation in crowdsourcing systems: A bibliometric analysis. Technol. Soc. 2020, 63, 101337. [Google Scholar] [CrossRef]
- Huang, Y.; Nazir, S.; Wu, J.; Hussain Khoso, F.; Ali, F.; Khan, H.U. An efficient decision support system for the selection of appropriate crowd in crowdsourcing. Complexity 2021, 2021, 5518878. [Google Scholar] [CrossRef]
- Yin, X.; Huang, J.; He, W.; Guo, W.; Yu, H.; Cui, L. Group task allocation approach for heterogeneous software crowdsourcing tasks. Peer-Peer Netw. Appl. 2021, 14, 1736–1747. [Google Scholar] [CrossRef]
- Wang, J.; Yang, Y.; Wang, S.; Chen, C.; Wang, D.; Wang, Q. Context-aware personalized crowdtesting task recommendation. IEEE Trans. Softw. Eng. 2021, 48, 3131–3144. [Google Scholar] [CrossRef]
- Yuen, M.C.; King, I.; Leung, K.S. Temporal context-aware task recommendation in crowdsourcing systems. Knowl.-Based Syst. 2021, 219, 106770. [Google Scholar] [CrossRef]
- Wang, J.; Yang, Y.; Wang, S.; Hu, J.; Wang, Q. Context-and Fairness-Aware In-Process Crowdworker Recommendation. ACM Trans. Softw. Eng. Methodol. (TOSEM) 2022, 31, 1–31. [Google Scholar] [CrossRef]
- He, H.R.; Liu, Y.; Gao, J.; Jing, D. Investigating Business Sustainability of Crowdsourcing Platforms. IEEE Access 2022, 10, 74291–74303. [Google Scholar] [CrossRef]
- Dubey, A.; Abhinav, K.; Taneja, S.; Virdi, G.; Dwarakanath, A.; Kass, A.; Kuriakose, M.S. Dynamics of software development crowdsourcing. In Proceedings of the 2016 IEEE 11th International Conference on Global Software Engineering (ICGSE), Orange County, CA, USA, 2–5 August 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 49–58. [Google Scholar]
- Messinger, D. Elements of Good Crowdsourcing. In Proceedings of the 3rd International Workshop in Austin, Austin, TX, USA, 17 May 2016. [Google Scholar]
- Yang, Y.; Karim, M.R.; Saremi, R.; Ruhe, G. Who should take this task? Dynamic decision support for crowd workers. In Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, Ciudad Real, Spain, 8–9 September 2016; pp. 1–10. [Google Scholar]
- Borst, I. Understanding Crowdsourcing: Effects of Motivation and Rewards on Participation and Performance in Voluntary Online Activities. Number EPS-2010-221-LIS; 2010; Available online: https://repub.eur.nl/pub/21914/EPS2010221LIS9789058922625.pdf (accessed on 27 September 2024).
- Yang, Y.; Saremi, R. Award vs. worker behaviors in competitive crowdsourcing tasks. In Proceedings of the 2015 ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), Beijing, China, 22–23 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–10. [Google Scholar]
- Kamar, E.; Horvitz, E. Incentives for truthful reporting in crowdsourcing. In Proceedings of the AAMAS. Citeseer, Valencia Spain, 4–8 June 2012; Volume 12, pp. 1329–1330. [Google Scholar]
- Machado, L.; Melo, R.; Souza, C.; Prikladnicki, R. Collaborative Behavior and Winning Challenges in Competitive Software Crowdsourcing. Proc. ACM Hum.-Comput. Interact. 2021, 5, 1–25. [Google Scholar] [CrossRef]
- Al Haqbani, O.; Alyahya, S. Supporting Coordination among Participants in Crowdsourcing Software Design. In Proceedings of the 2022 IEEE/ACIS 20th International Conference on Software Engineering Research, Management and Applications (SERA), Las Vegas, NV, USA, 25–27 May 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 132–139. [Google Scholar]
- Alabdulaziz, M.S.; Hassan, H.F.; Soliman, M.W. The effect of the interaction between crowdsourced style and cognitive style on developing research and scientific thinking skills. Eurasia J. Math. Sci. Technol. Educ. 2022, 18, em2162. [Google Scholar] [CrossRef]
- Xu, H.; Wu, Y.; Hamari, J. What determines the successfulness of a crowdsourcing campaign: A study on the relationships between indicators of trustworthiness, popularity, and success. J. Bus. Res. 2022, 139, 484–495. [Google Scholar] [CrossRef]
- Feng, Y.; Yi, Z.; Yang, C.; Chen, R.; Feng, Y. How do gamification mechanics drive solvers’ Knowledge contribution? A study of collaborative knowledge crowdsourcing. Technol. Forecast. Soc. Chang. 2022, 177, 121520. [Google Scholar] [CrossRef]
- Shi, X.; Evans, R.D.; Shan, W. What Motivates Solvers’ Participation in Crowdsourcing Platforms in China? A Motivational–Cognitive Model. IEEE Trans. Eng. Manag. 2022, 71, 12068–12080. [Google Scholar] [CrossRef]
- Mejorado, D.M.; Saremi, R.; Yang, Y.; Ramirez-Marquez, J.E. Study on patterns and effect of task diversity in software crowdsourcing. In Proceedings of the 14th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), Bari, Italy, 5–7 October 2020; pp. 1–10. [Google Scholar]
- Saremi, R.; Yang, Y.; Vesonder, G.; Ruhe, G.; Zhang, H. Crowdsim: A hybrid simulation model for failure prediction in crowdsourced software development. arXiv 2021, arXiv:2103.09856. [Google Scholar]
- Khanfor, A.; Yang, Y.; Vesonder, G.; Ruhe, G.; Messinger, D. Failure prediction in crowdsourced software development. In Proceedings of the 2017 24th Asia-Pacific Software Engineering Conference (APSEC), Nanjing, China, 4–8 December 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 495–504. [Google Scholar]
- Urbaczek, J.; Saremi, R.; Saremi, M.L.; Togelius, J. Scheduling tasks for software crowdsourcing platforms to reduce task failure. arXiv 2020, arXiv:2006.01048. [Google Scholar]
- Saremi, R.; Yagnik, H.; Togelius, J.; Yang, Y.; Ruhe, G. An evolutionary algorithm for task scheduling in crowdsourced software development. arXiv 2021, arXiv:2107.02202. [Google Scholar]
- Hu, Z.; Wu, W.; Luo, J.; Wang, X.; Li, B. Quality assessment in competition-based software crowdsourcing. Front. Comput. Sci. 2020, 14, 146207. [Google Scholar] [CrossRef]
- Jung, H.J. Quality assurance in crowdsourcing via matrix factorization based task routing. In Proceedings of the 23rd International Conference on World Wide Web, Seoul, Republic of Korea, 7–11 April 2014; pp. 3–8. [Google Scholar]
- Wu, W.; Tsai, W.T.; Li, W. An evaluation framework for software crowdsourcing. Front. Comput. Sci. 2013, 7, 694–709. [Google Scholar] [CrossRef]
- Blohm, I.; Zogaj, S.; Bretschneider, U.; Leimeister, J.M. How to Manage Crowdsourcing Platforms Effectively? Calif. Manag. Rev. 2018, 60, 122–149. [Google Scholar] [CrossRef]
- Sarzynska-Wawer, J.; Wawer, A.; Pawlak, A.; Szymanowska, J.; Stefaniak, I.; Jarkiewicz, M.; Okruszek, L. Detecting formal thought disorder by deep contextualized word representations. Psychiatry Res. 2021, 304, 114135. [Google Scholar] [CrossRef]
- Mikolov, T.; Chen, K.; Corrado, G.; Dean, J. Efficient estimation of word representations in vector space. arXiv 2013, arXiv:1301.3781. [Google Scholar]
- Pennington, J.; Socher, R.; Manning, C.D. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 25–29 October 2014; pp. 1532–1543. [Google Scholar]
- Joulin, A.; Grave, E.; Bojanowski, P.; Mikolov, T. Bag of tricks for efficient text classification. arXiv 2016, arXiv:1607.01759. [Google Scholar]
Study | Method Used | Result | Strengths | Limitations |
---|---|---|---|---|
Illahi et al. [4] | CNN Classifiers | , , | Efficient prediction of software project success | Limited generalizability beyond specific datasets |
Mazzola et al. [10] | Network Position Analysis | Inverted U-shaped relationship identified among 2479 participants | Focus on network effects in crowdsourcing | Only addresses network position; lacks broader scope |
Rashid et al. [11] | BERT Model | P, R, and F1 improvements of 13.46%, 8.83%, and 11.13%, respectively | Strong performance in CSP success prediction | Limited scalability for other crowdsourcing tasks |
X. Yin et al. [12] | Probabilistic Matrix Factorization | Enhanced task alignment with developers on platforms like TopCoder | Effective task-to-developer matching | Difficulty in gathering comprehensive worker data for bias correction |
Yongjun et al. [13] | Crowdsourcing Decision Framework | Improved task selection efficiency | Effective crowd selection based on project characteristics | Requires more empirical data for validation |
Junjie et al. [15] | Context-Sensitive Task Recommendation | Enhanced P and R, and reduced exploration effort | Effective at task recommendation in testing environments | Lack of testing outside the testing crowds |
Yuen et al. [16] | Temporal Task Recommendation | Improved worker preferences prediction | Considers time-based worker preferences | Performance dependent on accurate worker history data |
Dubey et al. [19] | Analysis of Task Category and Worker Ratings | Well-organized tasks attract skilled developers | Insights into task categorization and skill alignment | Focuses only on small task pools |
Sultan et al. [26] | Automated Project Manager Selection | Effective crowdsourcing management using automation | Strong project success association with manager selection | Lacks in-depth metrics for task progress monitoring |
Razieh et al. [34] | Neural Networks | Task failure reduced by 4% | Improved efficiency and task completion rate | Focuses only on specific CCSD platforms |
Zhenghui et al. [36] | Clustering-Based Quality Metric | Project rating on TopCoder based on clustering | Effective quality assurance mechanism | Focuses on TopCoder, limiting generalization |
Hyun Joon Jung [37] | SVD and PMF Models | Exceeded benchmarks in developer performance prediction | Accurate project routing and developer selection | Requires significant computational resources |
Approach | P | R | F1 |
---|---|---|---|
Proposed Approach | 92.76% | 99.33% | 95.93% |
Random prediction | 65.23% | 65.64% | 65.43% |
Zero Rule | 82.58% | 100.00% | 90.46% |
Input | P | R | F1 |
---|---|---|---|
Embedding | 92.76% | 99.33% | 95.93% |
TF-IDF | 90.45% | 95.32% | 92.82% |
Preprocessing | P | R | F1 |
---|---|---|---|
Enable | 92.76% | 99.33% | 95.93% |
Disable | 92.45% | 98.95% | 95.59% |
Classifier | P | R | F1 |
---|---|---|---|
ZSC | 92.76% | 99.33% | 95.93% |
SVM | 91.93% | 98.72% | 95.21% |
LR | 91.68% | 97.94% | 94.71% |
RF | 89.57% | 83.64% | 86.51% |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Rashid, T.; Illahi, I.; Umer, Q.; Jaffar, M.A.; Ramay, W.Y.; Hakami, H. Zero-Shot Learning for Accurate Project Duration Prediction in Crowdsourcing Software Development. Computers 2024, 13, 266. https://doi.org/10.3390/computers13100266