Article

Automated Assessment Tool for 3D Computer-Aided Design Models

1
LMS, ISSATSo, University of Sousse, Sousse 4000, Tunisia
2
Department of Mechanical Engineering, College of Engineering, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
3
Department of Engineering, Utah Valley University, 800 W University Pkwy, Orem, UT 84058, USA
*
Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(11), 4578; https://doi.org/10.3390/app14114578
Submission received: 15 April 2024 / Revised: 14 May 2024 / Accepted: 16 May 2024 / Published: 27 May 2024

Abstract

Computer-aided design (CAD) has become an integral part of engineering education, particularly for those studying mechanical engineering. By providing practical skills that are highly valued in the engineering industry, proficiency in CAD systems enhances students’ employability. Generally, CAD systems provide students with the tools and knowledge necessary to excel in their engineering education and future careers. In order to help teachers to give the best training to their students and to make the right evaluations, an automatized tool is needed to support the evaluation of CAD models during training sessions. After an extensive bibliographical search, this paper proposes a CAD Model Automatized Assessment (MAA) Tool for mechanical courses called the CAD MAA Tool. This tool is mainly based on a developed model that takes into account different aspects of modeling, such as geometric, feature-based, and parametric modeling. To correctly evaluate a given part compared to a reference one, the proposed model uses different coefficients fixed by the teacher according to their teaching strategies or course objectives.

1. Introduction

The comparison of three-dimensional (3D) CAD models has been the main focus of a number of research efforts over the last decade [1]. There are several methods for comparing CAD models; the following are some of those that may be used [2]: Geometric comparison is the most important one. This technique consists of comparing the geometry of CAD models [3,4,5]. Then comes feature-based comparison [5,6]. Here, specific features of CAD models, such as holes, fillets, and chamfers, are compared. This involves extracting feature information from the models and making comparisons. Topology-based comparison is also a well-known technique [7]. Using this method, the topology or connectivity of the models is compared. This is possible through the analysis and comparison of the graph data of the models. There is also visual comparison; this method involves comparing the general appearance of CAD models. This can be achieved either by previewing the models and comparing the images, or by examining feature vectors of the models derived from the preview.
Several other techniques are used, such as mesh-based comparison, surface-based comparison, and shape-based comparison [8]. Other methods for comparing CAD models are also available, and the choice of method depends on the specific requirements of the application. In an educational context, a teacher can use geometric comparison to compare a student’s model to a reference model based on geometric similarity. In addition, the teacher can use feature-based comparison to verify that the specific features of the models have been generated properly and correctly by students. Topology and structure can also be compared with reference models. After the comparison, the teacher can give feedback to the students to help them improve their models based on the comparison results. As a result, students will be able to identify both their strengths and weaknesses and enhance their CAD modeling skills.

2. State of the Art and Related Works

2.1. CAD Model Comparison: Application Domains

CAD model comparison has a wide range of applications in a variety of industries and fields. The following are a few notable areas where CAD model comparisons are often used:
  • Engineering and manufacturing: To ensure the accuracy and consistency of design models, CAD model comparison is widely used in the engineering and manufacturing industries. It helps identify inconsistencies between different versions of a CAD model, assists in design verification, and provides assurance of compliance with design standards and constraints. It can be applied in the automotive and aerospace fields, industrial equipment, and many more areas.
  • Architecture and civil engineering [9]: In the architecture domain, CAD models are compared to detect changes or discrepancies between design iterations, to track changes made during construction, and to verify the accuracy of existing models against original design drawings. This helps with quality control and documentation, facilitates clash detection, and improves coordination between project stakeholders.
  • Digital prototypes and simulations [10]: Comparing CAD models is an essential part of creating digital prototypes and simulations. This allows engineers and designers to compare model iterations to evaluate design changes, analyze the effects on performance parameters, and optimize the design to be both functional and efficient. This domain includes areas such as product development, virtual testing, and computer-aided engineering (CAE).
  • Reverse engineering and 3D printing [11,12]: CAD model comparison is used in the reverse engineering process, which involves the digital recreation of existing physical objects or parts. Any discrepancies or deviations can be identified by comparing the CAD model created from scanned data with the original object. This facilitates the creation of accurate and functional models.
  • Quality assurance and testing: In quality assurance (QA) and inspection processes, CAD model comparison is used to validate that manufactured parts match the original design. Deviations or defects can be detected and corrective action taken to maintain product quality.
  • Education and training [11]: Comparing CAD models is also used for education purposes, teaching students about design principles, tolerance analysis, and model verification. Educators can also assess student learning, provide feedback, and improve students’ understanding of design concepts by comparing different versions of CAD models or evaluating student designs against predefined criteria.

2.2. Comparison Tools in Commercial CAD Systems

Computer-aided design (CAD) systems and models are key tools in engineering, design projects, and education. CAD systems are software applications that enable users to create 2D or 3D digital models or prototypes of physical structures or products. These models may be manipulated, edited, and enhanced before the final design is fully developed. These systems include a number of features and tools to automatically perform certain tasks, such as taking accurate measurements, generating bills of materials (BOMs), and running simulations. Commercially available software tools indeed play a critical role in identifying differences between 3D CAD models, known as Model Difference Identification (MDI) tools [13].
The comparison feature is one of the features generally available in 3D CAD systems. According to their documentation, the main goal of 3D computer-aided design systems in providing model comparison capabilities is to assist users in comparing different versions of a model and, ultimately, in managing modifications to 3D CAD models. Notably, 3D CAD systems are the only software that can compare the procedural representations of models, i.e., the sequencing of the modeling operations with respect to their corresponding parameters (also called the feature manager, modeling history, etc.). Although comparing imported models is also possible, it is highly dependent on the capabilities of the 3D CAD system to import and regenerate external models, a process likely to introduce a degree of data degradation (e.g., the loss of model features, topological changes, etc.) [6].
Comparing CAD models using commercial CAD software is a common industry practice. Advanced features for comparing and evaluating differences between different CAD models are available in commercial CAD software. These tools allow engineers and designers to identify design discrepancies, errors, or inconsistencies between different versions of CAD models, whether they are geometric or dimensional differences or changes in material properties. Furthermore, commercial CAD software can provide detailed reporting on differences, performing quantitative measurement to assess the extent of variation between models.
Each piece of commercial CAD software existing on the market has its own specific comparison capabilities. Comparison tools can vary in their ability to detect differences, ease of use, and file formats supported.
CAD model comparison tools in commercial software can have limitations, particularly in detecting subtle differences, handling complex modifications, handling a variety of file formats, automating the comparison process, and integrating with other analysis tools. It is very important to be conscious of these limitations (Table 1).

2.3. Comparison Tools in the Literature

The assessment of similarity in 3D mechanical parts has garnered considerable attention. In a study by Hong et al. [2], previous research on shape similarity techniques was categorized into two groups. The first category focuses solely on geometric similarity, while the second compares predefined typical features. The former allows for any input data structure, as it compares parts based on geometric data alone. The latter, on the other hand, requires more information and feature extraction, such as holes and fillets, but results in a lighter computational load and more reasonable outcomes [16].
Various approaches have been proposed for 3D shape similarity assessment. Some utilize design features as the basis for a semantic design information model, representing 3D CAD models using feature-adjacent attribute graphs (FAAGs) [17]. Structural CAD models, presented as sets of features and characteristic structures, are compared using bi-graphs [17]. Similar methods use feature dependency-directed acyclic graphs (FDAGs) [18]. Assembly models are also represented as relational graphs, and their similarity is measured based on compatibility matrices [19].
Signature-based methods, such as shape signatures, offer a transferable and understandable model for 3D geometry comparison. Some authors [20] highlight the efficiency of methods generating shape signatures from CAD models. They classify methods into the following six categories based on the CAD information comprising the shape signature: feature-based, spatial function, shape histogram, section image, topological graph, and shape statistics.
However, these methods have limitations, particularly the absence of a unique shape signature. Recognizing the importance of both geometrical and topological information for mechanical CAD models, others [21] proposed two shape signatures to describe geometry and topology separately. Another approach, shape distribution, computes a statistical distribution of distances on the object surface for similarity measurement [3]. Despite being geometry-based, this approach may struggle to differentiate parts that are similar in shape but distinct in design features [20].
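To illustrate the shape-distribution idea, the following sketch builds a simple D2-style histogram of distances between randomly sampled point pairs. For brevity it samples from a vertex list rather than the full surface; the function names and sampling scheme are illustrative, not taken from the cited works:

```python
import math
import random

def d2_shape_distribution(vertices, samples=1000, bins=10, seed=0):
    """D2-style shape signature: a normalized histogram of distances
    between randomly sampled point pairs (drawn here from the vertex
    list as a simplification of surface sampling)."""
    rng = random.Random(seed)
    dists = [math.dist(rng.choice(vertices), rng.choice(vertices))
             for _ in range(samples)]
    max_d = max(dists) or 1.0  # guard against degenerate input
    hist = [0] * bins
    for d in dists:
        hist[min(int(d / max_d * bins), bins - 1)] += 1
    return [h / samples for h in hist]

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

Two geometrically similar parts yield similar histograms, but, as noted above, parts alike in overall shape yet different in design features may still produce nearly identical signatures.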
Maier and Langer [22] introduced a method for comparing 3D CAD assemblies based on shape distribution, considering assembly relationships and part shape dissimilarities. Many works transform the original 3D model into other formats (STEP files, signatures, graphs) for comparison, potentially causing information loss. This transformation alters geometric features, making the identification of identical features challenging.
In contrast, Siddique et al. [23] developed a Visual C++ application to determine dimensional and positional commonality between mechanical parts by extracting dimensions from represented IGES files. However, this method faces limitations in extracting feature information from such files.
A new methodology has been proposed [24] to explain the semantics of components within an assembly, starting with CAD models themselves. In particular, this methodology includes a rule-based approach that prioritizes the identification and categorization of a collection of engineering-relevant components, known as standard parts. This approach takes into account both the physical attributes of the components and the assembly environment.
In addition, there is a paper that presents an approach which facilitates process planning by comparing the design of the generated part with that of previous validated products. By using these previous CAD part models, it is possible to reduce development costs, speed up production start-up, and improve product quality [25].
In summary, various methods exist for 3D mechanical part similarity assessment, each with its advantages and limitations, emphasizing the importance of considering both geometric and topological information.

2.4. Recent Work in CAD Assessment Tools for Education

To improve CAD pedagogy, efforts have been made in recent years to develop innovative, creative, and practical approaches [26]. One of the most important approaches is the development of CAD assessment tools, which are CAD tools used to test designs submitted by students to produce an assessment mark and generate detailed and consistent feedback for students [27]. In order to change the traditional learning practice, an interactive CAD teaching system was proposed to enhance the teaching process, and software was developed to test this proposed new system [28]. In the same context, and to make education more enjoyable, attractive, and effective using multimedia technology, a bespoke system was developed to enhance the teaching and learning experience in large classrooms. It provides lecture recording with selective replay, implicit and explicit responses, and multiple-tutor support [29]. The developed system was tested in a real classroom environment of a computer-aided design course with over 150 students and received more than 60% positive feedback from the participating students. Computer-aided design (CAD) courses have traditionally been taught in computer rooms, with students requiring a high degree of assistance. Assessing students’ modeling exercises was time-consuming and potentially error-prone [30]. Two new online auto-assessment tools were presented to support the development of both command and strategic knowledge while learning CAD. The first tool uses the neutral file format (STEP). It is able to recognize surface differences between the student’s model and the teacher’s model. The proposed tool can assess students’ skills in creating a predefined shape. The second auto-assessment tool utilizes the Application Programming Interface (API) of commercial CAD software to test how a student’s model behaves when the 3D modeling parameters are changed. This tool evaluates students’ capabilities to build and design a CAD model with good design intent [30].
Another automatized CAD model assessment tool was developed to evaluate student 3D CAD models in mechanical courses. The tool can be implemented using any CAD computer application that supports customization [31]. A recent paper proposed a web-based framework for the assessment of CAD data in an undergraduate design education program, which combines traditional classroom-based teaching with elements of e-learning [32]. Students receive instructions and submit their solutions via a web interface that is evaluated by instructors using a semi-automated analysis of the CAD geometry. Each solution is compared with a reference solution to identify typical modeling errors as part of the assessment process. Students are given feedback on their work at the end of the assessment process.
In the same context, a research paper aimed to demonstrate the benefits of incorporating interactive self-assessment tools into CAD learning approaches like problem-based learning (PBL) [26]. Participants in the study included 111 first-year mechanical engineering students. They were split into two groups, an experimental group and a control group, equally and at random. PBL was utilized by the control group’s students. The experimental group used PBL in conjunction with TrainCAD, a novel interactive technology that enables the self-assessment of 2D CAD models created in AutoCAD. Another article describes an innovative tool that enables the assessment process to be fully automated once students have submitted their work [33]. The recently introduced Virtual Teaching Assistant (ViTA) is a CAD tool-independent platform that is capable of processing drawings exported from various CAD programs with distinct export parameters. Using computer vision methodologies on exported images of drawings, ViTA is able not only to discern the accuracy of a two-dimensional (2D) drawing but also to identify a multitude of critical errors in orthographic and sectional views, including those relating to structural features, hatching, orientation, scale, line thickness, colors, and views. Similarly, AMCAD, a web-based artificial intelligence (AI) system, was developed to autonomously evaluate students’ computer-aided design (CAD) projects and provide support for self-directed learning [34]. It compares two technical drawings and identifies any discrepancies (errors) that may exist. Errors are divided into two distinct categories: those indicated by ‘Missing Line’ in red and those indicated by ‘Erroneous Line’ in blue. AMCAD is compatible with all CAD applications. There is also a paper that addresses the challenge of the classroom assessment and evaluation of three-dimensional CAD models, which is time-consuming and tedious.
In order to automate and customize the assessment process and significantly reduce the time required for assessment, it is proposed to use the API of commercial CAD software [35].
The tools under examination demonstrate a degree of inflexibility, preventing the user from modifying the predefined evaluation criteria. Additionally, the majority of the tools utilize a STEP-based format, which results in the deterioration and loss of certain data, raising questions about the reliability of the evaluation. Furthermore, the tools’ capacity to perform comparisons between three-dimensional files is limited (only two-dimensional comparison is possible), which may reduce their effectiveness. It is therefore proposed, in this research paper, that an assessment tool be created for 3D CAD models that would give teachers greater flexibility in setting assessment criteria according to teaching objectives, which may change from one session to the next and according to the level of the target students.

3. Proposed Approach

3.1. Comparison Process

The SolidWorks feature manager is a key window within SolidWorks, a widely used piece of industrial CAD/CAM software (Dassault Systèmes). It enables users to manage features and operations applied to a 3D model. Essentially, the feature manager in SolidWorks provides the following features. First, it provides a hierarchical structure for organizing the properties of the model. Users can easily visualize and manage the sequence of operations applied to the model by viewing features in a tree-like structure. It also provides a historical view of the features that have been applied. Each feature is listed chronologically, reflecting the order of its creation. This allows users to track operations and make changes easily. Another important aspect of the feature manager is its interactive nature. By selecting, reordering, or changing the parameters for each feature, users can interact with the model’s features. Editing in the feature manager is instantly reflected in the model geometry. The management of relationships between features is also handled by the feature manager. To maintain model consistency during subsequent modifications, it allows users to define geometric relationships, constraints, and parametric relationships. Feature properties such as dimensions, tolerances, and materials are also visible and accessible in the feature manager. This makes it easy to manage and update related information within the model.
To summarize, the SolidWorks feature manager is a unique window that allows users to visualize, organize, and modify the features of a 3D model in a hierarchical manner. Its intuitive approach simplifies the process of designing and modifying models by facilitating feature operations and property management. Thus, the authors think it is possible to benefit from this feature manager, as it can access the needed data [36].
The main idea of this paper is to help teachers evaluate their students during CAD courses in an objective manner. Therefore, the feature manager will be useful to access both the student CAD model and the reference CAD model data. Then, the comparison process begins, which is mainly divided into four main steps, as mentioned in the flowchart shown in Figure 1.
  • General comparison: As a first step, a general evaluation of the model is performed. An overview of the model’s general properties is necessary to decide whether the model is correct or needs some corrections. If the model being evaluated is identical to the reference model, then the mass and the center of gravity must be the same. At the second stage, the material properties and the model parameters (if they exist) must also be identical.
  • Feature-based comparison: The feature data of both models have to be retrieved. The extracted features will then be compared in a two-by-two manner. Two identical features must have the same size, orientation, dimensions, and properties.
  • Geometry- and topology-based comparison: To decide on the similarity of two models, a geometrical and topological comparison must be performed. Two identical models have the same coordinate system and the same faces, edges, and vertices.
  • Checking relationships and dependencies: In fact, two models can have the same topology, geometry, and features yet never coincide because of a wrong constraint or a missing relationship. That is why the dependencies, relationships, and constraints must be carefully verified.
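The four steps above can be sketched as a small pipeline. The dictionaries and key names below are illustrative stand-ins for the data a real CAD API (such as a feature-manager traversal) would supply, not part of the proposed tool:

```python
def compare_models(student, reference):
    """Run the four comparison steps in order and collect a pass/fail
    report. Both arguments are plain dicts standing in for data a
    real CAD API would provide."""
    report = {}
    # Step 1: general comparison (mass, centre of gravity, material, parameters)
    report["general"] = all(student.get(k) == reference.get(k)
                            for k in ("mass", "center_of_gravity",
                                      "material", "parameters"))
    # Step 2: feature-based comparison (features matched pairwise, in order)
    report["features"] = student.get("features") == reference.get("features")
    # Step 3: geometry- and topology-based comparison (B-rep entity counts)
    report["geometry"] = all(student.get(k) == reference.get(k)
                             for k in ("faces", "edges", "vertices"))
    # Step 4: relationships, dependencies, and constraints
    report["constraints"] = (student.get("constraints")
                             == reference.get("constraints"))
    return report
```

A model identical to the reference passes all four checks; any divergence is localized to the step that detected it, which is what enables the per-step feedback described below.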

3.2. Evaluation Method

CAD model evaluation methods involve various techniques to assess the correctness and consistency of the model, including its geometry and topology. One of the existing approaches is to compute the geometric properties of the model’s NURBS surfaces, such as inverse evaluation to determine a 3D query point’s corresponding UV parameter values [37]. There is also a method that involves the use of Model-Based Definition (MBD) and a Multi-Dimensional Attributed Adjacency Matrix (MDAAM) to identify and evaluate the local features of 3D CAD models, considering topological structure, shape, size, tolerance, and surface roughness [38]. Additionally, another paper describes an attempt to address the automatized evaluation of student 3D CAD models. The proposed solution involves using a feature-based design (FBD) approach. The evaluation is based on rules created by the teacher, specifying which elements of the FBD model must be present in the student model and assigning marks based on the conditions met [31]. The evaluation engine reads the CAD model data and generates a student report based on the evaluation results, providing a systematic and automated way to assess student models.
Therefore, the evaluation of student CAD models is common practice in mechanical engineering education. This process helps assess the quality of a student’s work and provides valuable feedback for improvement.
To properly evaluate a given model, it is essential to establish clear criteria that will be used to assess the student’s CAD model. So, creating a checklist of items or features that need to be present in the CAD model is a critical step. These items should be based on the reference model and project requirements, such as specific shapes, dimensions, assembly constraints, etc. Then starts the technical accuracy evaluation of the student CAD model. This includes checking the model’s dimensions, tolerances, material properties, and whether it meets the functional requirements of the project or not. It is also important to evaluate the use of color, texture, and how well the model mimics the object’s real-world appearance. If the given work involves assemblies, evaluating how well the student’s model fits together is important to verify whether it shows a clear understanding of assembly principles.
Once the checklist is prepared, the student’s model can be compared to the reference model, item by item, using the predefined assessment checklist.
Based on the comparison report generated while applying the proposed comparison process in this paper, the checklist that can be used will be mainly composed of the following questions:
  • Are the part’s weight and overall dimensions (length, width, height) correct?
  • Are the part’s shape and geometry (with precise angles, specific parameters) accurate according to the reference model?
  • Does the part look realistic (e.g., finish, color, texture)?
  • Is the material selection correct and are the material properties (e.g., density, modulus of elasticity) correct?
  • Are all necessary features (holes, fillets, chamfers, etc.) present according to the reference model?
  • Are the dimensions and positions of the features accurate and in agreement with the reference model?
  • Is the part correctly fitted and aligned with other components in the assembly (if applicable)?
  • Are all required assembly constraints or relationships correctly defined and used?
  • Are there any problems with interference or collision with other components in the system?
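One way to make such a checklist machine-checkable is to pair each question with a predicate over the comparison report. The items and report keys below are illustrative sketches, not part of the proposed tool:

```python
# Each checklist item pairs a question with a predicate over a
# comparison report; the report keys below are illustrative.
CHECKLIST = [
    ("Weight and overall dimensions correct?",
     lambda r: r["mass_ok"] and r["dims_ok"]),
    ("All required features present?",
     lambda r: not r["missing_features"]),
    ("Feature dimensions match the reference?",
     lambda r: r["feature_dims_ok"]),
    ("Assembly constraints correctly defined?",
     lambda r: r["constraints_ok"]),
]

def run_checklist(report):
    """Evaluate every checklist item against a comparison report,
    returning (question, passed) pairs."""
    return [(question, bool(check(report))) for question, check in CHECKLIST]
```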
The previously detailed checklist also needs to be translated into a concrete evaluation method (Figure 2). Therefore, the attributed mark will be mainly composed of four major sub-marks: the first mark is the mark given while examining just the general information of the model, such as the weight, the dimensions, and the center of gravity; the second part of the mark will be based on the feature manager of the evaluated model; the third part of the mark is obtained according to a general overview of the model’s Brep information; and finally, the existing dependencies and constraints will be scanned in order to determine the fourth part of the mark. Each one of these parts has its own weight (some criteria may be more critical than others), which is fixed according to the course objectives. The weighting may also change according to the institution’s standards and the personal philosophy of the teacher on assessment.
Mark = (MG × CG + MFm × CFm + MBrep × CBrep + MC × CC) / 100
  • MG: the mark obtained by just scanning the General information;
  • MFm: the mark obtained by analyzing the feature manager tree;
  • MBrep: the mark obtained by just scanning the Brep information;
  • MC: the mark obtained by analyzing the constraint, dependency, and interference information;
  • CBrep: the coefficient of the Brep evaluation;
  • CG: the coefficient of the general evaluation;
  • CFm: the coefficient of the feature manager evaluation;
  • CC: the coefficient of the constraint evaluation.
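Under the interpretation that each sub-mark is out of 100 and the coefficients are percentage weights totaling 100 (as in the case study in Section 4, where 10 + 30 + 30 + 30 = 100), the formula can be sketched as a small function:

```python
def final_mark(m_g, m_fm, m_brep, m_c, c_g, c_fm, c_brep, c_c):
    """Weighted final mark:
    Mark = (MG*CG + MFm*CFm + MBrep*CBrep + MC*CC) / 100.
    Sub-marks are out of 100; the coefficients are percentage
    weights assumed to total 100."""
    if c_g + c_fm + c_brep + c_c != 100:
        raise ValueError("coefficients must total 100")
    return (m_g * c_g + m_fm * c_fm + m_brep * c_brep + m_c * c_c) / 100
```

For example, with weights CG = 10 and CBrep = CFm = CC = 30, a model that is perfect everywhere except a general sub-mark of 50 scores (50 × 10 + 3 × 100 × 30) / 100 = 95.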

3.3. Implementation and Validation

The proposed approach will be implemented into an Auto-Assessment CAD Tool that can be used by students to evaluate their work. At the same time as the grade, the proposed tool provides the student with detailed feedback, explaining the strengths and weaknesses of their CAD model based on the assessment criteria.
This encourages the student to make revisions based on these comments. There will also be the option of awarding a separate mark to the revised version or adjusting the original mark. If the student submits revised work, the CAD model will be re-assessed to ensure that the necessary improvements have been made, and the mark will be adjusted accordingly.
The implemented tool mainly uses the original format of the CAD model instead of converting it into a neutral format (STEP, STL, …), which causes the loss of some information. It is implemented using the CAD system API, which makes it applicable to any CAD software with an accessible API (SolidWorks, CATIA, …). The information flow within the proposed tool is shown in Figure 3.

3.3.1. The Developed Tool

The developed tool is called the CAD MAA Tool as an abbreviation of Computer-Aided Design Model Automatized Assessment Tool. It is mainly composed of two interfaces, as shown in Figure 4. The first interface appears when the user launches the created add-in and loads the reference model as well as the model under evaluation (Figure 4a). After choosing the CBrep (the coefficient of the Brep evaluation), the CG (the coefficient of the general evaluation), the CFm (the coefficient of the feature manager evaluation), and the CC (the coefficient of the constraints evaluation) coefficients, and pressing the “Calculate” button of the CAD MAA Tool results interface (Figure 4b), the results interface will be displayed.
The latter interface clearly shows the student’s final mark and sub-marks and a brief evaluation (good level, quite good level, …). This main interface also offers the user the option to reload the model (after making corrections).
The created tool offers the possibility to calculate a global mark, but it also enables the user to calculate sub-marks such as a general evaluation mark, a feature evaluation mark, a geometry evaluation mark, or even a constraint evaluation (Figure 5). The final score is simply a linear combination of the subscores with different coefficients depending on the teacher’s strategy and the objectives of the lesson.
This general evaluation makes it possible to evaluate the model without delving into the geometric and topological details. It globally compares the two models (Figure 5) in terms of the following:
  • Center of mass (X-coordinate, Y-coordinate, Z-coordinate);
  • Volume;
  • Surface area;
  • Mass;
  • Density (includes material comparison);
  • Moment of inertia (Lxx, Lyy, Lzz, Lxy, Lxz, Lyz).
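A minimal sketch of scoring this general comparison, assuming each global property earns equal credit when it matches the reference within a relative tolerance (the property names and the equal weighting are assumptions for illustration, not taken from the tool):

```python
import math

def general_evaluation(student, reference, rel_tol=1e-3):
    """Score the general comparison out of 100: each global property
    that matches the reference within a relative tolerance earns
    equal credit. Property names and equal weighting are assumptions."""
    props = ("mass", "volume", "surface_area", "density",
             "com_x", "com_y", "com_z")
    matched = sum(1 for p in props
                  if math.isclose(student[p], reference[p],
                                  rel_tol=rel_tol, abs_tol=1e-9))
    return 100.0 * matched / len(props)
```

The tolerance matters in practice: mass properties reported by a CAD kernel carry floating-point noise, so exact equality would flag models that are effectively identical.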
However, feature-based evaluation directly compares the feature manager of the two models and verifies whether specific features are created properly or not.

3.3.2. Execution Example

Figure 6 shows an example of a part created by a student compared with one created by their teacher. The global mark given to this part is 58.3. In fact, this part shows some errors; the wrong facets are colored yellow (Figure 6f), and the feature sources of the mistakes are listed in the result details interface (Figure 6e). It clearly shows a list of the invalid, excessive, and missing features. This feedback can therefore be useful for the student to correct their work and for the teacher to better evaluate it. The student can reload the corrected model after examining the details of the errors.
The “Final assessment” is merely an assessment of learning which aims to describe the student’s level on the basis of the mark awarded (Figure 7).

4. Results

Case Study 1

The aim of this case study was to assess a sample of six students in preparation for their CSWA (Certified SolidWorks Associate) certification. The students were asked to model the part shown in the figure provided (Figure 8) using the following information:
  • Unit system: MMGS (millimeter, gram, second).
  • Decimal places: 2.
  • Part origin: arbitrary.
  • A = 66.
  • B = 56.
  • Material: cast carbon steel.
The focus of this course is to master the basic modeling tools for 3D parts. The course therefore has four main objectives:
  • Objective 1: mastering sketch design;
  • Objective 2: managing equation creation and modification;
  • Objective 3: becoming familiar with creating basic SolidWorks features;
  • Objective 4: mastering the techniques of material use.
To this end, the assessment coefficients were set as follows:
  • CG = 10;
  • CBrep = 30;
  • CFm = 30;
  • CC = 30.
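Under the assumption that the global mark is the coefficient-weighted average of the four sub-marks (the paper describes it as a linear combination; the exact normalization here is our assumption), the final marks reported in Table 2 can be reproduced as follows:

```python
# Sketch of the global mark as a linear combination of the sub-marks,
# assuming the final score is the coefficient-weighted average
# (coefficients sum to 100). Sub-mark values below are taken from Table 2.

def global_mark(general, geometry, feature, constraint,
                cg=10, cbrep=30, cfm=30, cc=30):
    total = cg + cbrep + cfm + cc
    return (cg * general + cbrep * geometry
            + cfm * feature + cc * constraint) / total

# Student 6 (Table 2): general=100, geometry=100, feature=100, constraint=84
print(round(global_mark(100, 100, 100, 84), 1))  # 95.2
# Student 1 (Table 2): general=0, geometry=100, feature=92, constraint=50
print(round(global_mark(0, 100, 92, 50), 1))     # 72.6
```

With this weighting, a weak general evaluation (CG = 10) penalizes the final mark far less than a missed objective in the B-rep, feature, or constraint evaluations (each weighted 30).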

5. Discussion

Student 1 created a model that was evaluated as geometrically and topologically 100% identical to the reference model (as shown in Table 2): the two models have the same number of faces, edges, and vertices; the same neighborhood relations; and the same geometrical constraints and dimensions. In terms of features, however, the model scored 92 out of 100: the two models are almost identical, except for a small change in the basic shape due to an error in the dimensions of sketch 1 (reported in the details shown by the CAD MAA). This small error engendered differences in mass, volume, surface area, density, and moment of inertia, which explains the low general evaluation score. Regarding the constraint-based evaluation, the mark obtained is due to the absence of one of the two important equations demanded by the teacher, “A” and “B”. The student therefore did not meet one of the important objectives of the course, “Equation creation and modification management”.
Student 2 was given a higher global mark than student 1. They committed an error that influenced the base thickness (Boss-Extrude 1) but, unlike student 1, respected the model constraints and correctly created the two equations “A” and “B”. The same is true for student 4, who only made a mistake in the creation of the chamfer (Chamfer 1). Geometrically speaking, however, the error made by student 4 has more influence than the one made by student 2, as it affects a larger number of faces; this explains the difference between their geometry-based evaluation marks.
Student 5 was evaluated globally as having a poor level, despite their model obtaining a geometry score of 79 out of 100. They respected neither equation creation nor material choice. In addition, from a feature point of view, they made several mistakes and created some excessive features. Globally, therefore, this student did not meet the aims and objectives of the lesson.
Student 6, however, respected both the geometric and the feature-based representation of the model, and therefore obtained the highest score. The global evaluation shows that the reference model and the student’s model were identical: they had the same material, mass, volume, surface area, and moment of inertia. Their only mistake was forgetting one of the equations (“B”), which is why they achieved 84 out of 100 in the constraint-based evaluation.
The students were evaluated in a precise and detailed way from different points of view, and an overall mark was attributed, whereas previously developed tools [31,32] only enabled an overall evaluation of the students’ CAD models, and others only extract and save the models’ data in a database to facilitate manual evaluation [35]. In addition, the proposed tool presents its results in a clear graphical interface, which represents a significant improvement over previous works. It must be acknowledged, however, that this paper only demonstrates the output generated by the tool; the results were not directly compared with assessments conducted by teachers. Furthermore, to detect plagiarism, information about the file’s creator should be included in the overall comparison; this approach has proven effective in previous works [30,32,35], which were able to detect a significant number of plagiarism cases.

6. Conclusions

The developed tool can be used by both teachers and students. Teachers can use it to evaluate their students’ level of performance during class sessions or exams, while students can use it for self-assessment, for example during exam preparation periods. The tool demonstrated objectivity in assessing the students’ CAD models, as well as rigor in deciding whether or not to award a student points for successfully meeting a particular rule.
This paper introduces an automated assessment tool designed for the specific purpose of evaluating 3D CAD models in mechanical engineering courses. In contrast to manual methods, this tool offers efficient and impartial evaluation, thereby facilitating students’ proficiency in this pivotal area. The assessment criteria encompass four principal elements: general model information, feature manager evaluation, Brep information consideration, and the analysis of dependencies and constraints. The weighting of each element is determined by the course objectives, institutional standards, and instructor preferences, offering unparalleled flexibility compared to existing tools.
Teachers are given flexibility in choosing the evaluation coefficients; they can, for instance, assign 100 to one coefficient and 0 to the other three, depending on the objectives they have set for a particular course.
In future research, a comparative study is planned to validate the chosen evaluation criteria and the resulting scores generated by the “CAD MAA tool”. This comparative analysis will involve comparing the scores given by the tool with those given by experts and teachers. In doing so, we aim to validate the effectiveness and reliability of the automated assessment tool in accurately measuring students’ performance in creating and manipulating 3D CAD models.
In addition, this comparative study will further explore the correlations between the different coefficients used to calculate the overall or global score. By examining these coefficients in relation to each other, we expect to gain insights into how different aspects of model evaluation contribute to the final score. This analysis will not only improve our understanding of the assessment process, but will also inform potential refinements or adjustments to the weightings and criteria used by the CAD MAA tool. Ultimately, this comprehensive investigation will help to continue to improve and validate the automated assessment approach in mechanical engineering education.

Author Contributions

Conceptualization, A.E. and B.L.; Methodology, A.E. and S.B.A.; Software, A.E. and S.B.A.; Validation, A.E., S.B.A. and B.L.; Resources, A.E.; Data curation, A.E.; Writing—original draft, A.E., S.B.A. and B.L.; Writing—review & editing, A.E., S.B.A. and N.H.A.; Visualization, A.E., N.H.A. and A.S.; Supervision, B.L.; Project administration, B.L.; Funding acquisition, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (grant number IMSIU-RP23018).

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Brière-Coté, A.; Rivest, L.; Maranzana, R. 3D CAD model comparison: An evaluation of model difference identification technologies. Comput. Aided. Des. Appl. 2013, 10, 173–195. [Google Scholar] [CrossRef]
  2. Hong, T.; Lee, K.; Kim, S.; Chu, C.; Lee, H. Similarity Comparison of Mechanical Parts. Comput. Aided. Des. Appl. 2002, 2, 759–768. [Google Scholar] [CrossRef]
  3. Osada, R.; Funkhouser, T.; Chazelle, B.; Dobkin, D. Matching 3D Models with Shape Distributions. In Proceedings of the International Conference on Shape Modeling and Applications, Genova, Italy, 7–11 May 2002; pp. 154–166. [Google Scholar]
  4. Ohbuchi, R.; Otagiri, T.; Ibato, M.; Takei, T. Shape-Similarity Search of Three-Dimensional Models Using Parameterized Statistics. In Proceedings of the Pacific Graphics, Beijing, China, 9–11 October 2002; pp. 1–10. [Google Scholar]
  5. Elinson, A.; Nau, D.S.; Regli, W.C. Feature-based Similarity Assessment of Solid Models. In Proceedings of the 4th ACM Symposium on Solid Modeling and Applications, Atlanta, GA, USA, 14–16 May 1997; pp. 297–310. [Google Scholar]
  6. Eltaief, A.; Remy, S.; Louhichi, B.; Ducellier, G.; Eynard, B. Comparison between CAD models using modification ratio calculation. Int. J. Comput. Integr. Manuf. 2019, 32, 996–1008. [Google Scholar] [CrossRef]
  7. Hilaga, M.; Shinagawa, Y.; Kohmura, T.; Kunii, T.L. Topology Matching for Fully Automatic Similarity Estimation of 3D Shapes. In Proceedings of the ACM SIGGRAPH Conference on Computer Graphics, Los Angeles, CA, USA, 12–17 August 2001; pp. 203–212. [Google Scholar]
  8. Ip, C.Y.; Lapadat, D.; Sieger, L.; Regli, W.C. Using Shape Distributions to Compare Solid Models. In Proceedings of the 7th ACM Symposium on Solid Modeling and Applications, SMA ’02, Saarbrücken, Germany, 17–21 June 2002; p. 273. [Google Scholar]
  9. Ibrahim, R.; Pour Rahimian, F. Comparison of CAD and manual sketching tools for teaching architectural design. Autom. Constr. 2010, 19, 978–987. [Google Scholar] [CrossRef]
  10. Cuillière, J.C.; François, V.; Souaissa, K.; Benamara, A.; BelHadjSalah, H. Automatic CAD Models Comparison and Re-meshing in the Context of Mechanical Design Optimization. In Proceedings of the 18th International Meshing Roundtable, IMR, Salt Lake City, UT, USA, 25–28 October 2009; pp. 231–245. [Google Scholar]
  11. Junk, S.; Burkart, L. Comparison of CAD systems for generative design for use with additive manufacturing. Procedia CIRP 2021, 100, 577–582. [Google Scholar] [CrossRef]
  12. Doutre, P.T.; Morretton, E.; Vo, T.H.; Marin, P.; Pourroy, F.; Prudhomme, G.; Vignat, F. Comparison of some approaches to define a CAD model from topological optimization in design for additive manufacturing. Lect. Notes Mech. Eng. 2017, 233–240. [Google Scholar]
  13. Brière-Côté, A.; Rivest, L.; Maranzan, R. Comparing 3D CAD models: Uses, methods, tools and perspectives. Comput. Aided Des. Appl. 2012, 9, 771–794. [Google Scholar] [CrossRef]
  14. Siemens. Available online: https://www.sw.siemens.com/en-US/products/nx/ (accessed on 13 May 2024).
  15. PTC. Available online: https://www.ptc.com/fr/products/creo (accessed on 13 May 2024).
  16. Letaief, M.B.; Tlija, M.; Louhichi, B. An approach of CAD/CAM data reuse for manufacturing cost estimation. Int. J. Comput. Integr. Manuf. 2020, 13, 1208–1226. [Google Scholar] [CrossRef]
  17. Huang, R.; Zhang, S.; Bai, X.; Xu, C.; Huang, B. An effective subpart retrieval approach of 3D CAD models for manufacturing process reuse. Nongye Jixie Xuebao/Trans. Chinese Soc. Agric. Mach. 2017, 48, 405–4012. [Google Scholar] [CrossRef]
  18. Maier, A.M.; Langer, S. Engineering Change Management Report: Survey Results on Causes and Effects, Current Practice, Problems, and Strategies in Denmark; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  19. Tao, S.Q.; He, W. Similarity Assessment for Assembly Model Based on Component Attributed Relational Graph Matching. Appl. Mech. Mater. 2012, 215–216, 270–274. [Google Scholar] [CrossRef]
  20. Chu, C.H.; Hsu, Y.C. Similarity assessment of 3D mechanical components for design reuse. Robot. Comput. Integr. Manuf. 2006, 22, 332–341. [Google Scholar] [CrossRef]
  21. Wang, J.; Jiang, B.; He, Y. Shape-based search of mechanical CAD models for product data management. Int. J. Comput. Appl. Technol. 2010, 37, 125. [Google Scholar] [CrossRef]
  22. Fradi, A.; Louhichi, B.; Mahjoub, M.A.; Eynard, B. A new approach for reusable 3D CAD objects detection, by similarity calculation based on models Bayesian network (MBN). Int. J. Comput. Integr. Manuf. 2021, 34, 1285–1304. [Google Scholar] [CrossRef]
  23. Siddique, Z.; Viswanathan, K.; Chowdhury, S. Shape comparison of 3D models based on features and parameters. In Proceedings of the ASME 2008 International Design Engineering Technical Conferences & Computers and Information in Engineering Conference, Brooklyn, NY, USA, 3–6 August 2008; pp. 1–11. [Google Scholar]
  24. Bonino, B.; Giannini, F.; Monti, M.; Raffaeli, R. Shape and Context-Based Recognition of Standard Mechanical Parts in CAD Models. CAD Comput. Aided Des. 2023, 155, 103438. [Google Scholar] [CrossRef]
  25. Bickel, S.; Sauer, C.; Schleich, B.; Wartzack, S. Comparing CAD part models for geometrical similarity: A concept using machine learning algorithms. Procedia CIRP 2020, 96, 133–138. [Google Scholar] [CrossRef]
  26. Pando Cerra, P.; Fernández Álvarez, H.; Busto Parra, B.; Castaño Busón, S. Boosting computer-aided design pedagogy using interactive self-assessment graphical tools. Comput. Appl. Eng. Educ. 2023, 31, 26–46. [Google Scholar] [CrossRef]
  27. Nutter, P.W.; Pavlidis, V.F.; Pepper, J. Efficient teaching of digital design with automated assessment and feedback. In Proceedings of the 10th European Workshop on Microelectronics Education, EWME, Tallinn, Estonia, 14–16 May 2014; pp. 203–207. [Google Scholar]
  28. Xu, W. Development of Interactive CAD Teaching System. Int. Conf. Ind. Eng. Oper. Manag. 2010, 666–671. [Google Scholar]
  29. Akhtar, S.A.; Warburton, S.; Xu, W. Development and preliminary evaluation of an interactive system to support CAD teaching. In Proceedings of the 2013 IEEE International Symposium on Multimedia, ISM, Anaheim, CA, USA, 9–11 December 2013; pp. 480–485. [Google Scholar]
  30. Jaakma, K.; Kiviluoma, P. Auto-assessment tools for mechanical computer aided design education. Heliyon 2019, 5, e02622. [Google Scholar] [CrossRef] [PubMed]
  31. Bojcetic, N.; Valjak, F.; Zezelj, D.; Martinec, T. Automatized evaluation of students’ cad models. Educ. Sci. 2021, 11, 145. [Google Scholar] [CrossRef]
  32. Krüger, B. Web-based assessment of CAD data in undergraduate design education. In Proceedings of the ASME 2014 12th Biennial Conference on Engineering Systems Design and Analysis, Copenhagen, Denmark, 25–27 July 2014; pp. 1–5. [Google Scholar]
  33. Younes, R.; Bairaktarova, D. ViTA: A flexible CAD-tool-independent automatic grading platform for two-dimensional CAD drawings. Int. J. Mech. Eng. Educ. 2022, 50, 135–157. [Google Scholar] [CrossRef]
  34. Jianwu, L.W.; Yew, L.S.; On, L.K.; Keong, T.C.; Sheng, R.T.Y.; Bin Sani, S.; Agnes, T.H.J. Artificial intelligence-enabled evaluating for computer-aided drawings (AMCAD). Int. J. Mech. Eng. Educ. 2024, 52, 3–31. [Google Scholar] [CrossRef]
  35. Joo, S.H. Assessment of three dimensional CAD models using CAD application programming interface. ASME Int. Mech. Eng. Congr. Expo. Proc. 2018, 5, 1–6. [Google Scholar]
  36. Eltaief, A.; Louhichi, B.; Remy, S.; Eynard, B. A CAD Assembly Management Model: Mates Reconciliation and Change Propagation. In Design and Modeling of Mechanical Systems—III: Proceedings of the 7th Conference on Design and Modeling of Mechanical Systems, CMSM’2017, Hammamet, Tunisia, 27–29 March 2017; Lecture Notes in Mechanical Engineering; Springer: Berlin/Heidelberg, Germany, 2018; pp. 459–471. [Google Scholar]
  37. Liu, W.; Bao, Z.; Yang, C. A parallel method of NURBS inverse evaluation for 3D CAD model quality testing. In Proceedings of the 14th International Conference on Graphics and Image Processing (ICGIP 2022), Nanjing, China, 21–23 October 2022; SPIE: Bellingham, WA, USA, 2023; Volume 12705, p. 39. [Google Scholar]
  38. Ding, S.; Feng, Q.; Sun, Z.; Ma, F. MBD Based 3D CAD Model Automatic Feature Recognition and Similarity Evaluation. IEEE Access 2021, 9, 150403–150425. [Google Scholar] [CrossRef]
Figure 1. General comparison process flowchart.
Figure 2. Evaluation process.
Figure 3. The information flow within the proposed tool.
Figure 4. Developed tool’s structure.
Figure 5. Evaluation details interface.
Figure 6. Test results.
Figure 7. Different assessment levels.
Figure 8. The case study reference model.
Table 1. Commercial CAD system comparison tools.

| Software | Comparison Tool | Limitations |
|---|---|---|
| Autodesk AutoCAD (2023) | “DWG Compare”, “Drawing Compare”, and “Revision Cloud” | Difficulty in detecting small changes between the different versions of DWG files. |
| Dassault Systèmes CATIA V5 | “Product Compare” and “Product Structure Compare” | Limited ability to manage complex changes to assemblies and associations. Detecting small features and part changes is challenging. |
| Siemens NX (2022) | “Model Compare” | Limited ability to compare advanced geometric or complex parametric models. Difficulty in dealing with topology changes and structural modifications [14]. |
| PTC Creo 11 | “Compare Geometry” | Reduced accuracy in detecting minor geometric differences between models. Limited management of changes to the design based on constraints and parameters [15]. |
| SolidWorks (2024) | “Compare” | Limitations in recognizing small geometric and dimensional differences. Difficulty in handling major changes in complex assemblies. |
Table 2. The assessment results for a sample of 6 students.

| Student’s Model | General Evaluation | Feature-Based Evaluation | Geometry-Based Evaluation | Constraint-Based Evaluation | Final Evaluation |
|---|---|---|---|---|---|
| Student 1 | 0 | 92 | 100 | 50 | 72.6 (Good level) |
| Student 2 | 17 | 92 | 97 | 100 | 88.4 (Very Good level) |
| Student 3 | 34 | 100 | 100 | 100 | 93.4 (Very Good level) |
| Student 4 | 17 | 92 | 95 | 100 | 87.8 (Very Good level) |
| Student 5 | 17 | 7 | 79 | 0 | 27.5 (Poor level) |
| Student 6 | 100 | 100 | 100 | 84 | 95.2 (Very Good level) |

Share and Cite

MDPI and ACS Style

Eltaief, A.; Ben Amor, S.; Louhichi, B.; Alrasheedi, N.H.; Seibi, A. Automated Assessment Tool for 3D Computer-Aided Design Models. Appl. Sci. 2024, 14, 4578. https://doi.org/10.3390/app14114578
