2. Materials and Methods
To implement augmented reality acupuncture surgery navigation, this study proposes a novel method named Six-Point Landmark-Based AR Registration, which addresses several critical factors. First, the irregularity and variability in the shape of the human head make determining its central point and three-dimensional dimensions a complex task. Additionally, minimizing disruptions during the procedure is essential to ensure smooth acupuncture surgery. To meet these requirements, the proposed method is specifically designed to accommodate the diversity and irregularity of head shapes.
The Six-Point Landmark-Based AR Registration method captures the coordinates of six key points on the head model within a Unity environment. These key points include the left ear, right ear, top of the head, chin, back of the head, and tip of the nose (see
Figure 1). Using this data, the system calculates the head model’s dimensions and central point. This information is then used to align the acupuncture point navigation model with the recipient’s head, forming a precise acupuncture navigation system (see
Figure 2).
The entire process is automated, eliminating the need for physician intervention during the alignment phase. Adjustments can be made using gestures or a remote controller after the initial alignment, ensuring minimal interference during the procedure. The steps involved in the Six-Point Landmark-Based AR Registration method are illustrated in
Figure 3.
Figure 3 illustrates the fixed sequence of three scan iterations in the Six-Point Landmark-Based AR Registration process. Each iteration collects head axis data using OnTriggerEnter() and OnTriggerExit(), with conditional branches assigning distinct roles: (1) model scaling, (2) recipient model alignment, and (3) acupuncture model alignment. This process is deterministic and not a dynamic programmatic loop.
To clarify the scanning logic, Unity’s OnTriggerEnter() and OnTriggerExit() functions are used to detect the head model boundaries. During scanning along the X-axis (right to left), OnTriggerEnter() captures the right surface to define Plane(CR), and OnTriggerExit() captures the left surface to define Plane(CL). Along the Y-axis (bottom to top) and Z-axis (front to back), they likewise define Plane(CB)/Plane(CT) and Plane(CN)/Plane(CH), where CN denotes the nose-tip (front) boundary and CH the occiput (back) boundary. These six boundary planes are used for registration and alignment in the subsequent steps.
In our Unity implementation, the OnTriggerEnter() and OnTriggerExit() functions are used to detect when the recipient model intersects with predefined scanning planes. These are standard Unity event functions triggered by object collisions within collider boundaries. Specifically, OnTriggerEnter() captures the recipient model’s entry into the detection region, while OnTriggerExit() marks the exit point. By iterating this process across the X-, Y-, and Z-axes (left–right, top–bottom, front–back), the system defines six bounding planes for alignment. This process is deterministic, rule-based, and requires no parameter tuning, making it reproducible in any Unity environment. Developers familiar with Unity can easily adapt the logic based on publicly available documentation.
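The boundary-plane detection can be illustrated with a short, language-agnostic sketch (Python is used here purely for illustration; the actual implementation uses Unity C# collider callbacks). Sweeping a detection plane along one axis until the model enters, and then exits, the collider is equivalent to taking the minimum and maximum vertex coordinates along that axis. The function name, dictionary keys, and axis orientations below are illustrative assumptions, not part of the published implementation.

```python
def bounding_planes(vertices):
    """Axis-aligned boundary planes of a scanned head model.

    Stands in for the Unity trigger sweep: along each axis, the coordinate
    where OnTriggerEnter() would fire corresponds to the first surface hit
    (minimum), and OnTriggerExit() to the last surface left (maximum).
    Which extreme maps to which anatomical side depends on the scene's
    axis orientation, which is assumed here.
    """
    xs, ys, zs = zip(*vertices)
    return {
        "CR": min(xs), "CL": max(xs),  # right / left boundaries (X-axis)
        "CB": min(ys), "CT": max(ys),  # bottom / top boundaries (Y-axis)
        "CN": min(zs), "CH": max(zs),  # nose / occiput boundaries (Z-axis)
    }
```

Because the six planes are axis-aligned, the whole scan reduces to per-axis extrema over the model's vertices, which is why the process is deterministic and needs no parameter tuning.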
2.1. Landmark-Based Scaling for Head Model Dimensions
To accommodate variations in recipient sizes, an algorithm is proposed to adjust the three-dimensional dimensions of the acupuncture point navigation model. The definitions of these key anatomical landmarks and notations are summarized in
Table 1. Using the coordinates of the six key points obtained during the initial scan, the three-dimensional dimensions of the navigation model are calculated. This includes measuring the distance from the right ear to the left ear (ALLR), from the chin to the top of the head (ALTB), and from the tip of the nose to the occiput (ALHN).
In the second scan, the same scanning method is applied to obtain the corresponding measurements for the recipient’s head model: left ear to right ear (LLR), chin to top of the head (LTB), and nose tip to occiput (LHN). Size matching calculations are then performed using this acquired data, as illustrated in Algorithm 1.
The rationale behind this scaling strategy is to proportionally align the recipient’s head model with the reference navigation model based on three key anatomical dimensions—left–right, top–bottom, and front–back—measured using linear distances between designated landmarks. This approach assumes that maintaining proportionality along these anatomical axes preserves the relative spatial arrangement of acupuncture points for effective alignment.
Algorithm 1. Size matching algorithm

Input: LLR, LTB, LHN, ALLR, ALTB, ALHN
Procedure SizeMatch(LLR, LTB, LHN, ALLR, ALTB, ALHN)
  AMLR = LLR/ALLR
  AMTB = LTB/ALTB
  AMHN = LHN/ALHN
  A.transform.localScale = new Vector3(AMLR, AMTB, AMHN)
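Algorithm 1 reduces to three per-axis ratios. The following Python sketch shows the arithmetic (the measurement values are hypothetical; in Unity the resulting triple would be assigned to the model's transform.localScale):

```python
def size_match(L_LR, L_TB, L_HN, AL_LR, AL_TB, AL_HN):
    """Per-axis scale factors that map the acupuncture navigation model
    onto the recipient's measured head dimensions (Algorithm 1).

    L_*  : recipient head distances (left-right, top-bottom, head-nose)
    AL_* : corresponding distances on the navigation model
    """
    return (L_LR / AL_LR, L_TB / AL_TB, L_HN / AL_HN)

# Hypothetical example: recipient head 150 mm wide vs. a 160 mm model, etc.
scale = size_match(150, 220, 190, 160, 200, 200)
```

Because each axis is scaled independently, the relative left-right, top-bottom, and front-back proportions of the acupoint layout are preserved, matching the proportionality assumption stated above.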
To mitigate the impact of head rotation or tilt, we manually adjust the recipient’s head model in Unity before executing the scan alignment process. Specifically, the nose is positioned at the central front, and the chin is placed at the central bottom of the virtual space (as shown in
Figure 4). This preprocessing step helps reduce the rotational deviation, ensuring that the principal axes of the head model are reasonably aligned with the coordinate system.
We acknowledge that this approach introduces a minor degree of human adjustment error. However, such variability is considered as part of the overall system error and is included in the final precision evaluation.
2.2. Center Alignment in Six-Point Landmark-Based AR Registration
To address the challenge of defining and calculating the center point of the human head, this paper introduces the following concept: The model comprises six vertices, forming three pairs. It is assumed that connecting each pair of vertices along their corresponding axes creates a midpoint plane. Mathematically, the midpoint of each landmark pair defines a plane perpendicular to one of the coordinate axes. The intersection of the three planes derived from the left to right, top to bottom, and front to back midpoints yields the estimated geometric center of the head. In three-dimensional space, such a point can be computed by solving the intersection of three orthogonal planes, assuming their normal vectors are aligned with the X, Y, and Z axes, respectively. The intersection of these three midpoint planes is defined as the assumed center point of the irregular model (as indicated by the white sphere in
Figure 1).
Two possible strategies were considered for defining the center point: (1) using the geometric center of the cube defined by the six anatomical landmarks (left, right, top, bottom, front, back), and (2) using the centroid of the triangle formed by the three midpoints (CLR, CTB, and CHN). We ultimately chose the cube-based geometric center, as it incorporates all six spatial boundaries and produces a more balanced and reproducible alignment reference. This point is used to translate both the recipient head model and the acupuncture navigation model to the origin of the coordinate system.
Rather than directly manipulating this center point, the algorithm calculates the coordinate differences between the diagonal vertices and adjusts the model object accordingly. This approach effectively relocates the center point of the head to the specified coordinates. Once the method for determining and relocating the center point is established, the center points of both the acupuncture point navigation model and the recipient’s head model are moved to the designated coordinates, thereby completing the alignment of the acupuncture navigation model’s center point.
To achieve the center point alignment as described above, a center point moving algorithm is proposed during the second and third scans, as detailed in Algorithm 2. In this context, middlePlane(Plane(C1), Plane(C2)) refers to the plane located halfway between two parallel planes, computed by averaging the positions of Plane(C1) and Plane(C2). The function intersectionPoint(Plane1, Plane2, Plane3) denotes the 3D coordinate point at which three orthogonal planes intersect, defining the model center point. This algorithm performs one iteration each on the recipient’s head model and the acupuncture point navigation model, resulting in the successful alignment of their center points.
To ensure accurate alignment, the center point of each 3D model is translated to the world origin (0, 0, 0). This is accomplished by subtracting the computed center point vector CC from the model’s position. In the Unity implementation, this corresponds to the statement H.transform.localPosition -= CC, which relocates the entire model so that its computed center coincides with the global origin.
Algorithm 2. Center point moving algorithm

Input: CL, CR, CT, CB, CH, CN
Procedure CenterPointMove(CL, CR, CT, CB, CH, CN)
  Plane(CLR) = middlePlane(Plane(CL), Plane(CR))
  Plane(CTB) = middlePlane(Plane(CT), Plane(CB))
  Plane(CHN) = middlePlane(Plane(CH), Plane(CN))
  CC = intersectionPoint(Plane(CLR), Plane(CTB), Plane(CHN))
  H.transform.localPosition = H.transform.localPosition - CC
  // Move the model by -CC to align its center point with the world origin (0, 0, 0)
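Since the six boundary planes are axis-aligned, middlePlane() reduces to averaging two coordinates and intersectionPoint() to stacking the three mid-plane coordinates into one point. The sketch below mirrors Algorithm 2 in plain Python (function names are illustrative stand-ins for the Unity implementation):

```python
def middle_coordinate(c1, c2):
    # middlePlane(): for two parallel axis-aligned planes, the mid-plane
    # lies at the average of their boundary coordinates along that axis.
    return (c1 + c2) / 2.0

def center_point(CL, CR, CT, CB, CH, CN):
    # intersectionPoint(): three mutually orthogonal mid-planes intersect
    # at the point whose X, Y, Z are the three mid-plane coordinates.
    return (middle_coordinate(CL, CR),
            middle_coordinate(CT, CB),
            middle_coordinate(CH, CN))

def move_to_origin(position, CC):
    # Unity equivalent: H.transform.localPosition -= CC
    return tuple(p - c for p, c in zip(position, CC))
```

Running this once on the recipient's head model and once on the acupuncture navigation model brings both centers to (0, 0, 0), completing the center alignment.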
2.3. Implementation
This subsection describes the implementation process of the augmented reality acupuncture navigation method, illustrated in the schematic diagram shown in
Figure 5.
Step 1: A 3D scanner (Artec Eva, Artec 3D, Senningerberg, Luxembourg) is used to scan the recipient’s head, which is then processed into a 3D object model. This model is imported into a Unity project, where its pose (position and orientation) is manually adjusted. The Unity project already contains the acupuncture point navigation model, which was obtained from Chiyuan Bookstore, a specialized retailer for traditional Chinese medicine books and acupuncture equipment. This model has been post-processed to accurately represent the acupuncture points.
Step 2: The project is then imported into AR glasses (Magic Leap One) and activated. The program automatically executes the Six-Point Landmark-Based AR Registration method to overlay the acupuncture navigation model. Any errors in the overlay can be adjusted in real time through the user interface.
Step 3: Using the user interface, the acupuncture navigation model is moved to the actual position on the recipient’s head, thereby completing the alignment with the subject (as shown in
Figure 6).
At this stage, the acupuncture navigation model can be utilized for acupuncture procedures or on-site teaching with the recipient. Users have the option to remove the human head model, displaying only the acupuncture point model. If issues arise, such as loss of the AR glasses’ internal coordinates or unintended movement by the recipient causing desynchronization between the real and virtual environments, Step 3 can be refined to restore accurate alignment.
The process performs three structured scan iterations. Each scan uses the same trigger-based mechanism to obtain head axis coordinates. The first scan enables scaling of the acupoint navigation model; the second scan aligns the recipient’s head model to the origin; and the third aligns the acupuncture navigation model. This logic is represented in the revised flowchart (
Figure 3).
3. Results
This section investigates and discusses the measurement errors associated with combining augmented reality glasses with the Six-Point Landmark-Based AR Registration method.
Firstly, since the application of augmented reality in surgical navigation relies on practitioners using the information displayed on the screen during procedures, the stability of this virtual information in the real environment can significantly impact the needling process and subsequent measurements of acupuncture point errors. Therefore, it is essential to assess the stability of the acupuncture navigation model.
Furthermore, to evaluate the effectiveness of the center point alignment and size matching methods within the Six-Point Landmark-Based AR Registration approach, this study will measure the errors between the acupuncture points in the navigation model and those identified by acupuncturists on the recipient’s head. This comparative analysis will provide insights into the accuracy and reliability of the proposed methods.
3.1. Stability Measurement of Acupuncture Navigation Model
To confirm the stability of the 3D model presentation in the real environment through augmented reality glasses, this study involved practical testing with one traditional Chinese medicine student acting as the point locator under the supervision of a licensed TCM physician with extensive clinical experience in acupuncture. Five recipients participated in the test. Stability measurements were performed on eight key acupuncture points across four orientations on each recipient’s head. These acupuncture points included Sù liáo, Zǎn zhú, Shuǐ gōu, Yìn táng, Jiǎo sūn, Tīng gōng, Fēng fǔ, and Bǎi huì, as shown in
Figure 7. The steps for measuring the stability of the acupuncture navigation model are as follows:
Step 1: The acupuncturist locates and marks the acupuncture points on the recipient’s head, adhering to anatomical principles.
Step 2: The Six-Point Landmark-Based AR Registration method is employed to align the acupuncture navigation model with the recipient’s head.
Step 3: The positional errors of each acupuncture point are measured, with each point being assessed ten times. Notably, when measuring points on the sides and rear of the head, adjustments are made to the imaging position of the acupuncture navigation model to ensure proper alignment with the recipient. This adjustment helps eliminate potential coordinate calculation errors caused by significant movements of the augmented reality glasses. The standard deviation data for the positional errors of the acupuncture points is recorded in
Figure 8.
3.2. Acupoint Error Measurement in the Acupuncture Navigation Model
Following the validation of the accuracy of the augmented reality model, this study conducted practical testing to assess the precision of the Six-Point Landmark-Based AR Registration method in real-world applications. Ten recipients were recruited for this testing, and the same eight acupuncture points mentioned in
Section 3.1 were utilized for precision measurements. The steps for measuring acupoint errors in the acupuncture navigation model are outlined as follows:
Step 1: Instruct the acupuncturist to locate and mark the acupuncture points on the recipient’s head according to anatomical principles.
Step 2: Employ the Six-Point Landmark-Based AR Registration method to align the acupuncture navigation model with the recipient’s head.
Step 3: Conduct six measurements for each acupuncture point on each of the ten recipients to assess precision.
The statistical data obtained from these measurements is presented in
Figure 9.
3.3. Analysis of Measurement Results
This subsection examines the stability of the augmented reality presentation and the associated acupuncture point errors.
3.3.1. Stability Analysis of Acupuncture Navigation Model
As illustrated in
Figure 8, the range of drift observed in the projection of three-dimensional objects using augmented reality glasses is minimal. The largest standard deviation recorded is 2.59 mm for Recipient B, with 87.5% of the data exhibiting a standard deviation of less than 2 mm. These findings further validate the practical feasibility of the acupuncture navigation model. The following section will provide a more detailed analysis of the error measurement data related to the Six-Point Landmark-Based AR Registration method.
3.3.2. Acupoint Error Analysis
This subsection primarily analyzes the accuracy of this study’s experiment. First, the variability in acupuncture point locations, as reported in Traditional Chinese Medicine studies, is reviewed. Molsberger et al. [
14] suggested that acupuncture points are not fixed but instead represent areas. In randomized controlled experiments, they recommended a 2-cun (approximately 5 cm) difference between acupuncture points to observe distinct effects, indicating some variability in their placement. Their data also showed that different acupuncturists had a distribution of 11.27 square centimeters when locating GB14 [
14].
Returning to this study’s results, we note that in
Figure 9, the average error for the eight acupuncture points across the four facial orientations was less than 8 mm. This is significantly smaller than the 2-cun range mentioned in the literature, confirming the accuracy of the 3D scanning and positioning method. Points near detailed facial structures like Sù liáo and Shuǐ gōu had lower errors, possibly due to more distinct anatomical features aiding alignment. In contrast, points like Jiǎo sūn, near the ear, showed higher variability, likely due to differences in ear size and position among recipients.
These facial acupoints (like Sù liáo and Shuǐ gōu) are located closer to the center of the face, where numerous and well-defined anatomical landmarks—such as the nose contour, philtrum groove, and upper lip boundary—are concentrated. These rich features enhance the precision of 3D scanning and image alignment during registration. In contrast, acupoints located in peripheral regions, such as the top, back, or sides of the head, often lack such detailed reference structures, making accurate positioning more difficult. This anatomical distinction explains the superior accuracy observed in central facial areas.
In
Table 2, the accuracy of the proposed augmented reality (AR) acupuncture navigation system is evaluated by measuring the average error, standard deviation, and 95% confidence intervals (CI) for each acupoint. The results show that the system achieved an overall average error of 5.01 mm with a standard deviation of 2.64 mm, and a 95% CI ranging from 4.27 to 5.76 mm. These findings indicate a consistent level of precision across different acupoints.
Among the acupoints, Sù liáo exhibited the lowest average error of 1.63 mm with a standard deviation of 1.04 mm and a narrow 95% CI of 1.23–2.03 mm, demonstrating the highest level of accuracy. Shuǐ gōu also performed well, with an average error of 2.8 mm, a standard deviation of 0.87 mm, and a CI of 2.45–3.15 mm, indicating high consistency. In contrast, Jiǎo sūn and Tīng gōng had higher average errors of 7.25 mm and 6.02 mm, respectively, and wider confidence intervals (6.15–8.35 mm and 5.30–6.73 mm), suggesting greater variability and anatomical complexity in these regions. These confidence intervals provide statistical support for evaluating measurement stability and enable clearer comparisons among acupoints.
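The per-acupoint statistics above follow the standard normal-approximation form, mean ± 1.96·SD/√n. A minimal Python sketch of that computation (the sample values in the test are hypothetical, not the study's raw data):

```python
import math

def mean_sd_ci95(samples):
    """Mean, sample standard deviation, and normal-approximation
    95% confidence interval (mean +/- 1.96 * SD / sqrt(n))."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    half = 1.96 * sd / math.sqrt(n)
    return mean, sd, (mean - half, mean + half)
```

A narrow interval, as for Sù liáo, indicates both a small error and consistent repeated measurements; a wide one, as for Jiǎo sūn, reflects higher between-measurement variability.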
To assess the consistency of acupuncture point localization among different participants, a one-way analysis of variance (ANOVA) was performed using the average errors across eight acupoints for each of the ten subjects. The analysis tested whether the error measurements varied significantly between participants. As shown in the ANOVA summary table, the F-value was 1.14509, and the p-value was 0.343769, which is greater than the standard alpha level of 0.05. This result indicates that there is no statistically significant difference in localization error among the ten subjects, suggesting that the proposed system maintains a stable level of precision regardless of individual user variation. This finding supports the robustness and usability of the system in practical applications. The consistency of results across subjects also confirms that the training curve or individual experience has minimal impact on localization performance when using the proposed AR navigation method.
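The one-way ANOVA used above reduces to the ratio of between-group to within-group mean squares. A pure-Python sketch of the F statistic (the p-value is then read from the F distribution with k − 1 and N − k degrees of freedom; the example groups in the test are hypothetical, not the study's per-subject errors):

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over k groups of observations.

    F = (between-group mean square) / (within-group mean square),
    with degrees of freedom (k - 1, N - k) for the p-value lookup.
    """
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))
```

In the study's setting, each group holds one subject's localization errors; an F value whose p-value exceeds 0.05, as reported, means between-subject differences are indistinguishable from within-subject measurement noise.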
Overall, the data shows that while the system performs well across most acupoints, some locations may present more challenges in achieving precise localization. The relatively low standard deviations for most acupoints, however, suggest that the AR system offers reliable and repeatable accuracy in guiding acupuncture point localization. This level of precision, averaging 5.01 mm, is well within clinically acceptable ranges for acupuncture treatments.
4. Discussion
4.1. System Strengths and Innovations
This section discusses the strengths of the proposed augmented reality (AR) acupuncture navigation system, with emphasis on its markerless design, hands-free operation, and clinical practicality.
A key strength of our approach is the elimination of external markers or image targets near the patient, enabling unobstructed acupuncture procedures and reducing setup complexity. The system uses AR glasses (Magic Leap One) to enhance practitioner mobility, allowing dynamic viewing angles and intuitive interaction. This is particularly valuable in acupuncture treatments where freedom of movement is essential. By integrating AR surgical navigation principles, our system improves consistency in acupoint localization while maintaining a natural clinical workflow.
4.2. Comparison with Existing Acupuncture Localization Systems
To contextualize the performance and clinical applicability of our system,
Table 3 presents a detailed comparison of acupuncture localization systems across representative studies, including Chang and Zhu [
11], Chan et al. [
12], and Chiou and He [
13].
Figure 10 visually illustrates representative systems from prior studies and our proposed method. Subfigure A shows facial landmark detection based on image datasets in [
11], while Subfigure B highlights robot-assisted localization on real forearms with variable skin conditions in [
12]. Subfigure C presents tablet-based AR navigation using a mannequin head as proposed in [
13]. In contrast, subfigure D depicts our smart-glasses-based system operating on real human subjects, with a 3D head model aligned in real time. This visual comparison highlights key distinctions in subject testing, interaction modes, and anatomical realism, further supporting the tabulated findings in
Table 3.
In terms of the target body region, most prior works focused on the face [
11,
13] or upper limbs [
12], whereas our study addresses both the head and face region, which contains complex 3D contours and variable hair coverage. Our system employs smart-glasses-based augmented reality (AR) navigation using Magic Leap One, combined with high-resolution 3D scans from Artec Eva, offering real-time interaction and precise anatomical alignment.
Regarding the model construction approach, we use a landmark-based scanning method to generate a personalized 3D head model. In contrast, Chang and Zhu [
11] relied on facial image datasets for feature-point extraction, which lacked real anatomical testing. Chan et al. [
12] implemented a CNN and inch-based mesh estimation on robotic arms for upper-limb points, while Chiou and He [
13] proposed a Vuforia-based AR system for face localization, but tested only on a mannequin head.
Importantly, our study was validated on ten real human subjects, while [
11] used only image datasets and [
13] used synthetic models. Though [
12] tested on real forearms, the anatomical diversity was not statistically validated. In terms of measured acupoints, our system covered eight cranial points, compared to five in [
11,
13], and five upper-limb points in [
12].
Our system supports marker-free alignment and achieves full real-time capability with smart-glasses display. While [
12] had partial real-time interaction due to robotic delay, [
11,
13] lacked dynamic feedback. Personalization is another critical dimension—our method builds a patient-specific model, whereas [
12] supports only partial scaling and [
11,
13] do not support personalization.
One of the key advantages of our system lies in viewpoint mobility. Unlike [
12], which is constrained by a fixed robotic arm, or [
11], which is limited to static images, our device allows dynamic, multi-angle visualization of acupoints. This mobility is particularly beneficial in clinical scenarios requiring flexible observation.
Quantitatively, our system achieved a localization error of 5.01 ± 2.64 mm with full statistical reporting (average, SD, 95% CI, and ANOVA). Chan et al. [
12] reported an offset of 58.5 mm, and Chiou and He [
13] reported a range of 0.6–3.9 mm, but without confidence intervals. Chang and Zhu [
11] did not report quantitative localization accuracy.
In terms of user interaction, our wearable smart-glasses system supports intuitive, hands-free operation, whereas the other systems relied on desktops [
11], robotic arms [
12], or tablets [
13], with limited flexibility. Additionally, only our system and [
13] integrate AR directly, whereas [
11,
12] do not provide immersive AR feedback.
Regarding system strengths, our approach stands out with medical-grade accuracy and real-world usability, validated on real patients. In contrast, [
11] emphasized stable facial landmarks without clinical validation, [
12] focused on integrating anatomical and AI components but lacked human testing, and [
13] claimed novelty in AR application but had limited interaction.
However, each system has limitations. Our system requires expensive hardware and a 3D scanning setup, which may limit its accessibility. Ref. [
11] lacks any clinical validation, [
12] is theoretical without confirmed human testing, and [
13] is restricted by limited interaction and the use of a mannequin head.
In summary, this comparison shows that our system provides high accuracy, real-time interactivity, patient-specific modeling, and full AR integration with clinical-grade validation, features that are only partially or not addressed in previous works [
11,
12,
13].
4.3. Comparison with General AR Surgical Navigation Systems
Table 4 presents a functional and technical comparison of the proposed system with five representative AR-based surgical navigation systems [
16,
17,
18,
19,
20], highlighting differences in registration methods, viewpoint flexibility, interaction modes, and implementation costs.
In terms of AR device types, our system employs the Magic Leap One smart glasses, supporting hands-free use and dynamic overlays. In contrast, Liu et al. [
16] and Kalavakonda et al. [
20] utilize HoloLens or tablet-based systems, which, although functional, vary in portability and depth perception. Other systems, such as those from Zhu et al. [
18] and Ma et al. [
19], rely on see-through helmets or custom displays, which may not be widely available in clinical settings.
The registration method is a key differentiator. Our method applies a six-point anatomical landmark-based approach that is markerless and does not require external image targets. In comparison, Liu et al. [
16] and Zhang et al. [
17] rely on image target-based registration. Zhu et al. [
18] and Ma et al. [
19] use fiducial markers, and Kalavakonda et al. [
20] combine isosurface and Marching Cubes-based registration. These traditional methods require physical markers or preprocessed models, which can obstruct procedures or increase preparation time.
SLAM or tracking capability is another important feature for AR stability. Most existing systems use marker-based or optical tracking mechanisms. Zhang et al. [
17] integrate both image markers and optical tracking to enhance precision. Our system, however, does not currently incorporate SLAM; instead, it leverages a fixed registration once the scan is complete, which simplifies deployment but may affect robustness in dynamic environments. Kalavakonda et al. [
20] similarly do not use SLAM.
Viewpoint freedom is highest in our system, as the use of wearable smart glasses allows users to move naturally around the patient without losing the overlay. In contrast, systems with fixed displays or helmets [
17,
18,
19] limit the user’s movement. HoloLens-based systems [
16,
20] provide moderate mobility, depending on marker visibility.
Interaction modes also vary. Our system supports gesture-based and controller-based interaction with the AR interface. In contrast, tablet-based systems [
16,
17] rely on touchscreen input, while some systems [
18,
19] offer no interaction support at all. Kalavakonda et al. [
20] utilize HoloLens gestures, which allow limited head or hand movement for control.
From an implementation cost perspective, our system is categorized as low-to-medium due to the need for a 3D scanner and AR glasses. In contrast, Ma et al. [
19] and Kalavakonda et al. [
20] report high implementation costs due to custom tracking or reconstruction equipment, while the others fall in the medium range.
Overall, compared with existing AR surgical navigation systems, our method offers unique advantages in mobility, interaction flexibility, and setup efficiency by adopting markerless scanning and commercial wearable devices. While not yet integrated with SLAM, the system remains suitable for procedures that benefit from open-viewpoint precision and streamlined operation.
4.4. Accuracy Analysis and Anatomical Considerations
Despite an average error of 5.01 mm, this level of deviation is acceptable within clinical acupuncture tolerances. It is important to note that due to the inherent differences in head shapes among individuals, perfect anatomical overlap between the recipient’s model and the acupuncture navigation model cannot be guaranteed unless individual DICOM scans are obtained. Even under ideal technical conditions, slight mismatches in head geometry contribute to baseline alignment discrepancies.
Outliers, such as deviations exceeding 7 mm near points like Jiǎo sūn, are likely due to individual anatomical differences—especially in the ear and temple regions. Such variation is well documented in acupuncture research [
6,
14,
15] and is clinically tolerated because therapeutic outcomes are more closely tied to meridian alignment than to pinpoint accuracy.
Our algorithm uses proportional scaling based on six anatomical landmarks, enabling model alignment through simple translation and scaling without iterative optimization. This reduces geometric distortion and minimizes cumulative error. Although ideal registration may be limited by differences between the head model and patient anatomy, the system ensures practical reliability without requiring individualized DICOM-based modeling.
4.5. Preliminary User Feedback
In addition to technical validation, we collected preliminary feedback from medical students and non-acupuncturists during system testing. Participants found the system intuitive, especially the hands-free operation via head and voice gestures. They appreciated not having to manipulate external screens or tracking markers, which allowed greater focus on the acupuncture task. Although formal usability evaluations were not conducted in this phase, the positive feedback suggests strong user acceptance, warranting future studies using standardized scales such as SUS (System Usability Scale).
4.6. Limitations and Future Work
The proposed system has several limitations. First, it relies on commercial hardware (e.g., Magic Leap One and Artec 3D scanner), which may not be widely available or affordable for all clinics. Second, because the system uses a generic head model, anatomical differences between individuals may introduce baseline mismatches, especially in patients with cranial deformities or surgical history. Third, light sensitivity or hair interference may affect scan quality in some users.
The Artec Eva scanner offers submillimeter accuracy, with intra-scanner reliability reaching 0.08 mm and average deviation on real subjects around 0.20 mm, as reported by Moerman et al. [
21]. Its internal drift correction and auto-alignment mechanisms contribute to consistent results across users [
22]. Magic Leap One employs SLAM-based visual-inertial tracking [
23]; while precise tracking error metrics are rarely published, available reports indicate millimeter-level accuracy acceptable for AR overlay. However, the publicly available literature offers limited data regarding inter-user variability and drift correction performance for Magic Leap One. Therefore, further investigation is warranted to evaluate system robustness across different users, assess potential overlay drift over time, and ensure sustained spatial accuracy in long-term clinical applications.
5. Conclusions
This study presents a novel augmented reality (AR) acupuncture navigation method using Six-Point Landmark-Based AR Registration techniques. The research addresses the challenges associated with acupuncture point localization by incorporating AR technology, allowing for a more dynamic and adaptable approach compared to traditional methods conducted on fixed platforms.
The findings demonstrate that the accuracy of the acupuncture navigation model achieved an average error of 5.01 ± 2.64 mm, a commendable result when utilizing augmented reality glasses. While some previous studies reported smaller average errors, they were limited to stationary settings. In contrast, the current study showcases the practicality and effectiveness of AR technology in real-world environments, enhancing the overall acupuncture procedure.
In conclusion, the integration of augmented reality in acupuncture navigation represents a significant advancement in the field, offering a promising tool for practitioners to improve patient outcomes while maintaining a high level of precision in acupuncture point localization.