Article

Augmented Reality Navigation for Acupuncture Procedures with Smart Glasses

by Shin-Yan Chiou 1,2,3,*, Hsiao-Hsiang Chang 1, Yu-Cheng Chen 4 and Geng-Hao Liu 3,4

1 Department of Electrical Engineering, College of Engineering, Chang Gung University, Taoyuan 333323, Taiwan
2 Department of Neurosurgery, Keelung Chang Gung Memorial Hospital, Keelung 204201, Taiwan
3 Division of Acupuncture and Moxibustion, Department of Traditional Chinese Medicine, Linkou Chang Gung Memorial Hospital, Taoyuan 333423, Taiwan
4 Department of Traditional Chinese Medicine, Chang Gung University, Taoyuan 333323, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2025, 14(15), 3025; https://doi.org/10.3390/electronics14153025
Submission received: 26 June 2025 / Revised: 25 July 2025 / Accepted: 26 July 2025 / Published: 29 July 2025

Abstract

Traditional acupuncture relies on the precise selection of acupuncture points to adjust Qi flow along meridians. These points are conventionally localized using the cun, a proportional unit of measurement defined relative to the patient’s own body. However, locating specific points can be challenging, even for experienced practitioners. This study aims to enhance the accuracy and efficiency of acupuncture point localization by introducing an augmented reality (AR) navigation system utilizing AR glasses (Magic Leap One). The system employs a Six-Point Landmark-Based AR Registration method to overlay an acupuncture point model onto a patient’s head without the need for external markers. Methods included testing with traditional Chinese medicine students, measuring positional errors, and evaluating stability. Results demonstrated an average error of 5.01 ± 2.64 mm, well within the therapeutic range of 2 cun (about 5 cm), with minimal drift during stability tests. This AR system provides an accurate and intuitive tool for practitioners and learners, reducing variability in acupuncture point selection and offering promise for broader clinical applications.

1. Introduction

1.1. Overview of Acupuncture and Its Therapeutic Impact

Acupuncture, a key component of traditional Chinese medicine (TCM), has long been recognized for its safety, broad therapeutic applications, simplicity, and cost-effectiveness [1,2]. By stimulating specific points on the body, known as acupoints, acupuncture is believed to regulate the flow of energy (qi), promoting balance and improving circulation [3]. Its therapeutic effects range from surface-level interventions to deep tissue impacts, influencing pain relief, blood flow, and energy regulation [4]. Over time, methods of locating acupoints have evolved, beginning with vague classical descriptions, progressing to illustrated charts, and further advancing with the development of the bronze acupuncture figures by Wang Weiyi during the Song Dynasty. Today, modern technologies such as virtual projections provide more precise means of identifying acupoints. Despite these advancements, discrepancies in locating acupoints among practitioners persist.

1.2. Inconsistencies in Acupuncture Point Localization

In traditional Chinese medicine, the unit “cun” is defined relative to each individual’s body proportions, commonly measured as the width of the patient’s thumb at the first joint or the width of the middle segment of the middle finger, and typically ranges from 2 to 3 cm. The localization of acupoints remains inconsistent across practitioners. Research by Petra I. Bäumler et al. has shown that the area of deviation in locating acupoints such as LI10 can vary between 5.48 and 44.49 cm² [5]. Although there have been ongoing efforts to standardize acupoint placement, many acupuncturists continue to rely on traditional proportional cun measurements, which introduce variability into the process [6]. However, the physiological effects of acupuncture are well-documented, and the therapeutic benefits are undeniable [3,7,8,9,10]. Menglong Chang and colleagues previously proposed an image recognition technology based on facial acupuncture points, which automatically locates acupuncture points using facial feature analysis [11]. Tai Wing Chan and his team introduced a robotic system that combines deep convolutional neural networks with a robotic arm for acupuncture treatment point localization [12]. However, these approaches have limitations in terms of viewing angles and may not achieve the precision expected in clinical acupuncture.

1.3. Challenges of Virtual Reality in Acupuncture Applications

Despite the use of traditional methods like cun measurement and visual aids to enhance acupoint accuracy, challenges remain in achieving precise localization. Virtual reality (VR) systems have emerged to aid in acupoint teaching and training, providing visualizations of acupuncture points. However, these systems are limited to internal virtual images and lack integration with the actual human body, which restricts their utility in clinical applications. Augmented reality (AR), in contrast, offers the potential to bridge this gap by overlaying digital acupoint models onto real-world bodies, but it is a technology that has yet to be fully realized in acupuncture practice.

1.4. Proposed Augmented Reality Acupuncture Navigation System

Chiou and He [13] developed a promising augmented reality (AR) acupuncture navigation system, employing a six-point registration method that showed potential in improving accuracy. However, the method still relied on external markers for positioning, which can obstruct the practitioner’s view and complicate the process. In contrast, the acupuncture point navigation method proposed in this paper pioneers the application of AR surgical navigation techniques in the acupuncture field. Notably, it does not require the use of image targets, making it possible to achieve augmented reality acupuncture navigation without the need for external markers. This method simultaneously addresses the requirements for precision, obstacle-free procedures, intuitive navigation, and reduced positioning time in acupuncture treatments.
Additionally, their AR system was confined to the use of handheld or mounted tablets, limiting the practitioner’s ability to freely move around and adjust viewing angles during the needling process. In response to these limitations, this paper proposes a novel AR acupuncture navigation system that eliminates the need for external markers, ensuring an unobstructed treatment process. By leveraging AR glasses, practitioners can gain greater mobility and a dynamic 360-degree view, enhancing both accuracy and ease of use. The system also features a fully customized 3D head model tailored to each patient, improving navigation precision without the manual selection of registration points.

1.5. Results of the Proposed System

The proposed system integrates a 3D scanning and positioning approach that allows acupuncture points to be accurately overlaid onto the human body. In tests conducted on eight representative acupuncture points, the system achieved an average positioning error of less than 8 mm, which is well within the acceptable therapeutic range of approximately 2 cun (about 5 cm) [6,14,15]. Notably, certain acupoints such as Jiǎo sūn showed slightly higher deviations, occasionally exceeding 7 mm. These variations are attributed to differences in individual head anatomy and the challenge of aligning a standardized head model with diverse recipient features. Nevertheless, these errors still fall within the clinically acceptable zone for acupuncture treatment. This improvement in precision demonstrates the potential for AR to greatly enhance the consistency of acupuncture point localization, offering a more intuitive and reliable navigation system than previous methods.

1.6. Paper Structure

The remainder of this paper is organized as follows: Section 2 outlines the preparation work and tools used in this study, describes the methods employed to develop the proposed AR navigation system, and details the implementation process. Section 3 presents the results and error testing under different conditions, while Section 4 offers a comparison between this system and prior methods, particularly focusing on the advantages over Chiou and He’s [13] six-point registration system. Finally, Section 5 concludes the paper by summarizing the key contributions and discussing future directions of research.

2. Materials and Methods

To implement augmented reality acupuncture surgery navigation, this study proposes a novel method named Six-Point Landmark-Based AR Registration, which addresses several critical factors. First, the irregularity and variability in the shape of the human head make determining its central point and three-dimensional dimensions a complex task. Additionally, minimizing disruptions during the procedure is essential to ensure smooth acupuncture surgery. To meet these requirements, the proposed method is specifically designed to accommodate the diversity and irregularity of head shapes.
The Six-Point Landmark-Based AR Registration method captures the coordinates of six key points on the head model within a Unity environment. These key points include the left ear, right ear, top of the head, chin, back of the head, and tip of the nose (see Figure 1). Using this data, the system calculates the head model’s dimensions and central point. This information is then used to align the acupuncture point navigation model with the recipient’s head, forming a precise acupuncture navigation system (see Figure 2).
The entire process is automated, eliminating the need for physician intervention during the alignment phase. Adjustments can be made using gestures or a remote controller after the initial alignment, ensuring minimal interference during the procedure. The steps involved in the Six-Point Landmark-Based AR Registration method are illustrated in Figure 3.
Figure 3 illustrates the fixed sequence of three scan iterations in the Six-Point Landmark-Based AR Registration process. Each iteration collects head axis data using OnTriggerEnter() and OnTriggerExit(), with conditional branches assigning distinct roles: (1) model scaling, (2) recipient model alignment, and (3) acupuncture model alignment. This process is deterministic and not a dynamic programmatic loop.
To clarify the scanning logic, Unity’s OnTriggerEnter() and OnTriggerExit() functions are used to detect the head model boundaries. During scanning, along the X-axis (right to left), OnTriggerEnter() captures the right surface to define Plane(CR), and OnTriggerExit() captures the left surface to define Plane(CL). Along the Y-axis (bottom to top) and Z-axis (front to back), they likewise define Plane(CB)/Plane(CT) and Plane(CN)/Plane(CH), respectively. These six boundary planes are used for registration and alignment in subsequent steps.
In our Unity implementation, the OnTriggerEnter() and OnTriggerExit() functions are used to detect when the recipient model intersects with predefined scanning planes. These are standard Unity event functions triggered by object collisions within collider boundaries. Specifically, OnTriggerEnter() captures the recipient model’s entry into the detection region, while OnTriggerExit() marks the exit point. By iterating this process across the X-, Y-, and Z-axes (left–right, top–bottom, front–back), the system defines six bounding planes for alignment. This process is deterministic, rule-based, and requires no parameter tuning, making it reproducible in any Unity environment. Developers familiar with Unity can easily adapt the logic based on publicly available documentation.
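As an illustration of this rule-based logic, the six boundary planes produced by the trigger sweep are equivalent to the axis-aligned bounding box of the head model’s vertices. The following Python sketch mirrors that geometry outside Unity (it is not the authors’ code, and the point coordinates are hypothetical, in millimetres):

```python
# Sketch: the boundary planes captured by OnTriggerEnter()/OnTriggerExit()
# along each axis correspond to the min/max vertex coordinates per axis,
# i.e., the axis-aligned bounding box of the head model.

def boundary_planes(vertices):
    """Return the six axis-aligned boundary coordinates of a point set:
    right/left (X), bottom/top (Y), nose/back-of-head (Z)."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    zs = [v[2] for v in vertices]
    return {
        "CR": min(xs), "CL": max(xs),  # right / left planes along X
        "CB": min(ys), "CT": max(ys),  # bottom / top planes along Y
        "CN": min(zs), "CH": max(zs),  # nose / occiput planes along Z
    }

# Hypothetical head model spanning 200 mm left-right:
planes = boundary_planes([(-100, -120, -90), (100, 130, 110), (0, 0, 0)])
print(planes["CL"] - planes["CR"])  # head width along X: 200
```

Any vertex set produces the same six values regardless of scan order, which is what makes the registration step deterministic and reproducible.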

2.1. Landmark-Based Scaling for Head Model Dimensions

To accommodate variations in recipient sizes, an algorithm is proposed to adjust the three-dimensional dimensions of the acupuncture point navigation model. The definitions of these key anatomical landmarks and notations are summarized in Table 1. Using the coordinates of the six key points obtained during the initial scan, the three-dimensional dimensions of the navigation model are calculated. This includes measuring the distance from the right ear to the left ear (ALLR), from the chin to the top of the head (ALTB), and from the tip of the nose to the occiput (ALHN).
In the second scan, the same scanning method is applied to obtain the corresponding measurements for the recipient’s head model: left ear to right ear (LLR), chin to top of the head (LTB), and nose tip to occiput (LHN). Size matching calculations are then performed using this acquired data, as illustrated in Algorithm 1.
The rationale behind this scaling strategy is to proportionally align the recipient’s head model with the reference navigation model based on three key anatomical dimensions—left–right, top–bottom, and front–back—measured using linear distances between designated landmarks. This approach assumes that maintaining proportionality along these anatomical axes preserves the relative spatial arrangement of acupuncture points for effective alignment.
Algorithm 1. Size matching algorithm
Input: LLR, LTB, LHN, ALLR, ALTB, ALHN
Procedure Size Match(LLR, LTB, LHN, ALLR, ALTB, ALHN)
AMLR = LLR/ALLR
AMTB = LTB/ALTB
AMHN = LHN/ALHN
A.transform.localScale = new Vector3(AMLR, AMTB, AMHN)
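Outside Unity, Algorithm 1 reduces to three per-axis ratios between the recipient’s measured head dimensions and those of the reference navigation model. A minimal Python sketch (hypothetical dimensions in millimetres; not the authors’ implementation):

```python
# Sketch of Algorithm 1 (size matching): the scale factors applied to the
# navigation model's localScale are plain per-axis ratios.

def size_match(L_LR, L_TB, L_HN, AL_LR, AL_TB, AL_HN):
    """Return (AMLR, AMTB, AMHN), the X/Y/Z scale factors for the model."""
    return (L_LR / AL_LR, L_TB / AL_TB, L_HN / AL_HN)

# Hypothetical recipient head: 160 wide, 220 tall, 190 deep;
# reference acupoint model: 200 x 200 x 200.
scale = size_match(160, 220, 190, 200, 200, 200)
print(scale)  # (0.8, 1.1, 0.95)
```

In Unity these three ratios would be assigned via `A.transform.localScale`, as in the algorithm above.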
To mitigate the impact of head rotation or tilt, we manually adjust the recipient’s head model in Unity before executing the scan alignment process. Specifically, the nose is positioned at the central front, and the chin is placed at the central bottom of the virtual space (as shown in Figure 4). This preprocessing step helps reduce the rotational deviation, ensuring that the principal axes of the head model are reasonably aligned with the coordinate system.
We acknowledge that this approach introduces a minor degree of human adjustment error. However, such variability is considered part of the overall system error and is included in the final precision evaluation.

2.2. Center Alignment in Six-Point Landmark-Based AR Registration

To address the challenge of defining and calculating the center point of the human head, this paper introduces the following concept: the six landmark vertices form three pairs, one along each coordinate axis. The midpoint of each landmark pair defines a plane perpendicular to the corresponding axis. The intersection of the three planes derived from the left–right, top–bottom, and front–back midpoints yields the estimated geometric center of the head. In three-dimensional space, this point can be computed by solving the intersection of three orthogonal planes whose normal vectors are aligned with the X, Y, and Z axes, respectively. The intersection of these three midpoint planes is defined as the assumed center point of the irregular model (as indicated by the white sphere in Figure 1).
Two possible strategies were considered for defining the center point: (1) using the geometric center of the cube defined by the six anatomical landmarks (left, right, top, bottom, front, back), and (2) using the centroid of the triangle formed by the three midpoints (CLR, CTB, and CHN). We ultimately chose the cube-based geometric center, as it incorporates all six spatial boundaries and produces a more balanced and reproducible alignment reference. This point is used to translate both the recipient head model and the acupuncture navigation model to the origin of the coordinate system.
Rather than directly manipulating this center point, the algorithm calculates the coordinate differences between the diagonal vertices and adjusts the model object accordingly. This approach effectively relocates the center point of the head to the specified coordinates. Once the method for determining and relocating the center point is established, the center points of both the acupuncture point navigation model and the recipient’s head model are moved to the designated coordinates, thereby completing the alignment of the acupuncture navigation model’s center point.
To achieve the center point alignment as described above, a center point moving algorithm is proposed during the second and third scans, as detailed in Algorithm 2. In this context, middlePlane(Plane(C1), Plane(C2)) refers to the plane located halfway between two parallel planes, computed by averaging the positions of Plane(C1) and Plane(C2). The function intersectionPoint(Plane1, Plane2, Plane3) denotes the 3D coordinate point at which three orthogonal planes intersect, defining the model center point. This algorithm performs one iteration each on the recipient’s head model and the acupuncture point navigation model, resulting in the successful alignment of their center points.
To ensure accurate alignment, the center point of each 3D model is translated to the world origin (0, 0, 0). This is accomplished by applying a negative translation vector equal to the computed center point CC. In the Unity implementation, this is achieved by the statement H.transform.localPosition -= new Vector3(CC), effectively relocating the entire model such that its computed center aligns with the global origin.
Algorithm 2. Center point moving algorithm
Input: CL, CR, CT, CB, CH, CN
Procedure Center Point Move(CL, CR, CT, CB, CH, CN)
Plane(CLR) = middlePlane(Plane(CL), Plane(CR))
Plane(CTB) = middlePlane(Plane(CT), Plane(CB))
Plane(CHN) = middlePlane(Plane(CH), Plane(CN))
CC = intersectionPoint(Plane(CLR), Plane(CTB), Plane(CHN))
H.transform.localPosition = H.transform.localPosition - new Vector3(CC)
// Move model by -Cc to align center point to world origin (0, 0, 0)
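Because the boundary planes are axis-aligned, each middle plane reduces to the average of two coordinates, and the three-plane intersection is simply the per-axis midpoint. A Python sketch of Algorithm 2’s geometry (hypothetical coordinates; the translation mirrors the Unity statement `H.transform.localPosition -= new Vector3(CC)`):

```python
# Sketch of Algorithm 2 (center point moving). With axis-aligned planes,
# middlePlane() averages two coordinates and intersectionPoint() is the
# per-axis midpoint; translating by -CC moves the center to the origin.

def center_point(CL, CR, CT, CB, CH, CN):
    """Intersection of the three midpoint planes, as an (x, y, z) point."""
    return ((CL + CR) / 2, (CT + CB) / 2, (CH + CN) / 2)

def move_to_origin(position, CC):
    """Equivalent of subtracting Vector3(CC) from the model's position."""
    return tuple(p - c for p, c in zip(position, CC))

# Hypothetical boundary coordinates (mm):
CC = center_point(CL=90, CR=-70, CT=120, CB=-100, CH=-95, CN=85)
print(CC)                             # (10.0, 10.0, -5.0)
print(move_to_origin((0, 0, 0), CC))  # (-10.0, -10.0, 5.0)
```

Running the same computation once on the recipient model and once on the acupoint navigation model places both centers at the world origin, completing the alignment.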

2.3. Implementation

This subsection describes the implementation process of the augmented reality acupuncture navigation method, illustrated in the schematic diagram shown in Figure 5.
Step 1: A 3D scanner (Artec Eva, Artec 3D, Senningerberg, Luxembourg) is used to scan the recipient’s head, which is then processed into a 3D object model. This model is imported into a Unity project, where manual adjustments are made to its pose (position and orientation). The Unity project already contains the acupuncture point navigation model, which was obtained from Chiyuan Bookstore, a specialized retailer for traditional Chinese medicine books and acupuncture equipment. This model has been post-processed to accurately represent the acupuncture points.
Step 2: The project is then imported into AR glasses (Magic Leap One) and activated. The program automatically executes the Six-Point Landmark-Based AR Registration method to overlay the acupuncture navigation model. Any errors in the overlay can be adjusted in real time through the user interface.
Step 3: Using the user interface, the acupuncture navigation model is moved to the actual position on the recipient’s head, thereby completing the alignment with the subject (as shown in Figure 6).
At this stage, the acupuncture navigation model can be utilized for acupuncture procedures or on-site teaching with the recipient. Users have the option to remove the human head model, displaying only the acupuncture point model. If issues arise, such as loss of the AR glasses’ internal coordinates or unintended movement by the recipient causing desynchronization between the real and virtual environments, Step 3 can be refined to restore accurate alignment.
The process performs three structured scan iterations. Each scan uses the same trigger-based mechanism to obtain head axis coordinates. The first scan enables scaling of the acupoint navigation model; the second scan aligns the recipient’s head model to the origin; and the third aligns the acupuncture navigation model. This logic is represented in the revised flowchart (Figure 3).
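The fixed three-pass sequence can be sketched as a deterministic driver that reuses one scan routine and branches on the iteration index. The role names and the stub scan function below are illustrative, not the authors’ code:

```python
# Sketch of the three structured scan iterations: the same trigger-based
# measurement runs three times, and a fixed branch assigns each pass its
# role (scale model, align recipient, align acupoint model).

def run_registration(scan):
    """scan() returns the six boundary coordinates for the current pass."""
    roles = ["scale_acupoint_model", "align_recipient_model",
             "align_acupoint_model"]
    log = []
    for i, role in enumerate(roles):  # fixed sequence, not a dynamic loop
        bounds = scan()
        log.append((i + 1, role, bounds))
    return log

# Stub scan returning placeholder boundaries:
log = run_registration(lambda: {"CL": 1, "CR": -1})
print([step[1] for step in log])
# ['scale_acupoint_model', 'align_recipient_model', 'align_acupoint_model']
```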

3. Results

This section investigates and discusses the measurement errors associated with combining augmented reality glasses with the Six-Point Landmark-Based AR Registration method.
Firstly, since the application of augmented reality in surgical navigation relies on practitioners using the information displayed on the screen during procedures, the stability of this virtual information in the real environment can significantly impact the needling process and subsequent measurements of acupuncture point errors. Therefore, it is essential to assess the stability of the acupuncture navigation model.
Furthermore, to evaluate the effectiveness of the center point alignment and size matching methods within the Six-Point Landmark-Based AR Registration approach, this study will measure the errors between the acupuncture points in the navigation model and those identified by acupuncturists on the recipient’s head. This comparative analysis will provide insights into the accuracy and reliability of the proposed methods.

3.1. Stability Measurement of Acupuncture Navigation Model

To confirm the stability of the 3D model presentation in the real environment through augmented reality glasses, this study involved practical testing with one traditional Chinese medicine student acting as the point locator under the supervision of a licensed TCM physician with extensive clinical experience in acupuncture, and five recipients. Stability measurements were performed on eight key acupuncture points across four orientations on each recipient’s head. These acupuncture points included Sù liáo, Zǎn zhú, Shuǐ gōu, Yìn táng, Jiǎo sūn, Tīng gōng, Fēng fǔ, and Bǎi huì, as shown in Figure 7. The steps for measuring the stability of the acupuncture navigation model are as follows:
Step 1: The acupuncturist locates and marks the acupuncture points on the recipient’s head, adhering to anatomical principles.
Step 2: The Six-Point Landmark-Based AR Registration method is employed to align the acupuncture navigation model with the recipient’s head.
Step 3: The positional errors of each acupuncture point are measured, with each point being assessed ten times. Notably, when measuring points on the sides and rear of the head, adjustments are made to the imaging position of the acupuncture navigation model to ensure proper alignment with the recipient. This adjustment helps eliminate potential coordinate calculation errors caused by significant movements of the augmented reality glasses. The standard deviation data for the positional errors of the acupuncture points is recorded in Figure 8.
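For reference, the stability statistic recorded in Figure 8 is a standard deviation of repeated positional errors. A Python sketch with hypothetical measurements, taking each error as the Euclidean distance from the projected point to the marked reference:

```python
# Sketch: summarizing the drift of one projected acupoint as the standard
# deviation of its repeated 3D positional errors. Data are hypothetical.
import math

def positional_errors(reference, measurements):
    """Euclidean distance of each repeated measurement from the reference."""
    return [math.dist(reference, m) for m in measurements]

def std_dev(values):
    """Population standard deviation."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

ref = (0.0, 0.0, 0.0)
repeats = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]
errors = positional_errors(ref, repeats)
print(errors)                      # [1.0, 2.0, 3.0]
print(round(std_dev(errors), 3))   # 0.816
```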

3.2. Acupoint Error Measurement in the Acupuncture Navigation Model

Following the validation of the accuracy of the augmented reality model, this study conducted practical testing to assess the precision of the Six-Point Landmark-Based AR Registration method in real-world applications. Ten recipients were recruited for this testing, and the same eight acupuncture points mentioned in Section 3.1 were utilized for precision measurements. The steps for measuring acupoint errors in the acupuncture navigation model are outlined as follows:
Step 1: Instruct the acupuncturist to locate and mark the acupuncture points on the recipient’s head according to anatomical principles.
Step 2: Employ the Six-Point Landmark-Based AR Registration method to align the acupuncture navigation model with the recipient’s head.
Step 3: Conduct six measurements for each acupuncture point on each of the ten recipients to assess precision.
The statistical data obtained from these measurements is presented in Figure 9.

3.3. Analysis of Measurement Results

This subsection examines the stability of the augmented reality presentation and the associated acupuncture point errors.

3.3.1. Stability Analysis of Acupuncture Navigation Model

As illustrated in Figure 8, the range of drift observed in the projection of three-dimensional objects using augmented reality glasses is minimal. The largest standard deviation recorded is 2.59 mm for Recipient B, with 87.5% of the data exhibiting a standard deviation of less than 2 mm. These findings further validate the practical feasibility of the acupuncture navigation model. The following section will provide a more detailed analysis of the error measurement data related to the Six-Point Landmark-Based AR Registration method.

3.3.2. Acupoint Error Analysis

This subsection primarily analyzes the accuracy of this study’s experiment. First, the variability in acupuncture point locations, as reported in Traditional Chinese Medicine studies, is reviewed. Molsberger et al. [14] suggested that acupuncture points are not fixed but instead represent areas. In randomized controlled experiments, they recommended a 2-cun (approximately 5 cm) difference between acupuncture points to observe distinct effects, indicating some variability in their placement. Their data also showed that different acupuncturists had a distribution of 11.27 square centimeters when locating GB14 [14].
Returning to this study’s results, we note that in Figure 9, the average error for the eight acupuncture points across the four facial orientations was less than 8 mm. This is significantly smaller than the 2-cun range mentioned in the literature, confirming the accuracy of the 3D scanning and positioning method. Points near detailed facial structures like Sù liáo and Shuǐ gōu had lower errors, possibly due to more distinct anatomical features aiding alignment. In contrast, points like Jiǎo sūn, near the ear, showed higher variability, likely due to differences in ear size and position among recipients.
These facial acupoints (like Sù liáo and Shuǐ gōu) are located closer to the center of the face, where numerous and well-defined anatomical landmarks—such as the nose contour, philtrum groove, and upper lip boundary—are concentrated. These rich features enhance the precision of 3D scanning and image alignment during registration. In contrast, acupoints located in peripheral regions, such as the top, back, or sides of the head, often lack such detailed reference structures, making accurate positioning more difficult. This anatomical distinction explains the superior accuracy observed in central facial areas.
In Table 2, the accuracy of the proposed augmented reality (AR) acupuncture navigation system is evaluated by measuring the average error, standard deviation, and 95% confidence intervals (CI) for each acupoint. The results show that the system achieved an overall average error of 5.01 mm with a standard deviation of 2.64 mm, and a 95% CI ranging from 4.27 to 5.76 mm. These findings indicate a consistent level of precision across different acupoints.
Among the acupoints, Sù liáo exhibited the lowest average error of 1.63 mm with a standard deviation of 1.04 mm and a narrow 95% CI of 1.23–2.03 mm, demonstrating the highest level of accuracy. Shuǐ gōu also performed well, with an average error of 2.8 mm, a standard deviation of 0.87 mm, and a CI of 2.45–3.15 mm, indicating high consistency. In contrast, Jiǎo sūn and Tīng gōng had higher average errors of 7.25 mm and 6.02 mm, respectively, and wider confidence intervals (6.15–8.35 mm and 5.30–6.73 mm), suggesting greater variability and anatomical complexity in these regions. These confidence intervals provide statistical support for evaluating measurement stability and enable clearer comparisons among acupoints.
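Confidence intervals of this form can be reproduced with the normal approximation mean ± 1.96·SD/√n. The sketch below uses a hypothetical sample size, since the per-acupoint n is not restated in this section:

```python
# Sketch: 95% confidence interval via the normal approximation.
# The sample size n = 48 here is hypothetical, chosen for illustration.
import math

def ci95(mean, sd, n):
    """Two-sided 95% CI for the mean, rounded to 0.01 mm."""
    half = 1.96 * sd / math.sqrt(n)
    return (round(mean - half, 2), round(mean + half, 2))

print(ci95(5.01, 2.64, 48))  # (4.26, 5.76) with the hypothetical n = 48
```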
To assess the consistency of acupuncture point localization among different participants, a one-way analysis of variance (ANOVA) was performed using the average errors across eight acupoints for each of the ten subjects. The analysis tested whether the error measurements varied significantly between participants. As shown in the ANOVA summary table, the F-value was 1.14509, and the p-value was 0.343769, which is greater than the standard alpha level of 0.05. This result indicates that there is no statistically significant difference in localization error among the ten subjects, suggesting that the proposed system maintains a stable level of precision regardless of individual user variation. This finding supports the robustness and usability of the system in practical applications. The consistency of results across subjects also confirms that the learning curve or individual experience has minimal impact on localization performance when using the proposed AR navigation method.
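For readers wishing to reproduce such a statistic, one-way ANOVA reduces to F = MS_between/MS_within. A self-contained Python sketch using small hypothetical groups (not the study’s data):

```python
# Sketch: one-way ANOVA F statistic computed from first principles.
# Groups would be the per-subject error samples; these are hypothetical.

def one_way_anova_F(groups):
    N = sum(len(g) for g in groups)                # total observations
    k = len(groups)                                # number of groups
    grand = sum(sum(g) for g in groups) / N        # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    return ms_between / ms_within

print(one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # 3.0
```

The resulting F-value would then be compared against the F distribution with (k−1, N−k) degrees of freedom to obtain the p-value.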
Overall, the data shows that while the system performs well across most acupoints, some locations may present more challenges in achieving precise localization. The relatively low standard deviations for most acupoints, however, suggest that the AR system offers reliable and repeatable accuracy in guiding acupuncture point localization. This level of precision, averaging 5.01 mm, is well within clinically acceptable ranges for acupuncture treatments.

4. Discussion

4.1. System Strengths and Innovations

This section discusses the strengths of the proposed augmented reality (AR) acupuncture navigation system, with emphasis on its markerless design, hands-free operation, and clinical practicality.
A key strength of our approach is the elimination of external markers or image targets near the patient, enabling unobstructed acupuncture procedures and reducing setup complexity. The system uses AR glasses (Magic Leap One) to enhance practitioner mobility, allowing dynamic viewing angles and intuitive interaction. This is particularly valuable in acupuncture treatments where freedom of movement is essential. By integrating AR surgical navigation principles, our system improves consistency in acupoint localization while maintaining a natural clinical workflow.

4.2. Comparison with Existing Acupuncture Localization Systems

To contextualize the performance and clinical applicability of our system, Table 3 presents a detailed comparison of acupuncture localization systems across representative studies, including Chang and Zhu [11], Chan et al. [12], and Chiou and He [13].
Figure 10 visually illustrates representative systems from prior studies and our proposed method. Subfigure A shows facial landmark detection based on image datasets in [11], while Subfigure B highlights robot-assisted localization on real forearms with variable skin conditions in [12]. Subfigure C presents tablet-based AR navigation using a mannequin head as proposed in [13]. In contrast, subfigure D depicts our smart-glasses-based system operating on real human subjects, with a 3D head model aligned in real time. This visual comparison highlights key distinctions in subject testing, interaction modes, and anatomical realism, further supporting the tabulated findings in Table 3.
In terms of the target body region, most prior works focused on the face [11,13] or upper limbs [12], whereas our study addresses both the head and face region, which contains complex 3D contours and variable hair coverage. Our system employs smart-glasses-based augmented reality (AR) navigation using Magic Leap One, combined with high-resolution 3D scans from Artec Eva, offering real-time interaction and precise anatomical alignment.
Regarding the model construction approach, we use a landmark-based scanning method to generate a personalized 3D head model. In contrast, Chang and Zhu [11] relied on facial image datasets for feature-point extraction, which lacked real anatomical testing. Chan et al. [12] implemented a CNN and inch-based mesh estimation on robotic arms for upper-limb points, while Chiou and He [13] proposed a Vuforia-based AR system for face localization, but tested only on a mannequin head.
Importantly, our study was validated on ten real human subjects, while [11] used only image datasets and [13] used synthetic models. Though [12] tested on real forearms, the anatomical diversity was not statistically validated. In terms of measured acupoints, our system covered eight cranial points, compared to five in [11,13], and five upper-limb points in [12].
Our system supports marker-free alignment and achieves full real-time capability with smart-glasses display. While [12] had partial real-time interaction due to robotic delay, [11,13] lacked dynamic feedback. Personalization is another critical dimension—our method builds a patient-specific model, whereas [12] supports only partial scaling and [11,13] do not support personalization.
One of the key advantages of our system lies in viewpoint mobility. Unlike [12], which is constrained by a fixed robotic arm, or [11], which is limited to static images, our device allows dynamic, multi-angle visualization of acupoints. This mobility is particularly beneficial in clinical scenarios requiring flexible observation.
Quantitatively, our system achieved a localization error of 5.01 ± 2.64 mm with full statistical reporting (average, SD, 95% CI, and ANOVA). Chan et al. [12] reported an offset of 58.5 mm, and Chiou and He [13] reported a range of 0.6–3.9 mm, but without confidence intervals. Chang and Zhu [11] did not report quantitative localization accuracy.
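To make the reported statistics concrete, the following is a minimal, hypothetical sketch of how per-acupoint summary statistics (mean, sample SD, and a normal-approximation 95% CI) can be computed from raw error measurements. The function name is illustrative and the paper does not specify its exact CI procedure, so this sketch need not reproduce the Table 2 intervals.

```python
import math

def summarize(errors):
    """Mean, sample SD, and a normal-approximation 95% CI for a list of
    localization errors (mm). Illustrative only: the exact CI procedure
    used in the paper is not specified."""
    n = len(errors)
    mean = sum(errors) / n
    # Sample standard deviation (n - 1 in the denominator)
    sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    # Normal-approximation half-width, reasonable for n = 60 per acupoint
    half = 1.96 * sd / math.sqrt(n)
    return mean, sd, (mean - half, mean + half)
```

For example, with 60 measurements per acupoint (as in Table 2), `summarize` returns the mean error together with an interval of the form mean ± 1.96·SD/√60.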
In terms of user interaction, our wearable smart-glasses system supports intuitive, hands-free operation, whereas the other systems relied on desktops [11], robotic arms [12], or tablets [13], with limited flexibility. Additionally, only our system and [13] integrate AR directly, whereas [11,12] do not provide immersive AR feedback.
Regarding system strengths, our approach stands out with medical-grade accuracy and real-world usability, validated on real patients. In contrast, [11] emphasized stable facial landmarks without clinical validation, [12] focused on integrating anatomical and AI components but lacked human testing, and [13] claimed novelty in AR application but had limited interaction.
However, each system has limitations. Our system requires expensive hardware and a 3D scanning setup, which may limit its accessibility. Ref. [11] lacks any clinical validation, [12] was demonstrated on real forearms but lacks statistically validated human testing, and [13] is restricted by limited interaction and the use of a mannequin head.
In summary, this comparison shows that our system provides high accuracy, real-time interactivity, patient-specific modeling, and full AR integration with clinical-grade validation, features that are only partially or not addressed in previous works [11,12,13].

4.3. Comparison with General AR Surgical Navigation Systems

Table 4 presents a functional and technical comparison of the proposed system with five representative AR-based surgical navigation systems [16,17,18,19,20], highlighting differences in registration methods, viewpoint flexibility, interaction modes, and implementation costs.
In terms of AR device types, our system employs the Magic Leap One smart glasses, supporting hands-free use and dynamic overlays. In contrast, Chiou et al. [16] and Kalavakonda et al. [20] utilize HoloLens or tablet-based systems, which, although functional, vary in portability and depth perception. Other systems, such as those from Zhu et al. [18] and Ma et al. [19], rely on see-through helmets or custom displays, which may not be widely available in clinical settings.
The registration method is a key differentiator. Our method applies a six-point anatomical landmark-based approach that is markerless and does not require external image targets. In comparison, the systems in [16,17] rely on image target-based registration, Zhu et al. [18] and Ma et al. [19] use fiducial markers, and Kalavakonda et al. [20] combine isosurface extraction with Marching Cubes-based registration. These traditional methods require physical markers or preprocessed models, which can obstruct procedures or increase preparation time.
SLAM or tracking capability is another important feature for AR stability. Most existing systems use marker-based or optical tracking mechanisms; the system in [17] integrates both image markers and optical tracking to enhance precision. Our system, however, does not currently incorporate SLAM; instead, it leverages a fixed registration once the scan is complete, which simplifies deployment but may affect robustness in dynamic environments. Kalavakonda et al. [20] similarly do not use SLAM.
Viewpoint freedom is highest in our system, as the use of wearable smart glasses allows users to move naturally around the patient without losing the overlay. In contrast, systems with fixed displays or helmets [17,18,19] limit the user’s movement. HoloLens-based systems [16,20] provide moderate mobility, depending on marker visibility.
Interaction modes also vary. Our system supports gesture-based and controller-based interaction with the AR interface. In contrast, tablet-based systems [16,17] rely on touchscreen input, while some systems [18,19] offer no interaction support at all. Kalavakonda et al. [20] utilize HoloLens gestures, which allow limited head or hand movement for control.
From an implementation cost perspective, our system is categorized as low-to-medium due to the need for a 3D scanner and AR glasses. In contrast, Ma et al. [19] and Kalavakonda et al. [20] report high implementation costs due to custom tracking or reconstruction equipment, while the others fall in the medium range.
Overall, compared with existing AR surgical navigation systems, our method offers unique advantages in mobility, interaction flexibility, and setup efficiency by adopting markerless scanning and commercial wearable devices. While not yet integrated with SLAM, the system remains suitable for procedures that benefit from open-viewpoint precision and streamlined operation.

4.4. Accuracy Analysis and Anatomical Considerations

Although the system exhibits an average error of 5.01 mm, this level of deviation is acceptable within clinical acupuncture tolerances. It is important to note that due to the inherent differences in head shapes among individuals, perfect anatomical overlap between the recipient’s model and the acupuncture navigation model cannot be guaranteed unless individual DICOM scans are obtained. Even under ideal technical conditions, slight mismatches in head geometry contribute to baseline alignment discrepancies.
Outliers, such as deviations exceeding 7 mm near points like Jiǎo sūn, are likely due to individual anatomical differences—especially in the ear and temple regions. Such variation is well documented in acupuncture research [6,14,15] and is clinically tolerated because therapeutic outcomes are more closely tied to meridian alignment than to pinpoint accuracy.
Our algorithm uses proportional scaling based on six anatomical landmarks, enabling model alignment through simple translation and scaling without iterative optimization. This reduces geometric distortion and minimizes cumulative error. Although ideal registration may be limited by differences between the head model and patient anatomy, the system ensures practical reliability without requiring individualized DICOM-based modeling.
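As an illustration of this landmark-based translation-and-scaling alignment, the sketch below is a simplified reconstruction, not the authors' implementation: the axis conventions (x: left-right, y: chin-vertex, z: occiput-nose), the mean-of-landmarks center estimate, and the function name are all assumptions made for the example.

```python
import numpy as np

def align_model(model_landmarks, recipient_landmarks, model_vertices):
    """Align a navigation model to a recipient by per-axis proportional
    scaling and translation, using six landmarks: left/right ear (L, R),
    top of head (T), chin (B), occiput (H), and nose tip (N).

    Each landmark argument is a dict mapping 'L','R','T','B','H','N'
    to 3D points; model_vertices is an (n, 3) array.
    """
    m, r = model_landmarks, recipient_landmarks
    # Per-axis scale factors from corresponding landmark-pair distances
    scale = np.array([
        np.linalg.norm(r['L'] - r['R']) / np.linalg.norm(m['L'] - m['R']),  # width
        np.linalg.norm(r['T'] - r['B']) / np.linalg.norm(m['T'] - m['B']),  # height
        np.linalg.norm(r['N'] - r['H']) / np.linalg.norm(m['N'] - m['H']),  # depth
    ])
    # Centers estimated here as the mean of the six landmarks (assumption)
    model_center = np.mean(list(m.values()), axis=0)
    recipient_center = np.mean(list(r.values()), axis=0)
    # Scale about the model center, then translate onto the recipient center
    return (model_vertices - model_center) * scale + recipient_center
```

Because the transform is a closed-form translation plus axis-aligned scaling, it runs in a single pass with no iterative optimization, which is the property the paragraph above relies on.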

4.5. Preliminary User Feedback

In addition to technical validation, we collected preliminary feedback from medical students and non-acupuncturists during system testing. Participants found the system intuitive, especially the hands-free operation via head gestures and voice commands. They appreciated not having to manipulate external screens or tracking markers, which allowed greater focus on the acupuncture task. Although formal usability evaluations were not conducted in this phase, the positive feedback suggests strong user acceptance, warranting future studies using standardized scales such as the System Usability Scale (SUS).

4.6. Limitations and Future Work

The proposed system has several limitations. First, it relies on commercial hardware (e.g., Magic Leap One and Artec 3D scanner), which may not be widely available or affordable for all clinics. Second, because the system uses a generic head model, anatomical differences between individuals may introduce baseline mismatches, especially in patients with cranial deformities or surgical history. Third, light sensitivity or hair interference may affect scan quality in some users.
The Artec Eva scanner offers submillimeter accuracy, with intra-scanner reliability reaching 0.08 mm and average deviation on real subjects around 0.20 mm, as reported by Schipper et al. [21]. Its internal drift correction and auto-alignment mechanisms contribute to consistent results across users [22]. Magic Leap One employs SLAM-based visual-inertial tracking [23]; while precise tracking error metrics are rarely published, available reports indicate millimeter-level accuracy acceptable for AR overlay. However, the publicly available literature offers limited data regarding inter-user variability and drift correction performance for Magic Leap One. Therefore, further investigation is warranted to evaluate system robustness across different users, assess potential overlay drift over time, and ensure sustained spatial accuracy in long-term clinical applications.

5. Conclusions

This study presents a novel augmented reality (AR) acupuncture navigation method using Six-Point Landmark-Based AR Registration techniques. The research addresses the challenges associated with acupuncture point localization by incorporating AR technology, allowing for a more dynamic and adaptable approach compared to traditional methods conducted on fixed platforms.
The findings demonstrate that the acupuncture navigation model achieved an average error of 5.01 ± 2.64 mm, a commendable result when utilizing augmented reality glasses. While some previous research reported lower average errors, those studies were limited to stationary settings. In contrast, the current study showcases the practicality and effectiveness of AR technology in real-world environments, enhancing the overall acupuncture procedure.
In conclusion, the integration of augmented reality in acupuncture navigation represents a significant advancement in the field, offering a promising tool for practitioners to improve patient outcomes while maintaining a high level of precision in acupuncture point localization.

Author Contributions

Conceptualization, S.-Y.C. and H.-H.C.; methodology, S.-Y.C. and H.-H.C.; software, S.-Y.C. and H.-H.C.; validation, S.-Y.C. and H.-H.C.; formal analysis, S.-Y.C. and H.-H.C.; investigation, H.-H.C. and Y.-C.C.; resources, H.-H.C. and Y.-C.C.; data curation, S.-Y.C. and H.-H.C.; writing—original draft preparation, S.-Y.C. and H.-H.C.; writing—review and editing, S.-Y.C. and H.-H.C.; visualization, H.-H.C. and Y.-C.C.; supervision, S.-Y.C. and G.-H.L.; project administration, S.-Y.C.; funding acquisition, S.-Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded in part by the National Science and Technology Council under Grant NSTC 112-2221-E-182-007-MY2 and NSTC 114-2221-E-182-038 and in part by the CGMH Project under Grant BMRPB46.

Data Availability Statement

The statistical data presented in this study are available in Figure 8 and Figure 9 and Table 2. The datasets used and/or analyzed during the current study are available from the corresponding author upon request. These data are not publicly available due to privacy and ethical reasons.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

  1. Jindal, V.; Ge, A.; Mansky, P.J. Safety and efficacy of acupuncture in children: A review of the evidence. J. Pediatr. Hematol. Oncol. 2008, 30, 431. [Google Scholar] [CrossRef] [PubMed]
  2. Fiske, J.; Dickinson, C. The role of acupuncture in controlling the gagging reflex using a review of ten cases. Br. Dent. J. 2001, 190, 611–613. [Google Scholar] [CrossRef] [PubMed]
  3. VanderPloeg, K.; Yi, X. Acupuncture in modern society. J. Acupunct. Meridian Stud. 2009, 2, 26–33. [Google Scholar] [CrossRef] [PubMed]
  4. Sherman, K.J.; Cherkin, D.C.; Eisenberg, D.M.; Erro, J.; Hrbek, A.; Deyo, R.A. The practice of acupuncture: Who are the providers and what do they do? Ann. Fam. Med. 2005, 3, 151–158. [Google Scholar] [CrossRef] [PubMed]
  5. Bäumler, P.I.; Simang, M.; Kramer, S.; Irnich, D. Acupuncture point localization varies among acupuncturists. Complement. Med. Res. 2012, 19, 31–37. [Google Scholar] [CrossRef] [PubMed]
  6. Godson, D.R.; Wardle, J.L. Accuracy and precision in acupuncture point location: A critical systematic review. J. Acupunct. Meridian Stud. 2019, 12, 52–66. [Google Scholar] [CrossRef] [PubMed]
  7. Ghafoor, U.; Lee, J.H.; Hong, K.S.; Park, S.S.; Kim, J.; Yoo, H.R. Effects of acupuncture therapy on MCI patients using functional near-infrared spectroscopy. Front. Aging Neurosci. 2019, 11, 237. [Google Scholar] [CrossRef] [PubMed]
  8. McKee, M.D.; Nielsen, A.; Anderson, B.; Chuang, E.; Connolly, M.; Gao, Q.; Gil, E.N.; Lechuga, C.; Kim, M.; Naqvi, H. Individual vs. group delivery of acupuncture therapy for chronic musculoskeletal pain in urban primary care—A randomized trial. J. Gen. Intern. Med. 2020, 35, 1227–1237. [Google Scholar] [CrossRef] [PubMed]
  9. Zhang, X.C.; Chen, H.; Xu, W.T.; Song, Y.Y.; Gu, Y.H.; Ni, G.X. Acupuncture therapy for fibromyalgia: A systematic review and meta-analysis of randomized controlled trials. J. Pain Res. 2019, 12, 527–542. [Google Scholar] [CrossRef] [PubMed]
  10. Yang, F.M.; Yao, L.; Wang, S.J.; Guo, Y.; Xu, Z.F.; Zhang, C.H.; Zhang, K.; Fang, Y.X.; Liu, Y.Y. Current tracking on effectiveness and mechanisms of acupuncture therapy: A literature review of high-quality studies. Chin. J. Integr. Med. 2020, 26, 310–320. [Google Scholar] [CrossRef] [PubMed]
  11. Chang, M.; Zhu, Q. Automatic location of facial acupuncture-point based on facial feature points positioning. In Proceedings of the 5th International Conference on Frontiers of Manufacturing Science and Measuring Technology (FMSMT 2017), Guilin, China, 20–21 May 2017; Atlantis Press: Paris, France, 2017; pp. 545–549. [Google Scholar]
  12. Chan, T.W.; Zhang, C.; Ip, W.H.; Choy, A.W. A combined deep learning and anatomical inch measurement approach to robotic acupuncture points positioning. In Proceedings of the IEEE EMBC 2021, Mexico City, Mexico, 1–5 November 2021; IEEE: New York, NY, USA, 2021; pp. 2597–2600. [Google Scholar]
  13. Chiou, S.-Y.; He, M.-R. Acupuncture navigation method integrated with augmented reality. Biomed. Mater. Eng. 2024, 35, 536–547. [Google Scholar] [CrossRef] [PubMed]
  14. Molsberger, A.F.; Manickavasagan, J.; Abholz, H.H.; Maixner, W.B.; Endres, H.G. Acupuncture points are large fields: The fuzziness of acupuncture point localization by doctors in practice. Eur. J. Pain 2012, 16, 1264–1270. [Google Scholar] [CrossRef] [PubMed]
  15. Du, X. Application of the principle “Prefer missing the point rather than the meridian” in clinical acupuncture at Weizhong point. Tianjin J. Tradit. Chin. Med. 2016, 33, 406–408. (In Chinese) [Google Scholar]
  16. Chiou, S.-Y.; Liu, L.-S.; Lee, C.-W.; Kim, D.-H.; Al-Masni, M.A.; Liu, H.-L.; Wei, K.-C.; Yan, J.-L.; Chen, P.-Y. Augmented reality surgical navigation system integrated with deep learning. Bioengineering 2023, 10, 617. [Google Scholar] [CrossRef] [PubMed]
  17. Chiou, S.-Y.; Zhang, Z.-Y.; Liu, H.-L.; Yan, J.-L.; Wei, K.-C.; Chen, P.-Y. Augmented reality surgical navigation system for external ventricular drain. Healthcare 2022, 10, 1815. [Google Scholar] [CrossRef] [PubMed]
  18. Zhu, M.; Liu, F.; Chai, G.; Pan, J.J.; Jiang, T.; Lin, L.; Xin, Y.; Zhang, Y.; Li, Q. A novel augmented reality system for displaying inferior alveolar nerve bundles in maxillofacial surgery. Sci. Rep. 2017, 7, 42365. [Google Scholar] [CrossRef] [PubMed]
  19. Ma, L.; Jiang, W.; Zhang, B.; Qu, X.; Ning, G.; Zhang, X.; Liao, H. Augmented reality surgical navigation with accurate CBCT–patient registration for dental implant placement. Med. Biol. Eng. Comput. 2019, 57, 47–57. [Google Scholar] [CrossRef] [PubMed]
  20. Kalavakonda, N.; Sekhar, L.; Hannaford, B. Augmented reality application for aiding tumor resection in skull-base surgery. In Proceedings of the 2019 International Symposium on Medical Robotics (ISMR), Atlanta, GA, USA, 3–5 April 2019; IEEE: New York, NY, USA, 2019; pp. 1–6. [Google Scholar]
  21. Schipper, J.A.M.; Merema, B.J.; Hollander, M.H.J.; Spijkervet, F.K.L.; Dijkstra, P.U.; Jansma, J.; Schepers, R.H.; Kraeima, J. Reliability and Validity of Handheld Structured Light Scanners and a Static Stereophotogrammetry System in Facial Three-Dimensional Surface Imaging. Sci. Rep. 2024, 14, 8172. [Google Scholar] [CrossRef] [PubMed]
  22. Artec 3D. Professional 3D Scanning Solutions: Artec Eva and Space Spider [Brochure]; SolidWorks: Waltham, MA, USA, 2019; Available online: https://files.solidworks.com/partners/pdfs/a3dscanners-booklet725.pdf (accessed on 22 July 2025).
  23. Mur-Artal, R.; Tardós, J.D. Visual-Inertial Monocular SLAM with Map Reuse. IEEE Robot. Autom. Lett. 2017, 2, 796–803. [Google Scholar] [CrossRef]
Figure 1. The six vertices of the head and the assumed center point.
Figure 2. Before and after superimposition of display images. (A) Acupoint navigation model. (B) Recipient’s model. (C) Superimposition of the acupoint navigation model on the recipient’s model.
Figure 3. Six-Point Landmark-Based AR Registration method flow chart.
Figure 4. Preprocessing adjustment of the head model for axis alignment in Unity.
Figure 5. Schematic diagram.
Figure 6. Overlay display of acupuncture navigation model and recipient. (A) Overlay on male recipient (front view). (B) Overlay on female recipient (side view).
Figure 7. Sampling acupoint.
Figure 8. Stability measurement results (standard deviation distribution) for five recipients (A–E).
Figure 9. Acupoint error measurement results.
Figure 10. Visual Comparison of Acupuncture Point Localization Approaches in Prior Studies and the Proposed System. (A) Image-Based Facial Landmark Detection on Dataset ([11]). (B) Robot-Guided Acupuncture Localization on Forearm ([12]). (C) Tablet-Based AR on Mannequin Head ([13]). (D) Smart Glasses–Based AR on Real Human Head ([This Study]).
Table 1. Symbol description.
Symbol | Description
L/R/T/B/H/N | Marking points of left ear/right ear/top of head/chin/occiput/tip of nose
Cx | The coordinate of marker point x (x = L, R, T, B, H, or N)
CC | The coordinate of model-center point C
Plane(Cx) | Scanning plane generated at Cx and oriented orthogonal to the corresponding anatomical axis
ALxy | The length of the acupuncture point navigation model from Plane(Cx) to Plane(Cy), from the first scan
Lxy | The length of the recipient’s model from Plane(Cx) to Plane(Cy), from the second scan
Table 2. Average error and standard deviation of accuracy (mm).
Acupoint | Mean | SD | N | 95% CI Lower | 95% CI Upper
Sù liáo | 1.63 | 1.04 | 60 | 1.23 | 2.03
Shuǐ gōu | 2.80 | 0.87 | 60 | 2.45 | 3.15
Zǎn zhú | 6.60 | 3.12 | 60 | 5.75 | 7.45
Yìn táng | 6.55 | 2.93 | 60 | 5.75 | 7.35
Jiǎo sūn | 7.25 | 4.26 | 60 | 6.15 | 8.35
Tīng gōng | 6.02 | 2.56 | 60 | 5.30 | 6.73
Fēng fǔ | 4.07 | 2.61 | 60 | 3.36 | 4.78
Bǎi huì | 5.18 | 3.72 | 60 | 4.14 | 6.22
Average | 5.01 | 2.64 | 60 | 4.27 | 5.76
Table 3. Comparison of acupuncture localization systems.
Comparison Criteria | This Study | [11] | [12] | [13]
Target Body Region | Head/Face | Face | Upper limb | Face
System Type | AR-based smart-glasses navigation | Image processing (facial features) | Robot-guided acupuncture (CNN + inch measurement) | Tablet-based AR (Vuforia)
Hardware | Magic Leap One + Artec 3D scanner | Camera + facial feature mapping | Robot arm + vision system | Tablet + fiducial marker
Model Type | 3D head model via landmark scanning | Feature-point extraction | SSD-MobileNet + cun mesh | Template matching
Subject Population | 10 human subjects | Facial image dataset | Not specified | 1 mannequin head
Measured Acupoints | 8 head acupoints | 5 facial acupoints | 5 upper-limb acupoints | 5 facial acupoints
Localization Method | 3D scanning + AR alignment | Feature-point mapping | Deep learning + inch measurement | Fiducial marker registration
Marker-Free Alignment | Yes | Yes | Yes | No
Real-Time Capability | Yes | No | Partial (robot delay) | Yes
Personalization (Patient-Specific Model) | Yes | No | Partial (cun-based scaling) | No
Viewpoint Mobility | High | Low | None (robotic system fixed) | Medium
Localization Accuracy | 5.01 ± 2.64 mm | Not reported | 58.5 mm | 0.6–3.9 mm
Quantitative Metrics | Avg. error ± SD, 95% CI, ANOVA | None | Offset ratio | Basic error stats (no CI)
Human Subject Testing | Yes (human participants) | No (image dataset) | Yes (forearm testing) | No (mannequin head)
Interaction Mode | Smart glasses | Desktop | Robotic arm | Tablet
AR Integration | Yes | No | No | Yes
Strengths | Medical-grade accuracy, real usability | Stable facial landmarks | Combines anatomy and AI | First to propose AR for acupuncture
Limitations | Expensive hardware, needs 3D scanning | No clinical validation | Forearm testing not statistically validated | Limited interaction, no human test
Table 4. Quantitative and functional comparison with existing AR surgical navigation systems.
Feature | This Study | [16] | [17] | [18] | [19] | [20]
AR Device | Magic Leap One | Tablet/HoloLens | Tablet | See-through helmet | Custom IV display | HoloLens
Registration Method | Six-point landmark (markerless) | Image target-based | Image target-based | Occlusal splint + marker | Fiducial points (extraoral) | Isosurface + Marching Cubes
SLAM or Tracking | No SLAM | Image marker | Image marker + optical | Fiducial marker | Optical tracker | No SLAM
Viewpoint Freedom | High | Medium | Low | Low | Low | Medium
Interaction Mode | Gesture/controller | Tablet touch | Tablet touch | None | None | HoloLens gestures
Implementation Cost | Low–Medium | Medium | Medium | Medium | High | High
