Summary

This study presents a protocol for designing and manufacturing a glasses-type wearable device that detects patterns of food intake and other featured physical activities using load cells inserted in both hinges of the glasses.

Abstract

This study presents a series of protocols for designing and manufacturing a glasses-type wearable device that detects the patterns of temporalis muscle activity during food intake and other physical activities. We fabricated a 3D-printed frame for the glasses and a load cell-integrated printed circuit board (PCB) module inserted into both hinges of the frame. The module acquires the force signals and transmits them wirelessly. These procedures give the system high mobility, so it can be evaluated under practical wearing conditions such as walking and waggling. Classification performance was also evaluated by distinguishing the patterns of food intake from those of the other physical activities. A series of algorithms were used to preprocess the signals, generate feature vectors, and recognize the patterns of several featured activities (chewing and winking) and other physical activities (sedentary rest, talking, and walking). The results showed that the average F1 score of the classification among the featured activities was 91.4%. We believe this approach can be useful for automatic and objective monitoring of ingestive behavior with high accuracy, as a practical means to treat ingestive problems.

Introduction

Continuous and objective monitoring of food intake is essential for maintaining energy balance in the human body, as excessive energy accumulation may cause overweight and obesity [1], which can result in various medical complications [2]. The main factors in energy imbalance are known to be excessive food intake and insufficient physical activity [3]. Various studies on monitoring daily energy expenditure have introduced automatic and objective measurement of physical activity patterns through wearable devices [4,5,6], even at the end-consumer level and in medical settings [7]. Research on monitoring food intake, however, is still confined to the laboratory, since it is difficult to detect food intake activity in a direct and objective manner. Here, we present a device design and its evaluation for monitoring food intake and physical activity patterns at a practical level in daily life.

There have been various indirect approaches to monitoring food intake through chewing and swallowing sounds [8,9,10], movement of the wrist [11,12,13], image analysis [14], and electromyography (EMG) [15]. However, these approaches were difficult to apply in daily life because of their inherent limitations: methods using sound were vulnerable to environmental noise; methods using wrist movement had difficulty distinguishing eating gestures from other physical activities; and methods using images or EMG signals were restricted by the boundaries of movement and environment. These studies demonstrated the capability of automated detection of food intake using sensors, but their practical applicability to everyday life beyond laboratory settings remained limited.

In this study, we used the patterns of temporalis muscle activity for automatic and objective monitoring of food intake. In general, the temporalis muscle, as part of the masticatory muscles, repeatedly contracts and relaxes during food intake [16,17]; thus, food intake activity can be monitored by detecting the periodic patterns of temporalis muscle activity. Recently, several studies have utilized temporalis muscle activity [18,19,20,21] using EMG or piezoelectric strain sensors attached directly to the skin. These approaches, however, were sensitive to the skin location of the electrodes or strain sensors, and the sensors were easily detached from the skin by physical movement or perspiration. Therefore, in our previous study [22], we proposed a new and effective method using a pair of glasses that sense temporalis muscle activity through two load cells inserted into both hinges. This method showed great potential for detecting food intake activity with high accuracy without touching the skin. It was also unobtrusive and non-intrusive, since we used a common glasses-type device.

In this study, we present a series of detailed protocols on how to implement the glasses-type device and how to use the patterns of temporalis muscle activity to monitor food intake and physical activity. The protocols include the hardware design and fabrication process, which consists of a 3D-printed frame for the glasses, a circuit module, and a data acquisition module, as well as the software algorithms for data processing and analysis. We further examined the classification among several featured activities (e.g., chewing, walking, and winking) to demonstrate the potential of the system as a practical means to tell minute differences between food intake and other physical activity patterns.

Protocol

NOTE: All procedures, including the use of human subjects, were accomplished in a non-invasive manner: subjects simply wore a pair of glasses. All data were acquired by measuring the force signals from load cells inserted in the glasses, which were not in direct contact with the skin. The data were wirelessly transmitted to the data recording module, which in this case was a smartphone designated for the study. None of the protocols involved in vivo/in vitro human studies. No drug or blood samples were used in the experiments. Informed consent was obtained from all subjects.

1. Manufacturing of a Sensor-integrated Circuit Module

  1. Purchase electronic components for manufacturing the circuit module.
    1. Purchase two ball-type load cells, each of which operates in a range between 0 N and 15 N, and produces an output of low differential voltage with maximum 120 mV span in a 3.3 V excitation.
      NOTE: These load cells are used to measure force signals on both the left and right sides of the glasses.
    2. Purchase two instrumentation amplifiers and two 15 kΩ gain-setting resistors.
      NOTE: The instrumentation amplifier and the gain-setting resistor are used to amplify the force signal of the load cell eight times, up to 960 mV.
    3. Purchase a micro controller unit (MCU) with wireless capability (e.g., Wi-Fi connectivity), and a 10-bit analog-to-digital converter (ADC).
      NOTE: The MCU is used to read the force signals and transmit them wirelessly to a data acquisition module. Because one analog input pin serves two analog force inputs, a multiplexer is used, as described in step 1.1.4.
    4. Purchase a two-channel analog multiplexer that handles the two input signals with one ADC pin on the MCU.
    5. Purchase a lithium-ion polymer (LiPo) battery with 3.7 V nominal voltage, 300 mAh nominal capacity, and 1 C discharge rate.
      NOTE: The battery capacity was chosen to supply more than 200 mA of current and to operate the system reliably for the approximately 1.5 h duration of an experiment.
    6. Purchase a 3.3 V voltage regulator for linear down-regulation of the 3.7 V battery voltage to the 3.3 V operating voltage of the system.
    7. Purchase five 12 kΩ surface-mounted devices (SMD) type resistors as pull-up resistors of the MCU. The resistor's footprint is 2.0 mm x 1.2 mm (size 2012).
  2. Fabricate printed circuit boards (PCBs). This step covers drawing the circuit boards and producing the artwork (i.e., the board layout, the .brd file) and the schematic (i.e., the .sch file) for PCB fabrication. A basic understanding of the process of creating artwork and schematic files is required.
    1. Draw a schematic of a left circuit containing the battery using an electronic design application as shown in Figure 1A. Save the result as both artwork (.brd) and schematic (.sch) files.
    2. Draw a schematic of a right circuit containing the MCU using an electronic design application as shown in Figure 1B. Save the result as both artwork (.brd) and schematic (.sch) files.
    3. Fabricate the circuit boards by placing an order with a PCB fabrication company.
    4. Solder every electronic component prepared in step 1.1 to the PCBs as shown in Figure 2 and Figure 3.
      CAUTION: The instrumentation amplifier is very sensitive to the soldering temperature. Make sure that lead temperature does not exceed 300 °C for 10 s during soldering, otherwise it may cause permanent damage to the component.
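The analog chain specified in steps 1.1.1–1.1.3 can be sanity-checked with quick arithmetic. The sketch below is illustrative and not part of the published protocol; the INA125 gain equation (G = 4 + 60 kΩ/RG) is taken from the manufacturer's datasheet, and the remaining values follow the text.

```python
# Sanity check of the analog chain in steps 1.1.1-1.1.3 (illustrative only):
# load-cell span, amplifier gain, and 10-bit ADC headroom.

V_EXC = 3.3              # V, excitation and ADC reference voltage
LOADCELL_SPAN = 0.120    # V, max differential output at full load (15 N)
R_GAIN = 15e3            # ohm, gain-setting resistor (step 1.1.2)

gain = 4 + 60e3 / R_GAIN             # INA125 datasheet: G = 4 + 60 kOhm/RG
amplified_span = LOADCELL_SPAN * gain

adc_lsb = V_EXC / 2 ** 10            # 10-bit ADC resolution, ~3.2 mV/count
usable_counts = amplified_span / adc_lsb

print(gain)                          # 8.0, matching "eight times"
print(round(amplified_span, 2))      # 0.96 V, matching the 960 mV note
print(round(usable_counts))          # ~298 of 1024 counts
```

This confirms that the amplified signal stays well within the 3.3 V ADC range while using a meaningful fraction of its resolution.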

2. 3D Printing of a Frame of the Glasses

  1. Draw the 3D model of the head piece of the glasses using a 3D modeling tool as shown in Figure 4A. Export the result to the .stl file format.
  2. Draw the 3D model of the left and right temples of the glasses using a 3D modeling tool as shown in Figure 4B and Figure 4C. Export the results to the .stl file format.
  3. Print the head piece and temple parts using a 3D printer and a carbon fiber filament at 240 °C of a nozzle temperature and 80 °C of a bed temperature.
    NOTE: The use of any commercial 3D printer and any types of filaments such as acrylonitrile butadiene styrene (ABS) and polylactide (PLA) can be permitted. The nozzle and bed temperatures may be adjusted according to the filament and printing conditions.
  4. Heat the tips of the temples using a hot air blower of a 180 °C setting and bend them inward about 15 degrees to contact the epidermis of the temporalis muscle like conventional glasses.
    NOTE: The bending angle need not be exact; the purpose of the curvature is to improve the fit by helping the glasses sit on the subject's head when worn. Be careful, however, as excessive bending will prevent the temples from touching the skin over the temporalis muscle, which makes it impossible to collect meaningful patterns.
  5. Repeat steps 2.1–2.4 to print two different sizes of the glasses frame to fit multiple head sizes, as shown in Figure 4.

3. Assembly of All Parts of the Glasses

  1. Insert the PCBs on both sides of the temples of the glasses using M2 bolts as shown in Figure 5.
  2. Assemble the head piece and the temples by inserting the M2 bolts into the hinge joints.
  3. Connect the left and right PCBs using the 3-pin connecting wires as shown in Figure 5.
  4. Connect the battery to the left circuit and attach it with an adhesive tape to the left temple. The mounting side of the battery is not critical, since it may vary depending on the PCB design.
  5. Cover the glasses with rubber tapes on the tip and the nose pad to add more friction with the human skin as shown in Figure 5.

4. Development of a Data Acquisition System

NOTE: The data acquisition system is composed of a data transmitting module and a data receiving module. The data transmitting module reads the time and the force signals of both sides, and then sends them to the data receiving module, which gathers the received data and writes them to .tsv files.

  1. Upload the data transmitting application to the MCU of the PCB module following the procedures in steps 4.1.1–4.1.3.
    1. Run the "GlasSense_Server" project attached to the supplementary files using a computer.
      NOTE: This project was built with Arduino integrated development environment (IDE). It provides the ability to read the time and force signals with 200 samples/s, and transmit them to the data receiving module.
    2. Connect the PCB module to the computer via a universal serial bus (USB) connector.
    3. Press the "Upload" button on the Arduino IDE to flash the programming codes from step 4.1.1 into the MCU.
  2. Upload the data receiving application to a smartphone, which is used to receive the data wirelessly, following the procedures in steps 4.2.1–4.2.3.
    1. Run the "GlasSense_Client" project attached to the supplementary files using a computer.
      NOTE: This project was built with C# programming language. It provides the ability to receive data and save the .tsv files, which contain a subject's information, such as name, sex, age, and body mass index (BMI).
    2. Connect the smartphone to the computer via a USB connector to build the data receiving application.
    3. Press the "File > Build & Run" button on the C# project to build the data receiving application to the smartphone.
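As an illustration of the record format described in the NOTE of step 4.2.1, the .tsv layout can be sketched in Python. The column names and sample values here are hypothetical; the actual files are produced by the "GlasSense_Client" C# project.

```python
import csv
import io

# Illustrative sketch of a .tsv recording file (hypothetical column names):
# each sample holds a timestamp, the left and right force signals, and an
# activity label.

def write_tsv(samples, fileobj):
    """samples: iterable of (time_s, left_force, right_force, label) rows."""
    writer = csv.writer(fileobj, delimiter="\t", lineterminator="\n")
    writer.writerow(["time", "left", "right", "label"])
    writer.writerows(samples)

# Three simulated samples of the 200 samples/s stream (5 ms apart),
# labeled 2 (sedentary chewing, per the labeling in section 5).
buf = io.StringIO()
write_tsv([(0.000, 1.2, 1.3, 2), (0.005, 1.4, 1.1, 2), (0.010, 1.3, 1.2, 2)], buf)
lines = buf.getvalue().splitlines()
print(len(lines))   # 4 (header + 3 samples)
```

A tab-separated layout keeps each sample on one line, so the receiving side can append rows as packets arrive without buffering the whole recording.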

5. Data Collection from a User Study

NOTE: This study collected six featured activity sets: sedentary rest (SR), sedentary chewing (SC), walking (W), chewing while walking (CW), sedentary talking (ST), and sedentary wink (SW).

  1. Select a pair of glasses which have an appropriate size to the user to be tested. Fine-tune the tightness with the support bolts at both the hinges (Figure 5).
    CAUTION: The force values must not exceed 15 N, since the force sensors used in this study may lose the fine linear characteristic beyond the operation range. The force values can be fine-tuned by loosening or tightening the support bolts.
  2. Record the activities of all subjects by pressing the "Record" button on the application built in step 4.2.3.
    1. Record an activity during a 120-s block and generate a recording file of it.
      1. In the case of SR, sit the subject in a chair and have them use a smartphone or read a book. Allow movement of the head, but avoid movement of the whole body.
      2. In the cases of SC and CW, have the subjects eat two food textures (toasted bread and chewing jelly) in order to reflect different food properties. Serve the toasted bread in 20 mm x 20 mm slices, a convenient size for eating.
      3. In the case of W, have the subjects walk at a speed of 4.5 km/h on a treadmill.
      4. In the case of ST, sit the subjects down and have them read a book out loud in a normal tone and speed.
      5. In the case of SW, instruct the subjects to wink in time with a 0.5-s-long bell sound played every 3 s.
    2. Generate a recording file in .tsv format from the data collected in step 5.2.1.
      NOTE: This file contains a sequence of the times at which the data were received, a left force signal, a right force signal, and a label representing the current facial activity. Temporal signals of all activities in one block of a user are depicted in Figure 6. The six featured activity sets (SR, SC, W, CW, ST, and SW) were labeled as 1, 2, 3, 4, 5, and 6, respectively. The labels are used for comparison against the predicted classes in section 8 of the protocol.
    3. Take a 60-s break after the recording block. Take off the glasses during the break, and re-wear them again when the recording block restarts.
    4. Repeat the block-and-break set of steps 5.2.1–5.2.3 four times for each activity.
    5. In the case of SW, have the subject wink repeatedly with the left eye during one block, and then wink repeatedly with the right eye during the next block.
  3. Repeat steps 5.1–5.2 and collect the data from 10 subjects. In this study, we used five males and five females; the average age was 27.9 ± 4.3 (standard deviation, s.d.) years, ranging from 19 to 33 years, and the average BMI was 21.6 ± 3.2 (s.d.) kg/m², ranging from 17.9 to 27.4 kg/m².
    NOTE: In this study, the subjects who did not have any medical conditions to chew food, wink, and walk were recruited, and this condition was used for inclusion criteria.
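The session structure of section 5 (six activities, four 120-s recording blocks per activity, separated by 60-s breaks, with the label codes noted in step 5.2.2) can be summarized in a small sketch. The helper names below are hypothetical, not taken from the study's software.

```python
# Hypothetical sketch of the recording schedule in section 5: six activities,
# four 120-s blocks per activity, separated by 60-s breaks.

ACTIVITY_LABELS = {"SR": 1, "SC": 2, "W": 3, "CW": 4, "ST": 5, "SW": 6}
BLOCK_S, BREAK_S, BLOCKS_PER_ACTIVITY = 120, 60, 4

def schedule(activities):
    """Yield (activity, label, start_time_s) for every recording block."""
    t = 0
    for act in activities:
        for _ in range(BLOCKS_PER_ACTIVITY):
            yield act, ACTIVITY_LABELS[act], t
            t += BLOCK_S + BREAK_S

blocks = list(schedule(ACTIVITY_LABELS))
print(len(blocks))   # 24 blocks: 6 activities x 4 blocks
print(blocks[0])     # ('SR', 1, 0)
```

Laying the schedule out this way makes it easy to verify that each subject contributes the same amount of labeled data per activity (4 x 120 s = 8 min).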

6. Signal Preprocessing and Segmentation

NOTE: The left and right signals are calculated separately in the following procedures.

  1. Prepare a series of temporal frames of 2 s long.
    1. Segment the 120 s recorded signals into a set of 2 s frames by hopping them at 1-s intervals using MATLAB as shown in Figure 6.
      NOTE: The segmented 2-s frames are used to extract features in section 7. The 1-s hop size was chosen so that the frames divide the signals evenly with respect to the 3-s wink interval mentioned in step 5.2.1.
    2. Apply a low-pass filter (LPF) using a 5th order Butterworth filter with a cutoff frequency of 10 Hz for each frame.
    3. Save the results of step 6.1.2 as the temporal frames for the next steps in step 7.1.
  2. Prepare a series of spectral frames.
    1. Subtract the median from the original signals of each frame to remove the preload when wearing the glasses.
      NOTE: The preload value is not required for the following frequency analysis, since it does not include any information about chewing, walking, wink, etc. It could, however, contain significant information, which can vary from subject to subject, from every setting of the glasses, and even from the moment a subject wears the glasses.
    2. Apply a Hanning window to each frame to reduce a spectral leakage on frequency analysis.
    3. Produce and save a single-sided spectrum by applying a fast Fourier transform (FFT) to each frame.
  3. Define a combination of a temporal and a spectral frame of the same time as a frame block (or simply a frame).
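The protocol implements sections 6–8 in MATLAB; a minimal Python/NumPy equivalent of the segmentation and spectral steps might look like the following. The 10 Hz, 5th-order Butterworth low-pass of step 6.1.2 is omitted here; it would typically be applied to each frame with a standard filter-design routine.

```python
import numpy as np

# Python/NumPy sketch of the segmentation and spectral steps of section 6
# (the published protocol uses MATLAB).

FS = 200                  # samples per second (step 4.1.1)
FRAME_S, HOP_S = 2, 1     # 2-s frames hopped at 1-s intervals (step 6.1.1)

def segment(signal, fs=FS):
    """Split a 1-D signal into overlapping 2-s frames with a 1-s hop."""
    flen, hop = FRAME_S * fs, HOP_S * fs
    starts = range(0, len(signal) - flen + 1, hop)
    return np.stack([signal[s:s + flen] for s in starts])

def spectral_frame(frame):
    """Median removal, Hanning window, single-sided amplitude spectrum."""
    x = frame - np.median(frame)     # remove the wearing preload (6.2.1)
    x = x * np.hanning(len(x))       # reduce spectral leakage (6.2.2)
    return np.abs(np.fft.rfft(x))    # single-sided spectrum (6.2.3)

recording = np.random.default_rng(0).standard_normal(120 * FS)  # one block
frames = segment(recording)
spectra = np.apply_along_axis(spectral_frame, 1, frames)
print(frames.shape)    # (119, 400): 119 frames of 400 samples
print(spectra.shape)   # (119, 201): single-sided spectrum of a 400-pt frame
```

A 120-s block at 200 samples/s yields 119 overlapping frames; each temporal frame and its spectrum together form one frame block as defined in step 6.3.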

7. Generation of Feature Vectors

NOTE: A feature vector is generated per frame produced in section 6 of the protocol. The left and right frames are calculated separately and combined into a feature vector in the following procedures. All the procedures were implemented in MATLAB.

  1. Extract statistical features from a temporal frame produced in step 6.1 of the protocol. A list of all 54 features is given in Table 1.
  2. Extract statistical features from a spectral frame produced in step 6.2 of the protocol. A list of all 30 features is given in Table 2.
  3. Generate an 84-dimensional feature vector by combining the temporal and spectral features above.
  4. Label the generated feature vectors from the recordings in step 5.2 of the protocol.
  5. Repeat steps 7.1–7.4 for all frame blocks to generate a series of feature vectors.
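As a reduced illustration of section 7 (the full protocol extracts the 54 temporal and 30 spectral features of Tables 1 and 2 in MATLAB), the sketch below computes a small hypothetical subset of features for the left and right signals separately and concatenates them into one vector per frame block.

```python
import numpy as np

# Hypothetical, reduced sketch of the feature extraction in section 7.

def temporal_features(frame):
    return np.array([frame.mean(), frame.std(), frame.min(), frame.max(),
                     np.median(frame), np.sqrt(np.mean(frame ** 2))])  # RMS

def spectral_features(spectrum):
    return np.array([spectrum.mean(), spectrum.std(),
                     spectrum.max(), float(np.argmax(spectrum))])  # peak bin

def feature_vector(left_frame, right_frame):
    parts = []
    for sig in (left_frame, right_frame):   # sides computed separately
        spec = np.abs(np.fft.rfft(sig - np.median(sig)))
        parts.append(temporal_features(sig))
        parts.append(spectral_features(spec))
    return np.concatenate(parts)            # one vector per frame block

rng = np.random.default_rng(0)
v = feature_vector(rng.standard_normal(400), rng.standard_normal(400))
print(v.shape)   # (20,) in this sketch; 84 dimensions in the actual protocol
```

The structure (per-side temporal features + per-side spectral features, concatenated) mirrors steps 7.1–7.3, only with far fewer features.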

8. Classification of the Activities into Classes

NOTE: This step selects the classifier model of a support vector machine (SVM) [23] by determining the parameters that give the best accuracy for the given problem (i.e., the feature vectors). The SVM is a well-known supervised machine learning technique that shows excellent generalization and robustness by using a maximum margin between the classes and a kernel function. We used a grid search and cross-validation to define the penalty parameter C and the kernel parameter γ of the radial basis function (RBF) kernel. A basic understanding of machine learning techniques and the SVM is required to perform the following procedures. Some reference materials [23,24,25] are recommended for a better understanding of machine learning techniques and the SVM algorithm. All the procedures in this section were implemented using the LibSVM [25] software package.

  1. Define a grid of pairs of (C, γ) for the grid search. Use exponentially growing sequences of C (2^-10, 2^-5, …, 2^30) and γ (2^-30, 2^-25, …, 2^10).
    NOTE: These sequences were determined heuristically.
  2. Define a pair of (C, γ) (e.g., (2^-10, 2^-30)).
  3. For the pair of (C, γ) defined in step 8.2, perform the 10-fold cross-validation scheme.
    NOTE: This scheme divides the entire set of feature vectors into 10 subsets, tests each subset against a classifier model trained on the remaining subsets, and repeats this over all the subsets, one by one. Therefore, every feature vector is tested exactly once.
    1. Divide the entire set of feature vectors into 10 subsets.
    2. Define a testing set from a subset, and a training set from the remaining 9 subsets.
    3. Define a scale vector that scales all elements of the feature vectors to the range of [0, 1] for the training set.
      NOTE: The scale vector has the same dimension as the feature vectors. It consists of a set of multipliers that linearly scale each feature (i.e., the same element across all feature vectors) to the range [0, 1]. For example, the first feature is scaled so that its values over all training feature vectors span [0, 1]. Note that the scale vector is defined from the training set only, because the testing set must be assumed to be unknown. This step increases classification accuracy by giving the features equal ranges and avoiding numerical errors during the calculation.
    4. Scale each feature of the training set to the range [0, 1] using the scale vector obtained in step 8.3.3.
    5. Scale each feature of the testing set to the range [0, 1] using the scale vector obtained in step 8.3.3.
    6. Train the training set through the SVM with the defined pair of (C, γ) in step 8.2, and then build a classifier model.
    7. Test the testing set through the SVM with the defined pair of (C, γ) in step 8.2, and the classifier model obtained from the training procedure.
    8. Calculate the classification accuracy on the testing set as the percentage of feature vectors that are correctly classified.
    9. Repeat steps 8.3.2–8.3.8 for all the subsets, and calculate the average accuracy over all subsets.
  4. Repeat steps 8.2–8.3 for all grid points of pairs of (C, γ).
  5. Find the grid point with the highest accuracy (the local maximum of the grid). All the procedures of section 8 are illustrated in Figure 7.
  6. (Optional) If the step of the grid is considered coarse, repeat the steps 8.1–8.5 in a finer grid near the local maximum found in step 8.5, and find a new local maximum of the fine grid.
  7. Compute the precision, recall, and F1 score of each class of activities from the following equations:
    precision = TP / (TP + FP)          (Equation 1)
    recall = TP / (TP + FN)          (Equation 2)
    F1 score = 2 × (precision × recall) / (precision + recall)          (Equation 3)
    where TP, FP, and FN represent true positives, false positives, and false negatives for each activity, respectively. The confusion matrix of all the activities is given in Table 3.
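The grid search, cross-validation, and scaling of steps 8.1–8.6, together with the metrics of step 8.7, can be sketched in Python. Note the stand-in: a tiny RBF kernel ridge classifier replaces the LibSVM SVM so the example stays self-contained; C and γ play analogous roles (regularization strength and kernel width), but this is not the authors' implementation, and the data and confusion matrix below are toy values, not those of Table 3.

```python
import numpy as np

def rbf(A, B, gamma):
    """RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def cv_accuracy(X, y, C, gamma, folds=10):
    """10-fold CV (steps 8.3.1-8.3.9) with [0, 1] scaling from the training set."""
    order = np.random.default_rng(1).permutation(len(X))
    accs = []
    for part in np.array_split(order, folds):
        test = np.zeros(len(X), bool)
        test[part] = True
        Xtr, ytr = X[~test], y[~test]
        lo, hi = Xtr.min(0), Xtr.max(0)          # scale vector from the
        span = np.where(hi > lo, hi - lo, 1.0)   # training set only (8.3.3)
        Xtr_s = (Xtr - lo) / span                # step 8.3.4
        Xte_s = (X[test] - lo) / span            # step 8.3.5
        K = rbf(Xtr_s, Xtr_s, gamma)
        alpha = np.linalg.solve(K + np.eye(len(Xtr_s)) / C, ytr)  # "train"
        pred = np.sign(rbf(Xte_s, Xtr_s, gamma) @ alpha)          # "test"
        accs.append(float((pred == y[test]).mean()))
    return float(np.mean(accs))

def per_class_metrics(conf):
    """Precision, recall, F1 per class (rows = true, columns = predicted)."""
    out = []
    for k in range(len(conf)):
        tp = conf[k][k]
        fp = sum(row[k] for row in conf) - tp
        fn = sum(conf[k]) - tp
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        out.append((p, r, 2 * p * r / (p + r) if p + r else 0.0))
    return out

# Toy separable two-class data and an exponentially spaced (C, gamma) grid.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
grid = [(2.0 ** c, 2.0 ** g) for c in range(-5, 11, 5) for g in range(-10, 5, 5)]
best = max(grid, key=lambda cg: cv_accuracy(X, y, *cg))
print(cv_accuracy(X, y, *best))                 # high on this separable toy data

# Per-class metrics from a toy 2x2 confusion matrix.
print(per_class_metrics([[8, 2], [1, 9]])[0])   # class 0: precision 8/9, recall 0.8
```

The key detail carried over from the protocol is that the scale vector is fitted on the training folds only and then applied to the held-out fold, so no information leaks from the testing set into training.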

Results

Through the procedures outlined in the protocol, we prepared two versions of the 3D-printed frame by differentiating the length of the head piece, LH (133 and 138 mm), and of the temples, LT (110 and 125 mm), as shown in Figure 4. This allowed us to cover several wearing conditions, which vary with the subjects' head size, shape, etc. The subjects chose the better-fitting frame for the user study. The vert...

Discussion

In this study, we first proposed the design and manufacturing process of glasses that sense the patterns of food intake and physical activities. As this study mainly focused on the data analysis to distinguish food intake from other physical activities (such as walking and winking), the sensor and data acquisition system had to support mobile recording. Thus, the system included the sensors, the MCU with wireless communication capability, and the battery. The proposed protocol provided a novel a...

Disclosures

The authors have nothing to disclose.

Acknowledgements

This work was supported by Envisible, Inc. This study was also supported by a grant of the Korean Health Technology R&D Project, Ministry of Health & Welfare, Republic of Korea (HI15C1027). This research was also supported by the National Research Foundation of Korea (NRF-2016R1A1A1A05005348).

Materials

Name / Catalog Number | Company | Comments
FSS1500NSB | Honeywell, USA | Load cell
INA125U | Texas Instruments, USA | Amplifier
ESP-07 | Shenzhen Anxinke Technology, China | MCU with Wi-Fi module
74LVC1G3157 | Nexperia, The Netherlands | Multiplexer
MP701435P | Maxpower, China | LiPo battery
U1V10F3 | Pololu, USA | Voltage regulator
Ultimaker 2 | Ultimaker, The Netherlands | 3D printer
ColorFabb XT-CF20 | ColorFabb, The Netherlands | Carbon fiber filament
iPhone 6s Plus | Apple, USA | Data acquisition device
Jelly Belly | Jelly Belly Candy Company, USA | Food texture for user study

References

  1. Sharma, A. M., Padwal, R. Obesity is a sign - over-eating is a symptom: an aetiological framework for the assessment and management of obesity. Obes Rev. 11 (5), 362-370 (2010).
  2. Pi-Sunyer, F. X., et al. Clinical guidelines on the identification, evaluation, and treatment of overweight and obesity in adults. Am J Clin Nutr. 68 (4), 899-917 (1998).
  3. McCrory, P., Strauss, B., Wahlqvist, M. L. Energy balance, food intake and obesity. Exer Obes. (1994).
  4. Albinali, F., Intille, S., Haskell, W., Rosenberger, M. Proceedings of the 12th ACM International Conference on Ubiquitous Computing. 311-320 (2010).
  5. Bonomi, A., Westerterp, K. Advances in physical activity monitoring and lifestyle interventions in obesity: a review. Int J Obes. 36 (2), 167-177 (2012).
  6. Jung, S., Lee, J., Hyeon, T., Lee, M., Kim, D. H. Fabric-Based Integrated Energy Devices for Wearable Activity Monitors. Adv Mater. 26 (36), 6329-6334 (2014).
  7. Fulk, G. D., Sazonov, E. Using sensors to measure activity in people with stroke. Top Stroke Rehabil. 18 (6), 746-757 (2011).
  8. Makeyev, O., Lopez-Meyer, P., Schuckers, S., Besio, W., Sazonov, E. Automatic food intake detection based on swallowing sounds. Biomed Signal Process Control. 7 (6), 649-656 (2012).
  9. Päßler, S., Fischer, W. Food intake activity detection using an artificial neural network. Biomed Tech (Berl). (2012).
  10. Passler, S., Fischer, W.-J. Food intake monitoring: Automated chew event detection in chewing sounds. IEEE J Biomed Health Inform. 18 (1), 278-289 (2014).
  11. Kadomura, A., et al. CHI'13 Extended Abstracts on Human Factors in Computing Systems. 1551-1556 (2013).
  12. Fontana, J. M., Farooq, M., Sazonov, E. Automatic ingestion monitor: A novel wearable device for monitoring of ingestive behavior. IEEE Trans Biomed Eng. 61 (6), 1772-1779 (2014).
  13. Shen, Y., Salley, J., Muth, E., Hoover, A. Assessing the Accuracy of a Wrist Motion Tracking Method for Counting Bites across Demographic and Food Variables. IEEE J Biomed Health Inform. (2016).
  14. Farooq, M., Sazonov, E. International Conference on Bioinformatics and Biomedical Engineering. 464-472 (2017).
  15. Grigoriadis, A., Johansson, R. S., Trulsson, M. Temporal profile and amplitude of human masseter muscle activity is adapted to food properties during individual chewing cycles. J Oral Rehab. 41 (5), 367-373 (2014).
  16. Strini, P. J. S. A., Strini, P. J. S. A., de Souza Barbosa, T., Gavião, M. B. D. Assessment of thickness and function of masticatory and cervical muscles in adults with and without temporomandibular disorders. Arch Oral Biol. 58 (9), 1100-1108 (2013).
  17. Standring, S. Gray's Anatomy: The Anatomical Basis of Clinical Practice. (2015).
  18. Farooq, M., Sazonov, E. A novel wearable device for food intake and physical activity recognition. Sensors. 16 (7), 1067 (2016).
  19. Zhang, R., Amft, O. Proceedings of the 2016 ACM International Symposium on Wearable Computers. 50-52 (2016).
  20. Farooq, M., Sazonov, E. Segmentation and Characterization of Chewing Bouts by Monitoring Temporalis Muscle Using Smart Glasses with Piezoelectric Sensor. IEEE J Biomed Health Inform. (2016).
  21. Huang, Q., Wang, W., Zhang, Q. Your Glasses Know Your Diet: Dietary Monitoring using Electromyography Sensors. IEEE Internet of Things Journal. (2017).
  22. Chung, J., et al. A glasses-type wearable device for monitoring the patterns of food intake and facial activity. Scientific Reports. 7, 41690 (2017).
  23. Cristianini, N., Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. (2000).
  24. Giannakopoulos, T., Pikrakis, A. Introduction to Audio Analysis: A MATLAB Approach. (2014).
  25. Chang, C.-C., Lin, C.-J. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2 (3), 27 (2011).
  26. Po, J., et al. Time-frequency analysis of chewing activity in the natural environment. J Dent Res. 90 (10), 1206-1210 (2011).
  27. Ji, T. Frequency and velocity of people walking. Struct Eng. 84 (3), 36-40 (2005).
  28. Knoblauch, R., Pietrucha, M., Nitzburg, M. Field studies of pedestrian walking speed and start-up time. Transp Res Rec. 1538, 27-38 (1996).
