Current Bioinformatics

ISSN (Print): 1574-8936
ISSN (Online): 2212-392X

General Review Article

Advancements in Yoga Pose Estimation Using Artificial Intelligence: A Survey

Author(s): Vinay Chamola*, Egna Praneeth Gummana, Akshay Madan, Bijay Kumar Rout and Joel José Puga Coelho Rodrigues

Volume 19, Issue 3, 2024

Published on: 13 July 2023

Pages: 264-280 (17 pages)

DOI: 10.2174/1574893618666230508105440

Abstract

Human pose estimation is a prominent area of research in computer vision and sensing. Recent advances in the field have benefited applications in sports, surveillance, healthcare, and related domains. Yoga is an ancient discipline intended to improve physical, mental, and spiritual well-being, and it involves many kinds of asanas, or postures, that a practitioner can perform. Pose estimation can therefore help users assume Yoga postures more accurately: a practitioner's current posture can be detected in real time, and the pose estimation method can provide corrective feedback when mistakes are made. Yoga pose estimation can also support remote instruction by an expert teacher, which is especially valuable during a pandemic. This paper reviews machine learning and artificial intelligence-enabled techniques for real-time pose estimation and the research pursued in this area in recent years. We classify these techniques by the input they use to estimate an individual's pose, discuss multiple Yoga posture estimation systems in detail, and cover the keypoint estimation techniques most commonly used in the existing literature. In addition, we discuss the real-time performance of the presented works. The paper further discusses the datasets and evaluation metrics available for pose estimation.
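
The feedback loop described above, extracting body keypoints from a live camera feed and comparing the resulting joint angles against a target posture, can be illustrated with a minimal sketch. The Python example below assumes the MediaPipe Pose (BlazePose) package as the keypoint extractor; the target angle and tolerance are hypothetical placeholder values, not figures from any of the surveyed systems.

import cv2
import numpy as np
import mediapipe as mp

mp_pose = mp.solutions.pose

def joint_angle(a, b, c):
    # Angle (in degrees) at point b formed by the segments b->a and b->c.
    a, b, c = np.array(a), np.array(b), np.array(c)
    ba, bc = a - b, c - b
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical target: a front-knee bend of roughly 90 degrees, with a 15-degree tolerance.
TARGET_LEFT_KNEE = 90.0
TOLERANCE = 15.0

cap = cv2.VideoCapture(0)  # live webcam feed
with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if result.pose_landmarks:
            lm = result.pose_landmarks.landmark
            L = mp_pose.PoseLandmark
            hip = (lm[L.LEFT_HIP].x, lm[L.LEFT_HIP].y)
            knee = (lm[L.LEFT_KNEE].x, lm[L.LEFT_KNEE].y)
            ankle = (lm[L.LEFT_ANKLE].x, lm[L.LEFT_ANKLE].y)
            angle = joint_angle(hip, knee, ankle)
            if abs(angle - TARGET_LEFT_KNEE) > TOLERANCE:
                # Corrective feedback: printed here, overlaid on screen in a real system.
                print(f"Adjust left knee: {angle:.0f} deg (target {TARGET_LEFT_KNEE:.0f} deg)")
        cv2.imshow("Yoga pose feedback (sketch)", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
            break
cap.release()
cv2.destroyAllWindows()

A full system would check angles for several joints against per-asana templates and present visual rather than console feedback to the practitioner; this sketch only shows the keypoint-to-feedback pipeline the abstract refers to.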

