Published on 28.02.18 in Vol 6, No 2 (2018): February


    Original Paper

    Cardiac Auscultation Using Smartphones: Pilot Study

    1Division of Cardiology, Department of Internal Medicine, Seoul National University Bundang Hospital, Seongnam-si, Republic of Korea

    2School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

    3Division of Cardiology, Cardiovascular Center, Seoul National University Bundang Hospital, Seongnam-si, Republic of Korea

    Corresponding Author:

    Jung-Won Suh, MD, PhD

    Division of Cardiology

    Cardiovascular Center

    Seoul National University Bundang Hospital

    166 Gumi-ro


    Seongnam-si, 13620

    Republic of Korea

    Phone: 82 31 787 7016

    Fax: 82 31 787 4051


    Abstract

    Background: Cardiac auscultation is a cost-effective, noninvasive screening tool that can provide information about cardiovascular hemodynamics and disease. However, with advances in imaging and laboratory tests, the importance of cardiac auscultation is less appreciated in clinical practice. The widespread use of smartphones provides opportunities for nonmedical expert users to perform self-examination before hospital visits.

    Objective: The objective of our study was to assess the feasibility of cardiac auscultation using smartphones with no add-on devices for use at the prehospital stage.

    Methods: We performed a pilot study of patients with normal and pathologic heart sounds. Heart sounds were recorded on the skin of the chest wall using 3 smartphones: the Samsung Galaxy S5 and Galaxy S6, and the LG G3. Recorded heart sounds were processed and classified by a diagnostic algorithm using convolutional neural networks. We assessed diagnostic accuracy, as well as sensitivity, specificity, and predictive values.

    Results: A total of 46 participants underwent heart sound recording. After audio file processing, 30 of 46 (65%) heart sounds were proven interpretable. Atrial fibrillation and diastolic murmur were significantly associated with failure to acquire interpretable heart sounds. The diagnostic algorithm classified the heart sounds into the correct category with high accuracy: Galaxy S5, 90% (95% CI 73%-98%); Galaxy S6, 87% (95% CI 69%-96%); and LG G3, 90% (95% CI 73%-98%). Sensitivity, specificity, positive predictive value, and negative predictive value were also acceptable for the 3 devices.

    Conclusions: Cardiac auscultation using smartphones was feasible. Discrimination using convolutional neural networks yielded high diagnostic accuracy. However, using the built-in microphones alone, the acquisition of reproducible and interpretable heart sounds was still a major challenge.

    Trial Registration: NCT03273803

    JMIR Mhealth Uhealth 2018;6(2):e49



    Introduction

    Cardiovascular diseases are the most common causes of death, accounting for 31.5% of all deaths globally [1,2]. In 2015, in the United States, 92.1 million adults were estimated to have cardiovascular diseases, and 43.9% of the adult population is projected to have some form of cardiovascular disease by 2030 [3].

    The stethoscope has played a key role in the physical examination of patients with cardiac disease since its invention by Rene Laënnec in 1816 [4]. The opening and closing of the heart valves, as well as blood flow and turbulence through the valves or intracardiac defects, generate rhythmic vibrations, which can be heard via the stethoscope [5]. Cardiac auscultation using the stethoscope enables hemodynamic assessment of the heart and can help in the diagnosis of cardiovascular diseases [6].

    Recently, the advent of noninvasive imaging modalities has diminished the importance of cardiac auscultation in clinical practice [7,8]. Devices such as the handheld ultrasound have enabled detailed on-site visualization of the cardiac anatomy and are further threatening the role of the stethoscope as a bedside examination tool [9,10]. As a result, cardiac auscultation is less appreciated, and physicians are decreasingly proficient and confident in their examination skills [11-13]. Studies have also suggested a low level of interobserver agreement regarding cardiac murmurs [14].

    The smartphone has become a popular device. As of 2015, 64% of Americans and 88% of South Koreans were reported to own a smartphone [15]. Smartphones are frequently used for health purposes, such as counseling or information searches [16]. The modern smartphone has excellent processing capability and is equipped with multiple high-quality components, such as microphones, display screens, and sound speakers. There have been efforts to use smartphone health apps for self-diagnosis [17]. However, some of these software apps have shown poor credibility, and their role in health care is not yet established [18].

    Therefore, we sought to develop a smartphone app for cardiac auscultation that could be used by non–medical expert users. Although the importance of cardiac auscultation is declining in the hospital setting, it could serve as a screening tool at the prehospital stage if it can be performed easily by smartphone users themselves. This was a pilot study to test the feasibility of cardiac auscultation using the built-in microphones of smartphones without any add-on devices. The study tested (1) whether heart sound recording using a smartphone is feasible, and (2) whether an automated diagnostic algorithm can classify heart sounds with acceptable accuracy. Heart sounds were recorded using the smartphone microphones and processed electronically. We developed a diagnostic algorithm by applying convolutional neural networks, which we used for the diagnosis of the recorded heart sounds. In this study, we assessed the diagnostic accuracy of the algorithm.


    Methods

    Description of the App

    We developed a smartphone app named CPstethoscope for this study (Figure 1). The app runs on the Android operating system (Google Inc) and is used for research purposes only. Heart sounds were recorded by placing the phone on the skin of the chest, using the built-in microphone. In most smartphones, microphones are located on the lower border of the device. Heart sounds can be best heard in the intercostal spaces. The instructions for this app indicated the anatomical landmarks and auscultation areas. While maintaining the contact of the lower margin of the smartphone with the chest wall, users are required to manipulate the device to start and stop recording. Users can see on the screen whether their heart sounds are properly being captured.

    Study Design

    This was a pilot study designed to demonstrate the feasibility of smartphone-based recording and identification of heart sounds. We sought to enroll 50 participants who were 18 years of age or older and had undergone an electrocardiogram (ECG) and echocardiography within the previous 6 months at Seoul National University Bundang Hospital, Seongnam-si, Republic of Korea. Ultimately, we sought to develop an app for self-diagnosis that users could perform themselves. However, for this pilot study, heart sounds were recorded by researchers who were familiar with the use of the app and understood the principles of cardiac auscultation. The investigators who recorded heart sounds were not aware of the patients’ diagnoses. Eligible patients were invited to participate in the study by the research doctors at the outpatient clinics or on the wards. After participants provided informed consent, their heart sounds were recorded in a quiet room that was free from environmental noise.

    Reference heart sounds were recorded by participating cardiologists (SHK, YY, GYC, and JWS) using an electronic stethoscope (3M Littmann Electronic Stethoscope Model 3200; 3M, St Paul, MN, USA). Study devices were the Samsung Galaxy S5 (model SM-G900) and Galaxy S6 (SM-G920; Samsung Electronics, Suwon, Republic of Korea), and LG G3 (LG-F400; LG Electronics, Seoul, Republic of Korea).

    We chose the best site for recording from among the aortic, pulmonic, mitral, and tricuspid areas. The built-in microphones were placed directly on the skin of the chest wall for detection of the heart sound. We tested all 3 devices with all study participants. There were no prespecified orders for tested devices. No add-on devices were used. Recordings were made for approximately 10 seconds after stable heart sounds were displayed on the screen. Final diagnoses of the reference heart sounds were confirmed by a second cardiologist (SHK and GYC) by listening to the audio files and matching them with the echocardiography reports.

    Figure 1. Heart sound recording using a smartphone app. Left: illustration of how the heart sounds were recorded in this study. Smartphones were placed directly on the chest wall; a dedicated app was used with no add-on devices. Middle and right: representative screenshots of the app (called CPstethoscope) developed for this study. ECG: electrocardiogram.

    This study was approved by the Seoul National University Bundang Hospital institutional review board on August 24, 2016 (B-1609-361-303), and all participants provided written informed consent. We registered this study protocol (ClinicalTrials.gov NCT03273803). The corresponding author had full access to all the data in the study and takes responsibility for its integrity and the data analysis.

    Data Processing and Identification

    We transferred the recorded audio files to a desktop computer for data processing. After subtracting environmental and thermal noises using fast Fourier transformation, we constructed time-domain noise-reduced heart sounds. We detected the first and second heart sounds without an ECG reference, using a previously reported algorithm [19]. Time-domain signals were then transformed into frequency-domain spectrogram features. We developed a diagnostic algorithm using convolutional neural networks, a class of artificial neural networks that mimics the connections of neurons in the visual cortex of the brain. The network received 40×40 heart sound spectrogram matrices through 1 input layer. We processed these matrices with 2 convolution–max pool layers whose kernel size was 5×5; the numbers of kernels in the 2 convolutional layers were 8 and 16, respectively. A dense, fully connected layer followed the second convolution–max pool layer, and we appended a final readout layer with 5 nodes corresponding to the 5 diagnostic categories. We calculated the training loss of the network as softmax cross-entropy using the values from the readout layer. Finally, we trained the network with the Adam optimizer at a learning rate of 0.001 [20]. We used the TensorFlow version 1.2 Python library to compose this network [21]. Training was conducted using demonstration heart sounds obtained from open databases (The Auscultation Assistant, C Wilkes, University of California, Los Angeles, Los Angeles, CA, USA; Heart Sound & Murmur Library, University of Michigan Medical School, Ann Arbor, MI, USA; Easy auscultation, MedEdu LLC, Westborough, MA, USA; 50 Heart and Lung Sounds Library, 3M, St Paul, MN, USA; and Teaching Heart Auscultation to Health Professionals, J Moore, Rady Children’s Hospital, San Diego, CA, USA).
    We classified heart sounds into 5 categories: normal, third heart sound, fourth heart sound (S4), systolic murmur, and diastolic murmur. The algorithm showed 81% diagnostic accuracy with the training sets. Testing was performed with the samples acquired from this study.
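    The network architecture described above can be sketched as a single forward pass in NumPy. This is a minimal illustration only, not the authors' TensorFlow implementation: the weights are random, training is omitted, and the width of the hidden dense layer (64) is an assumption, as the paper does not report it.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_relu(x, kernels):
    """'Same'-padded 2-D convolution followed by ReLU.
    x: (H, W, C_in); kernels: (kH, kW, C_in, C_out)."""
    kh, kw, _, cout = kernels.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw), (0, 0)))
    h, w = x.shape[:2]
    out = np.zeros((h, w, cout))
    for i in range(h):
        for j in range(w):
            # correlate each 5x5 patch with all output kernels at once
            out[i, j] = np.tensordot(xp[i:i + kh, j:j + kw], kernels,
                                     axes=([0, 1, 2], [0, 1, 2]))
    return np.maximum(out, 0.0)

def maxpool2(x):
    """2x2 max pooling with stride 2."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# 40x40 spectrogram matrix with 1 channel, as in the paper
spec = rng.standard_normal((40, 40, 1))

x = maxpool2(conv2d_relu(spec, 0.1 * rng.standard_normal((5, 5, 1, 8))))  # -> (20, 20, 8)
x = maxpool2(conv2d_relu(x, 0.1 * rng.standard_normal((5, 5, 8, 16))))    # -> (10, 10, 16)

flat = x.reshape(-1)                                                      # 10*10*16 = 1600 features
hidden = np.maximum(flat @ (0.05 * rng.standard_normal((1600, 64))), 0.0) # dense layer (width assumed)
logits = hidden @ (0.05 * rng.standard_normal((64, 5)))                   # readout: 5 categories
probs = softmax(logits)
print(probs)
```

    Under the accuracy definition in the Statistical Analysis section, a diagnosis would be counted as correct when the probability assigned to the true category is 50% or more.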

    Statistical Analysis

    We calculated continuous variables as mean (SD), and categorical variables as counts and percentages. Reference heart sounds were adjudicated by experienced cardiologists. The primary end point of the study was the diagnostic accuracy of the system for heart sound classification. We considered the diagnosis to be accurate when the algorithm classified a heart sound into the correct category with 50% or more probability. We also estimated the performance of the system using sensitivity, specificity, positive predictive value, and negative predictive value. We defined the study end points as follows: diagnostic accuracy = (TP+TN)/(TP+FP+FN+TN); sensitivity = TP/(TP+FN); specificity = TN/(TN+FP); positive predictive value = TP/(TP+FP); and negative predictive value = TN/(TN+FN), where TP indicates true positive; TN, true negative; FP, false positive; and FN, false negative. We calculated the diagnostic values as simple proportions with corresponding 95% confidence intervals. Statistical analyses were performed using the R programming language version 3.2.4 (The R Foundation for Statistical Computing). A 2-sided P<.05 was considered statistically significant.
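    The end point definitions above translate directly into code. A small sketch with illustrative 2×2 counts (the counts are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """2x2 contingency-table metrics as defined in the Statistical Analysis section."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts for illustration only
m = diagnostic_metrics(tp=16, fp=0, fn=1, tn=13)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```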


    Results

    Patient Profiles

    A total of 46 patients participated in this study. Multimedia Appendix 1 shows the Standards for Reporting of Diagnostic Accuracy Studies checklist and flow diagram for the study. Similar numbers of men and women were enrolled, and their median age was 65.5 years. Table 1 describes the participants’ characteristics: 20 (44%) had systolic murmurs, 20 (44%) had normal heart sounds, 5 (11%) had diastolic murmurs, and 1 (2%) had S4.

    After audio file processing, including noise reduction, we confirmed 30 of 46 heart sounds (65%) as interpretable. The reasons for failure to acquire interpretable heart sounds included the small amplitude of the acquired heart sounds, background noise, and the participant’s poor cooperation. Younger age tended to be associated with better interpretability, while body mass index had no impact. Significant factors for uninterpretability included atrial fibrillation and diastolic murmur.

    Diagnostic Performance

    Figure 2 shows the performance of the diagnostic algorithm by device. Heart sounds recorded with the 3 different study devices yielded consistently high diagnostic accuracy: Samsung Galaxy S5, 90% (95% CI 73%-98%); Samsung Galaxy S6, 87% (95% CI 69%-96%); and LG G3, 90% (95% CI 73%-98%). The Samsung Galaxy S5 and S6 showed a high sensitivity (S5: 94%, 95% CI 70%-100%; S6: 94%, 95% CI 70%-100%), while the LG G3 showed a high specificity (100%, 95% CI 68%-100%). The diagnostic performance did not vary significantly according to the study participants’ age or sex (Table 2). Figure 3 shows representative waveforms and spectrograms of heart sounds (audio files are provided in Multimedia Appendix 2, Multimedia Appendix 3, and Multimedia Appendix 4). No meaningful adverse events occurred during the study.

    Table 1. Characteristics of study participants.
    Figure 2. Diagnostic performance of each study device. Bold broken lines indicate diagnostic accuracy. FN: false negative; FP: false positive; TN: true negative; TP: true positive.
    Table 2. Diagnostic performance (%) of each study device.
    Figure 3. Representative phonocardiograms and spectrograms. (A) Normal heart sounds from the aortic area of a 22-year-old man with a history of vasovagal syncope. (B) Midsystolic ejection murmur from the aortic area of an 83-year-old woman with aortic stenosis, which was classified as a systolic murmur. (C) Systolic murmur from the mitral area of a 63-year-old woman with mitral valve prolapse and mitral regurgitation, which was classified as a systolic murmur.


    Discussion

    Principal Findings

    This was a pilot study to assess the feasibility of heart sound recording and identification using smartphones. We found that acquiring reliable heart sound recordings was the greatest difficulty encountered. However, the results of this study suggest that, once interpretable heart sounds are acquired, cardiac murmur diagnosis using convolutional neural networks yields high diagnostic accuracy.


    With the widespread use of smartphones, an increasing number of health care apps have been developed. There were approximately 165,000 health-related apps available in 2016 [22]. These health care-related apps cover a variety of aspects of medicine, including prevention, diagnosis, monitoring, treatment, compensation, and investigation [23]. However, there are concerns that many of these apps are not evidence based, and it is difficult to find any information on the research used in their development [18]. This study was part of our effort to develop a diagnostic app that can differentiate normal and abnormal heart sounds. We sought to validate the diagnostic performance of the recording and identification system among patients in real-world clinical practice.

    There have been attempts to use add-on gadgets in conjunction with smartphones for health care, but most of these have not been widely accepted. Modern smartphones are equipped with high-quality built-in microphones that can capture low-pitch, low-amplitude heart sounds. We presumed that an app relying solely on the built-in hardware would have advantages with respect to accessibility and acceptability. However, this study showed that the acquisition of good-quality heart sounds is still far from perfect. A variety of factors appeared to affect heart sound recording. First, background noise is difficult to reduce systematically and, thus, should be avoided during recordings. It was necessary to record on the skin of the chest wall, and the choice of the appropriate location was essential. Second, respiration was not as important as expected. The frequency spectrum of lung sounds (100-2500 Hz) is usually distant from that of heart sounds (20-100 Hz) [24]. Thus, lung sounds were easily attenuated by applying a simple band-pass filter. Third, patient factors, such as age, body mass index, and the presence of arrhythmia, were also crucial. Fourth, our system failed to recognize heart sounds with diastolic murmur, although the sample size was small.
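    The band-pass filtering mentioned above can be sketched with a crude FFT-based filter in NumPy. This is an illustration under stated assumptions, not the study's actual signal chain: the sampling rate and the synthetic 50 Hz "heart" and 500 Hz "lung" tones are invented for demonstration.

```python
import numpy as np

def bandpass_fft(signal, fs, low=20.0, high=100.0):
    """Crude band-pass: zero all FFT bins outside [low, high] Hz and invert.
    Keeps the heart sound band (~20-100 Hz) and discards lung sounds (100-2500 Hz)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

fs = 8000                                  # assumed sampling rate (Hz)
t = np.arange(fs) / fs                     # 1 second of samples
heart = np.sin(2 * np.pi * 50 * t)         # synthetic 50 Hz "heart" component
lung = 0.5 * np.sin(2 * np.pi * 500 * t)   # synthetic 500 Hz "lung" component

filtered = bandpass_fft(heart + lung, fs)
print(np.corrcoef(filtered, heart)[0, 1])  # close to 1: the 500 Hz tone is removed
```

    In practice, a windowed filter (for example, a Butterworth design) would avoid the ringing artifacts that hard spectral truncation can introduce on real, nonstationary recordings.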

    The use of machine learning in clinical medicine is rapidly increasing, with a marked increase in the amount of available data [25]. The interpretation of digitized images and development of prediction models are the leading applications of machine learning in the field of medicine [26,27]. This study suggests that the interpretation of audio signals derived from humans may be a potential application of artificial intelligence.

    Comparison With Prior Work

    To our knowledge, this study is the first attempt to discriminate heart sounds using a deep learning–based diagnostic algorithm. We showed that the diagnostic algorithm was feasible and reproducible. We found only 1 app for cardiac auscultation that enables heart sound recording, which is called the iStethoscope [28]. It amplifies and filters heart sounds in real time for better quality, but it is not capable of diagnosing heart murmurs. AliveCor Kardia, a device approved by the US Food and Drug Administration, enables ECG monitoring and, according to 1 clinical trial, significantly improves the detection of incident atrial fibrillation [29]. Azimpour et al performed an elegant study in which they used an electronic stethoscope to detect stenosis of coronary arteries [30]. Although the study idea was interesting, it may be difficult to use in commonly available smartphones due to the deep location of the coronary arteries and the low amplitude of the acoustic signals. There are several apps that enable heart rate monitoring. Some require specialized devices, and others simply use built-in smartphone cameras and flashes, also known as photoplethysmography. However, their accuracy and clinical application still require further investigation [31,32].

    Limitations

    This study had several limitations. First, the sample size was too small to represent a variety of cardiac murmurs. Second, the enrollment of study participants was selective rather than consecutive; thus, there is a possibility of selection bias toward participants with clear and unambiguous heart sounds. Third, we used the app developed for this study only to record heart sounds; in this pilot study, the audio files were moved to a central server and subsequently analyzed. The app therefore needs to be developed into an all-in-one system, from acquisition to diagnosis, before real-world use. Fourth, we obtained the heart sounds ourselves, although we ultimately seek to develop an app that can be used by members of the general population.

    Fifth, this study showed variations in performance with different devices, which seemed to be caused by the differing specifications of each smartphone. This is one of the major hurdles in the development of an app that can be used on a variety of smartphones from different manufacturers. Our pilot testing indicated that the quality of recorded heart sounds depended on the quality of the built-in microphones. For this reason, we included 3 high-end smartphones in this study. System performance may be worse with inexpensive devices. In addition, we tested only devices running the Android operating system, not the Apple iPhone, which is one of the most widely used smartphones worldwide.

    Future Research Steps

    The app described in this study requires further development. An all-in-one system comprising recording, audio processing, and a diagnostic algorithm is crucial. Instructions that help users record their heart sounds by themselves are also needed. We are improving the ability of the app to acquire interpretable heart sounds and to diagnose atrial fibrillation. Another potential application is the use of the diagnostic algorithm with commercialized electronic stethoscopes operated by medical personnel [33]. This may improve the quality of clinical practice by assisting early-career doctors or nurses in assessing patients.

    Conclusions

    The concept of cardiac auscultation using smartphones is feasible. Indeed, diagnosis using convolutional neural networks yielded a high diagnostic accuracy. However, use of the built-in microphones alone was limited in terms of reproducible acquisition of interpretable heart sounds.

    Acknowledgments

    This study was funded by a grant from the Seoul National University Bundang Hospital Research Fund (16-2016-006).

    Conflicts of Interest

    None declared.

    Multimedia Appendix 1

    Standards for Reporting of Diagnostic Accuracy Studies study checklist and study diagram.

    PDF File (Adobe PDF File), 963KB

    Multimedia Appendix 2

    Audio file 1 (normal heart sound).

    WAV File, 573KB

    Multimedia Appendix 3

    Audio file 2 (aortic stenosis).

    WAV File, 672KB

    Multimedia Appendix 4

    Audio file 3 (mitral regurgitation).

    WAV File, 704KB

    References

    1. GBD 2013 DALYs and HALE Collaborators, Murray CJL, Barber RM, Foreman KJ, Abbasoglu OA, Abd-Allah F, et al. Global, regional, and national disability-adjusted life years (DALYs) for 306 diseases and injuries and healthy life expectancy (HALE) for 188 countries, 1990-2013: quantifying the epidemiological transition. Lancet 2015 Nov 28;386(10009):2145-2191. [CrossRef] [Medline]
    2. World Health Organization. The top 10 causes of death: fact sheet. 2017 Jan.   URL: [accessed 2018-02-09] [WebCite Cache]
    3. Benjamin EJ, Blaha MJ, Chiuve SE, Cushman M, Das SR, Deo R, et al. Heart disease and stroke statistics-2017 update: a report from the American Heart Association. Circulation 2017 Mar 07;135(10):e146-e603. [CrossRef] [Medline]
    4. Roguin A. Rene Theophile Hyacinthe Laënnec (1781-1826): the man behind the stethoscope. Clin Med Res 2006 Sep;4(3):230-235 [FREE Full text] [Medline]
    5. Luisada AA, MacCanon DM, Kumar S, Feigen LP. Changing views on the mechanism of the first and second heart sounds. Am Heart J 1974 Oct;88(4):503-514. [Medline]
    6. Shaver JA. Cardiac auscultation: a cost-effective diagnostic skill. Curr Probl Cardiol 1995 Jul;20(7):441-530. [Medline]
    7. Frishman WH. Is the stethoscope becoming an outdated diagnostic tool? Am J Med 2015 Jul;128(7):668-669. [CrossRef] [Medline]
    8. Bank I, Vliegen HW, Bruschke AVG. The 200th anniversary of the stethoscope: can this low-tech device survive in the high-tech 21st century? Eur Heart J 2016 Dec 14;37(47):3536-3543. [CrossRef] [Medline]
    9. Liebo MJ, Israel RL, Lillie EO, Smith MR, Rubenson DS, Topol EJ. Is pocket mobile echocardiography the next-generation stethoscope? A cross-sectional comparison of rapidly acquired images with standard transthoracic echocardiography. Ann Intern Med 2011 Jul 05;155(1):33-38 [FREE Full text] [CrossRef] [Medline]
    10. Mehta M, Jacobson T, Peters D, Le E, Chadderdon S, Allen AJ, et al. Handheld ultrasound versus physical examination in patients referred for transthoracic echocardiography for a suspected cardiac condition. JACC Cardiovasc Imaging 2014 Oct;7(10):983-990 [FREE Full text] [CrossRef] [Medline]
    11. Vukanovic-Criley JM, Criley S, Warde CM, Boker JR, Guevara-Matheus L, Churchill WH, et al. Competency in cardiac examination skills in medical students, trainees, physicians, and faculty: a multicenter study. Arch Intern Med 2006 Mar 27;166(6):610-616. [CrossRef] [Medline]
    12. Mangione S, Nieman LZ. Cardiac auscultatory skills of internal medicine and family practice trainees. A comparison of diagnostic proficiency. JAMA 1997 Sep 03;278(9):717-722. [Medline]
    13. Mangione S, Nieman LZ, Gracely E, Kaye D. The teaching and practice of cardiac auscultation during internal medicine and cardiology training. A nationwide survey. Ann Intern Med 1993 Jul 01;119(1):47-54. [Medline]
    14. Lok CE, Morgan CD, Ranganathan N. The accuracy and interobserver agreement in detecting the 'gallop sounds' by cardiac auscultation. Chest 1998 Nov;114(5):1283-1288. [Medline]
    15. Poushter J. Smartphone ownership and Internet usage continues to climb in emerging economies but advanced economies still have higher rates of technology use. Washington, DC: Pew Research Center; 2016 Feb 22.   URL: http:/​/assets.​​wp-content/​uploads/​sites/​2/​2016/​02/​pew_research_center_global_technology_report_final_february_22__2016.​pdf [accessed 2018-02-09] [WebCite Cache]
    16. Smith A, Page D. US smartphone use in 2015. Washington, DC: Pew Research Center; 2015 Apr 01.   URL: [accessed 2018-02-09] [WebCite Cache]
    17. Lupton D, Jutel A. 'It's like having a physician in your pocket!' A critical analysis of self-diagnosis smartphone apps. Soc Sci Med 2015 May;133:128-135. [CrossRef] [Medline]
    18. Armstrong S. Which app should I use? BMJ 2015 Sep 09;351:h4597. [Medline]
    19. Barabasa C, Jafari M, Plumbley MD. A robust method for S1/S2 heart sounds detection without ECG reference based on music beat tracking. 2012 Presented at: 10th International Symposium on Electronics and Telecommunications (ISETC); Nov 15-16, 2012; Timisoara, Romania p. 15-16.
    20. Kingma D, Ba J. Adam: a method for stochastic optimization. arXiv:14126980. 2017 Jan 30.   URL: [accessed 2018-02-09] [WebCite Cache]
    21. Rampasek L, Goldenberg A. TensorFlow: biology's gateway to deep learning? Cell Syst 2016 Jan 27;2(1):12-14 [FREE Full text] [CrossRef] [Medline]
    22. The wonder drug: a digital revolution in health care is speeding up. London, UK: The Economist; 2017 Mar 02.   URL: https:/​/www.​​news/​business/​21717990-telemedicine-predictive-diagnostics-wearable-sensors-and-host-new-apps-will-transform-how [accessed 2017-11-28] [WebCite Cache]
    23. Medicines and Healthcare Products Regulatory Agency. Guidance: medical device stand-alone software including apps (including IVDMDs). 2017 Sep.   URL: https:/​/www.​​government/​uploads/​system/​uploads/​attachment_data/​file/​648465/​Software_flow_chart_Ed_1-04.​pdf [accessed 2018-02-09] [WebCite Cache]
    24. Reichert S, Gass R, Brandt C, Andrès E. Analysis of respiratory sounds: state of the art. Clin Med Circ Respirat Pulm Med 2008 May 16;2:45-58 [FREE Full text] [Medline]
    25. Obermeyer Z, Emanuel EJ. Predicting the future - big data, machine learning, and clinical medicine. N Engl J Med 2016 Sep 29;375(13):1216-1219 [FREE Full text] [CrossRef] [Medline]
    26. Chen JH, Asch SM. Machine learning and prediction in medicine - beyond the peak of inflated expectations. N Engl J Med 2017 Jun 29;376(26):2507-2509. [CrossRef] [Medline]
    27. Goldstein BA, Navar AM, Carter RE. Moving beyond regression techniques in cardiovascular risk prediction: applying machine learning to address analytic challenges. Eur Heart J 2017 Jun 14;38(23):1805-1814. [CrossRef] [Medline]
    28. Bentley PJ. iStethoscope: a demonstration of the use of mobile devices for auscultation. Methods Mol Biol 2015;1256:293-303. [CrossRef] [Medline]
    29. Halcox JPJ, Wareham K, Cardew A, Gilmore M, Barry JP, Phillips C, et al. Assessment of remote heart rhythm sampling using the AliveCor heart monitor to screen for atrial fibrillation: the REHEARSE-AF Study. Circulation 2017 Nov 07;136(19):1784-1794. [CrossRef] [Medline]
    30. Azimpour F, Caldwell E, Tawfik P, Duval S, Wilson RF. Audible coronary artery stenosis. Am J Med 2016 May;129(5):515-521.e3. [CrossRef] [Medline]
    31. Gorny AW, Liew SJ, Tan CS, Müller-Riemenschneider F. Fitbit Charge HR wireless heart rate monitor: validation study conducted under free-living conditions. JMIR Mhealth Uhealth 2017 Oct 20;5(10):e157 [FREE Full text] [CrossRef] [Medline]
    32. Vandenberk T, Stans J, Van Schelvergem G, Pelckmans C, Smeets CJ, Lanssens D, et al. Clinical validation of heart rate apps: mixed-methods evaluation study. JMIR Mhealth Uhealth 2017 Aug 25;5(8):e129 [FREE Full text] [CrossRef] [Medline]
    33. Leng S, Tan RS, Chai KTC, Wang C, Ghista D, Zhong L. The electronic stethoscope. Biomed Eng Online 2015 Jul 10;14:66 [FREE Full text] [CrossRef] [Medline]

    Abbreviations

    ECG: electrocardiogram
    FN: false negative
    FP: false positive
    S4: fourth heart sound
    TN: true negative
    TP: true positive

    Edited by S Kitsiou; submitted 12.09.17; peer-reviewed by P Athilingam, N Yonemoto, M Younis; comments to author 04.11.17; revised version received 29.12.17; accepted 09.01.18; published 28.02.18

    ©Si-Hyuck Kang, Byunggill Joe, Yeonyee Yoon, Goo-Yeong Cho, Insik Shin, Jung-Won Suh. Originally published in JMIR Mhealth and Uhealth, 28.02.2018.

    This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mhealth and uhealth, is properly cited. The complete bibliographic information, a link to the original publication, as well as this copyright and license information must be included.