Background: Despite the ubiquity of smartphones, there is little guidance for how to design mobile health apps to increase use. Specifically, knowing which features users expect, which grab their attention, which encourage use (via predicted use or through positive app evaluations), and which signal beneficial action possibilities can guide and focus app development efforts.
Objective: We investigated what features users expect and how the design (prototypicality) impacts app adoption.
Methods: In a web-based survey, we elicited expectations, including presence and placement, for 12 app features. Thereafter, participants (n=462) viewed 2 health apps (high prototypicality similar to top downloaded apps vs low prototypicality similar to research interventions) and reported willingness to download, attention, and predicted use of app features. Participants rated both apps (high and low) for aesthetics, ease of use, usefulness, perceived affordances, and intentions to use.
Results: Most participants (425/462, 92%) expected features for navigation or personal settings (eg, menu) in specific regions (eg, top corners). Features with summary graphs or statistics were also expected by many (395-396/462, 86%), with a center placement expectation. A feature to “share with friends” was least expected among participants (203/462, 44%). Features fell into 4 unique categories based on attention and predicted use, including essential features with high (>50% or >231 of 462) predicted use and attention (eg, calorie trackers), flashy features with high attention but lower predicted use (eg, links to specific diets), functional features with modest attention and low use (eg, settings), and mundane features with low attention and use (eg, discover tabs). When given a choice, 347 of 462 (75%) participants would download the high-prototypicality app. High-prototypicality apps (vs low) led to greater aesthetics, ease of use, usefulness, and intentions to use (for all, P<.001). Participants thought that high-prototypicality apps had more perceived affordances.
Conclusions: Intervention designs that fail to meet a threshold of mHealth expectations will be dismissed as less usable or beneficial. Individuals who download health apps have shared expectations for features that should be there, as well as where these features should appear. Meeting these expectations can improve app evaluations and encourage use. Our typology should guide presence and placement of expected app features to signal value and increase use to impact preventive health behaviors. Features that will likely be used and are attention-worthy—essential, flashy, and functional—should be prioritized during app development.
With the rapid increase in the use of mobile technologies and smartphones for health information [, ], mobile apps present one possible solution for communicating preventive health information to the public [ - ]. Over the past decade, hundreds of mobile health apps have been produced—many designed by public health interventionists and researchers for cancer and other chronic disease prevention by encouraging healthy eating and physical activity [ - ]. While it remains unclear how successful these apps have been in reducing the incidence of cancer or improving health outcomes for other chronic diseases, there is a call for an increase in the accountability, reliability, and standardization of evidence-based health apps developed by the research community [ - ].
Despite the potential of mobile health (mHealth) apps for communicating up-to-date, evidence-based prevention information and helping users maintain or implement healthy habits, there is very little guidance on how these intervention apps should be designed to ensure adoption. Designing apps so they are appealing and used is a critical first step for apps to have an impact [ ]. Visual and interactive design influences initial user evaluations, which are made within milliseconds and serve as gateways for subsequent user engagement (eg, use) of apps as mHealth interventions [ - ]. Ignoring design can detrimentally impact the communication of evidence-based science to health consumers and undercut the effectiveness of mHealth interventions; yet, few mHealth interventions mirror the look and function of popular, industry-developed apps. Thus, our study objective was to explore app feature expectations and examine how meeting expectations with high- (vs low-) prototypicality apps may influence predictors of app adoption.
How apps are designed (visual display) and the features they include (interactivity) can influence users’ experience of and willingness to engage with apps. Individuals use salient cues that match their expectations, or mental models, to evaluate web-based information [, ]. These expectations are met (or not) by the level of prototypicality, or the degree to which an app resembles others in its comparative group [ , ]. Based on included design cues, in the form of interactive features, apps can range from having high prototypicality (looks like others and meets expectations well) to low prototypicality (does not resemble others nor meet expectations) [ ]. Users are often quicker and more willing to attend to apps that have high prototypicality—when designs align with one’s mental models for how an app should look and function [ - ]. Indeed, users look for and pay attention to expected, salient features as guides to orient themselves to novel apps and platforms [ ]. When these expected features are present, they increase familiarity and potential use of the app [ - ]; however, little is known about how attention to specific features translates into individual feature use versus overall app use.
The perceived affordances, or perceived action possibilities (eg, learn health tips), that users sense from app features also directly impact a user’s experience and likelihood to engage with a design [- ]. In mediated communication, including apps, design communicates through interface symbols what the viewer can do or gain from using the app. Thus, not only must mHealth interventions have evidence-based content to drive use, but apps must also incorporate an evidence-based design to appeal to and engage audiences.
Design features influence the appeal or perceived aesthetics of the app and the likelihood for use [, ]. To be effective, health apps must first be used. It is necessary to understand how objective design features (the visible objects or designs in an app) influence subjective evaluations for initial appeal on the basis of theories of aesthetics [ - ] and antecedents for technology adoption in the Technology Acceptance Model (TAM); that is, perceived ease of use, perceived usefulness, and intentions to use [ , ]. Aesthetics, including facets for how information is organized and displayed, function as a precursor to perceptions for technology acceptance [ , ]. Accounting for users’ expectations of features and placements within apps will shed light on how prototypicality impacts evaluations critical for future adoption.
Utility also drives evaluation of an app’s usefulness and potential adoption, according to Nielsen et al’s well-established usability work. Utility refers to the inclusion of necessary features—whether an app provides the elements an individual needs or wants. When utility is paired with usability—when features are perceived as easy (perceived ease of use) and pleasant (aesthetics) to use—individuals are encouraged to engage or interact. In other words, interactivity is dependent on a user’s willingness to engage with specific design features, provided they are present (utility) and function properly (usability). In our work, we focus on the former—how app features that are needed (utility) or expected (prototypical) are the gateway to potential adoption.
Goal of This Study
In sum, engagement with and use of an app is driven by initial impressions and perceptions of what the app can do for the user. Top-rated industry-developed apps often incorporate a user-focused sleekness and are feature loaded; in comparison, pared-down mHealth interventions—despite the inclusion of theory-based content—may not appeal to audiences who need them. When resources are not abundant, health researchers and interventionists need evidence-based guidance for design investments. Thus, we explored app expectations for the presence and placement of potential features, how these features garner attention and predict use, and how high-prototypicality apps (vs low-prototypicality apps) may influence app adoption through app choice and predictors of use. We asked the following research questions: What features do people expect and where do they expect these features to be placed (RQ1)? What specific features are associated with attention and predicted use of the app features (RQ2)? Last, we also examined whether high prototypicality, resembling that of top downloaded apps (vs low prototypicality, resembling research intervention apps), would increase app choice (H1), aesthetics (H2), perceived ease of use (H3), perceived usefulness (H4), intentions to use the app (H5), and perceived affordances or action possibilities with the app (H6).
To explore app feature expectations and examine how meeting expectations with high-prototypicality apps (vs low-prototypicality apps) may influence predictors of app adoption, we conducted a web-based survey with an embedded within-subjects experiment. Participants first responded to survey items about expectations for specific app features to answer RQ1-2 and an app choice (preview of apps with high vs low prototypicality) to address H1. Participants were then asked to rate their perceptions of the app overall, with the exposure order of condition (high vs low) randomized, to address H2-6.
Using G*Power, our a priori power analysis indicated a required sample of at least 450 participants to detect a small-to-medium effect (Cohen f=0.14) for within-subjects comparison of the high and low prototypicality apps. Participants (n=462) were recruited from Amazon’s Mechanical Turk (MTurk), a web-based crowdsourcing platform often used for social science research [- ], through a link open to individuals over the age of 18 years. Participants were eligible if they were aged 18 years or older, resided in the United States, and had a task approval rate of 85% or higher on the MTurk platform, which indicates valid participation or completion of previous tasks. Participants received US $3 as compensation for their time (approximately 15 minutes). The institutional review board of University of North Carolina approved this study.
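The a priori power target can be approximated with a short script. The sketch below treats the comparison as a 2-group F test using the noncentral F distribution; this is a simplification of G*Power's repeated-measures calculation (which also accounts for the correlation between repeated measures), so it is illustrative only. The effect size and alpha come from the text.

```python
from scipy.stats import f as f_dist, ncf

def anova_power(effect_f, n, k=2, alpha=0.05):
    """Approximate power of a one-way F test given Cohen f, total n, and k groups."""
    df1, df2 = k - 1, n - k
    lam = effect_f ** 2 * n                     # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)    # critical F under the null
    return 1 - ncf.cdf(f_crit, df1, df2, lam)   # probability of exceeding it

# With the achieved sample of n=462, power for f=0.14 exceeds .80
print(round(anova_power(0.14, 462), 2))
```

Because the within-subjects design uses each participant as their own control, the true power at n=462 is at least as high as this between-subjects approximation suggests.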
Following consent, participants selected features (from a list) they would expect to find in a health app. For all expected features, participants were shown an outline of a smartphone and asked where that feature would be located in a typical health app. Participants were then randomly assigned to 1 of the 2 app types for the remainder of the study: fitness apps or nutrition apps. Participants selected the app they would most like to download from 2 previews (prototypical: high vs low). On subsequent pages, participants indicated what features grabbed their attention and what features they predicted they would use (predicted use) on their preferred app. Participants were shown the app previews again (one at a time, in a random order) and asked closed-ended items for perceived aesthetics, ease of use, usefulness, intentions to use the app in the future, and perceived affordances. Lastly, demographic, health, and health app information were collected from all participants. Closed-ended items and response options are described below (see Measures) and provided in.
To assess the impact of prototypicality on app perceptions, app previews were created for 4 fictitious brands: 2 fitness and 2 nutrition health apps (). We designed previews for each app as they would appear if searched for in a mobile app store, including the app icon, brand name, and 2 preview screens of the app. High-prototypicality apps were developed on the basis of structure and content from top-rated apps (Aaptiv, Lifesum) in the Health & Fitness section of the App Store. Low-prototypicality apps were designed to mirror the mobile interface of an interactive intervention (Carolina Health Assessment and Research Tool) for data collection and tailored feedback for preventive health behaviors [ ].
Feature Selection and Placement
Participants selected features from a list they “would expect to find in a health app.” The list was generated from structured interviews about fitness tracker apps and included 12 features: menu, search option, settings option, logo, log/input data option, share with friend option, summary statistics, summary graph/chart, calendar, page title, login, and user profile. For each expected (ie, selected) feature, respondents were shown a smartphone screen divided into a grid of 60 distinct clickable hot spot regions. Respondents selected as many regions of each screen as necessary for expected placement.
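A hot-spot grid of this kind can be implemented by quantizing a click coordinate into a cell index. A minimal sketch, assuming a 6-column by 10-row grid for the 60 regions (the exact grid dimensions are not specified in the text):

```python
def region_index(x, y, width, height, cols=6, rows=10):
    """Map a click at (x, y) on a width x height screen to one of cols*rows regions.

    Regions are numbered left to right, top to bottom, starting at 0.
    """
    col = min(int(x / width * cols), cols - 1)    # clamp clicks on the right edge
    row = min(int(y / height * rows), rows - 1)   # clamp clicks on the bottom edge
    return row * cols + col
```

Recording the set of selected indices per feature then lets placement expectations be aggregated across respondents as simple counts per region.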
Participants were instructed to “select the app you would most likely download.” The 2 response options were the low prototypical app and the high prototypical app, for their randomly assigned app type (physical activity or nutrition).
Feature Attention and Predicted Use
To identify features that attracted participants’ attention and predicted use, participants were shown the app preview they selected during app choice. Participants were asked, “What elements in the app caught your attention?” and instructed to “select all elements that grabbed your attention within the app preview.” On the following page of the questionnaire, the app preview was shown again; participants were asked, “What elements in the app do you think you would use?” and selected the elements in the preview. As performed in previous studies [, ], a priori hot spots were constructed around each app feature ( ). Hot spots were not visible until participants selected the feature, at which point the feature was highlighted.
The validated Visual Aesthetics of Website Inventory (VisAWI) assessed 4 facets of aesthetics with 18 items for simplicity, “The layout appears well structured”; diversity, “The layout appears dynamic”; colorfulness, “The colors are appealing”; and craftsmanship, “The app is designed with care” . Response options ranged from “strongly disagree” (coded as 1) to “strongly agree” (5). Responses were averaged for each facet (α=.76-.90).
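Scale reliabilities of the kind reported here (Cronbach α) can be computed directly from item responses. A minimal sketch with made-up ratings, not the study data:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a respondents x items matrix of Likert ratings."""
    r = np.asarray(ratings, dtype=float)
    k = r.shape[1]                            # number of items in the scale
    item_vars = r.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = r.sum(axis=1).var(ddof=1)     # variance of respondents' sum scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses from 5 participants to a 3-item facet
demo = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 3], [1, 2, 1]]
```

As in the paper, the per-facet score for each respondent is then simply the mean of that respondent's item ratings.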
Perceived Ease of Use
Participants’ perceived ease of use, or belief that using the technology would not be difficult, was assessed with 3 adapted Likert-type items : “The app was clear and understandable,” “Getting the app to function does not require much mental effort,” and “I find the app to be easy to use.” Response options ranged from “strongly disagree” (coded as 1) to “strongly agree” (5). Responses were averaged (α=.84-.87).
The degree to which one believes that the technology will enhance their life was assessed with 3 adapted Likert-type items : “Using the app would improve my health,” “Using the app would make me more likely to meet my health goals,” and “I would find the app useful for achieving my health goals.” Response options ranged from “strongly disagree” (coded as 1) to “strongly agree” (5). Responses were averaged (α=.85-.88).
Intentions to Use
Intentions or plans to use the app “if the app were available” were assessed with 2 Likert-type items . Participants rated their agreement to statements that they “intend” and “predict” they would use the app next month with response options that ranged from “strongly disagree” (coded as 1) to “strongly agree” (5). Responses were averaged (r=0.88-0.93).
Participants reported perceived action possibilities from the app with the item, “This app would allow me to…” Response options included a list of 13 dichotomous items generated from evidence-based behavior change techniques and reasons for eHealth adoption, such as “set health goals,” “track my progress,” “earn rewards,” and “share my health data with friends” [, ].
Demographic items assessed age, gender, race, ethnicity, and education. Additionally, we asked about one’s health and mental health status with the item: “in general, would you say your [mental] health is…” Response options ranged from “very poor” (coded as 1) to “very good” (5). We also asked whether participants “use a health app” with a “yes”/“no” response option.
We used n (%) values to describe app feature expectations, placement, app choices, attention, predicted use, and perceived affordances. Frequencies for attention vs predicted use and for perceived affordances of the high- vs low-prototypicality apps were compared with McNemar chi-square tests. Prior to this analysis of direct effects of prototypicality, a multivariate analysis of variance (MANOVA) was used to determine whether there were any significant differences in perceptions among the app types (fitness and nutrition) across aesthetics and TAM outcomes. No differences were observed for high prototypicality (aesthetics outcomes: Wilks λ=0.98; F4,454=1.08; P=.10; TAM outcomes: Wilks λ=0.99; F3,454=1.08; P=.36) or low prototypicality (aesthetics outcomes: Wilks λ=0.99; F3,452=0.68; P=.61; TAM outcomes: Wilks λ=1.00; F3,454=0.10; P=.96), so data within conditions (high vs low prototypicality) were combined for analyses. Two repeated measure (RM) MANOVAs and analyses of variance (ANOVAs) were then conducted with high vs low prototypicality as the predictor; 1 for aesthetic outcomes (simplicity, diversity, colorfulness, and craftsmanship) and 1 for technology acceptance outcomes (perceived ease of use, usefulness, and intentions to use).
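The McNemar comparisons, including the exact binomial fallback noted later in the results, can be reproduced along these lines. A sketch taking counts of discordant pairs b (attention only) and c (predicted use only); the counts in the test below are hypothetical, not taken from the study data:

```python
from scipy.stats import chi2, binomtest

def mcnemar(b, c, min_discordant=25):
    """McNemar test from discordant-pair counts b and c.

    Uses the chi-square statistic (df=1) when b + c >= min_discordant,
    otherwise an exact two-tailed binomial test, mirroring the paper's rule.
    """
    if b + c < min_discordant:
        return None, binomtest(b, b + c, 0.5).pvalue  # exact test, no statistic
    stat = (b - c) ** 2 / (b + c)                     # uncorrected chi-square
    return stat, chi2.sf(stat, df=1)
```

Note that the test depends only on the discordant pairs, not on the marginal attention and use percentages reported in the tables.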
Participants (n=462) were aged 18 to 70 years (mean age 35.03 years, SD 10.02 years) and half of them were female (50%, 232/462). Participants identified as White (78%, 358/462), African American (13%, 58/462), Asian (8%, 35/462), or multiracial/other; additionally, 48 of 462 participants (10%) reported their ethnicity as Hispanic. Education levels included high school to some college (33%, 153/462), associate degree (13%, 60/462), bachelor’s degree (43%, 197/462), master’s degree (10%, 45/462), and doctoral or professional degree (2%, 7/462). Most participants reported their health as good (48%, 220/462) or very good (17%, 78/462), although some did report that their health was fair (30%, 138/462), poor (4%, 20/462), or very poor (1%, 4/462). Over half of the participants (53%, 248/462) reported currently using health apps.
App Feature Selection and Placement
Each of the 12 features was selected by at least 44% (203/462) of participants (RQ1). The majority of participants (92%, 425/462) selected a menu, settings options, and user profile; notably, these features (ie, menu, settings option, and user profile) were selected an equal number of times but not by the same respondents. Additional features were expected, including the following: login (88%, 406/462), summary graph/chart (86%, 396/462), summary statistics (86%, 395/462), input data feature (80%, 368/462), logo (77%, 357/462), calendar (77%, 354/462), search (69%, 321/462), page title (62%, 286/462), and an option to “share with friends” (44%, 203/462).
Most features were expected in similar locations () among participants who expected those features (n=425). Menus were consistently expected in the top-left, while search and login options were expected in the top-right corner. Other features—title, logo, profile, and settings—were expected along the top, in the center or at either side. Sharing capability was expected to appear in the bottom-right of the app, although expectations of where to log or input data were more diffuse. Participants expected summary statistics, graphs, and calendars to be shown across the center of the app.
Attention and Predicted Use of App Features
Respondents selected features of their preferred app that caught their attention and that they predicted they would use (and ). Attention and predicted use patterns of the high-prototypicality apps indicate 4 distinct categories of mHealth app features. Mundane features are those that have similarly low attention and predicted use values. In the fitness app, the footer menu options “Discover” and “Saved” represent mundane features. Functional features have higher predicted use than attention, but predicted use remains low (<50%, <231/462) among participants, such as the settings icon in both apps. Flashy features are elements identified as attention-capturing by most participants (>50%, >231/462), where attention is significantly higher than predicted use. In the nutrition app, large photo-based links for the “Ketogenic Easy” and “Ketogenic Medium” diets represent flashy features. Essential features are elements that most participants (>50%, >231/462) thought they would use, where predicted use is higher than or similar to attention, as with the “Calorie Tracker” in the nutrition app. Not included in these 4 categories are elements that have higher attention than predicted use, but where attention remains low (<50%, <231/462); the only features with these characteristics were logos and app titles, as well as 2 features partially obscured in the design.
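The four-category typology can be expressed as a simple decision rule over attention and predicted-use proportions. A simplified sketch: the 0.05 tolerance stands in for the significance tests used in the paper and is an assumption, not a value from the study:

```python
def classify_feature(attention, use, majority=0.5, eps=0.05):
    """Assign a feature to the typology from its attention and predicted-use proportions."""
    if use >= majority and use + eps >= attention:
        return "essential"    # high use, at least comparable to attention
    if attention >= majority and attention > use:
        return "flashy"       # high attention that exceeds use
    if use - attention > eps:
        return "functional"   # use exceeds attention but stays below a majority
    if abs(use - attention) <= eps:
        return "mundane"      # both low and similar
    return "uncategorized"    # low attention that still exceeds use (eg, logos, titles)
```

Applied to the proportions reported below, the rule recovers the paper's examples: the nutrition app's calorie tracker (0.86, 0.88) is essential, the Ketogenic Easy link (0.60, 0.39) is flashy, the fitness settings icon (0.30, 0.42) is functional, and the “Discover” tab (0.29, 0.32) is mundane.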
| App | Feature | Attention, n (%) | Predicted use, n (%) | Chi-square (df) | P value |
| --- | --- | --- | --- | --- | --- |
| Fitness | Footer menu option “Discover” | 48 (29) | 52 (32) | 0.20 (1) | .66 |
| Fitness | Footer menu option “Saved” | 41 (25) | 42 (26) | 0.00 (1) | >.99 |
| Nutrition | Footer menu option “Plus” | 23 (13) | 28 (15) | 0.46 (1) | .50 |
| Fitness | Footer menu option “Settings” | 49 (30) | 69 (42) | 7.22 (1) | .007 |
| Nutrition | Search icon | 21 (12) | 38 (21) | 5.95 (1) | .02 |
| Nutrition | Footer menu option “Profile” | 25 (14) | 58 (32) | 19.32 (1) | <.001 |
| Fitness | Activity 1 “Outdoor Running” | 109 (66) | 63 (38) | 32.66 (1) | <.001 |
| Fitness | Activity 2 “Treadmill” | 104 (63) | 53 (32) | 38.46 (1) | <.001 |
| Nutrition | Ketogenic Easy feature | 109 (60) | 71 (39) | 20.74 (1) | <.001 |
| Fitness | Performance Tracker feature | 142 (86) | 157 (95) | N/Aa | .003 |
| Nutrition | Calorie Tracker feature | 156 (86) | 160 (88) | N/A | .54 |
| Nutrition | Calendar feature | 88 (48) | 114 (63) | 11.57 (1) | .001 |

aN/A: chi-square values are not applicable if fewer than 25 discordant pairs; binomial distributions are used for exact 2-tailed significance in these comparisons.
Effects of Prototypicality on App Choice, Aesthetics, and Technology Acceptance
When asked to choose between the high-prototypicality app and one designed to look more like a typical health intervention (low prototypicality), 347 of 462 (75%) participants indicated they would download the high-prototypicality app (H1).
Prototypicality had a significant main effect on all facets of aesthetics and technology acceptance outcomes (). High-prototypicality apps (vs low-prototypicality apps) had significantly higher ratings of aesthetics for simplicity (F1,455=291; P<.001), diversity (F1,455=578; P<.001), colorfulness (F1,455=295; P<.001), and craftsmanship (F1,455=462; P<.001). Similarly, the high-prototypicality app was rated higher than the low prototypicality app for perceived ease of use (F1,455=84; P<.001), usefulness (F1,455=116, P<.001), and intentions to use the app (F1,455=170; P<.001). H2-5 were supported.
| Attributes | High prototypicality, mean (SD) | Low prototypicality, mean (SD) | F test (df) | P value |
| --- | --- | --- | --- | --- |
| Simplicity | 4.26 (0.74) | 3.19 (1.00) | 291 (1,455) | <.001 |
| Diversity | 4.10 (0.74) | 2.48 (1.09) | 578 (1,455) | <.001 |
| Colorfulness | 4.38 (0.74) | 3.41 (0.94) | 295 (1,455) | <.001 |
| Craftsmanship | 4.25 (0.75) | 2.83 (1.07) | 462 (1,455) | <.001 |
| Perceived ease of use | 4.26 (0.75) | 3.74 (0.97) | 84 (1,455) | <.001 |
| Perceived usefulness | 4.08 (0.74) | 3.58 (0.91) | 116 (1,455) | <.001 |
| Intentions to use | 3.83 (1.00) | 2.95 (1.28) | 170 (1,455) | <.001 |
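With only 2 within-subjects conditions, each repeated-measures ANOVA here reduces to a paired t test, with F = t². A sketch with made-up ratings (not the study data) illustrating the equivalence:

```python
from scipy.stats import ttest_rel

# Hypothetical per-participant ratings of the high- and low-prototypicality apps
high = [4.0, 5.0, 3.0, 4.0]
low = [3.0, 4.0, 2.0, 4.0]

t, p = ttest_rel(high, low)  # paired t test across conditions
F = t ** 2                   # the F statistic a 2-level RM ANOVA would report (df1 = 1)
```

This is why each F test in the table above carries 1 numerator degree of freedom.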
Impact of Prototypicality on Perceived Affordances
Participants reported that the app would allow them to carry out various actions in both the high- and low-prototypicality design (). Almost all perceived affordances had significantly higher endorsement for the high-prototypicality (vs low-prototypicality) apps (P<.01), partially supporting H6; to “learn health tips” was the only affordance endorsed similarly in both conditions. The most highly endorsed affordances (>60% across conditions or >277/462) were the following: “track my progress” (high: 93%, 430/462; low: 70%, 325/462), “set health goals” (high: 88%, 405/462; low: 73%, 339/462), “improve my health” (high: 74%, 342/462; low: 63%, 293/462), “learn health tips” (high: 73%, 336/462; low: 76%, 353/462), and “give me more information about my health” (high: 70%, 325/462; low: 63%, 292/462).
| Affordances | High prototypicality, n (%) | Low prototypicality, n (%) | Chi-square (df) | P value |
| --- | --- | --- | --- | --- |
| Track my progress | 430 (93.1) | 325 (70.3) | 79.70 (1) | <.001 |
| Set health goals | 405 (87.7) | 339 (73.4) | 30.75 (1) | <.001 |
| Improve my health | 342 (74.0) | 293 (63.4) | 20.15 (1) | <.001 |
| Learn health tips | 336 (72.7) | 353 (76.4) | 2.59 (1) | .11 |
| Give me more information about my health | 325 (70.3) | 292 (63.2) | 6.86 (1) | .009 |
| Create new health habits | 310 (67.1) | 265 (57.4) | 11.06 (1) | .001 |
| Increase my control over my health | 323 (69.9) | 239 (51.7) | 46.86 (1) | <.001 |
| Make meeting my health goals easier | 292 (63.2) | 195 (42.2) | 51.28 (1) | <.001 |
| Have fun with technology | 256 (55.4) | 135 (29.2) | 79.57 (1) | <.001 |
| Interact with others | 120 (26.0) | 47 (10.2) | 56.63 (1) | <.001 |
| Share my health data with friends | 100 (21.6) | 47 (10.2) | 35.12 (1) | <.001 |
| Share my health data with a healthcare provider | 74 (16.0) | 50 (10.8) | 11.50 (1) | .001 |
| Earn rewards | 57 (12.3) | 34 (7.4) | 10.30 (1) | .001 |
For mHealth to have an impact on reducing risk for chronic disease, intervention apps must be designed to effectively reach wide audiences to promote preventive health behaviors. Identifying the impact of prototypicality—the extent to which apps meet expectations—on app reception and adoption is a critical step in mHealth intervention research. Designs that match users’ perceptions of organization and content evoke prototypicality and can influence intentions to use web-based tools, including health resources [, , ]. Our study shows that prototypicality serves as an antecedent to positive app reception and technology acceptance in preventive health apps. We also found that designs that contradict what users typically expect from apps (eg, low prototypicality) lead to a suboptimal first impression and diminish users’ evaluations [ ].
It is likely that the actual use of multiple apps influences preventive behavior; thus, identifying key features, or classes of features, to increase orientation and facilitate ease of use and usefulness is needed to guide intervention development. Our findings for user attention and predicted use of features point to 4 distinct types of mHealth features that should be considered when developing mHealth. Of these, 3 categories serve as useful features of mHealth: driving attention, predicted use, or both.
Functional features have higher predicted use than attention, and a majority of participants “would expect to find” these sorts of features in a health app. To meet expectations, salient functional features such as search options, settings, and menus should be included, in their expected corner placement. Even if these features do not draw attention as much as others, users still expect to see them in mobile apps, and meeting baseline expectations can reduce time and cognitive demand for initial orientation and web-based information processing. Arguably, these functional features constitute a prototypical background environment that likely helps users orient themselves within new and unfamiliar apps.
Flashy features garner significantly more attention from users; these attention-capturing features may be most influential for positive initial impressions. Flashy features often incorporated photographs or novel design elements, which have been shown to increase attention and appeal [, ]. Beyond meeting expectations, flashy features represent the unique category that should be treated differently in designs: using visuals to highlight salient benefits and perceived affordances.
Essential features—including those selected by most users as features that they predict to use and garner their attention—are also important components of mHealth designs. It is important to note, however, that the essential features seen in this study are all familiar: calendar, calorie counter, and performance tracker. Even though some designers may assume that features as basic as a calendar are not worth the time and effort to include, respondents strongly indicated that these features remain important components of mHealth apps.
Our findings also highlight a distinct category that can be skipped or given little attention in development: mundane features. Mundane features, such as app title and tabs for discovering or saving, elicited little attention and predicted use and are a good indication not to waste precious resources on these elements.
Potential mHealth users had consistent expectations for some features by region (eg, middle or top corner), but not necessarily a specific location. Essential features, such as a calendar, were expected to be shown across the center of the app. Other features, such as functional features including search and settings, had narrower placement expectations. Understanding these location expectations is critical to ensure that feature placement matches individuals’ mental models.
Higher prototypicality led to higher ratings for aesthetics, perceived ease of use, usefulness, and intentions to use apps. Individuals also expect greater function, possibilities, and valuable outcomes from apps with higher prototypicality. Low prototypicality led to lower ratings for aesthetics, perceived ease of use, and perceived usefulness. Additionally, low prototypicality runs the risk of users initially dismissing the app. Negative product evaluations—where expectations are not met—can also lower satisfaction with product interaction.
This study is limited to the specific health apps manipulated herein; these apps do not represent all available mHealth strategies. Although we evaluated placement, attention, and predicted use, we could have reviewed more features within apps. Our findings are also limited to a convenience sample of participants of a web-based panel. It is possible that our participants have more digital literacy or skills than the general population or diverse subgroups.
Future studies should consider assessing actual use after download, instead of solely predicted use. Replication with more diverse audiences, varied app designs, and expanded methodological approaches is needed to generalize our findings. Notably, future research should account for additional personal characteristics, such as health literacy or the ability to obtain, process, and understand health information, to examine how these skills affect both first impressions for app adoption and actual use to determine the effectiveness of health apps.
Mobile apps can communicate critical health information for preventive health behaviors through readily available and consumer-friendly tools. Apps that are thoughtfully designed to match potential users’ expectations, with increased prototypicality, will support app use. Conversely, designs that do not include a threshold of expected features will be dismissed, thus undermining the potential of app-based interventions. Designing mHealth apps to account for user expectations will increase the likelihood of adoption and impact from actual use. Prototypicality is positively related to favorable reception and expectations for future use of health apps. These findings provide guidance for user expectations of feature presence and location.
This work was supported by an award from the University of North Carolina Lineberger Comprehensive Cancer Center. The funders had no role in the study design, data collection, and analysis, decision to publish, or preparation of the manuscript.
Conflicts of Interest
App hot spots and survey items. PDF File (Adobe PDF File), 680 KB
- Smith A. U.S. Smartphone Use in 2015. Pew Research Center. 2015 Apr 01. URL: https://www.pewresearch.org/internet/2015/04/01/us-smartphone-use-in-2015/ [accessed 2021-10-01]
- Fox S, Duggan M. Mobile Health 2012. Pew Research Center. 2012 Nov 08. URL: https://www.pewresearch.org/internet/2012/11/08/mobile-health-2012/ [accessed 2021-10-01]
- Krishna S, Boren SA, Balas EA. Healthcare via cell phones: a systematic review. Telemed J E Health 2009 Apr;15(3):231-240. [CrossRef] [Medline]
- Ribeiro N, Moreira L, Almeida AMP, Santos-Silva F. Pilot study of a smartphone-based intervention to promote cancer prevention behaviours. Int J Med Inform 2017 Dec;108:125-133. [CrossRef] [Medline]
- Chen J, Gemming L, Hanning R, Allman-Farinelli M. Smartphone apps and the nutrition care process: Current perspectives and future considerations. Patient Educ Couns 2018 Apr;101(4):750-757. [CrossRef] [Medline]
- Bender JL, Yue RYK, To MJ, Deacken L, Jadad AR. A lot of action, but not in the right direction: systematic review and content analysis of smartphone applications for the prevention, detection, and management of cancer. J Med Internet Res 2013 Dec 23;15(12):e287 [FREE Full text] [CrossRef] [Medline]
- Krebs P, Duncan DT. Health App Use Among US Mobile Phone Owners: A National Survey. JMIR Mhealth Uhealth 2015 Nov 04;3(4):e101 [FREE Full text] [CrossRef] [Medline]
- Carlo AD, Hosseini Ghomi R, Renn BN, Areán PA. By the numbers: ratings and utilization of behavioral health mobile applications. NPJ Digit Med 2019;2:54 [FREE Full text] [CrossRef] [Medline]
- Pandey A, Hasan S, Dubey D, Sarangi S. Smartphone apps as a source of cancer information: changing trends in health information-seeking behavior. J Cancer Educ 2013 Mar;28(1):138-142. [CrossRef] [Medline]
- Riley WT, Oh A, Aklin WM, Wolff-Hughes DL. National Institutes of Health Support of Digital Health Behavior Research. Health Educ Behav 2019 Dec;46(2_suppl):12-19. [CrossRef] [Medline]
- Covolo L, Ceretti E, Moneda M, Castaldi S, Gelatti U. Does evidence support the use of mobile phone apps as a driver for promoting healthy lifestyles from a public health perspective? A systematic review of Randomized Control Trials. Patient Educ Couns 2017 Dec;100(12):2231-2243. [CrossRef] [Medline]
- Iten G, Troendle A, Opwis K. Aesthetics in context? The role of aesthetics and usage mode for a website's success. Interacting with Computers 2018;30(2):133-149. [CrossRef]
- Lindgaard G, Fernandes G, Dudek C, Brown J. Attention web designers: You have 50 milliseconds to make a good first impression!. Behav Inf Technol 2006 Mar;25(2):115-126. [CrossRef]
- Lazard A, Mackert M. User evaluations of design complexity: the impact of visual perceptions for effective online health communication. Int J Med Inform 2014 Oct;83(10):726-735. [CrossRef] [Medline]
- Lazard AJ, Brennen JS, Troutman Adams E, Love B. Cues for Increasing Social Presence for Mobile Health App Adoption. J Health Commun 2020 Feb 01;25(2):136-149. [CrossRef] [Medline]
- Roth SP, Schmutz P, Pauwels SL, Bargas-Avila JA, Opwis K. Mental models for web objects: Where do users expect to find the most frequent objects in online shops, news portals, and company web pages? Interact Comput 2010 Mar;22(2):140-152. [CrossRef]
- Leder H, Belke B, Oeberst A, Augustin D. A model of aesthetic appreciation and aesthetic judgments. Br J Psychol 2004 Nov;95(Pt 4):489-508. [CrossRef] [Medline]
- Lazard AJ, Mackert MS. e-health first impressions and visual evaluations. Commun Des Q Rev 2015 Sep 17;3(4):25-34. [CrossRef]
- Tuch AN, Presslaber EE, Stöcklin M, Opwis K, Bargas-Avila JA. The role of visual complexity and prototypicality regarding first impression of websites: Working towards understanding aesthetic judgments. Int J Hum Comput Stud 2012 Nov;70(11):794-811. [CrossRef]
- Oulasvirta A, Karkkainen L, Laarni J. Expectations and memory in link search. Comput Hum Behav 2005 Sep;21(5):773-789. [CrossRef]
- Roth SP, Tuch AN, Mekler ED, Bargas-Avila JA, Opwis K. Location matters, especially for non-salient features–An eye-tracking study on the effects of web object placement on different types of websites. Int J Hum Comput Stud 2013 Mar;71(3):228-235. [CrossRef]
- Gibson JJ. The Ecological Approach to Visual Perception. East Sussex: Psychology Press; 1986.
- Norman D. The Design of Everyday Things. New York, NY: Basic Books; 2002.
- Norman DA. Affordance, conventions, and design. interactions 1999 May;6(3):38-43. [CrossRef]
- Seet B, Goh T. Exploring the affordance and acceptance of an e‐reader device as a collaborative learning system. The Electronic Library 2012 Aug 03;30(4):516-542. [CrossRef]
- Withagen R, de Poel HJ, Araújo D, Pepping G. Affordances can invite behavior: Reconsidering the relationship between affordances and agency. New Ideas Psychol 2012 Aug;30(2):250-258. [CrossRef]
- Moshagen M, Musch J, Göritz AS. A blessing, not a curse: experimental evidence for beneficial effects of visual aesthetics on performance. Ergonomics 2009 Oct;52(10):1311-1320. [CrossRef] [Medline]
- Moshagen M, Thielsch MT. Facets of visual aesthetics. Int J Hum Comput Stud 2010 Oct;68(10):689-709. [CrossRef]
- Lavie T, Tractinsky N. Assessing dimensions of perceived visual aesthetics of web sites. Int J Hum Comput Stud 2004 Mar;60(3):269-298. [CrossRef]
- Venkatesh V, Morris MG, Davis GB, Davis FD. User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly 2003;27(3):425. [CrossRef]
- Lazard AJ, Watkins I, Mackert MS, Xie B, Stephens KK, Shalev H. Design simplicity influences patient portal use: the role of aesthetic evaluations for technology acceptance. J Am Med Inform Assoc 2016 Apr;23(e1):e157-e161 [FREE Full text] [CrossRef] [Medline]
- Nielsen J. Usability Engineering. Burlington, MA: Morgan Kaufmann; 1994.
- Hingle M, Patrick H, Sacher PM, Sweet CC. The Intersection of Behavioral Science and Digital Health: The Case for Academic-Industry Partnerships. Health Educ Behav 2019 Feb;46(1):5-9. [CrossRef] [Medline]
- Buhrmester M, Kwang T, Gosling SD. Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? Perspect Psychol Sci 2011 Jan;6(1):3-5. [CrossRef] [Medline]
- Sheehan KB. Crowdsourcing research: Data collection with Amazon’s Mechanical Turk. Commun Monogr 2017 Jul 04;85(1):140-156. [CrossRef]
- Kees J, Berry C, Burton S, Sheehan K. An Analysis of Data Quality: Professional Panels, Student Subject Pools, and Amazon's Mechanical Turk. J Advert 2017 Jan 23;46(1):141-155. [CrossRef]
- What is Chart? Learn how this assessment tool adds value to UNC research. UNC Lineberger. URL: https://chart.unc.edu/ [accessed 2017-03-01]
- Brennen JS, Lazard AJ, Adams ET. Multimodal mental models: Understanding users' design expectations for mHealth apps. Health Informatics J 2020 Sep;26(3):1493-1506 [FREE Full text] [CrossRef] [Medline]
- Horrell L, Knafl GJ, Brady T, Lazard A, Linnan L, Kneipp S. Communication Cues and Engagement Behavior: Identifying Advertisement Strategies to Attract Middle-Aged Adults to a Study of the Chronic Disease Self-Management Program. Prev Chronic Dis 2020 Jun 25;17:E48 [FREE Full text] [CrossRef] [Medline]
- Mays D, Villanti A, Niaura RS, Lindblom EN, Strasser AA. The Effects of Varying Electronic Cigarette Warning Label Design Features On Attention, Recall, and Product Perceptions Among Young Adults. Health Commun 2019 Mar;34(3):317-324 [FREE Full text] [CrossRef] [Medline]
- Michie S, Ashford S, Sniehotta FF, Dombrowski SU, Bishop A, French DP. A refined taxonomy of behaviour change techniques to help people change their physical activity and healthy eating behaviours: the CALO-RE taxonomy. Psychol Health 2011 Nov;26(11):1479-1498. [CrossRef] [Medline]
- Kunst A. Major reasons for adoption of e-health applications and devices by U.S. adults as of 2017. Statista. 2019. URL: https://www.statista.com/statistics/328661/reasons-for-patient-use-of-mhealth-apps-and-services/ [accessed 2021-10-01]
- Lazard AJ, King AJ. Objective Design to Subjective Evaluations: Connecting Visual Complexity to Aesthetic and Usability Assessments of eHealth. Int J Hum-Comput Int 2019 Apr 24;36(1):95-104. [CrossRef]
- Kim K, Lee C, Hornik RC. Exploring the Effect of Health App Use on Fruit and Vegetable Consumption. J Health Commun 2020 Apr 02;25(4):283-290. [CrossRef] [Medline]
- Pieters R, Wedel M, Batra R. The Stopping Power of Advertising: Measures and Effects of Visual Complexity. Journal of Marketing 2010 Sep 01;74(5):48-60. [CrossRef]
- Raita E, Oulasvirta A. Too good to be bad: Favorable product expectations boost subjective usability ratings. Interact Comput 2011 Jul;23(4):363-371. [CrossRef]
- Paasche-Orlow MK, Parker RM, Gazmararian JA, Nielsen-Bohlman LT, Rudd RR. The prevalence of limited health literacy. J Gen Intern Med 2005 Feb;20(2):175-184 [FREE Full text] [CrossRef] [Medline]
Abbreviations
ANOVA: analysis of variance
MANOVA: multivariate analysis of variance
mHealth: mobile health
MTurk: Mechanical Turk
RM: repeated measures
TAM: technology acceptance model
Edited by L Buis; submitted 21.04.21; peer-reviewed by S Bhattacharjya, E Brainin; comments to author 30.06.21; revised version received 23.08.21; accepted 23.09.21; published 04.11.21.
Copyright
©Allison J Lazard, J Scott Babwah Brennen, Stephanie P Belina. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 04.11.2021.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.