This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on http://mhealth.jmir.org/, as well as this copyright and license information must be included.
Ubiquitous health is defined as a dynamic network of interconnected systems that offers health services independent of time and location to a data subject (DS). The network operates in an open and unsecure information space. It is created and managed by the DS, who sets rules that regulate the way personal health information is collected and used. Unlike in health care, in ubiquitous health it is impossible to assume the existence of a priori trust between the DS and service providers or to produce privacy using static security services. In ubiquitous health, systems’ features, business goals, and the regulations they follow often remain unknown. Furthermore, health care-specific regulations do not rule the ways health data is processed and shared. To be successful, ubiquitous health requires a novel privacy architecture.
The goal of this study was to develop a privacy management architecture that helps the DS to create and dynamically manage the network and to maintain information privacy. The architecture should enable the DS to dynamically define service- and system-specific rules that regulate the way subject data is processed. The architecture should provide the DS with reliable trust information about systems and assist in the formulation of privacy policies. Furthermore, the architecture should give feedback on how systems follow the DS’s policies and offer protection against privacy and trust threats existing in ubiquitous environments.
A sequential method that combines methodologies used in system theory, systems engineering, requirement analysis, and system design was used in the study. In the first phase, principles, trust and privacy models, and viewpoints were selected. Thereafter, functional requirements and services were developed on the basis of a careful analysis of existing research published in journals and conference proceedings. Based on principles, models, and requirements, architectural components and their interconnections were developed using system analysis.
The architecture mimics the way humans use trust information in decision making and enables the DS to design system-specific privacy policies using computational trust information that is based on systems’ measured features. The trust attributes developed describe the level at which systems support awareness and transparency, and how they follow general and domain-specific regulations and laws. The monitoring component of the architecture offers dynamic feedback concerning how the system enforces the DS’s policies.
The privacy management architecture developed in this study enables the DS to dynamically manage information privacy in ubiquitous health and to define individual policies for all systems, considering their trust value and corresponding attributes. The DS can also set policies for secondary use and reuse of health information. The architecture offers protection against privacy threats existing in ubiquitous environments. Although the architecture is targeted at ubiquitous health, it can easily be modified for other ubiquitous applications.
Both ubiquitous health and pervasive health are terms that describe a new business model (these terms have been used synonymously in many papers). Similar to health care, its goal is to make health services available to everyone, but many of its features separate it from health care [
Privacy is a complex, personal, and situation-dependent concept that can be interpreted in various ways [
Trust can be understood as the subjectively perceived probability by a DS that a system will perform an action before the DS can monitor it [
Privacy and trust are interrelated concepts, that is, “data disclosure means loss of privacy, but an increased level of trustworthiness reduces the need for privacy” [
In health care, internationally accepted principles, good practice rules, and domain-specific legislation define patient’s rights and service providers’ responsibilities. Health care-specific legislation also states how patient’s privacy must be protected [
Furthermore, systems and stakeholders have the responsibility to publish information needed for trust verification and support openness and transparency of data processing.
Ubiquitous health features and its ubiquitous environment suggest that trustworthiness and privacy are real concerns [
Here we hypothesize that, in order to be successful, ubiquitous health requires trustworthiness and privacy management performed by the DS. Without these two features, the DS will not dare to use its services. Furthermore, the architecture supporting ubiquitous health should fulfill the THEWS principles presented above. As traditional security and trust mechanisms used in today’s health care information systems may not provide adequate security and privacy in ubiquitous health [
The development of ubiquitous systems and the growing use of ubiquitous computing have raised the following question: What kind of trust and privacy models, services, and architectures offer an acceptable level of privacy and trustworthiness?
Trust models such as belief, organizational trust, dispositional trust, recommended trust, and direct trust have been proposed for pervasive systems [
Trust is typically based on the trustee’s characteristics, such as ability, integrity, and benevolence, and should not be a blind guess [
In contrast to belief and recommended trust, computational trust built on abstractions of human concept of trust has been proposed by researchers [
The aforementioned trust models have noticeable weaknesses in ubiquitous environments. Recommendations are unreliable because they are based on unsecure opinions. It is difficult to force everyone to accept certificates or a common TA, and many virtual organizations have no connection to one. A common ontology, which is required for successful negotiation and calculation of trust attributes, seldom exists. A trust manifesto assumes that the DS blindly trusts that service providers will deliver on their promises. Furthermore, the reliability of reputations is difficult to measure, and credentials are difficult to evaluate [
Many privacy models developed by researchers are useful in ubiquitous environment. Lederer et al proposed a model of situational faces [
The privacy management model proposed by Lederer et al combined Adams’s perceptual model and Lessig’s societal privacy models [
Numerous trust and privacy technologies have been proposed for ubiquitous systems. In Gray’s solution, the trust is based on the belief of a person that systems have implemented proper de-identification structures and safeguards. It also includes a compliance checker and a trust value calculator [
Computational trust is either based on direct measurements, observed (monitored) features, or past experiences [
Privacy is often protected by using privacy enhancement solutions such as data filtering and minimization, anonymization, and adding noise to disclosed information (eg, data hashing, cloaking, blurring, and identity hiding) [
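These privacy-enhancement techniques can be illustrated with a short, hedged sketch; the helper names (`minimize`, `blur_age`, `pseudonym`) are invented for illustration and are not part of the cited solutions:

```python
# Illustrative privacy-enhancement transforms of the kind listed above:
# data minimization, blurring (adding imprecision), and identity hiding.
import hashlib

def minimize(record, needed_fields):
    """Disclose only the fields a service actually needs."""
    return {k: v for k, v in record.items() if k in needed_fields}

def blur_age(age, bucket=10):
    """Report an age range instead of the exact value."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

def pseudonym(identity, salt="demo-salt"):
    """Replace a direct identifier with a salted hash (identity hiding)."""
    return hashlib.sha256((salt + identity).encode()).hexdigest()[:12]

record = {"name": "Alice", "age": 34, "diagnosis": "asthma"}
print(minimize(record, {"diagnosis"}))  # {'diagnosis': 'asthma'}
print(blur_age(34))                     # 30-39
```

As the table below notes, such techniques are only value-added services in health contexts, where incomplete PHI can lead to wrong decisions.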
Other solutions also exist for privacy protection. Kapadia et al created a virtual personal space (a room) to control information flow through its “walls” [
In pervasive systems, privacy requirements are typically expressed as policies that are context-dependent. Policies define which actions are permitted or prohibited and which actions must be performed [
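A minimal sketch of such context-dependent policies, assuming a simple rule representation; the `Rule` structure and the default-deny behavior are illustrative choices, not specifications from the cited work:

```python
# Hypothetical context-aware privacy rules: each rule names an action,
# a data class, a context, and an effect; unmatched requests are denied.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str      # e.g. "disclose", "store"
    data_class: str  # e.g. "medication", "location"
    context: str     # e.g. "emergency", or "*" for any context
    effect: str      # "permit" or "prohibit"

def evaluate(rules, action, data_class, context):
    """Return the effect of the first matching rule; deny by default."""
    for r in rules:
        if (r.action == action and r.data_class == data_class
                and r.context in (context, "*")):
            return r.effect
    return "prohibit"  # closed-world default: anything unmatched is denied

rules = [
    Rule("disclose", "medication", "emergency", "permit"),
    Rule("disclose", "location", "*", "prohibit"),
]
print(evaluate(rules, "disclose", "medication", "emergency"))  # permit
print(evaluate(rules, "disclose", "medication", "routine"))    # prohibit
```

The same rule shape could carry the DS’s purpose and retention constraints; a real policy language would additionally need a shared ontology, as discussed above.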
The increasing use of the Internet, peer-to-peer systems, multi-agent systems, and social networks has been the main driver of the privacy and trust models and solutions discussed. Unfortunately, most of them focus on one feature (eg, encryption or context). Ubiquitous health requires a much wider approach. Like Bryce et al, we also state that pervasive systems require an architecture that combines dynamic privacy policies, a priori trust validation, privacy management, and a posteriori measurement (ie, feedback) of what systems are doing [
In this paper, we propose a novel privacy management architecture for ubiquitous health. As ubiquitous health is a new concept without widely accepted principles and privacy and trust models, it is necessary to select the principles and models on which the architecture is based. The THEWS principles, as previously presented, have been selected by the authors as the basis of the architecture; that is, the architecture should be compliant with them. The solution should take into account the features of ubiquitous health and enable the DS to dynamically manage privacy by defining system-specific privacy policies. The architecture should mimic the way humans use trust information in the creation of personal policies. The architecture should also offer protection against many known privacy threats existing in ubiquitous environments.
From system theory and systems engineering perspectives, ubiquitous health is a metasystem that is characterized by its structure, its function/behavior, and how its interrelated components are composed in an ordered way. Instead of creating artificial scenarios or making quantitative privacy risk/threat analysis, a more system-oriented sequential method that combines methodologies used in systems engineering, requirement analysis, and system design is used (
The method used in this study includes the following steps: definition of basic requirements; selection of values, privacy and trust models, and views; identification of concerns; definition of functional requirements; selection of services; developing privacy and trust formula; and designing the architecture. Finally, it is checked how the architecture meets purposes and requirements for which it has been intended.
Ethical values and codes, principles, and common rules form the background of health information processing. The selection of these features also has a strong impact on the architecture and its services. For some environments (eg, health care), widely accepted codes and rules already exist; however, this is not the case in ubiquitous health. Therefore, the first step is to select privacy and trust models and approaches that are in line with the principles and without noticeable weaknesses. This is achieved by carefully analyzing existing research published in journals, conference proceedings, and standards documents. The identification of concerns and the definition of functional requirements are done similarly. Finally, the architecture combines the selected services in such a way that the principles and requirements are fulfilled.
In this paper, privacy and trust needs are examined from the DS’s viewpoint. Other views are not discussed. To reduce the complexity, only components that are relevant for the privacy management needs of the DS are included in the architecture.
Method for the development of the THEWS architecture.
Ruotsalainen et al have noted that privacy rules in ubiquitous health are based on trust [
Although privacy is widely accepted as a human right (value), different privacy models exist in real life. Regulatory and self-regulatory models are widely used [
Suitability of widely used privacy protection and management approaches in the context of ubiquitous health is shown in
Trustworthy ubiquitous health requires that the trust model used enables the DS to work out the level of trustworthiness of systems. Characteristics and weaknesses of widely used trust models with regard to the features of ubiquitous health are shown in
Computational trust that is based on systems’ measurable or observed properties can offer reasonable information to the DS in designing personal privacy policies [
From the DS’s viewpoint, the architecture should mimic humans’ ways of designing policies, support choices more rational than intuition, and give feedback to the DS. Louviere’s stated choice method fulfills these requirements by including awareness, learning, evaluation and comparison, preference formulation, and choice and post-choice [
Suitability of common privacy protection and management approaches for ubiquitous health.
Approach | Suitability |
Privacy protection using security services (eg, authentication, authorization, and access control) | Security cannot offer a reasonable level of privacy in ubiquitous health. Access control alone is insufficient. The DS is not familiar with, and cannot control, the authorization rules used inside a system |
Privacy control by hiding the DS’s identity | Health care and health services require the knowledge of the DS’s identity |
Delegation approach | Delegation requires knowing to whom the DS delegates access rights. Systems typically do not publish this kind of information to the DS |
Privacy labels | Rules deployed in a label might be inadequate and in conflict with the DS’s policy, which may not be expressible in labels |
Privacy management using context- and content-aware policies | Supports dynamic policies, but requires computer-understandable policy language. Common ontology, ontology harmonization (matching, mapping, etc.), or reasoning is needed |
Metadata approach | Not all systems accept injected or active code |
Data filtering and adding noise to data | Health services require a large amount of PHI for correct and effective service, as incomplete PHI can lead to wrong decisions or prevent the use of services |
Characteristics and weaknesses of common trust models.
Model | Characteristics and weaknesses in ubiquitous health |
Dispositional trust and recommended trust | Characteristics: Based on belief, attitude, or others’ opinions (recommendations). Weakness: Recommendations are unreliable and based on unsecure opinions; it is difficult or even impossible to check the reliability of others’ recommendations |
Blind trust | Characteristics: Based on the belief or attitude that an organization has implemented sufficient safeguards. Weakness: Does not guarantee trustworthiness |
Predefined trust | Characteristics: Based on the assumption that an organization has implemented the required regulatory services. Weakness: Static model, unsuitable for dynamic environments |
Trust label | Characteristics: Based on organizational or personal labels. Weakness: Inappropriate granularity and insufficient consideration of dynamic contextual conditions |
Trust manifesto | Characteristics: Based on assurance of the service provider. Weakness: Based on belief or attitude; the DS must trust blindly |
Reputation | Characteristics: Based on subjective opinions of others. Weakness: The reliability of reputations is difficult to measure |
Computational trust | Characteristics: Based on the system’s measured or observed features. Weakness: A simple trust value or rank might offer insufficient information for the DS in designing personal policies |
Risk- and threat-based models | Characteristics: Based on risk or threat assessment. Weakness: Difficult or even impossible to measure personal privacy risks |
Trust management using credentials | Characteristics: Based on credentials issued by authorities; targeted at creating trust between organizations. Weakness: Credentials are static, difficult to evaluate, and require a network of trusted authorities; it is difficult to force everyone and virtual systems to accept credentials or a TA |
Typical stakeholders in ubiquitous health are the DS, health service providers, other organizations, and secondary users. Different stakeholders have different concerns [
Derived from the previously mentioned assumptions and selections and from the proposals made by other researchers, the following functional requirements were identified for the architecture. The architecture should offer tools for the DS to define the purposes of data collection, express computer-understandable rules regarding the sensitivity of data elements, design the protection needed, rule how long data is stored, and define which data is disclosed and for what purposes [
The architecture should support dynamic content-, context-, and purpose-aware privacy management. It should also offer the DS system-specific computational trust information with attributes that describe systems’ features, infrastructures, policies, and relations in advance. It should mimic humans’ way of designing policies, support choices more rational than intuition, and give feedback, and it must be compliant with Louviere’s stated choice method. It should support situations where the DS discloses PHI and where data collection or disclosure is made autonomously by a system. The architecture also enables the DS to be aware of data-processing events and to set policies that regulate the secondary use and reuse of PHI.
Services of the architecture should fulfill the above-mentioned requirements and take into account expected concerns. The trust and privacy services selected for the THEWS architecture are shown in
Trust and privacy services for the THEWS architecture.
Concern/Function | Service |
System’s trustworthiness | Trust calculation, context, identification, and trust interpreter services |
The DS’s information autonomy | Decision support and policy-binding services |
Awareness and transparency | Monitoring, trust calculation, and notification services |
The use of PHI inside the system | Monitoring and notification services |
Does the system use PHI according to the DS’s policies | Monitoring and notification services |
Choice and secondary use and post-release of PHI | Policy-binding service; metadata (eg, sticky policy or active code for apoptosis) |
Designing privacy policies and comparison and preference formulation | Decision support service |
Policy formulation and post-choice and new policy creation | Policy management, policy assistant, and ontology services |
System’s features and relations | Trust calculation service |
Feedback and alarm or conflict notice | Monitoring service |
Learning | Trust interpreter and policy assistance services |
The THEWS principles and functional requirements determine that the DS can use trust information in the formulation of privacy policies [
In this formula, TI refers to
To avoid the drawback of a single calculated trust value and to enable attribute-based creation of personal policies [
where E represents domain-specific environmental factors such as legal requirements and the system’s contextual features. T represents the type of the service provider’s organization (eg, public health care provider, private health service provider, Internet service provider). P (properties) consists of the system’s architectural and technological aspects, and PO is the system’s privacy policy. Predictability (Pre), transparency (Tran), and ability (Ab) are parameters that can be calculated from the system’s past history or by direct measurements. For
where DGD and DRB describe the level of the system’s regulatory compliance. The DGD is the degree to which data processing by the system complies with international privacy protection directives. The DRB is the degree to which data processing by the system complies with health care-specific laws and rules. SPO and RP are parameters related to openness. SPO informs whether the system has made its privacy policies openly available, and RP tells whether the system has published its relationships. DSP, ASP, ATV, and AUT are willingness parameters. DSP describes the degree to which the system follows its own privacy policies. ASP informs whether the system enables or rejects the DS’s injection of personal policies into PHI collected or processed by the system. ATV expresses whether the system accepts external monitoring of events related to the processing of PHI, and AUT tells whether the system enables external access to its audit trails. PBL and CD are trustworthiness parameters. CD informs whether the system has been certified, and PBL informs whether the system is on a blacklist. DSA is an optional attribute that can be defined by the DS. For DGD and DRB, a linear scale (0...1) is used, whereas all other attributes have only binary values. In case of no or insufficient data, the attribute value is zero.
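A hedged sketch of how the attributes above could be aggregated into a single trust value; the equal-weight average and the treatment of PBL (blacklisting) as disqualifying are assumptions for illustration, not the paper’s actual formula:

```python
# Illustrative aggregation of the trust attributes described above.
# DGD and DRB are on a linear 0..1 scale; the rest are binary (0 or 1).
# Missing or insufficient data yields 0, as in the text.
ATTRIBUTES = ["DGD", "DRB", "SPO", "RP", "DSP", "ASP", "ATV", "AUT", "CD", "PBL"]

def trust_value(attrs: dict) -> float:
    """Average the attributes; a blacklisted system (PBL set) scores 0.
    Equal weighting and the PBL rule are assumptions for this sketch."""
    if attrs.get("PBL", 0):
        return 0.0
    scored = [float(attrs.get(a, 0)) for a in ATTRIBUTES if a != "PBL"]
    return sum(scored) / len(scored)

system = {"DGD": 0.8, "DRB": 1.0, "SPO": 1, "RP": 1, "DSP": 1,
          "ASP": 0, "ATV": 1, "AUT": 1, "CD": 1, "PBL": 0}
print(round(trust_value(system), 2))  # 0.87
```

A TC would additionally expose the individual attributes to the DS, since (as noted earlier) a single value offers insufficient information for designing personal policies.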
Using proposed
A layered framework model that describes trust and privacy services of the THEWS architecture is shown in
As it is difficult or even impossible for the DS to evaluate the trustworthiness of systems, an independent agent, the trust calculator (TC), is used for this task. The role of the TC is not to make trust decisions. Similar to the HL7 Privacy, Access and Security Services architecture, the TC should be understood as an information point that sends trust information to the DS [
The TC calculates
The context service collects systems’ contextual data, interprets it, and makes it available to TC and DS, using ontologies. The DS deploys policy management, policy-binding, policy assistance, and decision support services in policy formulation.
The monitoring service offers feedback, reduces risk, and recognizes policy conflicts. It records and assesses how a system processes PHI in real life, and it alarms the TC and the DS of possible malicious or illegal use of PHI. The notification service works as a communication and transparency tool between the DS, systems, and services. Using this service, the DS expresses personal policies to systems, which in turn publish their policies and relations.
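The monitoring idea can be sketched as follows; the event fields and the notify callback are illustrative assumptions, not the service’s actual interface:

```python
# Hypothetical monitoring loop: compare observed processing events against
# the DS's policy and flag conflicts (default deny for unlisted actions).
def monitor(events, policy, notify):
    """policy maps (action, data_class) -> 'permit'/'prohibit'."""
    conflicts = []
    for ev in events:
        allowed = policy.get((ev["action"], ev["data_class"]), "prohibit")
        if allowed != "permit":
            conflicts.append(ev)
            notify(f"policy conflict: {ev['system']} performed "
                   f"{ev['action']} on {ev['data_class']}")
    return conflicts

policy = {("disclose", "medication"): "permit"}
events = [
    {"system": "labX", "action": "disclose", "data_class": "medication"},
    {"system": "labX", "action": "disclose", "data_class": "genome"},
]
alerts = monitor(events, policy, print)
print(len(alerts))  # 1
```

In the architecture, such conflict notices would flow to both the TC (lowering the system’s trust attributes) and the DS (who can change policies dynamically).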
An architectural model describing the interconnection of the THEWS services is shown in
The THEWS architecture not only fulfills the THEWS requirements but also offers protection against many of the known privacy threats existing in pervasive systems, as shown in
The framework model for the THEWS architecture.
The interconnection of privacy and trust services in the THEWS architecture.
The THEWS architecture approach for the challenges existing in pervasive systems.
Challenges and threats | THEWS approach |
Pervasive systems are dynamic in nature (eg, ad hoc networks), where static rules and privacy services will not work | Dynamic rules and services are used; dynamic creation and management of the DS’s privacy service portfolio |
No predefined trust | Dynamic trust calculation based on systems’ measured properties |
The need for PII is dynamic and purposes are unpredictable | Dynamic context-aware policies support ad hoc purposes |
Organizations do not always follow their own policies, and laws will be ineffective without sufficient control and penalties | The way systems process PHI is dynamically monitored, and regulatory compliance is checked |
Users want to control how systems use PII | The DS defines system-specific policies that rule the use, storing, and sharing of PHI |
It is difficult to know the actual privacy status of an enterprise (ie, what data and under what policy) | Status and policies are inspected and reported dynamically to the DS |
It is difficult to know how data has been used inside the enterprise | The monitoring service can check internal use |
Relationships between systems can be unknown | Systems must publish their relations |
Not all service providers use certificates | Trustworthiness is not based on certificates |
Selection of a service provider needs trust and/or reputation | The TC offers a calculated trust value and trust attributes to the DS; reputation is not used |
Determining systems’ trustworthiness is challenging | The TC calculates trust using direct measurements; the monitoring service gives feedback to the TC |
Which action must the DS take in the case of a privacy breach? | The TC and/or monitoring service inform the DS of privacy breaches; the DS can change policies dynamically |
How to guarantee that data is processed lawfully and according to the DS’s policies | Trust attributes offer the required information; the monitoring service notifies of misuse |
Lack of awareness | Systems must publish their rules and relationships; awareness by monitoring service |
How to know what actions are permitted or forbidden in a context and what actions must be performed? | The DS defines personal context-aware rules |
How can we trust systems’ privacy notices (or privacy manifesto)? | Privacy notice/manifesto is not used |
Threats caused by surveillance, identity theft, or malicious attacks | Communication platform and systems must implement reasonable safeguards |
Code of conduct, legal framework, and accreditation of centers will not guarantee trustworthiness | Those models are not used |
Consent does not guarantee adequate protection | Consent is only one possible item in the policy |
Anonymization as such will not guarantee adequate protection | Anonymization is only a value-added service |
Secondary use of PII must be monitored | Monitoring service |
Citizens need audit information | The monitoring service assesses the audit log and informs the DS of findings; the TC can maintain a list of untrusted or malicious systems |
Data requestors can have subjective views of trust | The TC defines the trust ontology used |
How can we manage trust for systems with incomplete credentials? | Credentials are not used |
In this study, a novel privacy architecture is developed for ubiquitous health. It enables the DS to ensure and manage information privacy by choosing personal context-aware privacy policies for each system with the help of computational trust information that includes a trust value and system-specific trust attributes. The architecture combines many trust and privacy services proposed by researchers for pervasive systems, such as trust calculation and interpretation, policy management, policy assistance, policy binding and design, context services, and monitoring. The architecture goes far beyond the security services with traditional access control used in health care, and it also illustrates how the THEWS principles can be realized. Furthermore, the architecture offers protection against many privacy threats caused by ubiquitous computing and an unsecure environment. Instead of continuously validating systems’ trustworthiness, the architecture monitors the functioning of the systems, detects and informs the DS of policy conflicts and data misuse, and thereby enables the DS to change policies dynamically.
Contrary to a widely used trust manifesto that is based on incomplete, insufficient, or inconclusive information [
For all pervasive systems, some of the unsolved privacy challenges are as follows: (1) How to prevent data from being collected and used in ways the DS cannot recognize? (2) How to prevent systems from breaching their promises? and (3) How to prevent the misuse of PHI after it has been released for secondary use?
Regulation and monitoring can give a partial solution to the first two challenges. Policy agents, self-destroying files, programmed death (apoptosis), destruction of cryptographic keys, and mutation engines have been proposed by researchers to give protection in the case of post-release [
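A minimal sketch of the sticky-policy and apoptosis idea, assuming an invented `StickyRecord` wrapper (not an implementation from the cited work): the DS’s policy travels with the data, and the content is destroyed once its retention time expires.

```python
# Hypothetical sticky-policy envelope: the permitted purposes and a
# retention limit are bound to the data itself; after expiry the payload
# is destroyed (apoptosis) and can no longer be read.
import time

class StickyRecord:
    def __init__(self, payload, allowed_purposes, ttl_seconds):
        self._payload = payload
        self.allowed_purposes = set(allowed_purposes)
        self._expires = time.time() + ttl_seconds

    def read(self, purpose):
        if time.time() >= self._expires:
            self._payload = None  # apoptosis: content destroyed
            raise PermissionError("record has expired")
        if purpose not in self.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' not permitted")
        return self._payload

rec = StickyRecord("blood pressure 120/80", {"treatment"}, ttl_seconds=3600)
print(rec.read("treatment"))  # permitted purpose -> payload returned
```

In practice such protection holds only if the receiving system honors the envelope, which is exactly why the table above notes that not all systems accept injected or active code.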
In addition, some important challenges remain. The TC should understand both international and national regulations and the rules used by systems. Translation of narrative rules into machine-readable policies is an ongoing challenge [
DS: data subject
PHI: personal health information
PII: personal identifiable information
TA: trust authority
TC: trust calculator
THEWS: Trusted eHealth and eWelfare Space
The results presented in this paper are based on the findings of the Trusted eHealth and eWelfare Space (THEWS) project. The project was supported by the Finnish Academy during 2009-2012 via the MOTIVE research program.
Conflicts of Interest: None declared.