Review
Abstract
Background: Over the past decade, the wide availability and small size of different types of sensors, together with the decrease in price, have allowed the acquisition of a substantial amount of data about a person’s life in real time. These sensors can be incorporated into personal electronic devices available at a reasonable cost, such as smartphones and small wearable devices. They allow the acquisition of images, audio, location, physical activity, and physiological signals, among other data. With these data, usually denoted as lifelog data, we can then analyze and understand personal experiences and behaviors. This process is called lifelogging.
Objective: The objective of this paper was to present a narrative review of the existing literature about lifelogging over the past decade. To achieve this goal, we analyzed lifelogging applications used to retrieve relevant information from daily digital data, some of them with the purpose of monitoring and assisting people with memory issues and others designed for memory augmentation. We aimed for this review to be used by researchers to obtain a broad idea of the type of data used, methodologies, and applications available in this research field.
Methods: We followed a narrative review methodology to conduct a comprehensive search for relevant publications in the Google Scholar and Scopus databases using lifelog topic–related keywords. A total of 411 publications were retrieved and screened; of these, 114 (27.7%) were fully reviewed. In addition, 30 publications were manually included based on our bibliographical knowledge of this research field.
Results: From the 144 reviewed publications, a total of 113 (78.5%) were selected and included in this narrative review based on content analysis. The findings suggest that lifelogs are prone to become powerful tools to retrieve memories or increase knowledge about an individual’s experiences or behaviors. Several computational tools are already available for a considerable range of applications. These tools use multimodal data of different natures, with visual lifelogs being one of the most used and richest sources of information. Different approaches and algorithms to process these data are currently in use, as this review details. Moreover, we identified several open questions and possible lines of investigation in lifelogging.
Conclusions: The use of personal lifelogs can be beneficial to improve the quality of our life, as they can serve as tools for memory augmentation or for providing support to people with memory issues. Through the acquisition and analysis of lifelog data, lifelogging systems can create digital memories that can be potentially used as surrogate memory. Through this narrative review, we understand that contextual information can be extracted from lifelogs, which provides an understanding of the daily life of a person based on events, experiences, and behaviors.
doi:10.2196/30517
Introduction
Background
With the worldwide spread of technology such as smartphones and wearable devices, the collection of data about a person’s activity is changing dramatically, allowing a huge amount of different types of data to be acquired every day in the form of images, locations, and physiological signals. With the rapid development of Internet of Things solutions, these personal data can be applied in a wide range of applications. One such application is lifelogging.
Lifelogging is defined as a form of pervasive computing consisting of a unified digital record of the totality of the experiences of an individual, usually called a lifelogger, captured multimodally through digital sensors and stored permanently as a personal multimedia archive. Put simply, lifelogging is the process of tracking and recording the personal data created through our activities and behavior [ , ].
The idea of storing knowledge and information to provide an auxiliary memory to support people was envisioned by Vannevar Bush [ ]. At the end of the Second World War in 1945, Bush presented the Memex concept to the world. The Memex was a device in which an individual would store knowledge and information, such as books, records, and communications, organized by association, similar to the brain, and exposed as a memory aid. Bush also envisioned 2 other devices that have since come to life: a minicamera worn on the forehead that would allow users to take photographs from their point of view and a device that would record voice as text. Remarkably, the use of these 3 devices together could be considered the starting point of lifelogging.
With the evolution of digital technologies over the years, solutions to record, store, and organize a lifetime of information and knowledge have become possible, as Bush envisioned. His vision remains an inspiration for many information retrieval and lifelogging systems. However, the amount of information available to be stored and processed today is difficult to analyze and retrieve. To overcome this problem, a wide range of research fields can be explored, such as image and information retrieval, knowledge extraction, image understanding, sentiment analysis, and data mining, to name a few, which provide solutions to organize, process, and retrieve personal data. These personal data are also known as lifelogs and can be used as surrogate memory within a lifelogging system capable of organizing and managing them [ ]. Therefore, the extraction of relevant information from personal lifelogs can be used to improve the quality of everyday life for people with memory problems or can even serve as a digital diary.
The practice of lifelogging has become an important source of contextual data. Projects such as Digital Eye Glass [ - ], MyLifeBits [ - ], and SenseCam [ , ] were the most relevant in the past. The amount of lifelog data (volume), the different types of data obtained from several sources (variety), and the agility required to process the lifelogs and generate the necessary information (velocity) make lifelogging an interesting and challenging big data application [ , ]. For example, Gurrin et al [ ] analyzed a large visual lifelog captured over a period of more than a year. It is therefore not surprising that these complex and interdisciplinary challenges are drawing increasing attention to lifelogging from the research community.
Objectives
Memory is often compared with a computer, as both constitute information processing systems. Both have basic functions such as encoding (input and processing of information), storage (retention of information), and retrieval (obtaining information from storage) [ ]. The loss of information from memory, also known as forgetting, occurs when there is a failure in encoding owing to interference or other memory errors. Encoding failures can be circumvented through lifelogging. Lifelogs, particularly visual lifelogs, provide context cues that can help recall and recognition [ ]. As a result, lifelogging has the potential to support memory augmentation, which can aid memory retrieval not only for people with dementia but also for healthy people.
The world’s population above the age of 60 years has been increasing since 1950 and is estimated to reach approximately 2.1 billion by 2050 [ ]. Consequently, the World Health Organization recognized dementia as a public health priority and proposed a global action plan with several action areas, including the development, implementation, and improvement of surveillance and monitoring systems, to improve the functional trajectories of people with dementia, their carers, and their families [ ]. Considering the above, patients with dementia could benefit from a lifelogging application that works as a digital everyday life journal or as a personal historical record [ ].
Lifelogging technologies give us the opportunity to create human digital memories, allowing us to represent and understand every moment of our lives and store this information for further use. However, each memory has specific cues, which can be captured from multiple sources in our surroundings, such as visual cues, verbal and environmental sounds, locations, and actions. This provides a large amount of contextual information that requires an interactive software tool to retrieve and explore the memory space. In this narrative review, we discuss the several types of personal lifelogs and the lifelogging applications used to retrieve them.
Methods
Search Strategy
This narrative review [ , ] explored a broad perspective of lifelogging approaches and technologies with the aim of synthesizing and understanding the literature on this research topic. The Google Scholar and Scopus databases were used to conduct an iterative search based on a combination of search terms and appropriate Boolean operators to identify relevant publications.
The following search terms were used: (lifelog OR lifelogging) AND (visual OR audio OR location OR physical activity OR physiological signal OR dementia). The search was restricted to publications from 2008 to 2020. However, to provide a historical view of the research topic, relevant publications before 2008 were manually identified and included. This additional inclusion of potential manuscripts of interest was based on our knowledge of this research topic and on the authors and references of the publications already included. Only publications in English were considered.
Inclusion and Exclusion Criteria
A total of 411 search results were screened based on the relevance of their title and abstract. Of these 411 publications, 114 (27.7%) were selected for full-text analysis. Of these 114 publications, 31 (27.2%) were excluded based on their content; after including several other publications through citation searching, 113 publications were included in our narrative review.
A flow diagram summarizes the search strategy that led to the included citations, following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines [ ]. Initially, searching the keywords lifelog and lifelogging returned a total of 14,614 articles. As this number was too high, we combined these keywords with the terms visual, audio, location, physical activity, physiological signals, and dementia. After excluding several duplicate articles, we selected 2.81% (411/14,614) of the publications. To further restrict the selection, articles were excluded based on the relevance of their title and abstract, their number of citations, and their approaches or methods. As a result, 27.7% (114/411) of the publications were selected and fully reviewed, and 27.2% (31/114) of these were excluded based on their content. Finally, based on our knowledge of this research topic and on the publication records of the authors of the selected papers, we included 30 more articles, concluding the collection process with 113 publications.
Results
Types of Lifelog Data
Overview
Recent technological advances have introduced new types of sensors and devices that allow the compilation of vast archives of personal data. Our review of the literature suggests that the most prominent data explored in lifelogging research are images, videos, locations, physical activities, and physiological signals. In visual lifelogs, data are captured by cameras in the form of images or videos. Although audio is not widely used, the voice of the user or sounds in the environment can be useful data that can be integrated into lifelogging systems. Locations can be understood in 2 different ways: GPS locations (longitude and latitude) or physical locations (University of Aveiro, home, work, etc). Currently, devices such as smartwatches, which are wearable devices that incorporate sensors such as accelerometers, gyroscopes, force sensors, and pressure sensors, are frequently used by many people. They enable the extraction of information to monitor physical activities. These types of wearable devices also incorporate other sensors capable of recording physiological signals such as heart rate and body temperature.
The table below summarizes the types of data used in the selected studies on lifelogging; the several approaches are described in the following subsections. As the table shows, visual data are the most used, owing to their richness and to advances in image processing algorithms that allow the extraction of relevant information from images or video. However, several studies have also reported on the use of other types of data and on multimodal solutions.
Study | Visual | Audio | Location | Physical activity | Physiological signals
Piasek et al [ ] | ✓ (SenseCam) | —a | — | — | —
Hodges et al [ ] | ✓ (SenseCam) | — | — | — | —
Doherty et al [ , ] | ✓ (SenseCam) | — | — | — | —
Gurrin et al [ ] | ✓ (smartphone) | — | — | — | —
Pauly-Takacs et al [ ] | ✓ (SenseCam) | — | — | — | —
Wang et al [ , ] | ✓ (SenseCam) | — | — | — | —
Song et al [ ] | ✓ (Google Glass) | — | — | — | —
Li et al [ ] | ✓ (SenseCam) | — | — | — | —
Bolanos et al [ ] | ✓ (Narrative Clip) | — | — | — | —
Talavera et al [ ] | ✓ (Narrative Clip and SenseCam) | — | — | — | —
Dimiccoli et al [ ] | ✓ (data sets) | — | — | — | —
Gupta and Gurrin [ ] | ✓ (OMG Autographer) | — | — | — | —
Fan et al [ ] | ✓ (Narrative Clip) | — | — | — | —
Garcia del Molino et al [ ] | ✓ (data sets) | — | — | — | —
Furnari et al [ ] | ✓ (Looxcie LX2) | — | ✓ (semantic) | — | —
Oliveira-Barra et al [ ] | ✓ (data sets) | — | — | — | —
Ellis and Lee [ ] | — | ✓ | — | — | —
Shaikh et al [ ] | — | ✓ | — | — | —
Shah et al [ ] | — | ✓ | ✓ (GPS) | — | —
Yamano and Itou [ ] | — | ✓ | ✓ (GPS) | — | —
Ziaei et al [ ] | — | ✓ | — | — | —
Li et al [ ] | — | — | ✓ (GPS) | — | —
Tanaka et al [ ] | — | — | ✓ (GPS) | — | —
Aizawa et al [ ], Hori et al [ ], and Datchakorn et al [ ] | ✓ | ✓ | ✓ (GPS and semantic) | ✓ (multiple sensors) | —
Doherty et al [ ] | ✓ (SenseCam) | — | — | ✓ (smartphone) | —
Hurvitz et al [ ] | — | — | ✓ (GPS and semantic) | ✓ (multiple sensors) | —
Yang et al [ , ] | — | — | — | ✓ (multiple sensors) | —
Dobbins et al [ ] | — | — | — | ✓ (data sets) | ✓ (data sets)
Ni et al [ ] | — | — | ✓ (GPS) | ✓ (smartphone) | —
Kim et al [ ] | — | — | — | — | ✓ (smartwatch)
Choi et al [ ] | — | — | — | — | ✓ (multiple sensors)
Dobbins and Fairclough [ ] | — | — | — | — | ✓ (multiple sensors)
aThe study does not use this type of data.
Visual
We can observe from the table above that several studies on lifelogs have explored the use of visual data. Visual lifelogs are generally collected in the form of photographic or videographic records to trigger memories, and photographs are the preferred representation of autobiographical memories [ , ]. In recent years, wearable devices capable of continuously capturing images or videos from a personal perspective have been increasingly used; examples are SenseCam, OMG Autographer, Narrative Clip, Google Glass, and GoPro. In addition to these wearable devices, smartphones with high-quality cameras and other sensors are also an important tool for lifelogging. Gurrin et al [ ] explored the use of smartphones as an alternative to wearable devices such as SenseCam and described several advantages of using smartphones as behavior monitoring devices.
The ability of devices such as SenseCam to improve autobiographical memory was studied in a patient with amnesia [ ]. This case study indicated that short-term recall improved owing to the use of SenseCam. Furthermore, the use of SenseCam also enhanced the long-term memory of autobiographical events; in contrast, a written diary helped to recall events in the short term but not in the long term. The main challenge that this type of device, and consequently visual lifelogs, faces is the processing of such tremendous amounts of data [ ]. It is essential to develop techniques that are able to automatically label, segment, and present relevant images in a meaningful sequence.
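As an illustration of what such segmentation can look like, the following minimal sketch splits a chronologically ordered photo stream into events by thresholding the visual distance between consecutive images. This is not the method of any study cited here; the feature vectors (eg, color histograms or convolutional neural network embeddings) and the threshold value are assumptions made for the example.

```python
from math import sqrt

def cosine_distance(a, b):
    """1 minus the cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb) if na and nb else 1.0

def segment_events(features, threshold=0.35):
    """Split a chronologically ordered photo stream into events.

    `features` is a list of per-image feature vectors. A new event starts
    whenever the visual distance between consecutive images exceeds
    `threshold`. Returns (start_index, end_index) pairs, end exclusive.
    """
    if not features:
        return []
    events, start = [], 0
    for i in range(1, len(features)):
        if cosine_distance(features[i - 1], features[i]) > threshold:
            events.append((start, i))
            start = i
    events.append((start, len(features)))
    return events

# Toy stream: three near-identical "office" images, then two "street" images.
stream = [[0.9, 0.1, 0.0], [0.88, 0.12, 0.0], [0.91, 0.09, 0.0],
          [0.1, 0.2, 0.9], [0.12, 0.18, 0.88]]
print(segment_events(stream))  # [(0, 3), (3, 5)]
```

Real systems replace this greedy rule with clustering or learned boundaries, but the underlying idea of detecting abrupt changes in visual context is the same.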
Pauly-Takacs et al [ ] used the images captured by SenseCam during a walk to assist a boy aged 13 years with profound episodic memory difficulties in remembering those moments. The results of this experiment show the ability of the images to support the formation of personal semantic memories and memory rehabilitation. In addition to helping improve retrospective memory, SenseCam can also be applied to patients with dementia as a cognitive stimulation therapy; a case study was conducted with the aim of mentally stimulating the patient and encouraging factual and opinionated communication [ ].
Visual lifelogs are clearly valuable as memory reminders to reconstruct previous life experiences, but they can also serve other use cases, such as general lifestyle analysis. Doherty et al [ , ] proposed a method to automatically classify visual lifelogs into different lifestyle traits using images collected by SenseCam; the camera captures details of the individual’s everyday activities in an approach to build a memory of the past. Moreover, Doherty et al [ ] used SenseCam images to complement accelerometry measures to identify behavior type and context information across a range of activity episodes.
It is essential to develop techniques capable of summarizing the large number of images collected through visual lifelogging. Similarly, Wang and Smeaton [ ] proposed a technique for identifying everyday activities captured using SenseCam, with findings consistent with the previous literature [ , ]. In terms of daily human activities, a very wide range of semantic concepts can be identified in visual lifelogs, and for the same activity, different semantic concepts can be observed across individuals. Wang et al [ ] characterized the everyday activities and behaviors of individuals based on the detection of semantic concepts appearing in visual lifelogs, using events that had been automatically segmented with the technique introduced in the study by Lee et al [ ].
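The mapping from detected semantic concepts to an activity label can be illustrated with a simple overlap score. The concept vocabularies below are hypothetical; published systems learn these associations from annotated data rather than hand-coding them.

```python
# Map detected semantic concepts to everyday activity labels by overlap
# scoring. The concept vocabularies here are illustrative only.
ACTIVITY_CONCEPTS = {
    "eating": {"plate", "food", "table", "cutlery", "cup"},
    "working": {"screen", "keyboard", "office", "hands", "paper"},
    "commuting": {"road", "car", "bus", "window", "street"},
}

def classify_event(detected_concepts):
    """Return the activity whose concept set best overlaps the detections.

    `detected_concepts` maps concept name -> detector confidence (0..1).
    The score of an activity is the sum of confidences of its concepts.
    """
    scores = {
        activity: sum(conf for c, conf in detected_concepts.items() if c in concepts)
        for activity, concepts in ACTIVITY_CONCEPTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

event = {"screen": 0.91, "keyboard": 0.83, "cup": 0.40, "hands": 0.77}
print(classify_event(event))  # working
```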
In another study conducted using SenseCam, a day of a user was recorded by taking a photo every 30 seconds [ ]. Following the lifelogging process, the user reviewed the collected data and classified the day into 12 events to create a ground truth. This method has the potential to retrieve autobiographical events, enabled by the creation of visual lifelogs. Therefore, the use of a wearable camera along with the methods mentioned in this paper constitutes a promising approach to help people retrieve their memories.
In the study by Song et al [ ], several egocentric videos were recorded using Google Glass, capturing the diversity and complexity of daily human activities from a first-person perspective. These videos were collected from 10 different individuals and contained 13 categories of activities relevant to lifelogging applications. Song et al [ ] performed several experiments through which they accurately recognized these activities by adopting the dense trajectory approach.
] proposed event segmentation of visual lifelogs based on 2 different approaches for visual concept extraction and image classification, such as objects and activities. The visual lifelogs were collected using a wearable camera, OMG Autographer.Fan et al [
] proposed the compilation of a journal using the captions of photo streams acquired through camera-based lifelogs. This type of lifelogging collects a large number of images, which in turn are of low quality, noisy, and ambiguous, as they are taken automatically. In this study, 2 authors used Narrative Clip cameras for 5 months to create a data set.Most studies that used visual lifelogs collected images or videos and created data sets that often contain very limited data, which results in insufficient data to train machine and deep learning algorithms efficiently. In the study by Garcia del Molino et al [
], a large-scale data set with a first-person perspective was created with >1.5 million images captured by 57 users using a wearable camera to train a visual context predictor. This approach can be used to model daily activities and learn the associations between different scenes.Furnari et al [
] presented a method for temporal segmentation based on personal locations. This study is very promising because it achieves results that are as accurate as those of other methods in the literature. Oliveira-Barra et al [ ] proposed a comprehensive methodology for egocentric photo stream analysis. They performed a summary of autobiographical episodes and a semantic key-frame selection and, finally, implemented text-based inverted index retrieval techniques. The episode temporal segmentation was based on semantic regularized–clustering [ ]. This model was applied to a data set, and the results suggest that this system stimulates the memory of patients with mild cognitive impairment; for example, patients with dementia.Audio
As stated in
, a lifelogging application can also use audio lifelogs, generally captured by wearable audio recorders, smartphones, or video cameras that can record audio for several hours or days using a microphone. In the MyLifeBits project [ , ], Gordon Bell used a wearable microphone to record audio clips and stored them in his personal lifelogs. Ellis and Lee [ ] described several practical advantages of using audio lifelogs and conducted experiments with different equipment and techniques. Totally, there are 3 major advantages of using audio lifelogs [ ]: audio devices, such as microphones, are less sensitive to positioning or motion than cameras; audio data are smaller in file size than videos or image sequences; and audio archives can provide a wide range of useful information, such as location, activities, people, and words.Audio lifelogs can provide useful information to lifelogging systems, and human activities are reflected in a rich variety of acoustic events and environmental sound cues. Shaikh et al [
] proposed a method to detect and classify activities of daily living, such as laughing, talking, cooking, and so on, and location of the person, such as inside a train, at home, at school, and so on, from the environmental sound cues. Shah et al [ ] proposed a lifelogging system using audio records that included speech, music, and environmental sounds. In large audio lifelogs, manual browsing and searching for events or specific audio clips is time-consuming. Therefore, to deal with several types of audio and build an easy, intuitive, and efficient lifelogging application, a generalized and more complex approach was presented in the study by Shah et al [ ].Other studies have used audio lifelogs to segment and classify them according to several characteristics. For example, Yamano and Itou [
] recorded audio lifelogs using wearable microphones and conducted several experiments that enabled browsing these lifelogs. The audio lifelogs were segmented and clustered into events to classify them according to place, speaker, and time. Ziaei et al [ ] proposed an analysis system, which automatically estimates the number of unique people and environments using personal audio records.Location
Lifelogs based on locations can be recognized in 2 different ways: GPS coordinates, such as longitude and latitude, and physical or semantic locations characterized by the place or environment, such as home, office, or more specific locations such as the University of Aveiro. Literature indicates that GPS tracking devices and wearable devices improve the users’ self-esteem when evaluating the effects on the quality of life [
, ]. It is important to note that in this case, the data from GPS tracking devices were not intended to retrieve memories. Nevertheless, the location information may complement visual lifelogs by identifying where the images were taken. This information is usually expressed as coordinates. Moreover, lifelogs offer the option to register relevant locations under intuitive names such as my son’s house [ , ]. When the user checks her lifelog, both the image and the location are displayed. Thus, the user may recall the corresponding memory more easily, even if no spatial cues are visible in the image.Li et al [
Li et al [ ] proposed a method for relating user activities to their location. The authors used spatial and temporal constraints to infer where the user worked or studied. Although this method does not correctly identify all activities, the results are promising. Furthermore, the proposed method points to the possible automatic compilation of a journal of the places and activities of everyday life by just using a smartphone, which, in turn, can aid memory retrieval.
In the study by Tanaka et al [ ], a method for daily context recognition based on GPS location lifelogs from a smartphone was proposed. The method recognizes the lifelogger’s location and activity as contexts and can distinguish several contexts at the same location; for example, in a shopping mall, it can distinguish between shopping, eating a meal, and watching a movie at the cinema. By using a smartphone, lifeloggers can track their activities over time and observe their daily life in more detail.
Physical Activity
Physical activity is fundamental for human beings and is associated with better general health status and improved quality of life. Accelerometers, gyroscopes, goniometers, force sensors, and pressure sensors enable the collection of diverse information; when strategically placed on the user, these sensors can assess gait and detect falls [ ]. Moreover, these sensors are often incorporated into smartwatches or smart bands to monitor physical activity [ ]. In addition to counting steps and estimating walked distance, smartwatches and wristbands can record heart rate, detect stair climbing, and capture arousal, stress, and excitement through electrodermal activity [ ].
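A minimal sketch of how such accelerometer data can be turned into activity-intensity labels is shown below. The feature (SD of the gravity-compensated acceleration magnitude) is a common crude choice, and the cut points are assumptions for illustration, not values from the cited studies.

```python
from math import sqrt

def activity_intensity(window, g=9.81):
    """Classify a window of triaxial accelerometer samples by intensity.

    `window` is a list of (ax, ay, az) in m/s^2. The standard deviation
    of the gravity-compensated acceleration magnitude is used as a
    crude intensity measure; the cut points are illustrative only.
    """
    mags = [sqrt(ax**2 + ay**2 + az**2) - g for ax, ay, az in window]
    mean = sum(mags) / len(mags)
    sd = sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    if sd < 0.3:
        return "sedentary"
    if sd < 1.5:
        return "light"
    if sd < 4.0:
        return "moderate"
    return "vigorous"

still = [(0.0, 0.0, 9.81)] * 50                        # device lying flat
walking = [(0.0, 0.0, 9.81 + (-1) ** i * 2.0) for i in range(50)]
print(activity_intensity(still), activity_intensity(walking))
# sedentary moderate
```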
Doherty et al [ ], following their previous studies on event-based segmentation [ ] and recognition of human activities [ ], proposed the use of accelerometers combined with images from wearable cameras to identify certain physical activity behaviors. In this approach, the accelerometer data determined the event boundaries, and the authors could identify sedentary, light, moderate, and vigorous intensity physical activities.
With the easy accessibility of sensors such as accelerometers, which measure the acceleration forces acting on an object or person to determine the object’s position in space and monitor movement, Hurvitz et al [ ] proposed methods to measure and analyze activity behaviors using data such as location, activity, and environment, collected from a combination of accelerometers, GPS data, and travel diaries. The authors also provided an interface tool to structure and visualize location and physical activity data simultaneously.
Yang et al [ , ] studied several existing lifelogging physical activity measurement devices and identified measurement uncertainties in an Internet of Things environment that impact the efficiency and accuracy of lifelogging and health applications.
Several diseases, such as obesity, hypertension, and cardiovascular diseases, are correlated with insufficient physical activity. Dobbins et al [ ] proposed an approach to collect and process data from triaxial accelerometers and a heart rate monitor to classify physical activities, such as lying, sitting, running, working on a computer, and walking, into different activity levels. In addition to this classification, a visual interface was provided to display the classification of the user’s daily physical activities on a smartwatch.
Recently, Ni et al [ ] explored a 2-stage hybrid model to predict human physical activity status from lifelogging data collected by wearable sensors. Their goal was to provide health care analytics that support individual decisions in real-time monitoring and statistical analysis, provide personalized advice to individuals, and, ultimately, encourage positive attitudes toward healthy lifestyles.
Physiological Signals
Physiological data are inevitably related to the health care service area. These data have been increasingly used in lifelogs over the years, which can be explained by the expansion of the fitness industry [ , ]. The most relevant physiological data are heart rate, blood pressure, electroencephalogram, electromyogram, electrocardiogram, blood oxygen saturation, blood glucose, body temperature, and breathing rate [ , , ]. However, the sensors needed to collect most of these data still have to be incorporated into more practical devices before they become prominent in lifelogging applications.
Heart rate is related to user activity; therefore, it plays a relevant role, for example, when the intention is to identify user activities from visual lifelogs. In the study by Dobbins et al [ ], heart rate information was combined with an accelerometer to detect physical activity and support people with diseases such as obesity. Another relevant biological signal is blood pressure; similar to heart rate, the respective sensors can be incorporated into wearable devices, particularly smartwatches [ ].
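As a small illustration of how heart rate streams can be summarized into cues usable by a lifelogging system, the sketch below computes a rolling baseline and flags sustained elevation. The window size and thresholds are illustrative assumptions with no clinical validity.

```python
def hr_features(bpm_series, window=60):
    """Summarize a stream of per-second heart rate readings (bpm).

    Returns the rolling baseline (median of the last `window` samples)
    and a flag for sustained elevation above it: a simple cue that can
    be cross-checked against accelerometer or visual lifelogs.
    """
    recent = sorted(bpm_series[-window:])
    baseline = recent[len(recent) // 2]
    current = sum(bpm_series[-5:]) / 5          # mean of the last 5 s
    elevated = current > baseline * 1.25
    return {"baseline": baseline, "current": current, "elevated": elevated}

rest = [62, 63, 61, 64, 62] * 12                # one minute at rest
active = rest + [95, 99, 102, 104, 101]         # sudden exertion or stress
print(hr_features(active))
# {'baseline': 62, 'current': 100.2, 'elevated': True}
```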
Dobbins and Fairclough [ ] collected lifelogging data from multiple sources, including physiological signals, such as electrocardiogram and photoplethysmogram data, and driving data, such as vehicle speed, location, and first-person environment images, to develop several classifiers for detecting stress in real-world driving.
Challenges and Data Sets
Over the past years, the term lifelogging has received significant attention from both the research and commercial communities. The events that introduced the lifelogging concept to the academic community were the Association for Computing Machinery Continuous Archival and Retrieval of Personal Experiences workshops from 2004 to 2006 [ - ]. These workshops were essential for allowing those who had previously designed and developed solutions independently [ , , , ] to share tools and experiences, and they helped establish lifelogging as an emerging research area.
The table below presents the most relevant data sets existing in the literature for lifelog research.
Data sets | Visual | Audio | Location | Physical activity | Physiological signals
NTCIRa-12 lifelog [ ] | ✓ (OMG Autographer) | —b | ✓ (semantic) | ✓ (smartphone) | —
NTCIR-13 lifelog [ ] | ✓ (Narrative Clip) | ✓ (music listened to) | ✓ (semantic) | ✓ (smartphone) | ✓ (multiple sensors)
NTCIR-14 lifelog [ ] | ✓ (OMG Autographer) | ✓ (music listened to) | ✓ (semantic) | ✓ (smartphone) | ✓ (multiple sensors)
EDUBc [ ] | ✓ (Narrative Clip) | — | — | — | —
aNTCIR: National Institute of Informatics Testbeds and Community for Information Access Research.
bThe data set does not contain this type of data.
cEDUB: Egocentric Dataset of the University of Barcelona.
In 2016, the first test collection for personal lifelog data was introduced [ ] and used for the National Institute of Informatics Testbeds and Community for Information Access Research (NTCIR)-12–Lifelog task [ ]. It promoted a comparative evaluation of information access and retrieval systems operating over personal lifelogs. The lifelogs in this test collection consisted of images from 3 lifeloggers using the wearable camera OMG Autographer. It also contained several semantic locations (eg, home and work) and physical activities (eg, walking, transport, and running). The data set was anonymized to ensure the privacy of both lifeloggers and bystanders by removing identifiable content, such as recognizable faces and absolute GPS locations. The test collection also included a set of topic descriptions, which represent the lifelogger’s retrieval and reflection reasons for accessing memories [ ].
Consequently, challenges regarding lifelogging started emerging. The First Workshop on Lifelogging Tools and Applications in 2016 [ ] aimed to discuss approaches to lifelog data capture, analytics, and applications, thus identifying opportunities and challenges for researchers in this new and challenging area. In 2017, the Second Workshop on Lifelogging Tools and Applications was organized simultaneously with the lifelog evaluation tasks NTCIR-13 Lifelog-2 Task [ ] and ImageCLEFlifelog 2017 Task [ ].
The ImageCLEFlifelog 2017 challenge was introduced with the NTCIR-12–Lifelog data set [ ], but different subtasks were proposed to the participants. In the NTCIR-13 Lifelog-2 Task, however, the organizers created a new data set based on the requirements of the first test collection for personal lifelog data. In addition to the tasks of NTCIR-12–Lifelog, they addressed 2 different challenges for lifelog data organization and retrieval [ ].
Since then, workshops and tasks have been organized to advance research on some of the key challenges: the ImageCLEFlifelog challenges [ - ]; the Lifelog Search Challenge [ - ], which aims to encourage the development of efficient interactive lifelog retrieval systems; and the NTCIR Lifelog Tasks [ ]. Over the years, these challenges have focused on creating a comparative benchmark activity for lifelogging applications, and the data sets used in each of them are very similar or even the same; they start from the first test collection for personal lifelog data [ ], which they extend or improve.
In addition to the data sets used in these challenges, several other data sets containing egocentric data are available [ , , , ]. However, most of these data sets focus on different and smaller amounts of data for specific use case applications rather than on capturing all the daily activities and behaviors of a lifelogger. An example is the Egocentric Dataset of the University of Barcelona (EDUB) [ ], which is divided into different sub–data sets depending on the data annotations: the EDUB-Obj data set for object localization or segmentation [ ], the EDUB-Seg data set for egocentric event segmentation [ , ], and the EDUB-SegDesc data set, which can be used either for egocentric event segmentation or for egocentric sequence description [ ].
Lifelog Retrieval Software Tools
Several applications have been presented throughout the aforementioned challenges and workshops. The Lifelog Search Challenge, unlike challenges such as ImageCLEFlifelog that pose very specific tasks, has been one in which several lifelogging systems with utilities for real-world use have been presented.
A retrieval and exploration lifelogging system called lifeXplore, which provides search and browsing features optimized for lifelog data, was presented by Münzer et al [ ]. It was based on diveXplore [ - ], a video search system previously developed for video retrieval competitions. Besides efficient presentation and summarization of lifelog data, it includes different methods of retrieving and visualizing content, such as a feature map, a day inspector, a lifelog filter, sketch search, and similarity search. Over time, the lifeXplore system was improved by including location-based filtering, automatic feature map browsing, and optical character recognition; moreover, uniform sampling was used as an alternative method for segmenting videos [ , ].
Other tools derived from video retrieval competitions are the VIRET tool [ - ], an updated version of the SIRET interactive video retrieval tool [ ] that addresses specific properties of visual lifelogs, and vitrivr [ , ], which was developed for video retrieval [ ] and later adapted to support multimodal data [ ], such as lifelogs.
Zhou et al [ ] proposed an interactive lifelog search engine called LIFER, which is queried based on several different forms of lifelog data, such as visual concepts, activities, locations, and time. Despite some limitations, LIFER allows users to retrieve moments from their personal life archives in a reliable and efficient manner. Enhanced versions, LIFER 2.0 [ ] and LIFER 3.0 [ ], were proposed with additional visual features to solve several tasks of ImageCLEFlifelog 2019 and 2020, respectively. It should be noted that many other applications have been proposed in the challenges and workshops mentioned previously [ - ].
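The core mechanism behind such concept-based querying can be illustrated with a small inverted index over annotated moments. This is a generic sketch of the technique, not the implementation of LIFER or any other cited system; the tags and moments are invented for the example.

```python
from collections import defaultdict

# Each lifelog "moment" carries annotations produced upstream:
# visual concepts, semantic location, activity, time of day, and so on.
MOMENTS = [
    {"id": 1, "tags": {"coffee", "kitchen", "morning", "home"}},
    {"id": 2, "tags": {"screen", "office", "working", "afternoon"}},
    {"id": 3, "tags": {"coffee", "cafe", "friend", "afternoon"}},
]

def build_index(moments):
    """Inverted index: tag -> set of moment ids."""
    index = defaultdict(set)
    for m in moments:
        for tag in m["tags"]:
            index[tag].add(m["id"])
    return index

def query(index, terms):
    """AND-query: ids of moments annotated with all query terms."""
    sets = [index.get(t, set()) for t in terms]
    return sorted(set.intersection(*sets)) if sets else []

idx = build_index(MOMENTS)
print(query(idx, ["coffee", "afternoon"]))  # [3]
```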
In addition to the aforementioned applications, others have been incorporated into the context of health care. Health lifelogs focus on medical and clinical perspectives and exploit additional sensors to gather information; physical activity, heart rate, blood pressure, and body temperature are examples of measurements that may be valuable from a clinical perspective [ , , ]. Lifelogs can be used to create platforms that provide a collection of digital memories in a structured and searchable manner, similar to the DigMem system [ ]. Another example of an application is the compilation of a diary based on information extracted from the lifelogs [ ].
A recent study introduced the use of lifelog monitoring for the early detection of complications in pregnancy [ ]. These lifelogs feature physiological data and self-reported information. The authors aimed to detect physiological changes and, together with multiomics data, to understand the mechanisms responsible for pregnancy-related diseases. Kim et al [ ] proposed the development of a ubiquitous health care system based on biological and lifelog data, designed to assist the care of patients with chronic medical conditions. A Japanese study discussed the viability of a platform (PeOPLe) containing self-recorded lifelogs and medical records to support health care applications [ ]: each patient would provide lifelogs to the platform to assist the health management of older patients and to request physician support based on automatic predictions. Similar to PeOPLe, the study presented by Choi et al [ ] identified machine learning and mobile learning as helpful tools for examining the big data resulting from lifelogs.
In addition to supporting diagnostic and health care systems, as illustrated by the examples mentioned previously, lifelogging can assist in changing lifestyles and behaviors [ ]. The awareness provided by self-monitoring encourages users to make healthy choices, and if the progression is noticeable, they feel motivated to continue. This applies to nutrition, physical activity, sports, active travel, and psychological well-being [ , , , , ].
Applications
Lifelogs comprise data of different natures and consequently present an extensive range of possible applications within different use cases, as summarized in the table below. It is noteworthy that work or other procedures may also be recorded through lifelogging; an example is the visual lifelogging of a workday by health care professionals [ ]. Despite the popularity of wearable lifelogging devices, other sensors can be strategically placed to monitor user activity. These sensors can be used for older people with assisted living needs, and the data they acquire can be recorded as a lifelog.
In summary, besides memory assistance, monitoring is the main application of lifelogging in health care. This is specifically relevant for the older population, but not exclusively. In addition, monitoring prompts self-reflection by the user, resulting in motivation for self-improvement.
Study | Daily activities | Event segmentation | Health care | Summarization | Retrieval
Piasek et al [ ] | —a | — | ✓ | — | —
Hodges et al [ ] | — | — | ✓ | — | —
Doherty et al [ ] | ✓ | — | — | — | —
Gurrin et al [ ] | ✓ | — | — | — | —
Pauly-Takacs et al [ ] | — | — | ✓ | — | —
Wang et al [ , ] | ✓ | — | — | — | —
Song et al [ ] | ✓ | — | — | — | —
Li et al [ ] | — | ✓ | — | — | —
Bolanos et al [ ] | — | ✓ | — | ✓ | —
Talavera et al [ ] | — | ✓ | — | — | —
Dimiccoli et al [ ] | — | ✓ | — | — | —
Gupta and Gurrin [ ] | — | ✓ | — | — | —
Fan et al [ ] | — | — | — | ✓ | —
Garcia del Molino et al [ ] | — | ✓ | — | — | —
Furnari et al [ ] | — | ✓ | — | — | —
Oliveira-Barra et al [ ] | — | — | — | ✓ | ✓
Ellis and Lee [ ] | — | ✓ | — | — | —
Shaikh et al [ ] | ✓ | — | — | — | —
Shah et al [ ] | — | — | — | — | ✓
Yamano and Itou [ ] | — | ✓ | — | — | —
Ziaei et al [ ] | — | ✓ | — | — | —
Li et al [ ] | ✓ | — | — | — | —
Tanaka et al [ ] | ✓ | — | — | — | —
Doherty et al [ ] | ✓ | — | — | ✓ | —
Hurvitz et al [ ] | ✓ | — | — | — | —
Yang et al [ , ] | ✓ | — | ✓ | — | —
Dobbins et al [ ] | ✓ | — | ✓ | — | —
Ni et al [ ] | ✓ | — | ✓ | — | —
Kim et al [ ] | — | — | — | — | ✓
Choi et al [ ] | — | — | ✓ | — | ✓
Dobbins and Fairclough [ ] | — | — | ✓ | — | —
Leibetseder and Schoeffmann [ ] | — | — | — | ✓ | ✓
Kovalčík et al [ ] | — | — | — | — | ✓
Gasser et al [ ] | — | — | — | — | ✓
Le et al [ ] | — | — | — | — | ✓
Le et al [ ] | — | — | — | — | ✓
Ribeiro et al [ ] | — | — | — | — | ✓
Mai-Nguyen et al [ ] | — | ✓ | — | — | ✓
Tran et al [ ] | — | ✓ | — | — | ✓
Rossetto et al [ ] | — | — | — | — | ✓
Khan et al [ ] | — | — | — | — | ✓
Dobbins et al [ ] | — | — | — | — | ✓
Karako et al [ ] | — | — | ✓ | — | —
Sugawara et al [ ] | — | — | ✓ | — | —
Kim et al [ ] | — | — | ✓ | — | —
Dobbins and Fairclough [ ] | — | — | ✓ | — | —
aThe computational tool does not focus on this application.
Privacy and Concerns
One of the most evident challenges associated with lifelogging is the infringement of privacy [ , ]. The nonconsensual logging of bystanders, and even the logging of aware friends and family, exposes them; lifelogs may thus pose a privacy threat to the surrounding people. A possible solution for visual lifelogs is to blur faces [ , ]; however, when visual lifelogs are used as a memory aid, blurring faces may hinder this function. Lifeloggers themselves are also susceptible to privacy issues, as lifelogs may constitute valuable information for corporations, including advertisers, which reinforces the necessity of the General Data Protection Regulation. Nevertheless, the unobtrusive recording of audio or capturing of images without the explicit consent of everyone involved is prohibited by law.
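A common form of the face-blurring mitigation can be sketched with OpenCV, assuming the opencv-python package is installed. The file names are placeholders, and a production system would use a stronger detector than the Haar cascade bundled with OpenCV.

```python
import cv2  # pip install opencv-python

def blur_faces(src_path, dst_path):
    """Blur detected faces in a lifelog image before storage or sharing.

    Uses the frontal-face Haar cascade shipped with OpenCV; profile
    faces and missed detections would need additional handling.
    """
    image = cv2.imread(src_path)
    if image is None:
        raise FileNotFoundError(src_path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        image[y:y + h, x:x + w] = cv2.GaussianBlur(
            image[y:y + h, x:x + w], (51, 51), 0)
    cv2.imwrite(dst_path, image)
    return len(faces)

# Example (hypothetical file names):
# blur_faces("capture_0001.jpg", "capture_0001_blurred.jpg")
```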
The use of smartwatches by lifeloggers can be advantageous for recording health data. Kim et al [ ] proposed a method to collect data from smartwatches while preserving the user’s privacy. This study is of interest as it attempts to circumvent privacy issues regarding the use of smartwatches, and its principles can serve as inspiration for similar approaches on other devices.
Lifelogs may also affect our perception of reality; for example, memories may seem more recent than they actually are [ ]. Furthermore, despite all efforts, lifelogs can capture only a small fraction of reality, and as such, only concrete information about subjective experiences can be recorded. Consequently, lifelogs cannot be considered ground truth, as there may be failures that prevent full documentation [ ].
Another result of our analysis concerns the permanent character inherent in lifelogs. Although this is advantageous for applications such as memory retrieval, it may become problematic. For example, people with mental illnesses may become obsessed with some memories and dwell on them [ ]. Furthermore, even for healthy people, this permanent record may put them under the impression that they are not allowed to change [ ]. Therefore, it has been proposed in the literature that lifelogs should try to mimic human memory and implement a forgetting functionality [ , , ].
Another pertinent concern regarding lifelogging is the possibility that people may rely excessively on lifelogs to remember [ ]. This is specifically relevant for future research, as the goal is to enhance the memory of healthy people or improve the memory of people with dementia.
With the popularization of lifelogs and their adoption by most of the population, surveillance may become an issue. On the one hand, law enforcement may consider lifelogs a viable method to investigate criminals, which may result in intrusions into the privacy of innocent people [ ]; on the other hand, lifelogs may be admitted as proof of innocence. In addition, lifelogs can potentially empower surveillance by authorities. A legitimate ethical question that emerges from this surveillance is whether illegal behaviors perpetrated by bystanders should be reported by lifeloggers [ ].
Discussion
Principal Findings
In lifelogging, devices should be ubiquitous, and data capture should occur without requiring any action on the part of the wearer. Currently, everything and everybody with network connectivity can be turned into sensors that continuously generate data. Mobile and wearable devices have been integrated into everyday activities in a seamless and ubiquitous manner. It has become increasingly possible to remotely monitor behaviors using our smartphones or wearable devices.
Lifelogs are personal data created through the life experiences and behaviors of individuals during their daily life, such as images, videos, audio, biometric data, or locations, collected by physical sensors. Lifelogs are prone to become a powerful tool to retrieve memories or increase knowledge about an individual’s experiences or behaviors. However, regarding human digital memories (or personal digital memories), different viewpoints arise: although some use human digital memories interchangeably with lifelogs, it is valid to argue that human digital memories are the result of the processing and organization of lifelogs [ , , ].
Visual lifelogs are among the most used data in lifelogging approaches and applications. These lifelogs provide important visual information, such as the environment, objects, activities, and behaviors experienced and seen by the lifelogger. As human beings, we can distinguish this visual information and interpret it to reconstruct a previously experienced memory. For machines such as our computers, however, this information is only pixels and numbers, which requires the development of algorithms and methods to interpret and analyze these data and retrieve a specific memory efficiently. One of the main advantages of visual lifelogging is the resulting feeling of security: users are not worried about remembering because they know that everything is being documented [ ]. It should be noted that visual lifelogs are usually accompanied by supplementary information, as illustrated by the examples analyzed in the previous sections. These data can help in memory retrieval because the richer the lifelogs, the more likely they are to hold relevant cues.
Audio lifelogs are less used in lifelogging applications than visual lifelogs because of the additional challenges they bring to an application, and they can be uncomfortable for the lifelogger. However, audio lifelogs may contain important information for lifelogging applications, such as conversations, speeches, music, or environmental sounds. Moreover, visual lifelogs can take advantage of sound records, as illustrated by the cases mentioned in the Results section. Although audio devices are mainly used as reminder devices, voice records can be used to document important events as the user experiences them or shortly thereafter. However, there is a lack of studies on the use of audio lifelogs and their relevance in lifelogging applications for people with dementia.
Location-based lifelogs allow people to retrieve information about the environment and the activities that may occur in a given location. Regarding memory retrieval, locations complemented by other information, such as visual lifelogs or temporal features, facilitate the search for these data and make a lifelogging system more accurate [ ]. For example, people with dementia tend to lose their ability to recognize familiar places or become lost and confused about their location. Such information can be retrieved together with visual lifelogs and can, therefore, stimulate the memory of these people.
Extracting physical activities from images alone is a complex and sometimes inaccurate process, because certain objects or scenes can be associated with a wide range of activities. However, lifelog data such as heart rate and accelerometer data can be used to recognize the activities of the lifelogger, and by using semantic concepts extracted from images and locations, the classification of these activities can improve significantly. Human physiological signals have several potential benefits in lifelogging applications, such as health care and daily life monitoring. However, using a wide range of these data requires several sensors, and most existing lifelogging technologies do not incorporate all of them; for example, multiple devices are required to collect these signals from an individual in real time, which makes data synchronization and filtering challenging [ ].
Physiological data are rarely used in isolation, and these data alone rarely provide cues to retrieve memories, particularly in patients with dementia, whose memories are triggered mainly by visual information. The main utility of physiological data in lifelogging is for medical records and physical activity. However, they may also be used to detect emotions and, similar to visual lifelogs, they can help form a more complete digital memory [ ].
Regarding privacy and related concerns, lifeloggers must have access to their data and opportunities to rectify, remove, and control the data that are collected. In addition, lifeloggers should be aware of how their data are stored and used, who owns the lifelogs, and who owns the information obtained from them [ ]. Gurrin et al [ ] assume that the data gatherer owns the lifelogs, which raises the question, “What happens to lifelogs when the corresponding lifelogger dies?” On the one hand, lifelogs contain a lifetime of personal information; on the other hand, if they are stored in databases, they can help improve research approaches. Thus, it is necessary to establish regulations on how to approach these concerning issues.
Conclusions
The integration of lifelogging into people’s lives can be beneficial for improving their quality of life, either by serving as a tool for memory augmentation or by providing support to those with memory issues. Lifelogging systems can create relevant digital memories. Through this narrative review, we understand that contextual information can be extracted from lifelogs, which provides an understanding of a person’s daily activities based on events, experiences, and behaviors.
Initially, the scientific community in the lifelogging research field focused their attention on the design and development of solutions or devices capable of acquiring and storing data without interfering with one’s daily life. However, with the increase in wearable devices available for personal data acquisition and the large amount of data to be stored and retrieved, new challenges and issues arose regarding the storage, processing, organization, and retrieval of lifelogs.
An important conclusion of this research exercise is that visual lifelogs are most prevalent when the goal is to create digital memories as surrogate memories. Nevertheless, there is a tendency to associate visual lifelogs with other lifelog data such as audio, location, physical activities, and physiological signals. Audio lifelogs can provide relevant information, such as speeches or environmental sounds, which encode information about locations, activities, and overall context. Along with these personal data, location-based lifelogs can provide additional information. Physical activity and physiological lifelog data are often associated with health care and quality of life. The several sensors that can be incorporated in wearable and easy-to-use devices provide useful information for the recognition and classification of the activities and behaviors of a user. These data used in isolation have some benefits for health care and personal monitoring. Nevertheless, when combined with other lifelogs, they potentially provide important cues to retrieve and form more complete personal digital memories. In addition to creating human digital memories, the acquisition and processing of these lifelogs can be used for monitoring daily life and self-improvement. As they comprise data of different natures, they present an extensive range of possible applications within different use cases. In addition to their relevance in health care, several other applications have been explored such as daily activity analysis, event segmentation, summarization, and information retrieval.
The practice of lifelogging requires tracking and recording of lifelogs in everyday life, for which it is necessary to capture personal data over long periods or even the lifelogger’s entire life. These lifelogs can be combined to develop methods to recognize several contextual data to provide a broader understanding of the lifelogger’s life, such as events, experiences, behaviors, and moments. However, the lifelogs must be synchronized with each other, which can be achieved through time features recorded at the time of lifelog acquisition.
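As a concrete illustration of such time-based synchronization, the following minimal sketch pairs records from two lifelog streams by nearest timestamp. The streams, field layout, and tolerance are assumptions made for the example, not a specification from any cited system.

```python
from bisect import bisect_left

def align(reference, other, tolerance_s=2.0):
    """Pair each reference record with the nearest-in-time record of
    another stream, if within `tolerance_s` seconds.

    Both inputs are lists of (unix_timestamp, value) sorted by time,
    which is how synchronized lifelog streams are typically stored.
    """
    times = [t for t, _ in other]
    pairs = []
    for t, value in reference:
        i = bisect_left(times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(times)]
        if not candidates:
            continue
        j = min(candidates, key=lambda j: abs(times[j] - t))
        if abs(times[j] - t) <= tolerance_s:
            pairs.append((value, other[j][1]))
    return pairs

images = [(100.0, "img_001"), (130.0, "img_002")]
heart = [(99.2, 71), (128.7, 96), (160.0, 88)]
print(align(images, heart))  # [('img_001', 71), ('img_002', 96)]
```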
Nevertheless, when these lifelogs are introduced into a lifelogging application, some of them are not relevant or do not contain useful information for further processing and visualization. Therefore, preprocessing methods can be applied to select only relevant lifelogs and remove or correct those that may introduce errors and noise into the system. To retrieve and visualize the previously selected lifelogs, the lifelogging system must be able to interpret these lifelogs in a way similar to that of the lifelogger. Therefore, it is important to annotate, organize, and store the lifelogs with semantic concepts that provide more information about the environment and activities of the lifelogger. These semantic concepts are useful to understand the lifelogger’s behavior and define events and specific moments, which may be required and visualized in the future as surrogate memories.
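A minimal example of such preprocessing is a quality gate over visual lifelogs. The quality scores and thresholds below are illustrative assumptions; real pipelines compute them from the images (eg, variance of the Laplacian for sharpness) and tune them per device.

```python
def usable(record,
           min_brightness=0.08, max_brightness=0.97,
           min_sharpness=25.0):
    """Decide whether a visual lifelog record is worth keeping.

    `record` carries precomputed quality scores: mean brightness in
    [0, 1] and a sharpness score. The cut-offs are illustrative.
    """
    b, s = record["brightness"], record["sharpness"]
    if not (min_brightness <= b <= max_brightness):
        return False             # pocket shots and overexposed frames
    return s >= min_sharpness    # reject motion-blurred frames

stream = [
    {"id": 1, "brightness": 0.45, "sharpness": 130.0},  # keep
    {"id": 2, "brightness": 0.02, "sharpness": 80.0},   # camera covered
    {"id": 3, "brightness": 0.50, "sharpness": 6.0},    # heavy blur
]
print([r["id"] for r in stream if usable(r)])  # [1]
```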
This narrative review shows that there is a considerable number of published studies on lifelogging. Nevertheless, through this analysis, we identified several open questions and possible lines of investigation in this currently important topic.
Acknowledgments
This study was supported by the Integrated Programme of Scientific Research and Technological Development Project Smart Open Campus (CENTRO-01-0145-FEDER-000010), cofunded by the Centro 2020 program; Portugal 2020; and European Union, through the European Regional Development Fund. This work was supported by National Funds through the Foundation for Science and Technology, in the context of the project UIDB/00127/2020.
Authors' Contributions
RR, AT, and AJRN designed and conceptualized the study. RR was responsible for data collection and literature screening and drafted the manuscript. AT and AJRN contributed to the critical revision of the manuscript. All authors reviewed the manuscript and approved it for publication.
Conflicts of Interest
None declared.
References
- Dodge M, Kitchin R. ‘Outlines of a world coming into existence’: pervasive computing and the ethics of forgetting. Environ Plann B Plann Des 2016 Jul 22;34(3):431-445. [CrossRef]
- Gurrin C, Smeaton AF, Doherty AR. LifeLogging: personal big data. FNT Inf Retrieval 2014;8(1):1-125. [CrossRef]
- Bush V. As we may think. Interactions 1996 Mar;3(2):35-46. [CrossRef]
- Mann S. Wearable computing: a first step toward personal imaging. Computer 1997;30(2):25-32. [CrossRef]
- Mann S. 'WearCam' (The wearable camera): personal imaging systems for long-term use in wearable tetherless computer-mediated reality and personal photo/videographic memory prosthesis. In: Proceedings of the Digest of Papers. Second International Symposium on Wearable Computers (Cat. No.98EX215). 1998 Presented at: Digest of Papers. Second International Symposium on Wearable Computers (Cat. No.98EX215); Oct 19-20, 1998; Pittsburgh, PA, USA. [CrossRef]
- Mann S. Continuous lifelong capture of personal experience with EyeTap. In: Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences. 2004 Presented at: MM04: 2004 12th Annual ACM International Conference on Multimedia; Oct 15, 2004; New York, USA. [CrossRef]
- Mann S, Fung J, Aimone C, Sehgal A, Chen D. Designing EyeTap digital eyeglasses for continuous lifelong capture and sharing of personal experiences. 2005 Presented at: CHI 2005; April 2-7, 2005; Portland, Oregon.
- Mann S, Huang J, Janzen R, Lo R, Rampersad V, Chen A, et al. Blind navigation with a wearable range camera and vibrotactile helmet. In: Proceedings of the 19th ACM international conference on Multimedia. 2011 Presented at: MM '11: ACM Multimedia Conference; Nov 28-Dec 1, 2011; Scottsdale Arizona USA. [CrossRef]
- Mann S, Lo R, Ovtcharov K, Gu S, Dai D, Ngan C, et al. Realtime HDR (High Dynamic Range) video for eyetap wearable computers, FPGA-based seeing aids, and glasseyes (EyeTaps). In: Proceedings of the 2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE). 2012 Presented at: 2012 25th IEEE Canadian Conference on Electrical and Computer Engineering (CCECE); Apr 29-May 2, 2012; Montreal, QC, Canada. [CrossRef]
- Mann S, Ali M, Lo R, Wu H. FreeGlass for developers, “haccessibility”, and Digital Eye Glass + Lifeglogging research in a (sur/sous)veillance society. In: Proceedings of the International Conference on Information Society (i-Society 2013). 2013 Presented at: International Conference on Information Society (i-Society 2013); Jun 24-26, 2013; Toronto, ON, Canada.
- Mann S, Mann C, Lam D, Mathewson K, Stairs J, Pierce C, et al. The human eye as a camera. In: Proceedings of the 2019 IEEE International Conference on E-health Networking, Application & Services (HealthCom). 2019 Presented at: 2019 IEEE International Conference on E-health Networking, Application & Services (HealthCom); Oct 14-16, 2019; Bogota, Colombia. [CrossRef]
- Gemmell J, Bell G, Lueder R, Drucker S, Wong C. MyLifeBits: fulfilling the Memex vision. In: Proceedings of the tenth ACM international conference on Multimedia. 2002 Presented at: MM02: ACM Multimedia 2002; Dec 1-6, 2002; Juan-les-Pins France. [CrossRef]
- Gemmell J, Lueder R, Bell G. The MyLifeBits lifetime store. In: Proceedings of the 2003 ACM SIGMM workshop on Experiential telepresence. 2003 Presented at: ETP '03: 2003 ACM SIGMM workshop on Experiential telepresence; Nov 7, 2003; Berkeley California. [CrossRef]
- Gemmell J, Bell G, Lueder R. MyLifeBits. Commun ACM 2006 Jan;49(1):88-95. [CrossRef]
- Bell G, Gemmell J. A digital life. Sci Am 2007 Mar;296(3):58-65. [CrossRef]
- Bell G, Gemmell J. Total Recall: How the E-Memory Revolution Will Change Everything. New York, NY: Dutton Penguin; 2009.
- Hodges S, Williams L, Berry E, Izadi S, Srinivasan J, Butler A, et al. SenseCam: a retrospective memory aid. In: Dourish P, Friday A, editors. UbiComp 2006: Ubiquitous Computing. UbiComp 2006. Lecture Notes in Computer Science, vol 4206. Berlin, Heidelberg: Springer; 2006:177-193.
- Hodges S, Berry E, Wood K. SenseCam: a wearable camera that stimulates and rehabilitates autobiographical memory. Memory 2011 Oct;19(7):685-696. [CrossRef] [Medline]
- Chen M, Mao S, Liu Y. Big data: a survey. Mobile Netw Appl 2014 Jan 22;19(2):171-209. [CrossRef]
- Gurrin C, Smeaton AF, Byrne D, O'Hare N, Jones GJF, O'Connor N. An examination of a large visual lifelog. In: Li H, Liu T, Ma WY, Sakai T, Wong KF, Zhou G, editors. Information Retrieval Technology. AIRS 2008: Lecture Notes in Computer Science, vol 4993. Berlin, Heidelberg: Springer; 2008:537-542.
- Harvey M, Langheinrich M, Ward G. Remembering through lifelogging: a survey of human memory augmentation. Pervasive Mobile Comput 2016 Apr;27:14-26. [CrossRef]
- World Population Ageing 2017 Highlights. Geneva: United Nations; 2019.
- Global Action Plan on the Public Health Response to Dementia 2017 - 2025. Geneva: World Health Organization; 2017.
- Piasek P, Irving K, Smeaton AF. Case study in SenseCam use as an intervention technology for early-stage dementia. Int J Comput Healthcare 2012;1(4):304. [CrossRef]
- Ferrari R. Writing narrative style literature reviews. Med Writing 2015 Dec 23;24(4):230-235. [CrossRef]
- Greenhalgh T, Thorne S, Malterud K. Time to challenge the spurious hierarchy of systematic over narrative reviews? Eur J Clin Invest 2018 Jun 16;48(6):e12931 [FREE Full text] [CrossRef] [Medline]
- Page M, McKenzie J, Bossuyt P, Boutron I, Hoffmann T, Mulrow C, et al. Updating guidance for reporting systematic reviews: development of the PRISMA 2020 statement. J Clin Epidemiol 2021 (forthcoming). [CrossRef]
- Doherty AR, Smeaton AF. Automatically segmenting lifelog data into events. In: Proceedings of the 2008 Ninth International Workshop on Image Analysis for Multimedia Interactive Services. 2008 Presented at: 2008 Ninth International Workshop on Image Analysis for Multimedia Interactive Services; May 7-9, 2008; Klagenfurt, Austria. [CrossRef]
- Doherty AR, Caprani N, Conaire CO, Kalnikaite V, Gurrin C, Smeaton AF, et al. Passively recognising human activities through lifelogging. Comput Human Behav 2011 Sep;27(5):1948-1958. [CrossRef]
- Gurrin C, Qiu Z, Hughes M, Caprani N, Doherty AR, Hodges SE, et al. The smartphone as a platform for wearable cameras in health research. Am J Prev Med 2013 Mar;44(3):308-313. [CrossRef] [Medline]
- Pauly-Takacs K, Moulin CJ, Estlin EJ. SenseCam as a rehabilitation tool in a child with anterograde amnesia. Memory 2011 Oct;19(7):705-712. [CrossRef] [Medline]
- Wang P, Smeaton AF. Using visual lifelogs to automatically characterize everyday activities. Inf Sci 2013 May;230:147-161. [CrossRef]
- Wang P, Sun L, Yang S, Smeaton AF, Gurrin C. Characterizing everyday activities from visual lifelogs based on enhancing concept representation. Comput Vision Image Understanding 2016 Jul;148:181-192. [CrossRef]
- Song S, Chandrasekhar V, Cheung N, Narayan S, Li L, Lim J. Activity recognition in egocentric life-logging videos. In: Computer Vision - ACCV 2014 Workshops. Cham: Springer; 2015.
- Li N, Crane M, Ruskin HJ, Gurrin C. Application of statistical physics for the identification of important events in visual lifelogs. In: Proceedings of the 2013 IEEE International Conference on Bioinformatics and Biomedicine. 2013 Presented at: 2013 IEEE International Conference on Bioinformatics and Biomedicine; Dec 18-21, 2013; Shanghai, China. [CrossRef]
- Bolaños M, Mestre R, Talavera E, Giró-i-Nieto X, Radeva P. Visual summary of egocentric photostreams by representative keyframes. In: Proceedings of the 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). 2015 Presented at: 2015 IEEE International Conference on Multimedia & Expo Workshops (ICMEW); Jun 29-Jul 3, 2015; Turin, Italy. [CrossRef]
- Talavera E, Bolanos M, Dimiccoli M, Aghaei M, Radeva P. R-clustering for egocentric video segmentation. In: Pattern Recognition and Image Analysis. Cham: Springer; 2015 Presented at: 7th Iberian Conference, IbPRIA 2015; June 17-19, 2015; Santiago de Compostela, Spain. [CrossRef]
- Dimiccoli M, Bolaños M, Talavera E, Aghaei M, Nikolov SG, Radeva P. SR-clustering: semantic regularized clustering for egocentric photo streams segmentation. Comput Vision Image Understanding 2017 Feb;155:55-69. [CrossRef]
- Gupta R, Gurrin C. Approaches for event segmentation of visual lifelog data. In: MultiMedia Modeling. MMM 2018. Lecture Notes in Computer Science, vol 10704. Cham: Springer; 2018.
- Fan C, Zhang Z, Crandall DJ. Deepdiary: lifelogging image captioning and summarization. J Visual Commun Image Representation 2018 Aug;55:40-55. [CrossRef]
- Garcia del Molino A, Lim JH, Tan AH. Predicting visual context for unsupervised event segmentation in continuous photostreams. In: Proceedings of the 26th ACM international conference on Multimedia. 2018 Presented at: MM '18: ACM Multimedia Conference; Oct 22 - 26, 2018; Seoul Republic of Korea. [CrossRef]
- Furnari A, Battiato S, Farinella GM. Personal-location-based temporal segmentation of egocentric videos for lifelogging applications. J Visual Commun Image Representation 2018 Apr;52:1-12. [CrossRef]
- Oliveira-Barra G, Bolanos M, Talavera E, Gelonch O, Garolera M, Radeva P. Lifelog retrieval for memory stimulation of people with memory impairment. In: Alameda-Pineda X, Ricci E, Sebe N, editors. Multimodal Behavior Analysis in the Wild: Advances and Challenges. Cambridge, Massachusetts: Academic Press; 2019:135-158.
- Ellis D, Lee K. Minimal-impact audio-based personal archives. In: Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences. 2004 Presented at: MM04: 2004 12th Annual ACM International Conference on Multimedia; Oct 15, 2004; New York, USA. [CrossRef]
- Shaikh MA, Molla MK, Hirose K. Automatic life-logging: a novel approach to sense real-world activities by environmental sound cues and common sense. In: Proceedings of the 2008 11th International Conference on Computer and Information Technology. 2008 Presented at: 2008 11th International Conference on Computer and Information Technology; Dec 24-27, 2008; Khulna, Bangladesh. [CrossRef]
- Shah M, Mears B, Chakraborty C, Spanias A. Lifelogging: archival and retrieval of continuously recorded audio using wearable devices. In: Proceedings of the 2012 IEEE International Conference on Emerging Signal Processing Applications. 2012 Presented at: 2012 IEEE International Conference on Emerging Signal Processing Applications; Jan 12-14, 2012; Las Vegas, NV, USA. [CrossRef]
- Yamano K, Itou K. Browsing audio lifelog data using acoustic and location information. In: Proceedings of the 2009 Third International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies. 2009 Presented at: 2009 Third International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies; Oct 11-16, 2009; Sliema, Malta. [CrossRef]
- Ziaei A, Sangwan A, Hansen JH. Prof-life-log: Personal interaction analysis for naturalistic audio streams. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. 2013 Presented at: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing; May 26-31, 2013; Vancouver, BC, Canada. [CrossRef]
- Li D, Gu Y, Kamijo S. Smartphone based lifelog with meaningful place detection. In: Proceedings of the 2018 IEEE International Conference on Consumer Electronics (ICCE). 2018 Presented at: 2018 IEEE International Conference on Consumer Electronics (ICCE); Jan 12-14, 2018; Las Vegas, NV, USA. [CrossRef]
- Tanaka G, Okada M, Mineno H. GPS-based daily context recognition for lifelog generation using smartphone. Int J Advanced Comput Sci App 2015;6(2):104-112. [CrossRef]
- Aizawa K, Ishijima K, Shiina M. Summarizing wearable video. In: Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205). 2001 Presented at: Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205); Oct 7-10, 2001; Thessaloniki, Greece. [CrossRef]
- Hori T, Aizawa K. Context-based video retrieval system for the life-log applications. In: Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval. 2003 Presented at: Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval; Nov 7, 2003; Berkeley California. [CrossRef]
- Tancharoen D, Yamasaki T, Aizawa K. Practical experience recording and indexing of life log video. In: Proceedings of the 2nd ACM workshop on Continuous archival and retrieval of personal experiences. 2005 Presented at: ICMI05: Seventh International Conference on Multimodal Interfaces 2005; Nov 11, 2005; Hilton Singapore. [CrossRef]
- Doherty AR, Kelly P, Kerr J, Marshall S, Oliver M, Badland H, et al. Using wearable cameras to categorise type and context of accelerometer-identified episodes of physical activity. Int J Behav Nutr Phys Act 2013 Feb 13;10(1):22 [FREE Full text] [CrossRef] [Medline]
- Hurvitz PM, Moudon AV, Kang B, Saelens BE, Duncan GE. Emerging technologies for assessing physical activity behaviors in space and time. Front Public Health 2014;2:2. [CrossRef] [Medline]
- Yang P, Hanneghan M, Qi J, Deng Z, Dong F, Fan D. Improving the validity of lifelogging physical activity measures in an internet of things environment. In: Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing. 2015 Presented at: 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing; Oct 26-28, 2015; Liverpool, UK. [CrossRef]
- Yang P, Stankevicius D, Marozas V, Deng Z, Liu E, Lukosevicius A, et al. Lifelogging data validation model for internet of things enabled personalized healthcare. IEEE Trans Syst Man Cybern Syst 2018 Jan;48(1):50-64. [CrossRef]
- Dobbins C, Rawassizadeh R, Momeni E. Detecting physical activity within lifelogs towards preventing obesity and aiding ambient assisted living. Neurocomputing 2017 Mar;230:110-132. [CrossRef]
- Ni J, Chen B, Allinson NM, Ye X. A hybrid model for predicting human physical activity status from lifelogging data. Eur J Operational Res 2020 Mar;281(3):532-542. [CrossRef]
- Kim JW, Lim JH, Moon SM, Jang B. Collecting health lifelog data from smartwatch users in a privacy-preserving manner. IEEE Trans Consum Electron 2019 Aug;65(3):369-378. [CrossRef]
- Choi J, Choi C, Ko H, Kim P. Intelligent healthcare service using health lifelog analysis. J Med Syst 2016 Aug 28;40(8):188. [CrossRef] [Medline]
- Dobbins C, Fairclough S. Signal processing of multimodal mobile lifelogging data towards detecting stress in real-world driving. IEEE Trans Mobile Comput 2019 Mar 1;18(3):632-644. [CrossRef]
- Doherty A, Pauly-Takacs K, Caprani N, Gurrin C, Moulin CJ, O'Connor N, et al. Experiences of aiding autobiographical memory using the SenseCam. Human Comput Interact 2012;27(1-2):174. [CrossRef]
- Dobbins C, Fairclough S. Lifelogging technologies to detect negative emotions associated with cardiovascular disease. In: Applied Computing in Medicine and Health. Burlington, Massachusetts: Morgan Kaufmann; 2016:27-44.
- Lee H, Smeaton AF, O’Connor NE, Jones G, Blighe M, Byrne D, et al. Constructing a SenseCam visual diary as a media process. Multimedia Syst 2008 Jul 12;14(6):341-349. [CrossRef]
- Gibson G, Newton L, Pritchard G, Finch T, Brittain K, Robinson L. The provision of assistive technology products and services for people with dementia in the United Kingdom. Dementia (London) 2016 Jul 05;15(4):681-701. [CrossRef] [Medline]
- Sanders D, Scott P. Literature review: technological interventions and their impact on quality of life for people living with dementia. BMJ Health Care Inform 2020 Jan 15;27(1):e100064 [FREE Full text] [CrossRef] [Medline]
- Rodgers MM, Pai VM, Conroy RS. Recent advances in wearable sensors for health monitoring. IEEE Sensors J 2015 Jun;15(6):3119-3126. [CrossRef]
- Zheng Y, Ding X, Poon CC, Lo BP, Zhang H, Zhou X, et al. Unobtrusive sensing and wearable devices for health informatics. IEEE Trans Biomed Eng 2014 May;61(5):1538-1554. [CrossRef]
- Gemmell J, Sundaram H. CARPE '04: Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences. New York, NY, USA: Association for Computing Machinery; 2004.
- Gemmell J, Sundaram H. CARPE '05: Proceedings of the 2nd ACM workshop on Continuous archival and retrieval of personal experiences. New York, NY, USA: Association for Computing Machinery; 2005.
- Mase K. CARPE '06: Proceedings of the 3rd ACM workshop on Continuous archival and retrieval of personal experiences. New York, NY, USA: Association for Computing Machinery; 2006.
- Gemmell J, Williams L, Wood K, Lueder R, Bell G. Passive capture and ensuing issues for a personal lifetime store. In: Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences. 2004 Presented at: MM04: 2004 12th Annual ACM International Conference on Multimedia; Oct 15, 2004; New York, USA. [CrossRef]
- Aizawa K, Tancharoen D, Kawasaki S, Yamasaki T. Efficient retrieval of life log based on context and content. In: Proceedings of the 1st ACM workshop on Continuous archival and retrieval of personal experiences. 2004 Presented at: MM04: 2004 12th Annual ACM International Conference on Multimedia; Oct 15, 2004; New York, USA. [CrossRef]
- Gurrin C, Joho H, Hopfgartner F, Zhou L, Albatal R. NTCIR lifelog: the first test collection for lifelog research. In: Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval. 2016 Presented at: SIGIR '16: The 39th International ACM SIGIR conference on research and development in Information Retrieval; Jul 17 - 21, 2016; Pisa Italy. [CrossRef]
- Gurrin C, Joho H, Hopfgartner F, Zhou L, Gupta R, Albatal R, et al. Overview of NTCIR-13 Lifelog-2 task. In: Proceedings of the Thirteenth NTCIR Conference (NTCIR-13). 2017 Presented at: The Thirteenth NTCIR Conference (NTCIR-13); Dec 5-8, 2017; Tokyo, Japan.
- Gurrin C, Joho H, Hopfgartner F, Zhou L, Ninh HT, Le TK, et al. Overview of the NTCIR-14 Lifelog-3 task. In: Proceedings of the 14th NTCIR Conference on Evaluation of Information Access Technologies. 2019 Presented at: 14th NTCIR Conference on Evaluation of Information Access Technologies; Jun 10-13, 2019; Tokyo Japan URL: https://doras.dcu.ie/25867/
- Bolanos M, Dimiccoli M, Radeva P. Toward storytelling from visual lifelogging: an overview. IEEE Trans Human Mach Syst 2017;47(1):77-90. [CrossRef]
- Gurrin C, Joho H, Hopfgartner F, Zhou L, Albatal R. Overview of NTCIR-12 lifelog task. In: Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies. 2016 Presented at: Proceedings of the 12th NTCIR Conference on Evaluation of Information Access Technologies; Jun 7-10, 2016; Tokyo Japan URL: https://research.nii.ac.jp/ntcir/workshop/OnlineProceedings12/pdf/ntcir/OVERVIEW/01-NTCIR12-OV-LIFELOG-GurrinC.pdf [CrossRef]
- Gurrin C, Giro-i-Nieto X, Radeva P, Dimiccoli M, Johansen H, Joho H, et al. LTA 2016: the first workshop on lifelogging tools and applications. In: Proceedings of the 24th ACM international conference on Multimedia. 2016 Presented at: MM '16: ACM Multimedia Conference; Oct 15 - 19, 2016; Amsterdam The Netherlands. [CrossRef]
- Dang-Nguyen DT, Piras L, Riegler M, Boato G, Zhou L, Gurrin C. Overview of ImageCLEF lifelog 2017: lifelog retrieval and summarization. In: Proceedings of CLEF 2017. CEUR-WS; 2017.
- Dang-Nguyen DT, Piras L, Riegler M, Zhou L, Lux M, Gurrin C. Overview of ImageCLEFlifelog 2018: daily living understanding and lifelog moment retrieval. In: Proceedings of the Conference and Labs of the Evaluation Forum. 2018 Presented at: Conference and Labs of the Evaluation Forum; Sep 10-14, 2018; Avignon, France URL: http://ceur-ws.org
- Dang-Nguyen DT, Piras L, Riegler M, Zhou L, Lux M, Tran MT, et al. Overview of ImageCLEFlifelog 2019: solve my life puzzle and lifelog moment retrieval. In: Proceedings of CLEF 2019. 2019 Presented at: CLEF 2019; Sep 9-12, 2019; Lugano, Switzerland URL: http://ceur-ws.org/Vol-2380/paper_223.pdf
- Ninh VT, Le TK, Zhou L, Piras L, Riegler M, Halvorsen P, et al. Overview of ImageCLEF Lifelog 2020: lifelog moment retrieval and sport performance lifelog. In: Proceedings of CLEF 2020. 2020 Presented at: CLEF 2020; Sep 20-25, 2020; Thessaloniki, Greece URL: http://ceur-ws.org/Vol-2696/paper_65.pdf [CrossRef]
- Gurrin C, Schoeffmann K, Joho H, Dang-Nguyen DT, Riegler M, Piras L. LSC '18: Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge. New York, NY, USA: Association for Computing Machinery; 2018.
- Gurrin C, Schoeffmann K, Joho H, Dang-Nguyen DT, Riegler M, Piras L. LSC '19: Proceedings of the ACM Workshop on Lifelog Search Challenge. New York, NY, USA: Association for Computing Machinery; 2019.
- Gurrin C, Schoeffmann K, Jónsson B, Dang-Nguyen DT, Lokoč J, Tran MT, et al. LSC '20: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. New York, NY, USA: Association for Computing Machinery; 2020.
- Rawassizadeh R, Tomitsch M, Wac K, Tjoa AM. UbiqLog: a generic mobile phone-based life-log framework. Pers Ubiquit Comput 2012 Apr 3;17(4):621-637. [CrossRef]
- Bolaños M, Radeva P. Ego-object discovery. arXiv preprint arXiv:1504.01639, 2015.
- Bolaños M, Peris A, Casacuberta F, Soler S, Radeva P. Egocentric video description based on temporally-linked sequences. J Visual Commun Image Representation 2018 Jan;50:205-216. [CrossRef]
- Münzer B, Leibetseder A, Kletz S, Primus MJ, Schoeffmann K. lifeXplore at the lifelog search challenge 2018. In: Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge. 2018 Presented at: ICMR '18: International Conference on Multimedia Retrieval; Jun 11, 2018; Yokohama Japan. [CrossRef]
- Schoeffmann K, Primus MJ, Muenzer B, Petscharnig S, Karisch C, Qing X, et al. Collaborative feature maps for interactive video search. In: Amsaleg L, Guðmundsson G, Gurrin C, Jónsson B, Satoh S, editors. MultiMedia Modeling. MMM 2017. Lecture Notes in Computer Science, vol 10133. Cham: Springer; 2017:457-462.
- Primus MJ, Münzer B, Leibetseder A, Schoeffmann K. The ITEC collaborative video search system at the video browser showdown 2018. In: MultiMedia Modeling. MMM 2018. Lecture Notes in Computer Science, vol 10705. Cham: Springer; 2018:438-443.
- Leibetseder A, Münzer B, Primus MJ, Kletz S, Schoeffmann K. diveXplore 4.0: the ITEC deep interactive video exploration system at VBS 2020. In: MultiMedia Modeling. Berlin, Heidelberg: Springer-Verlag; 2020 Presented at: 26th International Conference on MultiMedia Modeling; Jan 5-8, 2020; Daejeon, South Korea. p. 753-759. [CrossRef]
- Leibetseder A, Münzer B, Primus MJ, Kletz S, Schoeffmann K, Berns F, et al. Lifexplore at the lifelog search challenge 2019. In: Proceedings of the ACM Workshop on Lifelog Search Challenge. 2019 Presented at: ICMR '19: International Conference on Multimedia Retrieval; Jun 10 - 13, 2019; Ottawa ON Canada. [CrossRef]
- Leibetseder A, Schoeffmann K. Lifexplore at the lifelog search challenge 2020. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Lokoč J, Souček T, Kovalčik G. Using an interactive video retrieval tool for lifelog data. In: Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge. 2018 Presented at: ICMR '18: International Conference on Multimedia Retrieval; Jun 11, 2018; Yokohama Japan. [CrossRef]
- Lokoč J, Kovalčík G, Souček T, Moravec J, Čech P. Viret: a video retrieval tool for interactive known-item search. In: Proceedings of the 2019 on International Conference on Multimedia Retrieval. 2019 Presented at: ICMR '19: International Conference on Multimedia Retrieval; Jun 10 - 13, 2019; Ottawa ON Canada. [CrossRef]
- Lokoč J, Souček T, Čech P, Kovalčík G. Enhanced VIRET tool for lifelog data. In: Proceedings of the ACM Workshop on Lifelog Search Challenge. 2019 Presented at: ICMR '19: International Conference on Multimedia Retrieval; Jun 10 - 13, 2019; Ottawa ON Canada. [CrossRef]
- Kovalčík G, Škrhak V, Souček T, Lokoč J. VIRET tool with advanced visual browsing and feedback. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Lokoč J, Kovalčík G, Souček T. Revisiting SIRET video retrieval tool. In: International Conference on Multimedia Modeling. Cham: Springer; 2018:419-424.
- Rossetto L, Gasser R, Heller S, Parian M, Schuldt H. Retrieval of structured and unstructured data with vitrivr. In: Proceedings of the ACM Workshop on Lifelog Search Challenge. 2019 Presented at: ICMR '19: International Conference on Multimedia Retrieval; Jun 10 - 13, 2019; Ottawa ON Canada. [CrossRef]
- Heller S, Parian M, Gasser R, Sauter L, Schuldt H. Interactive lifelog retrieval with vitrivr. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Rossetto L, Giangreco I, Tanase C, Schuldt H. Vitrivr: a flexible retrieval stack supporting multiple query modes for searching in multimedia collections. In: Proceedings of the 24th ACM international conference on Multimedia. 2016 Presented at: MM '16: ACM Multimedia Conference; Oct 15 - 19, 2016; Amsterdam The Netherlands. [CrossRef]
- Gasser R, Rossetto L, Schuldt H. Multimodal multimedia retrieval with vitrivr. In: Proceedings of the 2019 on International Conference on Multimedia Retrieval. 2019 Presented at: ICMR '19: International Conference on Multimedia Retrieval; Jun 10 - 13, 2019; Ottawa ON Canada. [CrossRef]
- Zhou L, Hinbarji Z, Dang-Nguyen DT, Gurrin C. Lifer: an interactive lifelog retrieval system. In: Proceedings of the 2018 ACM Workshop on The Lifelog Search Challenge. 2018 Presented at: ICMR '18: International Conference on Multimedia Retrieval; Jun 11, 2018; Yokohama Japan. [CrossRef]
- Ninh VT, Le TK, Zhou L, Piras L, Riegler M, Lux M, et al. LIFER 2.0: discovering personal lifelog insights using an interactive lifelog retrieval system. In: Proceedings of the CLEF 2019. 2019 Presented at: CLEF 2019; Sep 9-12, 2019; Lugano, Switzerland. [CrossRef]
- Le TK, Ninh VT, Zhou L, Nguyen-Ngoc MH, Trinh HD, Tran NH, et al. Organiser team at ImageCLEF Lifelog 2020: a baseline approach for moment retrieval and athlete performance prediction using lifelog data. In: Proceedings of the CLEF 2020. 2020 Presented at: CLEF 2020; Sep 22-25, 2020; Thessaloniki, Greece URL: http://ceur-ws.org/Vol-2696/paper_104.pdf [CrossRef]
- Ribeiro R, Silva J, Trifan A, Oliveira JL, Neves AJ. UA.PT Bioinformatics at ImageCLEF 2020: lifelog moment retrieval web based tool. In: Proceedings of the CLEF 2020. 2020 Presented at: CLEF 2020; Sep 22-25, 2020; Thessaloniki, Greece.
- Le TK, Ninh VT, Tran MT, Nguyen TA, Nguyen HD, Zhou L, et al. LifeSeeker 2.0: interactive lifelog search engine at LSC 2020. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Mai-Nguyen AV, Phan TD, Vo AK, Tran VL, Dao MS, Zettsu K. BIDAL-HCMUS@LSC2020: an interactive multimodal lifelog retrieval with query-to-sample attention-based search engine. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Tran MT, Nguyen TA, Tran QC, Tran MK, Nguyen K, Ninh VT, et al. FIRST - Flexible Interactive Retrieval SysTem for visual lifelog exploration at LSC 2020. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Rossetto L, Baumgartner M, Ashena N, Ruosch F, Pernischová R, Bernstein A. LifeGraph: a knowledge graph for lifelogs. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Khan OS, Larsen MD, Poulsen LA, Jónsson B, Zahálka J, Rudinac S, et al. Exquisitor at the lifelog search challenge 2020. In: Proceedings of the Third Annual Workshop on Lifelog Search Challenge. 2020 Presented at: ICMR '20: International Conference on Multimedia Retrieval; Jun 9, 2020; Dublin Ireland. [CrossRef]
- Karako K, Chen Y, Song P, Tang W. Super-aged society: constructing an integrated information platform of self-recording lifelogs and medical records to support health care in Japan. Biosci Trends 2019;13(3):276-278 [FREE Full text] [CrossRef] [Medline]
- Sugawara J, Ochi D, Yamashita R, Yamauchi T, Saigusa D, Wagata M, et al. Maternity Log study: a longitudinal lifelog monitoring and multiomics analysis for the early prediction of complicated pregnancy. BMJ Open 2019 Feb 19;9(2):e025939 [FREE Full text] [CrossRef] [Medline]
- Dobbins C, Merabti M, Fergus P, Llewellyn-Jones D, Bouhafs F. Exploiting linked data to create rich human digital memories. Comput Commun 2013 Sep;36(15-16):1639-1656. [CrossRef]
- Kim S, Yeom S, Kwon O, Shin D, Shin D. Ubiquitous healthcare system for analysis of chronic patients’ biological and lifelog data. IEEE Access 2018;6:8909-8915. [CrossRef]
- Jacquemard T, Novitzky P, O'Brolcháin F, Smeaton AF, Gordijn B. Challenges and opportunities of lifelog technologies: a literature review and critical analysis. Sci Eng Ethics 2014 Jun 9;20(2):379-409. [CrossRef] [Medline]
- Doherty AR, Hodges SE, King AC, Smeaton AF, Berry E, Moulin CJ, et al. Wearable cameras in health: the state of the art and future possibilities. Am J Prev Med 2013 Mar;44(3):320-323. [CrossRef] [Medline]
- Florez-Revuelta F, Mihailidis A, Ziefle M, Colonna L, Spinsante S. Privacy-aware and acceptable lifelogging services for older and frail people: The PAAL project. In: Proceedings of the 2018 IEEE 8th International Conference on Consumer Electronics - Berlin (ICCE-Berlin). 2018 Presented at: 2018 IEEE 8th International Conference on Consumer Electronics - Berlin (ICCE-Berlin); Sep 2-5, 2018; Berlin, Germany. [CrossRef]
- Sellen AJ, Whittaker S. Beyond total capture. Commun ACM 2010 May;53(5):70-77. [CrossRef]
- Dobbins C, Merabti M, Fergus P, Llewellyn-Jones D. Creating human digital memories with the aid of pervasive mobile devices. Pervasive Mobile Comput 2014 Jun;12:160-178. [CrossRef]
Abbreviations
EDUB: Egocentric Dataset of the University of Barcelona
NTCIR: National Institute of Informatics Testbeds and Community for Information Access Research
PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses
Edited by L Buis; submitted 18.05.21; peer-reviewed by A Hasan, X Dong; comments to author 13.08.21; revised version received 14.09.21; accepted 11.03.22; published 02.05.22
Copyright © Ricardo Ribeiro, Alina Trifan, António J R Neves. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 02.05.2022.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.