Dagstuhl, a remote place amid the greenery of Germany, is a well-known destination in the Informatics community. The renowned Dagstuhl seminars provide a platform for researchers to interact in person and to openly discuss results, ideas, sketches and pending challenges. In the week from August 31st to September 5th of this year, 28 researchers from various disciplines came together to discuss the topic of “Augmenting the Human Memory — Capture in the Era of Lifelogging”. The seminar, organized by Mark Billinghurst, Nigel Davies, Marc Langheinrich and Albrecht Schmidt, explored how technology can fundamentally change the way we interact with human memory. This included a focus on trends that are currently reshaping research on capture and playback technologies, privacy and society, as well as existing theories of memory.
Three core aspects of memory capture and recall technology were looked at:
- Collection: What is the best mix of technologies for capturing relevant human experiences in order to improve human memory? How can we create a novel class of capture systems that specifically support human memory functions while offering fine-grained control over what they record and fully respecting personal privacy?
- Presentation: What are appropriate tools and methods for integrating, correlating, and visualizing captured sensor data and other information sources into coherent “memory prosthetics” streams? Such streams will be based on theoretical principles of human memory organization, in order to positively influence the acquisition, retention and attenuation of knowledge from personal experiences.
- Theory: How can we use these new systems to validate psychological theories of human memory, in particular with respect to the feasibility of targeted attenuation of unwanted memories?
The seminar included short talks by its participants to kick off the discussions, a few of which are outlined below:
Collection
Simon Dennis from the University of Newcastle opened the first round of discussions with the question “What happens inside people’s brains when doing memory tasks?”. He reported findings from experiments in which he collected cell-phone data from participants to identify context patterns and assessed people’s ability to recall certain situations in and out of context.
Christos Efstratiou talked about sensing people. In his work on capturing and potentially sharing human activity, he and his colleagues built a system that combined smartphones and sensor data to capture social interactions. By capturing conversations through smartphones and providing a web interface where people could record information about their behavior, participants were able to compare their behavior with that of others: Who spends the most time at their desk? Who is the most social? Findings from his studies include the insight that people enjoy “stalking” others, but also that they wanted to perform self-assessments to understand their lifestyles in comparison with others.
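To make the idea concrete, here is a minimal sketch, assuming a simple event log, of how sensed activity data could be aggregated into the kind of per-person comparisons described above. It is not Efstratiou's actual system; all event data, names and function signatures are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical sensing events: (person, activity, start, end) tuples, as they
# might be logged by phone-based desk-presence or conversation sensing.
events = [
    ("alice", "at_desk", datetime(2014, 9, 1, 9, 0), datetime(2014, 9, 1, 12, 0)),
    ("bob",   "at_desk", datetime(2014, 9, 1, 9, 30), datetime(2014, 9, 1, 11, 0)),
    ("alice", "conversation", datetime(2014, 9, 1, 12, 0), datetime(2014, 9, 1, 12, 45)),
    ("bob",   "conversation", datetime(2014, 9, 1, 13, 0), datetime(2014, 9, 1, 14, 30)),
]

def hours_per_person(events, activity):
    """Sum up, per person, the hours spent on a given activity."""
    totals = defaultdict(float)
    for person, act, start, end in events:
        if act == activity:
            totals[person] += (end - start).total_seconds() / 3600.0
    return dict(totals)

def leaderboard(events, activity):
    """Rank people by time spent on the activity, e.g. the 'who is most social?' view."""
    return sorted(hours_per_person(events, activity).items(),
                  key=lambda kv: kv[1], reverse=True)

print(leaderboard(events, "at_desk"))       # [('alice', 3.0), ('bob', 1.5)]
print(leaderboard(events, "conversation"))  # [('bob', 1.5), ('alice', 0.75)]
```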
Ozan Cakmakci, one of the engineers on Google Glass, introduced seminar participants to the world of optics and the technical principles of head-worn displays. When creating optical architectures, certain parameters need to be taken into account, such as eyebox, field of view, resolution, distortion and brightness. He gave a brief outlook on designing for holography, in which holograms are created through light interference patterns, and on free-space-based approaches, in which the glasses look like bug eyes.
Presentation
Aurelien Tabard focused on the visualization of activities and how visualizations can help reinstate episodic memory. His purely visual approach is meant to avoid activity recognition and specification.
Daniela Petrelli presented her work on the design of digital mementos: objects that manifest personal memories. She reported on a number of studies she conducted between 2007 and 2011 that examined how people could look back at a lifetime of digital belongings and what user interfaces for this task should look like.
Vicki Hanson from the Rochester Institute of Technology reported on ongoing research in care homes whose residents generally have trouble communicating. Her team is building a tool that helps the care staff learn more about the histories of the residents by allowing the staff to listen to stories about the residents they care for. These stories are compiled either by the residents’ families or by articulate residents.
Wendy E. Mackay from Université Paris-Sud presented Video Clipper, a semi-structured video capture technology. The software was born out of the realization that analyzing video once it has been captured is non-trivial. The core of her idea: record less and tag as you go. Video Clipper allows users to inject “title cards” into the stream of video that is being recorded, which serve to label the recording. If users know what they plan to record, they can create title cards in advance, which can then be used to label logical episodes as they occur. If users do not know in advance what is coming, they can insert blank title cards and edit them just before, or even after, they shoot the video. The key is to store human-readable labels with the captured video episodes, so that users can later scan the videos and find what they are looking for.
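To illustrate the concept, here is a minimal sketch of a title-card data model and the episode segmentation it enables. This is an assumption-laden illustration of the idea, not Video Clipper's actual implementation; all names and signatures are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TitleCard:
    """A marker dropped into the recording timeline, labeling the episode that starts there."""
    time_s: float          # position in the recording, in seconds
    label: Optional[str]   # None for a blank card to be named later

def episodes(cards: List[TitleCard], video_length_s: float) -> List[Tuple[str, float, float]]:
    """Turn title cards into labeled (label, start, end) episodes."""
    cards = sorted(cards, key=lambda c: c.time_s)
    result = []
    for i, card in enumerate(cards):
        end = cards[i + 1].time_s if i + 1 < len(cards) else video_length_s
        result.append((card.label or "(untitled)", card.time_s, end))
    return result

def find(cards: List[TitleCard], video_length_s: float, query: str):
    """Scan the human-readable labels to locate the episodes a user is after."""
    return [ep for ep in episodes(cards, video_length_s) if query.lower() in ep[0].lower()]

cards = [TitleCard(0.0, "Welcome"), TitleCard(95.0, None), TitleCard(310.0, "Demo: prototype")]
cards[1].label = "Interview with participant"   # a blank card named after shooting
print(find(cards, 600.0, "demo"))               # [('Demo: prototype', 310.0, 600.0)]
```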
Katrin Wolf from the University of Stuttgart discussed the properties and perspectives of images and cameras that affect what types of images are captured. For example, images taken from a head-level perspective have a different effect when reviewed than images taken from the neck or chest. Wolf further investigates how image information can be visualized and how image navigation can be designed: besides timelines or location-based navigation, there are more elaborate options such as 3D layouts, circles, and spiral timelines.
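As one hypothetical illustration of a spiral timeline layout, the sketch below maps a photo's timestamp to a point whose angle encodes the time of day and whose radius grows with each passing day. The parameter names and constants are assumptions for illustration, not taken from Wolf's work.

```python
import math
from datetime import datetime

def spiral_position(taken: datetime, origin: datetime,
                    base_radius: float = 50.0, radius_per_day: float = 20.0):
    """Map a timestamp to (x, y) on a day-per-turn spiral timeline."""
    elapsed_days = (taken - origin).total_seconds() / 86400.0
    angle = 2 * math.pi * (elapsed_days % 1.0)          # position within the day
    radius = base_radius + radius_per_day * elapsed_days  # one extra ring per day
    return (radius * math.cos(angle), radius * math.sin(angle))

origin = datetime(2014, 8, 31)
for ts in [datetime(2014, 8, 31, 6), datetime(2014, 8, 31, 18), datetime(2014, 9, 2, 6)]:
    print(ts, spiral_position(ts, origin))
```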
Theory
Geoff Ward and Simon Dennis introduced participants to theories and practices of memory. Episodes were defined as events in context; hence, events with the same cues can still be perceived differently, which leads to the conclusion that recalling an event is a function of both time and context. Human memory is fallible and subject to retroactive and proactive interference. Ward further elaborated on the effectiveness of cues as well as encoding and retrieval effects.
Michel Beaudouin-Lafon challenged the seminar’s theme by posing the question: “Augmenting memory — a means to what end?”. Research in memory augmentation should not be conducted just because we can, but because “we cannot”: because human memory is limited, and because the goal is not merely to make things easier or to look at more pictures, but to let us do things we could not do before. Memory helps us avoid mistakes, learn, and transmit skills. A lot of the work in this area is about memorizing for the sake of memorizing. But what can be done with augmented memory? Once we go beyond recording events “just because we can”, we can arrive at a better understanding of how to create new ways of externalizing experiences so that they can be shared with others and so that we can learn from them, both individually and collectively, much as written language is a far more powerful externalization of human speech than a mere audio recording.
The seminar further included a series of break-out sessions to foster more in-depth discussion around three central topics:
- Visualization
- Applications of Lifelogging
- Social Impact
Results and protocols of the seminar will soon be published in the form of a Dagstuhl report.