Helen Papagiannis: Augmented Memories, Digital and Analog Realities

This paper was presented by Helen Papagiannis at HASTAC's Electronic Techtonics conference in 2006.

This is available in printed form as part of the Conference Proceedings Book at Lulu.

Streaming audio for this panel is available here

My current work in Augmented Reality (AR)[2] explores integrating AR markers with lenticular lenses. My intent is to create tactile objects that can store and display multiple moving AR images, combining analog and digital modes of memory. I have always been mesmerized by the technology embedded in lenticulars and by their ability to contain and reveal multiple images with a slight shift of the hand. I have recently created an AR marker contained within a lenticular lens that presents two separate marker patterns. Each pattern reveals a different moving AR image when the lenticular object is slightly tilted. The end result is a layered, futuristic moving image, one which comes to exist via an analog mode of animation.

I have been experimenting with various applications for lenticular-based AR, one of which explores the ability to display memories over time, from past to present, combining archival footage with contemporary moving images. This technique may be used to show growth over time, or various stages of one's life and memories. A recent lenticular AR prototype I created first displays a black-and-white film clip of two children playing and shyly kissing each other on the cheek; the second marker reveals a video clip of the two children, now grown up, playfully behaving in the same manner as they once did in the previous moving image. The lenticular-based AR markers may thus be used to display a before and after of sorts. The viewer can flip between the two moving images in the same hand-held object, mid-clip, alternating between the two and crossing over time with a slight hand gesture. Another prototype demonstrates the ability to change the direction of the moving image, between forward and reverse, when the hand-held lenticular object is slightly shifted.
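The tilt-to-switch behavior described above can be modeled as two clips sharing a single playhead: tilting the lenticular object reveals the other marker, and playback continues from the same point in the other clip. The following is a minimal illustrative sketch, not the software used in these prototypes; all class and clip names are hypothetical.

```python
class LenticularPlayer:
    """Toy model of tilt-switching between two AR clips (hypothetical names).

    Both clips share one playhead, so tilting the lenticular object
    mid-clip resumes the other clip at the same moment in time.
    """

    def __init__(self, clip_a, clip_b):
        self.clips = {"A": clip_a, "B": clip_b}
        self.active = "A"    # marker currently revealed by the lens
        self.position = 0    # shared playhead (frame index)

    def tilt(self):
        # The physical tilt reveals the other marker pattern;
        # the software then swaps which clip is overlaid.
        self.active = "B" if self.active == "A" else "A"

    def next_frame(self):
        frames = self.clips[self.active]
        frame = frames[self.position % len(frames)]
        self.position += 1
        return frame


# Usage: short lists of frame labels stand in for video frames.
player = LenticularPlayer(
    clip_a=["past-0", "past-1", "past-2"],
    clip_b=["now-0", "now-1", "now-2"],
)
print(player.next_frame())  # past-0
print(player.next_frame())  # past-1
player.tilt()               # viewer tilts the object mid-clip
print(player.next_frame())  # now-2 (same playhead, other clip)
```

The shared playhead is the key design point: it is what lets the viewer cross over time mid-clip rather than restarting the second moving image from its beginning.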

I am particularly interested in the dual memory of the physical object and the virtual imagery in lenticular-based AR. Although the augmented image is stored digitally within the software and activated upon recognition of the AR marker by the computer, the lenticular lens also contains an analog memory system that stores and reveals the two different markers through a physical tilting gesture. Each technology, AR and lenticular, presents an architecture that serves as a memory container, with the final image only coming into full view upon activation by the user. The completed images otherwise remain hidden from the viewer: without the software, the AR digital image appears to the human eye as just a square marker, and the lenticular analog image remains a single static still, unanimated. Although the AR image output relies on the software to translate and produce it, the AR markers are initiated by the viewer's physical maneuvering of the lenticular lens. This same gesturing is used to navigate between the final imagery, back and forth between the AR moving images. The analog and digital methods must work together and coexist to produce lenticular-based AR. My current and future work looks to combine these two methods, utilizing both to create a final output where the digital and analog coalesce.

My work in AR began with a series of memory albums and paper-based objects which presented digital video footage from my travels. My interest in creating these works was due in part to a desire to capture live moments from my sojourns that went beyond still photographs, moments that could temporarily transport me back to these foreign locales to relive those instants. These moving images helped evoke and recollect my memories by rearticulating a past vision of a particular location: once again seeing how the waves crashed, how the wind blew, how my body moved in a space to which I no longer have physical access. I found that unlike the digital photographs I took and would eventually print and place in an album, these moving images (in MPEG format) most often remained archived on disc or on my computer, never to be experienced again. I desired to create a tactile object in which I could hold and view these live moments again, alongside my still photographs, offering an opportunity to move through the still images, extending into and beyond their virtual viewing space.

I created a series of small hand-held AR objects, including a palm-sized memory album, a set of paper slides cased in a petite box, and a travelogue which, alongside video clips, included actual objects from my journeys as well as hand-written stories accompanying each clip. None of the moving images I chose to include featured people; they were all pans of landscapes of the sites I visited. I viewed this as an opportunity to document the physical places I visited, as a form of souvenir that would allow me to visually revisit (virtually) and enter that space again via a moving image that captured my field of vision in a horizontal pan. Without other people in the footage, the result was an intimate, uninterrupted space, as though that particular moment was for me alone, undisturbed by anyone else, a private memory between that place and me. My works further exhibit a level of intimacy in their miniature scale; most of my projects fit in the palm of the viewer's hand.


(QuickTime videos of my work discussed above may be viewed at: http://www.aliceglass.com/research.html).

[2] Augmented Reality (AR) is the convergence of the real and the virtual, often consisting of computer graphics overlaid onto a physical environment and interactive in real time. The form of AR technology I am presently working with is based upon a series of black-and-white square markers. A web camera is used to capture images of the real world, which are then sent to a computer. Software on the computer searches the live video stream for the various square markers. Once the software has recognized an AR marker, the marker is replaced with the corresponding video file to create the final output, which is overlaid onto reality.
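The recognition loop in this footnote can be sketched in outline. The function and marker names below are hypothetical stand-ins, since the paper does not name a specific toolkit; marker-based AR libraries of this period (such as ARToolKit) work along these broad lines, with the actual pattern-matching replaced here by a stub.

```python
# Minimal sketch of the marker-recognition loop described above.
# detect_marker() stands in for the square-marker pattern matching a
# real AR toolkit performs on each camera frame; here it is a stub.

MARKER_VIDEOS = {                      # hypothetical marker-id -> clip mapping
    "marker_children": "children_clip.mov",
    "marker_grownups": "grownups_clip.mov",
}

def detect_marker(frame):
    """Stand-in for square-marker recognition in a live video frame.

    A real implementation thresholds the frame, locates square black
    borders, and matches each interior pattern against known markers.
    Here we simply pretend the frame is labelled with its marker.
    """
    return frame.get("visible_marker")

def augment(frame):
    """Replace a recognized marker with its video, overlaid onto reality."""
    marker = detect_marker(frame)
    if marker in MARKER_VIDEOS:
        return {"background": frame, "overlay": MARKER_VIDEOS[marker]}
    return {"background": frame, "overlay": None}  # no marker: show reality as-is

# One iteration of the webcam loop: tilting the lenticular lens changes
# which marker the camera sees, and therefore which clip is overlaid.
out = augment({"visible_marker": "marker_children"})
print(out["overlay"])  # children_clip.mov
```

In a running system this function would be called on every frame of the live video stream, so the overlay tracks the marker continuously rather than being computed once.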