Stories are an effective way to make sense of our experiences in the world. We use them to explain our lived experiences and the reasons behind our actions. By sharing stories with others, we can entertain, educate, and transmit knowledge between generations and cultures. As we experience a story, we understand and see the world of the storyteller.
Advances in augmented and virtual reality are enabling new ways to share and experience stories. Immersive environments can bring participants to the center of the action. This has prompted researchers and practitioners to ask: what does the future of storytelling look like? At the forefront are questions about how interactivity, immersion, and artificial intelligence change the way we design and experience stories. Our research in this area over the coming years will address these questions by drawing upon expertise from a wide variety of disciplines in the sciences and humanities.
Current research suggests that VR can be used to elicit empathy, reduce implicit bias, and decrease prejudice. These findings open the possibility that well-designed immersive experiences can create a much stronger connection to a story than non-immersive media afford. But questions remain about who is most affected by immersive stories and to what degree.
Furthermore, as immersive stories become widespread and are coupled with large databases and biometric data, we must consider how to ensure that these experiences support positive outcomes. How might we ensure equity in discourse in VR? How can we design systems to support ethical standards, truth, and impartiality? Future research might also consider how to prevent dark patterns and psychological manipulation.
Image: Under the Net (08:28): Tanzania, March 2017 – UN Foundation; sponsored by Samsung, Sumitomo Chemical, the Ariadne Getty Foundation, and Parachute Home, with support from Discovery VR and Google VR. Learn more on the “Nothing But Nets” campaign website.
The workshop will be held online and is open to anyone interested in the topic.
We expect to involve researchers, students, and practitioners from a variety of fields, including:
The workshop will also feature talks and discussions with award-winning storytellers, who will share their unique insights into the production of compelling immersive experiences.
Celine Tricart is an acclaimed storyteller who has developed a unique and recognizable style involving highly emotional stories and strong visual artistry. Her work has been showcased in numerous Academy Award-qualifying festivals, including Sundance, Venice, Tribeca, SXSW, Hot Docs, and more. Celine is the recipient of a Lion for Best VR Immersive Work at the Venice Film Festival, a Storyscapes Award at Tribeca, two Lumiere Awards from the Advanced Imaging Society, two Telly Awards, and a Platinum Aurora Award, amongst many other accolades.
Celine co-directed and produced Maria Bello’s “Sun Ladies,” a VR documentary about the Yazidi women fighting ISIS in Iraq, which premiered at Sundance. In 2019, Celine premiered “The Key,” an interactive experience mixing immersive theater and VR, which won the Storyscapes Award at Tribeca and, shortly after, the prestigious Grand Jury Prize for Best VR Immersive Work at the Venice Film Festival. Celine founded Lucid Dreams Productions, a production company specializing in new technologies and bold, empowered, and unapologetic storytelling.
Ken Fountain is a talented animator whose experience ranges from the smallest commercial houses to the largest animation studios in the world. In 2007 he joined the DreamWorks Animation crew in Glendale, California, and was given the privilege of contributing animated performances to many worldwide blockbusters, including “Monsters vs. Aliens”, “Shrek 4”, “Kung Fu Panda 2”, “Megamind”, and “Puss in Boots”, as well as parts of the “How to Train Your Dragon” franchise.
Since leaving DreamWorks, Ken has continued working with major studios and directors as an independent artist. Most recently, he has had the pleasure of animating for “The Peanuts Movie” from Blue Sky Studios (2015); 2016’s critically acclaimed, Oscar-nominated Google Spotlight Story “Pearl”, directed by Oscar-winning director Patrick Osborne (“Feast”); and Baobab Studios’ “Crow: The Legend” and “Bonfire”, the 2019 and 2020 Annie Award winners (respectively) for “Best Virtual Reality Production”. He was previously the Animation Supervisor at Baobab Studios, and has been teaching advanced animation performance to students since 2010. Ken is currently an Animation Supervisor at DNEG. For more information, visit: Splatfrog.com
Rebecca Rouse, PhD is a Senior Lecturer in Media Arts, Aesthetics and Narration in the School of Informatics at the University of Skövde, Sweden. Rouse’s research focuses on theoretical, critical, and design production work with storytelling for new technologies, such as augmented and mixed reality. Rouse designs and develops projects across museums, cultural heritage sites, interactive installations, and theatrical performance, all with the thread of investigating and inventing new modes of storytelling. This design work dovetails with Rouse’s research in design methods, media theory, and the history of technology. For more information visit www.rebeccarouse.com.
Jeremy Bailenson is founding director of Stanford University’s Virtual Human Interaction Lab, Thomas More Storke Professor in the Department of Communication, Professor (by courtesy) of Education, Professor (by courtesy) Program in Symbolic Systems, a Senior Fellow at the Woods Institute for the Environment, and a Faculty Leader at Stanford’s Center for Longevity. Bailenson studies the psychology of Virtual and Augmented Reality, in particular how virtual experiences lead to changes in perceptions of self and others. His lab builds and studies systems that allow people to meet in virtual space, and explores the changes in the nature of social interaction. His most recent research focuses on how virtual experiences can transform education, environmental conservation, empathy, and health.
Stephen G. Ware is an assistant professor at the University of Kentucky, where he directs the Narrative Intelligence Lab and teaches courses on artificial intelligence and game development. He studies computational interactive narrative techniques for virtual worlds such as video games, training simulations, and tutoring systems. His work focuses on strong-story challenges: balancing the player’s agency against the designer’s specific constraints on the narrative’s content. Most of his contributions have focused on narrative planning algorithms that can anticipate many possible futures for a story based on computational models of important features such as character beliefs, character intentionality, and audience perception.
All three reading group sessions are on Fridays, at either 10 am or 1 pm, held on Zoom. These are the regular meeting times of the Social Informatics and Immersive Experiences research groups at Virginia Tech, and the sessions will be held jointly.
If you are registered for the workshop, Zoom links and paper information will be emailed to you for the Reading Group sessions.
March 12 (10 am): Book chapter by Rebecca Rouse, from Part 1, The Body in the XR Community: “Against the Instrumentalization of Empathy: Immersive technologies and social change”. (Discussion to be led by Justin Perkinson)
March 26 (1 pm): Paper – Cummings, J.J., & Bailenson, J.N. (2016). How immersive is enough? A meta-analysis of the effect of immersive technology on user presence. Media Psychology, 19(2), 272-309. doi:10.1080/15213269.2015.1015740 (Discussion to be led by Jimmy Ivory)
April 2 (10 am): Paper – Stephen G. Ware, Edward T. Garcia, Alireza Shirvani, & Rachelyn Farrell. Multi-agent narrative experience management as story graph pruning. In Proceedings of the 15th AAAI International Conference on Artificial Intelligence and Interactive Digital Entertainment, pp. 87-93, 2019. (Discussion to be led by Denis Gracanin)
Other Papers from the keynote speakers:
Paper – Farrell, Rachelyn, Stephen G. Ware, and Lewis J. Baker. “Manipulating Narrative Salience in Interactive Stories Using Indexter’s Pairwise Event Salience Hypothesis.” IEEE Transactions on Games (2019). Discussion to be led by Wallace Lages.
Paper – Rouse, R. “Someone Else’s Story: An Ethical Approach to Interactive Narrative Design for Cultural Heritage.” Interactive Storytelling: Lecture Notes in Computer Science, Springer International. *Best Paper Award Nominee (2019). Discussion to be led by Mike Horning.
Paper – Herrera, F., Bailenson, J.N., Weisz, E., Ogle, E., & Zaki, J. “Building long-term empathy: A large-scale comparison of traditional and virtual reality perspective-taking.” PLoS ONE 13(10): e0204494 (2018). Discussion to be led by Doug Bowman.
Discussion sessions on immersive storytelling experiences:
February 14, 2020: “Bonfire,” by Baobab Studios. Led by Wallace Lages.
February 28, 2020: Celine Tricart’s “The Sun Ladies” (2017). Led by Justin Perkinson.
| PST (San Francisco) |  |  | APRIL 15th | APRIL 16th |
|---|---|---|---|---|
| 7:00 AM | 4:00 PM | 10:00 AM | Opening remarks | Opening |
| 7:15 AM | 4:15 PM | 10:15 AM | KEYNOTE – Rebecca Rouse | KEYNOTE – Stephen Ware |
| 8:00 AM | 5:00 PM | 11:00 AM | DISCUSSION SESSION 1 – The ethics of immersive storytelling | DISCUSSION SESSION 3 – Beyond the Linear Narrative |
| 8:45 AM | 5:45 PM | 11:45 AM | Mini Talk – Myounghoon Jeon | Mini Talk – Ico Bukvic |
| 9:00 AM | 6:00 PM | 12:00 PM | Long Break (1 h) | Long Break (1 h) |
| 10:00 AM | 7:00 PM | 1:00 PM | ARTIST TALK – Ken Fountain | ARTIST TALK – Celine Tricart |
| 11:00 AM | 8:00 PM | 2:00 PM | DISCUSSION SESSION 2 – Bringing Characters to Life | DISCUSSION SESSION 4 – Telling and Retelling Stories |
| 11:30 AM | 8:30 PM | 2:30 PM | Mini Talk – Denis Gracanin | Mini Talk – L. Zhang and N. Shokhov |
| 11:45 AM | 8:45 PM | 2:45 PM | Short Break (15 min) | Short Break (15 min) |
| 11:50 AM | 8:50 PM | 2:50 PM | VIDEO DEMOS | VIDEO DEMOS |
| 12:00 PM | 9:00 PM | 3:00 PM | KEYNOTE – Jeremy Bailenson | FINAL DISCUSSION |
| 12:45 PM | 9:45 PM | 3:45 PM | Q & A | Closing remarks |
| 1:00 PM | 10:00 PM | 4:00 PM | Networking & Poster Session | Networking & Poster Session |
Immersive media have been praised as ‘technologies for good’ in recent years in light of claims about their capacity to elicit empathy from users and even effect social change. These claims assume a shared imaginary of how social change works in relation to a particular technological ontology. This keynote examines how emerging immersive technologies such as Augmented Reality (AR) and Virtual Reality (VR) are participating within this imaginary, particularly as extensions of a fantasy of technological efficiency. This examination leads to suggestions for how we might reframe the role of AR and VR in movements for social change. A set of principles is shared for slowing down and complicating the design process, with the aim of producing more impactful, just, and transformational engagements between designer, technology, and society.
With the rapid increase of interest in robotics and AI, robotic and AI art is also becoming more popular. Robot theater is an integrated art platform, encompassing theater play, singing, dance, and many more forms of art. After briefly introducing our iterative robot-theater programs for STEAM (STEM + art and design) education, the present talk analyzes the roles of the robots and the stories that students created in their live theater productions, and their implications. The talk will also describe the technologies we have been developing to implement the robot-theater programs and performances. Finally, it will highlight the evolution of our robot-theater production from the STEAM education programs into professional theater plays in the Cube, an immersive environment at Virginia Tech, along with visionary plans for the stories we want to create via robot-theater performances. This talk is expected to motivate the audience to think about how introducing new robot actors and technologies to theater production can enrich the performing arts and inspire new stories.
Characters are an essential part of any strong story. Although live action can be used in certain experiences, digital characters allow the creation of stories without the constraints associated with actors, costumes, or performance capture. Ken Fountain, a veteran of feature film and immersive media, will illustrate the specific challenges and exciting benefits of animating emotive character performances for interactive virtual reality experiences. He will talk about his process and how he handles the integration between animation and the other aspects of this unique medium.
A smart built environment (SBE) is a physical space enriched with smart objects that work continuously to make residents’ lives more comfortable. Extended Reality (XR) applications can take advantage of SBE data and orchestrated interactions to improve quality of experience and interaction, as well as to reduce spatial, functional, and cognitive seams. The synergy of XR and SBE results in a Smart Immersive Environment (SIE) paradigm that provides an infrastructure for context-aware interactions and user interfaces augmented with contextualized SBE data visualizations. SBE data, especially physiological data, can provide insight into the user’s mental state and support cognitive, emotional, and compassionate empathy in an SIE. An empathy-enabled SIE provides a foundation for immersive storytelling that uses empathy information to evaluate and improve the user experience.
Dr. Bailenson will introduce the DICE framework for understanding virtual reality experiences and will then hold an extended Q & A session with the audience, moderated by Prof. Doug Bowman, director of the Center for Human-Computer Interaction at Virginia Tech.
Telling interactive stories, even in clearly defined domains, is challenging because of the huge space of possibilities. We want players to have freedom and agency, but authors usually have aesthetic and educational constraints of their own. AI systems that reason about characters and narrative structure can help, but these systems are still in their early days. In this talk, I’ll discuss the past, present, and future of interactive storytelling algorithms by highlighting some stories that went wrong in entertaining ways but provided insight nonetheless.
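One way to picture the tension between player agency and authorial constraints is the story-graph pruning idea from Ware et al.'s reading-group paper: possible story states form a graph, and an experience manager removes branches that can only violate the author's constraints while keeping as many player choices open as possible. The sketch below is purely illustrative (the toy story, state names, and constraint are hypothetical, not taken from the paper):

```python
# Illustrative sketch of experience management as story-graph pruning.
# Nodes are story states; edges are actions leading to successor states.
# A state is "safe" if it does not violate the authorial constraint and
# at least one continuation (or a terminal ending) remains safe.

def prune(graph, violates):
    """Return a subgraph containing only safe states and safe transitions."""
    def safe(node, seen):
        if violates(node):
            return False
        if node in seen:            # cycle: already under consideration
            return True
        if not graph.get(node):     # terminal state: an acceptable ending
            return True
        return any(safe(nxt, seen | {node}) for nxt in graph[node])

    return {
        node: [s for s in succs if safe(s, frozenset())]
        for node, succs in graph.items()
        if safe(node, frozenset())
    }

# Hypothetical toy story: the author requires that the hero never die.
story = {
    "start": ["meet_mentor", "fight_dragon_early"],
    "meet_mentor": ["fight_dragon_late"],
    "fight_dragon_early": ["hero_dies"],
    "fight_dragon_late": [],
    "hero_dies": [],
}
pruned = prune(story, lambda s: s == "hero_dies")
# The early dragon fight is pruned: all of its continuations violate
# the constraint. The player's remaining choices are left intact.
```

A real experience manager must reason about far more (character beliefs, intentions, anticipated futures), but the pruning framing captures the core trade-off: cut only what must be cut, and preserve the rest of the player's agency.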
Cinemacraft is a real-time immersive machinima storytelling platform based on the ubiquitous Minecraft video game. It extends Minecraft to incorporate full-body motion and facial expressions using Kinect HD, and further integrates voice recognition and digital signal processing using the pd-l2ork visual programming environment to animate mouth movement and enable rapid prototyping without recompiling the Java code. The project further builds upon OPERAcraft, offering critical affordances necessary for real-time machinima production, including multiple camera angles, seamless scene changes, subtitles, performer cues, and the ability for the audience to observe the drama both within and outside the game.
Creator Celine Tricart describes the making of the VR experience “The Key,” which won the Storyscapes Award at Tribeca 2019 and the Grand Jury Prize for Best Immersive Work at the 2019 Venice Film Festival. From the first brainstorming session to the release on the Oculus Store, the session details the whole workflow and lists mistakes and best practices for VR creators. It is recommended to watch “The Key” prior to the session; it is available for free on the Oculus Store for both Oculus Rift and Oculus Quest headsets.
Lei Zhang will talk about his Immunology VR project and his PhD work on designing interactivity and storytelling elements in immersive educational virtual reality experiences to promote learning of complex scientific concepts. Nikita Shokhov, Yuan Li, and Feiyu Lu will talk about Orientation Device, a project they initiated as a team of cisgender artists and computer scientists with the aim of giving space for marginalized queer voices to express themselves to a wide audience through the pioneering mediums of scene-geometry AR and volumetric filmmaking.
This is the fifth workshop organized by the Center for Human-Computer Interaction at Virginia Tech. Previous workshops are: Algorithms that Make You Think (2019), Designing Socio-Technical Systems of Truth (2018), Technology on the Trail (2017), and the inaugural workshop, What Comes after HCI: People, Systems and Information (2016).
Dr. Lages’ research focuses on augmented and virtual reality, with intersections with computer graphics, robotics, digital games, interactive art, and design. His recent research investigates glanceable AR interfaces, machine agency, active haptics in VR, and the use of live-action techniques in immersive storytelling. He directs the Reality Design Studio.
He is interested in how technology innovations impact the news industry. His current work focuses on studying how virtual and augmented reality can be used in news reporting and how storytelling changes in these media environments.
Principal investigator of the 3D Interaction Group, his research focuses on three-dimensional user interface design and the benefits of immersion in virtual environments. Dr. Bowman is a co-author of 3D User Interfaces: Theory and Practice. He is an ACM Distinguished Scientist and received the Technical Achievement Award from the IEEE Visualization and Graphics Technical Committee in 2014.
Associate Director of the Center for Human-Computer Interaction
Dr. Kavanaugh’s interests lie in the area of social computing, specifically communication behavior and effects, communication systems and institutions, urban informatics, and digital government.
He is primarily interested in understanding outdoor communities on trails, reducing the effect digital technology has on the user experience outdoors, and designing systems utilizing citizen science methodologies.
She is interested in the connection between creativity, play, and democratic education. Her research interests include game-based learning in both classroom and informal settings, and the experiences of social studies teachers surrounding creativity.
Tabitha Hartman, Dept. of Computer Science
Teresa Hall, Dept. of Computer Science
Holly Williams, Institute of Creativity, Arts, and Technology
Melissa Wyers, Institute of Creativity, Arts, and Technology