Museums and the Web 2005
Papers


The PECA Code: Codifying Pedagogy in 3D Virtual Instructors

Jayfus Tucker Doswell, George Mason University, USA

Abstract

The Pedagogical Embodied Conversational Agent (PECA) is an 'artificially intelligent', animated 3D computer graphic character that teaches from computer-simulated environments and interacts naturally with human end-users. What distinguishes a PECA from the traditional virtual instructor or pedagogical agent is the PECA's knowledge of empirically evaluated pedagogical techniques and learning theories, which it uses to enhance human learning performance. It combines this 'art and science' of instruction with knowledge of domain facts, culture, and the individual learner in order to (1) facilitate a more personalized learning experience and (2) improve its own instructional capabilities. However, as challenging as engineering a realistically behaving 3D character in a computer-simulated environment is the task of enhancing it with defined pedagogical rules and knowledge of human learning capabilities across cultures. Hence, the key to the successful use of PECAs as an instructional tool is the ability to integrate codified instructional techniques, knowledge of how humans learn, and information about the learner interacting with the PECA. This is especially true in museum environments, where visitors vary in culture, socio-economic background, language, and physical capabilities. The task for the PECA, then, is to accommodate these differences and to present the same museum content to a multicultural, multilingual population while giving consideration to individual learning styles and perception. This paper presents the state of PECA research, a software architectural approach for addressing key challenges researchers face in integrating pedagogical and learning-theory knowledge into PECAs, and an examination of the actual and potential use of PECAs in museum environments. The PECA project is a research initiative conducted by Jayfus Doswell, Ph.D. candidate at George Mason University in the USA.

Keywords: Virtual Instructor, Pedagogical Agent, Conversational agent, Virtual reality, Edutainment, Learning Technology, E-Learning

Introduction

Pedagogical Embodied Conversational Agents

Pedagogy is commonly defined as the science, art, theory, and practice of teaching. A Pedagogical Embodied Conversational Agent (PECA) is a pedagogical agent, or virtual instructor, that consistently applies rules of pedagogy to provide personalized human learning experiences using empirically evaluated and tested instructional techniques. These instructional techniques, combining the art and science of teaching (i.e., pedagogy), are exemplified by three-dimensional (3D) animated characters that intelligently consider multiple variables for improving and potentially accelerating the human learning process. These variables include, but are not limited to, learning styles, human emotion, culture, gender, and instructional techniques (e.g., pedagogy and, for adult learners, andragogy). Additionally, these 3D animated characters are designed to behave autonomously in mixed reality (e.g., virtual reality, augmented reality) environments, respond to human verbal and non-verbal input across distributed and wireless computer networks, and interact naturally with human learners using context-aware intelligence across varying cultures. Combining advances in technology with instructional techniques, the ultimate goal for PECAs is to improve and accelerate human learning performance anytime, anywhere, and at any pace. To do so, however, a PECA, virtual instructor, or pedagogical agent must combine the strengths of 'master' instructors who possess expertise in specific knowledge domains with the pedagogical capabilities required to guide learners effectively through complex concepts and tasks and to clarify misunderstandings, while at the same time becoming intimately involved in understanding both the learner and the knowledge being learned.

The new PECA research is derived from Embodied Conversational Agent (ECA) research (Cassell, J. et al., 1999). ECAs are defined as autonomous 3D simulated characters that use their bodies in lifelike or believable fashion in order to communicate naturally with end-users. Researchers have defined ECAs as computer interfaces that provide conversational behaviours and, specifically, are humanlike in the way they use their bodies in conversation (Cassell, J., Bickmore, T., 2000). These non-verbal behaviours are what distinguish ECAs from more traditional dialogue systems. ECA systems are also called virtual humans, virtual characters, interactive perceptive actors, and humanoids, among other names that suggest the computational perceptual abilities and natural conversational skills they exhibit, which separate them from other types of human-computer interfaces. Typically, ECAs perceive objects in their environment through virtual sensors (e.g., visual, tactile, and auditory sensors). Based on the perceived information, an ECA's cognitive structure determines the actions it will perform. For example, ECAs may be programmed to grow and evolve in a virtual reality (VR) environment and interact with human users based on a level of cognitive growth that parallels the growth process of human beings (e.g., from age 8 to 18). ECAs are virtual characters able to plan and execute tasks based on a model of the current state of their computer-simulated environment. Hence, these autonomous actors (i.e., those that do not require human intervention) should react to their environments and make decisions based on their perception systems, memories, and reasoning abilities.
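The perceive-decide-act cycle described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the class and method names, percept channels, and reaction strings are all invented for this example and are not from any actual ECA implementation.

```python
# Hypothetical sketch of an ECA's perceive-decide-act cycle.
# All names here are illustrative, not from the PECA project.
from dataclasses import dataclass, field

@dataclass
class Percept:
    channel: str   # e.g. "visual", "auditory", "tactile"
    data: str

@dataclass
class ECA:
    memory: list = field(default_factory=list)

    def perceive(self, environment):
        # Virtual sensors sample the simulated environment.
        return [Percept(channel, data) for channel, data in environment.items()]

    def decide(self, percepts):
        # A real ECA would consult a cognitive model; here we simply
        # store percepts in memory and pick a reaction per channel.
        self.memory.extend(percepts)
        actions = []
        for p in percepts:
            if p.channel == "auditory":
                actions.append(f"respond-verbally:{p.data}")
            elif p.channel == "visual":
                actions.append(f"orient-gaze:{p.data}")
        return actions

    def act(self, actions):
        return actions  # would drive the 3D animation layer

agent = ECA()
env = {"visual": "user-waves", "auditory": "hello"}
performed = agent.act(agent.decide(agent.perceive(env)))
```

Even in this toy form, the loop shows the key property of an autonomous actor: decisions follow from the agent's own sensors and memory rather than from direct user control.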

The motivation for utilizing ECAs comes from increasing computational capacity in many settings outside the desktop computer (e.g., smart rooms, intelligent toys), in environments as diverse as a military battlefield or a children's museum. For a more natural human-computer interface to evolve, interaction can no longer depend on non-intuitive and anti-social devices like the keyboard and mouse; a more natural way of interacting with a computer is desired. Autonomous 3D graphical agents that interact with humans in real time show promise for more natural human-computer interaction. Leading researchers in the field describe ECAs as having the same properties as humans in face-to-face conversation, including:

  • the ability to recognize and respond to verbal and non-verbal input
  • the ability to generate verbal and non-verbal output
  • the ability to deal with conversational functions such as turn taking, feedback, and repair mechanisms
  • the ability to give signals that indicate the state of the conversation, as well as to contribute new propositions to the discourse (Cassell, J. et al., 1999).

ECA systems draw on multidisciplinary research areas such as computational linguistics, multi-modal interfaces, 3D computer graphics, and intelligent agents, which has led to increasingly sophisticated autonomous or semi-autonomous agents over the last six years. ECA implementations are composed of several real-time, multithreaded software components that together provide a realistic multi-modal interface for human-computer interaction. These interoperable software components may combine multi-modal input processors, multilingual dialogue generators, non-verbal dialogue generators, natural language processors, face recognition systems, gesture recognition systems, knowledge-base/expert systems, and real-time 3D graphics processors/generators, among other components.

Mixed Reality Environments

The simulated environments PECAs inhabit, and from which they provide instruction, also deserve discussion. Typically, virtual instructors and pedagogical agents have occupied virtual reality environments (Cassell, 2000; Norma, 2000; Rickel, 2000; Lester, 2000). However, to support various environmental contexts and the mobile learner, augmented reality, volumetric displays, hand-held displays, and holographic environments should be considered and evaluated during research. This raises an additional challenge: how should PECAs or virtual instructors be designed and developed for mixed reality environments? The term mixed reality covers a range of virtual reality (VR)-type systems: researchers have defined a continuum from real to virtual environments in which VR and augmented reality (AR) are parts of the general area now considered mixed reality (Milgram). Figure 1 illustrates this mixed reality continuum. In augmented reality, digital objects are added to the real environment; in augmented virtuality, real objects are added to virtual ones; and in virtual environments (or virtual reality), the surrounding environment is completely virtual.

Diagram: Milgram's reality-virtuality continuum

Fig 1: Milgram's reality-virtuality continuum.

The advantage of PECAs that can operate in mixed reality environments is that they may use the simulated environment best suited to igniting learner motivation, reinforcing concepts, guiding learners through new and complex tasks, clarifying ambiguity, and providing a more enhanced and natural e-learning environment. Hence, the PECA becomes flexible and extensible. For example, if a learner wanted to learn about a museum's historical architecture, the PECA might be loaded in an augmented reality environment to supplement tour guides, tailoring the experience to the learner's personal interests and learning strengths and thereby augmenting the tour guide's capabilities by personalizing the learning experience for the individual. This can be very difficult for a human to achieve when dealing with a large and diverse population.

Pedagogy and Andragogy

As discussed earlier, pedagogy is defined as the 'art and science' of teaching. Pedagogy includes formal pedagogical knowledge and vernacular pedagogical knowledge (McNamara, 1991). Formal, or scientific, pedagogy is the more theoretical form, with principles defined through systematic and empirical research; it is more abstract and more general than vernacular, or practical, pedagogy. Practical pedagogy has a strong experiential basis, including "the contingent and idiosyncratic aspects of teaching and learning" (Moore, 2000) that no textbook theory can fully detail or predict. Pedagogical methods vary among cultures, curricula, subject matter, and time periods, and they will continue to evolve as we understand more about instruction and human learning. One way to ensure that learning experiences are effective is to have human instructors continually monitor student activities and provide feedback. However, this places heavy demands upon instructors' time and may limit the amount of access that students have to the learning environment.

The functions of pedagogical research are twofold: to generate new knowledge about teaching and learning (i.e., findings that are fundamentally non-linear) and to enable educators "to understand, explain, defend, justify, and, where necessary, modify their pedagogy" (Dixon et al., 2001). Advances in scientific pedagogy come from universities, research institutes, and government departments; from individual researchers; and from teachers' practical experience of, and reflection on, effective pedagogy. Scientific (i.e., more theoretical) and practical pedagogy are linked in countless ways, and the bridges between them are increasing through research into areas such as meta-cognition. This is evident in the instructional behavior of many teachers who, time permitting, are as much researchers and theorists as they are practitioners. Relevant to the situation in schools, a Bearing Point consulting report on knowledge management (1998) identified the key "knowledge problems", and thus the teaching problems, as:

  • information overload.
  • lack of time to share knowledge.
  • not using technology to share knowledge effectively.
  • the difficulty of capturing tacit knowledge.
  • reinventing the wheel.

Moreover, education researchers divide pedagogy, along with learning, into the following foci:

  1. Teaching is defined as the techniques and methods (and the theories informing them) by which educators transmit subject/content knowledge, stimulate and supervise students' independent work, and facilitate students' development by means of outcome-oriented activities. Assessment may be summative or formative; formative assessment assists students with revising and improving the quality of their thinking and understanding. Ultimately, teachers make a major difference by ensuring that students' learning outpaces any perceived level of current achievement.
  2. Learning may be defined as the processes by which students develop independence and initiative in acquiring and developing knowledge and skills (such as investigation, critical thinking, communication, teamwork, organization, and problem-solving). Over time, quality teaching may lead to students' higher-level thinking and deep understanding; knowledge of their own learning processes, or metacognition; the ability to transfer what they have learned to new situations; and a general capacity for lifelong and life-wide learning (Bransford, J. et al., 1999).

These foci may be structured and decomposed, without losing their inherent value, into a 'codified' set of pedagogical rules expressed naturally by the PECA. Thus, pedagogical methods may be categorized into multi-sensory instruction, personality-based instruction, and multicultural instruction, among others. Additionally, sub-fields of pedagogy may be formalized into computational rules. These rules may evolve in ever-increasing detail to address individual facets of instruction, including research and practice relating to cognitive neuroscience, developmental psychology, deep learning, meta-cognition, sociology, cross-cultural studies, learning outcomes, educational governance and management, learning communities, and curriculum content.
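One simple way to picture such codified pedagogy is as a set of condition-action rules over a learner profile. The sketch below is purely illustrative: the rule conditions, strategy names, and learner attributes are invented for this example and do not come from the PECA project itself.

```python
# Illustrative sketch of pedagogy codified as condition-action rules.
# Rule conditions, strategy names, and learner attributes are invented.

PEDAGOGY_RULES = [
    # (condition over a learner profile, instructional strategy)
    (lambda learner: learner["style"] == "visual",      "present-diagram"),
    (lambda learner: learner["style"] == "auditory",    "narrate-concept"),
    (lambda learner: learner["style"] == "kinesthetic", "launch-simulation"),
    (lambda learner: learner["age"] >= 18,              "relate-to-experience"),  # andragogy
]

def select_strategies(learner):
    """Fire every rule whose condition matches the learner profile."""
    return [strategy for cond, strategy in PEDAGOGY_RULES if cond(learner)]

# An adult visual learner triggers both a visual rule and an andragogy rule.
adult_visual = {"style": "visual", "age": 34}
strategies = select_strategies(adult_visual)
```

The point of the sketch is that once pedagogical knowledge is expressed in this declarative form, rules can be added, refined, or specialized (e.g., per culture or subject domain) without rewriting the agent that applies them.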

Similar to pedagogy is andragogy, which tailors instruction to adult learners. Andragogy, the art and science of helping adults learn (Knowles, M. S., 1973), is based on four crucial assumptions about the characteristics of adult learners that differ from the assumptions made about child learners. It states that as persons mature:

  • Their self concept moves from being a dependent personality toward one of being a self-directing human being.
  • They accumulate a growing reservoir of experience that becomes an increasing resource for learning.
  • Their readiness to learn becomes oriented increasingly to the developmental tasks of their social roles.
  • Their time perspective changes from one of postponed application of knowledge to immediacy of application.

A discussion of pedagogy would be incomplete without examining learning styles. Even though research disputes whether distinct learning styles actually exist, some of the literature reports as many as 12 or 13 types of intelligence and approximately 48 different learning styles. However, most learning researchers would agree that there are three primary learning styles, discussed as follows:

  • Visual Learners prefer seeing what they are learning. Pictures and images help them understand ideas and information better than explanations do; a drawing may help them more than a discussion of the same material. When someone explains something to visual learners, they may create mental pictures of what the speaker describes, and they may find it helpful to watch the person speaking as well as listen to what is said. Children with visual-perceptual strengths learn best by looking: demonstrations at the blackboard, diagrams, graphs, and charts are all valuable tools for them. However, children with weak visual-perceptual skills have difficulty interpreting what they see consistently and without distortion. Because they perceive differently, they will not learn well by studying visual examples, and the farther the teacher is from these children when giving a demonstration, the greater the probability that they will have difficulty grasping the concept.
  • Auditory Learners prefer spoken messages. The less-understood type of auditory learner needs to hear his or her own voice to process information. The more prevalent type, listeners, most likely do well in school; out of school, too, they remember things said to them and make the information their own. They may even carry on mental dialogues and determine how to continue by thinking back on the words of others. Conversely, those who need to talk it out often find themselves talking to those around them. In a class setting, when the instructor is not asking questions, auditory-verbal processors (i.e., talkers) tend to mutter comments to themselves. They are not trying to be disruptive and may not even realize they need to talk. Some researchers go so far as to call these learners 'interactives.'
  • Kinesthetic Learners want to sense the position and movement of what they are working on. Tactile learners want to touch. "Enough talking and looking," they may say. "Let's work with this stuff. Let's get our hands dirty already." Even if kinesthetic or tactile learners don't get much from the discussion or the written materials, they may catch up and exceed the lesson plan by working through scenarios and labs. Often, they don't thrive in traditional schools because most classrooms don't offer enough opportunity to move or touch. Most assessments group kinesthetic and tactile styles together, though they mean different things. Their similarity is that both types perceive information through nerve ends in the skin, as well as through muscles, tendons, and joints.

Mixed reality environments, as research has repeatedly shown, are conducive to satisfying these various learning styles. However, even when learners have all the material they need, they still require guidance on how to use that material to crystallize ambiguous concepts. This applies from the formal classroom experience to the informal museum experience, where the student or visitor may not grasp a concept. Therefore, the value-added service of a PECA is to supplement these mixed reality environments with expert instructional guidance, along with consideration of learning strengths, culture, and other variables that enhance and expedite the learning process.

PECA Related Research

A few true pedagogical agents and virtual instructors have been empirically investigated as interactive learning applications for environments such as immersive storytelling systems, computer games, and Web-based virtual worlds (Nakamura, 1999). This research includes the role of emotion and motivational state in action selection, learning, and adaptation; directability (the integration of external control with autonomous behavior); synthetic perception; and models of motor control for expressive movement (Blumberg, 1997). Studies suggest that animated pedagogical agents can be pedagogically effective because students perceive these types of interfaces as being extremely helpful, credible, and entertaining (Badler, 1997; Plantec, 2004).

Several research studies have examined the use of virtual instructors or pedagogical agents. An animated computer graphic character, Cosmo, is one example of a pedagogical agent that interacts with students as they learn about network routing mechanisms by navigating through a series of subnets. Helpful, encouraging, and with a bit of attitude, Cosmo explains how computers are connected, how routing is performed, which types of networks have which physical characteristics, how the Internet addressing scheme works, and how network outages and traffic considerations come into play (James, 1997). Another pedagogical agent, Jack, uses virtual human modeling and control with an emphasis on autonomous action, gesture, attention, and locomotion. Jack consists of a sense-control-act structure that permits reactive behaviors to be locally adaptive to the environment, and a PaT-Net parallel finite-state machine controller that can be used to drive virtual humans through complex tasks. In this architecture, a pedagogical agent, Jack Presenter, demonstrates the feasibility of controlling pointing gestures, attention, body motion, and speech through a uniform interface processed by PaT-Nets in a 3D environment. Other researchers have designed a pedagogical agent, Steve, that can travel about a virtual reality ship, guide students to virtual equipment, and then use gaze and deictic gestures during a verbal lesson about the actual equipment. The agent's objective is to help students learn to perform physical, procedural tasks, such as operating and maintaining equipment. This pedagogical agent interacts with human end-users by handling verbal interruption and, consequently, provides verbal and non-verbal feedback (i.e., in the form of 3D nods and headshakes).
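The finite-state control underlying agents like Jack and Steve can be illustrated with a toy example. The sketch below reduces the idea to a single event-driven state machine; the state names, events, and actions are invented for illustration, and the real PaT-Net system is a far richer parallel network of such machines.

```python
# Toy finite-state controller in the spirit of the PaT-Net idea described
# above: behavior is expressed as states with event-triggered transitions.
# This is an illustrative reduction, not the actual PaT-Net system.

class BehaviorNet:
    def __init__(self, start, transitions):
        self.state = start
        # transitions: {(state, event): (next_state, action)}
        self.transitions = transitions

    def fire(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            return None  # event not handled in this state
        self.state, action = self.transitions[key]
        return action

# A hypothetical lesson-delivery net for a Steve-like virtual instructor.
net = BehaviorNet("idle", {
    ("idle", "student-arrives"):    ("guiding", "walk-to-equipment"),
    ("guiding", "at-equipment"):    ("lecturing", "point-and-speak"),
    ("lecturing", "interruption"):  ("listening", "pause-and-nod"),
    ("listening", "question-done"): ("lecturing", "answer-and-resume"),
})

actions = [net.fire(e) for e in
           ["student-arrives", "at-equipment", "interruption", "question-done"]]
```

Note how the interruption-handling behavior attributed to Steve maps naturally onto a transition out of the lecturing state and back again once the student's question is finished.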
Even though there is discussion about how anthropomorphic 3D characters provide instruction, very little 3D animated agent research incorporates computational components that imbue agents with empirically derived knowledge of pedagogy and learning styles, so that they can manifest better instructional skills with the goal of improving learning performance.

One research study suggests that the key to pedagogy is to teach how to choose and carry out the right strategy for a given situation and to maintain realistic beliefs in one's own efficacy (Marsella, S.C., 2000). In this study, the researchers' Bright IDEAS 'pedagogy' is applied in an interactive storytelling experience to teach mothers of pediatric cancer patients problem-solving skills for coping with their children's illness. The Bright IDEAS pedagogy applies cognitive appraisal models of human emotion, tracking the mother's ego-involvement, expectancy, accountability, and coping potential; in the study, mothers interact with autonomous agents during problem scenarios. However, as in many other research projects on pedagogical agents and virtual instructors, very little pedagogy, andragogy, or learning-style research is formalized into rules expressed by these anthropomorphic characters. The process of decomposing known rules into a computational format is likely a challenging endeavor requiring extensive research.

PECA Product Line

To address these limitations in existing pedagogical agent and virtual instructor systems, the PECA Product Line architecture (formerly referred to as the Joint Embodied Pedagogical Agent Architecture, or JEPAA) was created to provide an interoperable computer system/software framework that facilitates building pervasive PECAs. The PECA Product Line uses kernel (i.e., required), optional, and alternative (i.e., selections from a mutually exclusive set) features for generating unique pervasive PECAs. Using this architecture, PECAs may operate in various pedagogical roles as guides, storytellers, or lecturers. Some of the features supported by the PECA Product Line are as follows:

  • Software component interoperability
  • Codified Pedagogy and Learning rules
  • Knowledge domain (e.g., academic) Independence
  • Autonomous 3D character animation
  • Modular, scalable, and extensible system/software components
Required Features
1. A PECA's 3D character behaves autonomously and does not rely on user intervention.
2. A PECA may operate in one or more networked virtual environments.
3. Domain knowledge is distributed and thus separated from the PECA's 3D character model representation.
4. A PECA may recognize and respond to verbal and non-verbal input.
5. A PECA may communicate with end users using verbal and gesture output.
6. A PECA may deal with conversational functions such as turn taking, feedback, and repair mechanisms.
7. A PECA may recognize and act on objects in mixed reality environments.
8. A PECA may derive conversational rules and respond to questions from a base of distributed knowledge.
9. A PECA may autonomously update its own base of pedagogical and domain knowledge.
10. A PECA may adapt to an end user's learning style during instruction.
11. A PECA may apply proven pedagogical techniques while instructing human end-users.

Table 1: PECA Product Line Required Features

For each PECA system generated by the PECA Product Line, there are required features/functionality listed in Table 1.

Optional Features
1. A PECA is capable of communicating with human end-users in multiple languages.
2. A PECA is capable of communicating with human end-users using sign language.
3. A PECA is able to recognize an end user's facial expressions and hand gestures.

Table 2: PECA Product Line Optional Features

In addition to these kernel requirements, the PECA Product Line incorporates optional requirements, which are not required by all target PECA systems but may be selected for inclusion in a particular target system. Some optional requirements are listed in Table 2.
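The kernel/optional/alternative feature selection described above can be sketched as a small configuration function. This is a hypothetical illustration: the feature names and the single "environment" alternative group are invented for the example, not taken from the actual PECA Product Line specification.

```python
# Hypothetical sketch of product-line feature selection: kernel features
# are always included, optional features are chosen freely, and each
# alternative group is a mutually exclusive choice. Feature names invented.
KERNEL = {"autonomous-animation", "verbal-io", "pedagogy-rules"}
OPTIONAL = {"multilingual", "sign-language", "face-recognition"}
ALTERNATIVES = {"environment": {"virtual-reality", "augmented-reality", "holographic"}}

def configure_peca(optional=(), **alternative_choices):
    """Assemble the feature set of one target PECA system."""
    features = set(KERNEL)  # kernel features are always present
    for f in optional:
        if f not in OPTIONAL:
            raise ValueError(f"unknown optional feature: {f}")
        features.add(f)
    for group, choice in alternative_choices.items():
        if choice not in ALTERNATIVES[group]:
            raise ValueError(f"{choice!r} not a valid {group} alternative")
        features.add(choice)
    return features

# e.g. a multilingual museum guide operating in augmented reality
museum_guide = configure_peca(optional=["multilingual"],
                              environment="augmented-reality")
```

The design point this illustrates is that a single product line can stamp out many distinct target systems (a multilingual AR guide, a signing VR storyteller, and so on) while guaranteeing every one of them carries the kernel features.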


Once a target PECA system is created, it may be incorporated in mixed reality environments including, but not limited to, virtual reality, augmented reality, and holographic environments, as well as other current and future digitally simulated environments.

Diagram: The PECA Product Line

Fig 2: The PECA Product Line High Level Architecture

Figure 2 illustrates a high-level architectural view of a target PECA system after it has been generated from the PECA Product Line. The PECA Product Line defines multiple software tiers used to instantiate target PECA systems: the PECA Client, PECA Cognition Server, and PECA Knowledge Server tiers. Each of these tiers is considered a kernel subsystem.

The PECA Client tier is a subsystem that typically exists at the client end (e.g., a museum or the museum's Web server) to achieve more real-time, continuous human-computer interaction. This tier contains components for achieving real-time 3D character animation and provides control objects to coordinate an end-user's interaction with a PECA. Interface objects at this tier quickly manage the unpredictable flow of input from the external input devices (e.g., camera, microphone) with which users interact with the PECA, and they also manage the flow of information to all external output devices (e.g., visual display, speakers). Once the interface objects at the PECA Client tier collect input data, control objects identify it and pass it to other objects (mostly application-logic objects) located at the PECA Cognition Server tier for interpretation.

The PECA Cognition Server tier is the PECA's digital cognitive processing unit: a concurrent server subsystem that exposes an Application Programming Interface (API) through which various PECA services are called. In this version of the PECA Product Line, these services include a natural-language conversational dialogue service, an instruction service, a learning-management service, a PECA configuration service, and data services that interface with the PECA Knowledge Server, among other services that may be added in subsequent versions of the PECA Product Line and, thus, target PECA systems. The PECA Cognition Server isolates internal components from calling objects/subsystems, primarily PECA Clients. It is also the primary place for reusable components that may be added, updated, removed, or replaced without affecting operation of a target PECA system. The advantage of the PECA Cognition Server is that it supports a flexible configuration, serves the needs of simultaneously operating PECA Clients, scales to include additional functionality (without affecting existing clients), and supports interoperability with external systems.

The PECA Knowledge Server is designed as a subsystem to store and retrieve data. This subsystem is decoupled from both the PECA Client and PECA Cognition Server tiers and serves as a centralized, independent, and abstract repository for a set of relational databases, knowledge bases, file systems, storage area networks (SANs), and other data repositories. The API component of the PECA Knowledge subsystem provides a high-level interface that a target PECA Cognition subsystem calls to retrieve and store data. For example, PECA Cognition Server components may call the PECA Knowledge tier to obtain user data from a relational database and insert rules into a specific knowledge base; they may also retrieve a PECA model from the file store, update metadata for conversational behavior data, or store real-time interaction data in a SAN. Another advantage of the PECA Knowledge Server is that it allows data repositories to be added, removed, replaced, or modified without requiring changes to the components that call it. Hence, software components located at the PECA Cognition Server tier may preserve their 'calling' interfaces.
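The three-tier call flow can be sketched as follows. This is a deliberately minimal, hypothetical reduction: the class names, method signatures, and the topic-lookup logic are invented for illustration, whereas the real tiers are distributed, concurrent subsystems communicating over a network.

```python
# Minimal sketch of the three-tier call flow: the Client tier gathers user
# input, the Cognition Server interprets it through a stable API, and the
# Knowledge Server answers data requests behind its own interface.
# All names and logic here are illustrative, not the actual PECA APIs.

class KnowledgeServer:
    def __init__(self):
        # Stands in for relational databases, knowledge bases, file stores...
        self._store = {"airmail": "US airmail service began in 1918."}

    def retrieve(self, topic):
        # Stable storage API: backends can change without touching callers.
        return self._store.get(topic, "No data on that topic.")

class CognitionServer:
    def __init__(self, knowledge):
        self._knowledge = knowledge

    def interpret(self, utterance):
        # Stable cognition API: a crude "last word is the topic" heuristic
        # stands in for real natural language processing.
        topic = utterance.lower().split()[-1].strip("?")
        fact = self._knowledge.retrieve(topic)
        return {"speech": fact, "gesture": "nod"}

class Client:
    def __init__(self, cognition):
        self._cognition = cognition

    def on_user_input(self, utterance):
        # Interface objects collect input; control objects pass it upward,
        # then render the returned speech and gesture for the end-user.
        response = self._cognition.interpret(utterance)
        return f"[{response['gesture']}] {response['speech']}"

client = Client(CognitionServer(KnowledgeServer()))
reply = client.on_user_input("Tell me about airmail")
```

Because each tier is reached only through its interface, a data repository in the Knowledge Server (or a service in the Cognition Server) can be swapped out without the Client noticing, which is exactly the decoupling property the tiers are designed to provide.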

PECA Museum Integration

The PECA Product Line is being used to generate PECA systems for operation in museum environments. In these environments, visitors are able to interact with PECAs to learn about museum artifacts and the history of the museum, and to receive tailored instruction on content within a particular museum domain. Various knowledge constraints are therefore placed on the PECA so that it may effectively serve the information needs of visitors by leveraging a focused base of knowledge. Research was conducted on PECA integration for two U.S.-based museums: the National Postal Museum (NPM) in Washington, DC, and The National Great Blacks In Wax Museum in Baltimore, MD. The National Postal Museum is a Smithsonian Institution museum, created by an agreement between the Smithsonian Institution and the United States Postal Service in 1990 and opened to the public in 1993; as such, the NPM serves visitors from around the world. The NPM contains a national collection of US artifacts ranging from US postal history to three-dimensional objects that trace the evolution of the postal services. The National Great Blacks In Wax Museum contains full life-sized wax figures of African and African American people who have made significant contributions to the world. The museum is internationally renowned and serves the entire nation, particularly Maryland, Washington, D.C., Virginia, Pennsylvania, Delaware, New York, New Jersey, and North Carolina, as well as visitors from other states throughout the United States. International visitors have hailed from countries such as Canada, France, England, Japan, and Israel, as well as from several African nations.

Specifically, research was conducted to design PECAs from the PECA Product Line that complement the museum exhibits while providing instruction that considers the variables discussed earlier in this paper. Extensive multidisciplinary research was conducted to develop PECAs for computer-simulated environments inside each museum and for complementary on-line museum environments. For both museums, the PECA plays the role of an animated pedagogical storyteller; hence, PECAs were developed from the PECA Product Line to exemplify a 3D animated pedagogical storyteller with all of its underlying software components.

For the Smithsonian's National Postal Museum, a PECA was researched and designed to operate within a computer video game, helping children learn about the US airmail period: the airmail pilots, their important missions, the surrounding environments and conditions, and the political landscape of the era. As in many video games, PECAs were designed to operate as Non-Playable Characters (NPCs), but with the intelligence to interact with end users through external sensor devices such as a digital camera and microphone. A complementary offline PECA was also examined, with components selected from the PECA Product Line to operate within the actual NPM for upcoming exhibits.

The purpose of the video game is to improve learning performance on airmail history. The game was designed to provide a 3D virtual reality environment of the airmail era that players may explore, solving puzzles and overcoming obstacles at each level. Challenges were constrained to realistic depictions of situations that would have occurred during a specific airmail time period (e.g., 1918 to 1930). Figures 3a and 3b illustrate 3D plane objects designed for use in the airmail video game.


Fig 3a & 3b: 3D plane objects for airmail video game


Fig 4a and 4b: A Benjamin Banneker PECA

Various PECAs are being designed and developed to complement The National Great Blacks In Wax Museum experience within on-line learning environments as pervasive storytellers. Benjamin Banneker is one figure that has been transformed from a wax figure into a 3D-animated computer character. Figures 4a and 4b illustrate head shots of a young Benjamin Banneker PECA. The PECA Cognition and PECA Knowledge servers discussed earlier are used to control Banneker's natural interaction with museum content and to personalize the story of his life for visitors.
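To illustrate how such a cognition/knowledge split can drive personalization, the minimal Java sketch below shows one way a cognition step might select story segments from a knowledge store based on a visitor's interests. All class and method names here (StorySelector, selectSegments) are invented for illustration and are not taken from the PECA codebase; the Banneker facts used as sample content are historical.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of the Cognition/Knowledge division of labor.
// The map stands in for the PECA Knowledge server (theme -> story segment);
// the selection method stands in for the Cognition server's matching step.
public class StorySelector {

    static final Map<String, String> SEGMENTS = Map.of(
        "astronomy", "Banneker taught himself astronomy and accurately predicted a solar eclipse.",
        "surveying", "Banneker assisted in surveying the boundaries of Washington, DC.",
        "almanacs",  "Banneker published a series of popular almanacs in the 1790s."
    );

    // Return only the story segments whose themes match the visitor's interests.
    public static List<String> selectSegments(List<String> visitorInterests) {
        return visitorInterests.stream()
                .filter(SEGMENTS::containsKey)
                .map(SEGMENTS::get)
                .collect(Collectors.toList());
    }
}
```

A call such as `StorySelector.selectSegments(List.of("astronomy"))` would yield only the eclipse segment, so the same underlying knowledge base can tell a different story to each visitor.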

For both systems, a PECA was designed to communicate through a natural language recognition subsystem, recognize human gestures using artificially intelligent image-processing algorithms, and apply pedagogical techniques during instruction within a story context. In the prototype system, real-time animation components were developed using a combination of C++ and OpenGL. The Artificial Intelligence Markup Language (AIML) served as the knowledge-base shell for storing academic and conversational knowledge. Intel's OpenCV library was integrated for human gesture recognition, and the Java Speech API (JSAPI) was used for speech recognition. Additionally, custom software components were written in Java to adapt pedagogical instruction to a human end-user's learning style.
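As a concrete illustration of the AIML knowledge-base shell mentioned above, a single AIML "category" pairs a recognized input pattern with a response template. The pattern and wording below are invented for illustration rather than drawn from the actual PECA knowledge base, but the structure follows the AIML 1.0 format:

```xml
<aiml version="1.0">
  <!-- One category pairs an input pattern with a template response;
       academic and conversational knowledge is stored as many such categories. -->
  <category>
    <pattern>WHO WAS BENJAMIN BANNEKER</pattern>
    <template>
      Benjamin Banneker was a self-taught astronomer, surveyor, and
      almanac author, born in Maryland in 1731.
    </template>
  </category>
</aiml>
```

An AIML interpreter matches normalized user input against the patterns and returns the corresponding template, which the PECA can then deliver through its speech and animation subsystems.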

Conclusion and Future Direction

PECAs designed for museum environments offer an innovative learning interface for delivering effective instruction in informal learning settings. The prototypes developed demonstrate that a PECA can be integrated seamlessly into mixed reality museum environments, augmenting and extending museum interactions into on-line and personalized learning experiences. Research is continuing to advance the PECA's interaction capabilities, conversational dialogue, and instructional expertise toward more natural interactions in both on-line and offline museum experiences.


Cite as:

Doswell, J., The PECA Code: Codifying Pedagogy in 3D Virtual Instructors, in J. Trant and D. Bearman (eds.). Museums and the Web 2005: Proceedings, Toronto: Archives & Museum Informatics, published March 31, 2005 at http://www.archimuse.com/mw2005/papers/doswell/doswell.html