Detecting Certainness in Spoken Tutorial Dialogues
Jackson Liscombe, Julia Hirschberg, Jennifer J. Venditti
Spoken Language Processing Group
Department of Computer Science
Columbia University
New York City, NY, USA
{jaxin,julia,jjv}@cs.columbia.edu
Abstract

What role does affect play in spoken tutorial systems, and is it automatically detectable? We investigated the classification of student certainness in a corpus collected for ITSPOKE, a speech-enabled Intelligent Tutoring System (ITS). Our study suggests that tutors respond to indications of student uncertainty differently from student certainty. Results of machine learning experiments indicate that acoustic-prosodic features can distinguish student certainness from other student states. A combination of acoustic-prosodic features extracted at two levels of intonational analysis (breath groups and turns) achieves 76.42% classification accuracy, a 15.8% relative improvement over baseline performance. Our results suggest that student certainness can be automatically detected and utilized to create better spoken dialogue ITSs.

    ... 9.7 min. into dialogue ...
TUTOR: So when you apply a force what is the result of application of force on a body?
STUDENT: The force is transferred to the container. (UNCERTAIN)
TUTOR: No. Force does not get transferred to anything.
STUDENT: Uh-huh. (NEUTRAL)
TUTOR: Force is exerted and what does the force produce?
STUDENT: Movement of the container. (UNCERTAIN)
TUTOR: No. acceleration.
STUDENT: Acceleration. OK. (CERTAIN)

Figure 1: A transcribed excerpt from our corpus of human-human spoken tutorial dialogues (with certainness annotation of student turns in parentheses).

1. Introduction

As Intelligent Tutoring Systems (ITSs) move from text-based interactive systems to spoken dialogue systems, new avenues of exploration emerge by virtue of the rich meta-linguistic information encapsulated in human speech, among them emotion. While researchers have been studying emotion as it is manifested in isolated, acted speech for some time, interest in detecting emotion in conversational speech has emerged only in the past few years as a response to the needs of real-world systems. Emotion detection is considered an important task in a variety of applications, such as customer care centers [1, 2], task planning systems [3, 4], as well as ITSs [5]. The expression of user emotion in these contexts (emotions such as anger, frustration, or confusion) conveys important information that, if detected, could be used to improve user satisfaction.

In this paper, we examine manifestations of student certainness as expressed within the context of a spoken dialogue ITS. We investigate a spoken corpus of human-human tutorial dialogues, described in Section 2, in which student turns are annotated with certainness labels (Section 3). A few important questions to consider when looking at student certainness in spoken dialogue ITSs are whether or not human tutors use such information when tutoring students and whether or not detection of certainness aids in student learning. To address these questions, we describe in Section 4 how tutor behavior differs based on whether the student is perceived to be 'certain' or 'uncertain' about their previous statement. These turns have also been segmented into breath groups by procedures described in Section 5.2. We present results of automatic classification of certainness using acoustic-prosodic information, calculated both at the turn and breath group level. Section 5 enumerates the acoustic-prosodic features we extracted from the corpus dialogues at both levels, while Section 6 compares certainness classification results using different feature set partitioning. In Section 7 we discuss the implications this study has for future research in the detection of certainness, and emotion in general, in spoken dialogue ITSs.

2. Corpus Description

Our corpus comprises human-human spoken dialogues collected for the development of ITSPOKE, an intelligent tutoring spoken dialogue system in the physics domain [6]. In total, 141 dialogues from 17 subjects (7 female, 10 male) were used for our study. A dialogue consists of audio recordings of a tutoring session between a student and a tutor. Each student is first asked to type an essay in response to a physics question. The tutor and student then discuss the student's answer until the tutor determines that the student has successfully mastered the material. The student and tutor were each recorded with different microphones and each channel was manually transcribed and segmented into turns. While both the student and tutor were in the same room together, they were separated by a partition in such a way that they could not see each other. In total, our corpus contains 6778 student turns (about 400 turns per subject), each averaging 2.3 seconds in length. An excerpt of a dialogue from the corpus is shown in Figure 1.
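The per-subject figure quoted above follows from simple arithmetic; the sketch below only re-derives it (the subject, turn, and duration counts come from the text, while the derived quantities and variable names are ours):

```python
# Corpus statistics reported in Section 2.
num_subjects = 17
num_student_turns = 6778
mean_turn_seconds = 2.3

# Derived figures: turns per subject and total student speech time.
turns_per_subject = num_student_turns / num_subjects               # ~399, i.e. "about 400"
total_speech_hours = num_student_turns * mean_turn_seconds / 3600  # ~4.3 hours of student speech

print(round(turns_per_subject), round(total_speech_hours, 1))  # → 399 4.3
```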







3. Certainness Annotation

All student turns containing human speech in our corpus (6778 in total) were labeled for certainness. In particular, a student turn was annotated with one of the following labels: uncertain, certain, neutral, or mixed. A neutral turn is one that is perceived to be neither certain nor uncertain, whereas a mixed turn is one that appears to convey both. Student turns were annotated based on the perception of the labeler. The distribution of the labels is: 64.2% neutral, 18.4% certain, 13.6% uncertain, 3.8% mixed. In this study we exclude student turns that were labeled mixed.

Inter-labeler agreement for this annotation was calculated using Cohen's Kappa statistic [7] on a subset of the data consisting of 505 student turns labeled for certainness by three different labelers. The average Kappa score among the three labelers was 0.52. This score is consonant with labeling agreement in spoken dialogue emotion classification [2, 3, 5]. The labels used in this study are those from a single labeler.

Dialogue Act   Certain   Uncertain
Bot              4.5%       6.9%
DeepAnsQ         5.6%       3.1%
LongAnsQ         1.1%       3.0%
Neg              6.7%       9.6%
Pos             23.7%      22.2%
Rcp              1.1%       2.5%
RD               1.1%       3.2%
Rst             27.1%      14.4%
SC               3.4%       6.1%
ShortAnsQ        9.3%      13.0%

Table 1: Frequency of tutor dialogue acts immediately following certain or uncertain student turns.
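Cohen's Kappa, used above, measures agreement corrected for chance. Below is a minimal sketch of the standard two-labeler formulation (the paper reports the average over the three labeler pairs, which amounts to one such call per pair); the toy labels are illustrative, not drawn from the corpus:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators over the same turns."""
    n = len(labels_a)
    # Observed agreement: fraction of turns given identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each labeler's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Toy example using the four certainness labels of Section 3.
a = ["neutral", "certain", "uncertain", "neutral", "mixed", "certain"]
b = ["neutral", "certain", "neutral",   "neutral", "mixed", "uncertain"]
print(round(cohens_kappa(a, b), 2))  # → 0.54
```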
4. Tutor Responses to Student Certainness

In addition to the certainness annotation described in Section 3, our corpus has also been labeled with dialogue acts indicating the pragmatic effect of turns and tailored for the tutoring domain. Tutoring dialogue acts include the following: question types a tutor might ask a student (ShortAnsQ, LongAnsQ, DeepAnsQ), directives (RD), restatements or rewordings of student answers (Rst), tutor hints (Hint), tutor answers in the face of student failure (Bot), novel information (Exp), review of past arguments (Rcp), and direct positive and negative feedback (Pos, Neg). For a detailed description of these tutorial dialogue acts see [8].

One of the most obvious questions concerning the perception of student certainness in tutorial domains is: do tutors respond to it? Do they change their behavior if they detect that a student is uncertain despite the fact that what they may have said is factually correct, in other words, a lucky guess? To explore this in our corpus we examined all the dialogue acts that directly follow certain and uncertain student turns. Table 1 lists the frequency of use of each tutor dialogue act given that the preceding student turn was certain or uncertain. Based on this evidence, we can make the following observation: the tutor uses the following techniques more frequently when a student turn is perceived to be uncertain: solving the problem explicitly (Bot), providing direct negative feedback (Neg), and recapping past discussion (Rcp). In addition, the tutor restates the student's answer (Rst) less frequently in the face of uncertain student turns. Finally, the tutor more frequently utilizes deep reasoning type questions (DeepAnsQ) when the student is certain; whereas,

5.1. Turn features

In order to generalize the prosodic aspects of each student turn in its entirety, 57 acoustic-prosodic features were extracted over each student turn in the data. Turn-level features were divided into two feature sets: (1) those extracted from the current turn only (t_cur) and (2) contextual features expressed as the relationship between the current student turn and select turns in the dialogue history (t_cxt).

The t_cur feature set includes 15 acoustic-prosodic features. The features in this set comprise:

(5) mean absolute slope, minimum, maximum, mean, and standard deviation statistics of fundamental frequency (F0)

(4) minimum, maximum, mean, and standard deviation statistics of intensity (RMS)

(1) ratio of voiced frames to total frames in the speech signal as an approximation of speaking rate

(4) relative position in the turn where minimum F0, maximum F0, minimum RMS, and maximum RMS occur

(1) turn duration

The t_cxt feature set contains 42 features. These capture contextual information provided by the dialogue history by tracking how the student's prosody changes over time. The intuition behind features in this set is that changing acoustic-prosodic measurements may be an indication of changes in emotional state. Features in this set include features comparing the rate of change between 10 of the t_cur features: mean absolute slope, minimum, maximum, mean, and standard deviation of F0; minimum, maximum, mean, and standard deviation of RMS; and
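The turn-level (t_cur) feature extraction enumerated in Section 5.1 can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes frame-level F0 and intensity (RMS) tracks have already been produced by a pitch tracker (e.g., at a 10 ms step), with unvoiced frames marked as None in the F0 track, and all function and variable names are ours:

```python
import statistics

def turn_features(f0, rms, frame_step=0.01):
    """Sketch of the 15 turn-level (t_cur) features of Section 5.1.

    f0:  per-frame fundamental frequency in Hz (None for unvoiced frames)
    rms: per-frame intensity (RMS energy) values
    frame_step: seconds per analysis frame (assumed step size)
    """
    n = len(f0)
    voiced = [(i, v) for i, v in enumerate(f0) if v is not None]
    idx, f0v = zip(*voiced)

    return {
        # (5) F0 statistics, incl. mean absolute slope between voiced frames.
        "f0_slope": statistics.mean(abs(b - a) for a, b in zip(f0v, f0v[1:])),
        "f0_min": min(f0v), "f0_max": max(f0v),
        "f0_mean": statistics.mean(f0v), "f0_std": statistics.pstdev(f0v),
        # (4) intensity (RMS) statistics.
        "rms_min": min(rms), "rms_max": max(rms),
        "rms_mean": statistics.mean(rms), "rms_std": statistics.pstdev(rms),
        # (1) voiced-to-total frame ratio as a speaking-rate approximation.
        "voiced_ratio": len(f0v) / n,
        # (4) relative position in the turn of each F0/RMS extremum.
        "f0_min_pos": idx[f0v.index(min(f0v))] / n,
        "f0_max_pos": idx[f0v.index(max(f0v))] / n,
        "rms_min_pos": rms.index(min(rms)) / n,
        "rms_max_pos": rms.index(max(rms)) / n,
        # (1) turn duration in seconds.
        "duration": n * frame_step,
    }
```

The 42 contextual (t_cxt) features would then compare such per-turn values against those of earlier turns in the dialogue history, e.g., as differences or rates of change.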
