SPOKEN AND MULTIMODAL DIALOGUE SYSTEMS
(Ref. TIC-018)

Information about the group

Research group Spoken and Multimodal Dialogue Systems (SISDIAL)

Dept. of Languages and Computer Systems, Faculty of Computer Science and Telecommunications, University of Granada, Spain

Spoken dialogue systems enable human-computer interaction (HCI) using spontaneous speech. Many of these systems are currently employed to automate telephone-based information services.

Multimodal dialogue systems enable HCI through several interaction modalities, for example speech, body gestures and facial expressions. These systems are used nowadays to make the interaction more human-like, for example in tutoring systems, Ambient Intelligence (AmI), robots and healthcare systems.

The research group Spoken and Multimodal Dialogue Systems (SISDIAL) focuses on the analysis, design and development of these systems, paying special attention to the following issues (a sketch of how the core components fit together follows the list):

  • Speech recognition, understanding and generation
  • Dialogue management
  • User simulation
  • Affective computing
  • Multimodal interaction
  • Ambient Intelligence (AmI)
  • Generation of computer personality models
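To illustrate how several of these topics connect in practice, below is a minimal, hypothetical Python sketch of a single turn in a spoken dialogue system, chaining speech recognition, understanding, dialogue management and generation. All function names and the flight-booking example domain are placeholders for illustration only, not SISDIAL software.

# Minimal conceptual sketch of one turn in a spoken dialogue system.
# All components are stubs with hypothetical names, used only to show the pipeline.

def recognize(audio: bytes) -> str:
    """Automatic speech recognition: audio -> text transcript (stub)."""
    return "I want to book a flight to Madrid"

def understand(transcript: str) -> dict:
    """Spoken language understanding: text -> intent and slots (stub)."""
    return {"intent": "book_flight", "slots": {"destination": "Madrid"}}

def manage_dialogue(state: dict, semantics: dict):
    """Dialogue manager: update the dialogue state and choose the next system action."""
    state.update(semantics.get("slots", {}))
    if "date" not in state:
        return state, "request_date"
    return state, "confirm_booking"

def generate(action: str, state: dict) -> str:
    """Natural language generation: system action -> response text."""
    if action == "request_date":
        return "When would you like to travel to %s?" % state.get("destination", "?")
    return "Booking your flight to %s." % state.get("destination", "?")

def dialogue_turn(state: dict, audio: bytes):
    """One user turn: ASR -> understanding -> dialogue management -> generation."""
    transcript = recognize(audio)
    semantics = understand(transcript)
    state, action = manage_dialogue(state, semantics)
    return state, generate(action, state)

if __name__ == "__main__":
    state, reply = dialogue_turn({}, audio=b"")
    print(reply)  # -> "When would you like to travel to Madrid?"

In a deployed system each stub would be replaced by a real component (a speech recognizer, a dialogue manager, a speech synthesizer for the reply), and a multimodal system would add further input channels such as gestures or facial expressions fused with the speech input.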
