In recent years we have witnessed a flourishing of automated and community-based question answering (cQA) services, which have emerged as an effective paradigm for information seeking, knowledge dissemination, and expert involvement. However, existing cQA services have several severe limitations. First, they are not fully automated and rely on users to provide answers, even when good answers are already available in their archives. Second, they usually offer only textual answers, even though many intuitive video answers are available on the Web. Third, they lack the social and structural elements to link experts and bloggers, and to integrate information from knowledge sources, cQA archives, and blog sites, to provide the best answers. This project examines an architecture that links cQA archives, knowledge sources, and blogs, as well as users and experts, to offer the best multimedia QA service:
(1) Enrich textual QA with appropriate multimedia data by leveraging community-contributed knowledge to bridge the semantic gap between textual questions and media answers.
(2) Predict the availability of multimedia answers on the web by jointly analyzing semantic cues and visual information, as illustrated in Figure 2.7.
(3) Boost the search performance of complex queries generated from QA pairs, so as to improve the selection of relevant multimedia data.
(4) Incorporate social features to match questions to the experts best suited to answer them, thereby improving expert response rates (see Figure 2.8).
(5) Generate content ontologies of QA portals flexibly, with the aim of deriving dynamic and timely knowledge structures from a variety of information sources, with contributions from crowdsourcing and content experts.
(6) Help users learn, ask, and find better questions and answers by leveraging knowledge structures and QA archives.
To demonstrate the above capabilities, we will focus our research on several vertical domains, such as medicine, community health, and consumer products.
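To make objective (2) concrete, the sketch below illustrates one naive way to predict the best answer medium for a question from shallow textual cues alone. The cue lexicons and thresholds are hypothetical, purely for illustration; the actual approach would learn such signals jointly with visual information rather than rely on hand-picked keywords.

```python
# Illustrative sketch only (not the project's actual model): predict whether a
# question is best served by a video, image, or text answer using small,
# hand-picked (hypothetical) cue lexicons.

VIDEO_CUES = {"how", "install", "cook", "repair", "exercise", "demonstrate"}
IMAGE_CUES = {"look", "appearance", "symptom", "rash", "diagram", "logo"}

def predict_answer_medium(question: str) -> str:
    """Return 'video', 'image', or 'text' for a natural-language question."""
    tokens = {t.strip("?.,!").lower() for t in question.split()}
    video_score = len(tokens & VIDEO_CUES)   # procedural "how-to" signals
    image_score = len(tokens & IMAGE_CUES)   # visual-appearance signals
    if video_score > image_score and video_score > 0:
        return "video"
    if image_score > 0:
        return "image"
    return "text"

print(predict_answer_medium("How do I install a ceiling fan?"))      # video
print(predict_answer_medium("What does a shingles rash look like?")) # image
print(predict_answer_medium("What year was Singapore founded?"))     # text
```

In practice this medium-prediction step would be a learned classifier over both semantic and visual features, but even this toy version shows why "how-to" questions tend to warrant video answers while factoid questions remain textual.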
Figure 2.7: The proposed framework for multimedia answer re-ranking for complex queries.
Figure 2.8: Question annotation framework for social QA.
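As a minimal illustration of the question-to-expert matching in objective (4), the sketch below routes a question to the expert whose answering history is most similar under bag-of-words cosine similarity. The expert profiles are made-up examples; the actual system would additionally exploit social features rather than content similarity alone.

```python
# Illustrative sketch only (not the project's actual method): route a question
# to the expert whose (made-up) answering-history profile is most similar,
# using bag-of-words cosine similarity.
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Lowercased bag-of-words representation of a short text."""
    return Counter(text.lower().replace("?", "").split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical expert profiles built from past answers.
experts = {
    "dr_tan": "diabetes insulin blood sugar diet medication",
    "coach_lim": "running marathon training injury recovery stretching",
}

def route(question: str) -> str:
    """Return the name of the most similar expert for the question."""
    q = bow(question)
    return max(experts, key=lambda name: cosine(q, bow(experts[name])))

print(route("What diet helps control blood sugar?"))  # dr_tan
print(route("How should I train for a marathon?"))    # coach_lim
```

A production router would combine such content similarity with social signals (answer quality, availability, past response rates), which is precisely what the question annotation framework of Figure 2.8 is intended to capture.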
Liqiang Nie, Meng Wang, Yue Gao, Zheng-Jun Zha, Tat-Seng Chua: Beyond Text QA: Multimedia Answer Generation by Harvesting Web Information. IEEE Transactions on Multimedia 15(2): 426-441 (2013).
Liqiang Nie, Meng Wang, Zheng-Jun Zha, Guangda Li, Tat-Seng Chua: Multimedia Answering: Enriching Text QA with Media Information. ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2011): 695-704.