Research Projects

  • Document Intelligence

    Document AI, or Document Intelligence, refers to techniques for automatically reading, understanding, and analyzing business documents. Existing research cannot yet fully comprehend documents and appropriately answer complex questions about them. Several challenges remain, including but not limited to the following: i) Rich structure: how to understand long documents with rich layouts; ii) Heterogeneous content: how to understand the multimodal and heterogeneous information contained in a document, e.g., tables, charts, and figures; iii) Complex questions: how to answer complex questions with long output answers that are grounded in the input documents, which may require reasoning over long pieces of text; iv) Transferable models: how to learn a document-level QA model that can be applied to different domains (e.g., manuals for different products) with limited training data for each domain. The general goal of this project is to develop document-grounded NLP systems that cope with real-world usage scenarios.

    Relevant Materials

  • Learning Embodied Expert Agents

    In this research direction, we aim to develop systems that can leverage multimodal data and knowledge to solve expert tasks in various domains and accelerate scientific discovery. We seek to learn expert models efficiently, with lower resource requirements, greater generalizability, and better explainability of their behaviors than existing foundation models. Specifically, we aim to develop theories and techniques that combine geometric multimodal representation learning with analogy-based reasoning to learn embodied agents in a multimodal, multitask setting, improving learning efficiency, model generalizability, and model explainability.

    Relevant Materials

  • Causal and Analogical AI

    In this research direction, we want to explore the key roles that causality and analogy play in NLP and machine intelligence. Our long-term vision is to improve the generalization, robustness, and explainability of machine learning models in NLP and apply them to different domains, e.g., news media, healthcare, and finance.

  • NLP4MatSci: Natural Language Processing for Material Science

    Data have always been a fundamental ingredient for scientific discovery. In materials science, large amounts of heterogeneous data are produced every day, including scientific publications, lab reports, manuals, and tables. The variety of materials classes and materials properties of interest leads to data that span many orders of magnitude.

    To accelerate materials discovery (and scientific discovery in other domains), we need to transform large and heterogeneous data into a form that can be readily consumed and mined. By coupling such data with domain expertise, we can build upon previous findings, rapidly enter a new field, connect individual research efforts, link across disciplines, and accelerate discovery. Natural language processing (NLP) therefore plays a key role in understanding and unlocking the rich datasets in materials science, especially in understanding the scientific literature and extracting useful information from it. Capturing unstructured information from the vast and ever-growing number of scientific publications holds substantial promise for creating the experiment-based databases that are currently lacking and for meeting various needs in the materials domain. Developing data mining techniques for the materials science literature may also prevent information loss: without structuring information, scientists cannot make the necessary connections among findings.

    However, directly applying NLP techniques developed for the general domain to materials science does not yield satisfactory performance. The reasons include, but are not limited to, the following: i) the content and style of materials science literature differ from general-domain texts such as news articles, which degrades performance on NLP tasks; ii) understanding the literature requires significant in-domain expert knowledge; and iii) we lack high-quality, large-scale labeled training datasets for NLP tasks in the materials science domain, and creating new datasets is challenging and expensive.

    In this research project, we aim to develop effective NLP techniques for materials discovery, either by adapting current NLP techniques to the materials domain or by developing new algorithmic pipelines.
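    As a toy illustration of this kind of literature mining, the sketch below pulls chemical formulas and synthesis temperatures out of made-up materials-science sentences with regular expressions; the sentences and patterns are illustrative assumptions, not the project's actual pipeline.

    ```python
    # Toy sketch (not the project's pipeline): extract chemical formulas and
    # synthesis temperatures from materials-science sentences with regular
    # expressions. Sentences and patterns are illustrative assumptions.
    import re

    sentences = [
        "LiFePO4 powders were annealed at 700 C for 10 h under argon.",
        "Thin films of BaTiO3 were deposited and then sintered at 1200 C.",
    ]

    # Crude pattern for formulas such as LiFePO4 or BaTiO3: element symbols
    # with optional stoichiometric digits, repeated at least twice.
    formula_re = re.compile(r"\b(?:[A-Z][a-z]?\d*){2,}\b")
    temp_re = re.compile(r"(\d{3,4})\s*C\b")

    for s in sentences:
        formulas = [m.group(0) for m in formula_re.finditer(s)]
        temps = [int(m.group(1)) for m in temp_re.finditer(s)]
        print({"formulas": formulas, "temperatures_C": temps})
    ```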

    Relevant Materials

  • DLG4NLP: Deep Learning on Graphs for Natural Language Processing

    This project aims to develop efficient graph representation, learning, and reasoning techniques for natural language processing tasks.

  • Automated Question Generation and Question Answering

    This project aims to develop efficient algorithms and techniques for question generation and question answering.

  • Web-scale Knowledge Construction and Expansion

    This project focuses on taxonomy/ontology/knowledge graph construction and expansion. Our techniques and systems have been deployed in various industry applications, e.g., Tencent QQ Browser, WeChat, and Mobile QQ, for news feed recommendation, document tagging, and search, serving over a billion active mobile device users.

  • Text Matching via Deep Learning and Graph Neural Networks

    This project aims to develop efficient algorithms for text matching tasks. We have developed tree-based and graph-based models and representations for short/long text matching tasks.

  • RePaGer for Academic Reading Path Generation

    We introduce a new task named Reading Path Generation (RPG), which aims to automatically produce a path of papers to read for a given query. To serve as a research benchmark, we further propose SurveyBank, a dataset consisting of a large number of survey papers in the field of computer science together with their citation relationships. A real-time Reading Path Generation (RePaGer) system has also been implemented with our designed model.
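    The sketch below is a minimal, hypothetical illustration of the task (not the RePaGer model): given a toy citation graph, it produces a reading path for a query paper by visiting the papers it transitively builds on in post-order, so that prerequisites come before the papers that cite them.

    ```python
    # Minimal sketch (not the RePaGer model): order the papers a query paper
    # transitively builds on so that cited work comes before the work citing it.
    # The toy citation graph is an illustrative assumption.
    cites = {                          # paper -> papers it cites
        "survey-X": ["method-B", "method-C"],
        "method-B": ["foundation-A"],
        "method-C": ["foundation-A", "method-B"],
        "foundation-A": [],
    }

    def reading_path(query, cites):
        path, seen = [], set()
        def visit(p):                  # post-order DFS over the citation graph
            if p in seen:
                return
            seen.add(p)
            for q in cites.get(p, []):
                visit(q)
            path.append(p)
        visit(query)
        return path

    print(reading_path("survey-X", cites))
    # ['foundation-A', 'method-B', 'method-C', 'survey-X']
    ```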

  • Story Forest for Hot Events Discovery and Organization

    We developed the Story Forest system to automatically cluster news documents into events, while connecting related events in a growing tree structure to tell an evolving story. Story Forest has been deployed in Tencent QQ Browser for document clustering and hot event discovery.
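    A minimal sketch of the two-step idea (not the deployed system): greedily group documents into events by keyword overlap, then attach each later event under its most similar earlier event to grow a story tree. The toy documents and thresholds below are illustrative assumptions.

    ```python
    # Minimal sketch (not the deployed Story Forest system): cluster documents
    # into events by keyword overlap, then attach each later event to its most
    # similar earlier event to grow a story tree.
    docs = [
        "team A wins the league final",
        "team A defeats team B in the league final",
        "team A captain injured after final",
        "transfer rumours link team B striker with a move abroad",
    ]

    def keywords(text):
        return set(text.lower().split())

    def jaccard(a, b):
        return len(a & b) / len(a | b)

    # Step 1: greedy single-pass event clustering by keyword similarity.
    events = []                        # each event is a list of document indices
    for i, d in enumerate(docs):
        sims = [max(jaccard(keywords(d), keywords(docs[j])) for j in e)
                for e in events]
        if sims and max(sims) >= 0.3:
            events[sims.index(max(sims))].append(i)
        else:
            events.append([i])

    # Step 2: grow the story tree by attaching each later event to the most
    # similar earlier event.
    def event_kw(e):
        return set().union(*(keywords(docs[j]) for j in e))

    parent = {0: None}
    for e in range(1, len(events)):
        parent[e] = max(range(e),
                        key=lambda p: jaccard(event_kw(events[e]), event_kw(events[p])))

    print("events:", events)
    print("story tree (child -> parent):", parent)
    ```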

  • Spatial Data Analysis and Recovery

    This project is about spatial data analysis and recovery. We design novel algorithms for house price estimation and for spatial recovery from partial aggregate observations.

    Reconstructing fine-grained spatial densities from coarse-grained measurements, namely the aggregate observations recorded for each subregion in the spatial field of interest, is a critical problem in many real-world applications. In this project, we propose a novel Constrained Spatial Smoothing (CSS) approach to spatial data reconstruction. We observe that local continuity exists in many types of spatial data. Based on this observation, our approach performs sparse recovery via a finite element method while enforcing the aggregated observation constraints through an innovative use of the Alternating Direction Method of Multipliers (ADMM) framework. Furthermore, our approach can incorporate external information as a regression add-on to further enhance recovery performance.
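    The sketch below illustrates the constraint-handling idea on a toy 1-D grid: recover a smooth fine-grained field from per-block sums by minimizing a discrete-Laplacian roughness penalty while enforcing the aggregation constraints with augmented-Lagrangian (ADMM-style) dual updates. The grid size, block layout, and penalty parameter are assumptions; this is not the project's CSS implementation.

    ```python
    # Toy sketch of constrained spatial smoothing on a 1-D grid (not the CSS code):
    # reconstruct a fine-grained field x from per-block sums b = A @ x by combining
    # a smoothness prior with ADMM-style dual updates on the aggregation constraints.
    import numpy as np

    n, block = 40, 8
    x_true = np.sin(np.linspace(0, 3 * np.pi, n)) + 2.0    # smooth ground truth
    A = np.kron(np.eye(n // block), np.ones((1, block)))   # aggregation operator
    b = A @ x_true                                          # observed subregion sums

    D = np.diff(np.eye(n), 2, axis=0)      # second differences encode local continuity
    L = D.T @ D
    rho = 10.0                             # augmented-Lagrangian penalty parameter
    x, y = np.zeros(n), np.zeros(A.shape[0])
    for _ in range(200):
        # x-update: roughness penalty plus quadratic penalty on constraint violation
        x = np.linalg.solve(L + rho * A.T @ A, A.T @ (rho * b - y))
        # dual ascent on the aggregation constraints A @ x = b
        y += rho * (A @ x - b)

    print("max constraint violation:", np.abs(A @ x - b).max())
    print("reconstruction RMSE:", np.sqrt(np.mean((x - x_true) ** 2)))
    ```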

    Online real-estate information systems such as Zillow and Trulia have gained increasing popularity in recent years. One important feature offered by these systems is the online home price estimate, computed automatically from housing information and comparative market value analysis. State-of-the-art approaches model house prices as a combination of a latent land desirability surface and a regression from house features. However, by using uniformly damping kernels, they are unable to handle irregularly shaped regions or capture land value discontinuities within the same region caused by implicit sub-communities, which are common in real-world scenarios. In this work, we explore the novel application of recent advances in spatial functional analysis to house price modeling and propose the Hierarchical Spatial Functional Model (HSFM), which decomposes house values into land desirability at both the global scale and hidden local scales, plus a feature regression component. We propose statistical learning algorithms based on finite-element spatial functional analysis and spatially constrained clustering to train our model.
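    As a simplified stand-in for this decomposition (not HSFM itself), the sketch below fits price as a latent spatial desirability surface plus a linear feature regression by backfitting, with a k-nearest-neighbour spatial average in place of the finite-element smoother; all data are synthetic assumptions.

    ```python
    # Simplified stand-in for the price decomposition (not HSFM itself): fit
    # price = spatial desirability surface + linear feature regression by
    # backfitting, using a k-NN spatial average instead of a finite-element
    # smoother. All data below are synthetic assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    loc = rng.uniform(0, 10, size=(n, 2))                  # house locations
    feats = rng.normal(size=(n, 3))                        # e.g. area, rooms, age
    beta_true = np.array([0.5, 0.3, -0.2])
    surface_true = np.sin(loc[:, 0]) + 0.5 * np.cos(loc[:, 1])
    price = surface_true + feats @ beta_true + 0.1 * rng.normal(size=n)

    def knn_smooth(values, locations, k=25):
        """Spatial smoother: average of the k nearest neighbours at each location."""
        d = np.linalg.norm(locations[:, None, :] - locations[None, :, :], axis=-1)
        idx = np.argsort(d, axis=1)[:, :k]
        return values[idx].mean(axis=1)

    surface_hat = np.zeros(n)
    for _ in range(20):                                    # backfitting iterations
        # regress the residual (price minus current surface) on house features
        beta_hat, *_ = np.linalg.lstsq(feats, price - surface_hat, rcond=None)
        # smooth the remaining residual over space to update the surface
        surface_hat = knn_smooth(price - feats @ beta_hat, loc)

    print("estimated feature coefficients:", np.round(beta_hat, 3))
    ```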

    Relevant Materials

  • Network Latency Estimation and Low-Rank Matrix Completion

    Predicting network latencies between nodes in a network is critical to real-time applications. Traditional algorithms are not well suited to mobile networks because of time-varying network conditions. This project collected network latency data from the Seattle platform and developed new algorithms for network latency prediction. The proposed approaches significantly outperform various state-of-the-art latency prediction techniques. Work based on this project was accepted by IEEE INFOCOM 2015 and IEEE/ACM Transactions on Networking (TON).
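    A minimal sketch of the underlying idea (not the project's algorithm): treat the pairwise latency matrix as approximately low-rank and fill in unmeasured pairs by alternating least squares on the observed entries. The rank, regularization, and synthetic latency matrix are assumptions.

    ```python
    # Minimal sketch (not the project's algorithm): complete a partially observed
    # pairwise latency matrix by low-rank factorization M ~ U @ V.T, fit with
    # alternating least squares on the observed entries.
    import numpy as np

    rng = np.random.default_rng(2)
    n, r = 30, 3
    U_true = rng.uniform(0.5, 1.5, size=(n, r))
    M = U_true @ U_true.T                      # synthetic low-rank latency matrix (ms)
    mask = rng.random((n, n)) < 0.4            # only 40% of node pairs are measured

    U, V = rng.normal(size=(n, r)), rng.normal(size=(n, r))
    lam = 0.1                                  # ridge regularization
    for _ in range(50):
        for i in range(n):                     # update row factors from observed rows
            Vo = V[mask[i]]
            U[i] = np.linalg.solve(Vo.T @ Vo + lam * np.eye(r), Vo.T @ M[i, mask[i]])
        for j in range(n):                     # update column factors symmetrically
            Uo = U[mask[:, j]]
            V[j] = np.linalg.solve(Uo.T @ Uo + lam * np.eye(r), Uo.T @ M[mask[:, j], j])

    M_hat = U @ V.T
    print("MAE on unmeasured pairs (ms):", round(np.abs(M_hat - M)[~mask].mean(), 4))
    ```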