
    2023

    Word Sense Disambiguation as a Game of Neurosymbolic Darts(link) arXiv 2023 Tiansi Dong, Rafet Sifa
    Abstract:

    Word Sense Disambiguation (WSD) is one of the hardest tasks in natural language understanding and knowledge engineering. The glass ceiling of an 80% F1 score was recently reached through supervised deep learning enriched by a variety of knowledge graphs. Here, we propose a novel neurosymbolic methodology that is able to push the F1 score above 90%. The core of our methodology is a neurosymbolic sense embedding, in terms of a configuration of nested balls in n-dimensional space. The centre point of a ball well preserves the word embedding, which partially fixes the location of the ball. Inclusion relations among balls precisely encode symbolic hypernym relations among senses, and enable simple logical deduction among sense embeddings, which could not be realised before. We trained a Transformer to learn the mapping from a contextualized word embedding to its sense ball embedding, just like playing the game of darts (a game of shooting darts into a dartboard). A series of experiments was conducted using pre-trained n-ball embeddings, which cover around 70% of the training data and 75% of the testing data in the benchmark WSD corpus. The F1 scores range from 90.1% to 100.0% across all six groups of test datasets (each group has four test sets with different sizes of n-ball embeddings). Our novel neurosymbolic methodology has the potential to break the ceiling of deep-learning approaches to WSD. Limitations and extensions of our current work are listed.
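
    A minimal sketch of the geometry behind such sense embeddings (an editorial illustration, not the paper's implementation; the example balls and dimensions are assumptions): an n-ball is a centre vector plus a radius, and hypernym deduction reduces to a containment test.

        import numpy as np

        def contains(outer_centre, outer_radius, inner_centre, inner_radius):
            # One ball nests inside another iff the distance between centres
            # plus the inner radius does not exceed the outer radius.
            d = np.linalg.norm(np.asarray(outer_centre) - np.asarray(inner_centre))
            return d + inner_radius <= outer_radius

        # Hypothetical sense balls: 'dog' nested inside its hypernym 'animal'.
        animal_centre, animal_radius = np.array([0.0, 0.0]), 5.0
        dog_centre, dog_radius = np.array([1.0, 1.0]), 2.0

        # Inclusion encodes the hypernym relation, so deduction is one test.
        assert contains(animal_centre, animal_radius, dog_centre, dog_radius)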

    Editorial: Multimodal communication and multimodal computing(link) Frontiers in Artificial Intelligence June 2023 Alexander Mehler, Andy Lücking, Tiansi Dong

    2022

    How Can Cross-lingual Knowledge Contribute Better to Fine-Grained Entity Typing?(link) Findings of ACL 2022 Hailong Jin, Tiansi Dong, Lei Hou, Juanzi Li, Hui Chen, Zelin Dai, Yincen Qu
    Abstract:

    Cross-lingual Entity Typing (CLET) aims at improving the quality of entity type prediction by transferring semantic knowledge learned from rich-resourced languages to low-resourced languages. In this paper, by utilizing multilingual transfer learning via the mixture-of-experts approach, our model dynamically captures the relationship between the target language and each source language, and effectively generalizes to predict types of unseen entities in new languages. Extensive experiments on multilingual datasets show that our method significantly outperforms multiple baselines and can robustly handle negative transfer. We question the relationship between language similarity and the performance of CLET. With a series of experiments, we refute the common assumption that more sources are always better, and propose the Similarity Hypothesis for CLET.
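
    The mixture-of-experts idea can be pictured with a small numeric sketch (illustrative only; the expert and gate shapes below are assumptions, not the paper's architecture): one expert per source language, with a gate that weighs the experts for each target-language entity.

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        rng = np.random.default_rng(0)
        n_sources, dim, n_types = 3, 8, 5
        experts = [rng.normal(size=(dim, n_types)) for _ in range(n_sources)]  # one per source language
        gate_w = rng.normal(size=(dim, n_sources))

        def predict(entity_vec):
            weights = softmax(entity_vec @ gate_w)  # dynamic relevance of each source language
            logits = sum(w * (entity_vec @ E) for w, E in zip(weights, experts))
            return softmax(logits)  # distribution over entity types

        print(predict(rng.normal(size=dim)))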

    Structure and Learning(link) Dagstuhl Reports, 11(8), 2022. ISSN 2192-5283 Tiansi Dong, Achim Rettinger, Jie Tang, Barbara Tversky, Frank van Harmelen
    Abstract:

    This report documents the program and the outcomes of Dagstuhl Seminar 21362 “Structure and Learning”, held from September 5 to 10, 2021. Structure and learning are among the most prominent topics in Artificial Intelligence (AI) today. Integrating symbolic and numeric inference was set as one of the next open AI problems at the Townhall meeting “A 20 Year Roadmap for AI” at AAAI 2019. In this Dagstuhl seminar, we discussed related problems from an interdisciplinary perspective, in particular, Cognitive Science, Cognitive Psychology, Physics, Computational Humor, Linguistics, Machine Learning, and AI. This report overviews presentations and working groups during the seminar, and lists two open problems.

    Rotating Spheres: A New Wheel for Neuro-Symbolic Unification(link) Dagstuhl Reports, 11(8):18, 2022. ISSN 2192-5283 Tiansi Dong
    Spatial Humor(link) Dagstuhl Reports, 11(8):28, 2022. ISSN 2192-5283 Tiansi Dong, Christian Hempelmann
    Rotating Sphere Model for NLP(link) Dagstuhl Reports, 11(8):28, 2022. ISSN 2192-5283 Roberto Navigli, Tiansi Dong, Thomas Liebig, Yong Liu, Alexander Mehler, Tristan Miller, Siba Mohsen, Sven Naumann
    Towards a Survey of Meaning Representation(link) Dagstuhl Reports, 11(8):29, 2022. ISSN 2192-5283 Tiansi Dong, Anthony Cohn, Christian Hempelmann, Kanishka Misra, Jens Lehmann, Alexander Mehler, Tristan Miller, Siba Mohsen, Roberto Navigli, Julia Rayz, Stefan Wrobel, Ron Sun, Volker Tresp
    What would be an aggregated neural model for syllogistic reasoning?(link) Dagstuhl Reports, 11(8):31, 2022. ISSN 2192-5283 Tiansi Dong, Pietro Lio, Ron Sun
    Can we diagram the understanding of humour?(link) Dagstuhl Reports, 11(8):31, 2022. ISSN 2192-5283 Tristan Miller, Anthony Cohn, Tiansi Dong, Christian Hempelmann, Siba Mohsen, Julia Rayz

    2021

    Interpretable and Low-Resource Entity Matching via Decoupling Feature Learning from Decision Making(link) ACL-IJCNLP 2021 Zijun Yao, Chengjiang Li, Tiansi Dong, Xin Lv, Jifan Yu, Lei Hou, Juanzi Li, Yichi Zhang, Zelin Dai
    Abstract:

    Entity Matching (EM) aims at recognizing entity records that denote the same real-world object. Neural EM models learn vector representations of entity descriptions and match entities end-to-end. Though robust, these methods require many annotated resources for training and lack interpretability. In this paper, we propose a novel EM framework that consists of Heterogeneous Information Fusion (HIF) and Key Attribute Tree (KAT) Induction to decouple feature representation from matching decisions. Using self-supervised learning and the masking mechanism of pre-trained language modeling, HIF learns the embeddings of noisy attribute values by inter-attribute attention with unlabeled data. Using a set of comparison features and a limited amount of annotated data, KAT Induction learns an efficient decision tree that can be interpreted by generating entity-matching rules whose structure is advocated by domain experts. Experiments on 6 public datasets and 3 industrial datasets show that our method is highly efficient and outperforms SOTA EM models in most cases. We will release the code upon acceptance.
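
    The decision step can be pictured with a toy sketch (an editorial illustration using scikit-learn, not the paper's code; the comparison features and labels below are invented): a shallow tree over per-attribute similarities yields human-readable matching rules.

        from sklearn.tree import DecisionTreeClassifier, export_text

        # Hypothetical per-attribute comparison features (e.g. similarities of
        # HIF embeddings for name, address, phone) with match/non-match labels.
        X = [[0.9, 0.8, 0.7], [0.2, 0.3, 0.9], [0.8, 0.9, 0.1], [0.1, 0.2, 0.2]]
        y = [1, 0, 1, 0]

        kat = DecisionTreeClassifier(max_depth=2).fit(X, y)
        print(export_text(kat, feature_names=["name_sim", "addr_sim", "phone_sim"]))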

    A Geometric Approach to the Unification of Symbolic Structures and Neural Networks(link) Springer monograph 2021 Tiansi Dong

    2020

    Learning Syllogism with Euler Neural-Networks(link) arXiv:2007.07320, 2020 Tiansi Dong, Chengjiang Li, Christian Bauckhage, Juanzi Li, Stefan Wrobel, Armin B. Cremers
    Abstract:

    Traditional neural networks represent everything as a vector, and are able to approximate a subset of logical reasoning to a certain degree. As basic logical relations are better represented by topological relations between regions, we propose a novel neural network that represents everything as a ball and is able to learn topological configurations as an Euler diagram; hence the name Euler Neural-Network (ENN). The central vector of a ball inherits the representational power of a traditional neural network. ENN distinguishes four spatial statuses between balls, namely, being disconnected, being partially overlapped, being part of, and being inverse part of. Within each status, ideal values are defined for efficient reasoning. A novel back-propagation algorithm with six Rectified Spatial Units (ReSU) can optimize an Euler diagram representing logical premises, from which a logical conclusion can be deduced. In contrast to traditional neural networks, ENN can precisely represent all 24 different structures of syllogism. Two large datasets are created: one, extracted from WordNet-3.0, covers all types of syllogistic reasoning; the other covers all family relations extracted from DBpedia. Experimental results confirm the superior power of ENN in logical representation and reasoning. Datasets and source code are available upon request.
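
    The four spatial statuses follow directly from centres and radii, as in this minimal sketch (the thresholds realising the "ideal values" mentioned above are omitted):

        import numpy as np

        def spatial_status(c1, r1, c2, r2):
            # Classify the topological relation between ball 1 and ball 2.
            d = np.linalg.norm(np.asarray(c1) - np.asarray(c2))
            if d >= r1 + r2:
                return "disconnected"
            if d + r1 <= r2:
                return "part of"            # ball 1 inside ball 2
            if d + r2 <= r1:
                return "inverse part of"    # ball 2 inside ball 1
            return "partially overlapped"

        print(spatial_status([0, 0], 1.0, [3, 0], 1.0))  # disconnected
        print(spatial_status([0, 0], 1.0, [0, 0], 3.0))  # part of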

    2019

    Triple Classification Using Regions and Fine-Grained Entity Typing AAAI 2019(link) Tiansi Dong, Zhigang Wang, Juanzi Li, Christian Bauckhage, Armin B. Cremers
    Abstract:

    A triple in a knowledge graph takes the form (head, relation, tail). Triple Classification is used to determine the truth value of an unknown triple. This is a hard task for 1-to-N relations using the vector-based embedding approach. We propose a new region-based embedding approach using fine-grained type chains. A novel geometric process is presented to extend the vectors of pre-trained entities into n-balls (n-dimensional balls) under the condition that head balls shall contain their tail balls. Our algorithm achieves zero energy loss and therefore serves as a case study of perfectly imposing tree structures into vector space. An unknown triple (h, r, x) will be predicted as true when x's n-ball is located in the r-subspace of h's n-ball, following the same construction as the known tails of h. The experiments are based on large datasets derived from the benchmark datasets WN11, FB13, and WN18. Our results show that the performance of the new method is related to the length of the type chain and the quality of the pre-trained entity embeddings, and that long chains with well-trained entity embeddings outperform other methods in the literature.

    Imposing Category Trees Onto Word-Embeddings Using A Geometric Construction ICLR 2019(link) Tiansi Dong, Christian Bauckhage, Hailong Jin, Juanzi Li, Olaf Cremers, Daniel Speicher, Armin B. Cremers, Joerg Zimmermann
    Abstract:

    We present a novel method to precisely impose tree-structured category information onto word-embeddings, resulting in ball embeddings in higher dimensional spaces ($\mathcal{N}$-balls for short). Inclusion relations among $\mathcal{N}$-balls implicitly encode subordinate relations among categories. The similarity measurement in terms of the cosine function is enriched by category information. Using a geometric construction method instead of back-propagation, we create large $\mathcal{N}$-ball embeddings that satisfy two conditions: (1) category trees are precisely imposed onto word embeddings at zero energy cost; (2) pre-trained word embeddings are well preserved. A new benchmark dataset is created for validating the category of unknown words. Experiments show that $\mathcal{N}$-ball embeddings, carrying category information, significantly outperform word embeddings in the test of nearest neighborhoods, and demonstrate surprisingly good performance in validating categories of unknown words. Source code and datasets are freely available at \url{https://github.com/GnodIsNait/nball4tree.git} and \url{https://github.com/GnodIsNait/bp94nball.git}.

    Fine-Grained Entity Typing via Hierarchical Multi Graph Convolutional Networks EMNLP-IJCNLP 2019(link) Hailong Jin, Lei Hou, Juanzi Li, Tiansi Dong
    Abstract:

    This paper addresses the problem of inferring the fine-grained type of an entity from a knowledge base. We convert this problem into the task of graph-based semi-supervised classification, and propose Hierarchical Multi Graph Convolutional Network (HMGCN), a novel deep learning architecture to tackle this problem. We construct three kinds of connectivity matrices to capture different kinds of semantic correlations between entities. A recursive regularization is proposed to model the subClassOf relations between types in a given type hierarchy. Extensive experiments on two large-scale public datasets show that our proposed method significantly outperforms four state-of-the-art methods.
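
    One layer of such a multi-graph convolution can be sketched as a sum of propagations over the three connectivity matrices (an editorial sketch with assumed shapes, not the published architecture):

        import numpy as np

        def multi_graph_conv(H, adjacencies, weights):
            # Aggregate neighbours under each connectivity matrix with its own
            # weight matrix, sum the results, then apply a ReLU nonlinearity.
            Z = sum(A @ H @ W for A, W in zip(adjacencies, weights))
            return np.maximum(Z, 0.0)

        rng = np.random.default_rng(0)
        n, d_in, d_out = 4, 6, 3
        H = rng.normal(size=(n, d_in))
        As = [np.eye(n) for _ in range(3)]  # three assumed (normalized) connectivity matrices
        Ws = [rng.normal(size=(d_in, d_out)) for _ in range(3)]
        print(multi_graph_conv(H, As, Ws).shape)  # (4, 3)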

    Prototypes within Minimum Enclosing Balls ICANN 2019(link) Christian Bauckhage, Rafet Sifa, and Tiansi Dong
    Abstract:

    We revisit the kernel minimum enclosing ball problem and show that it can be solved using simple recurrent neural networks. Once solved, the interior of a ball can be characterized in terms of a function of a set of support vectors and local minima of this function can be thought of as prototypes of the data at hand. For Gaussian kernels, these minima can be naturally found via a mean shift procedure and thus via another recurrent neurocomputing process. Practical results demonstrate that prototypes found this way are descriptive, meaningful, and interpretable.
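
    For Gaussian kernels the mean shift iteration mentioned above has a compact form (a minimal sketch; the bandwidth and the two-cluster toy data are assumptions):

        import numpy as np

        def mean_shift_step(x, points, bandwidth=1.0):
            # Move x to the kernel-weighted mean of the data; fixed points of
            # this iteration are modes, which serve as prototypes.
            w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
            return (w[:, None] * points).sum(axis=0) / w.sum()

        rng = np.random.default_rng(0)
        data = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

        x = data[0]
        for _ in range(50):  # iterate to (approximate) convergence
            x = mean_shift_step(x, data)
        print(x)  # lands near the centre of the cluster around the origin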

    2018

    Joint Representation Learning of Cross-lingual Words and Entities via Attentive Distant Supervision EMNLP 2018(link) Yixin Cao, Lei Hou, Juanzi Li, Zhiyuan Liu, Chengjiang Li, Xu Chen, Tiansi Dong
    Abstract:

    Joint representation learning of words and entities benefits many NLP tasks, but has not been well explored in cross-lingual settings. In this paper, we propose a novel method for joint representation learning of cross-lingual words and entities. It captures mutually complementary knowledge, and enables cross-lingual inferences among knowledge bases and texts. Our method does not require a parallel corpus, and automatically generates comparable data via distant supervision using multilingual knowledge bases. We utilize two types of regularizers to align cross-lingual words and entities, and design knowledge attention and cross-lingual attention to further reduce noise. We conducted a series of experiments on three tasks: word translation, entity relatedness, and cross-lingual entity linking. The results, both qualitative and quantitative, demonstrate the significance of our method.

    Attributed and Predictive Entity Embedding for Fine-Grained Entity Typing in Knowledge Bases COLING 2018(link) Hailong Jin, Lei Hou, Juanzi Li, Tiansi Dong
    Abstract:

    Fine-grained entity typing aims at identifying the semantic type of an entity in a knowledge base (KB). Type information is very important in knowledge bases, but is unfortunately incomplete even in some large ones. Existing methods either ignore the structure and type information in the KB or require a large-scale annotated corpus. To address these issues, we propose an attributed and predictive entity embedding method, which can fully utilize various kinds of information comprehensively. Extensive experiments on two real DBpedia datasets show that our proposed method significantly outperforms 8 state-of-the-art methods, with 4.0% and 5.2% improvements in Mi-F1 and Ma-F1, respectively.

    An Assertion Framework for Mobile Robotic Programming with Spatial Reasoning COMPSAC 2018(link) Hao Sun, Xiaoxing Ma, Tiansi Dong, Armin B. Cremers, Chun Cao
    Abstract:

    Assertions are intensively used to facilitate correctness reasoning and error detection in daily programming. However, composing assertions for mobile robotic programs can be painfully inconvenient, because classic Hoare logic lacks the expressive power for spatial knowledge, which is crucial when robots interact with their physical environments. The problem is especially evident for Behavior-Based Robotics (BBR), where the world is not explicitly represented with program variables. In this paper, we propose to incorporate spatial reasoning capability into the assertion framework of Hoare logic. The proposed framework features a two-dimensional region calculus and additional axioms for robot movements. The calculus makes the world representation and the specification of mobile robotic programs natural and intuitive, and the axioms enable reasoning about program correctness. We illustrate the use of the framework with a typical behavior-based robotic program. In addition, we present a runtime error detection and recovery mechanism for BBR programs based on the assertion framework. Preliminary experiments with NAO robots demonstrate its effectiveness.
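
    The flavour of such spatial pre- and postconditions can be conveyed in ordinary assertion style (a toy sketch; the paper's region calculus and movement axioms are far richer than this rectangle test):

        def inside(pos, region):
            # Region modelled as an axis-aligned rectangle (xmin, ymin, xmax, ymax).
            x, y = pos
            xmin, ymin, xmax, ymax = region
            return xmin <= x <= xmax and ymin <= y <= ymax

        FREE_SPACE = (0.0, 0.0, 10.0, 10.0)  # assumed obstacle-free region

        def move_forward(pos, step=1.0):
            assert inside(pos, FREE_SPACE), "precondition violated"
            new_pos = (pos[0] + step, pos[1])
            assert inside(new_pos, FREE_SPACE), "postcondition violated"
            return new_pos

        print(move_forward((1.0, 1.0)))  # (2.0, 1.0)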

    Earlier

    OpenBudgets.eu: A Distributed Open-Platform for Managing Heterogeneous Budget Data 13th International Conference on Semantic Systems - Posters & Demos 2017(link) Musyaffa, F. A., Orlandi, F., Dong, Tiansi, Halilaj, L.
    Cross-Domain Cue Switching Cross-Lingual Cross-Media Content Linking: Annotations and Joint Representations (Dagstuhl Seminar 15201) 2015(link) Dong, Tiansi
    A Novel Machine Translation Method for Learning Chinese as a Foreign Language CICLing 2014(link) Dong, Tiansi, Armin B. Cremers
    Slow Intelligent Segmentation of Chinese Sentences using Conceptual Interval The 19th International Conference on Distributed Multimedia Systems 2013(link) Dong, Tiansi and Cui, Peiling
    Relating Slow Intelligence Research to Bilingualism The 18th International Conference on Distributed Multimedia Systems 2012(link) Dong, Tiansi and Gloeckner, Ingo
    LogAnswer in Question Answering Forums The 3rd International Conference on Agents and Artificial Intelligence 2011(link) Pelzer, B., Gloeckner, Ingo, and Dong, Tiansi
    A Natural Language Question Answering System as a Participant in Human Q&A Portals IJCAI 2011(link) Dong, Tiansi, Furbach, U., Gloeckner, I., and Pelzer, B.
    Word Expert Translation from German into Chinese in the Slow Intelligence Framework The 17th International Conference on Distributed Multimedia Systems 2011(link) Dong, Tiansi and Gloeckner, I.
    Qualitative Spatial Knowledge Acquisition Based on the Connection Relation The Third International Conference on Advanced Cognitive Technologies and Applications 2011(link) Dong, Tiansi and Vor der Brueck, T.
    Modeling Human Intelligence as A Slow Intelligence System The 16th International Conference on Distributed Multimedia Systems 2010(link) Dong, Tiansi
    A Common Sense Approach to Representing Spatial Knowledge between Extended Objects Novel Approaches in Cognitive Informatics and Natural Intelligence 2010(link) Dong, Tiansi
    A Comment on RCC: from RCC to RCC++ Journal of Philosophical Logic 37(4): 319-352 (2008)(link) Dong, Tiansi
    A Uniform Framework for Orientation Relations based on Distance Comparison the 7th IEEE International Conference on Cognitive Informatics 2008(link) Dong, Tiansi and Guesgen, W. Hans
    Cognitive Prism: a Bridge between Meta Cognitive Model and Higher Cognitive Models the 7th IEEE International Conference on Cognitive Informatics 2008(link) Dong, Tiansi
    Towards a Spatial Representation for the Meta Cognitive Process Layer of Cognitive Informatics the 6th IEEE International Conference on Cognitive Informatics 2007(link) Dong, Tiansi
    The Nine Comments on the RCC Theory AAAI'07 Workshop on Spatial and Temporal Reasoning 2007(link) Dong, Tiansi
    Is an Orientation Relation a Distance Comparison Relation? IJCAI'07 Workshop on Spatial and Temporal Reasoning 2007 Dong, Tiansi and Guesgen, W.H.
    The Theory of Cognitive Prism - Recognizing Variable Vista Spatial Environments FLAIRS 2006 (link) Dong, Tiansi
    SNAPVis and SPANVis: Ontologies for Recognizing Variable Vista Spatial Environments Spatial Cognition IV: Reasoning, Action, Interaction, International Conference on Spatial Cognition 2005 (link) Dong, Tiansi
    A Computational Approach to Distinguish Similar Assemblies of Circles European Cognitive Science Conference 2003 (link) Dong, Tiansi

    Books

    A Geometric Approach to the Unification of Symbolic Structures and Neural Networks(link) Springer monograph 2021 Tiansi Dong
    Recognizing Variable Environments -- The Theory of Cognitive Prism(link) Springer monograph 2012 Tiansi Dong

    Book Chapters

    Dong, Tiansi (2010). Cognitive Prism - More than a Metaphor of Metaphor. In: Y. Wang, D. Zhang, and W. Kinsner (Eds.), Advances in Cognitive Informatics and Cognitive Computing.