Endowing machines with knowledge has long been regarded as one of the central goals of AI. To fulfil this aim, researchers have built many knowledge bases at different scales for different knowledge types. Traditionally, knowledge is represented for reasoning by symbols and their relations. Large language models (LLMs), by contrast, follow a quite different paradigm, in the tradition of connectionism and neural networks: they employ distributional numerical vectors and matrices to represent knowledge. In this way, almost all knowledge types can be represented and embedded in a unified semantic space. Compared with traditional symbolic knowledge bases, however, LLMs still have limitations, including inexplainability, approximation, and hallucination. This lecture starts with simple logical deduction and shows that the traditional neural approach cannot perform explainable and rigorous reasoning. Then, step by step, we motivate geometric solutions by opening the black box of deep learning and revisiting classic research in cognitive psychology. We give detailed instructions on how to develop neuro-symbolic geometric embeddings for logical deduction that inherit the good features of both neural computing and classic symbolic AI. After that, we apply our geometric neural computing to a variety of fundamental cognitive reasoning tasks.
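To make the geometric idea concrete, the following is a minimal sketch (not the lecture's actual system) in which concepts are embedded as n-balls, i.e., a center vector plus a radius, so that syllogistic deduction reduces to checking ball containment and disjointness; the embeddings below are hand-picked for illustration, not learned.

    # Minimal sketch: concepts as n-balls (center vector + radius); syllogistic
    # relations are read off geometrically. Embeddings are hand-picked, not learned.
    import numpy as np

    def contains(outer, inner):
        """Ball `outer` contains ball `inner` iff the distance between centers
        plus the inner radius does not exceed the outer radius ("All inner are outer")."""
        (c1, r1), (c2, r2) = outer, inner
        return np.linalg.norm(c1 - c2) + r2 <= r1

    def disjoint(a, b):
        """Two balls are disjoint iff the distance between centers exceeds
        the sum of their radii ("No a are b")."""
        (c1, r1), (c2, r2) = a, b
        return np.linalg.norm(c1 - c2) > r1 + r2

    # Hand-constructed 2-D balls for "All Greeks are humans", "All humans are mortal".
    mortal = (np.array([0.0, 0.0]), 4.0)
    human  = (np.array([1.0, 0.0]), 2.0)
    greek  = (np.array([1.5, 0.0]), 0.5)
    stone  = (np.array([10.0, 0.0]), 1.0)

    # The Barbara syllogism follows from transitivity of ball containment.
    assert contains(mortal, human) and contains(human, greek)
    assert contains(mortal, greek)   # therefore: all Greeks are mortal
    assert disjoint(human, stone)    # "No humans are stones"

Because containment is transitive by construction, every deduction drawn this way is rigorous and can be inspected visually, which is the feature the lecture's geometric solutions aim to inherit from symbolic AI.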
Neural networks can take almost any kind of sensory input, e.g., visual, audio, video, text, and tactile, and embed it into a vector space. This creates the possibility of developing a unified meaning representation for cross-modal reasoning. In this seminar, we survey multimodal neural computing, with a focus on the relations among language, images, knowledge, and emotion.
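As a hedged illustration of the shared-space idea (the projection matrices below are untrained random stand-ins for learned encoders, and all dimensions are assumptions): modality-specific features are mapped into one vector space, where cosine similarity supports cross-modal comparison.

    # Sketch of a shared cross-modal embedding space. Random linear projections
    # stand in for trained text/image encoders; dimensions are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    TEXT_DIM, IMAGE_DIM, SHARED_DIM = 300, 512, 64

    W_text  = rng.standard_normal((SHARED_DIM, TEXT_DIM))   # stand-in text encoder
    W_image = rng.standard_normal((SHARED_DIM, IMAGE_DIM))  # stand-in image encoder

    def embed(W, x):
        """Project a modality-specific feature vector into the shared space and
        L2-normalize it, so cosine similarity becomes a plain dot product."""
        z = W @ x
        return z / np.linalg.norm(z)

    text_feat  = rng.standard_normal(TEXT_DIM)   # e.g., a sentence representation
    image_feat = rng.standard_normal(IMAGE_DIM)  # e.g., a CNN image representation

    similarity = embed(W_text, text_feat) @ embed(W_image, image_feat)
    print(f"cross-modal cosine similarity: {similarity:+.3f}")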
Most deep-learning systems are black boxes that lack explainability and trustworthiness. In this lecture, I will start by introducing the achievements and limitations of large deep-learning systems, e.g., Watson, GPT, and self-driving cars. I then introduce two different neural approaches to syllogistic reasoning: one from the black-box perspective, the other from the white-box perspective. State-of-the-art XAI approaches will be applied to understand the black-box neural system. The white-box system will be introduced starting from spatial representation and reasoning, continuing with how features are embedded into spheres, and concluding with a neural understanding of humor. Background theories in cognitive science and humor research will be introduced, e.g., dual-process theories of mind (System 1 and System 2), laws of cognition, and the Theory of Verbal Humor.
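For contrast with the geometric sketch above, here is a minimal white-box check of syllogistic validity by exhaustive model enumeration; the statement encodings and the small universe size are illustrative assumptions, not the lecture's system.

    # White-box sketch: a syllogism is valid iff every assignment of terms to
    # sets over a small universe that satisfies the premises also satisfies
    # the conclusion. Universe size is a small illustrative assumption.
    from itertools import product

    UNIVERSE = range(3)

    def all_are(a, b):   return a <= b          # "All A are B"
    def some_are(a, b):  return bool(a & b)     # "Some A are B"
    def no_are(a, b):    return not (a & b)     # "No A are B"

    def valid(premise1, premise2, conclusion):
        """True iff the conclusion holds in every enumerated model of the premises,
        with M the middle term, S the subject, and P the predicate."""
        subsets = [frozenset(s for s, keep in zip(UNIVERSE, bits) if keep)
                   for bits in product([0, 1], repeat=len(UNIVERSE))]
        for S, M, P in product(subsets, repeat=3):
            if premise1(M, P) and premise2(S, M) and not conclusion(S, P):
                return False
        return True

    # Barbara (valid): All M are P, All S are M |= All S are P
    print(valid(all_are, all_are, all_are))    # True
    # Invalid mood: All M are P, Some S are M |/= No S are P
    print(valid(all_are, some_are, no_are))    # False

Unlike a black-box classifier trained to label syllogisms, every verdict here can be traced to an explicit countermodel or to the absence of one, which is the sense in which the white-box approach is explainable.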