Learning distributed representations of concepts using linear relational embedding
- 1 January 2001
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Transactions on Knowledge and Data Engineering
- Vol. 13 (2) , 232-244
- https://doi.org/10.1109/69.917563
Abstract
We introduce linear relational embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between these concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization.
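The scheme described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustrative toy, not the paper's implementation: the data, dimensions, and the use of finite-difference gradients are assumptions made for brevity. Concepts are vectors, the single relation is a matrix, and a softmax over (negative) squared distances between the predicted vector `R @ v_a` and every concept vector serves as a stand-in for the discriminative goodness function, which is then maximized by gradient ascent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy data: 4 concepts and one binary relation, given as
# (subject, object) pairs. Concept 0 relates to 2, and 1 relates to 3.
n_concepts, dim = 4, 2
triples = [(0, 2), (1, 3)]

V = rng.normal(size=(n_concepts, dim))   # concept vectors
R = rng.normal(size=(dim, dim))          # relation matrix

def goodness(V, R):
    """Discriminative goodness (sketch): total log-probability that the
    prediction R @ v_a lands nearer the correct concept than the others."""
    total = 0.0
    for a, b in triples:
        pred = R @ V[a]
        d = np.sum((V - pred) ** 2, axis=1)  # squared distance to each concept
        logits = -d
        logits -= logits.max()               # for numerical stability
        total += logits[b] - np.log(np.sum(np.exp(logits)))
    return total

def ascend(V, R, lr=0.1, steps=500, eps=1e-5):
    """Gradient ascent via central finite differences (simple, not fast)."""
    for _ in range(steps):
        for P in (V, R):
            G = np.zeros_like(P)
            it = np.nditer(P, flags=["multi_index"])
            for _ in it:
                i = it.multi_index
                old = P[i]
                P[i] = old + eps; up = goodness(V, R)
                P[i] = old - eps; down = goodness(V, R)
                P[i] = old
                G[i] = (up - down) / (2 * eps)
            P += lr * G
    return V, R

V, R = ascend(V, R)
for a, b in triples:
    pred = R @ V[a]
    nearest = int(np.argmin(np.sum((V - pred) ** 2, axis=1)))
    print(f"concept {a} -> nearest {nearest} (target {b})")
```

After training, applying the relation matrix to each subject vector should place the prediction nearest the correct object vector, mirroring the "matrix-vector multiplication produces an approximation to the related concept" idea.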