Volker Tresp


Learning with Memory Embeddings and its Application in the Digitalization of Healthcare

Embedding learning, also known as representation learning, has been shown to be able to model large-scale semantic knowledge graphs. A key concept is a mapping of the knowledge graph to a tensor representation whose entries are predicted by models using latent representations of generalized entities. Latent variable models are well suited to deal with the high dimensionality and sparsity of typical knowledge graphs and have successfully been employed in knowledge graph completion and fact extraction from the Web. We have extended the approach to also consider temporal evolutions, temporal patterns, and subsymbolic representations, which permits us to model medical decision processes. In addition, we consider embedding approaches to be a possible basis for modeling cognitive memory functions, in particular semantic and concept memory, episodic memory, sensory memory, short-term memory, and working memory.
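To make the tensor view concrete: a minimal sketch (illustrative only, not the talk's exact model) treats the knowledge graph as a binary tensor X with X[s, p, o] = 1 iff the triple (subject s, predicate p, object o) holds, and predicts each entry from latent entity embeddings and per-relation matrices, in the style of RESCAL-type factorization models. All dimensions and names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: 5 entities, 2 relation types, latent rank 3.
n_entities, n_relations, rank = 5, 2, 3
A = rng.normal(size=(n_entities, rank))          # latent entity embeddings
R = rng.normal(size=(n_relations, rank, rank))   # latent relation matrices

def score(s, p, o):
    """Predicted plausibility of the triple (s, p, o): a_s^T R_p a_o."""
    return A[s] @ R[p] @ A[o]

# Full predicted tensor: one score for every possible triple (s, p, o).
X_hat = np.einsum('sr,prt,ot->spo', A, R, A)
```

The low rank is what handles the sparsity mentioned above: instead of estimating every tensor entry independently, the model generalizes across triples through the shared latent representations.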


Volker Tresp received a Diploma degree from the University of Goettingen, Germany, in 1984 and the M.Sc. and Ph.D. degrees from Yale University, New Haven, CT, in 1986 and 1989, respectively. Since 1989 he has headed various research teams in machine learning at Siemens Research and Technology. He has filed more than 70 patent applications and was Siemens Inventor of the Year in 1996. He has published more than 100 scientific articles and supervised more than 20 Ph.D. theses. The company Panoratio is a spin-off from his team. His research focus in recent years has been "Machine Learning in Information Networks" for modeling knowledge graphs, medical decision processes, and sensor networks. He is the coordinator of one of the first nationally funded Big Data projects for the realization of "Precision Medicine". Since 2011 he has also been a Professor at the Ludwig Maximilian University of Munich, where he teaches an annual course on Machine Learning.

Principal Research Scientist Siemens, Professor @ LMU