Robert Peharz received his PhD from TU Graz (Austria) in
2015. During his PhD he worked on probabilistic graphical models and probabilistic circuits
(sum-product networks), with applications to signal processing.
He was a postdoc at the Medical University of Graz (Austria), working on interdisciplinary
approaches for the early recognition of neural maldevelopment via behavioral neuroscience.
From 2017 to 2018 he was a postdoc in the Machine Learning Group (MLG) at the University of Cambridge,
and from 2018 to 2019 a Marie-Curie Individual Fellow at MLG Cambridge.
From 2019 to 2021 he was an Assistant Professor at the Eindhoven University of Technology.
Currently, he is an Assistant Professor at Graz University of Technology.
Active Bayesian Causal Inference
has been accepted at NeurIPS.
With Christian Toth, Lars Lorch, Christian Knoll, Andreas Krause, Franz Pernkopf, and Julius von Kügelgen.
Antonio Vergari, YooJung Choi, Guy Van den Broeck, and I will give a new version of our
Tutorial on Probabilistic Circuits at NeurIPS.
We will have a virtual workshop on Neuro Causal and Symbolic AI; see the workshop website
for details and submissions.
With Devendra Singh Dhami (TU Darmstadt, hessian.AI), Christina Winkler (TU München),
Thomas Kipf (Google Brain), Matej Zečević (TU Darmstadt),
Petar Veličković (DeepMind, University of Cambridge), and
Kristian Kersting (TU Darmstadt, hessian.AI)
Conditional sum-product networks: Modular probabilistic circuits via gate functions
has been published in IJAR.
With Xiaoting Shao, Alejandro Molina, Antonio Vergari, Karl Stelzner, Thomas Liebig, and Kristian Kersting.
I re-joined my Alma Mater, TU Graz, as an Assistant Professor.
Our workshop Tractable Probabilistic Modeling
has been accepted and will be hosted at the Conference on Uncertainty in Artificial Intelligence (UAI).
Organizers: Antonio Vergari, Tahrima Rahman, Robert Peharz, Alejandro Molina, Pedram Rooshenas, Daniel Lowd, Zoubin Ghahramani
Our paper Novel AI driven approach to General Movement Assessment
has been accepted in Scientific Reports (Springer Nature).
With Peter Marschik, Simon Reich, Dajie Zhang, Tomas Kulvicius, Sven Bölte, Karin Nielsen-Saines, Florian Pokorny, Robert Peharz, Luise Poustka, Florentin Wörgötter, and Christa Einspieler
Last quarter, I taught Data Mining and Machine Learning together with Sibylle Hess.
You can watch some of my recorded lectures on YouTube:
Exact and efficient probabilistic inference and learning are increasingly important when we need to make complex decisions under uncertainty in real-world scenarios where approximations are not a viable option. In this tutorial, we will introduce probabilistic circuits (PCs) as a unified computational framework for representing and learning deep probabilistic models that guarantee tractable inference. Unlike other deep neural estimators, such as variational autoencoders and normalizing flows, PCs enable large classes of tractable inference with little or no compromise in model expressiveness. Moreover, after presenting a unified view of learning PCs from data and several real-world applications, we will cast many popular tractable models in the framework of PCs, leveraging it to theoretically trace the boundaries of tractable probabilistic inference.
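To make the notion of tractability concrete, here is a minimal toy sketch (illustrative only, not part of the tutorial materials): a tiny PC over two binary variables, where exact marginalization amounts to setting the leaves of the marginalized variable to 1 and running a single feed-forward pass.

```python
# Toy probabilistic circuit over two binary variables X1, X2 (illustrative only).
# A sum node mixes two product nodes; each product factorizes over X1 and X2.

def leaf(p_true):
    """Bernoulli leaf: P(x) for an observed value, or 1.0 when marginalized."""
    return lambda x: 1.0 if x is None else (p_true if x == 1 else 1.0 - p_true)

l1a, l1b = leaf(0.8), leaf(0.3)   # two leaf distributions over X1
l2a, l2b = leaf(0.6), leaf(0.1)   # two leaf distributions over X2

def circuit(x1, x2):
    prod_a = l1a(x1) * l2a(x2)    # product node: factorized distribution
    prod_b = l1b(x1) * l2b(x2)
    return 0.4 * prod_a + 0.6 * prod_b   # sum node: mixture, weights sum to 1

print(circuit(1, 0))     # exact joint probability P(X1=1, X2=0)
print(circuit(1, None))  # exact marginal P(X1=1), at the same cost
```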
IJCAI'20 Probabilistic Circuits: Representations, Inference, Learning and Theory (Website)
ECAI'20 Probabilistic Circuits: Representations, Inference, Learning and Theory (Website | Slides)
ECML PKDD'20 Probabilistic Circuits: Representations, Inference, Learning and Theory (Website | Slides)
AAAI'20 Probabilistic Circuits: Representations, Inference, Learning and Theory (Website)
NeurIPS'20 Joints in Random Forests
Alvaro Correia, Robert Peharz, Cassio P. de Campos
Decision trees and random forests are some of the most widely used machine learning models,
and random forests are one of the strongest classifiers on tabular data.
But did you know that there has always been a generative model hiding in your random forest?
Here we show how to exploit this fact at little extra cost.
Specifically, we show how decision trees can be translated into probabilistic circuits (PCs), and random forests
into an ensemble of PCs.
This extends the capabilities of standard random forests, enabling consistent treatment of
missing data (via probabilistic inference) and outlier detection.
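As a rough illustration (a simplified sketch, not the paper's construction, which compiles the tree into a proper PC): a fitted decision tree can be read as a mixture over its leaves, each weighted by the fraction of training data it captures and equipped with a simple density over the features. Gaussian leaf densities below are an assumption for brevity.

```python
# Hedged sketch: turning a fitted decision tree into a generative model by
# reading it as a mixture over leaves (simplified from the paper's PC view).

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from scipy.stats import norm

X = np.random.randn(500, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
leaf_ids = tree.apply(X)  # leaf index reached by every training point

# Per-leaf mixture weight and per-feature Gaussian (an assumption here; the
# paper derives the construction via probabilistic circuits).
leaves = np.unique(leaf_ids)
weights = {l: np.mean(leaf_ids == l) for l in leaves}
params = {l: (X[leaf_ids == l].mean(0), X[leaf_ids == l].std(0) + 1e-6)
          for l in leaves}

def density(x):
    """Generative density p(x) implied by the tree: a sum over leaves."""
    return sum(w * np.prod(norm.pdf(x, *params[l])) for l, w in weights.items())

print(density(np.array([0.5, -0.2])))  # low values flag potential outliers
```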
ICML'20 Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits
Robert Peharz, Steven Lang, Antonio Vergari, Karl Stelzner, Alejandro Molina, Martin Trapp, Guy Van den Broeck, Kristian Kersting, Zoubin Ghahramani
Probabilistic circuits are tractable probabilistic models and allow exact and efficient inference.
However, they used to be slow in comparison to deep neural networks, since their special structure
(which makes them tractable in the first place) does not map nicely onto deep learning frameworks
such as PyTorch or TensorFlow. Here we propose a "smart" implementation of PCs, squeezing all PC
operations into a handful of large einsum operations.
The result: dramatic speedups and memory savings, enabling large-scale
generative modeling, including data imputation and outlier detection.
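A hedged sketch of the trick (simplified: actual EiNets work in the log domain with a numerically stable log-einsum-exp operation): the sums and products of many PC nodes collapse into a single torch.einsum call over dense tensors. All shapes below are illustrative.

```python
import torch

B, K, N = 64, 8, 32                     # batch, densities per node, nodes
left  = torch.rand(B, K, N)             # densities from left child nodes
right = torch.rand(B, K, N)             # densities from right child nodes

# Mixture weights, normalized over the K*K product combinations per node.
logits = torch.randn(K, K, K, N)
W = torch.softmax(logits.reshape(K, K * K, N), dim=1).reshape(K, K, K, N)

# One einsum computes K*N sum nodes over K*K products each, for the whole batch:
# out[b, k, n] = sum_{i, j} left[b, i, n] * right[b, j, n] * W[k, i, j, n]
out = torch.einsum('bin,bjn,kijn->bkn', left, right, W)
print(out.shape)  # torch.Size([64, 8, 32])
```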
AISTATS'20 Deep Structured Mixtures of Gaussian Processes
Martin Trapp, Robert Peharz, Franz Pernkopf, Carl Edward Rasmussen
Gaussian processes (GPs) are a powerful tool for Bayesian regression, as they represent a prior
over functions, which gets updated to a posterior via Bayesian inference.
Interestingly, this Bayesian update is tractable, taking cubic time and quadratic memory.
While polynomial, this complexity is still prohibitive for large data, i.e., beyond a few thousand data points.
Here we marry GPs with probabilistic circuits (PCs), yielding Deep Structured Mixtures of Gaussian Processes (DSMGPs),
a new process model which elegantly mixes the tractable inference mechanisms of GPs and PCs.
The new model fits data better than several GP approximations while having comparable runtimes.
DSMGPs are also more data efficient than these approximate techniques and allow modeling heteroscedastic noise.
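For context, this is the standard exact GP update whose cubic cost motivates the paper (plain GP regression with a squared-exponential kernel, not the DSMGP construction itself; DSMGPs arrange many such local GPs as leaves of a circuit):

```python
import numpy as np

def gp_posterior(X, y, Xs, lengthscale=1.0, noise=0.1):
    """Exact GP regression posterior at test inputs Xs (1-D inputs)."""
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / lengthscale) ** 2)
    K = k(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                # O(n^3): the bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X, Xs)
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = k(Xs, Xs) - v.T @ v                # O(n^2) memory for K
    return mean, np.diag(var)

X = np.linspace(0, 5, 100)
y = np.sin(X) + 0.1 * np.random.randn(100)
mu, s2 = gp_posterior(X, y, np.linspace(0, 5, 20))
```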
ECML PKDD'20 PS3: Batch Mode Active Learning for Hate Speech Detection in Imbalanced Text Data
Ricky Maulana Fajri, Samaneh Khoshrou, Robert Peharz, Mykola Pechenizkiy
The steadily growing prominence of social media exacerbates the problem of hostile
content and hate speech.
Automatically recognizing hate speech is difficult, since the difference between hate speech
and non-hate speech can be subtle.
Moreover, hate speech is relatively rare, leading to a highly class-skewed problem.
We developed PS3, a simple and effective batch mode active learning solution, which
updates the detection system by querying human domain-experts to annotate carefully selected
batches of data instances.
Despite its simplicity, PS3 sets the state of the art on several hate speech datasets.
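For readers unfamiliar with the setting, here is a generic batch-mode active-learning loop (a hedged sketch with least-confidence sampling; PS3's actual selection strategy, which accounts for class imbalance, is described in the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(50, 5))
y_labeled = (X_labeled[:, 0] > 0).astype(int)   # toy labels
X_pool = rng.normal(size=(5000, 5))             # large unlabeled pool

model = LogisticRegression().fit(X_labeled, y_labeled)

def query_batch(model, X_pool, batch_size=10):
    """Pick the pool items the current model is least certain about."""
    proba = model.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)          # least-confidence score
    return np.argsort(uncertainty)[-batch_size:]   # indices for the annotators

idx = query_batch(model, X_pool)
# ...have domain experts label X_pool[idx], add to the labeled set,
# retrain, and repeat.
```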
ICML'19 Hierarchical Decompositional Mixtures of Variational Autoencoders
Ping Liang Tan, Robert Peharz
Variational autoencoders (VAEs) are simple and powerful neural density estimators and have
received a lot of attention recently.
However, inference and learning in VAEs are still challenging due to the intractable nature of the
model, especially in high-dimensional data spaces.
Here we propose a divide-and-conquer approach and break up the overall density estimation
problem into many sub-problems, which are each modeled with a set of "small VAEs."
Learning and inference in these VAE components are orchestrated via probabilistic circuits
(PCs), yielding hierarchical decompositional mixtures of VAEs.
This novel model effectively uses hybrid exact-approximate inference (exact from PCs,
approximate from VAEs) in a natural way.
We show that our model outperforms classical VAEs in almost all of our experimental settings.
Moreover, we show that our model is highly data efficient and degrades very gracefully in
extremely low-data regimes.
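A structural sketch of the divide-and-conquer idea (a hypothetical simplification: the leaf models below are diagonal Gaussians standing in for the paper's "small VAEs"): a sum node mixes product nodes, and each product node splits the dimensions between leaf density estimators.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
left_dims, right_dims = np.arange(5), np.arange(5, 10)  # dimension split

def leaf(Xs):
    """Stand-in for a 'small VAE' trained on a subset of dimensions."""
    mu, sd = Xs.mean(0), Xs.std(0) + 1e-6
    return lambda x: np.sum(-0.5 * ((x - mu) / sd) ** 2
                            - np.log(sd) - 0.5 * np.log(2 * np.pi))

# Two product nodes (each factorizes over the dimension split), one sum on top.
halves = np.array_split(rng.permutation(len(X)), 2)     # two data clusters
components = [(leaf(X[h][:, left_dims]), leaf(X[h][:, right_dims]))
              for h in halves]

def log_density(x):
    comps = [l(x[left_dims]) + r(x[right_dims]) for l, r in components]
    return np.logaddexp(*comps) + np.log(0.5)           # uniform mixture weights

print(log_density(X[0]))
```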
UAI'19 Random Sum-Product Networks: A Simple and Effective Approach to Probabilistic Deep Learning
Robert Peharz, Antonio Vergari, Karl Stelzner, Alejandro Molina, Xiaoting Shao, Martin Trapp, Kristian Kersting, Zoubin Ghahramani
Probabilistic circuits (PCs) such as sum-product networks (SPNs) are expressive probabilistic
models with a rich set of exact and efficient inference routines.
Their structure, however, does not easily map to deep learning frameworks such as TensorFlow.
Here we use an unspecialized random SPN structure which maps easily onto these frameworks and
can be scaled to millions of parameters.
These Random and Tensorized SPNs (RAT-SPNs) often perform on par with state-of-the-art
deep neural networks on a diverse range of generative and discriminative tasks.
Moreover, RAT-SPNs can naturally treat missing data and be used for outlier analysis and detection.
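A hedged sketch of the random structure (simplified from the paper's region graphs): recursively split the variable scope at random; product nodes factorize over the splits, sum nodes mix repeated copies, and the resulting parameters live in dense tensors that map well onto deep-learning frameworks.

```python
import random

def random_region_graph(scope, depth):
    """Randomly bisect the variable scope, as used to template a RAT-SPN."""
    if depth == 0 or len(scope) <= 1:
        return ('leaf', scope)                   # input distributions live here
    shuffled = random.sample(scope, len(scope))  # random balanced split
    mid = len(shuffled) // 2
    left, right = sorted(shuffled[:mid]), sorted(shuffled[mid:])
    # A product node factorizes over the two parts; sum nodes mix copies of it.
    return ('partition',
            random_region_graph(left, depth - 1),
            random_region_graph(right, depth - 1))

print(random_region_graph(list(range(8)), depth=2))
```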
Senior Committee Member
IJCAI (2019, 2020)
Reviewed for Conferences
ICML (2013, 2014, 2019, 2020 [top 33%])
NeurIPS (2018 [top 30%], 2019 [top 50%])
AAAI (2019, 2021)
IJCAI-ECAI (2018 [top 11.5%])
CVPR (2015, 2016, 2017, 2018)
ICCV/ECCV (2015, 2016, 2018)
Interspeech, ICASSP (2013, 2014, 2016)
Reviewed for Journals
Transactions on Artificial Intelligence
Journal of Machine Learning Research
IEEE Transactions on Audio, Speech and Language Processing
Data Mining and Knowledge Discovery, Springer
Machine Learning, Springer
Neural Computation, The MIT Press
International Journal of Approximate Reasoning, Elsevier
Expert Systems With Applications, Elsevier
Signal Processing, Elsevier
Computational Optimization and Applications, Springer