AI Research Group

Artificial Intelligence & Data Science

About us

The research group was established to serve as a knowledge and know-how center for state-of-the-art AI techniques at Eötvös Loránd University. This initiative is a joint effort of the Department of Computer Science, Institute of Mathematics, Eötvös Loránd University and the Alfréd Rényi Institute of Mathematics, Hungarian Academy of Sciences.

At the Department of Computer Science, Institute of Mathematics, Eötvös Loránd University, AI, data and network science have been active topics of research and teaching for more than a decade. We have a long tradition of and expertise in modelling and solving problems and use-cases from a wide range of domains, with the objective of maintaining a good balance between mathematical and engineering approaches. Our expertise has been called upon in several R&D projects in the fields of telecommunication, finance, security and the life sciences.

Our machine and deep learning competencies include state-of-the-art image, sequential and textual data analysis and retrieval techniques; unsupervised learning; and generative and adversarial approaches. Further special strengths of the department are combinatorics, graph/network theory and algorithms, optimization and their applications.

The research group is involved in two larger AI R&D initiatives. The first is the AI4EU project, which aims to build the European AI on-demand platform, a hub for AI practitioners and researchers in the European AI ecosystem. As a member of the AI4EU consortium, our tasks are to conduct research on AI and to help build the AI4EU community. The AI4EU project is supported by the European Union under the Horizon 2020 program, grant No. 825619.

The other project concerns the mathematical tools and theory of artificial intelligence, especially of machine and deep learning. One direction of research is to bridge the gap between mathematical theory and machine learning practice by exploiting newly discovered deep connections between fundamental results in the study of large networks and the more applied domain of machine learning. In addition, pilot interdisciplinary projects are carried out to directly demonstrate the practical applicability of the theoretical research. The project, entitled The mathematical foundations of artificial intelligence, runs within the framework of the National Excellence Program 2018-1.2.1-NKP of the Hungarian Research, Development and Innovation Office, between 2019 and 2021.


Deep Learning

Data Science

Network Science

Visual Analytics

Research

Automated theorem proving

Automated Theorem Proving (ATP) and Deep Learning (DL) are two important branches of artificial intelligence, both of which have undergone huge development over the past decade. A novel and exciting research direction is to find a synthesis of these two domains. One possible approach is to use an intelligent learning system to guide the theorem prover as it explores the search space of possible derivations. Our group tackled the question of how to generalize from short proofs to longer ones with a strongly related structure. This is an important task, since proving interesting problems typically requires thousands of steps, while current ATP methods only find proofs that are at most a couple dozen steps long.
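
To make the search-guidance idea concrete, here is a minimal Python sketch of a best-first proof search driven by a learned scoring function. The prover interface (expand, is_proof) and the score function are hypothetical stand-ins for illustration, not our actual system; the hope behind generalization is that a scorer trained on short proofs still ranks states sensibly in the much deeper searches needed for longer, structurally related proofs.

```python
import heapq
from typing import Callable, Hashable, Iterable, Optional

def guided_proof_search(
    initial_state: Hashable,
    expand: Callable[[Hashable], Iterable[Hashable]],  # inference steps applicable to a state
    is_proof: Callable[[Hashable], bool],              # e.g. the empty clause has been derived
    score: Callable[[Hashable], float],                # learned estimate: lower = more promising
    step_limit: int = 10_000,
) -> Optional[Hashable]:
    """Best-first search over derivations, ordered by a learned scoring model.

    The learned `score` plays the role of a hand-crafted selection heuristic:
    states the model considers closer to a proof are expanded first.
    """
    frontier = [(score(initial_state), 0, initial_state)]
    seen = {initial_state}
    counter = 1  # tie-breaker so heapq never has to compare states directly
    for _ in range(step_limit):
        if not frontier:
            return None  # search space exhausted without finding a proof
        _, _, state = heapq.heappop(frontier)
        if is_proof(state):
            return state
        for nxt in expand(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), counter, nxt))
                counter += 1
    return None  # step budget exceeded
```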


The project has a homepage and a public code repository. This work has been presented at the Bumerang workshop, the Conference on Artificial Intelligence and Theorem Proving, and the Dagstuhl Logic and Learning Seminar. A paper about this project is currently under review, and was accepted as an oral presentation at the Knowledge Representation & Reasoning Meets Machine Learning Workshop at NeurIPS 2019.

Autoencoders and representations

Wasserstein Autoencoders are autoencoders with the extra goal of making the pushforward (the latent image) of the data distribution close to some prior. For such models, the regularization term enforcing this closeness is based on the latent image of a single minibatch. (In effect, it is some normality test statistic based on a single minibatch as test sample.) We argue that when the size of the minibatch is of the same magnitude as the latent dimension, such statistics are not powerful enough. Our ongoing project investigates models where the regularization term is a function of the latent image of the full dataset.
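As an illustration of the kind of single-minibatch regularizer discussed above, here is a minimal NumPy sketch: a (biased) maximum mean discrepancy statistic between a minibatch of latent codes and samples drawn from a standard normal prior. The inverse multiquadric kernel and its scale are illustrative assumptions, not a description of our models.

```python
import numpy as np

def imq_kernel(x, y, scale=1.0):
    """Inverse multiquadric kernel k(x, y) = C / (C + ||x - y||^2)."""
    sq_dists = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    c = 2.0 * x.shape[1] * scale  # C ~ 2 * latent_dim, a common heuristic for an N(0, I) prior
    return c / (c + sq_dists)

def mmd_penalty(z_batch, rng=np.random.default_rng(0)):
    """Biased MMD^2 estimate between a minibatch of latent codes and the N(0, I) prior.

    This is the kind of single-minibatch normality test statistic discussed above:
    with the batch size comparable to the latent dimension, its power is limited.
    """
    prior = rng.standard_normal(z_batch.shape)  # one prior sample per latent code
    k_zz = imq_kernel(z_batch, z_batch)
    k_pp = imq_kernel(prior, prior)
    k_zp = imq_kernel(z_batch, prior)
    return k_zz.mean() + k_pp.mean() - 2.0 * k_zp.mean()

# Example: batch size 64, latent dimension 64 -- the regime the paragraph warns about.
codes = np.random.default_rng(1).standard_normal((64, 64))
print(mmd_penalty(codes))
```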

Medical image computing

Medical imaging is one of the areas of medicine where deep learning can play a huge role. The aim of our research is twofold: first, to address pressing problems within the field, such as inconsistent inter-rater reliability and the declining number of practicing radiologists, by introducing deep-learning-backed automation into the diagnostic pipeline; and second, to improve upon existing state-of-the-art methods by studying the application of Generative Adversarial Networks (GANs) to medical imaging data. We are currently researching structure-correcting adversarial networks for X-ray segmentation tasks, as well as super-resolution methods for computed tomography scans.

Self-healing of networks

TBW

Study Groups

Natural Language Processing

TBW

Usefulness of neurons

We are interested in whether some neurons in a neural network can be classified as useful or useless. We aim to predict this usefulness during the training phase of the neural network, ideally using easy-to-calculate methods. This interactive visualization shows one of our first results: the usefulness measures based on the loss and on the accuracy are correlated. Our current method is able to predict the established usefulness measure using only the network's internal data collected during the training phase. The codebase of this project can be found here.
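
The usefulness measure itself is not defined here; purely as a hypothetical illustration, the PyTorch sketch below scores a hidden unit by the drop in accuracy when that unit is ablated (zeroed out), which is one common way such a ground-truth measure can be defined.

```python
import torch
import torch.nn as nn

def ablation_usefulness(model: nn.Sequential, layer_idx: int, unit: int,
                        inputs: torch.Tensor, labels: torch.Tensor) -> float:
    """Hypothetical 'ground truth' usefulness of one hidden unit:
    accuracy of the intact model minus accuracy with that unit zeroed out."""
    def accuracy(net):
        with torch.no_grad():
            return (net(inputs).argmax(dim=1) == labels).float().mean().item()

    baseline = accuracy(model)

    # Zero the unit's activations via a forward hook on the chosen layer.
    def ablate(_module, _inp, out):
        out = out.clone()
        out[:, unit] = 0.0
        return out

    handle = model[layer_idx].register_forward_hook(ablate)
    ablated = accuracy(model)
    handle.remove()
    return baseline - ablated

# Toy usage on random data (illustration only).
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))
x, y = torch.randn(128, 20), torch.randint(0, 3, (128,))
print(ablation_usefulness(net, layer_idx=1, unit=5, inputs=x, labels=y))
```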

Model visualization

Deep neural networks were long considered notoriously opaque, hard-to-interpret systems, but in the last few years, deep-dream-based algorithms have made great progress in helping us understand the internal structure of trained vision models. These algorithms perform gradient ascent on the pixel inputs to maximize a neuron's activation. Our aim is to understand deep vision models better by utilizing such tools. One area that is, to the best of our knowledge, currently uncharted is transfer learning: how do specific neurons adapt when the classification task changes, say, from classifying animals to classifying flowers?
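
Below is a minimal PyTorch sketch of the activation-maximization idea described above: gradient ascent on the input pixels to maximize the mean activation of one channel in a pretrained vision model. The choice of torchvision's VGG16 and of the particular layer and channel is an illustrative assumption.

```python
import torch
from torchvision.models import vgg16

def maximize_channel(layer_idx: int = 10, channel: int = 42,
                     steps: int = 200, lr: float = 0.05) -> torch.Tensor:
    """Gradient ascent on pixels to maximize one channel's mean activation."""
    # Pretrained convolutional trunk, frozen: we optimize the image, not the weights.
    model = vgg16(weights="IMAGENET1K_V1").features.eval()
    for p in model.parameters():
        p.requires_grad_(False)

    img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise
    optimizer = torch.optim.Adam([img], lr=lr)

    activation = {}
    def hook(_module, _inp, out):
        activation["value"] = out
    handle = model[layer_idx].register_forward_hook(hook)

    for _ in range(steps):
        optimizer.zero_grad()
        model(img)
        # Negative sign: the optimizer minimizes, we want to maximize the activation.
        loss = -activation["value"][0, channel].mean()
        loss.backward()
        optimizer.step()

    handle.remove()
    return img.detach()

dream = maximize_channel()
print(dream.shape)  # torch.Size([1, 3, 224, 224])
```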

Software mining

TBW

Partial difference equations for teaching DNNs

TBW

Learning theory

TBW

Members

Senior researchers

Adrián Csiszárik

Research Fellow
PhD student

András Lukács

Assistant Professor

Péter Sziklai

Professor

Dániel Varga

Senior Research Fellow

Zsolt Zombori

Research Fellow

PhD students, project leaders, researchers

Judit Ács

Technical Lead
PhD Student

Bea Benkő

PhD Student

Bálint Csanády

PhD Student

Domokos Czifra

Researcher

Imre Fekete

Project Lead
Research Fellow

Gusztáv Gaál

Project Lead
MSc Student

Melinda Kiss

PhD Student

Balázs Maga

PhD Student

András Molnár

PhD Student

Márton Neogrády-Kiss

PhD Student

Students

Márton Csillag

Junior Researcher

Dániel Lévai

MSc Student

Róbert Szabó

BSc Student

Partners

Contact

Address:
Pázmány Péter sétány 1/C
H-1117 Budapest, Hungary

Phone:
+36-30-4789663

Email:
mathinst[at]math.elte.hu