CIAS and DAIS at the Corvinus University of Budapest

Revealing the Transparent Logic Behind AI Decisions

Abstract
This talk explores a novel perspective on explainable AI by focusing on the logical and causal relations that connect inputs to outputs in AI systems. Rather than relying solely on statistical or black-box interpretations, we seek to reveal transparent if–then structures that mirror human-like reasoning. By identifying these underlying decision rules, we aim to reconstruct interpretable mappings that expose how specific input features lead to specific outcomes. This approach not only enhances trust and accountability but also provides a formal framework for analysing the stability and generalisability of AI behaviour. The talk will illustrate this concept through selected examples and propose methodological steps for extracting structured logic from complex models. Ultimately, we argue that uncovering such transparent logic is essential for the integration of AI into safety-critical and ethically sensitive domains.
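To make the idea of extracting if–then structure concrete, the sketch below shows one common, generic approach: fitting an interpretable surrogate (a shallow decision tree) to a black-box model's predictions and printing the resulting rules. This is an illustration only, not the method presented in the talk; the dataset (Iris), the random-forest "black box", the tree depth, and the scikit-learn API are all assumptions made for the example.

```python
# Illustrative sketch: recover human-readable if-then rules from a
# black-box model by training a shallow decision-tree surrogate on its outputs.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Stand-in for an arbitrary opaque AI system.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a depth-limited tree trained to imitate the black box,
# yielding explicit if-then decision rules over the input features.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Print the extracted rule structure.
print(export_text(surrogate, feature_names=list(data.feature_names)))

# Fidelity: fraction of inputs on which the rules reproduce the black box.
print("fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())
```

The fidelity score printed at the end gives a rough measure of how faithfully the transparent rules reproduce the original model's behaviour on the observed inputs.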


Biography
Peter Baranyi, a notable Hungarian scholar, has made significant contributions to nonlinear control theory and modelling. Among his key inventions is the TP model transformation, an extension of higher-order singular value decomposition to continuous functions. This transformation is central to nonlinear control design theory and facilitates new optimisation techniques. Baranyi's scientific achievements have been recognised with several prestigious awards, including the Investigator Award from Sigma Xi, the Kimura Award, and the International Dennis Gabor Award. He has published over 100 journal papers and authored four books, and his work has earned an h-index of 51.