{"id":992,"date":"2024-07-16T15:03:08","date_gmt":"2024-07-16T15:03:08","guid":{"rendered":"https:\/\/scitope.com\/ait24\/?page_id=992"},"modified":"2025-08-01T06:49:10","modified_gmt":"2025-08-01T06:49:10","slug":"prof-helen-meng-2","status":"publish","type":"page","link":"https:\/\/scitope.com\/ait25\/?page_id=992","title":{"rendered":"Prof. P\u00e9ter Baranyi"},"content":{"rendered":"<p>[vc_row][vc_column][vc_single_image image=&#8221;492&#8243; alignment=&#8221;center&#8221; style=&#8221;vc_box_circle_2&#8243;][vc_column_text]<\/p>\n<h5 style=\"text-align: center;\"><span style=\"font-size: 20px;\"><em>CIAS and DAIS at the Corvinus University of Budapest<\/em><\/span><\/h5>\n<p>[\/vc_column_text][vc_column_text]<\/p>\n<h2 style=\"text-align: center;\">Revealing the Transparent Logic Behind AI Decisions<\/h2>\n<p>[\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]<span style=\"font-size: 15px;\"><strong>Abstract<br \/>\n<\/strong><\/span>This talk explores a novel perspective on explainable AI by focusing on the logical and causal relations that connect inputs to outputs in AI systems. Rather than relying solely on statistical or black-box interpretations, we seek to reveal transparent if\u2013then structures that mirror human-like reasoning. By identifying these underlying decision rules, we aim to reconstruct interpretable mappings that expose how specific input features lead to specific outcomes. This approach not only enhances trust and accountability but also provides a formal framework for analysing the stability and generalizability of AI behaviour. The talk will illustrate this concept through selected examples and propose methodological steps for extracting structured logic from complex models. 
Ultimately, we argue that uncovering such transparent logic is essential for the integration of AI into safety-critical and ethically sensitive domains.<\/p>\n<p><span style=\"font-size: 15px;\"><br \/>\n<\/span><span style=\"font-size: 15px;\"><strong>Biography<br \/>\n<\/strong><\/span>P\u00e9ter Baranyi, a notable Hungarian scholar, has made significant contributions to the fields of nonlinear control theory and modelling. Among his key inventions is the TP model transformation, a sophisticated form of higher-order singular value decomposition for continuous functions. This transformation is crucial in the development of nonlinear control design theories and facilitates new optimisation techniques. Baranyi\u2019s scientific achievements have been recognised with several prestigious awards, including the Investigator Award from Sigma Xi, the Kimura Award, and the International Dennis Gabor Award. He has published over 100 journal papers and authored four books, and has an h-index of 51.[\/vc_column_text][vc_column_text]<\/p>\n<h3 style=\"text-align: center;\"><a href=\"https:\/\/scholar.google.hu\/citations?user=5Y9kzEEAAAAJ&amp;hl=hu&amp;oi=ao\"><span style=\"color: #db931b;\">Scholar Profile<\/span><\/a><\/h3>\n<p>[\/vc_column_text][\/vc_column][\/vc_row]<\/p>\n","protected":false},"excerpt":{"rendered":"<p>[vc_row][vc_column][vc_single_image image=\"492\" alignment=\"center\" style=\"vc_box_circle_2\"][vc_column_text] CIAS and DAIS at the Corvinus University of Budapest [\/vc_column_text][vc_column_text] Revealing the Transparent Logic Behind AI Decisions [\/vc_column_text][\/vc_column][\/vc_row][vc_row][vc_column][vc_column_text]Abstract This talk explores a novel perspective on explainable AI by focusing on the logical and causal relations that connect inputs to outputs in AI systems. 
Rather than relying solely on statistical or black-box [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"footnotes":""},"class_list":["post-992","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages\/992","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=992"}],"version-history":[{"count":7,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages\/992\/revisions"}],"predecessor-version":[{"id":1328,"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=\/wp\/v2\/pages\/992\/revisions\/1328"}],"wp:attachment":[{"href":"https:\/\/scitope.com\/ait25\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=992"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}