imPACT: The Internet of tomorrow: Privacy, Accountability, Compliance and Trust
- Counterfactual Explanations for Recommenders
A provider-side mechanism to produce tangible explanations for end users, where an explanation is a minimal set of the user's own actions that, if removed, would change the recommendation to a different item.
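A minimal sketch of this definition (not the project's actual search algorithm): given a black-box recommender, try removing ever-larger subsets of the user's actions until the top recommendation flips. The names `counterfactual_explanation`, `toy_recommend` and the action format are invented for the example.

```python
from collections import Counter
from itertools import combinations

def counterfactual_explanation(actions, recommend, max_size=3):
    """Return a minimal set of actions whose removal changes the
    recommendation, by brute force over subsets of increasing size."""
    original = recommend(actions)
    for size in range(1, max_size + 1):
        for subset in combinations(actions, size):
            remaining = [a for a in actions if a not in subset]
            if recommend(remaining) != original:
                return set(subset)  # first hit is minimal by construction
    return None  # no counterfactual within the size budget

def toy_recommend(actions):
    # Hypothetical recommender: suggest the category engaged with most
    counts = Counter(a.split(":")[0] for a in actions)
    return counts.most_common(1)[0][0] if actions else "popular"

actions = ["jazz:like", "jazz:play", "metal:play"]
print(counterfactual_explanation(actions, toy_recommend))
# removing both jazz actions flips the recommendation away from "jazz"
```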
- Credibility Analysis in News Communities
A probabilistic graphical model to jointly identify credible news articles, trustworthy news sources, and expert users by leveraging joint interactions in a news community.
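The mutual reinforcement behind such a model can be illustrated with a plain fixpoint iteration; the project itself uses a probabilistic graphical model, so the update rules and variable names below are only a stand-in.

```python
import numpy as np

def joint_credibility(article_source, user_ratings, iters=50):
    """Fixpoint sketch: article credibility, source trustworthiness and
    user expertise are repeatedly updated from one another.

    article_source -- int array, article_source[a] = source of article a
    user_ratings   -- dict: (user_id, article_id) -> rating in [0, 1]
    """
    n_articles = len(article_source)
    cred = np.full(n_articles, 0.5)                  # article credibility
    trust = np.full(article_source.max() + 1, 0.5)   # source trustworthiness
    expert = {u: 0.5 for u, _ in user_ratings}       # user expertise

    for _ in range(iters):
        for a in range(n_articles):
            votes = [(expert[u], r) for (u, b), r in user_ratings.items() if b == a]
            # Blend the source's trust with expertise-weighted user ratings
            rated = np.average([r for _, r in votes],
                               weights=[w for w, _ in votes]) if votes else 0.5
            cred[a] = 0.5 * trust[article_source[a]] + 0.5 * rated
        for s in range(len(trust)):                  # sources: mean credibility
            trust[s] = cred[article_source == s].mean()  # assumes each source has articles
        for u in expert:                             # users: agreement with consensus
            diffs = [abs(r - cred[b]) for (v, b), r in user_ratings.items() if v == u]
            expert[u] = max(1.0 - float(np.mean(diffs)), 0.05)  # floor keeps weights positive
    return cred, trust, expert

src = np.array([0, 0, 1])                            # 3 articles from 2 sources
ratings = {("u1", 0): 0.9, ("u1", 1): 0.8, ("u2", 2): 0.1}
print(joint_credibility(src, ratings))
```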
- Credibility Analysis in Health Communities
Assessing the trustworthiness of users, the objectivity of their language, and the credibility of their statements in online health communities.
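As a tiny illustration of the "objectivity of language" signal: a cue-word ratio can serve as a crude proxy. The cue list and scoring below are invented for the sketch; the project uses learned models over much richer features.

```python
# Hypothetical subjectivity cues; a real system would use a proper lexicon
SUBJECTIVE_CUES = {"amazing", "terrible", "miracle", "awful", "best", "worst",
                   "believe", "feel", "definitely", "never", "always"}

def objectivity_score(post):
    """Fraction of tokens that are NOT subjective cues -- a crude proxy
    for objectivity of language (illustrative only)."""
    tokens = post.lower().split()
    if not tokens:
        return 1.0
    subjective = sum(t.strip(".,!?") in SUBJECTIVE_CUES for t in tokens)
    return 1.0 - subjective / len(tokens)

print(objectivity_score("This miracle drug is amazing, definitely the best!"))
# 0.5: half of the tokens are subjective cues
```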
- Probabilistic Graphical Models for Credibility Analysis
Probabilistic graphical models to extract "credible", "trustworthy" and "expert" information from large-scale, non-expert, user-generated content in online communities.
- Deep Learning based Credibility Analysis
A deep learning-based approach for credibility analysis of unstructured textual claims in an open-domain setting, with interpretable explanations.
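A toy stand-in for such interpretable prediction: a linear scorer over claim words whose per-word contributions double as the explanation. The vocabulary, weights and function name are invented; a deep model would learn the representation instead of using fixed bag-of-words features.

```python
import numpy as np

def predict_with_explanation(x, w, vocab, top_k=3):
    """Score a claim's bag-of-words vector `x` with weights `w` and return
    the credibility probability plus the words that contributed most."""
    contrib = x * w                              # per-word contribution to the logit
    prob = 1.0 / (1.0 + np.exp(-contrib.sum()))  # logistic credibility score
    order = np.argsort(-np.abs(contrib))
    evidence = [(vocab[i], float(contrib[i])) for i in order[:top_k] if x[i] != 0]
    return prob, evidence

vocab = ["study", "confirmed", "shocking", "hoax", "peer-reviewed"]
w = np.array([0.8, 0.9, -1.2, -1.5, 1.1])    # toy weights, assumed pre-trained
x = np.array([1, 0, 1, 1, 0], dtype=float)   # claim mentions: study, shocking, hoax
print(predict_with_explanation(x, w, vocab))
# low credibility; "hoax" and "shocking" surface as the explanation
```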
- Web Credibility Analysis
A generic approach for credibility analysis of unstructured textual claims in an open-domain setting, with interpretable explanations.
- R-Susceptibility
This project presents a ranking-based approach to assessing privacy risks that emerge from textual content in online communities, focusing on sensitive topics such as depression.
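A crude sketch of the ranking idea, with term overlap standing in for the project's actual risk model; user data and topic terms are made up.

```python
def rank_by_exposure(users, topic_terms):
    """Rank community members by how strongly their posts relate to a
    sensitive topic -- a term-overlap stand-in for a learned risk model.

    users       -- dict: user id -> list of their posts (strings)
    topic_terms -- set of terms that signal the sensitive topic
    """
    scores = {}
    for user, posts in users.items():
        tokens = [t for p in posts for t in p.lower().split()]
        hits = sum(t in topic_terms for t in tokens)
        scores[user] = hits / max(len(tokens), 1)
    # Higher score = higher estimated exposure risk for this topic
    return sorted(scores.items(), key=lambda kv: -kv[1])

users = {"u1": ["feeling hopeless and tired again"], "u2": ["great hike today"]}
print(rank_by_exposure(users, {"hopeless", "tired", "insomnia"}))
```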
- Fair Data Representations
This project introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models.
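The core idea can be sketched as soft assignment of records to k prototypes, giving a representation of rank at most k. In the project the prototypes are optimised jointly against utility and individual-fairness objectives; here they are simply assumed given.

```python
import numpy as np

def low_rank_representation(X, prototypes, temperature=1.0):
    """Map each record to a probabilistic mixture over k prototypes
    (soft clustering) and reconstruct it from them: rank <= k."""
    # Squared distance from every record to every prototype: shape (n, k)
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)     # softmax over prototypes
    return probs @ prototypes                     # expected prototype per record

X = np.random.default_rng(0).normal(size=(5, 4))  # 5 toy user records
Z = low_rank_representation(X, prototypes=X[:2])  # pretend prototypes are given
print(Z.shape)                                    # (5, 4), but rank <= 2
```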
- Mediator Accounts
This project proposes a framework that leverages solidarity in a large community to scramble user interaction histories.
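A toy version of the scrambling step, assuming a pool of shared mediator accounts; the real framework additionally has to preserve recommendation quality for each hidden user.

```python
import random

def scramble_histories(interactions, n_mediators=10, seed=42):
    """Route each (user, item) interaction through a randomly chosen
    mediator account, so the provider only sees mediator histories
    that mix many users' actions. Illustrative sketch only.
    """
    rng = random.Random(seed)
    mediator_history = {m: [] for m in range(n_mediators)}
    routing = {}  # remembers which mediator carried which interaction
    for user, item in interactions:
        m = rng.randrange(n_mediators)
        mediator_history[m].append(item)
        routing[(user, item)] = m
    return mediator_history, routing

hist, routing = scramble_histories(
    [("alice", "i1"), ("bob", "i2"), ("alice", "i3")], n_mediators=2)
print(hist)  # the provider-visible histories no longer align with users
```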
- Relationships between Actions and Feeds
This project presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users’ actions and items in their social media feeds.
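The discovery step can be pictured as path enumeration in the user's interaction graph, which the framework then ranks and explains; the graph layout, relation names and the length-based ranking below are placeholders.

```python
from collections import deque

def find_relationship_paths(graph, user, item, max_hops=3):
    """Enumerate short paths from a user to a feed item in a heterogeneous
    interaction graph; `graph` maps node -> list of (relation, neighbour)."""
    paths = []
    queue = deque([(user, [user], 0)])
    while queue:
        node, path, hops = queue.popleft()
        if hops >= max_hops:
            continue
        for relation, nbr in graph.get(node, []):
            if nbr in path:                      # keep paths simple (no revisits)
                continue
            new_path = path + [relation, nbr]
            if nbr == item:
                paths.append(new_path)
            else:
                queue.append((nbr, new_path, hops + 1))
    return sorted(paths, key=len)                # shortest first: a naive ranking

graph = {
    "alice": [("follows", "bob"), ("likes", "post7")],
    "bob":   [("posted", "post7")],
}
print(find_relationship_paths(graph, "alice", "post7"))
```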
- Learning from Feedback on Explanations
A human-in-the-loop framework, called ELIXIR, where user feedback on explanations is leveraged for pairwise learning of user preferences.
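Pairwise learning from such feedback can be sketched as a margin update on a user vector, nudging it towards the preferred item's features; the items, feature vectors and learning rate are invented for the example.

```python
import numpy as np

def update_user_vector(user_vec, item_a, item_b, lr=0.1):
    """One pairwise-feedback step: the user preferred item A over item B
    (e.g., after seeing the explanation for each), so move the user
    vector towards A's features and away from B's if the preference
    is not yet satisfied by a margin."""
    if user_vec @ (item_a - item_b) < 1.0:
        user_vec = user_vec + lr * (item_a - item_b)
    return user_vec

user = np.zeros(3)
jazz, metal = np.array([1.0, 0.2, 0.0]), np.array([0.0, 0.9, 0.3])
for _ in range(20):                 # repeated feedback: jazz preferred to metal
    user = update_user_vector(user, jazz, metal)
print(user @ jazz > user @ metal)   # True: the model now ranks jazz higher
```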
- ExFAKT: Explainable Fact Checking
Deriving human-understandable evidence for candidate facts from knowledge graphs and text, using background knowledge in the form of rules.
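A miniature version of rule-based evidence derivation over a knowledge graph, restricted to two-hop chain rules; the project also mines evidence from text, which this sketch omits.

```python
def explain_fact(fact, kg, rules):
    """Try to derive `fact` from a knowledge graph using Horn rules, and
    return the triples used as human-readable evidence.

    fact  -- (subject, relation, object) triple to check
    kg    -- set of known (s, r, o) triples
    rules -- list of (body, head_relation): body is a chain of relation
             names s -> z -> o
    """
    s, r, o = fact
    if fact in kg:
        return [fact]
    for body, head in rules:
        if head != r or len(body) != 2:   # this sketch handles 2-hop chains only
            continue
        r1, r2 = body
        for (s1, rel1, z) in kg:
            if s1 == s and rel1 == r1 and (z, r2, o) in kg:
                return [(s, r1, z), (z, r2, o)]   # the derivation as evidence
    return None

kg = {("ann", "bornIn", "lyon"), ("lyon", "locatedIn", "france")}
rules = [(["bornIn", "locatedIn"], "citizenOf")]
print(explain_fact(("ann", "citizenOf", "france"), kg, rules))
```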