imPACT: The Internet of tomorrow: Privacy, Accountability, Compliance and Trust
-
Counterfactual Explanations for Recommenders
A provider-side mechanism to produce tangible explanations for end users, where an explanation is defined as a minimal set of actions performed by the user that, if removed, would change the recommendation to a different item.
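To make the definition concrete, here is a minimal Python sketch assuming a hypothetical black-box `recommend` function; the brute-force subset search is only illustrative, not the project's actual algorithm.

```python
from itertools import combinations

def recommend(actions):
    # Hypothetical stand-in for a black-box recommender: suggest the
    # category the user has interacted with most often.
    counts = {}
    for item, category in actions:
        counts[category] = counts.get(category, 0) + 1
    return max(sorted(counts), key=lambda c: counts[c]) if counts else None

def counterfactual_explanation(actions):
    """Smallest set of the user's own actions whose removal changes
    the recommendation to a different item."""
    original = recommend(actions)
    for size in range(1, len(actions) + 1):
        for subset in combinations(actions, size):
            remaining = [a for a in actions if a not in subset]
            if recommend(remaining) != original:
                return list(subset)
    return None  # the recommendation is robust to removing any actions

actions = [("song1", "rock"), ("song2", "rock"), ("song3", "jazz")]
print(recommend(actions))                   # -> rock
print(counterfactual_explanation(actions))  # a minimal set that flips it
```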
-
Credibility Analysis in News Communities
A probabilistic graphical model to jointly identify credible news articles, trustworthy news sources, and expert users by leveraging their mutual interactions in a news community.
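As a loose illustration of the mutual reinforcement at work, the sketch below replaces the actual probabilistic graphical model with a simple fixed-point iteration over invented toy data: article credibility, source trustworthiness, and user expertise repeatedly update one another.

```python
# Toy mutual-reinforcement loop (NOT the project's graphical model):
# scores for articles, sources and users reinforce one another.
published = {"a1": "s1", "a2": "s1", "a3": "s2"}          # article -> source
ratings = {("u1", "a1"): 1, ("u1", "a2"): 1,              # +1 endorse
           ("u2", "a1"): 1, ("u2", "a3"): -1}             # -1 dispute

articles = {a: 0.5 for a in published}
sources = {s: 0.5 for s in set(published.values())}
users = {u: 0.5 for (u, _) in ratings}

for _ in range(20):  # iterate until the scores stabilize
    for a, s in published.items():
        votes = [users[u] * r for (u, art), r in ratings.items() if art == a]
        support = 0.5 + sum(votes) / (2 * len(votes)) if votes else 0.5
        articles[a] = 0.5 * sources[s] + 0.5 * support
    for s in sources:
        own = [articles[a] for a, src in published.items() if src == s]
        sources[s] = sum(own) / len(own)
    for u in users:
        # users count as experts when their ratings agree with consensus
        agree = [1 - abs(articles[a] - (0.5 + r / 2))
                 for (v, a), r in ratings.items() if v == u]
        users[u] = sum(agree) / len(agree)

print({a: round(c, 2) for a, c in articles.items()})  # a3 scores lowest
```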
-
Credibility Analysis in Health Communities
Assessing trustworthiness of users, objectivity of language, and credibility of user statements in online health communities.
-
Probabilistic Graphical Models for Credibility Analysis
Probabilistic graphical models to extract "credible", "trustworthy" and "expert" information from large-scale, non-expert, user-generated content in online communities.
-
Deep Learning based Credibility Analysis
A deep-learning-based approach for credibility analysis of unstructured textual claims in an open-domain setting with interpretable explanations.
-
Web Credibility Analysis
A generic approach for credibility analysis of unstructured textual claims in an open-domain setting with interpretable explanations.
-
R-Susceptibility
This project presents a ranking-based approach to assessing privacy risks that emerge from textual content in online communities, focusing on sensitive topics such as depression.
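A toy sketch of the ranking idea, assuming a simple keyword-overlap score as a stand-in for the project's topic-based risk measures; the vocabulary and posts are invented.

```python
# Rank users by how strongly their posts relate to a sensitive topic.
SENSITIVE = {"sad", "hopeless", "insomnia", "therapy", "worthless"}

def risk_score(posts):
    # fraction of a user's words that touch the sensitive vocabulary
    words = [w for p in posts for w in p.lower().split()]
    return sum(w in SENSITIVE for w in words) / max(len(words), 1)

users = {
    "u1": ["feeling sad and hopeless lately", "insomnia again"],
    "u2": ["great hike today", "new recipe turned out well"],
}
ranking = sorted(users, key=lambda u: risk_score(users[u]), reverse=True)
print(ranking)  # users most exposed on the sensitive topic come first
```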
-
Fair Data Representations
This project introduces a method for probabilistically clustering user records into a low-rank representation that captures individual fairness yet also achieves high accuracy in classification and regression models.
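The sketch below illustrates the representation idea under strong simplifications: records are probabilistically assigned to a handful of hand-picked prototypes, whereas in the project itself the prototypes would be learned to balance fairness and accuracy.

```python
import numpy as np

def soft_assign(X, prototypes, temperature=1.0):
    # squared distance of every record to every prototype
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d / temperature
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)  # each row: distribution over prototypes
    return p, p @ prototypes           # assignments and low-rank representation

X = np.array([[0.1, 0.2], [0.9, 0.8], [0.15, 0.25]])
prototypes = np.array([[0.1, 0.2], [0.9, 0.8]])  # k = 2 hand-picked prototypes
assignments, X_fair = soft_assign(X, prototypes)
print(assignments.round(2))  # similar records get similar representations
print(X_fair.round(2))
```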
-
Mediator Accounts
This project proposes a framework that leverages solidarity in a large community to scramble user interaction histories.
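A minimal sketch of the scrambling idea, assuming interactions are randomly split across a shared pool of mediator accounts; the account names and assignment policy are invented for illustration.

```python
import random

random.seed(0)
MEDIATORS = ["m1", "m2", "m3"]  # shared pool of mediator accounts

def scramble(histories):
    # spread each user's interactions over the pool, so no single
    # account mirrors any one user's full profile
    assignment = {m: [] for m in MEDIATORS}
    for user, items in histories.items():
        for item in items:
            assignment[random.choice(MEDIATORS)].append(item)
    return assignment

histories = {"alice": ["news", "music", "sports"], "bob": ["music", "movies"]}
print(scramble(histories))
```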
-
Relationships between Actions and Feeds
This project presents FAIRY, a framework that systematically discovers, ranks, and explains relationships between users’ actions and items in their social media feeds.
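To illustrate, the sketch below enumerates paths between a user and a feed item in a toy interaction graph and ranks them by length; FAIRY's actual discovery and learning-to-rank stages are more involved.

```python
from collections import deque

# toy interaction graph linking a user's actions to a feed item
graph = {
    "alice": ["liked:page_x", "friend:bob"],
    "liked:page_x": ["posted:item_1"],
    "friend:bob": ["liked:item_1"],
    "posted:item_1": ["item_1"],
    "liked:item_1": ["item_1"],
}

def explanation_paths(start, target, max_len=4):
    # breadth-first enumeration of cycle-free paths up to max_len edges
    paths, queue = [], deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            paths.append(path)
            continue
        if len(path) < max_len + 1:
            for nxt in graph.get(path[-1], []):
                if nxt not in path:
                    queue.append(path + [nxt])
    return sorted(paths, key=len)  # shorter relationships ranked first

for p in explanation_paths("alice", "item_1"):
    print(" -> ".join(p))
```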
-
Learning from Feedback on Explanations
A human-in-the-loop framework, called ELIXIR, in which user feedback on explanations is leveraged for pairwise learning of user preferences.
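A minimal sketch of the pairwise-learning step, assuming items are described by feature vectors and each piece of feedback says which of two items is preferred; the logistic update below is a generic Bradley-Terry-style stand-in, not ELIXIR's exact model.

```python
import numpy as np

def update(w, x_pref, x_other, lr=0.5):
    """One logistic step on a single pairwise preference."""
    diff = x_pref - x_other
    p = 1.0 / (1.0 + np.exp(-w @ diff))  # P(preferred item beats the other)
    return w + lr * (1.0 - p) * diff

w = np.zeros(3)  # latent user-preference weights over explanation aspects
# hypothetical feedback; features are (genre match, artist match, recency)
feedback = [
    (np.array([1.0, 0.0, 0.2]), np.array([0.0, 1.0, 0.1])),
    (np.array([0.9, 0.1, 0.0]), np.array([0.2, 0.8, 0.3])),
]
for x_pref, x_other in feedback * 20:  # several passes over the feedback
    w = update(w, x_pref, x_other)
print(w.round(2))  # the genre dimension ends up with the largest weight
```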
-
ExFAKT: Explainable Fact Checking
A framework for deriving human-understandable evidence for candidate facts from knowledge graphs and text, using background knowledge in the form of rules.
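As a tiny illustration, the sketch below applies one hand-written Horn rule over toy knowledge-graph triples to assemble human-readable evidence for a candidate fact; the framework itself also draws on textual sources.

```python
# toy knowledge graph as a set of (subject, predicate, object) triples
kg = {
    ("pierre", "worksAt", "lab_x"),
    ("marie", "marriedTo", "pierre"),
}

def evidence_for(fact):
    """Apply one hand-written Horn rule:
    marriedTo(X, Y) AND worksAt(Y, Z) => worksAt(X, Z)."""
    x, rel, z = fact
    if rel != "worksAt":
        return None
    for (a, r, y) in kg:
        if r == "marriedTo" and a == x and (y, "worksAt", z) in kg:
            return [(x, "marriedTo", y), (y, "worksAt", z)]
    return None

print(evidence_for(("marie", "worksAt", "lab_x")))
# -> [('marie', 'marriedTo', 'pierre'), ('pierre', 'worksAt', 'lab_x')]
```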