The Year Before Last
PhD
[1]
J. Bund, “Hazard-free Clock Synchronization,” Universität des Saarlandes, Saarbrücken, 2022.
Export
BibTeX
@phdthesis{Bund_PhD2022,
TITLE = {Hazard-free Clock Synchronization},
AUTHOR = {Bund, Johannes},
URL = {urn:nbn:de:bsz:291--ds-404463},
DOI = {10.22028/D291-40446},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
}
Endnote
%0 Thesis
%A Bund, Johannes
%Y Bläser, Markus
%A referee: Lenzen, Christoph
%A referee: Függer, Matthias
%A referee: Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
External Organizations
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Hazard-free Clock Synchronization
%U http://hdl.handle.net/21.11116/0000-000D-D178-0
%U urn:nbn:de:bsz:291--ds-404463
%R 10.22028/D291-40446
%F OTHER: hdl:20.500.11880/36387
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P xii, 180 p.
%V phd
%9 phd
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/36387
[2]
C. X. Chu, “Knowledge Extraction from Fictional Texts,” Universität des Saarlandes, Saarbrücken, 2022.
Abstract
Knowledge extraction from text is a key task in natural language processing that involves many sub-tasks, such as taxonomy induction, named entity recognition and typing, relation extraction, and knowledge canonicalization. By constructing structured knowledge from natural language text, knowledge extraction becomes a key asset for search engines, question answering and other downstream applications. However, current knowledge extraction methods mostly focus on prominent real-world entities, with Wikipedia and mainstream news articles as sources. The resulting knowledge bases therefore lack information about long-tail domains, with fiction and fantasy as archetypes. Fiction and fantasy are core parts of our human culture, spanning literature, movies, TV series, comics and video games. With thousands of fictional universes in existence, knowledge from fictional domains is the subject of search-engine queries, by fans as well as cultural analysts. Unlike the real-world domain, knowledge extraction in specific domains like fiction and fantasy has to tackle several key challenges:
- Training data: Sources for fictional domains mostly come from books and fan-built content, which is sparse and noisy and contains difficult text structures such as dialogues and quotes. Training data for key tasks such as taxonomy induction, named entity typing or relation extraction is also not available.
- Domain characteristics and diversity: Fictional universes can be highly sophisticated, containing entities, social structures and sometimes languages that are completely different from the real world. State-of-the-art methods for knowledge extraction make assumptions about entity-class, subclass and entity-entity relations that are often invalid for fictional domains. With different genres of fictional domains, a further requirement is to transfer models across domains.
- Long fictional texts: While state-of-the-art models have limitations on the input sequence length, it is essential to develop methods that can deal with very long texts (e.g. entire books) to capture multiple contexts and leverage widely spread cues.
This dissertation addresses the above challenges by developing new methodologies that advance the state of the art on knowledge extraction in fictional domains:
- The first contribution is TiFi, a method for constructing type systems (taxonomy induction) for fictional domains. By tapping noisy fan-built content from online communities such as Wikia, TiFi induces taxonomies through three main steps: category cleaning, edge cleaning and top-level construction. Exploiting a variety of features from the original input, TiFi constructs taxonomies for a diverse range of fictional domains with high precision.
- The second contribution is ENTYFI, a comprehensive approach for named entity recognition and typing in long fictional texts. Built on 205 automatically induced high-quality type systems for popular fictional domains, ENTYFI exploits the overlap and reuse of these fictional domains on unseen texts. By combining different typing modules with a consolidation stage, ENTYFI performs fine-grained entity typing in long fictional texts with high precision and recall.
- The third contribution is KnowFi, an end-to-end system for extracting relations between entities in very long texts such as entire books. KnowFi leverages background knowledge from 142 popular fictional domains to identify interesting relations and to collect distant training samples. KnowFi devises a similarity-based ranking technique to reduce false positives in training samples and to select potential text passages that contain seed pairs of entities. By training a hierarchical neural network for all relations, KnowFi infers relations between entity pairs across long fictional texts and achieves gains over the best prior methods for relation extraction.
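As a concrete illustration of the distant-supervision step described above, the following Python sketch ranks candidate passages for a seed entity pair by TF-IDF similarity to a relation description. It is a toy illustration in the spirit of KnowFi's similarity-based ranking, not the thesis implementation; the seed fact, the relation hint and the passages are invented for the example.

# Minimal sketch of similarity-based passage selection for distant
# supervision, in the spirit of KnowFi's training-sample ranking.
# The seed fact, relation hint and passages are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

seed = ("Frodo", "friendOf", "Sam")            # hypothetical KB seed fact
relation_hint = "friend companion ally travels together with"

passages = [
    "Frodo and Sam walked side by side toward Mordor.",
    "Frodo inherited the ring from Bilbo on his birthday.",
    "Sam swore he would never leave Frodo, his dearest friend.",
]

# Distant supervision: keep only passages mentioning both seed entities.
candidates = [p for p in passages if seed[0] in p and seed[2] in p]

# Rank candidates by similarity to a textual description of the relation,
# to filter co-occurrences that do not actually express it (false positives).
vec = TfidfVectorizer().fit(candidates + [relation_hint])
sims = cosine_similarity(vec.transform(candidates),
                         vec.transform([relation_hint]))[:, 0]

for score, passage in sorted(zip(sims, candidates), reverse=True):
    print(f"{score:.3f}  {passage}")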
Export
BibTeX
@phdthesis{Chuphd2022,
TITLE = {Knowledge Extraction from Fictional Texts},
AUTHOR = {Chu, Cuong Xuan},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-361070},
DOI = {10.22028/D291-36107},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
ABSTRACT = {Knowledge extraction from text is a key task in natural language processing, which involves many sub-tasks, such as taxonomy induction, named entity recognition and typing, relation extraction, knowledge canonicalization and so on. By constructing structured knowledge from natural language text, knowledge extraction becomes a key asset for search engines, question answering and other downstream applications. However, current knowledge extraction methods mostly focus on prominent real-world entities with Wikipedia and mainstream news articles as sources. The constructed knowledge bases, therefore, lack information about long-tail domains, with fiction and fantasy as archetypes. Fiction and fantasy are core parts of our human culture, spanning from literature to movies, TV series, comics and video games. With thousands of fictional universes which have been created, knowledge from fictional domains are subject of search-engine queries -- by fans as well as cultural analysts. Unlike the real-world domain, knowledge extraction on such specific domains like fiction and fantasy has to tackle several key challenges: -- Training data: Sources for fictional domains mostly come from books and fan-built content, which is sparse and noisy, and contains difficult structures of texts, such as dialogues and quotes. Training data for key tasks such as taxonomy induction, named entity typing or relation extraction are also not available. -- Domain characteristics and diversity: Fictional universes can be highly sophisticated, containing entities, social structures and sometimes languages that are completely different from the real world. State-of-the-art methods for knowledge extraction make assumptions on entity-class, subclass and entity-entity relations that are often invalid for fictional domains. With different genres of fictional domains, another requirement is to transfer models across domains. -- Long fictional texts: While state-of-the-art models have limitations on the input sequence length, it is essential to develop methods that are able to deal with very long texts (e.g. entire books), to capture multiple contexts and leverage widely spread cues. This dissertation addresses the above challenges, by developing new methodologies that advance the state of the art on knowledge extraction in fictional domains. -- The first contribution is a method, called TiFi, for constructing type systems (taxonomy induction) for fictional domains. By tapping noisy fan-built content from online communities such as Wikia, TiFi induces taxonomies through three main steps: category cleaning, edge cleaning and top-level construction. Exploiting a variety of features from the original input, TiFi is able to construct taxonomies for a diverse range of fictional domains with high precision. -- The second contribution is a comprehensive approach, called ENTYFI, for named entity recognition and typing in long fictional texts. Built on 205 automatically induced high-quality type systems for popular fictional domains, ENTYFI exploits the overlap and reuse of these fictional domains on unseen texts. By combining different typing modules with a consolidation stage, ENTYFI is able to do fine-grained entity typing in long fictional texts with high precision and recall. -- The third contribution is an end-to-end system, called KnowFi, for extracting relations between entities in very long texts such as entire books. 
KnowFi leverages background knowledge from 142 popular fictional domains to identify interesting relations and to collect distant training samples. KnowFi devises a similarity-based ranking technique to reduce false positives in training samples and to select potential text passages that contain seed pairs of entities. By training a hierarchical neural network for all relations, KnowFi is able to infer relations between entity pairs across long fictional texts, and achieves gains over the best prior methods for relation extraction.},
}
Endnote
%0 Thesis
%A Chu, Cuong Xuan
%Y Weikum, Gerhard
%A referee: Theobald, Martin
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Knowledge Extraction from Fictional Texts
%G eng
%U http://hdl.handle.net/21.11116/0000-000A-9598-2
%R 10.22028/D291-36107
%U urn:nbn:de:bsz:291--ds-361070
%F OTHER: hdl:20.500.11880/32914
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 129 p.
%V phd
%9 phd
%X Knowledge extraction from text is a key task in natural language processing, which involves many sub-tasks, such as taxonomy induction, named entity recognition and typing, relation extraction, knowledge canonicalization and so on. By constructing structured knowledge from natural language text, knowledge extraction becomes a key asset for search engines, question answering and other downstream applications. However, current knowledge extraction methods mostly focus on prominent real-world entities with Wikipedia and mainstream news articles as sources. The constructed knowledge bases, therefore, lack information about long-tail domains, with fiction and fantasy as archetypes. Fiction and fantasy are core parts of our human culture, spanning from literature to movies, TV series, comics and video games. With thousands of fictional universes which have been created, knowledge from fictional domains are subject of search-engine queries - by fans as well as cultural analysts. Unlike the real-world domain, knowledge extraction on such specific domains like fiction and fantasy has to tackle several key challenges: - Training data: Sources for fictional domains mostly come from books and fan-built content, which is sparse and noisy, and contains difficult structures of texts, such as dialogues and quotes. Training data for key tasks such as taxonomy induction, named entity typing or relation extraction are also not available. - Domain characteristics and diversity: Fictional universes can be highly sophisticated, containing entities, social structures and sometimes languages that are completely different from the real world. State-of-the-art methods for knowledge extraction make assumptions on entity-class, subclass and entity-entity relations that are often invalid for fictional domains. With different genres of fictional domains, another requirement is to transfer models across domains. - Long fictional texts: While state-of-the-art models have limitations on the input sequence length, it is essential to develop methods that are able to deal with very long texts (e.g. entire books), to capture multiple contexts and leverage widely spread cues. This dissertation addresses the above challenges, by developing new methodologies that advance the state of the art on knowledge extraction in fictional domains. - The first contribution is a method, called TiFi, for constructing type systems (taxonomy induction) for fictional domains. By tapping noisy fan-built content from online communities such as Wikia, TiFi induces taxonomies through three main steps: category cleaning, edge cleaning and top-level construction. Exploiting a variety of features from the original input, TiFi is able to construct taxonomies for a diverse range of fictional domains with high precision. - The second contribution is a comprehensive approach, called ENTYFI, for named entity recognition and typing in long fictional texts. Built on 205 automatically induced high-quality type systems for popular fictional domains, ENTYFI exploits the overlap and reuse of these fictional domains on unseen texts. By combining different typing modules with a consolidation stage, ENTYFI is able to do fine-grained entity typing in long fictional texts with high precision and recall. - The third contribution is an end-to-end system, called KnowFi, for extracting relations between entities in very long texts such as entire books. 
KnowFi leverages background knowledge from 142 popular fictional domains to identify interesting relations and to collect distant training samples. KnowFi devises a similarity-based ranking technique to reduce false positives in training samples and to select potential text passages that contain seed pairs of entities. By training a hierarchical neural network for all relations, KnowFi is able to infer relations between entity pairs across long fictional texts, and achieves gains over the best prior methods for relation extraction.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32914
[3]
J. Fischer, “More than the sum of its parts,” Universität des Saarlandes, Saarbrücken, 2022.
Abstract
In this thesis we explore pattern mining and deep learning. Though often seen as orthogonal, these fields complement each other, and we propose to combine them to gain from each other’s strengths. We first show how to efficiently discover succinct and non-redundant sets of patterns that provide insight into data beyond conjunctive statements. We leverage the interpretability of such patterns to unveil how and which information flows through neural networks, as well as what characterizes their decisions. Conversely, we show how to combine continuous optimization with pattern discovery, proposing a neural network that directly encodes discrete patterns, which allows us to apply pattern mining at a scale orders of magnitude larger than previously possible. Large neural networks are, however, exceedingly expensive to train, for which ‘lottery tickets’ – small, well-trainable sub-networks in randomly initialized neural networks – offer a remedy. We identify theoretical limitations of strong tickets and overcome them by equipping these tickets with the property of universal approximation. To analyze whether limitations in ticket sparsity are algorithmic or fundamental, we propose a framework to plant and hide lottery tickets. With novel ticket benchmarks we then conclude that the limitation is likely algorithmic, encouraging further developments, for which our framework offers means to measure progress.
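For readers unfamiliar with lottery tickets, the sketch below shows iterative magnitude pruning, the standard procedure by which such tickets are usually found. It is a minimal numpy illustration with a stand-in for actual SGD training, and it does not reproduce the thesis's planting-and-hiding framework or its benchmarks.

# Minimal sketch of iterative magnitude pruning (IMP), the usual way of
# finding lottery tickets. train() is a stand-in for real SGD training.
import numpy as np

rng = np.random.default_rng(0)
w_init = rng.normal(size=(64, 64))        # randomly initialized layer
mask = np.ones(w_init.shape, dtype=bool)  # start fully dense

def train(w, mask):
    # Stand-in for training: perturb the surviving weights a little.
    return (w + 0.01 * rng.normal(size=w.shape)) * mask

w = w_init.copy()
for _ in range(5):                        # prune 20% of survivors per round
    w = train(w, mask)
    threshold = np.quantile(np.abs(w[mask]), 0.2)
    mask &= np.abs(w) > threshold
    w = w_init * mask                     # rewind survivors to their init

print(f"ticket density: {mask.mean():.2%}")   # about 0.8^5, roughly 33%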
Export
BibTeX
@phdthesis{Fischerphd2022,
TITLE = {More than the sum of its parts},
AUTHOR = {Fischer, Jonas},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-370240},
DOI = {10.22028/D291-37024},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
ABSTRACT = {In this thesis we explore pattern mining and deep learning. Often seen as orthogonal, we show that these fields complement each other and propose to combine them to gain from each other{\textquoteright}s strengths. We, first, show how to efficiently discover succinct and non-redundant sets of patterns that provide insight into data beyond conjunctive statements. We leverage the interpretability of such patterns to unveil how and which information flows through neural networks, as well as what characterizes their decisions. Conversely, we show how to combine continuous optimization with pattern discovery, proposing a neural network that directly encodes discrete patterns, which allows us to apply pattern mining at a scale orders of magnitude larger than previously possible. Large neural networks are, however, exceedingly expensive to train for which {\textquoteleft}lottery tickets{\textquoteright} -- small, well-trainable sub-networks in randomly initialized neural networks -- offer a remedy. We identify theoretical limitations of strong tickets and overcome them by equipping these tickets with the property of universal approximation. To analyze whether limitations in ticket sparsity are algorithmic or fundamental, we propose a framework to plant and hide lottery tickets. With novel ticket benchmarks we then conclude that the limitation is likely algorithmic, encouraging further developments for which our framework offers means to measure progress.},
}
Endnote
%0 Thesis
%A Fischer, Jonas
%Y Vreeken, Jilles
%A referee: Weikum, Gerhard
%A referee: Parthasarathy, Srinivasan
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T More than the sum of its parts : pattern mining, neural networks, and how they complement each other
%G eng
%U http://hdl.handle.net/21.11116/0000-000B-38BF-0
%R 10.22028/D291-37024
%U urn:nbn:de:bsz:291--ds-370240
%F OTHER: hdl:20.500.11880/33893
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 250 p.
%V phd
%9 phd
%X In this thesis we explore pattern mining and deep learning. Often seen as orthogonal, we show that these fields complement each other and propose to combine them to gain from each other’s strengths. We, first, show how to efficiently discover succinct and non-redundant sets of patterns that provide insight into data beyond conjunctive statements. We leverage the interpretability of such patterns to unveil how and which information flows through neural networks, as well as what characterizes their decisions. Conversely, we show how to combine continuous optimization with pattern discovery, proposing a neural network that directly encodes discrete patterns, which allows us to apply pattern mining at a scale orders of magnitude larger than previously possible. Large neural networks are, however, exceedingly expensive to train for which ‘lottery tickets’ – small, well-trainable sub-networks in randomly initialized neural networks – offer a remedy. We identify theoretical limitations of strong tickets and overcome them by equipping these tickets with the property of universal approximation. To analyze whether limitations in ticket sparsity are algorithmic or fundamental, we propose a framework to plant and hide lottery tickets. With novel ticket benchmarks we then conclude that the limitation is likely algorithmic, encouraging further developments for which our framework offers means to measure progress.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/33893
[4]
A. Guimarães, “Data Science Methods for the Analysis of Controversial Social Media Discussions,” Universität des Saarlandes, Saarbrücken, 2022.
Abstract
Social media communities like Reddit and Twitter allow users to express their views on topics of their interest, and to engage with other users who may share or oppose these views. This can lead to productive discussions towards a consensus, or to contentious debates where disagreements frequently arise. Prior work on such settings has primarily focused on identifying notable instances of antisocial behavior such as hate speech and “trolling”, which represent possible threats to the health of a community. These, however, are exceptionally severe phenomena, and do not encompass controversies stemming from user debates, differences of opinion, and off-topic content, all of which can naturally come up in a discussion without going so far as to compromise its development. This dissertation proposes a framework for the systematic analysis of social media discussions that take place in the presence of controversial themes, disagreements, and mixed opinions from participating users. For this, we develop a feature-based model to describe key elements of a discussion, such as its salient topics, the level of activity from users, the sentiments it expresses, and the user feedback it receives. Initially, we build our feature model to characterize adversarial discussions surrounding political campaigns on Twitter, with a focus on the factual and sentimental nature of their topics and the role played by the different users involved. We then extend our approach to Reddit discussions, leveraging community feedback signals to define a new notion of controversy and to highlight conversational archetypes that arise from frequent and interesting interaction patterns. We use our feature model to build logistic regression classifiers that can predict future instances of controversy in Reddit communities centered on politics, world news, sports, and personal relationships. Finally, our model also provides the basis for a comparison of different communities in the health domain, where topics and activity vary considerably despite their shared overall focus. In each of these cases, our framework provides insight into how user behavior can shape a community’s individual definition of controversy and its overall identity.
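The prediction task described above can be made concrete with a small sketch: a logistic regression over per-discussion features. The features and labels below are fabricated toy data, and the four feature names are assumptions made for illustration, not the study's actual feature model.

# Minimal sketch of a feature-based controversy classifier; toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [n_comments, mean_sentiment, upvote_ratio, off_topic_share]
X = np.array([
    [350, -0.40, 0.55, 0.30],   # heated thread with mixed feedback
    [ 40,  0.35, 0.93, 0.05],   # calm, well-received thread
    [500, -0.25, 0.48, 0.45],
    [ 25,  0.50, 0.97, 0.02],
])
y = np.array([1, 0, 1, 0])       # 1 = controversial, 0 = not

clf = LogisticRegression(max_iter=1000).fit(X, y)
new_thread = np.array([[420, -0.10, 0.52, 0.25]])
print("P(controversial) =", clf.predict_proba(new_thread)[0, 1])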
Export
BibTeX
@phdthesis{Decarvalhophd2021,
TITLE = {Data Science Methods for the Analysis of Controversial Social Media Discussions},
AUTHOR = {Guimar{\~a}es, Anna},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-365021},
DOI = {10.22028/D291-36502},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
ABSTRACT = {Social media communities like Reddit and Twitter allow users to express their views on topics of their interest, and to engage with other users who may share or oppose these views. This can lead to productive discussions towards a consensus, or to contended debates, where disagreements frequently arise. Prior work on such settings has primarily focused on identifying notable instances of antisocial behavior such as hate-speech and {\textquotedblleft}trolling{\textquotedblright}, which represent possible threats to the health of a community. These, however, are exceptionally severe phenomena, and do not encompass controversies stemming from user debates, differences of opinions, and off-topic content, all of which can naturally come up in a discussion without going so far as to compromise its development. This dissertation proposes a framework for the systematic analysis of social media discussions that take place in the presence of controversial themes, disagreements, and mixed opinions from participating users. For this, we develop a feature-based model to describe key elements of a discussion, such as its salient topics, the level of activity from users, the sentiments it expresses, and the user feedback it receives. Initially, we build our feature model to characterize adversarial discussions surrounding political campaigns on Twitter, with a focus on the factual and sentimental nature of their topics and the role played by different users involved. We then extend our approach to Reddit discussions, leveraging community feedback signals to define a new notion of controversy and to highlight conversational archetypes that arise from frequent and interesting interaction patterns. We use our feature model to build logistic regression classifiers that can predict future instances of controversy in Reddit communities centered on politics, world news, sports, and personal relationships. Finally, our model also provides the basis for a comparison of different communities in the health domain, where topics and activity vary considerably despite their shared overall focus. In each of these cases, our framework provides insight into how user behavior can shape a community{\textquoteright}s individual definition of controversy and its overall identity.},
}
Endnote
%0 Thesis
%A Guimarães, Anna
%Y Weikum, Gerhard
%A referee: de Melo, Gerard
%A referee: Yates, Andrew
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Data Science Methods for the Analysis of Controversial Social Media Discussions
%G eng
%U http://hdl.handle.net/21.11116/0000-000A-CDF7-9
%R 10.22028/D291-36502
%U urn:nbn:de:bsz:291--ds-365021
%F OTHER: hdl:20.500.11880/33161
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 94 p.
%V phd
%9 phd
%X Social media communities like Reddit and Twitter allow users to express their views on topics of their interest, and to engage with other users who may share or oppose these views. This can lead to productive discussions towards a consensus, or to contended debates, where disagreements frequently arise. Prior work on such settings has primarily focused on identifying notable instances of antisocial behavior such as hate-speech and “trolling”, which represent possible threats to the health of a community. These, however, are exceptionally severe phenomena, and do not encompass controversies stemming from user debates, differences of opinions, and off-topic content, all of which can naturally come up in a discussion without going so far as to compromise its development. This dissertation proposes a framework for the systematic analysis of social media discussions that take place in the presence of controversial themes, disagreements, and mixed opinions from participating users. For this, we develop a feature-based model to describe key elements of a discussion, such as its salient topics, the level of activity from users, the sentiments it expresses, and the user feedback it receives. Initially, we build our feature model to characterize adversarial discussions surrounding political campaigns on Twitter, with a focus on the factual and sentimental nature of their topics and the role played by different users involved. We then extend our approach to Reddit discussions, leveraging community feedback signals to define a new notion of controversy and to highlight conversational archetypes that arise from frequent and interesting interaction patterns. We use our feature model to build logistic regression classifiers that can predict future instances of controversy in Reddit communities centered on politics, world news, sports, and personal relationships. Finally, our model also provides the basis for a comparison of different communities in the health domain, where topics and activity vary considerably despite their shared overall focus. In each of these cases, our framework provides insight into how user behavior can shape a community’s individual definition of controversy and its overall identity.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/33161
[5]
J. Hladký, “Latency Hiding and High Fidelity Novel View Synthesis on Thin Clients Using Decoupled Streaming Rendering from Powerful Servers,” Universität des Saarlandes, Saarbrücken, 2022.
Export
BibTeX
@phdthesis{Hladky_PhD22,
TITLE = {Latency Hiding and High Fidelity Novel View Synthesis on Thin Clients Using Decoupled Streaming Rendering from Powerful Servers},
AUTHOR = {Hladk{\'y}, Jozef},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-376882},
DOI = {10.22028/D291-37688},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
}
Endnote
%0 Thesis
%A Hladký, Jozef
%Y Seidel, Hans-Peter
%A referee: Steinberger, Markus
%A referee: Ritschel, Tobias
%+ Computer Graphics, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T Latency Hiding and High Fidelity Novel View Synthesis on Thin Clients Using Decoupled Streaming Rendering from Powerful Servers
%G eng
%U http://hdl.handle.net/21.11116/0000-000F-24A1-2
%R 10.22028/D291-37688
%U urn:nbn:de:bsz:291--ds-376882
%F OTHER: hdl:20.500.11880/34640
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%V phd
%9 phd
%U https://scidok.sulb.uni-saarland.de/handle/20.500.11880/34640
[6]
A. Horňáková, “Lifted Edges as Connectivity Priors for Multicut and Disjoint Paths,” Universität des Saarlandes, Saarbrücken, 2022.
Export
BibTeX
@phdthesis{HornakovaPhD22,
TITLE = {Lifted Edges as Connectivity Priors for Multicut and Disjoint Paths},
AUTHOR = {Hor{\v n}{\'a}kov{\'a}, Andrea},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-369193},
DOI = {10.22028/D291-36919},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
}
Endnote
%0 Thesis
%A Horňáková, Andrea
%Y Swoboda, Paul
%A referee: Schiele, Bernt
%A referee: Werner, Tomáš
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
External Organizations
%T Lifted Edges as Connectivity Priors for Multicut and Disjoint Paths
%G eng
%U http://hdl.handle.net/21.11116/0000-000B-2AD2-9
%U urn:nbn:de:bsz:291--ds-369193
%R 10.22028/D291-36919
%F OTHER: hdl:20.500.11880/33680
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P X, 150 p.
%V phd
%9 phd
%U http://dx.doi.org/10.22028/D291-36919
[7]
V. T. Ho, “Entities with Quantities: Extraction, Search and Ranking,” Universität des Saarlandes, Saarbrücken, 2022.
Export
BibTeX
@phdthesis{Ho_PhD2022,
TITLE = {Entities with Quantities: Extraction, Search and Ranking},
AUTHOR = {Ho, Vinh Thinh},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-380308},
DOI = {10.22028/D291-38030},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
}
Endnote
%0 Thesis
%A Ho, Vinh Thinh
%Y Weikum, Gerhard
%A referee: Stepanova, Daria
%A referee: Theobald, Martin
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
%T Entities with Quantities: Extraction, Search and Ranking
%G eng
%U http://hdl.handle.net/21.11116/0000-000C-B756-5
%R 10.22028/D291-38030
%U urn:nbn:de:bsz:291--ds-380308
%F OTHER: hdl:20.500.11880/34538
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P xii, 131 p.
%V phd
%9 phd
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/34538
[8]
A. Kinali-Dogan, “On Time, Time Synchronization and Noise in Time Measurement Systems,” Universität des Saarlandes, Saarbrücken, 2022.
Export
BibTeX
@phdthesis{Attilaphd2022,
TITLE = {On Time, Time Synchronization and Noise in Time Measurement Systems},
AUTHOR = {Kinali-Dogan, Attila},
LANGUAGE = {eng},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
}
Endnote
%0 Thesis
%A Kinali-Dogan, Attila
%Y Lenzen, Christoph
%A referee: Mehlhorn, Kurt
%A referee: Vernotte, François
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
%T On Time, Time Synchronization and Noise in Time Measurement Systems
%G eng
%U http://hdl.handle.net/21.11116/0000-000B-5436-A
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 140 p.
%V phd
%9 phd
[9]
P. Lahoti, “Operationalizing Fairness for Responsible Machine Learning,” Universität des Saarlandes, Saarbrücken, 2022.
Abstract
As machine learning (ML) is increasingly used for decision making in scenarios that impact humans, there is a growing awareness of its potential for unfairness. A large body of recent work has focused on proposing formal notions of fairness in ML, as well as approaches to mitigate unfairness. However, there is a growing disconnect between the ML fairness literature and the need to operationalize fairness in practice. This thesis addresses the need for responsible ML by developing new models and methods to address challenges in operationalizing fairness in practice. Specifically, it makes the following contributions. First, we tackle a key assumption in the group fairness literature that sensitive demographic attributes such as race and gender are known upfront, and can be readily used in model training to mitigate unfairness. In practice, factors like privacy and regulation often prohibit ML models from collecting or using protected attributes in decision making. To address this challenge, we introduce the novel notion of computationally-identifiable errors and propose Adversarially Reweighted Learning (ARL), an optimization method that seeks to improve the worst-case performance over unobserved groups, without requiring access to the protected attributes in the dataset. Second, we argue that while group fairness notions are a desirable fairness criterion, they are fundamentally limited as they reduce fairness to an average statistic over pre-identified protected groups. In practice, automated decisions are made at an individual level, and can adversely impact individual people irrespective of the group statistic. We advance the paradigm of individual fairness by proposing iFair (individually fair representations), an optimization approach for learning a low-dimensional latent representation of the data with two goals: to encode the data as well as possible, while removing any information about protected attributes in the transformed representation. Third, we advance the individual fairness paradigm, which requires that similar individuals receive similar outcomes. However, similarity metrics computed over observed feature space can be brittle, and inherently limited in their ability to accurately capture similarity between individuals. To address this, we introduce a novel notion of fairness graphs, wherein pairs of individuals can be identified as deemed similar with respect to the ML objective. We cast the problem of individual fairness into graph embedding, and propose PFR (pairwise fair representations), a method to learn a unified pairwise fair representation of the data. Fourth, we tackle the challenge that production data after model deployment is constantly evolving. As a consequence, in spite of the best efforts in training a fair model, ML systems can be prone to failure risks due to a variety of unforeseen reasons. To ensure responsible model deployment, potential failure risks need to be predicted, and mitigation actions need to be devised, for example, deferring to a human expert when uncertain or collecting additional data to address the model’s blind spots. We propose Risk Advisor, a model-agnostic meta-learner to predict potential failure risks and to give guidance on the sources of uncertainty inducing the risks, by leveraging information-theoretic notions of aleatoric and epistemic uncertainty. This dissertation brings ML fairness closer to real-world applications by developing methods that address key practical challenges. 
Extensive experiments on a variety of real-world and synthetic datasets show that our proposed methods are viable in practice.
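The ARL idea from the first contribution can be sketched compactly: an adversary assigns per-example weights so as to up-weight computationally-identifiable high-error regions, while the learner minimizes the reweighted loss; no protected attributes are used. The PyTorch snippet below is a minimal illustration with synthetic data and assumed network sizes, not the published implementation.

# Minimal sketch of Adversarially Reweighted Learning (ARL); synthetic data.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 8)                              # no protected attributes
y = (X[:, 0] + 0.5 * torch.randn(256) > 0).float()

learner = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Linear(8, 1)          # produces per-example weights
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss(reduction="none")

for step in range(200):
    losses = bce(learner(X).squeeze(1), y)           # per-example losses
    w = 1.0 + torch.sigmoid(adversary(X)).squeeze(1) # adversary's weights
    w = w / w.mean()                                 # normalize to mean 1

    # Learner step: minimize the adversarially reweighted loss.
    opt_l.zero_grad()
    (w.detach() * losses).mean().backward()
    opt_l.step()

    # Adversary step: maximize the same objective (descend on its negative),
    # holding the learner's per-example losses fixed.
    opt_a.zero_grad()
    (-(w * losses.detach()).mean()).backward()
    opt_a.step()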
Export
BibTeX
@phdthesis{Lahotophd2022,
TITLE = {Operationalizing Fairness for Responsible Machine Learning},
AUTHOR = {Lahoti, Preethi},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-365860},
DOI = {10.22028/D291-36586},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
ABSTRACT = {As machine learning (ML) is increasingly used for decision making in scenarios that impact humans, there is a growing awareness of its potential for unfairness. A large body of recent work has focused on proposing formal notions of fairness in ML, as well as approaches to mitigate unfairness. However, there is a growing disconnect between the ML fairness literature and the needs to operationalize fairness in practice. This thesis addresses the need for responsible ML by developing new models and methods to address challenges in operationalizing fairness in practice. Specifically, it makes the following contributions. First, we tackle a key assumption in the group fairness literature that sensitive demographic attributes such as race and gender are known upfront, and can be readily used in model training to mitigate unfairness. In practice, factors like privacy and regulation often prohibit ML models from collecting or using protected attributes in decision making. To address this challenge we introduce the novel notion of computationally-identifiable errors and propose Adversarially Reweighted Learning (ARL), an optimization method that seeks to improve the worst-case performance over unobserved groups, without requiring access to the protected attributes in the dataset. Second, we argue that while group fairness notions are a desirable fairness criterion, they are fundamentally limited as they reduce fairness to an average statistic over pre-identified protected groups. In practice, automated decisions are made at an individual level, and can adversely impact individual people irrespective of the group statistic. We advance the paradigm of individual fairness by proposing iFair (individually fair representations), an optimization approach for learning a low dimensional latent representation of the data with two goals: to encode the data as well as possible, while removing any information about protected attributes in the transformed representation. Third, we advance the individual fairness paradigm, which requires that similar individuals receive similar outcomes. However, similarity metrics computed over observed feature space can be brittle, and inherently limited in their ability to accurately capture similarity between individuals. To address this, we introduce a novel notion of fairness graphs, wherein pairs of individuals can be identified as deemed similar with respect to the ML objective. We cast the problem of individual fairness into graph embedding, and propose PFR (pairwise fair representations), a method to learn a unified pairwise fair representation of the data. Fourth, we tackle the challenge that production data after model deployment is constantly evolving. As a consequence, in spite of the best efforts in training a fair model, ML systems can be prone to failure risks due to a variety of unforeseen reasons. To ensure responsible model deployment, potential failure risks need to be predicted, and mitigation actions need to be devised, for example, deferring to a human expert when uncertain or collecting additional data to address model{\textquoteright}s blind-spots. We propose Risk Advisor, a model-agnostic meta-learner to predict potential failure risks and to give guidance on the sources of uncertainty inducing the risks, by leveraging information theoretic notions of aleatoric and epistemic uncertainty. This dissertation brings ML fairness closer to real-world applications by developing methods that address key practical challenges. 
Extensive experiments on a variety of real-world and synthetic datasets show that our proposed methods are viable in practice.},
}
Endnote
%0 Thesis
%A Lahoti, Preethi
%Y Weikum, Gerhard
%A referee: Gummadi, Krishna
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Group K. Gummadi, Max Planck Institute for Software Systems, Max Planck Society
%T Operationalizing Fairness for Responsible Machine Learning
%G eng
%U http://hdl.handle.net/21.11116/0000-000A-CEC6-F
%R 10.22028/D291-36586
%U urn:nbn:de:bsz:291--ds-365860
%F OTHER: hdl:20.500.11880/33465
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 129 p.
%V phd
%9 phd
%X As machine learning (ML) is increasingly used for decision making in scenarios that impact humans, there is a growing awareness of its potential for unfairness. A large body of recent work has focused on proposing formal notions of fairness in ML, as well as approaches to mitigate unfairness. However, there is a growing disconnect between the ML fairness literature and the needs to operationalize fairness in practice. This thesis addresses the need for responsible ML by developing new models and methods to address challenges in operationalizing fairness in practice. Specifically, it makes the following contributions. First, we tackle a key assumption in the group fairness literature that sensitive demographic attributes such as race and gender are known upfront, and can be readily used in model training to mitigate unfairness. In practice, factors like privacy and regulation often prohibit ML models from collecting or using protected attributes in decision making. To address this challenge we introduce the novel notion of computationally-identifiable errors and propose Adversarially Reweighted Learning (ARL), an optimization method that seeks to improve the worst-case performance over unobserved groups, without requiring access to the protected attributes in the dataset. Second, we argue that while group fairness notions are a desirable fairness criterion, they are fundamentally limited as they reduce fairness to an average statistic over pre-identified protected groups. In practice, automated decisions are made at an individual level, and can adversely impact individual people irrespective of the group statistic. We advance the paradigm of individual fairness by proposing iFair (individually fair representations), an optimization approach for learning a low dimensional latent representation of the data with two goals: to encode the data as well as possible, while removing any information about protected attributes in the transformed representation. Third, we advance the individual fairness paradigm, which requires that similar individuals receive similar outcomes. However, similarity metrics computed over observed feature space can be brittle, and inherently limited in their ability to accurately capture similarity between individuals. To address this, we introduce a novel notion of fairness graphs, wherein pairs of individuals can be identified as deemed similar with respect to the ML objective. We cast the problem of individual fairness into graph embedding, and propose PFR (pairwise fair representations), a method to learn a unified pairwise fair representation of the data. Fourth, we tackle the challenge that production data after model deployment is constantly evolving. As a consequence, in spite of the best efforts in training a fair model, ML systems can be prone to failure risks due to a variety of unforeseen reasons. To ensure responsible model deployment, potential failure risks need to be predicted, and mitigation actions need to be devised, for example, deferring to a human expert when uncertain or collecting additional data to address model’s blind-spots. We propose Risk Advisor, a model-agnostic meta-learner to predict potential failure risks and to give guidance on the sources of uncertainty inducing the risks, by leveraging information theoretic notions of aleatoric and epistemic uncertainty. This dissertation brings ML fairness closer to real-world applications by developing methods that address key practical challenges. 
Extensive experiments on a variety of real-world and synthetic datasets show that our proposed methods are viable in practice.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/33465
[10]
A. Nusser, “Fine-Grained Complexity and Algorithm Engineering of Geometric Similarity Measures,” Universität des Saarlandes, Saarbrücken, 2022.
Export
BibTeX
@phdthesis{NusserPhD22,
TITLE = {Fine-Grained Complexity and Algorithm Engineering of Geometric Similarity Measures},
AUTHOR = {Nusser, Andr{\'e}},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-370184},
DOI = {10.22028/D291-37018},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
}
Endnote
%0 Thesis
%A Nusser, André
%Y Bringmann, Karl
%A referee: Mehlhorn, Kurt
%A referee: Chan, Timothy
%A referee: de Berg, Mark
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
%T Fine-Grained Complexity and Algorithm Engineering of Geometric Similarity Measures
%G eng
%U http://hdl.handle.net/21.11116/0000-000C-2693-3
%R 10.22028/D291-37018
%U urn:nbn:de:bsz:291--ds-370184
%F OTHER: hdl:20.500.11880/33904
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P XIV, 210 p.
%V phd
%9 phd
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/33904
[11]
D. Stutz, “Understanding and Improving Robustness and Uncertainty Estimation in Deep Learning,” Universität des Saarlandes, Saarbrücken, 2022.
Abstract
Deep learning is becoming increasingly relevant for many high-stakes applications such as autonomous driving or medical diagnosis where wrong decisions can have massive impact on human lives. Unfortunately, deep neural networks are typically assessed solely based on generalization, e.g., accuracy on a fixed test set. However, this is clearly insufficient for safe deployment as potential malicious actors and distribution shifts or the effects of quantization and unreliable hardware are disregarded. Thus, recent work additionally evaluates performance on potentially manipulated or corrupted inputs as well as after quantization and deployment on specialized hardware. In such settings, it is also important to obtain reasonable estimates of the model's confidence alongside its predictions. This thesis studies robustness and uncertainty estimation in deep learning along three main directions: First, we consider so-called adversarial examples, slightly perturbed inputs causing severe drops in accuracy. Second, we study weight perturbations, focusing particularly on bit errors in quantized weights. This is relevant for deploying models on special-purpose hardware for efficient inference, so-called accelerators. Finally, we address uncertainty estimation to improve robustness and provide meaningful statistical performance guarantees for safe deployment. In detail, we study the existence of adversarial examples with respect to the underlying data manifold. In this context, we also investigate adversarial training which improves robustness by augmenting training with adversarial examples at the cost of reduced accuracy. We show that regular adversarial examples leave the data manifold in an almost orthogonal direction. While we find no inherent trade-off between robustness and accuracy, this contributes to a higher sample complexity as well as severe overfitting of adversarial training. Using a novel measure of flatness in the robust loss landscape with respect to weight changes, we also show that robust overfitting is caused by converging to particularly sharp minima. In fact, we find a clear correlation between flatness and good robust generalization. Further, we study random and adversarial bit errors in quantized weights. In accelerators, random bit errors occur in the memory when reducing voltage with the goal of improving energy-efficiency. Here, we consider a robust quantization scheme, use weight clipping as regularization and perform random bit error training to improve bit error robustness, allowing considerable energy savings without requiring hardware changes. In contrast, adversarial bit errors are maliciously introduced through hardware- or software-based attacks on the memory, with severe consequences on performance. We propose a novel adversarial bit error attack to study this threat and use adversarial bit error training to improve robustness and thereby also the accelerator's security. Finally, we view robustness in the context of uncertainty estimation. By encouraging low-confidence predictions on adversarial examples, our confidence-calibrated adversarial training successfully rejects adversarial, corrupted as well as out-of-distribution examples at test time. Thereby, we are also able to improve the robustness-accuracy trade-off compared to regular adversarial training. However, even robust models do not provide any guarantee for safe deployment. 
To address this problem, conformal prediction allows the model to predict confidence sets with a user-specified guarantee of including the true label. Unfortunately, as conformal prediction is usually applied after training, the model is trained without taking this calibration step into account. To address this limitation, we propose conformal training which allows training conformal predictors end-to-end with the underlying model. This not only improves the obtained uncertainty estimates but also enables optimizing application-specific objectives without losing the provided guarantee. Besides our work on robustness and uncertainty, we also address the problem of 3D shape completion of partially observed point clouds. Specifically, we consider an autonomous driving or robotics setting where vehicles are commonly equipped with LiDAR or depth sensors and obtaining a complete 3D representation of the environment is crucial. However, ground truth shapes that are essential for applying deep learning techniques are extremely difficult to obtain. Thus, we propose a weakly-supervised approach that can be trained on the incomplete point clouds while offering efficient inference. In summary, this thesis contributes to our understanding of robustness against both input and weight perturbations. To this end, we also develop methods to improve robustness alongside uncertainty estimation for safe deployment of deep learning methods in high-stakes applications. In the particular context of autonomous driving, we also address 3D shape completion of sparse point clouds.
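For context, the post-hoc calibration step that conformal training improves upon, split conformal prediction, fits in a few lines. The numpy sketch below uses synthetic softmax outputs and the standard finite-sample quantile correction; it shows the usual after-training baseline, not the thesis's end-to-end conformal training.

# Minimal sketch of split conformal prediction; synthetic scores throughout.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, alpha = 500, 10, 0.1          # target coverage: 90%

# Hypothetical softmax outputs and labels on a held-out calibration set.
probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
labels = rng.integers(0, n_classes, size=n_cal)

# Nonconformity score: 1 - probability assigned to the true class.
scores = 1.0 - probs[np.arange(n_cal), labels]

# Conformal quantile with the finite-sample correction.
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

# Prediction set for a new example: every class whose score is below q;
# such sets contain the true label with probability at least 1 - alpha.
new_probs = rng.dirichlet(np.ones(n_classes))
prediction_set = np.where(1.0 - new_probs <= q)[0]
print("prediction set:", prediction_set)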
Export
BibTeX
@phdthesis{Stutzphd2022,
TITLE = {Understanding and Improving Robustness and Uncertainty Estimation in Deep Learning},
AUTHOR = {Stutz, David},
LANGUAGE = {eng},
URL = {urn:nbn:de:bsz:291--ds-372867},
DOI = {10.22028/D291-37286},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
ABSTRACT = {Deep learning is becoming increasingly relevant for many high-stakes applications such as autonomous driving or medical diagnosis where wrong decisions can have massive impact on human lives. Unfortunately, deep neural networks are typically assessed solely based on generalization, e.g., accuracy on a fixed test set. However, this is clearly insufficient for safe deployment as potential malicious actors and distribution shifts or the effects of quantization and unreliable hardware are disregarded. Thus, recent work additionally evaluates performance on potentially manipulated or corrupted inputs as well as after quantization and deployment on specialized hardware. In such settings, it is also important to obtain reasonable estimates of the model's confidence alongside its predictions. This thesis studies robustness and uncertainty estimation in deep learning along three main directions: First, we consider so-called adversarial examples, slightly perturbed inputs causing severe drops in accuracy. Second, we study weight perturbations, focusing particularly on bit errors in quantized weights. This is relevant for deploying models on special-purpose hardware for efficient inference, so-called accelerators. Finally, we address uncertainty estimation to improve robustness and provide meaningful statistical performance guarantees for safe deployment. In detail, we study the existence of adversarial examples with respect to the underlying data manifold. In this context, we also investigate adversarial training which improves robustness by augmenting training with adversarial examples at the cost of reduced accuracy. We show that regular adversarial examples leave the data manifold in an almost orthogonal direction. While we find no inherent trade-off between robustness and accuracy, this contributes to a higher sample complexity as well as severe overfitting of adversarial training. Using a novel measure of flatness in the robust loss landscape with respect to weight changes, we also show that robust overfitting is caused by converging to particularly sharp minima. In fact, we find a clear correlation between flatness and good robust generalization. Further, we study random and adversarial bit errors in quantized weights. In accelerators, random bit errors occur in the memory when reducing voltage with the goal of improving energy-efficiency. Here, we consider a robust quantization scheme, use weight clipping as regularization and perform random bit error training to improve bit error robustness, allowing considerable energy savings without requiring hardware changes. In contrast, adversarial bit errors are maliciously introduced through hardware- or software-based attacks on the memory, with severe consequences on performance. We propose a novel adversarial bit error attack to study this threat and use adversarial bit error training to improve robustness and thereby also the accelerator's security. Finally, we view robustness in the context of uncertainty estimation. By encouraging low-confidence predictions on adversarial examples, our confidence-calibrated adversarial training successfully rejects adversarial, corrupted as well as out-of-distribution examples at test time. Thereby, we are also able to improve the robustness-accuracy trade-off compared to regular adversarial training. However, even robust models do not provide any guarantee for safe deployment. 
To address this problem, conformal prediction allows the model to predict confidence sets with user-specified guarantee of including the true label. Unfortunately, as conformal prediction is usually applied after training, the model is trained without taking this calibration step into account. To address this limitation, we propose conformal training which allows training conformal predictors end-to-end with the underlying model. This not only improves the obtained uncertainty estimates but also enables optimizing application-specific objectives without losing the provided guarantee. Besides our work on robustness or uncertainty, we also address the problem of 3D shape completion of partially observed point clouds. Specifically, we consider an autonomous driving or robotics setting where vehicles are commonly equipped with LiDAR or depth sensors and obtaining a complete 3D representation of the environment is crucial. However, ground truth shapes that are essential for applying deep learning techniques are extremely difficult to obtain. Thus, we propose a weakly-supervised approach that can be trained on the incomplete point clouds while offering efficient inference. In summary, this thesis contributes to our understanding of robustness against both input and weight perturbations. To this end, we also develop methods to improve robustness alongside uncertainty estimation for safe deployment of deep learning methods in high-stakes applications. In the particular context of autonomous driving, we also address 3D shape completion of sparse point clouds.},
}
Endnote
%0 Thesis
%A Stutz, David
%Y Schiele, Bernt
%A referee: Hein, Matthias
%A referee: Kumar, Pawan
%A referee: Fritz, Mario
%+ Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
External Organizations
External Organizations
Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%T Understanding and Improving Robustness and Uncertainty Estimation in Deep Learning :
%G eng
%U http://hdl.handle.net/21.11116/0000-000B-3FE6-C
%R 10.22028/D291-37286
%U nbn:de:bsz:291--ds-372867
%F OTHER: hdl:20.500.11880/33949
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 291 p.
%V phd
%9 phd
%X Deep learning is becoming increasingly relevant for many high-stakes applications such as autonomous driving or medical diagnosis where wrong decisions can have massive impact on human lives. Unfortunately, deep neural networks are typically assessed solely based on generalization, e.g., accuracy on a fixed test set. However, this is clearly insufficient for safe deployment as potential malicious actors and distribution shifts or the effects of quantization and unreliable hardware are disregarded. Thus, recent work additionally evaluates performance on potentially manipulated or corrupted inputs as well as after quantization and deployment on specialized hardware. In such settings, it is also important to obtain reasonable estimates of the model's confidence alongside its predictions. This thesis studies robustness and uncertainty estimation in deep learning along three main directions: First, we consider so-called adversarial examples, slightly perturbed inputs causing severe drops in accuracy. Second, we study weight perturbations, focusing particularly on bit errors in quantized weights. This is relevant for deploying models on special-purpose hardware for efficient inference, so-called accelerators. Finally, we address uncertainty estimation to improve robustness and provide meaningful statistical performance guarantees for safe deployment. In detail, we study the existence of adversarial examples with respect to the underlying data manifold. In this context, we also investigate adversarial training which improves robustness by augmenting training with adversarial examples at the cost of reduced accuracy. We show that regular adversarial examples leave the data manifold in an almost orthogonal direction. While we find no inherent trade-off between robustness and accuracy, this contributes to a higher sample complexity as well as severe overfitting of adversarial training. Using a novel measure of flatness in the robust loss landscape with respect to weight changes, we also show that robust overfitting is caused by converging to particularly sharp minima. In fact, we find a clear correlation between flatness and good robust generalization. Further, we study random and adversarial bit errors in quantized weights. In accelerators, random bit errors occur in the memory when reducing voltage with the goal of improving energy-efficiency. Here, we consider a robust quantization scheme, use weight clipping as regularization and perform random bit error training to improve bit error robustness, allowing considerable energy savings without requiring hardware changes. In contrast, adversarial bit errors are maliciously introduced through hardware- or software-based attacks on the memory, with severe consequences on performance. We propose a novel adversarial bit error attack to study this threat and use adversarial bit error training to improve robustness and thereby also the accelerator's security. Finally, we view robustness in the context of uncertainty estimation. By encouraging low-confidence predictions on adversarial examples, our confidence-calibrated adversarial training successfully rejects adversarial, corrupted as well as out-of-distribution examples at test time. Thereby, we are also able to improve the robustness-accuracy trade-off compared to regular adversarial training. However, even robust models do not provide any guarantee for safe deployment. 
To address this problem, conformal prediction allows the model to predict confidence sets with a user-specified guarantee of including the true label. Unfortunately, as conformal prediction is usually applied after training, the model is trained without taking this calibration step into account. To address this limitation, we propose conformal training, which allows training conformal predictors end-to-end with the underlying model. This not only improves the obtained uncertainty estimates but also enables optimizing application-specific objectives without losing the provided guarantee. Besides our work on robustness and uncertainty, we also address the problem of 3D shape completion of partially observed point clouds. Specifically, we consider an autonomous driving or robotics setting where vehicles are commonly equipped with LiDAR or depth sensors, and obtaining a complete 3D representation of the environment is crucial. However, ground truth shapes, which are essential for applying deep learning techniques, are extremely difficult to obtain. Thus, we propose a weakly-supervised approach that can be trained on the incomplete point clouds while offering efficient inference. In summary, this thesis contributes to our understanding of robustness against both input and weight perturbations. To this end, we also develop methods to improve robustness alongside uncertainty estimation for safe deployment of deep learning methods in high-stakes applications. In the particular context of autonomous driving, we also address 3D shape completion of sparse point clouds.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/33949
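For readers unfamiliar with the adversarial examples studied in this thesis, the mechanics can be illustrated on a model small enough to differentiate by hand. The sketch below is not from the thesis (which attacks deep networks); it applies the standard FGSM-style perturbation x' = x + eps * sign(grad_x loss) to binary logistic regression, where the input gradient has a closed form. All names and numbers are illustrative.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_logistic(x, y, w, b, eps):
    """FGSM-style perturbation for binary logistic regression.

    Cross-entropy loss L(x) = -log sigmoid(y * (w.x + b)) with y in {-1, +1};
    its input gradient is dL/dx = -y * (1 - sigmoid(y * (w.x + b))) * w.
    The attack steps in the sign of that gradient, bounded by eps
    per coordinate (an L-infinity ball).
    """
    margin = y * (np.dot(w, x) + b)
    grad_x = -y * (1.0 - sigmoid(margin)) * w
    return x + eps * np.sign(grad_x)

# Toy demo: a confidently classified point loses most of its margin.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, -1.0]), +1
x_adv = fgsm_logistic(x, y, w, b, eps=0.5)
print("clean margin:", y * (np.dot(w, x) + b))        # 3.0
print("adversarial margin:", y * (np.dot(w, x_adv) + b))  # 1.5

Adversarial training, as discussed in the abstract, would replace x by x_adv when updating w and b, trading clean accuracy for robustness.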
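The random bit errors in quantized weights discussed above can likewise be made concrete. Below is a minimal NumPy sketch, assuming symmetric 8-bit quantization and independent per-bit flips; it also illustrates why weight clipping helps: clipping shrinks the quantization scale, so any flipped bit moves the dequantized weight by less. This models the failure mode only, not the thesis's training method.

import numpy as np

def quantize(w, scale):
    """Symmetric 8-bit quantization: real weight -> int8 grid point."""
    return np.clip(np.round(w / scale), -128, 127).astype(np.int8)

def flip_bits(q, p, rng):
    """Flip each of the 8 bits of every quantized weight independently
    with probability p, modeling random memory bit errors."""
    mask = np.zeros(q.shape, dtype=np.uint8)
    for b in range(8):
        mask |= (rng.random(q.shape) < p).astype(np.uint8) << b
    return (q.view(np.uint8) ^ mask).view(np.int8)

# Toy demo: clipping shrinks the scale, hence the damage per bit flip.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=10_000).astype(np.float32)
for clip in (None, 0.25):
    wc = w if clip is None else np.clip(w, -clip, clip)
    scale = np.abs(wc).max() / 127.0
    q = quantize(wc, scale)
    err = flip_bits(q, p=0.01, rng=rng).astype(np.float32) * scale - wc
    print(f"clip={clip}: mean abs weight error {np.abs(err).mean():.4f}")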
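Split conformal prediction, the starting point for the conformal training contribution, is compact enough to state in full. A minimal sketch of the standard post-hoc procedure (the thesis goes further and trains through this calibration step), assuming softmax outputs from an arbitrary classifier:

import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha):
    """Split conformal calibration with score s(x, y) = 1 - p_y(x).

    Returns the ceil((n+1)(1-alpha))/n empirical quantile of the
    calibration scores, which yields marginal coverage >= 1 - alpha
    under exchangeability.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def conformal_sets(test_probs, qhat):
    """Prediction set: every class whose score falls below the threshold."""
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

# Toy demo with fake softmax outputs for a 3-class problem.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(3), size=200)
cal_labels = rng.integers(0, 3, size=200)
qhat = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print(conformal_sets(rng.dirichlet(np.ones(3), size=2), qhat))

With exchangeable calibration and test data, the returned sets contain the true label with probability at least 1 - alpha, regardless of how accurate the underlying classifier is; better classifiers simply produce smaller sets.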
[12]
A. Tigunova, “Extracting Personal Information from Conversations,” Universität des Saarlandes, Saarbrücken, 2022.
Abstract
Personal knowledge is a versatile resource that is valuable for a wide range of downstream applications. Background facts about users can allow chatbot assistants to produce more topical and empathic replies. In the context of recommendation and retrieval models, personal facts can be used to customize the ranking results for individual users. A Personal Knowledge Base, populated with personal facts, such as demographic information, interests and interpersonal relationships, is a unique endpoint for storing and querying personal knowledge. Such knowledge bases are easily interpretable and can provide users with full control over their own personal knowledge, including revising stored facts and managing access by downstream services for personalization purposes. To spare users the extensive manual effort of building such a personal knowledge base, we can leverage automated extraction methods applied to the textual content of the users, such as dialogue transcripts or social media posts. Mainstream extraction methods specialize in well-structured data, such as biographical texts or encyclopedic articles, which are rare for most people. In turn, conversational data is abundant but challenging to process and requires specialized methods for extraction of personal facts. In this dissertation we address the acquisition of personal knowledge from conversational data. We propose several novel deep learning models for inferring speakers’ personal attributes: • Demographic attributes (age, gender, profession and family status) are inferred by HAMs - hierarchical neural classifiers with an attention mechanism. Trained HAMs can be transferred between different types of conversational data and provide interpretable predictions. • Long-tailed personal attributes (hobby and profession) are predicted with CHARM - a zero-shot learning model, overcoming the lack of labeled training samples for rare attribute values. By linking conversational utterances to external sources, CHARM is able to predict attribute values which it never saw during training. • Interpersonal relationships are inferred with PRIDE - a hierarchical transformer-based model. To accurately predict fine-grained relationships, PRIDE leverages personal traits of the speakers and the style of conversational utterances. Experiments with various conversational texts, including Reddit discussions and movie scripts, demonstrate the viability of our methods and their superior performance compared to state-of-the-art baselines.
Export
BibTeX
@phdthesis{Tiguphd2022,
TITLE = {Extracting Personal Information from Conversations},
AUTHOR = {Tigunova, Anna},
LANGUAGE = {eng},
URL = {nbn:de:bsz:291--ds-356280},
DOI = {10.22028/D291-35628},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
ABSTRACT = {Personal knowledge is a versatile resource that is valuable for a wide range of downstream applications. Background facts about users can allow chatbot assistants to produce more topical and empathic replies. In the context of recommendation and retrieval models, personal facts can be used to customize the ranking results for individual users. A Personal Knowledge Base, populated with personal facts, such as demographic information, interests and interpersonal relationships, is a unique endpoint for storing and querying personal knowledge. Such knowledge bases are easily interpretable and can provide users with full control over their own personal knowledge, including revising stored facts and managing access by downstream services for personalization purposes. To spare users the extensive manual effort of building such a personal knowledge base, we can leverage automated extraction methods applied to the textual content of the users, such as dialogue transcripts or social media posts. Mainstream extraction methods specialize in well-structured data, such as biographical texts or encyclopedic articles, which are rare for most people. In turn, conversational data is abundant but challenging to process and requires specialized methods for extraction of personal facts. In this dissertation we address the acquisition of personal knowledge from conversational data. We propose several novel deep learning models for inferring speakers{\textquoteright} personal attributes: \mbox{$\bullet$} Demographic attributes (age, gender, profession and family status) are inferred by HAMs -- hierarchical neural classifiers with an attention mechanism. Trained HAMs can be transferred between different types of conversational data and provide interpretable predictions. \mbox{$\bullet$} Long-tailed personal attributes (hobby and profession) are predicted with CHARM -- a zero-shot learning model, overcoming the lack of labeled training samples for rare attribute values. By linking conversational utterances to external sources, CHARM is able to predict attribute values which it never saw during training. \mbox{$\bullet$} Interpersonal relationships are inferred with PRIDE -- a hierarchical transformer-based model. To accurately predict fine-grained relationships, PRIDE leverages personal traits of the speakers and the style of conversational utterances. Experiments with various conversational texts, including Reddit discussions and movie scripts, demonstrate the viability of our methods and their superior performance compared to state-of-the-art baselines.},
}
Endnote
%0 Thesis
%A Tigunova, Anna
%Y Weikum, Gerhard
%A referee: Yates, Andrew
%A referee: Demberg, Vera
%+ Databases and Information Systems, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
Databases and Information Systems, MPI for Informatics, Max Planck Society
External Organizations
%T Extracting Personal Information from Conversations :
%G eng
%U http://hdl.handle.net/21.11116/0000-000B-3FE1-1
%R 10.22028/D291-35628
%U nbn:de:bsz:291--ds-356280
%F OTHER: hdl:20.500.11880/32546
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 139 p.
%V phd
%9 phd
%X Personal knowledge is a versatile resource that is valuable for a wide range of downstream applications. Background facts about users can allow chatbot assistants to produce more topical and empathic replies. In the context of recommendation and retrieval models, personal facts can be used to customize the ranking results for individual users. A Personal Knowledge Base, populated with personal facts, such as demographic information, interests and interpersonal relationships, is a unique endpoint for storing and querying personal knowledge. Such knowledge bases are easily interpretable and can provide users with full control over their own personal knowledge, including revising stored facts and managing access by downstream services for personalization purposes. To spare users the extensive manual effort of building such a personal knowledge base, we can leverage automated extraction methods applied to the textual content of the users, such as dialogue transcripts or social media posts. Mainstream extraction methods specialize in well-structured data, such as biographical texts or encyclopedic articles, which are rare for most people. In turn, conversational data is abundant but challenging to process and requires specialized methods for extraction of personal facts. In this dissertation we address the acquisition of personal knowledge from conversational data. We propose several novel deep learning models for inferring speakers’ personal attributes: • Demographic attributes (age, gender, profession and family status) are inferred by HAMs - hierarchical neural classifiers with an attention mechanism. Trained HAMs can be transferred between different types of conversational data and provide interpretable predictions. • Long-tailed personal attributes (hobby and profession) are predicted with CHARM - a zero-shot learning model, overcoming the lack of labeled training samples for rare attribute values. By linking conversational utterances to external sources, CHARM is able to predict attribute values which it never saw during training. • Interpersonal relationships are inferred with PRIDE - a hierarchical transformer-based model. To accurately predict fine-grained relationships, PRIDE leverages personal traits of the speakers and the style of conversational utterances. Experiments with various conversational texts, including Reddit discussions and movie scripts, demonstrate the viability of our methods and their superior performance compared to state-of-the-art baselines.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/32546
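The abstract describes HAMs only at a high level. Purely as a structural illustration, and not the published architecture, the following NumPy sketch shows two-level attention pooling with random, untrained weights: attention first pools word embeddings into utterance vectors, then pools utterance vectors into a single speaker representation that a classifier head would consume. All dimensions and names are assumptions.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(H, u):
    """Pool row vectors H (n x d) into one d-vector, weighted by
    the softmax of their scores against a learned context vector u."""
    weights = softmax(H @ u)
    return weights @ H

def speaker_vector(utterances, u_word, u_utt):
    """Two-level attention: words -> utterance vectors -> speaker vector.
    `utterances` is a list of (n_words x d) word-embedding matrices."""
    utt_vecs = np.stack([attention_pool(U, u_word) for U in utterances])
    return attention_pool(utt_vecs, u_utt)

# Toy demo: 3 utterances with random 16-dim word embeddings.
rng = np.random.default_rng(0)
d = 16
utterances = [rng.normal(size=(rng.integers(4, 9), d)) for _ in range(3)]
u_word, u_utt = rng.normal(size=d), rng.normal(size=d)
v = speaker_vector(utterances, u_word, u_utt)
print(v.shape)  # (16,) -- a real model feeds this to a classifier head

The attention weights are also what makes such predictions interpretable: they indicate which words and utterances drove the inferred attribute.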
[13]
J. Wang, “3D Hand Reconstruction From Monocular Camera With Model-Based Priors,” Universität des Saarlandes, Saarbrücken, 2022.
Export
BibTeX
@phdthesis{WangJiayi_PhD2023,
TITLE = {{3D} Hand Reconstruction From Monocular Camera With Model-Based Priors},
AUTHOR = {Wang, Jiayi},
URL = {urn:nbn:de:bsz:291--ds-399055},
DOI = {10.22028/D291-39905},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
}
Endnote
%0 Thesis
%A Wang, Jiayi
%Y Theobalt, Christian
%A referee: Casas, Dan
%A referee: Steimle, Jürgen
%+ Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Visual Computing and Artificial Intelligence, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
Computer Graphics, MPI for Informatics, Max Planck Society
%T 3D Hand Reconstruction From Monocular Camera With Model-Based Priors :
%U http://hdl.handle.net/21.11116/0000-000D-7322-B
%U urn:nbn:de:bsz:291--ds-399055
%R 10.22028/D291-39905
%F OTHER: hdl:20.500.11880/36048
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P xvi,123 p.
%V phd
%9 phd
%U https://scidok.sulb.uni-saarland.de/handle/20.500.11880/36048
[14]
B. Wiederhake, “Pulse Propagation, Graph Cover, and Packet Forwarding,” Universität des Saarlandes, Saarbrücken, 2022.
Abstract
We study distributed systems, with a particular focus on graph problems and fault tolerance. Fault tolerance in a microprocessor or even a System-on-Chip can be improved by using a fault-tolerant pulse propagation design. The existing design TRIX achieves this goal by being a distributed system consisting of very simple nodes. We show that even in the typical mode of operation without faults, TRIX performs significantly better than a regular wire or clock tree: statistical evaluation of our simulated experiments shows that we achieve a skew with standard deviation of O(log log H), where H is the height of the TRIX grid. The distance-r generalization of classic graph problems can give us insights into how distance affects the hardness of a problem. For the distance-r dominating set problem, we present both an algorithmic upper bound and an unconditional lower bound for any graph class with certain high-girth and sparseness criteria. In particular, our algorithm achieves an O(r · f(r))-approximation in time O(r), where f is the expansion function, which correlates with density. For constant r, this implies a constant approximation factor, in constant time. We also show that no algorithm can achieve a (2r + 1 − δ)-approximation for any δ > 0 in time O(r), not even on the class of cycles of girth at least 5r. Furthermore, we extend the algorithm to related graph cover problems and even to a different execution model. Finally, we investigate the problem of packet forwarding, which addresses the question of how and when best to forward packets in a distributed system. These packets are injected by an adversary. We build on the existing algorithm OED to handle more than a single destination. In particular, we show that buffers of size O(log n) are sufficient for this algorithm, in contrast to O(n) for the naive approach.
Export
BibTeX
@phdthesis{Wiederhakephd2021,
TITLE = {Pulse Propagation, Graph Cover, and Packet Forwarding},
AUTHOR = {Wiederhake, Ben},
LANGUAGE = {eng},
URL = {nbn:de:bsz:291--ds-366085},
DOI = {10.22028/D291-36608},
SCHOOL = {Universit{\"a}t des Saarlandes},
ADDRESS = {Saarbr{\"u}cken},
YEAR = {2022},
DATE = {2022},
ABSTRACT = {We study distributed systems, with a particular focus on graph problems and fault tolerance. Fault tolerance in a microprocessor or even a System-on-Chip can be improved by using a fault-tolerant pulse propagation design. The existing design TRIX achieves this goal by being a distributed system consisting of very simple nodes. We show that even in the typical mode of operation without faults, TRIX performs significantly better than a regular wire or clock tree: statistical evaluation of our simulated experiments shows that we achieve a skew with standard deviation of O(log log H), where H is the height of the TRIX grid. The distance-r generalization of classic graph problems can give us insights into how distance affects the hardness of a problem. For the distance-r dominating set problem, we present both an algorithmic upper bound and an unconditional lower bound for any graph class with certain high-girth and sparseness criteria. In particular, our algorithm achieves an O(r · f(r))-approximation in time O(r), where f is the expansion function, which correlates with density. For constant r, this implies a constant approximation factor, in constant time. We also show that no algorithm can achieve a (2r + 1 {\textminus} $\delta$)-approximation for any $\delta$ > 0 in time O(r), not even on the class of cycles of girth at least 5r. Furthermore, we extend the algorithm to related graph cover problems and even to a different execution model. Finally, we investigate the problem of packet forwarding, which addresses the question of how and when best to forward packets in a distributed system. These packets are injected by an adversary. We build on the existing algorithm OED to handle more than a single destination. In particular, we show that buffers of size O(log n) are sufficient for this algorithm, in contrast to O(n) for the naive approach.},
}
Endnote
%0 Thesis
%A Wiederhake, Ben
%Y Lenzen, Christoph
%A referee: Mehlhorn, Kurt
%+ Algorithms and Complexity, MPI for Informatics, Max Planck Society
International Max Planck Research School, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
Algorithms and Complexity, MPI for Informatics, Max Planck Society
%T Pulse Propagation, Graph Cover, and Packet Forwarding :
%G eng
%U http://hdl.handle.net/21.11116/0000-000A-CEBE-9
%R 10.22028/D291-36608
%U nbn:de:bsz:291--ds-366085
%F OTHER: hdl:20.500.11880/33316
%I Universität des Saarlandes
%C Saarbrücken
%D 2022
%P 83 p.
%V phd
%9 phd
%X We study distributed systems, with a particular focus on graph problems and fault tolerance. Fault tolerance in a microprocessor or even a System-on-Chip can be improved by using a fault-tolerant pulse propagation design. The existing design TRIX achieves this goal by being a distributed system consisting of very simple nodes. We show that even in the typical mode of operation without faults, TRIX performs significantly better than a regular wire or clock tree: statistical evaluation of our simulated experiments shows that we achieve a skew with standard deviation of O(log log H), where H is the height of the TRIX grid. The distance-r generalization of classic graph problems can give us insights into how distance affects the hardness of a problem. For the distance-r dominating set problem, we present both an algorithmic upper bound and an unconditional lower bound for any graph class with certain high-girth and sparseness criteria. In particular, our algorithm achieves an O(r · f(r))-approximation in time O(r), where f is the expansion function, which correlates with density. For constant r, this implies a constant approximation factor, in constant time. We also show that no algorithm can achieve a (2r + 1 − δ)-approximation for any δ > 0 in time O(r), not even on the class of cycles of girth at least 5r. Furthermore, we extend the algorithm to related graph cover problems and even to a different execution model. Finally, we investigate the problem of packet forwarding, which addresses the question of how and when best to forward packets in a distributed system. These packets are injected by an adversary. We build on the existing algorithm OED to handle more than a single destination. In particular, we show that buffers of size O(log n) are sufficient for this algorithm, in contrast to O(n) for the naive approach.
%U https://publikationen.sulb.uni-saarland.de/handle/20.500.11880/33316
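To make the distance-r dominating set problem above concrete: a set D is distance-r dominating if every vertex lies within hop distance r of some vertex in D. The sketch below is a simple sequential greedy baseline in plain Python, not the thesis's distributed O(r)-time algorithm, and it carries no guarantee comparable to the bounds stated above.

from collections import deque

def ball(adj, v, r):
    """All vertices within hop distance r of v (breadth-first search)."""
    seen, frontier = {v}, deque([(v, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == r:
            continue
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                frontier.append((w, d + 1))
    return seen

def greedy_distance_r_dominating_set(adj, r):
    """Sequential greedy baseline: repeatedly pick the vertex whose
    radius-r ball covers the most still-uncovered vertices."""
    uncovered, picks = set(adj), []
    while uncovered:
        v = max(adj, key=lambda u: len(ball(adj, u, r) & uncovered))
        picks.append(v)
        uncovered -= ball(adj, v, r)
    return picks

# Toy demo: a 10-cycle needs two centers for r = 2, since each
# radius-2 ball covers exactly 5 consecutive vertices.
n = 10
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(greedy_distance_r_dominating_set(cycle, r=2))  # e.g. [0, 5]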