Abstract
Concept Bottleneck Models (CBMs) enhance the interpretability of neural
networks by basing predictions on human-understandable concepts. However,
current CBMs typically rely on concept sets extracted from large language
models or extensive image corpora, limiting their effectiveness in data-sparse
scenarios. We propose Data-efficient CBMs (DCBMs), which reduce the need for
large sample sizes during concept generation while preserving interpretability.
DCBMs define concepts as image regions detected by segmentation or detection
foundation models, allowing each image to generate multiple concepts across
different granularities. This removes reliance on textual descriptions and
large-scale pre-training, making DCBMs applicable to fine-grained
classification and out-of-distribution tasks. Attribution analysis using
Grad-CAM demonstrates that DCBMs deliver visual concepts that can be localized
in test images. By leveraging dataset-specific concepts instead of predefined
ones, DCBMs enhance adaptability to new domains.
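To make the described pipeline concrete, the snippet below is a minimal sketch of the idea in the abstract, not the authors' implementation: regions proposed by a segmentation or detection foundation model are embedded with a frozen vision encoder, clustered into a visual concept vocabulary, and an interpretable linear head maps concept activations to class predictions. The functions segment_regions and encode are placeholder stand-ins (a real setup would plug in e.g. a SAM-style segmenter and a frozen CLIP encoder), and the clustering and linear-head choices are assumptions made only to illustrate the structure.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
EMBED_DIM = 512  # dimensionality of the (hypothetical) frozen vision encoder


def segment_regions(image: np.ndarray) -> list[np.ndarray]:
    # Stand-in for a segmentation/detection foundation model (e.g. SAM-style
    # mask proposals); here the image is simply cut into four quadrants.
    h, w = image.shape[:2]
    return [image[:h // 2, :w // 2], image[:h // 2, w // 2:],
            image[h // 2:, :w // 2], image[h // 2:, w // 2:]]


def encode(crops: list[np.ndarray]) -> np.ndarray:
    # Stand-in for a frozen CLIP-style image encoder; returns random unit
    # vectors only so that the pipeline runs end to end.
    feats = rng.normal(size=(len(crops), EMBED_DIM))
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)


def build_concepts(images: list[np.ndarray], n_concepts: int = 32) -> np.ndarray:
    # Embed all proposed regions from a small image set and cluster them;
    # the cluster centroids act as the visual concept vocabulary.
    region_feats = np.concatenate([encode(segment_regions(img)) for img in images])
    return KMeans(n_clusters=n_concepts, n_init=10, random_state=0).fit(region_feats).cluster_centers_


def concept_activations(images: list[np.ndarray], concepts: np.ndarray) -> np.ndarray:
    # One simple choice of bottleneck: cosine similarity between whole-image
    # embeddings and each concept centroid.
    return encode(images) @ concepts.T


# Toy usage: random "images" and labels, interpretable linear head on the bottleneck.
images = [rng.random((64, 64, 3)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)
concepts = build_concepts(images)
head = LogisticRegression(penalty="l1", solver="saga", max_iter=2000)
head.fit(concept_activations(images, concepts), labels)

Because the classifier is linear (and sparse) over concept activations, each prediction can be traced back to a handful of visual concepts, which is the interpretability property the abstract emphasizes.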
BibTeX
@online{Prasse2412.11576,
  TITLE      = {{DCBM}: Data-Efficient Visual Concept Bottleneck Models},
  AUTHOR     = {Prasse, Katharina and Knab, Patrick and Marton, Sascha and Bartelt, Christian and Keuper, Margret},
  LANGUAGE   = {eng},
  URL        = {https://arxiv.org/abs/2412.11576},
  EPRINT     = {2412.11576},
  EPRINTTYPE = {arXiv},
  YEAR       = {2025},
}
Endnote
%0 Report
%A Prasse, Katharina
%A Knab, Patrick
%A Marton, Sascha
%A Bartelt, Christian
%A Keuper, Margret
%+ External Organizations External Organizations External Organizations External Organizations Computer Vision and Machine Learning, MPI for Informatics, Max Planck Society
%T DCBM: Data-Efficient Visual Concept Bottleneck Models
%G eng
%U http://hdl.handle.net/21.11116/0000-0010-BF46-9
%U https://arxiv.org/abs/2412.11576
%D 2025
%K Computer Science, Computer Vision and Pattern Recognition, cs.CV